Coding

I Work in Hollywood. Everyone Who Used to Make TV Is Now Training AI

The TV industry's creative talent is being rapidly repurposed for AI training: former writers, directors, and producers are leveraging their storytelling expertise to fine-tune large language models and evaluate machine-generated content. The shift is driven by growing demand for high-quality AI-generated scripts, dialogue, and narratives in the entertainment industry. Industry insiders estimate that up to 30% of former TV professionals now work in AI training roles.

Former TV writers, directors, and producers are increasingly working as AI trainers, annotating data and evaluating AI-generated content for platforms like Mercor, Outlier, Task-ify, Turing, Handshake, and Micro1. The shift follows the 2023 Hollywood strike and the industry's failure to regain momentum since, pushing creatives toward AI gig work to survive financially. One Hollywood showrunner, who previously created dramas for Paramount, Hulu, and the BBC, began AI training in September 2025 after a producer defaulted on a six-figure payment. After applying to 10 jobs and completing unpaid assessments, they were hired as a generalist data annotator at $52 an hour and later moved into expert roles paying $70 to $150 an hour.

What it does

AI training tasks include assessing chatbot tone, annotating video timestamps (e.g., dog barks, balloon pops), generating extreme content for red teaming, and evaluating AI-generated scripts. Workers follow strict scoring guidelines, often copying verbatim from rubrics to avoid penalties. Projects are managed via Slack, Airtable, and Zoom, with team leaders—typically recent graduates—overseeing large pools of contractors. Workers are classified as independent contractors, despite rigid expectations like 24-hour task completion and constant availability.

Work conditions and pay

The work is marked by instability, abrupt project cancellations, and inconsistent pay. Projects advertised as multi-week engagements often end without notice. In early 2025, expert roles paid up to $150 an hour; by early 2026, rates had dropped to $50 for experts and as low as $16 for entry-level annotators, below California's minimum wage. Workers report being rehired on nearly identical projects at lower rates; Mercor's transition from Project Musen to Project Nova, for example, cut pay from $21 to $16 an hour.

Platforms advertise flexibility, but workers describe being on call at all hours, with team leaders messaging at 3 a.m. to push urgency. Performance is tracked with scores from 1 to 5, and low scorers are threatened with removal. A "golden batch" of high-priority tasks is reserved for top performers, fueling competition. Despite promises of advancement, promotion to reviewer roles does not come with higher pay.

Tradeoffs

While AI training offers a potential income stream for displaced creatives, it lacks job security, fair pay, and humane working conditions. Contractors face burnout, with some filing lawsuits over misclassification. The system favors speed and compliance over creativity, undermining the very skills it claims to value. Workers report emotional strain, disrupted family life, and a sense of exploitation.

When to use it

For unemployed media professionals, AI training may provide short-term income, but it is not a sustainable career path. The volatility, low pay, and psychological toll make it a last resort rather than a viable alternative to traditional creative work.

The industry relies on the labor of experienced professionals while treating them as disposable. As one worker noted, the goal is to make the machine more human by making humans more like machines.

Similar Articles

Coding 2 min

Claude for Small Business

Small businesses can now tap into large language models with Anthropic's fine-tuned Claude, a customized AI solution that leverages the company's 137B parameter model to provide scalable, on-demand conversational support. By integrating Claude with existing workflows, small businesses can automate tasks, enhance customer engagement, and streamline operations without requiring extensive AI expertise. This move marks a significant expansion of large language model accessibility.

Coding 1 min

Arena AI Model ELO History

A live tracker of flagship AI models' Elo ratings shows a stark contrast between launch-day excitement and subsequent performance decay: generational jumps and slow declines become apparent only when viewed over time. The data raises questions about the long-term viability of AI models and the need for more transparent performance metrics.

Coding 1 min

The Other Half of AI Safety

A long-overlooked vulnerability in AI safety protocols is being exposed by a growing number of edge cases, where seemingly innocuous model updates can have catastrophic consequences, highlighting the need for more robust "backdoor" detection and mitigation strategies in large language models. Specifically, researchers have identified a class of "adversarial perturbations" that can be injected into model weights, compromising downstream applications. This "other half" of AI safety is now a pressing concern.

Coding 1 min

Tell HN: Don't use Claude Design, lost access to my projects after unsubscribing

A user recounts Claude Design abruptly revoking access to their projects after they downgraded from a paid plan, raising questions about what complex contractual agreements mean for user data ownership and access rights in large language model ecosystems.

Coding 1 min

Medicare's new payment model is built for AI. Most of the tech world has no idea

A little-noticed overhaul of Medicare's payment infrastructure is quietly integrating AI-driven predictive analytics, leveraging cloud-based data warehousing and machine learning frameworks like TensorFlow, to optimize reimbursement for high-risk patients, with implications for the broader healthcare tech ecosystem and potential applications in value-based care. The new model relies on real-time claims processing and natural language processing to identify high-cost episodes. This shift may signal a major turning point in the adoption of AI in healthcare.

Coding 1 min

Meta won't let you block its AI account on Threads

Meta's AI-powered moderation on Threads effectively nullifies users' ability to block AI-driven accounts, raising concerns about algorithmic accountability and user autonomy in online discourse. The move hinges on AI-driven "content moderation" tools that can adapt to evade blocking attempts, leaving users with diminished control over their interactions with AI-generated content.