Every previous creative technology automated some aspect of execution while leaving the source of creativity untouched. A camera automated the rendering of light. A word processor automated the physical act of typing. AI automates the generation of creative output itself, and it does so by training on the accumulated work of the humans it then displaces. This circularity drives the shade: an extraction cycle where human creativity is consumed as input and returned as competition.
A Stanford study published in Organization Science examined what happened when a major image marketplace allowed AI-generated content to compete alongside human-produced work. The dataset covered 3.2 million unique images and 62,000 artists. When AI-generated content entered the platform in December 2022, the total number of images skyrocketed. The number of human-generated images fell dramatically. Consumers chose AI images over human-produced ones. The researchers concluded that generative AI is likely to crowd out non-AI producers and their goods. A Brookings Institution analysis of the Upwork freelancing platform, drawing on a study also published in Organization Science (Hui et al., 2024), found a 2% decline in contracts and a 5% drop in earnings for freelancers in AI-exposed occupations. The damage was concentrated among experienced freelancers offering higher-priced, higher-quality services, the opposite of what you would expect if AI were only replacing bottom-tier work. AI compressed the top of the market.
Over 70 copyright infringement lawsuits have now been filed against AI companies, a number that more than doubled in 2025 alone. The largest settlement to date, $1.5 billion, resolved the Bartz v. Anthropic case over pirated training data. The major music labels filed $500 million suits against Suno and Udio. Disney and Universal sued Midjourney in June 2025. The New York Times lawsuit against OpenAI remains ongoing. The legal system is attempting to adjudicate a question the market has already answered in practice: AI-generated content is, for most commercial purposes, a substitute for the human-made work it was trained on.
What distinguishes AI from every previous creative technology disruption is the dependency. The printing press did not need to read every existing book to operate. The camera did not need to study every existing painting. AI image generators trained on the portfolios of working artists. AI music generators trained on the catalogs of working musicians. The tool requires the existence of the craft it devalues. The legal question of whether this constitutes infringement is unresolved. The economic reality is clear.
The scale of output that results defies any historical comparison. The French streaming platform Deezer, which has deployed the most aggressive AI detection tools in the music industry, reported in January 2026 that roughly 60,000 fully AI-generated tracks are now uploaded daily, accounting for 39% of all new music delivered to the platform. That figure had risen from 10,000 per day in January 2025 to 30,000 by September. Over the course of 2025, Deezer detected and tagged more than 13.4 million AI-generated tracks. AI music startup Suno, valued at $2.45 billion after a $250 million Series C round in November 2025, generates an estimated 7 million songs per day according to an investor pitch deck obtained by Billboard. That is an entire Spotify catalog’s worth of music every two weeks. The primary purpose of this volume is not creative expression. Deezer found that up to 85% of all streams of AI-generated music are fraudulent, generated by bots to siphon royalties from real artists’ share of the pool. By comparison, streaming fraud across Deezer’s entire human-made catalog accounts for 8% of streams. AI-generated music is, overwhelmingly, a vehicle for fraud at industrial scale.
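That two-week comparison is easy to sanity-check. A minimal back-of-the-envelope sketch, assuming Spotify’s catalog is on the order of 100 million tracks (an outside figure, not one given in this section):

```python
# Rough arithmetic behind the "a Spotify catalog every two weeks" comparison.
SUNO_SONGS_PER_DAY = 7_000_000        # estimate cited above from the Billboard-reported pitch deck
SPOTIFY_CATALOG_TRACKS = 100_000_000  # assumed order of magnitude; not a figure from this section

days_to_match_catalog = SPOTIFY_CATALOG_TRACKS / SUNO_SONGS_PER_DAY
print(f"{days_to_match_catalog:.1f} days")  # ~14.3 days, i.e. roughly two weeks
```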
In visual content, the trajectory parallels that of the music industry. Adobe reported that its Firefly image generator produced three billion images within months of launch, surpassing the total archives of many traditional photo libraries. The stock photography industry, once a stable $5 billion market, has been thrown into structural crisis. Getty Images and Shutterstock announced a $3.7 billion defensive merger in January 2025, combining catalogs of over 700 million files. Getty’s creative revenue has been declining by 4.8 to 5.1% annually, even as the merged entity attempts to pivot toward AI licensing. A Conjointly study found that most consumers scored little better than a coin flip when trying to distinguish AI-generated images from photographs. When a client can generate a bespoke image for pennies instead of licensing one for dollars, the stock photography business model collapses.
Shutterstock’s response illustrates the extraction cycle in miniature. The company earned $104 million in 2023 licensing its content library to AI companies for training data, with projections reaching $250 million by 2027. The business model is selling your archive to companies building tools that will eventually replace the need for your archive. It is lucrative in the short term and self-cannibalizing in the long term. Every image that Meta or OpenAI trains on becomes raw material for generating synthetic substitutes for the very work that Shutterstock’s contributors created.
Across the broader web, AI-generated articles now account for more than half of new English-language articles published online, according to SEO firm Graphite. “Slop,” the term for low-quality AI-generated content flooding digital platforms, was named 2025 Word of the Year by Merriam-Webster and Australia’s Macquarie Dictionary. Online mentions of the term increased ninefold from 2024 to 2025, according to media intelligence firm Meltwater. When the cost of production approaches zero, supply becomes effectively infinite and the bottleneck shifts entirely from creation to attention and curation. This is the economics of infinite supply meeting finite demand, and it does not resolve in favor of the producers.
The content flood’s most consequential damage may be invisible: the quiet destruction of the commercial pipeline through which artists develop their skills. The entry-level gigs that traditionally funded creative apprenticeships (editorial illustration, stock photography, voice-over, translation, documentary graphics) are the same sectors most vulnerable to AI substitution. A Stanford study titled “Canaries in the Coal Mine” tracked employment effects through July 2025 and found a 6% decline in employment for entry-level workers (ages 22-25) in occupations most exposed to AI, even as older workers in the same fields saw 6-9% growth. The negative effects were concentrated in roles where AI automates tasks rather than augments them. Anthropic’s March 2026 labor market study confirms the pattern with independent data: job-finding rates for 22-25 year olds entering AI-exposed occupations dropped approximately 14 percent relative to unexposed occupations since the release of ChatGPT, with no comparable decline for workers over 25. A survey by the UK’s Association of Illustrators of nearly 7,000 practitioners found that one in three had already lost work to AI, with average lost wages of approximately $12,500. The Society of Authors, the UK’s largest trade union for writers, illustrators, and translators, reported that roughly a quarter of illustrators and a third of translators had lost work directly attributable to generative AI. A study in the Journal of Economic Behavior and Organization (Teutloff et al., 2025) found that generative AI cut demand for substitutable freelance skill clusters by up to 50% in short-term contracts, with demand for novice workers declining even in complementary roles where AI was supposed to help rather than replace.
The significance is in what happens next. The commercial work that gets automated first is precisely the work that young creators depend on to build their skills. A beginning illustrator learns craft by doing editorial commissions. A voice actor develops range through commercial gigs. A translator sharpens judgment through volume. When those apprenticeship positions vanish, the pipeline that produces the next generation of skilled creative workers narrows or breaks. The AI models that displaced this work will continue to train on the archive of output produced before the pipeline collapsed, increasingly relying on a fixed corpus of human creativity no longer being replenished.
The photography revolution offers the closest historical precedent, and its lessons are instructive but imprecise. Digital camera sales fell roughly 87% between 2010 and 2020, according to a Statista report cited in industry analyses. Academic research confirms that smartphone cameras reduced the commercial value of professional photography, pushing professionals toward self-employment and cross-subsidization of their practice (Mäenpää, 2023). The commercial middle was destroyed: corporate headshots, standard event coverage, basic product photography. The high end survived and, in some niches, thrived. Photography as a medium expanded enormously in total volume and cultural importance. More photographs are taken each year than ever before. What collapsed was the economic structure that supported professional photographers.
This maps closely to what AI is doing to creative work, with one critical distinction. Digital cameras still required a human behind them. The smartphone democratized photography by lowering the barrier to adequate execution. You no longer needed to understand f-stops and shutter speeds, but you still needed to point the camera at something worth shooting, compose the frame, choose the moment. AI lowers the barrier to conception itself. You do not even need to point.
That observation introduces the shade’s most important unresolved tension: the relationship between ideation and execution in creative work.
Having good ideas and having the skills to realize them are different abilities. Some people have strong conceptual frameworks, original perspectives, something to say, but lack the years of craft development required to say it at a publishable level. Others possess formidable technique with little to express through it. The traditional creative pipeline filtered for the intersection of both. The filter was treated as meritocratic. In practice it was exclusionary, selecting out every person whose bottleneck was execution rather than thinking. Their ideas never reached an audience.
AI breaks that coupling. For the first time, someone with editorial judgment and a clear thesis can produce work that communicates their ideas at a level the craft barrier previously prevented. According to a 2024 Adobe survey, 74% of creators said AI improves their efficiency while empowering them to explore creative pursuits they would not have attempted otherwise. An analysis cited in the Harvard Business Review found that teams using generative AI in ideation sessions produced significantly more original ideas and higher-rated concepts than those working without it. A study published in AI and Ethics (Springer Nature, 2025) confirmed the dual nature: AI accelerates creative workflows and expands the range of what individuals can attempt, while simultaneously risking homogenization when used without human editorial judgment. The accelerator effect is real, documented, and available to anyone with access to the tools.
The strongest case for AI as creative democratizer comes from practice. A writer using AI to research, iterate, and test ideas they control is not producing slop. They are compressing execution time in ways that enable more revision, more structural experimentation, more deliberate choices. The tool handles the parts of the process that are labor-intensive but require no judgment. The human makes every decision that matters: what the thesis is, what to cut, what voice to adopt, what structure serves the argument. The result is work that one person could not produce alone at that quality in that timeframe. Dismissing this because some people also use the same tools to generate 7 million throwaway songs per day confuses the tool with its application.
Ted Chiang, in a widely discussed August 2024 New Yorker essay, argued that this framing misses the point entirely. AI appeals to people who wish to express themselves without fully engaging in the artistic process. The selling point of generative AI is that it generates vastly more than you put in, and that disproportion is precisely what prevents the output from being art. Your first draft, Chiang writes, is not an unoriginal idea expressed clearly. It is an original idea expressed poorly, accompanied by an amorphous dissatisfaction that drives the iterative work of making it better. If you skip the struggle, you lose the engine of refinement. He compared using AI to write to bringing a forklift into a weight room. The effort is the point.
He identifies a real risk. For a student learning to write, bypassing the struggle of composition does forfeit the developmental benefit of that struggle. For someone using AI to avoid engaging with their own material, the output will reflect that absence. Chiang’s framework holds for anyone who uses AI to avoid thinking. It fails for someone who uses AI to think faster, test more options, and iterate toward a result they could not have reached alone. The actual practice of AI-assisted creativity, when done with rigor, does not conform to his binary of “the human creates” versus “the machine creates.” The human directs, rejects, restructures, and exercises judgment at every step. The tool compresses execution time, enabling more iterations, which increases the number of meaningful choices the human makes rather than reducing them. The distinction between AI-as-replacement and AI-as-collaborator is the central question of the shade.
The governance response to creative displacement is further advanced than in any other sector affected by AI, and it offers a prototype for the broader economy. During the 2023 Hollywood strikes, SAG-AFTRA established three foundational principles for AI in creative work: consent, compensation, and control. No performer’s likeness or voice may be replicated without prior written approval. Any use of a digital replica requires fair payment. Performers retain the right to decline replication. These principles have since been extended through a cascade of ratified agreements: animation (March 2024), sound recording (April 2024), commercials (May 2025), the network television code (August 2025), and a new interactive media agreement (ratified July 2025 with 95% approval) that officially ended a year-long video game strike. SAG-AFTRA filed an unfair labor practice charge against a subsidiary of Epic Games over the AI-generated replication of James Earl Jones’s voice for Darth Vader in Fortnite without notice or bargaining.
Legislative frameworks are building on this foundation. California’s AB 2602, signed in September 2024, requires a performer’s informed consent and proper representation before a digital replica can be used, with a reasonably specific description of intended uses. New York enacted SB 7676B in December 2024 with substantially identical provisions. Tennessee’s ELVIS Act (2024) became the first state law to outlaw unauthorized commercial voice clones. The federal TAKE IT DOWN Act, signed in May 2025, prohibits the publication of non-consensual synthetic intimate images. The NO FAKES Act, officially introduced in the Senate in July 2024, would create a federal right to sue over unauthorized use of a digital likeness extending up to 70 years after death. In music, the trajectory is moving from litigation toward structured licensing: by late 2025, Warner Music had settled and signed a licensing partnership with Suno, and Udio had reached agreements with both Universal and Warner, pivoting from open generation to fan-engagement platforms using licensed, opted-in catalogs.
Parallel to collective bargaining, an authenticity economy is emerging. Deezer became the first streaming platform to explicitly tag AI-generated music, removing fully synthetic tracks from algorithmic recommendations and editorial playlists, and now licenses its detection technology to other services. A joint study by Deezer and Ipsos (9,000 respondents across eight countries) found that 80% want AI-generated music clearly labeled, 73% of streaming users want to know if platforms are recommending synthetic tracks, and 70% believe fully AI-generated music threatens artist livelihoods. Consumer preference for AI-generated content has fallen from 60% to 26% in three years, according to global social marketing agency Billion Dollar Boy. Instagram head Adam Mosseri acknowledged that authenticity is becoming a scarce resource. Films like Sinners (2025) and Heretic (2024) marketed “no AI” production as a selling point. C2PA provenance standards and blockchain-based verification are being developed to cryptographically certify human-created content.
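The mechanism those provenance efforts rely on is conceptually simple even where the specifications are not: hash the asset, sign the hash with a key held by the creator or capture device, and let anyone verify the signature later. A minimal sketch of that general idea using Python’s cryptography library; this illustrates the concept only and is not the C2PA specification, whose manifests and trust model are far more involved:

```python
# Simplified illustration of cryptographic content provenance: hash an asset,
# sign the digest with a creator-held key, and let anyone verify it afterward.
# This sketches the general idea only; it is not the C2PA specification.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()      # held by the human creator
creator_public_key = creator_key.public_key()   # published for verification

def sign_asset(asset_bytes: bytes) -> bytes:
    """Sign a SHA-256 digest of the asset, attesting the key holder produced it."""
    return creator_key.sign(hashlib.sha256(asset_bytes).digest())

def verify_asset(asset_bytes: bytes, signature: bytes) -> bool:
    """Recompute the digest and check the signature against the public key."""
    try:
        creator_public_key.verify(signature, hashlib.sha256(asset_bytes).digest())
        return True
    except InvalidSignature:
        return False

signature = sign_asset(b"original image bytes")
assert verify_asset(b"original image bytes", signature)      # authentic
assert not verify_asset(b"tampered image bytes", signature)  # altered or substituted
```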
This trajectory carries its own risk. If “human-made” becomes a luxury label, the result is a two-tier creative economy: authentic art as premium goods for the affluent, AI slop as content for everyone else. That stratification would reproduce, in the cultural sphere, the same concentration dynamics that the collection traces in economic (#1), computational (#2), and political (#6) domains. The governed outcome requires both collective bargaining frameworks for workers who have unions and authentication infrastructure for the broader market. SAG-AFTRA’s members secured consent, compensation, and control. Freelance illustrators, stock photographers, independent musicians, and translators have no equivalent bargaining power. The governance gap between unionized and non-unionized creative workers will define whether the transition produces a stable new equilibrium or a permanent fracture.
Every previous creative technology disruption triggered the same cycle of panic and adaptation, and every time, the new technology destroyed certain jobs while expanding the total scope of the medium. The Jevons paradox applies: when production costs fall, total demand for creative output tends to increase. More people write, photograph, record music, and create visual content than at any previous point in history. The real precedent may be the word processor, which automated typing and created vastly more demand for written communication than existed before, rather than the power loom, which permanently destroyed the livelihoods of handloom weavers. In the long run, the total volume of creative activity will grow. New forms will emerge. Cultural adaptation will absorb the disruption.
The problem is the transition. The job losses are sharp, localized, and happening now. The cultural adaptation is diffuse, generational, and years away. AI does not need to be better than human artists to destroy creative livelihoods. It only needs to be good enough and cheap enough to satisfy the market for functional creative work: the stock image, the background track, the product description, the marketing copy. That functional market sustained the apprenticeship pipeline. A world where the demand for meaningful, human-made creative work survives as a premium category, while the commercial middle that funded the development of necessary skills has been hollowed out, is a world where creativity becomes the province of those who can afford to develop it without commercial support. That outcome would be a new form of exclusion, and its proponents would call it democratization.
Key tension: The same AI tools that enable people with ideas to overcome the execution barrier also destroy the commercial apprenticeship through which execution skills have traditionally been developed. Whether the net effect expands or contracts the creative ecosystem depends on whether anyone builds an economic bridge between those two realities.