Here's how the Coachella economics work for a typical indie band:
Imagine a band offered a slot at Coachella 2023. Not a headlining slot - a 2:30 PM set in the Mojave tent, competing with desert heat and five other stages. Their fee: $15,000. Sounds generous for a 45-minute set.
Then comes the fine print.
The radius clause: No shows within 1,200 miles of Indio between December 15 and May 1. That's four and a half months. No LA shows, no San Diego, no Phoenix, no Vegas. In the most lucrative touring season, they'd be locked out of the entire Southwest.
The marketing requirements: Minimum $50,000 spend promoting their Coachella appearance. Social media campaigns, PR firms, sponsored content. All coming out of the band's pocket.
The production costs: Coachella provides a stage and basic sound. Want anything special - visuals, guest performers, enhanced production? That's on you. Budget another $30,000 minimum.
The opportunity cost: They'd have to turn down a 20-date club tour that would have netted $80,000. But club dates don't go viral. Coachella does.
Total cost to play Coachella: $160,000 in expenses and lost income. Total payment from Coachella: $15,000. Net loss: $145,000.
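Run those numbers as a ledger - a minimal sketch using the illustrative figures above (not actual Coachella terms, which vary by act):

```python
# Hypothetical figures from the example above - not actual Coachella terms.
fee = 15_000               # payment for the 45-minute set
marketing = 50_000         # mandatory promotional spend
production = 30_000        # extra visuals, guests, staging
lost_tour_income = 80_000  # the 20-date club tour they turned down

total_cost = marketing + production + lost_tour_income  # 160,000
net_loss = total_cost - fee                             # 145,000

print(f"Total cost: ${total_cost:,}")  # Total cost: $160,000
print(f"Net loss:   ${net_loss:,}")    # Net loss:   $145,000
```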
The pitch is always the same: "Think of the exposure. Billie Eilish was discovered at Coachella. Cardi B's career exploded after her set. This is your shot."
What goes unmentioned: for every Billie Eilish, there are hundreds of bands who paid six figures to play in the desert heat, gained 10,000 Instagram followers, and were forgotten by Monday.
Most bands take the deal. When Coachella calls, you answer. Even if it bankrupts you.
The typical outcome six months later? A band might gain 28,000 Instagram followers and 40,000 monthly Spotify listeners. Their booking fee for club shows might increase from $2,500 to $4,000. Sounds like success until you run the numbers: the raise works out to an extra $1,500 a show, so they'd need nearly a hundred club dates just to claw back the $145,000 loss. The average band plays 30 shows a year.
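The break-even arithmetic, in the same hypothetical terms:

```python
old_fee, new_fee = 2_500, 4_000  # club booking fee before and after Coachella
premium = new_fee - old_fee      # extra income per show: $1,500
net_loss = 145_000               # from the ledger above

shows_needed = net_loss / premium
print(round(shows_needed))       # 97 shows - at ~30 shows a year, over 3 years
```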
The reality: they're playing bigger rooms but carrying $80,000 in debt. Labels advance the money for Coachella promotion, then bands owe three more albums just to pay it back.
Data from music analytics firms shows the "Coachella effect" varies wildly:
- Chartmetric found just 2.24% average Spotify growth in 2024 when excluding outliers
- Individual acts range from minimal gains to 100%+ for breakouts like Chappell Roan
- Most non-headlining acts earn $10,000-$15,000 but lose money after production costs
- Cardi B lost money in 2018: earned $70,000 per weekend, spent $300,000 on production
The "Coachella bump" is real but brutal. Yes, you gain listeners. Yes, booking agents notice. But the numbers rarely work. One band, The Marías, saw their streams explode after their 2021 set - from 2 million to 20 million monthly listeners. Everyone cites them as proof the system works. Nobody mentions the 47 acts that same year who ended up deeper in debt.
The Coachella trap in four acts:
- Get the offer (dreams of stardom)
- Spend everything to "maximise the opportunity"
- Get a small career bump that doesn't cover costs
- Owe your label another album just to pay back the advance
Coachella pays its artists well - headliners get millions, even smaller acts get thousands. But artists still pay to play there, just not directly. The radius clause alone can cost emerging artists more than their annual income.
But Coachella is the exception. For most festivals, artists literally pay to play.
Sometimes it's upfront: $150 for a 20-minute set, $1,200 for 60 tickets you have to sell yourself. Sometimes it's disguised: "promotional fees", "marketing partnerships", "exposure opportunities".
The Civil Unrest Tour required bands to buy 60 advance tickets - if they couldn't sell them all, they lost money. CMA Fest had bars charging for time slots. Smaller festivals routinely ask non-headlining acts to purchase ticket bundles upfront.
The economics for a typical regional festival slot:
- Festival slot fee: $500 for 30 minutes
- Travel costs: 8-hour drive each way (gas: ~$200)
- Accommodation: Two nights (~$300)
- Ticket requirement: 20 tickets at $40 each to sell
- If only 12 sell: $320 loss on unsold tickets
- Total outlay: ~$1,320
- Total earnings: $0
- Social media growth: minimal (typically under 100 followers)
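The same sums in code, with the assumption spelled out that unsold tickets come out of the band's pocket:

```python
# Illustrative figures from the list above.
slot_fee = 500                  # paid TO the festival, not by it
gas = 200                       # 8-hour drive each way
accommodation = 300             # two nights
tickets_bought, price = 20, 40  # bundle the band must buy upfront
tickets_sold = 12

unsold_loss = (tickets_bought - tickets_sold) * price        # $320
total_outlay = slot_fee + gas + accommodation + unsold_loss  # $1,320

print(f"Out of pocket: ${total_outlay:,}, earned: $0")
```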
The justification? "Festival experience" on the resume might help book other gigs.
Does it? The venues know artists are desperate enough to pay to play festivals. So they offer less. The race to the bottom accelerates.
This is the music industry in 2025: artists paying for the privilege of performing their own work.
It's a protection racket with a stage.
THREE PLAYERS, THREE GAMES
The copyright panic around AI isn't a simple story of artists versus tech. There are three distinct groups with different incentives:
The Old Guard: Universal, Sony, Warner (the major labels), plus publishers like News Corp. They own the copyrights. They see AI as an existential threat - what's their catalog worth if AI can generate infinite music? They're the ones driving the copyright panic, using artist concerns to protect their business model.
The New Distributors: Spotify, Apple Music, YouTube. They're playing both sides. Infinite AI content would weaken everyone's negotiating position, but they also need publishers' marketing muscle and catalog to keep users engaged. They're fence-sitting, waiting to see which future is more profitable. Their real fear? AI companies becoming distributors themselves.
I now see twice as much referral traffic from ChatGPT as from Google. As I write this, Google has made AI summaries the first result and AI chat the first tab in its redesign - the most radical change to the world's most valuable interface in twenty years.
The AI Players: OpenAI, Anthropic, Google's AI division. They claim fair use while signing licensing deals. Sam Altman, like Musk, isn't content with one industry - he wants to own search, creation, and distribution. The licensing deals might be defensive (avoiding lawsuits) or offensive (building moats). Probably both.
Each group wants something different. The Old Guard wants licensing requirements to maintain relevance. The New Distributors are desperately trying to remain distributors at all. The AI Players want to become the new everything. Artists? They want fair compensation and "respect" - though what respect means in an economy that already makes them pay to play is an open question.
SIGN WITH MORRIS, OR SIGN WITH NOBODY
The mob never left the music business. They just incorporated.
Morris Levy, who ran Roulette Records with the Genovese crime family, showed how it worked. When singer Jimmie Rodgers tried to leave the label, he ended up with a fractured skull and two weeks in a coma. When Tommy James of the Shondells got interest from multiple labels, they all mysteriously withdrew their offers - except Levy's. Sign with Morris, or sign with nobody.
The baseball bats became lawsuit threats. Payola became playlist placement fees. The same protection racket, now with better lawyers and worse terms. Every generation of technology promises to free artists from the last generation's extortion, then invents a more efficient version.
I watched this transformation firsthand. I ran a small independent label in the late 90s when mp3.com showed us the future. Their business model was breathtaking: buy CDs in stores, rip them to servers, sell "backup" access to anyone who claimed to own the disc. No licenses, no permission. When we protested, they laughed. "Sue us if you can afford it."
This was actual piracy - taking our physical recordings and selling access to them. Not analysing patterns. Not learning from data. Straight-up commercial exploitation of copyrighted recordings.
The bitter irony: mp3s were supposed to democratise music. Instead they killed the democrats first. Indie labels like mine needed ~500 sales to break even on a pressing. A typical run of 750-3,000 netted a modest profit that kept the tiny enterprise going. There were thousands of others like us at the nexus of fandom and entrepreneurialism. When students - our entire audience - started downloading instead, we were dead in months. The majors? They had cushion, catalog, lawyers.
We thought mp3s were destroying the industry. They were actually clearing the ground for a new order. The technology shift made traditional label functions - physical manufacturing, distribution, retail relationships - obsolete almost overnight. Artists could now reach audiences directly through CDBaby, later Spotify, YouTube, Instagram. The democratisation we were promised actually happened, in a way.
But when everyone can access distribution, discovery becomes the new bottleneck. The game changed from 'who can press and ship CDs' to 'who can get on Today's Top Hits.'
The majors adapted. They abandoned bankrolling hopefuls and pivoted to marketing muscle and playlist influence. When Spotify arrived, they had the leverage to negotiate equity stakes, preferential rates, guaranteed playlist positions. Instead of selling physical albums, they collected streaming royalties, upfront payments, and catalog licensing. Independent artists got tools but no leverage. Low-budget indies were roadkill.
I've watched this cycle three times. Each technology promises to democratise music. Each time, it briefly does - then the survivors figure out how to capture it. Mp3.com's model - take first, pay nothing, call it innovation - became the template. Spotify just made it legal.
40,000 YEARS WITHOUT COPYRIGHT
For 40,000 years, humans created without copyright.
Cave paintings at Lascaux. Epic poems passed down through generations. Cathedral builders who never signed their work. The entire Renaissance, which was basically one long remix project - everyone stealing from everyone else, improving on what came before.
Shakespeare lifted every plot. Romeo and Juliet was adapted from Arthur Brooke's The Tragical History of Romeus and Juliet. Hamlet reworked Thomas Kyd's earlier play. King Lear borrowed from multiple sources. The greatest writer in English literature was what we'd now call a content aggregator.
Plagiarism is necessary; progress implies it.
Copyright didn't exist because it didn't need to exist. Creativity happened anyway. In fact, it flourished precisely because ideas could flow freely, be built upon, transformed, and reimagined.
Then came the printing press in 1440. For nearly three centuries, it spread knowledge without copyright law. Printers had local monopolies through guild systems and royal patents, but no universal author's rights existed.
Copyright only emerged in 1710 with the Statute of Anne - not to protect content, but to break the London printing guild's perpetual monopoly on it. Before copyright, the Stationers' Company controlled books forever. Copyright was actually the radical idea that monopolies should end - that after 14 years, works must enter the public domain.
This is the rich irony: copyright was invented to limit monopolies and guarantee public access, not to create exclusive control. It wasn't a natural right but a compromise: temporary monopoly in exchange for eventual freedom.
The Statute of Anne was explicit about this trade-off. It granted authors 14 years of protection, renewable once if they were still alive. After 28 years maximum, the work belonged to everyone.
Today's copyright terms - life of the author plus 70 years - would have horrified the system's inventors. They created copyright to encourage creativity by ensuring works would quickly become building blocks for future creators. Instead, we've turned it into a perpetual monopoly that prevents exactly the kind of remixing and building-upon that made Shakespeare possible.
The damage is visible everywhere. De La Soul's classic albums were kept off streaming for over 20 years because of sample clearances - they only arrived in March 2023. The Turtles sued them for $2.5 million over a 12-second sample, settling for $1.7 million. That single lawsuit helped kill the golden age of sampling. We've criminalised the very creativity copyright was meant to encourage.
Every new technology has triggered the same panic cycle:
Piano rolls in the 1900s: John Philip Sousa testified before Congress that mechanical music would destroy live performance. "These talking machines are going to ruin the artistic development of music in this country," he declared. The result? Publishers secured mechanical royalties through compulsory licensing. Music exploded, publishers got paid.
Radio in the 1920s: Record companies fought to prevent their music from being broadcast for free. They argued radio would kill record sales. The result? Broadcast royalties collected through ASCAP - and later BMI, founded by the broadcasters themselves. Labels discovered radio drove record sales rather than killing them.
Home taping in the 1980s: "Home Taping Is Killing Music," declared the British Phonographic Industry's campaign. The cassette would destroy the recording industry. The result? Blank media taxes in many countries. The industry's most profitable decade followed.
File sharing in the 2000s: The RIAA sued 35,000 people, claiming downloads would end recorded music forever. The result? Streaming deals where labels got equity stakes. More music than ever, but artists get fractions of pennies per stream.
Notice the pattern? Panic, lawsuits, then new revenue streams for businesses. Creativity never died - it exploded every time. But the businesses crying wolf always found a way to get paid. The artists who were supposedly being protected? Different story.
The critics cheering today's copyright lawsuits are making the same arguments that were made against home taping, radio broadcasts, and piano rolls. They're right that the pattern will repeat. They're wrong about who benefits.
MORE ART THAN EVER, MOST OF IT FREE
Right now, in 2025, we're living through the most creative period in human history.
Over 100,000 new tracks are added to streaming platforms daily, with over 200 million total tracks available by 2025. Every minute, 500 hours of video are uploaded to YouTube. Hundreds of millions of posts are shared on Instagram daily.
Most of it is created for free. Not because creators are forced to, but because they want to.
Millions post art on Instagram without expecting payment. Open source developers have created billions of dollars in value and given it away. Wikipedia is written by volunteers. Fan fiction authors produce novels longer than War and Peace for free. Podcasters spend hours each week creating content, hoping to build an audience.
When millions create for free, attention becomes the only scarcity. We're already drowning in content: at the current upload rate, watching a single day's worth of YouTube videos would take you 82 years - and by the time you finished, another 2.5 million years' worth would have been uploaded. Now imagine that multiplied by the entire internet. In this ocean of infinite content, platforms like Spotify, YouTube, and Instagram control attention distribution - which is the only currency that matters.
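Those figures aren't rhetorical - they fall straight out of the 500-hours-per-minute upload rate quoted earlier:

```python
uploaded_per_day = 500 * 60 * 24  # 720,000 hours of video per day

# Watching one day's uploads non-stop, 24 hours a day:
years_to_watch = uploaded_per_day / 24 / 365  # ≈ 82.2 years

# Hours uploaded during those 82 years, expressed as viewing-years:
backlog_years = years_to_watch * 365 * uploaded_per_day / (24 * 365)

print(f"{years_to_watch:.0f} years to watch; "
      f"{backlog_years/1e6:.1f}M years added meanwhile")
# 82 years to watch; 2.5M years added meanwhile
```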
The economic argument against AI training - "nobody will create if they can't monetise it" - is empirically false. We're witnessing the largest explosion of voluntary creativity in human history, happening right now, while people argue that creativity will die without stronger copyright protection.
Artists accept radius clauses, compete for playlist placement, chase viral TikTok moments. In the attention economy, exposure often matters more than direct payment. That's why bands pay to play Coachella.
This undermines the core copyright argument: if people create without expecting payment, if they actively pay for exposure, then how does AI training on existing works discourage creativity? The economic incentive never existed for most creators - it was always about something else.
To be clear: artists should get paid. But the current system - the one copyright maximalists are desperately defending - already ensures most don't. When Spotify pays $0.003 per stream, when labels take 80% of what's left, when artists pay to play festivals, the system is already broken. The publishers crying "theft" about AI training are the same ones who built an economy where artists create for exposure instead of income. They're not protecting artist compensation - they're protecting their own extraction model.
WHAT COURTS ARE ACTUALLY DECIDING
The common assumption is that the copyright issue is about AI outputs - that ChatGPT might spit out a Beatles song or create art "in the style of" a famous artist, somehow stealing sales from the original.
But that's not what the legal cases are about.
To understand why this matters, you need to know what a Large Language Model actually is. An LLM is essentially a massive mathematical thesaurus - it learns which words tend to appear near other words across billions of examples. The "weights" everyone talks about are just numbers representing how strongly different concepts connect. When you ask it a question, it's not searching a database of stored texts. It's using these statistical relationships to predict what words should come next.
In fact, the architecture is the opposite of copying. These systems compress patterns from training data into abstract relationships - like how you might remember that "desserts often follow main courses" without memorising every meal you've eaten. (This is why LLMs are prone to cliches: they average across everything.) When they occasionally output something resembling copyrighted work, it's because the training process saw that exact phrase so many times that its statistical weight became overwhelming. This is a bug, not a feature.
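To make "mathematical thesaurus" concrete, here's a toy version - a bigram counter, nothing like a real transformer in scale or sophistication, but it shows the key property: what survives training is statistics about which words follow which, not the text itself.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which - the toy equivalent of 'weights'."""
    weights = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        weights[prev][nxt] += 1
    return weights  # only the counts survive; the corpus is discarded

def predict(weights: dict, word: str) -> str:
    """Predict the statistically most likely next word."""
    return weights[word].most_common(1)[0][0]

w = train("the cat sat on the mat and the cat ate the fish")
print(predict(w, "the"))  # 'cat' - the most common successor of 'the'
# w holds counts like {'the': {'cat': 2, 'mat': 1, 'fish': 1}} - patterns,
# not a copy of the sentence. Verbatim regurgitation happens only when one
# continuation so dominates the counts that it's always chosen: the
# overweighting failure mode described above.
```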
But bugs make good lawsuits. The New York Times is suing OpenAI, claiming its articles can be reproduced. OpenAI counters that the Times used specifically engineered prompts to trigger this rare failure mode - like finding you can hack a vending machine with a foreign coin and claiming the whole system is designed for theft.
NYT aside, the actual legal cases aren't about outputs. They're about something far more technical and narrow: whether temporarily copying training material to a filesystem during the training process constitutes copyright infringement.
Crucially, the question is not whether the trained model itself contains copies of the works. Think of it like this: to learn patterns from a book, the AI system must first download and store that book temporarily, just as your browser downloads a webpage to display it. The legal question is whether that temporary copy, made for analysis, is fair use.
While plaintiffs throw everything at the wall - including claims about reproduction, derivative works, and outputs - the legal centre of gravity remains whether this temporary storage for pattern analysis is transformative fair use. The pattern analysis itself appears to be legal; it's the act of downloading copyrighted material to analyse it that's primarily in question.
When GPT-4 was trained on millions of books, it didn't store those books. It learned statistical relationships between words, phrases, and concepts. It's like how music theory emerged from analysing thousands of songs - we extracted the patterns (chord progressions, scales, rhythm structures) without keeping the songs themselves. The training data gets processed and discarded. What remains is a mathematical model of language patterns, just as music theory is a model of musical patterns.
Recent court rulings have recognised this distinction. In June 2025, Judge Alsup ruled that Anthropic's use of copyrighted books to train Claude was "quintessentially transformative" and therefore fair use. Days later, Judge Chhabria ruled in favour of Meta, finding that authors failed to demonstrate sufficient harm from AI training on their works.
The technical illiteracy runs deep. Netflix's new AI guidelines prohibit outputs that 'replicate' copyrighted material and mandate that tools don't 'store' training data - requirements that fundamentally misunderstand how these systems work. LLMs don't store data; they store mathematical weights. They're anti-copy machines that average across patterns, which is why their output tends toward the generic and the cliched. Netflix is regulating against behaviours that only occur when systems malfunction, like requiring cars not to fly. Even the companies setting industry standards don't understand what they're regulating.
Meanwhile the courts are getting it right: pattern learning is not piracy. But the copyright maximalists have convinced artists that something else entirely is happening - that AI is somehow stealing their work and preventing them from getting paid.
FOLLOW THE MONEY
The real economic threat is rarely examined clearly. Streaming platforms are being flooded with AI-generated music. Since those platforms pay from a shared royalty pool based on percentage of total streams, every AI track dilutes earnings. But publishers lose the most from this dilution, while Spotify benefits from having infinite content to serve.
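The dilution mechanics are simple pro-rata arithmetic - a sketch with made-up numbers (real pools and splits are negotiated and vary by platform):

```python
pool = 1_000_000  # monthly royalty pool in dollars (made up)
human_streams = 90_000_000

def payout(streams: int, total_streams: int) -> float:
    """Pro-rata model: your cut of the pool is your share of all streams."""
    return pool * streams / total_streams

before = payout(human_streams, human_streams)              # $1,000,000
after = payout(human_streams, human_streams + 10_000_000)  # $900,000

print(f"${before:,.0f} -> ${after:,.0f}")
# Nobody played human tracks any less; AI streams simply diluted the pool.
```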
The current royalty structure isn't a law of nature - it's a negotiated agreement that may no longer fit the technology. If Spotify serves less publisher content to consumers, they become a less valuable distribution channel. Can publishers pull their catalog? In theory yes, but where else would they go? Market forces should fix this, but instead we're watching a turf war between beneficiaries of a rigged game.
Publishers need artists to believe AI training is theft. Artists' outrage mobilises public support. The public's misunderstanding - thinking AI training is theft rather than seeing how AI dilution threatens publishers' revenue - creates political pressure. Politicians respond to constituent concerns. The entire apparatus of public opinion becomes a lever for the licensing regime publishers want. The actual threat to publisher income isn't pattern learning - it's being devalued by entirely new AI-generated content, which publishers fear the public are just as happy to consume, and from which they get no cut. It's like expecting royalties on every 12-bar blues song. (And don't get me started on the royalty battles over singing Happy Birthday, which only entered the public domain in 2015!)
But these copyright wars usually end in deals, not destruction. Consider Google: they've scraped the entire internet for decades, faced endless publisher lawsuits, but eventually reached an uneasy détente. Publishers realised they couldn't out-Google Google, so they accepted the bargain: Google sends traffic, publishers optimise for SEO. An entire industry grew around this symbiosis.
Now that bargain is breaking. Google sends less traffic (keeping 59.4% of searches on their own properties). Meanwhile, ChatGPT is sending significant outbound traffic to publishers. Could OpenAI be the one to out-Google Google with a better deal? They have the users. They have richer context than bare search queries. They might offer publishers higher-quality traffic in exchange for laying off the lawsuits. Will the carrot be big enough for publishers to actively want to be crawled, pursuing optimisation the way they do for Google? We're already seeing early signs of this among smaller publishers and merchants, and a nascent optimisation industry is trying to settle on a name, torn between LLMO and GRO.
Publishers see the opportunity and the threat. The same act that Google did "for the good of the internet" becomes "theft" when done by newcomers - unless those newcomers are willing to offer enough incentives.
This pattern reveals the real game. The people most upset about AI training aren't worried about creativity. They're worried about their economic niche.
Publishers who charge $30 for academic textbooks don't want AI to democratise access to information. Stock photo companies don't want AI to generate images for free. News organisations don't want AI to summarise articles without driving traffic to their sites.
These are legitimate business concerns, but they're not creativity concerns. The same publishers charging $30 for textbooks also pay authors $2-3 per book sold. The same stock photo companies that want AI licensing also pay photographers pennies per download. The business models that AI threatens often weren't serving creators well to begin with.
When these companies claim to protect creators, they're protecting their own middleman profits.
Meanwhile, the distribution platforms that actually control creative discovery continue consolidating power. Spotify's algorithm decides which songs get discovered. YouTube's recommendation system determines which videos go viral. Amazon's search algorithm controls which books get found. Apple's App Store policies decide which apps can exist.
These algorithmic decisions have more impact on creative careers than any AI training ever could. Yet the platforms that really control creative fate operate without challenge.
The real fight isn't about copyright - it's about who controls distribution and attention. Publishers block scrapers while negotiating licenses. Distributors scramble to become AI companies before AI companies become distributors. Everyone's trying to avoid becoming obsolete.
THE LICENSING TRAP
Watch the chess game: OpenAI and Anthropic claim fair use publicly while quietly signing "voluntary" licensing deals. Why pay for something you claim is legally free? Because exclusive licenses become moats. Once courts establish any licensing precedent, those early deals lock out competitors.
The AI companies might even prefer a licensing regime - as long as they help design it. Better to pay predictable fees you can pass to customers than face endless lawsuits. Better still if those fees are high enough to block new entrants. Any startup trying to compete would face an insurmountable barrier - licensing the same training data without the scale to absorb those costs.
These lawsuits are a smokescreen. When training data requires expensive licenses, only companies with Microsoft or Google backing can compete. The AI companies get a legally defensible monopoly.
New AI companies? They're locked out before they start. Can't train without licenses. Can't afford licenses without massive funding. Can't get funding without already having trained models. The circle closes.
The result: a cartel. The existing AI companies get grandfathered in with their early licensing deals. New competitors can't afford to enter. Publishers get a new revenue stream (and as Spotify shows, almost none will reach actual creators).
Governments get more than just the appearance of action. The UK floated the idea of a government-controlled registry for training data. Beyond blocking illegal content, such a registry could let governments monitor - and potentially control - what goes into AI systems. European governments could ban problematic content. Authoritarian regimes could block politically sensitive material. Nobody's yet discussing this other side of the sovereign-AI coin, but it seems obvious once you think about it.
Everyone wins except consumers, who pay higher prices, and potential innovators, who get locked out entirely. The licensing regime suits both big AI companies (who get predictable costs and competitive moats) and publishers (who get a new revenue stream). The business incentives are irresistible.
This is why the copyright panic focuses specifically on AI training while ignoring other forms of automated content analysis. Google has been analysing web pages for search ranking for decades. Spotify analyses songs to create algorithmic playlists. Facebook analyses posts to determine what gets seen. None of this triggered the same panic, because it served existing platform interests.
But when AI companies started training models, publishers saw an existential threat. Do they genuinely believe it's theft, or are they deploying that charge cynically? Hard to say. What's clear is that "AI is stealing" mobilises public support better than "AI threatens our business model."
We're not really arguing about copying. We're arguing about control - who gets to set the terms in the digital creative economy.