The text below is from the Music Ally news report.
AI was the biggest story for the music business in 2024, and - despite sundry TikTok dramas in both years - looks set to be the biggest story of 2025 too. So naturally we made AI the first track of our Music Ally Connect conference in London today, focusing on one specific topic: licensing.
The track started with an interview with Gadi Oron, director general of Cisac, the global collecting societies body, conducted by Music Ally CEO Paul Brindley.
Oron talked about Cisac's recent study, commissioned from a research agency, on the potential economic impact of GenAI technologies on the music industry.
"We wanted to have the evidence from an external, objective, reputable consultancy about what is likely to happen in the market if this [GenAI] sector goes unregulated," said Oron.
He outlined the three key challenges that Cisac sees: how AI music companies should be licensed for their inputs - the music used to train their models; how to deal with their outputs - the music they generate; and how to tackle the use of musical AIs for streaming fraud.
"We want to monetise. We want to license all of these AI platforms for the inputs," said Oron. "But at the moment, none of our members - none of the authors' societies - have licensed any of the music AI services. So it's a major issue for us, and at the moment it's extremely difficult."
On the second challenge, Oron questioned whether the outputs from musical AIs can be licensed, and whether they can be brought into the existing system of collective management.
"We need transparency in the market," he warned. "Everything is around transparency. Only the AI services know what they are using, how they are using it and where they are sourcing the information from. We don't have that data."
Oron praised the European Union's AI Act, which includes the principle that AI services must be transparent about their inputs, even if the implementation of this has yet to play out.
He noted that some governments around the world are thinking of introducing text-and-data mining exceptions, enabling AI companies to use copyrighted material to train their models.
"Our message to governments is don't introduce this exception," said Oron. But if they do: "We ask to be able to opt out, for governments to allow us to reserve our rights."
Oron also stressed that he does not think the current debates around AI are a simple parallel to those around filesharing in the late 1990s.
"In the mid-to-late 90s, the industry's approach was very different. Mainly, the record labels - and the film studios - tried to stop the use of this [filesharing] technology," he said.
"This is not where we are today. I don't see any executives saying 'we need to stop the use of AI with respect to music'. We all talk about licensing… We just want the ability to monetise, and we want to opt out so that we can negotiate."
Oron said he sees a hugely positive future for the use of GenAI technologies within the music industry, and healthy partnerships with the companies developing them.
"If there's something history taught us, it's that the industry always finds a way to adjust to new market realities and new disruptive technologies," he said.
"It's just a question of how long it takes. And how long it takes for these services to realise they have to play by the rules and pay for copyrighted content."
I would say yes, there is definitely an appetite to license
Chris Horton, Universal Music Group
Oron's keynote was followed by a panel on AI licensing chaired by Reed Smith partner Nick Breen. The panel included UMG's SVP, strategic technology, global digital strategy Chris Horton; Musical AI's COO Matt Adell; Sacem's director, development, phono and digital (and CEO of URights) Julien Dumon; and MatchTune's chief business development and rights officer Virginie Berger.
Is there an appetite among AI companies to sign licensing deals for the music they are using to train their models?
"I would say yes, there is definitely an appetite to license. What we're seeing right now is there's a division in the market between companies that want to take what we would call the ethical approach, and who want to make sure that they have all the rights so they are out talking to rightsholders about how they acquire the rights," said Horton.
"And then there are other companies who have just scraped the web, and are going and doing what they do. Depending how the law goes and how litigation goes, what we saw in the P2P days was that eventually they came together, and it became clear that the best and easiest path - and really the only path - was to go get a licence."
Virginie Berger took a different view, it's fair to say.
"Actually, I don't think that we can license. I think that we should license, but big tech - and when I talk about big tech, it's really big tech - they don't want a licence. It's already done: they went through all the music," she said.
"And the second reason is that we can't find the input from the output. We can't reverse-engineer and analyse what is the input. It's absolutely impossible. And if you read what the lawyers said in the lawsuit between OpenAI and the New York Times, they said they don't keep the inputs, so we don't know."
Berger also expressed scepticism about the willingness of governments, from the US to China, to regulate AI too strictly. "All the people around Trump, it's only the big tech, and they don't want to license," she said.
Sacem's Dumon took the middle ground between Horton and Berger's viewpoints. "There are tons of AI actors looking to get licensed, but the big ones - Suno, Udio, OpenAI - they are not in this situation," he said.
"We would love to be in a position to license both input and output, but the reality is much more complex, and it will certainly depend on the types of rights you are providing to these companies."
Adell ramped up the optimism again though, noting that his startup - which acts as a bridge between rightsholders and AI companies to craft and monitor licensing deals - already has evidence that those deals can be done.
"We started generating revenue January 1st. We have the first end-to-end deal with rightsholders and an AI company, Beatoven out of India. Their legacy company had been around for a while with a fairly-trained model, and we're now helping them build their next model," he said.
"We followed [German collecting society] GEMA's recommendations. GEMA recommended that there be a 30% pool of revenue passed back to rightsholders, and that is exactly what we've gotten Beatoven to agree to."
Adell suggested that the music industry should be talking and thinking more about revenue - and specifically a share of revenue being made by AI companies, including from their outputs. "There's absolutely an appetite for it. It's here…"
How is that pool of revenue divided? This has yet to be worked out. Adell noted that because no collecting society has licensed an AI firm - as Oron said earlier - for now Musical AI is only working with 'wholly owned' music: mainly independent musicians and production libraries.
However, it has built its system to work with whatever splits (for example between recordings and publishing rightsholders) the industry agrees around the world.
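To make the arithmetic concrete, here is a minimal sketch of how such a pool might be computed and divided. Only the 30% pool rate comes from the panel's citation of GEMA's recommendation; the 50/50 recordings/publishing split, the revenue figure and the per-rightsholder usage weights below are hypothetical placeholders for whatever the industry eventually agrees.

```python
# Minimal sketch of dividing a GenAI licensing revenue pool.
# The 30% pool rate is GEMA's recommendation as cited on the panel;
# everything else (revenue figure, splits, usage weights) is hypothetical,
# since the industry has not yet agreed how the pool should be divided.

AI_COMPANY_REVENUE = 1_000_000  # example gross revenue for a period (USD)
POOL_RATE = 0.30                # GEMA-recommended rightsholder pool

# Hypothetical split of the pool between recordings and publishing.
SIDE_SPLITS = {"recordings": 0.5, "publishing": 0.5}

# Hypothetical per-rightsholder usage weights within each side, e.g.
# derived from attribution data supplied by the licensed AI company.
USAGE_WEIGHTS = {
    "recordings": {"indie_label_a": 0.7, "production_library_b": 0.3},
    "publishing": {"songwriter_x": 0.6, "songwriter_y": 0.4},
}

def divide_pool(revenue: float) -> dict[str, float]:
    """Return each rightsholder's share of the revenue pool."""
    pool = revenue * POOL_RATE
    payouts = {}
    for side, side_share in SIDE_SPLITS.items():
        side_pool = pool * side_share
        for holder, weight in USAGE_WEIGHTS[side].items():
            payouts[holder] = round(side_pool * weight, 2)
    return payouts

if __name__ == "__main__":
    for holder, amount in divide_pool(AI_COMPANY_REVENUE).items():
        print(f"{holder}: ${amount:,.2f}")
```

Musical AI's actual methodology is not public; the point is only that once the splits are agreed, distributing the pool itself is straightforward bookkeeping.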
Berger talked about what MatchTune is doing, developing technology to detect when a track is AI-generated - and even which model was used to create it - to give distributors and streaming services the data they need to decide how they want to treat that music.
Horton, meanwhile, talked about some of the technology that the music industry might build to help. He referred back to the early days of the music-downloads market, when stores had to take different delivery formats from every major.
"So we created the DDEX standard so that there was one common format," he said. "One of the things that we're interested in doing now is working with other companies in the music industry to create a provenance standard, so that you start to track how a song was made from the very beginning, and at the very end you can look at the ingredients list and see what went in."
"There are going to be requirements around the world about labelling [AI-generated content] so we'll have to disclose to end users what it is they're getting," he continued.
"And the DSPs will want to know, because maybe they want some types of content and not others. And maybe we need to know from a licence standpoint: as content owners we're going to want to know what we're getting. This is work that has not yet started, but it's kinda percolating in the background, and I believe that we'll do something later this year."
What kind of discussions is UMG having with its artists and songwriters about AI and licensing? Horton said the conversations have just started, so it's too early to predict what they will conclude.
"I have an opinion on how it's going to go. We'll see if I'm right or not. But I feel at least initially, because it's so new, they're going to be more interested in technologies that help create a better relationship with their fanbase, or expand their catalogue, or give them a creative outlet," said Horton.
"I don't know that they're going to want to start with tools that will essentially directly compete with them in the market. We may get there eventually, as people get more comfortable with how it's used and what the services are doing. But I'm not sure that's where they want to start. But we'll see."
Opt-out, to me, sounds like the mafia walking into your store and saying 'it'd be a shame if something happened to your store…'
Matt Adell, Musical AI
Dumon talked about the importance of using technology to hold AI companies to account: to detect when a model has been trained on a society's repertoire, for example, even if the company that built it claims that it wasn't.
The conversation moved on to 'opt-outs' - when a government creates a text-and-data mining exception but allows rightsholders to opt out of that exception, and thus negotiate deals with AI companies themselves. In theory, at least…
"Opt-out is an illusion. It actually doesn't work. You can say 'yeah, we opt out' but in the end it's absolutely impossible to delete all the downstream copies.. If they say they won't train on Spotify, for instance, they will find the songs somewhere else," said Berger. "It [opt-out] is really an illusion."
Horton agreed. "Copyright is normally opt-in. You don't go and take something and then tell the person 'oh, by the way, I took it'. That's not how things have worked, so this is a change in the way copyright is handled," he said.
"But practically speaking, it also does not work. As Virginie said, we can opt out for the content on our websites, and we can ask our DSP partners to opt out on their sites, but our songs are in TV shows, commercials, radio, the dark web… There's no way for us to opt out everywhere… We don't have control of the internet, so it's just practically impossible."
Adell chimed in: "Opt-out, to me, sounds like the mafia walking into your store and saying 'it'd be a shame if something happened to your store…'"
Adell also pointed to a recently-adopted law in California, AB-2013, which from 1 January 2026 will require any AI company doing business in that state to publish a list of their training data, and where they got it from.
"Now, of course the first pass at that is not going to be acceptable to a lot of people, but effectively now that means this year, anybody who doesn't want to publish a giant please-sue-me list on January 1st next year is going to be out there licensing and retraining their model this year," he said. "It's going to be an exciting year."
Adell also cocked a snook at the argument from some AI companies that training on copyrighted music is 'fair use' in the US.
"Fair use is misunderstood in the US a lot. Almost all of the time, when someone says 'fair use' they don't understand what they're talking about! Almost nothing is fair use in the United States, and almost nothing has ever gone to trial about what fair use is," he claimed.
"There is zero doubt in my mind: stealing all of Chris's stuff [UMG's catalogue] and making a billion-dollar business out of it is not fair use!"
Berger also raised the question of whether AI companies really do keep records of what they have trained their models on, even when they claim they have not.
"I don't believe that last part. A lot of them, they're not just building one model and stopping right there. They're iterating, they're constantly improving, and if they have a clean dataset, they're going to want to use that again," said Horton.
"There are tools on the market specifically designed for AI companies - and I have to believe that they're using them - to help them track and maintain lists of their training data, and understand what it is and what's clean."
"You don't digest 10 million assets and not accidentally create a record of it," agreed Adell.
As the panel drew to a close, a question from industry consultant Becky Brook, sitting in the audience, made an important point about AI licensing.
There is a debate about whether wholly AI-generated content (including music) should qualify for copyright protection or not. The music industry has tended to be on the side arguing that it shouldn't. But if it doesn't qualify for copyright protection, does that mean it can't make money? And if so, what good are calls for the music industry to monetise those outputs?
"That's a great point," said Horton. "In the US, if you do prompt-generated content, there is no copyright in that output, so that is going to have implications about whether and how that content is monetised."
"There may be some things that are partly AI-generated and partly human-generated which would qualify for copyright, but even if there is no copyright, it could be that a service is still charging you to access that content. And if there's revenue there, as Matt said, I think that's something that the artists should share."