Last week, artificial intelligence (AI) music company Udio announced an out-of-court settlement with Universal Music Group (UMG) over a lawsuit that accused Udio (as well as another AI music company called Suno) of copyright infringement.

The lawsuit was filed last year by the Recording Industry Association of America on behalf of UMG and the other two “big three” labels: Sony Music and Warner Records.

The lawsuit alleged Udio – which offers text-to-audio music-generation software – trained its AI on UMG’s catalogue of music.

But beyond agreeing to settle, the pair have announced a “strategic agreement” to create a new product that respects copyright, to be trained exclusively on UMG’s catalogue. We don’t have any details about the product at this stage.

In any case, the agreement puts both Udio and UMG in powerful positions.

Uncertainty remains

Some notable copyright campaigners have trumpeted the outcome as a success for creators in the fight against “AI theft”. But since it’s a private settlement, we don’t actually know how compensation for artists will be calculated.

To seasoned observers, the agreement between UMG and Udio mainly reflects the realpolitik of music big business.

In a panel discussion at last year’s SXSW festival in Sydney, Kate Haddock, partner at the law firm Banki Haddock Fiora, anticipated many lawsuits between copyright holders and AI companies would end in private settlements that may include equity in the AI companies.

Such settlements and strategic partnerships will help major labels set the ground rules for developing AI-music ecosystems. And it seems they are becoming common. Last month, Spotify announced a deal with UMG, Sony and Warner to produce “responsible AI products” across a range of applications. Again, we have little detail as to what this will look like in practice.

Such arrangements could allow music giants to benefit financially from non-infringing uses of AI, as well as to take a cut from uses that attract a copyright payment (such as fan remixes).

How does this affect creators?

According to Drew Silverstein, co-founder and chief executive of AI-powered platform Amper Music:

the real headline is that with one of the biggest rights-holders now actively engaging with generative AI music products, smaller players can’t afford to sit on the sidelines.

However, any vision of how such a settlement might serve smaller individual creators remains murky.

Even with AI companies agreeing to do deals to get training data (rather than helping themselves to it), there’s no straightforward model for how attribution and revenue can be equitably distributed to creators whose work was used to train an AI model, or who opt in to future use of their works in generative AI contexts.

Several emerging companies, such as ProRata, claim to be developing “attribution tracing” technologies that can mathematically trace the influences on an AI-generated output back to their sources in the training data. In theory, this could be used to divide royalties, just as streaming services count the number of plays on a track.
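To make the arithmetic concrete, here is a minimal sketch of such a pro-rata split, assuming a tracing algorithm has already produced per-work influence scores. The function, track names and scores are invented for illustration; producing credible scores is the hard, contested step.

```python
# Hypothetical sketch: dividing a royalty pool pro rata across source works,
# given influence scores from some attribution-tracing algorithm. The scores
# are the contested input; this arithmetic is the easy part.

def split_royalties(attribution_scores: dict[str, float],
                    royalty_pool: float) -> dict[str, float]:
    """Pay each attributed work a share of the pool, proportional to score."""
    total = sum(attribution_scores.values())
    if total <= 0:
        return {work: 0.0 for work in attribution_scores}
    return {work: royalty_pool * score / total
            for work, score in attribution_scores.items()}

# Invented example: a tracer claims three training tracks influenced one
# AI-generated output, and $100 in royalties is owed for that output.
scores = {"track_a": 0.5, "track_b": 0.3, "track_c": 0.2}
print(split_royalties(scores, 100.0))
# roughly {'track_a': 50.0, 'track_b': 30.0, 'track_c': 20.0}
```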

However, such approaches would assign extraordinary economic power to algorithms that ordinary stakeholders don’t understand. These algorithms would also be inherently contentious. For instance, if an output sounded like 1950s bebop, there is no “right way” to decide which of the thousands of bebop recordings should be credited, or by how much.

A blunter but more practical approach has been used by Adobe for its Firefly image-AI suite. Adobe pays artists an “AI contributor bonus”, calculated in proportion to the revenue their work has already generated. This is a proxy measure: it doesn’t directly capture the value a work brings to the AI system.
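As an illustration only, and assuming (per the description above) that the bonus pool is split in proportion to prior revenue, the calculation might look like this hypothetical sketch, with invented names and figures:

```python
# Hypothetical sketch of a Firefly-style contributor bonus: the pool is split
# by revenue each work has already earned, not by any measured influence on
# the AI's outputs -- which is what makes it a proxy. Figures are invented.

def split_bonus(prior_revenue: dict[str, float],
                bonus_pool: float) -> dict[str, float]:
    """Divide a bonus pool in proportion to each artist's past revenue."""
    total = sum(prior_revenue.values())
    return {artist: bonus_pool * revenue / total
            for artist, revenue in prior_revenue.items()}

revenues = {"artist_a": 8000.0, "artist_b": 2000.0}
print(split_bonus(revenues, 1000.0))
# {'artist_a': 800.0, 'artist_b': 200.0}
```

Here past sales, not AI influence, drive the payout: a work that shaped many AI outputs but sold poorly would receive almost nothing.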

When it comes to generative AI, it’s hard to find attribution and revenue solutions that aren’t highly arbitrary, difficult to understand, or both.

The result is systems that risk being both easily exploited and inequitable. For example, if payments follow attributions, artists could be encouraged to create music that maximises the likelihood of attracting them.

Artists are already struggling to understand complex rules of success defined by powerful digital platforms. AI seems poised to exacerbate these problems by “industrialising” the sector even further.

Music as a public good

As it stands, individual artists don’t have clear, globally agreed protection from having their work used to train AI models. Even if they’re able to opt out in the future, generative AI is likely to present major power imbalances.

A model legitimately trained on a catalogue as vast as UMG’s – a giant tranche of the world’s most significant recorded music – will have the ability to create music in many different styles, and with a wealth of conceivable applications. This could transform the musical experience.

To understand what risks being lost, academic research is now reinvigorating a view of music at the scale of AI: a collectively produced, shared cultural good sustained by human labour. Copyright isn’t suited to protecting this kind of shared value.

The idea that copyright provides an incentive for creators to produce original work is faltering in the face of licensing deals between AI companies and the recording industry. Looking for other ways to support original music might be the solution we need.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Oliver Bown, UNSW Sydney and Kathy Bowrey, UNSW Sydney

Oliver Bown receives research funding from the European Research Council and the Australian Research Council.

Kathy Bowrey receives funding from the Australian Research Council.