AI & IP: Who owns music made by a machine?


Remember Napster? A bunch of kids got laughed out of a music boardroom with a scrappy prototype for distributing music on this new thing called the internet. They launched independently, and over the following years the music industry’s total revenue collapsed to a third of its peak. It never fully recovered. Now we’re sitting at the cusp of another major technological shift in music – the emergence of creative Artificial Intelligence – and that same intoxicating mix of opportunity and potential catastrophe is in the air.

This time, it’s all about intellectual property. And we need to get it right.
 

Rupert Parry

The modern-day music industry runs on intellectual property. Frankly, you can tell that just by looking at the lawsuits (of which there are plenty), or the recent record-breaking investments in music rights. But for an industry that invented sampling and cover songs, its relationship with IP is still complex, unstable, and conflicted. Especially when it comes to artificial intelligence.

That’s because IP is about a songwriter (or musician, or producer) owning the rights to a piece of music they’ve contributed to. But when AI is involved, it’s unclear exactly where the contribution is coming from. We often think about AI as one amorphous thing, but it actually contains multitudes: the code in which its algorithms are written, the data on which it feeds, the people who process that data, and the person at the end of the chain who presses the start button.

This will become an issue. While AI is often spoken about in more academic contexts, the influence of creative AI in music will be significant and—importantly—not just about computers writing songs. AI is already spurring on new forms of audio synthesis, mastering tracks, and creating previously impossible instruments and voice replicas. It has the potential to contribute to any aspect of the music-making process where a pattern can be abstracted and applied.

So the question is: who owns a song written by AI, an instrument created by AI, or an AI voice?

And more importantly, who ought to?

Do androids dream of ownership?

The music industry isn’t the first to confront this question. In fact, in Australia, a 2007 court case about phonebooks set the precedent for how algorithmically generated content is copyrighted (yes, you read that right, phonebooks).
A phone booth and phone book. In Australia, a 2007 court case about phonebooks set the precedent for how algorithmically generated content is copyrighted. | © Unsplash

The Yellow Pages found that one of their competitors was lifting results directly from their database, and promptly took them to court for copyright infringement. After some deliberation, however, the court came back with an unexpected result: because the phone book was assembled by an algorithm, with minimal human effort, it wasn’t eligible for copyright protection. The case was dismissed.

Though this legal precedent was set for a fairly simple algorithm, the implications for AI-generated works are significant. AI systems, like the phonebook algorithm, automatically find patterns in the data they are given and output a result. While there is plenty of human effort around the process itself, the actual generation is autonomous. Given this precedent, AI music in Australia is likely to be devoid of copyright protection.

Similar debates are being had worldwide. In the US, arguably the financial centre of today’s music world, AI songs are at risk of being deemed “derivative works” due to their dependence on the data of others. These derivative works are void of copyright, and can even constitute copyright infringement against the original rights holders. If a creator were willing to argue that using this data is “fair use”, the outcome of such a challenge would depend on the leanings of an individual judge – a degree of uncertainty that would make any serious musician uneasy.

So it appears that early legal decisions have set us up for a world in which AI-created content is un-ownable. But this leaves creators with no financial incentive to experiment or innovate with creative uses of AI, as whatever they make will be free of copyright and impossible to monetise.

A group called The Artificial Inventor Project is aiming to change this. They’ve built a model named DABUS, which is capable of strange but novel inventions. By attempting to patent these inventions in countries around the world, they hope to show that an AI-generated work can be owned. In 2021 they succeeded: the South African patent office became the first in the world to grant a patent naming an AI as its inventor. As a result, DABUS’s creators are entitled to all revenue from the patent’s use.

But while earning money from an invention certainly seems like a positive step, we might question whether the creators of DABUS deserve 100% of the proceeds. From what the Artificial Inventor Project has revealed publicly about their algorithm, it appears that—like most AI systems—DABUS has trained on a bunch of data created by other people. AI without training data is just an empty shell, a bunch of code that knows how to learn patterns but has nothing to learn from. DABUS isn’t just a singular creation, but rather (like all AI) it owes its very existence to the data it trains on.

So while stripping any computer-generated work of ownership stifles innovation, giving all the rights to the person who runs the code ignores where an AI’s intelligence comes from. It doesn’t help that many modern large AI models operate on unlicensed data (one of the biggest AI music models, Jukebox, is trained on 1.2 million songs ripped from the internet without permission).

There needs to be a middle way, one which supports not just those wielding the AI, but also those whose data has been used to train it.

When it comes to music, it’s uh… the vibe of the thing.

Even if this issue of data provenance were overcome, creators experimenting with AI music soon run into an even bigger issue: the legal machinery of the music industry.

Most modern AI algorithms are statistical models that minimise ‘loss’ over a dataset. In practical terms, that means the AI is trying to emulate its dataset as closely as possible. In the music domain, you can hear this at play in a song generated by OpenAI’s Jukebox, which was partially trained on Prince’s back catalogue – its similarity to Prince, while somewhat garbled, is clear and uncanny.
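To make that concrete, here’s a deliberately tiny sketch of what minimising loss over a dataset looks like in code. It’s a toy linear model fit with gradient descent – nothing like the scale or architecture of a real music model such as Jukebox, but the principle is the same: nudge the model’s parameters until its output matches the training data as closely as possible.

```python
# A toy illustration of "minimising loss over a dataset".
# Illustrative only: a linear model fit with gradient descent,
# not how a large music model is actually built.
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": inputs x and targets y the model should emulate.
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)

w, b = 0.0, 0.0  # model parameters, starting from scratch
lr = 0.1         # learning rate

for step in range(500):
    pred = w * x + b                 # the model's attempt to emulate the data
    loss = np.mean((pred - y) ** 2)  # mean squared error: the 'loss'
    # Gradients of the loss with respect to each parameter.
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w                 # nudge parameters to reduce the loss
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
# The closer the loss gets to zero, the more faithfully the model
# reproduces the patterns in its training data – which is exactly why
# a model trained on Prince can end up sounding uncannily like Prince.
```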

Even though AI outputs will typically contain no actual fragment of the original data, any likeness to copyrighted works could be the foundation of a devastating infringement case – and the music industry offers plenty of precedent.

In 2015, the Marvin Gaye estate successfully sued Robin Thicke & Pharrell Williams over their song Blurred Lines, alleging it infringed Gaye’s Got To Give It Up, despite the fact that the songs didn’t share any melodies, chords or samples. Instead, the case focussed on the ‘feel’ of the song, which was similar enough for the court to award the Gaye estate almost $5 million. And just this year, Olivia Rodrigo doled out songwriting credits to artists who influenced her latest album, under public pressure over her songs sharing a similar ‘vibe’.

The same holds true for the style and distinctive traits of individual artists. In 1988, Bette Midler successfully sued the Ford Motor Co. after it hired one of Midler’s backing singers to impersonate her for a car ad. Ford had permission to use the song, and Midler’s name and picture weren’t used, so the case was effectively a test of whether the likeness of Midler’s voice was itself legally protectable. It turns out the answer is yes.

So, given the current state of the law, if your AI has trained on a particular artist’s songs, style or voice, its output will likely not be your property. In one sense, this is a good thing – no one wants a world where anyone can appropriate and exploit an artist’s hard work for free. But on the other hand, it doesn’t give artists emerging in this space a clear path forward. Artists need a fair legal structure by which they can experiment and create with AI without the fear of being sued into oblivion.

The future of music & intellectual property

So how do we solve this problem? I believe we need a way to license data easily, and compensate fairly with royalties — incentivising and protecting both artists creating original music, and those innovating with new tech.
Berlin-based musician Holly Herndon has produced an AI emulation of her voice, Holly+. | © Wikimedia Commons

Berlin-based electronic musician Holly Herndon has already set the tone for how this might work. In a collaboration with Never Before Heard Sounds, she produced an AI emulation of her voice, Holly+. Anyone can upload their own voice, have it transformed into Holly’s, and use the result for free. For commercial uses of Holly+, however, a DAO (decentralised autonomous organisation) acts as a community gatekeeper and beneficiary for Holly’s voice. We could use blockchain systems like this to guarantee the provenance of data, and to ensure money flows back to the individual contributors to a dataset.

In fact, something like this already exists at the government level in Australia – a statutory licensing scheme allows the use of copyrighted materials without explicit permission in certain cases, so long as rightsholders are fairly remunerated. It’s the “ask for forgiveness, not for permission” model of fair use. Used primarily by universities and schools that want to distribute an excerpt of a book to students, it could be a powerful tool if adapted for data rights. Such a scheme would allow technologists to use music freely, but compensate the musicians whose data was used through micro-royalties, paid via the standard royalty streams.
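As a sketch of how the accounting under such a scheme might work, here’s a hypothetical pro-rata split of a royalty pool, apportioned by how much of each rightsholder’s music appeared in a training dataset. Every name and number is invented for illustration; no existing licensing body works this way.

```python
# Hypothetical micro-royalty split for a statutory data-licensing scheme.
# All figures are invented; this is an illustration, not a real system.

def split_royalties(pool_cents: int, usage: dict[str, int]) -> dict[str, int]:
    """Split a royalty pool pro rata by how many of each rightsholder's
    works appeared in the training data, keeping everything in integer
    cents so no money is created or lost to rounding."""
    total = sum(usage.values())
    shares = {
        artist: pool_cents * count // total
        for artist, count in usage.items()
    }
    # Hand any rounding remainder, cent by cent, to the biggest contributors.
    remainder = pool_cents - sum(shares.values())
    for artist, _ in sorted(usage.items(), key=lambda kv: -kv[1]):
        if remainder == 0:
            break
        shares[artist] += 1
        remainder -= 1
    return shares

# e.g. an AI-generated track earns $10.00, and the model behind it was
# trained on 1,000 songs drawn from three (fictional) catalogues:
print(split_royalties(1000, {"artist_a": 700, "artist_b": 250, "artist_c": 50}))
# -> {'artist_a': 700, 'artist_b': 250, 'artist_c': 50}  (amounts in cents)
```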

Intellectual property rights are important, but if over-extended can strangle creativity. Technology is important, but if we don’t anticipate its consequences and give them due consideration… well, we’ve all seen what happens there. If we can walk a middle path, we can give a new generation of AI creators and musicians certainty about the value of their works, and give rights holders a new income stream. But we need to start yesterday – the technology is already here. And while it’s easy to think of this sort of AI as a magical intelligence in the cloud, in reality it’s humans and artists that are doing the legwork. They deserve to get paid for it.

There’s lots of potential for AI in the music industry, and we ought not to shut that down. But a world in which technologists can steal music and generate imitations without recourse is an inequitable one.