An unstoppable force? AI music’s ocean of content

Man on beach, ignoring the wave | Photo credit: Tyler Milligan / Unsplash

The possibility of machines composing original music has developed from a fringe idea into a mainstream topic. Headlines herald the progress made in this area and are full of technological promise, but what does it mean for the musicians creating new work and for the audiences looking for music that means something to them?

Jochen Gutsch

Music as an art form lends itself well to the involvement of artificial intelligence. Because numeric values can be assigned to most aspects that make up a composition, music can be broken down and translated into a mathematical ‘language’ that computers can not only understand but also speak. In reverse, this means that once machines have learned a number of rules, they can follow this logic and calculate possible new compositions from scratch, whether it’s a three-chord pop song or a three-act opera.
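To make the idea tangible, here is a deliberately simple sketch in Python. It is my own toy illustration rather than a description of any real system: pitches become MIDI numbers, and a handful of hand-written rules ‘calculate’ a short melody over a three-chord progression.

```python
# A minimal, illustrative sketch (not any particular product's method):
# music reduced to numbers, plus a few hand-written rules used to
# "calculate" a new melody over a three-chord pop progression.
import random

random.seed(7)  # reproducible output for the example

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]                # C major scale as MIDI pitches
PROGRESSION = [[60, 64, 67], [65, 69, 72], [67, 71, 74]]  # C, F and G triads

def melody_for(chord, beats=4):
    """Pick one note per beat, preferring chord tones (a crude hand-made 'rule')."""
    notes = []
    for _ in range(beats):
        if random.random() < 0.7:        # rule: mostly land on chord tones
            notes.append(random.choice(chord))
        else:                            # rule: occasional passing tone from the scale
            notes.append(random.choice(C_MAJOR))
    return notes

song = [note for chord in PROGRESSION for note in melody_for(chord)]
print(song)  # a list of MIDI numbers any synthesizer could play back
```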

If we define music as a collection of sounds with varying pitches, dynamics and textures that are assembled in a rhythmic and harmonic structure, the composer’s role can be seen as that of an organiser or engineer who arranges the ingredients in reference to each other, decides on their characteristics, assigns voices and adds instructions for the performers. Guided by taste, ambition, demand, target audience or a client’s brief, the composer makes a series of decisions utilising an extensive skill set that is creative, or technical, or both — and in most cases the more technical steps in the process are already assisted by sophisticated computer software.

For that reason, it’s to be expected that AI will play an increasingly important role in assisting composers with some of the more technical tasks involved in creating music. But if machines are tasked with creating new work from scratch, and if the results are meant to sound like music to human ears, they will need clear frameworks within which to operate; otherwise their compositions will be perceived as a collection of random noises. These frameworks can either be set manually or be learned from a large pool of existing compositions. At this point, the importance of human guidance, references, curation, validation and other interventions becomes clear. It is crucial that humans lay down very clear instructions and goals as a starting point, even if machines are then set to move along their generative paths independently. This evokes a Catch-22: if we give machines the task of creating original music, but we don’t allow them to step outside our own set of parameters, how can they truly innovate?
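As a miniature of what such a learned framework might look like (an assumption for the sake of argument, not how any particular system works), the sketch below lets a first-order Markov chain pick up note-to-note transitions from a tiny ‘corpus’. Whatever it generates can never step outside the moves it has already seen, which is precisely the constraint the Catch-22 describes.

```python
# Illustrative miniature of a "learned framework": a first-order Markov chain
# learns which notes tend to follow which from a small corpus, then only ever
# generates transitions it has already observed -- it cannot step outside them.
import random
from collections import defaultdict

random.seed(1)

corpus = ["C", "D", "E", "C", "E", "G", "E", "D", "C"]   # a toy "pool of existing music"

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)                     # learn what may follow what

def generate(start="C", length=8):
    """Walk the learned transitions; the output stays inside the corpus's habits."""
    note, phrase = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])          # never an unseen transition
        phrase.append(note)
    return phrase

print(generate())  # e.g. ['C', 'D', 'E', 'G', 'E', 'C', 'E', 'D']
```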

Thinking outside the box


Artists tend to push the boundaries of their art forms, and challenging the norm is often considered a creative act, with some experiments more successful than others. At the same time, straying too far from conventions means audiences lose interest. Finding the sweet spot where a piece of music challenges listeners while keeping them engaged and attentive is not easy, and the key factors needed to get the balance right include artistic vision, creative intuition and historical context.

Two of the best-known examples of a composer breaking conventions are works by John Cage. 4’33” instructs the musicians not to play any notes at all, while an interpretation of As Slow As Possible is being performed by an automated church organ in a small German town over a duration of 639 years. With these works, Cage broke precisely the kinds of rules AI would need in order to create music successfully, and he was lauded far and wide for his innovative approach.

There are countless examples of such radical works becoming famous, but the same is true on a more humble scale. A sentence often heard in collaborative creative processes is: “Technically it doesn’t make sense, but somehow it feels right … so, let’s just go with this.” These gut-feeling moments are where the magic happens, and they have the capacity to separate the ideas that draw us in emotionally from those that leave us untouched. Often we are unable to analyse why one notion ‘worked’ while the other was abandoned: we feel it, but we cannot explain it to others, let alone to machines.

Screenshot from the Bloom app: the dots represent notes the app chose to play. Bloom, which can generate its own music, was released by Brian Eno & Peter Chilvers in 2008. | © Bloom, Brian Eno & Peter Chilvers

Where the magic doesn’t need to happen


With intuition out of the game and a human-made framework firmly in place, it stands to reason that the forte of AI-made music will lie neither in wildly emotive nor in eccentrically innovative creations. But music can perform a wide range of functions, and not all of them require inventive, touching or inspiring compositions. For instance, music can serve as a background soundtrack for another activity, or it can simply be entertaining without any need for connection on a deeper intellectual, social or emotional level. In this field AI can thrive, especially where authorship is explicitly unwanted in order to circumvent copyright issues and royalty payments. Provided with a package of well-defined goals, AI can be tasked with creating an infinite body of new work that is fully functional for the specific role it was designed for. In fact, this is already happening.

Ambient pioneer Brian Eno has worked on ‘generative music’ for decades. In recent years his team has developed several apps that can either be played by humans or be left alone; in the latter case AI takes over and creates an endless stream of ever-changing ambient music. Thanks to a set of well-selected parameters, the outcomes always sound as though they were ‘curated’ by Eno. This music has no intention of surprising or disturbing the listener: it follows the tradition of Erik Satie, who famously coined the term “furniture music” as far back as 1917, stating that his compositions were “intended to be heard, but not listened to.”
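The sketch below is not Bloom’s actual algorithm, merely my guess at the general shape of such a system: a small, carefully chosen set of parameters acts as the ‘curation’, and the program can keep drawing new, ever-changing material from it indefinitely.

```python
# Hypothetical illustration (not Bloom's real code): a small parameter set acts
# as the curated "taste", and the generator draws an open-ended ambient stream
# from it -- slow, sparse events within a fixed palette.
import random

PARAMS = {
    "scale": [60, 62, 65, 67, 69],   # a pentatonic-ish palette (hypothetical choice)
    "min_gap": 2.0,                  # minimum seconds of silence between events
    "max_gap": 6.0,                  # maximum gap: sparse, unhurried pacing
    "octave_drift": [0, 12, -12],    # occasional register changes
}

def ambient_stream(params, events=10):
    """Yield (pitch, wait_seconds) pairs; capped here only for the demo printout."""
    for _ in range(events):
        pitch = random.choice(params["scale"]) + random.choice(params["octave_drift"])
        wait = random.uniform(params["min_gap"], params["max_gap"])
        yield pitch, round(wait, 1)

for pitch, wait in ambient_stream(PARAMS):
    print(f"play MIDI note {pitch}, then rest {wait}s")
```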

With recordings predominantly published and consumed online these days, listeners already have access to an incomprehensible amount of music. This is but one of the many amazing possibilities the internet has given us. However, the online world already provides more stimuli than many of us can handle. The addition of more and more computer-generated material will make this ocean of content swell even more, thereby making it even harder to navigate. Whether the prospect of an endlessly self-multiplying pool of music is a desirable scenario is questionable. As the volume of content becomes boundless, we run the risk that it is rendered increasingly meaningless, potentially resulting in some of us choosing to disengage completely from digital archives, platforms and services.

Hinterlandt performing music composed by Jochen Gutsch in Sydney: while the author (on piano) has used digital means to compose and produce music for years, he prefers to perform in a purely acoustic setting, with an ensemble of classically trained musicians. | © Katelyn-Jane Dunn / Courtesy: City Recital Hall

Where the magic still happens


Music is first and foremost a language for us humans to communicate with each other. We invented and developed it, and we filled it with expressive nuances, emotional subtleties and a multitude of cultural, social, historic and political codes and contexts. The subject matter conveyed through music is so rich that it is often credited with the ability to say things words cannot express.

From the very beginning we have used tools to assist us with the creation and performance of music: voices and musical instruments, notation and printing, recording and reproduction, synthesizers and computers, and now AI and neural networks. But while these tools continue to get more refined, the place of music in our lives has not fundamentally changed, and the very human pursuit at the heart of the matter has not been replaced.

As the excitement about progress in the field of AI-created music grows, so does the fear that the balance may shift, ultimately leaving humans on the sidelines while machines gain complete creative ownership. At this stage I remain hopeful that humans and machines will not be pitted against each other to compete for attention in the precious world of music. Ultimately, audiences are the curators of their individual tastes and, while algorithms have been rapid-firing suggestions and recommendations at us for years, at the end of the day we make our own choices about the music we find truly meaningful.
