Sound and Music Synthesis
What Role Does AI Play in the Creation of Music Today?

This image was generated by DALL-E, an AI system trained to create images from a description. Drawing from the entire text of this article, it portrays a future scenario where AI and music coexist harmoniously. | © Patrick Hartono; created with DALL-E

Computers have reshaped the paradigm of music creation, blurring the lines between music, science, and technology further than ever before. From early experiments like the Illiac Suite in the 1950s to today's advances in Artificial Intelligence (AI), this technological transformation has fundamentally altered the musical landscape, raising important questions about the role of AI in music creation. This article explores these questions, offering perspectives on AI's impact on the music industry and on how society has responded to this transformation.

A Brief Introduction to AI Applications in Music

AI, a branch of computer science, centres on computers' capacity to “learn” from data, an approach known as Machine Learning (ML). ML uses statistical techniques to teach computers to extract patterns from data. As a result, AI-based applications can grasp the subtleties of musical patterns and harmonies well enough to produce music that rivals that of humans.
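To make the idea of “learning” musical patterns from data concrete, the toy sketch below (in Python, and not tied to any product mentioned in this article) counts note-to-note transitions in a few example melodies and then samples a new melody from those counts. The note names and melodies are invented purely for illustration.

```python
import random
from collections import defaultdict, Counter

# Toy training data: melodies as lists of note names (invented for illustration).
melodies = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
    ["G", "E", "C", "D", "E", "D", "C"],
]

# "Learning": count how often each note follows another.
transitions = defaultdict(Counter)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current][nxt] += 1

def generate(start="C", length=8):
    """Generate a new melody by sampling from the learned transition counts."""
    melody = [start]
    for _ in range(length - 1):
        counts = transitions[melody[-1]]
        if not counts:                      # dead end: fall back to the start note
            melody.append(start)
            continue
        notes, weights = zip(*counts.items())
        melody.append(random.choices(notes, weights=weights)[0])
    return melody

print(generate())   # e.g. ['C', 'D', 'E', 'F', 'G', 'F', 'E', 'D']
```

Real systems use far larger datasets and far more expressive models, but the principle is the same: the rules are inferred from examples rather than programmed by hand.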

Magenta, a project developed by Google, is a noteworthy example in this domain. Leveraging ML, Magenta offers a range of tools that let artists engage with algorithms to create fresh, innovative music.

In 2018, researchers at Goldsmiths, University of London, launched the MIMIC project (Musically Intelligent Machines Interacting Creatively). It expands on Magenta by providing not only the ability to create music but also the means to interact creatively with human musicians. MIMIC adds the exciting element of collaboration between humans and machines, opening new possibilities for musical exploration and expression.

How far have advancements in AI and its application in music come?

AI in music production encompasses not only musical composition but also musical analysis and the production process itself.

Data Mapping
Wekinator is ML-based software used mainly for real-time data mapping in music. It can detect genres, instruments, rhythms, and tones through a microphone, integrating with the computer's audio and visual components. Wekinator receives inputs from sources such as webcams, audio signals, and sensors connected to an Arduino, processes them through computational models, and generates outputs that can drive experimental music environments such as Max/MSP and SuperCollider.
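Wekinator itself is a standalone application, but the kind of mapping it performs can be sketched in a few lines of Python: a small regression model learns to map input features to synthesis parameters, which are then sent over OSC to an environment such as SuperCollider or Max/MSP. This is a rough illustration only, assuming the scikit-learn and python-osc packages are installed; the feature values, parameter ranges, OSC address, and port are placeholders, not Wekinator's actual internals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from pythonosc.udp_client import SimpleUDPClient

# Training examples recorded by demonstration (placeholder values):
# each row of X is an input feature vector (e.g. a hand position from a webcam
# or loudness/brightness from audio analysis); each row of y holds the desired
# synthesis parameters, here [frequency in Hz, amplitude 0-1].
X = np.array([[0.0, 0.1], [0.5, 0.4], [1.0, 0.9], [0.2, 0.8]])
y = np.array([[200.0, 0.1], [600.0, 0.5], [1200.0, 0.9], [400.0, 0.7]])

# Train the mapping, analogous to pressing "Record" and then "Train" in Wekinator.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)

# Send mapped parameters to a synth listening for OSC messages
# (e.g. SuperCollider's default port 57120 on the local machine).
client = SimpleUDPClient("127.0.0.1", 57120)

def map_and_send(features):
    freq, amp = model.predict([features])[0]
    client.send_message("/synth/params", [float(freq), float(amp)])

map_and_send([0.3, 0.6])  # a new live input produces a predicted frequency and amplitude
```

The appeal of this design is that the mapping is learned from a handful of demonstrated examples rather than written out as explicit rules, which is what makes tools like Wekinator accessible to performers without programming backgrounds.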

Generative AI
Generative AI draws on large datasets to create new content. Meta's AudioCraft, for example, uses the EnCodec neural audio codec to produce high-quality sound and to compress audio for faster processing. Another example is MuseNet, which leverages GPT-2 technology to create musical pieces of up to four minutes using up to ten instruments. It learns harmony, rhythm, and style from MIDI files, identifying musical patterns without needing explicit programming.
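MuseNet's model and training data are far larger than anything that fits in a few lines, but the underlying idea, next-event prediction over a tokenised stream of musical events, can be illustrated with a deliberately tiny sketch in Python/PyTorch. The event vocabulary and training sequence below are invented for illustration; real systems derive millions of such tokens from MIDI files.

```python
import torch
import torch.nn as nn

# Toy vocabulary of musical "events" (in practice these come from parsing MIDI files).
vocab = ["NOTE_C", "NOTE_D", "NOTE_E", "NOTE_G", "REST"]
stoi = {tok: i for i, tok in enumerate(vocab)}

# A toy training sequence, repeated to give the model something to learn from.
sequence = ["NOTE_C", "NOTE_E", "NOTE_G", "REST", "NOTE_C", "NOTE_D", "NOTE_E", "REST"] * 20
ids = torch.tensor([stoi[t] for t in sequence])
inputs, targets = ids[:-1], ids[1:]  # predict each next event from the previous one

# A deliberately tiny model: an embedding plus a linear layer.
# MuseNet uses a large Transformer, but the training objective is the same.
class TinyEventModel(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        return self.head(self.embed(x))  # logits over the next event

model = TinyEventModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):  # no musical rules are coded explicitly; patterns come from the data
    logits = model(inputs)
    loss = loss_fn(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Ask the trained model for the most likely event to follow "NOTE_C".
next_id = model(torch.tensor([stoi["NOTE_C"]])).argmax(dim=-1).item()
print(vocab[next_id])  # a continuation learned from the data, e.g. "NOTE_E"
```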

Other Applications
AI has made its mark on many other applications in the music industry, from automated mixing and mastering to emotional analysis and music recommendation systems. In music production, iZotope uses AI to generate custom effects settings based on a given track's sound palette. It can suggest initial mixing levels for vocals, reverb, and mastering effects, speeding up the process of reaching the quality a music creator is after.

Brain.fm uses AI to produce compositions that facilitate specific mental states, such as relaxation and concentration. Its goal is to bring music into mental health care, demonstrating AI's potential in the field of music therapy. AI's ability to respond to emotional states can help individuals experience the positive impact of music in many aspects of their lives.

Aiva Technologies enables music composers and creators to generate variations of their own work, facilitating a “collaboration” between human creativity and generative AI technology. Such collaboration can result in more diverse and innovative music, opening up new possibilities.

Shutterstock's Amper Music is another example of how AI can be used to make music. The platform lets content creators quickly and easily adjust a composition's style, overall ambience, and duration. With the help of AI, people with limited musical knowledge can create professional-sounding music, lowering the barriers to music creation.

Should we be concerned about the presence of AI in music?

As AI brings many changes to the music world, unanswered questions remain, for instance: is human creativity under threat? Here we can draw on Theodor Adorno's views on the culture industry, specifically in the context of music. Adorno saw popular music as a homogeneous cultural product, one that risks dulling the critical faculties of listeners and the creativity of artists. Most AI applications can only produce music in commercial genres that follow existing trends.

For some, this raises concerns about the loss of the human element in music creation. Yet the possibility of making art with genuine integrity and creativity remains.

If AI is limited to creating music in commercial genres, then the challenge lies in going beyond what AI can reach and creating something truly new.

This dynamic can be seen as a turning point, the beginning of a new era in which every musician, songwriter, and composer is tested: are you truly a creator, a genuinely creative individual or artist? If so, you should be able to surpass what machines are capable of in your own work.

It is essential to remember that AI was ultimately created by humans and works by mimicking other humans. The question, then, is whether it is time to entrust music creation fully to machines. Should we resist this? Or should we try to create in harmony with AI? These questions test not only musicians but the music industry as a whole, and they will shape how society views art in the future.

"This is a time where those who merely follow the mainstream must begin to introspect."
 
