The future of music

In this essay, I will discuss technology and music, and more specifically the question of whether the constant improvement of technology will reduce the need for human songwriters. I will focus on the history of songwriting with technology, as well as on its potential future.

The history of music is long, and nothing is known about its origin (Gray, 2005, 7). It has slowly evolved over thousands of years. The way we produce music has also evolved – from using hands and stones as instruments, to playing acoustic instruments, to the present use of computers that can already create music by themselves. The creation of artificial intelligence gives us the opportunity to create music more easily than ever before, without the need to think of the melody or the lyrics.

According to the Oxford English Dictionary, artificial intelligence is ‘the capacity of computers or other machines to exhibit or simulate intelligent behaviour’. The first fully developed piece of music composed by artificial intelligence was the Illiac Suite for String Quartet in 1957 (Holmes, 2020, 398). Although this piece was composed by a computer, it contained the sounds of instruments played by humans. More than fifty years after the Illiac Suite, Gaetan Hadjeres and François Pachet created a deep learning-powered program called DeepBach. This program can take a composer’s previous works, analyse the patterns found in them, and produce new music that could have been written by the original composer. It did so with Bach, hence the name DeepBach. Although this artificial intelligence still has some faults – for example, it is limited to a single key – it is still very promising for the future of artificial music (Vincent, 2016).
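The pattern-based approach described above can be illustrated with a toy sketch: a first-order Markov chain that learns which note tends to follow which in a training melody, then samples new sequences from those learned transitions. This is a deliberate simplification – DeepBach itself uses deep neural networks, and the note sequence below is invented purely for illustration:

```python
import random

def build_transitions(notes):
    """Count which notes follow each note in a training melody."""
    transitions = {}
    for current, nxt in zip(notes, notes[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the transition table to sample a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: no known successor
            break
        melody.append(rng.choice(options))
    return melody

# A short invented training melody (not real Bach).
training = ["C", "D", "E", "C", "E", "D", "C", "G", "E", "C"]
table = build_transitions(training)
new_melody = generate(table, "C", 8)
print(new_melody)
```

Every note in the generated melody follows another note exactly as it did somewhere in the training data – which is precisely the essay’s point: such a system recombines existing patterns rather than inventing new ones.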

Although the programs mentioned in the previous paragraph can create new music, they still rely on previous works created by humans. DeepBach analyses patterns already present in earlier music, and it therefore lacks the spark of originality and unpredictability that humans have. As Bolter (1984, 2) stated, ‘there is an enormous gap between what computers were built to do (mathematics and symbolic logic) and the wide range of skills that humans possess. Programmers must still work close to their machine’s natural talents’. Technically, because the program reuses previous patterns, it is not creating new songs at all. Human unpredictability is missing, and in order for it to be incorporated, a human needs to intervene in the AI’s creation, making it the human’s creation. The phrase ‘machine’s natural talents’ in the quote is very interesting. It insinuates that technology is natural and talented, almost as if technology came from nature. We could argue that it has, because AI is a human creation. The words also imply that the machine is a being in itself; however, it is still dependent on other people.

Humans are still essential to new songs created by technology, because in order for the AI to create new music, it must be based on works produced by people. Humans need to train the AI to do this, but the question is: can humans teach AI to be original? Right now, AI can only analyse previous patterns; it does not have a mind of its own. Technically, the songs made by AI are still made by humans, because humans created the program. Going back to the definition of artificial intelligence in the third paragraph, the OED states that AI is the capacity of a machine to simulate intelligent behaviour. This suggests that AI is an instrument of the human mind that helps with the creation of music, just like a musical instrument. Music from an acoustic guitar is produced by fingers touching the strings, just as artificial music is made by fingers touching the computer’s keyboard.

The question is – will the human instrument we call the computer ever gain a mind of its own? If so, this would mean a decrease in the job positions currently occupied by songwriters. Companies, instead of hiring people, would simply buy a program and make music on their own. In fact, it is already happening. An AI called Aiva has recently become the first AI to gain worldwide status as a composer. The music created by this program is already being used as a soundtrack in films, games and advertisements (Kaleagasi, 2017). Right now, the program still needs human input and is very far from becoming completely independent. Like DeepBach, this AI analyses the works of human composers and looks for certain patterns. Its output is already quite indistinguishable from the works of human writers. It saves companies large amounts of money and time, but it is potentially very bad for human composers. It is almost as if the people building the AI want to create a conscious mind to replace the human mind and the labour of human workers.

It is likely that AI will eventually be able to generate song lyrics at the same convincing level it already achieves with notes for classical music. The possibility of AI songwriting infiltrating the music charts is very real, given how intertwined AI already is with the mainstream media. This, however, raises the question of whether AI could ever replace the human originality that goes into songwriting. There are songs that quite literally changed the world, like Imagine by John Lennon or I Will Always Love You, written by Dolly Parton and famously performed by Whitney Houston, and it is difficult to imagine them automatically generated by a computer. Would the power of these songs be perceived differently? Possibly. The mere fact that the songs were written by a computer in a matter of minutes, and not by a human, would perhaps cheapen the experience people have when they listen to them.

The time human labour requires and the time a computer needs to complete the same labour differ significantly: current composers of classical music take months to finish a piece, whereas a computer takes only a few minutes. Human writers put blood and tears into their writing, and perhaps that is what moves us – the fact that a songwriter experienced the emotions we can hear and feel in the text. A computer, unlike a human, does not have a mind of its own. It can produce a song that seems the same within minutes, but the quickness cheapens the experience. Whether songs make us laugh or cry, we expect them to make us feel emotion in some way. Songs communicate emotion – can a computer, an emotionless device, do that? If AI replaced human songwriters completely, we might lose touch with the emotions found in songs written by humans. Some songs are written like poems; they carry thoughts of their own, as well as human feelings and experiences. One could argue that in order to produce that kind of work, it has to be lived through, otherwise it does not have the same impact. The question then is – will it ever be possible for a computer to generate not only human emotions, but also the unique experiences and unpredictable thoughts humans have?

Hatsune Miku – a virtual idol made entirely by computer, including her voice – frequently sells out her concerts. Even though she is not human, she has found worldwide success. At her concerts, she is projected on stage and performs together with a live band. Despite not being a real human, people love her. Why is that? Most likely because her fans from all over the world heavily contributed to her creation. Her audiences have co-written the songs she performs on stage (Kloet & Kooijman, 2016, 126). People had the chance to choose to make her famous. There are millions of videos on YouTube featuring songs that are not actually hers but were made by people for the virtual idol, as well as many fan-made animations of her dance routines. It could be said that she is a piece of art created by humans, who put their emotions, experiences and thoughts into her, as they would if they were writing a song or a poem; she is not just automatically generated. She carries the humanity that others put into her, and that is why she has fans. Her songs communicate with her audience and pass along the emotions and thoughts that people had in mind when they created her and her songs.

In conclusion, if future technology could completely replace human writers, it would cause a big problem: many more people would become unemployed. Even if AI does not replace human writers completely, it will have a huge effect on the future of the music industry, as it already does now. Many human writers would have to either learn to work with AI and combine their skills with the computer, or go into a different line of work. We could lose touch with the humanity that goes into creating art. Perhaps songs written by humans will eventually become a rarity, and the mainstream media will only carry songs generated by computer. We expect manual labour to be the first thing replaced by technology, which is probably true, but the replacement of humanity could eventually come too. Creativity could become just manual labour done by AI. Or perhaps human songs would become much more valuable than they are now: people would start appreciating human creativity and would be willing to pay more to experience it. One of the reasons we love art is that it is emotionally moving, but if it were created by AI, it just would not have the same effect.


Bolter, J. (1984). Artificial intelligence. Daedalus, 113(3), 1-18. Retrieved December 11, 2020.

De Kloet, J., & Kooijman, J. (2016). Karaoke Americanism Gangnam Style: K-pop, Wonder Girls, and the Asian Unpopular. In Lüthe, M., & Pöhlmann, S. (Eds.), Unpopular Culture (pp. 113-128). Amsterdam: Amsterdam University Press.

Gray, C. (2009). The history of music (Routledge history of civilization series).

Holmes, T. (2020). Electronic and experimental music: Technology, music, and culture (6th ed.).

Kaleagasi, B. (2017). A new AI can write music as well as a human composer.

Oxford University Press. (n.d.). Artificial intelligence. Oxford English Dictionary. Retrieved December 11, 2020.

Vincent, J. (2016). Can you tell the difference between Bach and RoboBach?
