
A New Awakening: The Dangers of A.I. In Music

  • rbedwell3
  • Aug 29
  • 4 min read

I have had a bit of a wake-up call recently. I found myself participating in the process of ‘creating’ music, based around a set of lyrics, which were fed into A.I. software. Within twenty seconds or so, a ping alerted us that the process had finished and the completed product was ready for consumption.

Before it started, however, a series of choices was made within a set of narrow parameters: the genre (folk, rock, pop, EDM, etc.) and whether the vocalist would be male or female.

But, without this ever being mentioned to the participants in the experiment, the most important elements were conspicuous by their absence and were not offered as options: the melodic line that the ‘vocalist’ would be following; the chord progression that made up the various parts of the song; the overall structure of verse, chorus, bridge, solo, intro, outro and all the other sections that make up the musical form; the choice of instruments, whether a four-piece band, a trio, soloists, strings or accompanists. Not important.

To a musician, this omission by the software developers was obvious from the outset, but to a non-musician it may not have been, because if one is not aware of the elements that make up a creative process, how can one know when they are absent?

The result? An initial surprise at what seemed to be a well-produced ‘sound’, much akin to what the Phil Spector ‘wall of sound’ must have seemed like to its first audience. Polished, as you would expect from an artificially created sound, but with no warmth in the voice, no empathy with the words, no connection between one instrument and the rest. No notes held over chord changes to create a dissonance to be resolved, tension and release affecting the listener. No little drum fills referring back to a previous line in the lyrics. No unintentional sounds that were initially a mistake but actually enhanced the music. Not even a remote possibility of that.

Then the experiment was repeated, this time with another set of words, and the musical genre was altered according to the whim of the participants. The result was a different song, but the similarities were striking. This is when it became obvious to me: the ‘music’ generated by the software is built around templates, just as the computer-‘assisted’ pop music of the last decade has been. If the software were building the sound around a chosen series of chords, those chords would then be open to alteration. If the melodic line the vocalist was following were connected to the chords and the key the music was in, it too could be altered to the participants’ choice: sing the third of this chord instead of the fifth, and so on. But no.

So, if templates are what the software developers are using for the musical structure, who is deciding which elements are included or excluded in such templates? Who are the people who have become an A.I. police of musical ideas?

And that is when the nausea began. An intriguing experiment had turned into a process of internal questioning. What would be the point of participating in such an event? To have something to listen to on a quiet evening at home or whilst driving? I doubt it. To learn how to compose music to words that have been written? Obviously not, because without the knowledge to understand the melodic line and its relation to the chord progression in the first place, no understanding would be possible.

The uneasiness I was feeling continued, and so I felt compelled to understand why. What happens to a musician when he composes a song, or any other piece of music for that matter? Reduced to the most basic philosophical elements, the intellectual part of the composer is intimately connected to the intuitive side. As the player proceeds with the compositional process, certain internal notifications can be felt, so subtle yet so affecting: a lump in the throat, the hair standing up on the neck, a smile being felt, a tingling on the skin, a sensation in the stomach. These are the moments in music that tell the composer he is on the right track, or the wrong one if they are absent. These are the ingredients of music that are impossible to explain to non-musicians, let alone to replicate by machine. Is that the source of the nausea, listening to sound without these hidden elements present? Like eating a meal without any of the underlying nutrients that the ingredients contain. Perhaps.

How long before the public accepts the sound of A.I.-created music, when the songs on the radio and the music accompanying their preferred entertainments are routinely created by an artificial thought process? Will they even be aware that the sounds they are ‘listening’ to have been carefully constructed by unseen guardians, defining the parameters by which the aspiring musician is allowed to express themselves? I doubt it.

NB: I had used images created by A.I. in previous posts, and I apologise to those who objected, as I am no artist and wanted something to accompany my writing. This is no longer the case, and I will be using genuine ‘art’ and mentioning the artist too. Today’s art is da Vinci’s ‘Fight between a lion and a dragon’. It’s funny that the truth only really becomes obvious when it is in one’s own back garden, but there we are.

