Time to change the way we view music and the arts

“Well, ‘tech’nically, it’s all been done before.” Or, “Let’s not fear robot composer overlords.”

Honda’s ASIMO (Advanced Step in Innovative Mobility) Robot
(Credit: NPR.org)

We, as a living, feeling, sentient species, are constantly developing new technology, either to expand the reach and application of human engineering and knowledge, or to make everyday human existence that much more convenient.

However, while general interest in and use of “tech” in modern life hasn’t ebbed, parts of society seem simultaneously concerned about its overuse and overreach, specifically in the way of artificial intelligence (whether complex, or a basic algorithm that doesn’t imitate sentience at all) when music is involved.
What’s “worse”? Sentient human beings consciously composing songs that we know recycle the same metaphors and rhyme schemes (how many “together forevers” have there been in just the last five years?), or artificial, non-sentient systems constructing music that could conceivably be just as repetitious? Such systems might make less of an attempt at conventional rhyme, or simply connect words and phrases through a systematic approach that lets the machine work out what “will be appealing for human listening experiences.” (That last part may still be somewhat in the realm of science fiction, as today’s AI composers have yet to tackle songs with intricate lyrics, but you’d think they’ll eventually get there and one of those two outcomes will develop.)
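To make that “basic algorithm” idea concrete, here is a minimal sketch of one of the oldest systematic approaches to machine composition: a first-order Markov chain that learns which note tends to follow which, then strings notes together accordingly. The tiny corpus and function names are my own invention for illustration, not any real AI composer’s method.

```python
import random

# Toy corpus of melodies (note names). Purely illustrative data,
# not drawn from any real piece or system.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E"],
    ["G", "E", "D", "C", "C"],
]

def build_chain(melodies):
    """Map each note to the list of notes observed to follow it."""
    chain = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, picking each next note at random from the
    observed successors of the current note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = chain.get(melody[-1])
        if not options:
            break  # dead end: no observed successor
        melody.append(rng.choice(options))
    return melody

chain = build_chain(corpus)
print(generate(chain, "C", 8))
```

The output is “new,” yet every transition in it was lifted from the corpus, which is exactly the repetition-versus-novelty tension the paragraph above describes.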
Facing facts, there’s no denying that every note, every progression, every rhythmic pattern has been put out there. Through thousands upon thousands of pieces, stylistic periods, people and years, “it’s been done.” Thus, wouldn’t it be feasible to propose that where modern commercial music is concerned, we don’t relish any of it for its melodic or lyrical diversity, but for the narratives and individual stories behind what we listen to, whether the outcome of that listening experience is to love or hate a song? After all, if we were that repelled by something seemingly programmed and generic, all those “together forevers” would have driven us crazy by now, right?
I think what at least some of the anti-AI-composer crowd is apprehensive about is that something so inherently connected to the human spirit and human emotions could come to be ruled by things that function without either, and that such a result would be a defacement of the art form itself. I can empathize with this sentiment, as I hold music very close to my own humanity. However, if no one has gotten sick of commercial repetition yet because of the insight artists bring to their songs, then, to be blunt, we have nothing to fret over, since a computer will never have a living, relatable human experience. That’s right: relatable human experience.
Even if a future supercomputer can compose a piece with the breadth and depth of the classical masters, and thousands of people visit a local arena to hear its debut, there will be no pre-existing narrative behind the notes it assembled. Computer-composed music might one day reach a level of complexity that warrants a genuine emotional reaction from the people who hear these hypothetical pieces, but even that is no reason to worry excessively about losing the human factor in music. Conceivably, a human reaction to any future computer-composed music would be due to the connection an individual makes with patterns they hear that trigger some previous thought, emotion or event, much in the way that olfactory memories surface when we encounter various scents. So, short of a particular note or pattern “striking such a nerve” in a listener, machines are at a strict loss in the genuine connection department.
The reverse side of the AI-composer evolution is that, while the human factor in music is important, perhaps our fascination with improving this kind of purposed technology comes from a more macro and straightforward enjoyment of things that are new. Paired with that enjoyment, I think we just like to know that, no matter what, music will always have something connected to it that is unique to us: uniquely alive, human and not something that can be dialed into a program. This might be where projects like the development of “brain pulse music” (see video below), and music composed from the brain’s signals, as explained in this research paper done by Chinese scholars at Tufts University, come into play. The melodies derived from both projects aren’t necessarily packed with hooks of the week, but both are sourced from the brain, which is something that cannot be copied and is as individual as a person’s fingerprint. While I did say that we have hit every note and pattern there is, what is new in this case is where the music comes from and how it manifested into a finished work, even if the notes chosen contain hooks or chords we’ve heard 1,000 times or more. Going back to machines, see the progression from an AI like “EMI” to a system like “Iamus.”

(Videos with audio below)

This is where I see that human-created music and AI composers could easily coexist. Both exist in a world where there are no original melodies left to be written. Yet the continuing sophistication of computers that can compose ever more complex scores, and the continuing unfolding of human victory, tragedy, love, loss, discovery and reflection, leave two very distinct areas for ongoing individuality, even if we’re all simply dipping back into the same repertoire of musical options over and over again.
