The Music of the Algorithms: Tune-ing Up Creativity with Artificial Intelligence

Image from Pexels.com

No, “The Algorithms” was not a stylishly alternative spelling for a rock and roll band once led by Al Gore, the 45th Vice President of the United States, and originally called The Al Gore Rhythms.

That said, could anything possibly be more quintessentially human than all of the world’s many arts during the past several millennia? From the drawings done by Og the Caveman¹ on the walls of prehistoric caves right up through whatever musician’s album has dropped online today, the unique sparks of creative minds transformed into enduring works of music, literature, film and many other media are paramount among the things that truly set us apart from the rest of life on Earth.

Originality would seem to completely defy being reduced and formatted into an artificial intelligence (AI) algorithm that can produce new artistic works on its own. Surely something as compelling, imaginative and evocative as Springsteen’s [someone who really does have a rock and roll band with his name in it] epic Thunder Road could never have been generated by an app.

Well, “sit tight, take hold”, because the dawn of new music produced by AI might really be upon us. Whether you’ve got a “guitar and learned how to make it talk” or not, something new is emerging out there. Should all artists now take notice of this? Moreover, is this a threat to their livelihoods or a new tool to be embraced by musicians and their audiences alike? Will the traditional battle of the bands be transformed into a battle of the AIs? Before you “hide ‘neath the covers and study your pain” over this development, let’s have a look at what’s going on.

A fascinating new report about this, entitled A.I. Songwriting Has Arrived. Don’t Panic, by Dan Reilly, was posted on Fortune.com on October 25, 2018. I highly recommend clicking through and reading it if you have an opportunity. I will summarize and annotate it here, and then pose several of my own instrument-al questions.

Treble Clef

Image from Pexels.com

Previously, “music purists” disagreed about whether music tech innovations such as sampling and synthesizers were a form of cheating among recording artists. After all, these have been used in numerous hit tunes during the past several decades.

Now comes a new controversy over whether using artificial intelligence in songwriting will become a challenge to genuine creativity. Some current estimates indicate that, during the next ten years, somewhere between 20 and 30 percent of the Top 40 chart will be composed, in part or in full, by machine learning systems. The current types of AI-based musical applications include the following (a small, hypothetical sketch of how such settings might drive a generative routine appears after this list):

  • Cueing “an array of instrumentation” ranging from orchestral to hip-hop compositions, and
  • Systems managing alterations of “mood, tempo and genre”
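
To make those bullet points a bit more concrete, here is a minimal, purely illustrative sketch of how “mood, tempo and genre” settings might steer a machine-generated piece. The parameter names, scale mappings and tempo ranges below are my own assumptions for demonstration purposes; they are not drawn from Amper Music or any other vendor’s actual product.

    # Hypothetical sketch: mood/tempo/genre settings steering a toy generative routine.
    import random

    MOOD_SCALES = {
        # Scale degrees expressed as semitone offsets from a root note.
        "uplifting": [0, 2, 4, 5, 7, 9, 11],   # major scale
        "somber":    [0, 2, 3, 5, 7, 8, 10],   # natural minor scale
    }

    GENRE_TEMPO = {
        # Rough beats-per-minute ranges, chosen only for demonstration.
        "orchestral": (60, 90),
        "hip-hop":    (80, 100),
    }

    def compose(mood: str, genre: str, bars: int = 4, root: int = 60) -> dict:
        """Generate a toy melody: a list of MIDI note numbers plus a tempo."""
        scale = MOOD_SCALES[mood]
        tempo = random.randint(*GENRE_TEMPO[genre])
        notes = [root + random.choice(scale) for _ in range(bars * 4)]  # 4 notes per bar
        return {"tempo_bpm": tempo, "notes": notes}

    if __name__ == "__main__":
        print(compose(mood="somber", genre="hip-hop"))

Real systems are, of course, vastly more sophisticated, but the basic idea of user-facing knobs (mood, tempo, genre) mapping onto musical choices is the same.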

Leonard Brody, the co-founder of Creative Labs, a joint venture with the leading industry talent representation firm Creative Artists Agency, analogizes the current state of AI in music to that of self-driving cars, wherein:

  • “Level 1” artists use machines for assistance
  • “Level 2” music is “crafted by a machine” but still performed by real musicians
  • “Level 3” music is both crafted and performed by machines

Drew Silverstein, the CEO of Amper Music, a software company in New York, has developed “AI-based music composition software”. This product enables musicians “to create and download ‘stems’”, the company’s terminology for “unique portions of a track” played on a particular instrument, and then to modify them. Silverstein believes that such “predictive tools” are part of an evolving process in original music.²
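
For readers unfamiliar with the term, a “stem” is simply a separate per-instrument track that can be regenerated or reweighted on its own before everything is mixed down. The following self-contained sketch illustrates that general idea with two synthetic sine-wave “stems”; it is an illustration of the concept only and does not use Amper’s actual software or API.

    # Two synthetic "stems" (bass and lead) adjusted independently, then mixed down to a WAV file.
    import wave
    import numpy as np

    RATE = 44100
    t = np.linspace(0, 2.0, int(RATE * 2.0), endpoint=False)

    stems = {
        "bass": 0.6 * np.sin(2 * np.pi * 110 * t),   # A2
        "lead": 0.3 * np.sin(2 * np.pi * 440 * t),   # A4
    }

    # Modify one stem independently (here, halve the lead's volume), then sum into a mix.
    stems["lead"] *= 0.5
    mix = sum(stems.values())
    mix = (mix / np.max(np.abs(mix)) * 32767).astype(np.int16)

    with wave.open("mixdown.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(mix.tobytes())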

A number of other participants are also at work in this nascent space, applying algorithms in a variety of new ways to help songwriters and musicians.

Image from Pexels.com

Bass Clef

The applications of AI in music are not as new as they might seem. For example, David Bowie helped create a program called the Verbasizer for the Apple Mac. It was used on his 1995 album Outside to create “randomized portions of his inputted text” and generate original lyrics “with new meanings and moods”. Bowie discussed his use of the Verbasizer in Inspirations, a 1997 documentary about his creative processes.
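
For the curious, here is a rough sketch in the spirit of the cut-up technique that the Verbasizer automated: shuffling fragments of input text into new lines. This is only an illustration of the underlying concept, not a reconstruction of Bowie’s actual program, and the sample text and parameters are arbitrary.

    # Toy cut-up lyric shuffler, illustrating randomized recombination of input text.
    import random

    def cut_up(text: str, words_per_line: int = 6, lines: int = 4) -> str:
        """Shuffle the words of the input and regroup them into short lines."""
        words = text.split()
        random.shuffle(words)
        out = []
        for i in range(lines):
            chunk = words[i * words_per_line:(i + 1) * words_per_line]
            if chunk:
                out.append(" ".join(chunk))
        return "\n".join(out)

    source = ("we could steal time just for one day "
              "we can be heroes for ever and ever")
    print(cut_up(source))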

Other musicians have embraced these tools as well. Taryn Southern, previously a contestant on American Idol, used software from Amper Music, Watson Beat and other vendors for the eight songs on her debut album, the palindromically titled I Am AI, released in 2017. (Here is the YouTube video of the first track, entitled Break Free.) She believes that using these tools for songwriting is not depriving anyone of work but rather just “making them work differently”.

Taking a different perspective on this is Will.i.am, the music producer, songwriter and member of the Black Eyed Peas. He is skeptical of using AI in music because of his concerns over how this technological assistance “is helping creative songwriters”. He also expressed doubts concerning the following issues:

  • What is AI’s efficacy in the composition process?
  • How will the resulting music be distributed?
  • Who is the audience?
  • How profitable will it be?

He also believes that AI cannot reproduce the natural talents of the legendary songwriters and performers he cites, nor the complexities of the “recording processes” they applied to achieve their most famous recordings.

For musical talent and their representatives, the critical issue is money, ranging from “production costs to copyright and royalties”. For instance, Taryn Southern credits herself and Amper with the songwriting for I Am AI. Using this software enabled her to spend her funding on costs other than the traditional ones, including “human songwriters”, studio musicians, and the use of a recording studio.

To sum up, at this point in the development of music AIs, no one anticipates that any truly iconic songs or albums will emerge from them. Rather, it is more likely that a musician “with the right chops and ingenuity” might still achieve something meaningful, and in less time, with the use of AI.

Indeed, depending on the individual circumstances of their usage, these emerging AI and machine learning music systems may well approach industry recognition of being (speaking of iconic albums, and forgive me, Bruce) born to run.

Image from Pixabay.com

My Questions

  • Should musicians be ethically and/or legally obligated to notify purchasers of their recordings and concert tickets that an AI has been used to create their music?
  • Who owns the intellectual property rights to AI-assisted or wholly AI-derived music? Is it the songwriter, the music publisher, the software vendor, the AI developers, or some combination thereof? Do new forms of entertainment contracts, or revisions to existing ones, need to be created to meet such needs? Would Creative Commons licenses be usable here? How, and to whom, would royalties be paid, and at what percentage rates?
  • Can AI-derived music be freely sampled for incorporation into new musical creations by other artists? What about the rights and limitations of sampling multiple tracks of AI-derived music itself?
  • How would musicians, IP owners, music publishers and other parties be affected, and what are the implications for the music industry, if the developers of musical AIs make their algorithms available on an open source basis?
  • What new entrepreneurial and artistic opportunities might arise for developing customized add-ons, plug-ins and extensions to music AIs? How might these impact IP and music industry employment issues?

1.  Og was one of the many fictional and metaphorical characters created by the New York City radio and public broadcasting TV legend Jean Shepherd. If he were still alive today, his work would have been perfect for podcasting. He is probably best known for several of his short stories becoming the basis for the holiday movie classic A Christmas Story. A review of a biography about him appears in the second half of the November 4, 2014 Subway Fold post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?

2.  I attended a very interesting presentation of his system by Drew Silverstein, the CEO and founder of Amper Music, on November 6, 2017, at a monthly MeetUp.com meeting of the NYC Bots and Artificial Intelligence group. The program included two speakers from other startups in this sector, in an evening entitled Creating, Discovering, and Listening to Audio with Artificial Intelligence. For me, the high point of these demos was watching and listening as Silverstein deployed his system to create original music live, based upon suggestions from the audience.

Editor’s Note: This article published with permission of the author with first publication on his blog – The Subway Fold.

Posted in: AI, Copyright, Intellectual Property