9 Feb 2023
Listen Here: Portrait of the artist as a ROBOT
The world is an unpredictable place. We all know this from having been collectively blindsided in the last few years by unforeseen global events. Now another event looms on the horizon that may just influence the music industry in ways that almost no-one has given a second thought.
The prelude to this event is a touchy subject in itself; one that’s been done to death in recent times and yet somehow remains scarcely understood by musicians and producers worldwide. I’m talking about the benefits and pitfalls of digitally producing and editing music.
Yeah, I know… boring, right? I’ve always found conversations about digital recording, editing, pitch-correcting and other ways of ‘interfering’ with a human performance a bit like watching paint dry. Almost as boring as the analogue-versus-digital debate. Hang in there; my reasons for diving into this tedious mire will become clear in a moment.
Like most of us, I just use the tools in front of me to do what I need to do to get the job done, and to hell with theory or principle. If I need to fix something, I just do it. If I enjoy the sound of something, I record it. There’s no moral judgement involved – no right or wrong.
But I think in some ways we’ve been asking the wrong questions over the years about digital editing and been seduced over a couple of decades now by the power it affords us.
For example, we’ve regularly posed the question: “Does pitch correction software remove the ‘emotional content’ – the humanity if you will – from a vocal?” when perhaps we should instead have been asking: “What is ‘emotional content’ and how do I recognise, capture, nurture or emphasise it?” There’s nothing wrong with pitch correction tools per se in the hands of creative artists…
With rhythms, it’s a similar story. We’re forever asking ourselves, “Is the performance ‘tight’ enough, ‘solid’ enough?” – these sorts of adjectives. It should come as no surprise then that our computer software has been engineered specifically to aid us in our incessant quest for tightness. The software simply reflects our perceived needs and the questions we’ve demanded answers to.
Perhaps we should have instead been asking ourselves how a rhythm feels, and of our software developers, how we might be able to pinpoint it in a recording. Not simply: “Is this performance bang on the timeline, and if not how do I ‘fix’ it?”
We should be asking the question: “What is ‘feel’?”, not “What is 120 divided by 60?” (which is effectively the only thing our computers currently provide the answer to).
Thus far computers have been excellent at showing us mathematical facts about performances: whether a vocal is flat or sharp, whether a beat is early or late. These are relatively easy tasks for a machine to execute technically since the questions can be framed and solved in purely scientific terms: the pitch is flat because it’s this frequency, not that one; this beat is late because it’s arrived at this point in time, not that one, and so on.
But our software hasn’t got the foggiest clue about what an emotional human performance looks or sounds like, and worse, it subtly discourages us from examining performances from this perspective ourselves. Computers have no tools with which to examine musical performance from this angle, and yet software developers regularly claim their toolkits are ‘comprehensive’. They are not.
As a result, software has inadvertently developed an uncanny ability to remove some of the most precious human attributes we naturally possess, throwing them out in a vain attempt to ‘fix’ the so-called problems. Software has specifically developed this way because we’ve asked it to, and because we currently have no way of defining (let alone measuring) these other, less scientific human characteristics.
If neither we nor our software can adequately define adjectives like ‘feel’, ‘humanity’, or ‘emotion’ well enough to identify these qualities in a performance, or acknowledge their removal when they’re lost to editing, then there’s arguably some truth to the assertion that some of our most compelling human qualities are indeed being destroyed by our production processes, rather than captured, emphasised or nurtured by them.
Tomorrowland

And so here we are now in 2023…
And where is music today? Well, there’s no doubting that artists have discovered some amazing new sounds and explored incredible sonic landscapes in the last 80 years or more. People all over the world have produced some fascinating, beautiful, radical, soulful, engaging, entertaining and confronting music with the tools at their disposal. And despite the hyperbole, album production today is as healthy as it has ever been and I’m consistently blown away by what others bring to the table. It never gets old.
But there’s also been an explosion – in this century more so than any other – in the amount of truly generic, derivative garbage that’s consumed by the mainstream listener like a McDonald’s cheeseburger.
Like the food production industry, the music production industry has learned to cater to the mainstream listener by generating lots and lots of easily digested, familiar products that millions upon millions of people consume daily.
The food industry has its junk food, the music industry its junk music.
It’s perhaps not surprising then that during this slow slide into digital audio manipulation par excellence – where a large proportion of the music consumed worldwide is highly modified and deeply generic, the musical ‘Happy Meal’ as it were – a new, arguably more invasive and unpredictable influence is quietly establishing a beachhead on our shores, one that may worsen the portrait of our industry further: generative Artificial Intelligence (A.I.).
A.I. is taking hold in all walks of life right now, and in music production specifically its influence has begun to emerge in at least one particularly insidious way, through its new role in the so-called ‘writing’ of original music.
At the click of a button now, A.I. can ‘pen’ a song for you (although whether you own it at the end of the process remains unclear). Literally anyone can now go to various sites that perform this ‘songwriting’ task for you (because hey, songwriting is confusing, time-consuming and difficult – who needs that drama in their life, right?) and moments later – for better or worse – there you have it: the lyrics, chords, everything manufactured for you. Too easy.
To me, this raises some serious questions about how we value songwriters and how we produce contemporary music in the here and now, as well as conjuring sinister thoughts in my head about where all this leads us in the future.
Are we sleepwalking artistically into the blades of a twenty-foot meat grinder? Does this mark the beginning of the end of music as we know it, of art as we know it, such that in 10 or 20 years’ time the notion of a human songwriter, producer or engineer will seem quaint at best, a distant memory at worst? Or will this happen in two years’ time, not 10 or 20?
This might sound like nonsensical fear-mongering to many, but these are fair questions to ask. There have been several revolutions during the relatively brief time we’ve been recording ourselves as a species, many of them technological. We’ve been powerless to stop all of them so far, it would seem… so how will A.I. affect the music industry when it takes hold?
I can’t really imagine where this development might lead us in 12 months let alone 20 years, although my instinct is to feel gravely concerned.
But if things do go in cycles – not straight lines or tangents – and I’m honestly not sure whether I believe that or not, then the time may be fast approaching when most songs are written, arranged, and to a large extent performed by generative A.I. systems and software, including – wait for it – the vocals.
At this tipping point record companies will be able to generate their own content without the need for a pesky thing called an ‘artist’ gumming up their corporate works with annoying human demands like, for example, needing to be paid. Moreover, this switchover will have been made particularly seamless for them by our 20-year obsession with tuned, synthesised and generally artificial-sounding vocals in popular music. When this day dawns, when artificial voices perform the main vocal role, many listeners won’t be able to tell the difference!
The logical extension of this economic model may then see the emergence of new ‘A.I. artists’ like ‘Woolworths’ and ‘Coke’, even ‘McDonalds’ – a genuinely horrifying prospect. But then, maybe, just maybe, there will come a breaking point.
What I suspect will emerge out of this ugly scene, this wholesale hijacking of music as a human artform, is a re-emergence of musical artists young and old, humans with an emotional story to tell, who have a face and a name, who pen their own lyrics and arrange them on an instrument with skills they’ve developed over some years. They might eschew technological interference from tools like Beat Detective and AutoTune (et al), believing strongly that it destroys the best aspects of what they offer the world – their human artistry and perspective. They will sound different from the mainstream, like Cher did back in ’98 with Believe. They will release their music independently at first, some to raging success, and then perhaps at least a few corporations will retreat to doing what they do best: selling junk food.
Though they will surely try, corporations will fail in their attempt to usurp artists and capture a creative market. The suits can pretend to be artists all they want, but they simply don’t understand that what makes an artist is not their output or product, but rather their process.
I am always in awe of those who toil at the coalface of the music industry, whose process, principles and artistry keep them going even in the face of general disinterest from the world around them. Some of these artists stay the course for 50 years without mainstream recognition, and still they keep at it! That is commitment, that is dedication.
But a day may dawn soon where all this flips on its head again, and who knows, someone reading this article might suddenly be thrust into the limelight.
Or we may not exist in a cyclic paradigm at all… maybe it’s a random, tangential one.
Andy Stewart owns and operates The Mill on Victoria’s Bass Coast. He’s a highly credentialed producer/engineer who’s seen it all in studios for the last three decades. He’s happy to respond to any pleas for recording or mixing help…
contact him at: email@example.com
Published monthly since 1991, our famous AV industry magazine is free for download or pay for print. Subscribers also receive CX News, our free weekly email with the latest industry news and jobs.