LISTEN HERE

9 Feb 2026

Expertise Is Everything… Or Is It?

by Andy Stewart

While much of today’s cutting-edge software encourages us to hand over the reins to AI – to make decisions about everything from EQ curves to song structures – we’re also told that surviving in this industry depends on the opposite: emphasising the unique sounds we bring, maintaining our technical expertise, and refining our taste and judgement – the very things that make us human…

But the two concepts – to me at least – seem to contradict one another. On the one hand, the trend in audio software seems to be all about automated processes that, in many instances, wholly replace the need for an engineer to apply their technical expertise. On the other, our skills and expertise are apparently also the only thing that will set us apart in an AI-dominated future. But the very tools we already use to produce music – and more broadly audio content – seem destined to rob us of the skills we’ve worked so hard to develop, along with the desire, it seems, to even apply them.

You only have to use one of the latest real-time resonant peak analyser plugins or automated bus compressors for a short while to realise how quickly these sorts of tools take away your desire to manually craft the parameters of a processor. It’s a seductive, slippery slope – one that can also quietly undermine your confidence, making you doubt your own judgement. When that confidence falters, some of the qualities that make your work unique can slip away along with it. In short, the decisions you’d otherwise make as an individual human are steadily usurped by AI’s predilection for taking over almost every role – and that’s dangerous for us humans…

The long-term risk here is that as humans gradually hand over control of most aspects of audio production in all its forms, we may lose not only our understanding of these tools, but also the skills we’ve spent years honing. And besides, who says AI can do things better, and on what basis is this assertion made? If we are to agree that music is an art form, and that the decisions around every aspect of how a song should be made are ultimately subjective, then how are the decisions AI makes any better?

What concerns me most is the prospect of a future where human interaction with audio is no longer seen as valuable – or even viable – either technically or artistically. Indeed, the argument doing the rounds today is that AI is already superior in every way, and that humans by comparison are slow and expensive. But speed and cost are measures of manufacturing, not of art. It’s a mistake to confuse the two.

It’s not just the final product that matters – it’s the process itself. For audio professionals of all kinds, the process is our passion, our craft, and our livelihood. Moreover, many of us care about it deeply enough that we are also interested in passing our knowledge on to the next generation (who are naturally drawn to music and creativity), if not as a career, then at least as a pursuit worth exploring. To deny the next generation of budding audio enthusiasts and professionals the lived experience of making music as we currently do is like outsourcing the ride of a perfect wave to a machine: efficient, perhaps, but utterly missing the point.

The future is always uncertain, of course, but if the next era of recording, mixing, mastering, post-production, and sound design is to keep humans at the fulcrum, then the expanding use of AI to automate technical work and guide creative decisions deserves close scrutiny. Otherwise, in a relatively short period of time, our so-called ‘unique’ human traits of subjectivity and taste will be mercilessly overrun by the efficiencies of AI, which are increasing exponentially.

Dancing With The Devil

If any of this resonates – if you believe you offer clients a distinctive sound that isn’t easily replicated, and that they work with you not just for the results you produce but for the human perspective you bring – then the next question is unavoidable: are you already using AI-powered tools to improve efficiency, inform artistic decisions, or solve technical problems that once seemed insurmountable?

If you’re anything like me, then the answer to this question is a simple ‘Yes!’ For all my protestations about AI and the automated processes that can undermine my instinct to solve artistic or technical problems myself, I still use these tools every day – and will, in all likelihood, rely on them more and more as time goes by. But whether this is innately a good or bad thing is hard to say with certainty.

One thing I instinctively recognise is that I’m lucky to have been trained over the years in countless audio processes and techniques long before this new era of AI dawned, which gives me the perspective of being able to understand – and more importantly, hear – what works and what doesn’t when these new tools are applied.

But one thing is certain: while it’s true that some of the newest plugins offer automated processes that conceal at least some of their internal mechanics – which some engineers find frustrating or, in some cases, disturbing – they sound remarkable, performing aural feats of near-magic. While they’re not the best educators of engineering best-practice by a long shot, they nevertheless offer solutions that were science fiction a couple of years ago, opening the door to new ways of recording and mixing.

Take, for example, a simple plugin like Black Salt Audio’s Silencer. Not only does this simple, inexpensive tool practically eliminate spill from a hi-hat into a snare mic – one of the most common issues of any drum kit recording over the years – it also has the capacity to perform similar feats of magic across all kinds of recordings, making the problem of ‘spill’ and crosstalk between microphones in a live recording vanish like stars after sunrise. Over the course of this year there will likely be a thousand plugins on the market just like it, at which point every form of spill between any instruments will likely have been conquered.

This opens up the prospect of recording in spaces that were previously too noisy or ill-suited to tracking music, and will also encourage engineers to track groups of musicians in one space again – the old-fashioned way – albeit without the old concern that each mic or instrument might cause problems for others in the room. Phase anomalies will be eliminated, and worries about mistakes and the editing limitations they bring will likely vanish. This is a profound development.

Other plugins that use real-time analysis to control resonances and EQ anomalies, like Oeksound’s Bloom and Soothe2, or Soundtheory’s Gullfoss, also provide effective solutions to problems that engineers previously could often only partially resolve. These plugins have, in some cases, been accused of concealing their processes, and the companies themselves have been cagey in interviews about how their plugins really work. Regardless, there’s no denying the sonic benefits these modern tools bring to the table. Frankly, I wouldn’t want to be without them now.

Even a classic tool like SSL’s famous bus compressor has been given an automated refresh. The new autoBus plugin by Solid State Logic has an AI engine strapped across it courtesy of Sonible that offers users a viable and remarkable sounding automated solution to mix bus compression – a process that countless operators have failed to grasp (or at least master) over the decades. This plugin is simple yet deceptively sophisticated, and while I already have three analogue stereo SSL bus compressors in the studio here at The Mill, I’ve used the new autoBus on mixes to great effect already. It’s simple to apply, and dastardly in what it suggests about my future input into the process!

So despite my reservations about AI and other automated processes, which tend to dull our impulse to wrestle with artistic or technical problems ourselves, I use these tools daily, and will almost certainly continue to do so with increasing frequency. Whether that makes me a hypocrite or merely complicit, I’m not sure.

What I do reject, however, is the claim that AI – because it is faster and more efficient at certain tasks – is therefore objectively better at everything. This is false.

The artistry and craft of making music is a subjective process. And by process I mean this: humans made it… using their minds and bodies in a space, over time. They did not punch it out as a robot might – fast and efficiently. They took time to produce a unique musical outcome that no other process can replace. And perhaps more importantly, everyone involved in this process will carry the memories of having made it with them long into the future. It’s a human process, an organic process, with innate value wholly unrelated to the product a listener might stream later on. You can’t buy it, or indeed sell it. It’s a memory that those lucky enough to have experienced this process carry with them for a lifetime.


CX REGULAR, ANDY STEWART

Andy Stewart owns and operates The Mill on Victoria’s Bass Coast. He’s a highly credentialed producer/engineer who’s seen it all in studios for over four decades. He’s happy to respond to any pleas for recording or mixing help… contact him at: andy@themill.net.au

Subscribe

Published monthly since 1991, our famous AV industry magazine is free for download or pay for print. Subscribers also receive CX News, our free weekly email with the latest industry news and jobs.