
7 Apr 2026

IMMERSED OR MISGUIDED?

If you can’t hear the main vocal at a show you paid good money to see, does it really matter whether the mix is in mono, stereo or immersive?

The conversations I have around the installation of immersive audio systems at large live arena-type shows sometimes remind me of the chatter I hear around the colonisation of Mars. There are lots of opinions and regurgitated theories about how and when a Mars expedition might get underway, yet almost no‑one does the maths to realise just how impractical – some would say utterly fanciful – the whole thing actually is. Meanwhile planet Earth is on the ropes.

The same sentiment applies to immersive audio systems at large live venues – do the physics and mathematics around these systems, or indeed the economics of installing these sorts of complex sound systems into large arenas (and then disassembling them and shipping them off to the next venue), really stack up when you take into account all the extra time and costs involved?

At least one person I know thinks they don’t.

Backtracking Slightly

The other day, whilst glancing up at my laptop to check that the house music was still playing (muted, of course) during a live mix of an M. Ward show at Meeniyan Hall, I noticed an email in my inbox from none other than Howard Page, Sting’s FOH engineer and Senior Director of Engineering at Clair Global.

I didn’t open it at the time, for obvious reasons – I’m not one to check emails while mixing a live show, although I’ve seen others do it! Besides, I had assumed it was probably just another promotional email from an audio company where Howard Page was, perhaps on this occasion, the subject of an article or mix tutorial.

But it wasn’t.

As I discovered after the show, the email was from Howard himself, writing to me to discuss the impracticalities of immersive audio for large‑scale touring acts.

To preface all this, I must point out that Howard and I aren’t well acquainted, although I have sat with him once before at a Sting show at the Myer Music Bowl in Melbourne some years ago. But we haven’t had contact since. So I was more than a little surprised to see the email in my inbox.

As it turns out, Howard has been holidaying in Australia, and whilst here he’s been reading my articles in CX Magazine (amongst other things I hope), which was nice to hear! But his main reason for making contact was to press home a point about immersive audio systems in the live touring realm, one that he’s adamant is being sidelined by the hype surrounding various immersive formats.

His assertion is that ‘immersive audio’ – generically speaking – has its place in theatres, fixed installs and the like, but not in the real world of large-scale, high‑intensity world touring. Moreover, the fundamental physics of the speed of sound – he rightly argues – make playing dynamic, rhythm‑based music in a large space a time‑alignment nightmare for most punters in a large live venue, with only a small group at the centre of the sonic focus able to enjoy time‑ and phase‑coherent audio. Those on the periphery aren’t so lucky.
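To put rough numbers on that point, here’s a back‑of‑envelope sketch (the speaker positions and listener spots are hypothetical, purely for illustration): sound travels at roughly 343 m/s at room temperature, so a listener well off to one side of two widely spaced sources hears the same note from each of them tens of milliseconds apart.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C

def arrival_delay_ms(listener, source_a, source_b):
    """Difference in arrival time (ms) of the same sound from
    two sources, as heard at a given listener position (x, y in metres)."""
    dist_a = math.dist(listener, source_a)
    dist_b = math.dist(listener, source_b)
    return (dist_b - dist_a) / SPEED_OF_SOUND * 1000.0

# Hypothetical layout: two hangs 40 m apart across the front of a stage.
left, right = (-20.0, 0.0), (20.0, 0.0)

# A listener dead centre, 30 m back: both paths are equal, so no offset.
centre = (0.0, 30.0)
# A listener the same distance back, but well off to one side.
edge = (-35.0, 30.0)

print(round(arrival_delay_ms(centre, left, right), 1))  # 0.0
print(round(arrival_delay_ms(edge, left, right), 1))    # 84.9
```

For the listener at the centre the two arrivals line up perfectly; the punter at the edge hears the same hit roughly 85ms apart – nearly a tenth of a second, and well beyond the few milliseconds of misalignment engineers normally fight over – which on any rhythmic material reads as a flam on every beat.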

Immersive On A Major Scale

The only way ‘immersive audio’ in a large venue can work – particularly on the scale of, let’s say, a soccer stadium – is when the music is fundamentally time‑incoherent in the first place, by which I simply mean the main musical elements have no strict timing cues, like a drum beat. Instead, the music might be made up of slow, amorphous synth pads, vocals or sound effects. For music like this, where the timing is less critical, immersive audio on a large scale is entirely possible, even though everyone is actually hearing a slightly different version of the performance.

But I’m confident when I contend that this isn’t what 50,000 people are likely to want to witness at a large gig! People go to big concerts like this to party, dance and sing along to songs they know well – not to be put to sleep by ambient synth pads. And in a large space, when the core elements of an immersive mix comprise a rock band, for example – and given that the speed of sound is something we can’t change – by the time those elements arrive at the ears of the 5,000 people on the periphery of the system, the timing of the music for these customers, who’ve paid good money to be there, will be a shambolic mess.

The way around this fundamental problem is to pick a small section of the venue – presumably in the centre, in front of the stage – and make that sonically ‘immersive’, and then play either a stereo or even mono source into other parts of the arena.

But this is surely, by definition, an admission of sorts that immersive systems in general are compromised by the size of the venue.

But Howard Page’s concerns around immersive audio systems on tour are more specific still, though he shares the concerns above too. His others stem from what he sees as the impracticality of an international touring act installing immersive systems into venues that, for a multitude of differing reasons, often simply cannot accommodate such vastly more complex, expensive and time‑consuming systems. A venue might, for example, have extremely restricted rigging weight limits or sightline issues, or simply be constrained by time – these systems invariably take far longer to construct, tune and then deconstruct than your average left/right system with in‑fills and delays.

Immersed In Plugins

I’m planning on meeting up with Howard to explore this topic in more detail in the next week or two. Hopefully by next issue you’ll be hearing from Howard, either as part of a discussion or directly from the man himself – not sure at this stage. So stay tuned for that.

But while I still have space here, there’s something else I’d like to weave into this article, if I may, that Howard mentioned in one of his emails the other day. It was a comment that really struck a chord with me that relates only indirectly to the immersive audio debate, but fundamentally to two things that are central to any conversation about live performance in general – the mixing, and the customer.

His comment was this: “Whenever I sit around watching other engineers mix at large‑scale festivals, I am often stunned by the amount of extra plugins and processing they apply, in most cases to the absolute detriment of the end result. I often make the not‑so‑subtle joke to audio students – ‘Do you think Mrs. Robinson in seat E15 knows or cares what plugins you have on the lead vocal if she cannot understand a word the singer is singing because the bass and kick drum’s low‑end are towering over the mix?’”

Of course not, she’d be deeply disappointed by that outcome.

To which I would add (in furious agreement with Howard): when Mrs. Robinson is there to see her favourite artist perform live – let’s say she’s a huge fan of Lady Gaga, and she’s paid good money for that E15 seat – does she expect the 500,000W PA to be powerful enough for Lady Gaga to be clearly audible in the mix, or would she prefer NOT to hear the person she’s come to see, but rather the kick drum and the guitars? Is that where her focus is; is that whose name is on the poster out front – “Kick Drum and the Guitars!”?

One thing’s for sure, if Mrs. Robinson was going to a Sting concert she wouldn’t suffer that fate. No one is more concerned with the audience’s focus than Howard Page.

He is meticulous about creating a fantastic sounding, sonic focus around the person everyone is there to see – Sting. He’s unapologetic about it too. In reality, he’s far less concerned about what you or I might think of his mix than what his customers experience – the audience that has paid to be there! To Howard, that’s what matters most.

And he’s right to think that way.

I would assert that when people complain about a live mix sounding ‘bad’, the overwhelming reason is because they couldn’t hear the singer – the person they came to see, upon whom they were focussed all night.

If you get the vocal mix wrong at a live show – regardless of whether it’s low or dull, too compressed or too wet – the rest of the sound counts for nothing. Yet time and time again this is precisely what happens at live shows – mix engineers bury the lead vocal in the mix to the point where punters can’t hear the one thing they’re almost exclusively focussed on – the singer.

As a mix engineer, if you can’t get that right, your mix is a dud. It might not be seen that way by the engineering fraternity in the room; they might have liked how the rest of the instruments were sitting, or the coverage of the subs in the venue, but punters couldn’t care less about all that. They just leave wondering how, with all that equipment, they still couldn’t hear the singer.

Regardless of whether the sound system on any given night is immersive, or indeed mono, the concerns of most paying customers remain the same. Their focus is on the star they’ve come to share a space with and, more importantly, hear sing – not on what the PA is or how it’s configured. Frankly, if you can’t hear the singer, what does it matter anyway?

I remember mixing a gig at the Corner Hotel years ago; a record launch for an album that I’d also produced. The mix sounded great on the night – even though I was a relatively inexperienced live engineer at the time – mainly because the singer was (and remains) amazing, and I had her loud and glamorous in the mix.

After the show, about 20 people came up to me at Front of House to tell me how great they thought the mix sounded… praise I accepted graciously. One individual, meanwhile, made the negative comment: “I thought the vocal was a bit loud…”

He was an audio engineer.

Andy Stewart owns and operates The Mill in Victoria, a world‑class production, mixing and mastering facility. He’s happy to respond to any pleas for pro audio help… contact him at: andy@themill.net.au
