Question about resolving mono compatibility issues

Edit: Have to remove part of this thread

Can someone who really understands this stuff skim through this and verify that the info in it is accurate?

Next I'm wondering how to systematically apply this to a multi-track music mix. Let's say I have 16 choir mics hanging from an 80 ft ceiling, dropped across a 120-member choir and 30-member orchestra on an 80 ft wide stage that's slanted on a 25 ft incline, inside a 200 ft wide x 300 ft deep baseball-stadium-shaped room with a seating capacity of 3,000. The stadium has a closed ceiling and was designed and rigged by Wrighston-Johnston acoustics, with a 3x mono line array cluster in the middle, flown from the ceiling at the front of the stage. They are one of the best acoustics firms in the country, and the speaker arrays are time-aligned as well as they can be in this room.

The choir arrangement is orchestra in the front, sopranos stage right, tenors/basses center, altos stage left. In the post-production mix I want all the men left, all the altos right, and the sopranos wedged in the middle, because they carry the most distinct parts of the melody.

So looking at the stage you have this:
Soprano | Bass | Ten | Alto

But watching them on TV, I want the mix panning them like this:
Ten/Bass | Soprano | Alto

My actual question: after I select the mics I'll use and discard the channels I don't need, to maximize my phase correlation would you start by soloing two opposite-panned mics, checking the phase correlation and getting it as close to +1 as possible, then move on to another two, and then check all four together? Or would you start by panning all the guys and altos hard left/hard right, throwing the imager on the choir bus, and treating them all together?

Not sure why you would want to move the stereo image around like that. The stage configuration looks right to me; I would mix it as you see it.

If you can't get good mono compatibility between adjacent mics, they are probably just too close together. Try to factor this in when you set your mic height and spacing.

The panning won't matter when you sum to mono. I would set my mics according to their distances from the source, and pan them any way you like afterwards.

Not very hi-tech, I know :slight_smile:

I want the melody (the top voice in SATB, which means the soprano) centered. Remember that the venue's FOH is mono. Broadcast mixes need to be delivered in 5.1 with mono compatibility.

I had no control over the mic config on the stage.

Yes, it does. This seems to be precisely what's causing the mono compatibility issues. I'm not sure how to fix it.

From what I know, there's something called "the 3-to-1 rule", which says each mic should be at least three times farther from the next mic than it is from the source it's mic'ing. So, for instance, if you put some mics in front of a choir/chorus, you might have each mic 2 feet in front of some singers and 6 feet between mics along the front of the stage. That's supposed to largely reduce phase issues.
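
For a rough sense of why that ratio helps, here's a back-of-the-envelope sketch in Python, assuming simple inverse-distance (1/r) attenuation from a point source and ignoring the room:

```python
import math

# A singer 2 ft from their own mic and at least 6 ft from the next mic
# (3:1) arrives in that neighbour roughly 20*log10(3) ~= 9.5 dB down.
def separation_db(dist_to_own_mic_ft, dist_to_other_mic_ft):
    """dB difference between direct pickup and bleed, assuming 1/r falloff."""
    return 20 * math.log10(dist_to_other_mic_ft / dist_to_own_mic_ft)

print(separation_db(2, 6))  # 3:1 spacing   -> ~9.5 dB of separation
print(separation_db(2, 3))  # 1.5:1 spacing -> only ~3.5 dB
```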

Now, what you're describing seems like a nightmare in terms of managing mic signals. I can understand the desire for lots of mics, and suspending them from up high might seem practical, but as a sound engineer you have to look at it in terms of the phase relationships.

I'm not sure I'm clear on this whole setup, but I'd probably say: find the mic tracks you're going to use and line them all up in the DAW, then zoom in on the waveforms at a granular level. You can manually adjust phase by sliding the audio regions one way or the other, if you can get the peaks and troughs of the waves to line up. Yes, it's going to be a PITA.
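
If eyeballing waveforms gets tedious, something like this cross-correlation sketch can estimate the offset for you (Python with scipy; `mic_1_audio`/`mic_2_audio` are hypothetical mono arrays at the same sample rate):

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def best_lag(ref, other, sr=48000, max_lag_ms=100):
    """Sample offset at which 'other' best lines up with 'ref'.

    A positive result means the event lands that many samples later in
    'ref', so nudge 'other' later by that amount to line them up.
    """
    xcorr = correlate(ref, other, mode="full")
    lags = correlation_lags(len(ref), len(other), mode="full")
    keep = np.abs(lags) <= int(sr * max_lag_ms / 1000)  # sane search window
    return lags[keep][np.argmax(xcorr[keep])]

# offset = best_lag(mic_1_audio, mic_2_audio)  # arrays exported from the DAW
```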

Might be a silly question, but are you using any stereo widening plugins?

1 Like

Uh, after having watched it: no, it's not accurate. I could go through point by point where his understanding is inaccurate, but in short, narrowing the stereo field has absolutely no effect on the mono mix, which means his entire premise is wrong. He's using the correlation meter to determine whether there's "bad phase", which is akin to using color to determine how much flavor Kool-Aid has.

4 Likes

The best way to determine mono compatibility is to listen in mono and pay attention to anything that changes the feel of the mix. Stuff won't sound as good in mono, but if the vocals become unintelligible, or the snare disappears, or a guitar suddenly becomes the focus over the vocals, then you have mono compatibility issues.

You can learn which effects and techniques tend to cause specific issues, and learn when it's OK for them to mess up the mono compatibility and when it's not worth it.

1 Like

Assuming mic placement is fixed and cast in stone…

You might be able to cheat a little with the mic placement / phase anomalies by adding delays, but I would also EQ it to hide what you're doing. Hopefully it will still sound big without getting weird.

I dunno - post a link to a 30-second clip of it and we might be able to come up with a workaround that defeats the phase meter and still sounds good.

edit… do it by PM if necessary

I wouldn’t do this. With that many mics with a source as large as a choir and an orchestra, your correlation meter is not going to be your final judge.

I'm not going to pretend to know the best answer for mixing something like this, because it's so highly dependent on the mic positions and seating arrangements, amongst many other things. I would probably try to approach the mix starting in mono and figure out which mics are causing you the biggest issues. If you can get everything to play well together in mono, then you can pan around at will to get your stereo mix.

No, panning won't have any effect on your mono mix. Well, if your pan law is set to -6 dB it won't have any effect; if the pan law is set to anything else, you will get some level changes when you sum to mono.
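
To put numbers on the pan-law point, here's a minimal sketch (Python; `center_gain` is the per-channel gain each pan law applies to a centered source):

```python
import numpy as np

def center_fold_shift_db(center_gain):
    """Mono-fold level of a center-panned source, relative to a
    hard-panned one, given the per-channel gain applied at center."""
    return 20 * np.log10(2 * center_gain)  # L + R vs one channel at unity

print(center_fold_shift_db(0.5))             # -6 dB law -> 0.0 dB shift
print(center_fold_shift_db(1 / np.sqrt(2)))  # -3 dB law -> +3.0 dB shift
print(center_fold_shift_db(1.0))             #  0 dB law -> +6.0 dB shift
```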

1 Like

No - meaning even when they’re off and I solo the choir bus, I still can’t get a proper mono fold.

Boz just said that won't fix anything. And my problem isn't getting it big; I can do that just fine. Forget about 5.1 for now. The producer's complaint is that even in stereo, the mix doesn't fold to mono.

@bozmillar - let me make sure I understand this correctly. Narrowing the image has nothing to do with mono compatibility, because all a phase meter tells you is whether frequencies are being masked/eroded/cancelled by content coming through the opposite channel - right? And the problem exists regardless of pan position?

You mean even the phantom center is weird? Well, maybe you should just mute (or drop by 6-9 dB) every other mic then. :thinking:

and/or flip their phases… start in the middle and work your way outwards.

or your master buss settings/ plugins / processors are now out of phase… :hot_face:

Correct, because summing to mono is literally the same thing as narrowing the stereo field all the way to mono. If summing to mono screws up the mix, then narrowing it will have the same effect. Think of it this way: if you put Panipulator (or any other mono-summing plugin) on a track or bus and look at the phase meter, it will give you a perfect reading. That doesn't mean it sounds right.

A phase meter is just telling you how different the left and right channels are. Not that it really matters beyond knowing that the suggestions in that video will make zero difference to your mono mix, so you need to find another solution.
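
To make that concrete, here's a quick numpy sketch (noise stand-ins for a stereo bus; a minimal mid/side width control, not any particular plugin). The mono fold is unchanged no matter how far you narrow, because narrowing only scales the side signal and the fold only keeps the mid:

```python
import numpy as np

rng = np.random.default_rng(0)
left = rng.standard_normal(48000)
right = 0.5 * left + rng.standard_normal(48000)  # partly correlated pair

def narrow(l, r, width):
    """Mid/side width control: width=1 leaves it alone, width=0 is mono."""
    mid, side = (l + r) / 2, (l - r) / 2
    return mid + width * side, mid - width * side

for width in (1.0, 0.5, 0.0):
    l2, r2 = narrow(left, right, width)
    # the mono fold (L+R)/2 is identical at every width setting
    print(width, np.max(np.abs((l2 + r2) / 2 - (left + right) / 2)))
```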

1 Like

Dude, I'm not using all 16 mics lol. I think I used 6 of the cleanest ones. The 16 mics were placed by an extremely experienced acoustic design team that hung them in specific places to optimize the delivery of the choir to the front of house. I wasn't required to mix every single mic for the TV broadcast.

ok… that helps a lot - maybe you can limit your choices to the mics placed furthest from the orchestra? It's hard to guess what you are actually hearing - I understand that you probably don't want to post it - maybe try just 3 mics on the choir (LCR) and 3 on the orchestra, and then see where you're at… only then, add stuff back in very carefully.

Basically, mix it like a live band, from the outside in: furthest 'stereo mic pairs' first… then fill in the spaces.

Beyond that, take the hit on it and sub the mix out; it might just end up being extremely difficult to mix without a purpose-built recording mic setup (e.g. a Decca tree).

Note: it's also possible that it's the PA system that's killing you. I avoid the PA system and monitors at all costs with my mics when recording live.

Even if I could, it wouldn't matter anyway. My problem in the first place was (and still is) a conceptual understanding of mono compatibility.

Oh, that's not really rocket science… well, maybe it is! (This is a tangent I really shouldn't be embarking on at this time of night.)

Imagine two sine waves on an oscilloscope. You usually want your two audio waves to sync up either perfectly or 180º out of phase with each other, so you can flip one or the other and normal service can resume.

Anything in between can be a minor irritation or a complete PITA.
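
Here's that picture in numbers - a quick numpy sketch summing two equal sine waves at a few phase offsets:

```python
import numpy as np

sr, f = 48000, 1000
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * f * t)

for deg in (0, 90, 180):
    b = np.sin(2 * np.pi * f * t + np.radians(deg))
    print(deg, round(np.sqrt(np.mean((a + b) ** 2)), 3))
# 0 deg   -> ~1.414: full reinforcement (+6 dB)
# 90 deg  -> ~1.0:   the awkward in-between
# 180 deg -> ~0:     total cancellation; a polarity flip restores 0 deg
```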

Scientific explanation for anybody still following along.

It starts getting overly complicated when you have two speakers involved, reproducing the same or almost identical signals.

Really, one speaker is the simplest form of mono and all mono checks should be done with just one.

However, that won't help when you're recording something acoustically whilst simultaneously reproducing that sound through a speaker at roughly the same amplitude as (but slightly out of time/phase with) the ambient sound. That's a really bad idea.

ok, I'm done - maybe I should've left this one to Boz.

Things are quiet at work today so prepare for a treatise. :rofl:

Things can be anywhere from totally phase coherent (two identical signals, or two signals where only the amplitude of different frequencies is changed, such as a theoretically perfect coincident stereo pair), to totally phase incoherent, where the signals are identical but opposite and will totally cancel, to having no phase relationship whatsoever, which is a little more complicated. It partially comes down to psychoacoustics: your brain processes the sounds it detects and tries to work out what components are making them up - it's how, from a single mastered waveform, we can hear individual instruments. I'll come back to that point.

Two mono but totally different tracks, like a guitar and a tambourine, have no phase relationship. You could hard pan them apart, your phase meter would read 0, but when you collapsed them to mono there would be no cancellation at all.
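
For anyone who wants to see that in numbers, here's a little numpy sketch with noise standing in for the two unrelated tracks:

```python
import numpy as np

rng = np.random.default_rng(1)
guitar = rng.standard_normal(48000)  # noise stand-ins for unrelated tracks
tamb = rng.standard_normal(48000)

def meter(l, r):
    return np.corrcoef(l, r)[0, 1]   # roughly what a correlation meter reads

def fold_rms(l, r):
    return np.sqrt(np.mean(((l + r) / 2) ** 2))

print(meter(guitar, tamb), fold_rms(guitar, tamb))
# ~0 correlation, fold RMS ~0.71: a clean power sum, no comb filtering
print(meter(guitar, -guitar), fold_rms(guitar, -guitar))
# -1 correlation, fold RMS 0: total cancellation, a very different story
```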

What about similar signals? We start to enter less certain territory.

Example A:

Say you have a good guitar player who double-tracks an identical rhythm part, and the two tracks are hard panned apart. These will largely have no phase relationship, because of all the subtle timing and tonal differences across every single harmonic making up each note. But there will be times when harmonics from each track line up or cancel each other just by chance. So there will be some elements of the perceived soundfield that float closer to the phantom centre, and some moments when elements cancel and appear extra wide in stereo. This is where psychoacoustics comes in - because even though that's technically what's happening, our brain is really good at keeping a lock on the different "characters" that make up the sonic picture, as long as those phase coherencies/incoherencies don't go on for too long - it keeps tabs on how each signal is evolving and discounts brief instances of stereo collapse, if that makes sense. It's only when phase relationships persist for a while that the brain starts to say "maybe these two sounds/signals/tracks are actually one thing". But it's not simply one way or the other; it's a continuum. This is why, on a purely perceptual level, it's easier to get a wide-feeling stereo field if you change the two tones by using different guitars, amps, chord inversions etc - it reduces the number of temporary commonalities between the left- and right-panned tracks.

Example B:

You have a singer in a big room. You put a close mic on them to pick up their sound, and you put a room mic up to pick up the room.

This is a more complicated situation, because by its nature there will always be some degree of phase coherency, or perhaps better expressed as a static phase relationship, between the two signals. The trick is to make them have as little phase relationship as possible; to get as little direct sound and as much room sound in the room mic as possible, which will minimise comb filtering on the direct sound of the vocalist.
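
The comb filtering here is easy to sketch: treat the direct sound in the room mic as a quieter, delayed copy of the close mic signal. The numbers below are made up (a 5 ms extra path, direct sound 6 dB down in the room mic):

```python
import numpy as np

sr = 48000
delay_s = 0.005  # direct sound reaches the room mic ~5 ms late
bleed = 0.5      # direct sound level in the room mic (-6 dB vs close mic)

def summed_level_db(freq):
    """Level at 'freq' when the close mic and room mic are summed."""
    phase = 2 * np.pi * freq * delay_s
    return 20 * np.log10(abs(1 + bleed * np.exp(-1j * phase)))

for f in (100, 200, 300, 400, 500):
    print(f, round(summed_level_db(f), 1))
# notches at 100, 300, 500 Hz (odd multiples of 1/(2*delay)); the less
# direct sound in the room mic, the shallower the notches
```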

Example C:

You have a drum kit. Let's keep it simple: you have a kick mic, a snare mic, and two overheads. All elements of the kit appear in all mics, and each mic picks up each element at a different time and a different volume. There is a static collection of phase relationships. At some frequencies they are phase coherent, at others they're phase incoherent. This manifests as comb filtering. Luckily, a drum kit (usually) is not changing the notes/frequencies it's emitting through the performance, unlike a pitched instrument, so the phase relationships become a known quantity. You can therefore place the mics such that they have a phase relationship which works with the known, fixed frequencies coming off each part of the kit. You can also observe the 3:1 rule-of-thumb (I'm over the moon that @Stan_Halen correctly described this!) to maximise, in each close mic, the ratio of the signal you want to the bleed. You still get comb filtering in the signal, but it matters less because it's at a lower level.

Example D: (more a curiosity really…)

You take the aforementioned drum kit recording where you carefully place the mics, and decide you want to high pass the overheads to get rid of the kick fundamental. You high pass up to 100 Hz with a high-order filter because you want to keep as much as possible above that, and now your snare sounds really weak, even though its fundamental is at 200 Hz. Because the high-pass filtering has shifted the phase of the low end and low mids, there's now a different phase relationship between the overheads and the snare track, and 200 Hz has been knocked out of phase. So you apply the same high-order HP filter to the snare… and it gets thicker sounding, because it's back in phase. Mixing is crazy. You shake your head with a wry smile, turn off your gear and go to the pub.
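
If you want to see Example D in numbers, here's a sketch with scipy, using an 8th-order Butterworth as a stand-in for the "high order filter":

```python
import numpy as np
from scipy.signal import butter, sosfreqz

sr = 48000
sos = butter(8, 100, btype="highpass", fs=sr, output="sos")  # steep HP @ 100 Hz
freqs = np.linspace(1, 1000, 2000)
_, h = sosfreqz(sos, worN=freqs, fs=sr)
phase_deg = np.degrees(np.unwrap(np.angle(h)))
print(np.interp(200, freqs, phase_deg))  # rotation at the snare fundamental
# hundreds of degrees of rotation near the cutoff; run the same filter on
# the snare track and it picks up the same rotation, back in phase again
```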


So, my understanding is that what you’ve got is a badly recorded choir. That might sound harsh, but if you’re struggling to get a stereo mix that collapses into mono without sounding bad, then the fundamental problem is that the mics were in the wrong place.

If we set aside the fact that you need to create a stereo image that makes sense musically, and just concentrate on the requirement to deliver a stereo mix that folds down neatly to mono, then what you need to do is use the tracks that have the most tenuous phase relationship imaginable. Let's say, for argument's sake, that the two mics furthest apart represent this. You just take the left-most mic (call it mic 1) on the soprano side and the right-most mic (mic 16) on the alto side, and hard pan them. You'll likely have very little phantom centre, and the basses and tenors will feel under-represented in the resulting image. But because the left and right mics have such different sonic pictures, they don't interact, and as such will collapse fine to mono - they won't cancel out in any significant way.

But if you were to then fade in mic 2 on the soprano side, the one closest to mic 1… maybe you'll start to run into difficulty. Because the mics might be closer to each other than to the signals they're picking up, it won't have the effect of shining a sonic spotlight a little further in from the far left side of the choir. Rather, it'll just set up a static phase relationship between mic 1 and mic 2 that causes comb filtering, and no matter where you pan mic 2 to try and separate the signals laterally and help your ear distinguish between mics 1, 2 and 16, it'll cause a problem when collapsed to mono.

It sounds to me like if you were to use all 16 mics, you'd just get a terrible smear of all parts of the choir phasing across all the mics, and panning them across the stereo field would slightly help your brain rationalise the resulting sound, but it'd collapse to mono very poorly. So the solution, as I see it and without having the mix in front of me to experiment with, is to use as few mics as possible to represent the total spread of the choir across the stereo field. And further, to pick mics that have as little in common with the other mics as possible.

You can do this by monitoring in mono as you audition mic combinations, listening for the most solid, least watery/comb-filtered sound possible. Personally, I'd start with the two mics that best represent the entire choir, pan them appropriately, then start auditioning third mics to find what you feel is lacking. Try nudging them in time or flipping polarity too, to see if that helps. When selecting the stereo image, you might find that your hands are tied, in that the mic combination that gives the most solid mono fold is not the one that gives the desired stereo field. You'll have to decide what's more important to you.
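
And if you want a number to go alongside your ears while auditioning combinations, here's a crude sketch (Python; `mics` is a hypothetical dict mapping mic names to mono numpy arrays for your six candidates):

```python
import itertools
import numpy as np

def fold_score_db(tracks):
    """Mono-fold RMS vs an uncorrelated power sum: ~0 dB means the mics
    barely interact; well below 0 means cancellation (a watery fold);
    well above 0 means strongly coherent (lots of shared direct sound)."""
    mono = np.sum(tracks, axis=0)
    ideal = np.sqrt(sum(np.mean(t ** 2) for t in tracks))
    return 20 * np.log10(np.sqrt(np.mean(mono ** 2)) / ideal)

# for combo in itertools.combinations(mics, 2):
#     print(combo, round(fold_score_db([mics[m] for m in combo]), 1))
```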

1 Like

I'm really sorry - I edited the title to show that the real question is about mono compatibility. As Boz pointed out, it has nothing to do with phase correlation. I didn't realize this was misleading until I looked at it this morning and kept wondering why everyone kept commenting on phase.

I don’t have any control over the position of the mics.