First, what exactly do you mean by “get the phase as close to +1 as possible”? Are you proposing to move the clips so that their temporal alignment is changed? I mean, I’ve done a lot of that in the past, and it has always ended up in tears.
So in answer to your question, I would do none of the above. I would mix it and get it sounding as good as I can. If it’s obvious that some sort of phasing/comb filtering is screwing up the overall mix, then obviously you will need to check it out, see exactly what the problem is, and what can be done about it. Every case is individual; I don’t think you can apply systematic rules and procedures. And if the only problem you’ve got in the mix is a bit of phasing, then it’s not really a problem - phasing can even enhance things in some circumstances. It’s not like the existence of phasing is a blight on all mixes; it’s only a problem if it doesn’t sound good.
Can you clarify your criteria for selecting/de-selecting specific mics?
Next, I don’t understand this:
Why do you want to mix it differently from how it appears, just because it’s for TV?
But more importantly, from an audio point of view, how are you going to create such a contrived stereo image when all you have is mics suspended from a ceiling?
From a technical point of view, I’d choose the best-sounding stereo pair and take it from there. I’d like to bet you could get 90% of what is required from a decent-sounding pair. That would probably help with mono playback too.
Well, colour me confused then, because I’m pretty sure that mono compatibility and phase correlation do go pretty much hand in hand. The only other thing that could screw up mono compatibility would be using panning to resolve frequency masking or to separate problematic parts of an arrangement. From your description of the miking situation, it sounds like the exact problem making mono compatibility poor is the phase relationship between the mics at your disposal.
If you get a chance, take a quick sec to glance over Boz’s responses.
Whichever had the cleanest signal. I tried to use no more than 2 of them per section.
It’s for the same reason you’d mix a movie or newscast. (Like how you never move diegetic voices out of the center speaker regardless of what is happening on the screen.) Notice how in a presidential debate the candidates aren’t panned R to L depending on their position on the stage. With music (especially choral and classical) you never want your dominant voice (the instrument carrying the melody) panned left.
The choir was spread too wide and the mics were hung too low.
Perhaps @bozmillar can explain a little about why this is not the case. Vaughn may be a little confused on this too, because he keeps proposing solutions that have to do with phase. I misunderstood this myself when I first posted the thread.
That’s what I ended up doing. Then I upmixed to stereo, then upmixed that to 5.1. Which isn’t the ideal way of working, but it got it resolved. I had also installed a PTHD update which reset my pan law. When Boz reminded me to check it, that answered a lot of questions about why my volume levels for hard-panned material were jumping around when I folded to mono, and why they seemed out of balance when collapsed.
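For anyone who wants to see why the pan law setting matters for the fold-down, here’s a back-of-the-envelope sketch (plain Python, nothing Pro Tools specific, and the straight L+R fold is just an assumption for illustration):

```python
import numpy as np

# Hypothetical fold-down: mono = L + R with no extra attenuation.
# Compare how a centered source lands relative to a hard-panned one.
for law_db in (-3.0, -4.5, -6.0):
    center_gain = 10 ** (law_db / 20)      # per-channel gain at center pan
    center_mono = 2 * center_gain          # centered source after L + R
    hard_mono = 1.0                        # hard-panned source after L + R
    diff_db = 20 * np.log10(center_mono / hard_mono)
    print(f"{law_db:+.1f} dB pan law: centered material ends up "
          f"{diff_db:+.1f} dB relative to hard-panned in mono")
```

With a -3 dB law, centered material jumps up 3 dB relative to anything hard-panned when you fold; with a -6 dB law they stay matched. So a reset pan law really will make hard-panned levels seem to move around in mono.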
It’s not that mono compatibility has nothing to do with phase correlation, it’s that the way that video was showing how to use a phase correlation meter was just fundamentally wrong. If you understand what a correlation meter is measuring and what it’s not measuring, then it’s a very useful tool. But it was very clear from his description that he did not know what the measurement means.
A correlation meter is a 1-dimensional display of a 5-dimensional issue. It can give the same reading for wildly different issues.
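To make that concrete, here’s a rough numpy sketch (my own illustration, not anything from the video) of two very different stereo situations that land on essentially the same spot on a correlation meter:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(48000)            # one second of noise at 48 kHz

# Case 1: two completely independent signals hard-panned L/R
pairs = {"independent": (x, rng.standard_normal(48000)),
         # Case 2: the same signal on both sides, right 5 ms late
         "delayed copy": (x[240:], x[:-240])}

for name, (left, right) in pairs.items():
    c = np.corrcoef(left, right)[0, 1]    # zero-lag correlation, roughly what the meter averages
    print(f"{name}: correlation ≈ {c:+.2f}")
```

Both read about 0, but the delayed copy will comb-filter badly when summed to mono while the independent pair won’t, which is exactly why the meter alone can’t tell you whether a mix is mono compatible.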
If something does not translate to mono, the main reason will probably be some sort of phase issue. My admittedly tiny mind can’t come up with anything other than phase problems when a mix does not translate to mono well.
I think I see where the confusion is coming from in my response. I didn’t say that mono compatibility has nothing to do with phase. I said that using Waves (or any other) stereo narrowing plugin will not have any effect on the mono mix, even though it makes it look nicer on the correlation meter. His method for fixing mono issues was what was wrong, not the fact that they are caused by phase issues. He’s using a tool in that video that doesn’t fix the issue.
I’m following this thread with great interest. I’m not a physicist by training (actually I’m a pharmacologist), but I’ve taught enough High School science to try to understand this problem.
If you play a sine wave on the L and R (hard panned) and the two are perfectly in phase, it will sound essentially like a mono signal. If you turn that signal to mono, you should not experience any perceived difference in the sound, as essentially both ears are receiving the same sine wave in the same way.
If you then take one of the sine waves (while you’re playing in stereo) and start to move it out of phase with the other sine wave, you will start to create a widening effect. This will be caused by each ear receiving the waves at slightly different times, causing you to perceive space between the two waves. You can put one wave completely and perfectly out of phase with the other, and this will be the “widest” the two waves will sound. Now if you turn this stereo sound to mono, the two waves should cancel each other out and you end up with silence.
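(If you want to see that numerically, here’s a quick numpy sketch of the mono sum in both extremes, assuming a plain L+R fold-down:)

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)        # a 440 Hz sine

in_phase = tone + tone                    # L + R, identical waves
out_of_phase = tone + (-tone)             # L + R, one polarity-flipped

rms = lambda s: np.sqrt(np.mean(s ** 2))
print(f"in phase:     mono RMS = {rms(in_phase):.3f}")      # twice one wave's level (+6 dB)
print(f"out of phase: mono RMS = {rms(out_of_phase):.3f}")  # exactly zero: silence
```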
To my untrained physics brain, taking a complex stereo recording and flipping it to mono would result in all the different waves that make up the music essentially becoming one complex wave. That would mean all the phase issues, interferences, and interactions that made the stereo image fantastic would now be represented by one wave. So if waves were cancelling each other out in stereo (creating widening effects), it would now be perceived as drops in volume, or cancellation. Things that were perfectly in phase would be perceived as a boost in volume. All the variations in phase would therefore result in complex resultants in the final wave. Does this make sense to anyone else apart from me?
I guess what I’m getting at is that a band with 5 instruments is going to be easier to mix in mono than a choir. The sheer number of sound waves a choir generates is going to be hard enough to mix, let alone make sound good in both stereo and mono.
Has anyone ever heard a good choir recording in mono? As an experiment, maybe we should get a well recorded stereo choir recording, throw it into a DAW and flip it to mono?
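You don’t even need a DAW for that experiment. A few lines of Python will do the fold (the file names here are just placeholders, and I’m assuming a plain equal-gain sum):

```python
import numpy as np
from scipy.io import wavfile

rate, stereo = wavfile.read("choir.wav")      # any well-recorded stereo choir file
stereo = stereo.astype(np.float64)
mono = (stereo[:, 0] + stereo[:, 1]) / 2.0    # simple equal-gain fold-down
mono = mono / np.max(np.abs(mono))            # normalize to avoid clipping
wavfile.write("choir_mono.wav", rate, (mono * 32767).astype(np.int16))
```

(Dividing by two is itself a choice of fold-down gain, which circles back to the pan law discussion above.)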
Yes, although it’s not as simple as this in practice. It depends on how correlated the channels are, and what is causing the decorrelation.
Think of it like this. Drums panned to the left and a guitar panned to the right are completely uncorrelated. But if you sum them together, they won’t cancel each other out.
But a guitar on the left and the exact same guitar on the right with a slight delay are much more correlated, but will definitely cause issues when summing to mono.
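Here’s a small sketch of why (a hypothetical 1 ms offset, plain L+R sum):

```python
import numpy as np

sr = 48000
delay = int(sr * 0.001)                  # the same "guitar" 1 ms late on the other side
t = np.arange(sr) / sr

for f in (250, 500, 750, 1000, 1500):
    tone = np.sin(2 * np.pi * f * t)
    mono = tone[delay:] + tone[:-delay]              # fold the pair to mono
    rms = max(np.sqrt(np.mean(mono ** 2)), 1e-6)     # floor keeps the log finite
    print(f"{f:5d} Hz: {20 * np.log10(rms / np.sqrt(0.5)):+7.1f} dB vs one copy alone")
```

With a 1 ms offset, anything around 500 Hz or 1500 Hz disappears from the mono fold entirely while 1000 Hz jumps 6 dB: classic comb filtering.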
When the decorrelation is static (think time offset or polarity flip or all pass filter) the results are usually not so good when summing to mono.
When the decorrelation is random or complex (think doubling a take or even complex room reverb) then summing to mono is generally less damaging. But then again, it really depends on what you consider damage. I can’t count the number of times I’ve double tracked guitars where they sound great in stereo but turned to mush when summed to mono. They weren’t canceling each other out, but they were smearing themselves together.
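A rough way to see the difference (using independent noise as a crude stand-in for a genuinely doubled take):

```python
import numpy as np

sr = 48000
rng = np.random.default_rng(1)
take = rng.standard_normal(sr)                        # stand-in for one take
d = int(sr * 0.001)                                   # static 1 ms offset

static_mono = take[d:] + take[:-d]                    # same take, time-offset copy
double_mono = take[d:] + rng.standard_normal(sr - d)  # independent "double"

def band_level(mono, f, width=20):
    # average spectral magnitude in a narrow band around f, in dB
    spec = np.abs(np.fft.rfft(mono))
    freqs = np.fft.rfftfreq(len(mono), 1 / sr)
    band = (freqs > f - width) & (freqs < f + width)
    return 20 * np.log10(np.mean(spec[band]))

for f in (500, 1000, 1500):   # 500/1500 Hz are the expected notches for 1 ms
    diff = band_level(double_mono, f) - band_level(static_mono, f)
    print(f"{f} Hz: doubled take is {diff:+.0f} dB relative to the static offset")
```

The static offset carves deep, fixed holes at the notch frequencies (the doubled take measures 25-ish dB louder there), while between the notches the two are within a few dB of each other. The doubled take just sits a smooth ~3 dB up everywhere: smearing rather than outright cancellation.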
So mono compatibility isn’t just about correlated/not correlated. It’s context dependent, which is why what works for one situation might not work at all for another.