Things are quiet at work today so prepare for a treatise.
Things can sit anywhere on a continuum: from totally phase coherent (two identical signals, or two signals where only the amplitude of different frequencies is changed, such as a theoretically perfect coincident stereo pair), to totally phase incoherent, where the signals are identical but polarity-inverted and will totally cancel, to having no phase relationship whatsoever, which is a little more complicated. That last case partly comes down to psychoacoustics: your brain processes the sound it detects and tries to work out what components are making it up - it’s how we can hear individual instruments in a single mastered waveform. I’ll come back to that point.
Two mono but totally different tracks, like a guitar and a tambourine, have no phase relationship. You could hard pan them apart, your phase meter would read 0, but when you collapsed them to mono there would be no cancellation at all.
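If you want to see that continuum as numbers rather than on a meter, here’s a rough Python/numpy sketch - the sine and noise are just stand-ins of my own, and the little correlation function is roughly what a phase/correlation meter is showing you:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                      # one second of samples
rng = np.random.default_rng(0)

def correlation(l, r):
    # Normalised correlation, roughly what a phase/correlation meter shows (+1 to -1)
    return np.sum(l * r) / np.sqrt(np.sum(l**2) * np.sum(r**2))

def mono_gain_db(l, r):
    # RMS of the mono sum, relative to the RMS of one channel on its own
    mono = l + r
    return 20 * np.log10(np.sqrt(np.mean(mono**2)) / np.sqrt(np.mean(l**2)))

sig = np.sin(2 * np.pi * 440 * t)           # stand-in for "a track"

# Identical signals: fully coherent, the mono sum comes up 6 dB
print(correlation(sig, sig), mono_gain_db(sig, sig))        # -> 1.0, ~+6.0

# Identical but polarity-inverted: correlation -1, the mono sum cancels completely
print(correlation(sig, -sig))                               # -> -1.0

# Totally unrelated signal at the same level: correlation ~0, no cancellation
other = rng.standard_normal(sr)
other *= np.sqrt(np.mean(sig**2) / np.mean(other**2))       # level-match it
print(correlation(sig, other), mono_gain_db(sig, other))    # -> ~0.0, ~+3.0
```

The unrelated pair reads about 0 on the meter, yet the mono sum loses nothing - it just comes up roughly 3 dB instead of 6 dB. That’s the guitar-and-tambourine case.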
What about similar signals? We start to enter less certain territory.
Example A:
Say you have a good guitar player who double-tracks an identical rhythm part, and the two tracks are hard panned apart. These will largely have no phase relationship, because of all the subtle timing and tonal differences across every single harmonic making up each note. But there will be times when harmonics from each track line up or cancel each other just by chance. So there will be some elements of the perceived soundfield that float closer to the phantom centre, and times when elements cancel and appear extra wide in stereo. This is where psychoacoustics come in - because even though that’s technically what’s happening, our brain is really good at keeping a lock on the different “characters” that make up the sonic picture, as long as those phase coherencies/incoherencies don’t go on for too long - it keeps tabs on how each signal is evolving and discounts brief instances of stereo collapse, if that makes sense. It’s only when phase relationships go on for a while that the brain starts to say “maybe these two sounds/signals/tracks are actually one thing”. But it’s not simply one way or the other. It’s a continuum. This is why, on a purely perceptual level, it’s easier to get a wide-feeling stereo field if you change the two tones by using different guitars, amps, chord inversions etc. - it reduces the number of temporary commonalities between the left- and right-panned tracks.
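To put a number on how quickly those chance alignments come and go, here’s a tiny sketch - the 3-cent tuning difference is an assumption of mine, just something in the ballpark of two real takes, and it ignores timing drift entirely:

```python
import numpy as np

f0 = 110.0                  # low A on a guitar
detune_cents = 3.0          # assumed tuning difference between the two takes
ratio = 2 ** (detune_cents / 1200)

for k in (1, 2, 4, 8):      # fundamental and a few harmonics
    beat = f0 * k * (ratio - 1)     # how fast this harmonic drifts in/out of phase (Hz)
    print(f"harmonic {k}: full in-phase/out-of-phase cycle every {1 / beat:5.1f} s")
```

Each harmonic wanders in and out of alignment on its own schedule (the higher ones faster), so no single relationship hangs around long enough for the brain to treat the two takes as one source.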
Example B:
You have a singer in a big room. You put a close mic on them to pick up their sound, and you put a room mic up to pick up the room.
This is a more complicated situation, because by its nature there will always be some degree of phase coherency - perhaps better expressed as a static phase relationship - between the two signals. The trick is to give them as little phase relationship as possible: to get as little direct sound and as much room sound into the room mic as you can, which will minimise comb filtering on the direct sound of the vocalist.
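Here’s a back-of-the-envelope sketch of why that works - the delay and levels are made up, not measurements, but the shape of the result holds:

```python
import numpy as np

# The room mic hears the vocalist's direct sound too, just later and quieter.
# Summed with the close mic, that delayed copy puts comb-filter notches in the response.
delay_s = 3.0 / 1000                    # assumed extra path to the room mic (~1 m)
freqs = np.linspace(20, 2000, 4000)     # dense grid so we actually land on the notches

for direct_db in (-6.0, -20.0):         # how loud the direct sound sits in the room mic
    g = 10 ** (direct_db / 20)
    # magnitude of 1 + g*exp(-j*2*pi*f*delay): the close mic plus the delayed copy
    mag = np.abs(1 + g * np.exp(-2j * np.pi * freqs * delay_s))
    print(f"direct sound at {direct_db:+.0f} dB in the room mic -> "
          f"deepest notch {20 * np.log10(mag.min()):+.1f} dB")
```

Same comb either way, but the further down the direct sound sits in the room mic, the shallower the notches - which is all the mic placement trick is really doing.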
Example C:
You have a drum kit. Let’s keep it simple: you have a kick mic, a snare mic, and two overheads. All elements of the drum kit appear in all mics, and each mic picks up each element at different times and different volumes. There is a static collection of phase relationships. At some frequencies they are phase coherent, at others they’re phase incoherent. This manifests as comb filtering. Luckily, a drum kit (usually) is not changing the notes/frequencies it’s emitting through the performance, unlike a pitched instrument, so the phase relationships become a known quantity. You can therefore place the mics such that they have a phase relationship which works with the known, fixed frequencies coming off each part of the kit. You can also observe the 3:1 rule-of-thumb (I’m over the moon that @Stan_Halen correctly described this!) to maximise the signal you want in each close mic relative to the level of the bleed. You still get comb filtering in the signal, but it matters less because it’s at a lower level.
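For the 3:1 rule-of-thumb part, the arithmetic looks roughly like this - assuming equally loud sources and a simple inverse-distance law, so real kits, rooms and polar patterns will move the numbers around:

```python
import math

d_wanted = 0.1      # close mic roughly 10 cm from its own drum (illustrative)
for ratio in (1, 2, 3, 5):
    d_bleed = d_wanted * ratio      # distance from that mic to the neighbouring source
    bleed_db = 20 * math.log10(d_wanted / d_bleed)
    print(f"other source {ratio}x further away -> bleed at roughly {bleed_db:+.1f} dB")
```

At 3x the distance the bleed sits roughly 9.5 dB down, which is why the comb filtering is still there but does much less damage.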
Example D: (more a curiosity really…)
You take the aforementioned drum kit recording where you carefully placed the mics, and decide you want to high-pass the overheads to get rid of the kick fundamental. You high-pass at 100 Hz with a high-order filter because you want to keep as much as possible above that, and now your snare sounds really weak, even though its fundamental is at 200 Hz. Because the high-pass filtering has shifted the phase of the low end and low mids, there’s now a different phase relationship between the overheads and the snare track, and 200 Hz has been knocked out of phase. So you apply the same high-order HP filter to the snare… and it gets thicker sounding, because it’s back in phase. Mixing is crazy. You shake your head with a wry smile, turn off your gear and go to the pub.
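If you want to see the filter-phase part in numbers, here’s a rough scipy sketch - the 4th-order Butterworth is my stand-in for “a high-order HPF”, and your plugin’s curve will differ:

```python
import numpy as np
from scipy import signal

fs = 48000
# 4th-order Butterworth high-pass at 100 Hz (stand-in for "a high-order HPF")
sos = signal.butter(4, 100, btype='highpass', fs=fs, output='sos')

# Phase response of that filter at the snare fundamental (200 Hz)
w, h = signal.sosfreqz(sos, worN=[200], fs=fs)
print(f"phase shift at 200 Hz: {np.angle(h[0], deg=True):+.1f} degrees")  # roughly +78 for this filter

# The overheads get that shift and the unfiltered snare track doesn't, so their relative
# phase at 200 Hz moves by the same amount. Run the snare through the identical filter
# and the *relative* shift between the two tracks goes back to zero.
```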
So, my understanding is that what you’ve got is a badly recorded choir. That might sound harsh, but if you’re struggling to get a stereo mix that collapses into mono without sounding bad, then the fundamental problem is that the mics were in the wrong place.
If we set aside the fact that you need to create a stereo image that makes sense musically, and just concentrate on the requirement to deliver a stereo mix that folds down neatly to mono, then what you need to do is use the tracks that have the most tenuous phase relationship imaginable. Let’s, for argument’s sake, say that the two mics that are furthest apart represent this. You just take the left-most mic (call it mic 1) on the soprano side, and the right-most mic (mic 16) on the alto side, and hard pan them. Likely, you’ll have the problem of very little phantom centre and a feeling that the basses and tenors are under-represented in the resulting image. But because the left and right mics have such different sonic pictures, they don’t interact, and as such will collapse fine to mono - they won’t cancel out in any significant way.
But if you were to then fade in mic 2 on the soprano side, the one closest to mic 1… maybe you’ll start to run into difficulty. Because the mics might be closer to each other than to the signals they’re picking up, it won’t have the effect of shining a sonic spotlight a little closer in from the far left side of the choir - rather, it’ll just bring up a static phase relationship between mic 1 and mic 2 that causes comb filtering, and no matter where you pan mic 2 to try to separate the signals laterally and help your ear distinguish between mics 1, 2 and 16, it’ll still cause a problem when collapsed to mono.
It sounds to me like if you were to use all 16 mics, you’d just get a terrible smear of all parts of the choir phasing across all the mics, and that panning them across the stereo field would help your brain rationalise the resulting sound a little, but it’d still collapse to mono very poorly. So the solution, as I see it and without having the mix in front of me to experiment, is to use as few mics as possible to represent the total spread of the choir across the stereo field - and further, to pick mics that have as little in common with the other mics as possible.
You can do this by monitoring in mono as you audition mic combinations, looking for the most solid, least watery/comb-filtery sound possible. Personally, I’d start with the two mics that best represent the entire choir, pan them appropriately, then start to audition third mics to look for what you feel is lacking. Try nudging them in time or flipping polarity too to see if that helps. When selecting the stereo image, you might find that your hands are tied, in that the mic combination that gives the most solid mono fold is not the mic combination that gives the desired stereo field. You’ll have to decide what’s more important to you.
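If you’d rather not hunt for the nudge amount entirely by ear, a cross-correlation peek can give you a starting point. Rough sketch - align_hint is my own throwaway helper, not a standard tool, and the test signals are made up:

```python
import numpy as np
from scipy import signal

def align_hint(a, b, sr, max_ms=20.0):
    """Estimate how far apart in time two mic signals are, and whether a polarity
    flip on one would line them up better. A starting point for the ears, not a
    replacement for them."""
    corr = signal.correlate(a, b, mode='full', method='fft')
    lags = signal.correlation_lags(len(a), len(b), mode='full')
    keep = np.abs(lags) <= int(sr * max_ms / 1000)      # only consider sensible offsets
    corr, lags = corr[keep], lags[keep]
    i = np.argmax(np.abs(corr))
    offset_ms = 1000.0 * lags[i] / sr                   # negative: b arrives later than a
    flip = bool(corr[i] < 0)                            # strongest alignment is inverted
    return offset_ms, flip

# Example with made-up signals: b is a 2 ms later, polarity-flipped copy of a
sr = 48000
rng = np.random.default_rng(0)
a = rng.standard_normal(sr)
b = -np.roll(a, 96)                                     # 96 samples = 2 ms (np.roll wraps; fine for a sketch)
offset_ms, flip = align_hint(a, b, sr)
print(f"offset {offset_ms:+.2f} ms, polarity flip suggested: {flip}")   # ~ -2.00 ms, True
```

Whatever it suggests, the mono-monitoring audition still decides - treat the numbers as a hint for where to start nudging, nothing more.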