I got moved to a new team at work recently, and we’ll be doing machine learning. So I just started this ML course on coursera today and came across this:
Naturally, as someone who has very recently struggled to get a good singer-songwriter sound with 2 mics and my guitar and vocals, I was like
I tried googling for it a bit… but couldn’t find anything. Why is there not a plugin (or even a standalone program marketed to people who record music) that does this?? Reducing bleed between microphones has been a big issue in this space for about as long as recording has been a thing. I have heard that iZotope RX can do some pretty amazing stuff, and I’d believe the software is out there, but if it was a known algorithm in 2014 it seems like someone should have tried to market it to me in an inexpensive plugin. I have half a mind to do it myself! Then I could record my guitar and vocals at the same time (with 2 different mics) and get rid of most of the bleed, which would make mixing a lot easier. Maybe it doesn’t work well in high fidelity? I guess the examples were quite lo-fi…
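From what I can tell, the textbook starting point for the two-mic, two-source problem is independent component analysis (ICA). Here's a toy sketch using scikit-learn's FastICA on a made-up instantaneous mixture (the sources and mixing matrix are invented for illustration; real rooms add delays and reflections, which plain ICA doesn't model, so this is the idealized case only):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)

# Two stand-in "sources": a low tone for guitar, a square wave for voice.
s1 = np.sin(2 * np.pi * 110 * t)            # guitar-ish
s2 = np.sign(np.sin(2 * np.pi * 440 * t))   # voice-ish
S = np.c_[s1, s2]

# Each mic hears mostly one source plus bleed from the other.
A = np.array([[1.0, 0.4],   # mic 1: guitar + vocal bleed
              [0.3, 1.0]])  # mic 2: vocal + guitar bleed
X = S @ A.T

# Recover the sources (only up to scale, sign, and ordering).
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
```

On a clean synthetic mixture like this the recovered components correlate almost perfectly with the originals; with real recordings the convolutive mixing is exactly what makes the problem hard.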
Thought someone here might know what’s up with this, or at least be interested. Also, hi!
A few years ago, there was a piece of software specifically for drum mic bleed that used machine learning - I haven’t heard much about it lately, though:
…and iZotope recently released this - pretty effective, apparently…
Hey Cristina! Nice to hear from ya. So, this is not the same thing I realize, but similar perhaps?
Watching that video reminded me of a Lewitt Mixing Contest last year. They’ve got a plugin called the Polarizer which lets you change the polar pattern on a “dual input” microphone on the fly.
It was used on the L/R OH mics on these tracks.
I remember being pretty amazed at the difference toggling the setting on the plugin made. Strictly from a layman’s perspective, my guess is that it would be tweaking the phase alignment and timing of the two channels. Just a WAG.
Pretty interesting post!
I watched a Reaper tutorial a few days ago where the guy used a multiband compressor sort of as a gate to reduce bleed on a snare drum mic. The technique seemed to work pretty well, but in your case it seems like your voice and the acoustic guitar might overlap too much in frequency. He basically adjusted the threshold and ratio in the band he wanted to eliminate until the bleed was much less audible. It boils down to isolating the frequencies the bleed occurs in without killing everything else.
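That multiband-gate idea can be sketched in a few lines: split out the band where the bleed lives, duck that band whenever its level drops below a threshold, and sum it back in with the rest. A rough offline Python/scipy sketch (the band edges, threshold, and ratio are placeholders you’d tune by ear):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def gate_band(x, sr, lo_hz, hi_hz, thresh, ratio=8.0, win=512):
    """Attenuate the lo_hz..hi_hz band of x by `ratio` wherever that band's
    short-term level falls below `thresh` (a crude downward expander acting
    on a single band)."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")
    band = sosfiltfilt(sos, x)   # zero-phase split (offline; a real-time
    rest = x - band              # version would need causal crossovers)

    # Short-term RMS envelope of the isolated band.
    env = np.sqrt(np.convolve(band ** 2, np.ones(win) / win, mode="same"))

    # Below the threshold, duck the band; above it, pass it untouched.
    gain = np.where(env < thresh, 1.0 / ratio, 1.0)
    return rest + band * gain
```

Something like `gate_band(snare, 44100, 150, 400, thresh=0.02)` would duck low-mid bleed between hits; again, all those numbers are made up, and a proper plugin would also smooth the gain to avoid clicks.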
I think I will have to give this a go at some point, when I’m more versed in using ML algorithms. I suspect it just doesn’t work quite as well as I hope. But mark my words I’m going to find out before this course is over!
Good to hear from you Cristina! Hope you and yours are safe and well.
I messed with this quite a bit a few years ago. It’s pretty hard to do well, especially at higher frequencies and on transients. If you have a microphone array, you can basically build a heat map of where the sounds are coming from and filter accordingly. The more microphones you have, the more reliably you can do it at higher frequencies.
The hard part is that you get a lot of ambiguity of direction at high frequencies, once the microphones are more than about half a wavelength apart.
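To put numbers on that trade-off: the usual rule of thumb is that phase-difference direction finding stays unambiguous only up to about f = c / (2d) for mic spacing d, since above that the inter-mic delay can exceed half a period and the phase wraps. A quick back-of-envelope (assuming 343 m/s for the speed of sound):

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly room temperature

def max_unambiguous_freq(spacing_m):
    """Highest frequency with no phase-wrap ambiguity for a mic pair."""
    return SPEED_OF_SOUND / (2.0 * spacing_m)

for d in (0.02, 0.17, 0.50):
    print(f"{d * 100:4.0f} cm spacing -> unambiguous up to "
          f"~{max_unambiguous_freq(d):,.0f} Hz")
```

So a spaced pair half a meter apart is only unambiguous up to a few hundred Hz, which is why dense arrays (or closely spaced capsules) help so much at the top end.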
Adobe Audition actually has a spatial filtering tool. It’s been an effect in that software since the 90s. I don’t think they’ve made any changes to the algorithm since then, but it’s worth taking a look at.