Scoring Central

Multi-mic Recording
Hi everyone!

I figure it would be nice to have a little discussion. Multi-mic, the concept of having multiple, mixable mic positions available right in the final instrument, is popular and growing. However, some implementations are more successful than others.

I've written a short article with a few examples from my own projects on how I approach recording for a multi-mic library:
https://versilianstudios.wordpress.com/2...-sampling/

Do you use multi-mic in your own libraries?
How do you approach multi-mic?
What are the factors that make multi-mic implementation successful vs. unsuccessful?
Which instruments benefit most from being multi-mic'd?
If you're in a sub-par recording space, how do you balance avoiding bad room tone against having a diversity of mic positions?
Multi-mic stuff is fine, but I really tend to use the single mono (usually close) channel, especially in large ensemble contexts. Between early reflections in ValhallaRoom and Proximity (the Tokyo Dawn plugin), I just find it so much easier to place an instrument where I want it on the virtual sound stage.

Even when I do use the additional channels, I typically turn off the main mono channel to achieve some specific effect or sense of distance. The one place I do try to use the multi-mics as they come is with strings, especially when they're recorded in-situ and I'm using all sections from the same developer. To me it just has a more "natural" sound.

The thing is, I've heard work by people who use the multi-mic setup across the entire orchestra, and for those who really know what they're doing, it just sounds *fantastic*.

Sadly, however, I'm not yet one of those people...

Kurt
(07-01-2016, 04:42 AM)kmlandre Wrote: Multi-mic stuff is fine, but I really tend to use the single mono (usually close) channel, especially in large ensemble contexts. [...]

You should check out the work of Eduardo Tarilonte. Most of his sampling is (was?) done mono (see Era) and 'inflated' with convolution reverb! Effectively cuts RAM usage in half, while still producing a decent sound (although, you will notice after using it a lot, that it doesn't "sound like" other instruments).
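(For anyone curious what that mono-plus-convolution trick looks like in practice, here is a rough Python sketch. The file names are placeholders, it assumes a mono sample and a stereo impulse response at the same sample rate, and it leans on the soundfile and scipy packages; it is only an illustration, not how Tarilonte actually builds his libraries.)

[code]
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Placeholder files: a dry mono sample and a stereo hall impulse response.
dry, sr = sf.read("flute_C4_mono.wav")       # shape (N,)
ir, ir_sr = sf.read("hall_ir_stereo.wav")    # shape (M, 2)
assert sr == ir_sr, "resample the IR to the sample's rate first"

# Convolve the single dry channel with each IR channel to get a stereo tail.
wet = np.stack([fftconvolve(dry, ir[:, 0]),
                fftconvolve(dry, ir[:, 1])], axis=1)

# Keep the dry signal dominant and tuck a modest amount of wet under it.
out = 0.25 * wet
out[:len(dry), 0] += dry
out[:len(dry), 1] += dry

peak = np.max(np.abs(out))
sf.write("flute_C4_stereo.wav", out / peak if peak > 1.0 else out, sr)
[/code]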

Personally, I always sample and record in stereo, typically a spaced pair. I find the subtle phasing between the channels helps reveal the intricacies of the tone and the spaciousness in a way a single mono mic is simply incapable of, even when combined with brilliant reverb and manipulation. A close mic is nice (with EWQL SO I typically use mostly close and a little main, because it is so darned wet/reverberant), but I generally prefer 'mid' mics, or 'main' mics that are cardioid, so they don't pick up too much hall reverb but still get plenty of instrument and a sense of space.
(07-01-2016, 04:53 AM)Samulis Wrote: You should check out the work of Eduardo Tarilonte. Most of his sampling is (was?) done mono (see Era) and 'inflated' with convolution reverb! [...]

  I've got a few demos of his stuff and I really like it, though I've never been able to afford any of it.

(07-01-2016, 04:53 AM)Samulis Wrote: Personally, I always sample and record in stereo, typically a spaced pair. [...]

The trouble I have with stereo stuff (not philosophical, just skill-wise) is that the reverb tends to sound weird when I try to pan the instrument right or left. I've had some success running things into a mid/side decoder and then panning it, but it still seems to have a bit of that "doesn't 'sound like' other instruments" quality you mentioned above...
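(As a point of reference, the mid/side-decode-then-pan trick described here can be sketched roughly like this in Python; the file name, the width value, and the pan amount are all made-up placeholders, not a recipe from anyone in this thread.)

[code]
import numpy as np
import soundfile as sf

x, sr = sf.read("baked_in_stereo_patch.wav")   # placeholder stereo bounce, shape (N, 2)
left, right = x[:, 0], x[:, 1]

# Decode to mid/side and narrow the image before panning,
# so the baked-in room doesn't get dragged across the stage with the instrument.
mid = 0.5 * (left + right)
side = 0.5 * (left - right) * 0.5              # 0.5 = half the original width

# Re-encode to left/right, then apply an equal-power balance (-1 = hard left, +1 = hard right).
pan = 0.3
theta = (pan + 1.0) * np.pi / 4.0
out_l = np.cos(theta) * (mid + side)
out_r = np.sin(theta) * (mid - side)

sf.write("patch_narrowed_panned.wav", np.stack([out_l, out_r], axis=1), sr)
[/code]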

How do you deal with that and get things sitting properly on the sound stage?

Kurt
(07-01-2016, 10:10 PM)kmlandre Wrote: The trouble I have with stereo stuff (not philosophical, just skill-wise) is that the reverb tends to sound weird when I try to pan the instrument right or left. [...] How do you deal with that and get things sitting properly on the sound stage?

Well, I always try to use dry samples whenever possible, and I use as little reverb as possible: typically less than 20-30% at MOST. I would say 90% of bad mixing is simply too much reverb. The mix would sound fantastic with less, but people think they need to "make" a space with the reverb rather than "augment" a space.
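(One way to read that 20-30% figure, and this reading is my assumption rather than anything stated here, is as the level of the reverb return relative to the dry signal. In code it is nothing more than the following, with the function and the default amount being illustrative only.)

[code]
import numpy as np

def add_reverb(dry: np.ndarray, wet: np.ndarray, amount: float = 0.25) -> np.ndarray:
    """Tuck a reverb return under the dry signal; 'amount' stays modest (here 25%)."""
    n = min(len(dry), len(wet))
    # The dry signal stays at full level; the wet signal only augments the space.
    return dry[:n] + amount * wet[:n]
[/code]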

Basically, all the reverb should do is blur the layers together a little and provide a bit of "framing" to the outside edges of the mix. It's like anti-aliasing in graphics, if you are familiar with that concept: get rid of the jagged edges! Reverb should almost never be used on sources that already have reverb, even if they come from different spaces.

There are effectively infinite ways a recording can sound, depending on the mics, the space, and so on. It is important to have an idea of what sounds "good" and what sounds "bad", even when both sound "real".

For example, this is a real recording, but it sounds bad.
http://www.newgrounds.com/audio/listen/462542

Why? Well, first, the mics are too far away, so there is too much reverb. Believe it or not, this was recorded in the middle of a concert hall, which is where most people experience a concert from! Many people try to use this sort of position/reverb level in their songs, and you can instantly hear why it is not a good approach: all the lines and articulations are blended and smoothed together into a big muddy ball. Second, the stereo field is awkward (it's actually backwards too, but that's another issue), because it was recorded with a coincident array at a great distance; at a great distance, a spaced pair sounds more realistic to me. I hear so many tracks that sound like this, and people say, "wow, that's so epic sounding with all that reverb!"... all I can do is shake my head, hahaha.

Let's look at some more stems, again from the same hall, but this time with a closer array added: a spaced pair of LDCs (large diaphragm condensers), rather than XY SDCs (small diaphragm condensers) like the far pair.

Here's our initial "far" pair, very similar to the recording above: washy, indistinct... like the reverb of singing into a washtub. :D
https://instaud.io/rXI

And here is the "mid" pair (actually the very same mics used for 90% of VSCO's Mid/Main array, in the same pattern, but about 20 feet farther out into the hall):
https://instaud.io/rXK


So, the first step to removing the washtub sound from the far mics is widening the pair so that it feels more like wall/hall reflections. What we are going to do is use this array like a reverb signal (yes, that's right: all we did was record our own 'hard-baked' reverb signal).
https://instaud.io/rXJ
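(A rough sketch of that widening step, treating the far pair purely as a recorded reverb layer. The file names and the width amount are placeholders, and this is a plain mid/side width boost rather than whatever specific tool was actually used here.)

[code]
import numpy as np
import soundfile as sf

far, sr = sf.read("far_pair.wav")              # placeholder for the distant stereo stem

# Mid/side width boost: push energy toward the sides so the far pair
# reads as wall/hall reflections rather than a washy center image.
mid = 0.5 * (far[:, 0] + far[:, 1])
side = 0.5 * (far[:, 0] - far[:, 1]) * 1.6     # >1.0 widens

wide = np.stack([mid + side, mid - side], axis=1)

peak = np.max(np.abs(wide))
sf.write("far_pair_widened.wav", wide / peak if peak > 1.0 else wide, sr)
[/code]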

Now they can be blended to taste.
https://instaud.io/rXL

Notice that the final mix takes the bass-end reverberance of the far mics and combines it with the tight precision and detail of the close(r) mics. The biggest point is that the instruments in the dry mix are not panned that widely; only the reverberant signal is. I never have reverb at a width less than 100%, never sum to mono before reverb, and never ever use mono reverb. Doing otherwise is asking for mud: reverb has no place in the center of a mix. ;)
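(The blend itself is nothing exotic. Here is a sketch with placeholder file names and gains chosen purely to taste; it simply stacks the detailed stem on top of the widened far stem.)

[code]
import numpy as np
import soundfile as sf

detail, sr = sf.read("mid_pair.wav")            # the tight, precise stem
room, _ = sf.read("far_pair_widened.wav")       # the reverberant stem, kept at full width

n = min(len(detail), len(room))
# The detail layer carries the image; the room layer sits behind it and is never summed to mono.
mix = 1.0 * detail[:n] + 0.4 * room[:n]

peak = np.max(np.abs(mix))
sf.write("blend.wav", mix / peak if peak > 1.0 else mix, sr)
[/code]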

Another example with a choir, same technique, same mics, same concert:
Far (spread): https://instaud.io/rXP
Close: https://instaud.io/rXQ (hear how dry that is?)
Mix (+ spot mic on piano): https://instaud.io/rXR

As a final warning, here's what can happen with too much, ill-fitting reverb. The guy who created this mix literally said to me, by way of explanation, "but the reverb said 'hall' on it!"... that doesn't mean you can just throw it on without adjusting anything:
Raw audio (a mix of live performances (both stereo and mono recordings, very dry), VS plugins (same), and Spitfire Albion (super duper wet)): https://instaud.io/2WP
His mix: https://instaud.io/2WQ
My mix (with proper panning and more controlled reverb use, applied only to the dry instruments): https://instaud.io/1ms

His mix sounds like someone's camcorder at a school concert: it is confused, and focus and space are neither present nor apparent. You have no idea what is going on. I can promise you that if he turned down his reverb, it'd sound way better.

Speaking of panning, let's talk about that.

For the most part, I deal in small, light gestures. Many people like to deal in extremes, panning things over 50% one way or the other, but even that is not natural!!!

This track was recorded with two mics (coincidentally the same LDCs from before), about two feet apart and facing at roughly a 135-degree angle from each other, in the center of a rectangular classroom:
https://instaud.io/3KV

(reverb and some other mastering/mixing touches were added later)

You'll hear that even instruments placed far off to one side have reverberant components on the other side of the room. This is what hard panning always leaves out, and it is another leading cause of an unnatural sound, especially when panning mono sources (and another reason I use stereo pan whenever possible).
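(Here is a small sketch of that idea: pan the dry mono source only part of the way with an equal-power law, and let a full-width stereo reverb put some energy back on the far side. The decaying-noise source and impulse response are crude stand-ins so the snippet runs on its own; the pan and gain values are illustrative only.)

[code]
import numpy as np
from scipy.signal import fftconvolve

sr = 44100
rng = np.random.default_rng(0)

# Placeholder mono source: a short decaying noise burst standing in for an instrument.
dry = rng.standard_normal(sr) * np.exp(-np.linspace(0.0, 8.0, sr))

# Light equal-power pan, roughly 30% to the right.
pan = 0.3
theta = (pan + 1.0) * np.pi / 4.0
dry_l, dry_r = np.cos(theta) * dry, np.sin(theta) * dry

# Crude synthetic stereo reverb: decorrelated decaying noise per channel.
t = np.linspace(0.0, 1.5, int(1.5 * sr))
ir = rng.standard_normal((len(t), 2)) * np.exp(-3.0 * t)[:, None]
ir /= np.linalg.norm(ir, axis=0)               # normalize each channel so the wet gain is meaningful
wet_l = fftconvolve(dry, ir[:, 0])[:len(dry)]
wet_r = fftconvolve(dry, ir[:, 1])[:len(dry)]

# The dry image leans right, but the full-width reverb keeps the left side of the room alive.
mix = np.stack([dry_l + 0.2 * wet_l, dry_r + 0.2 * wet_r], axis=1)
[/code]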

The most important part of realistic panning is achieving balance.

Compare these two mixes:
https://instaud.io/gBn
https://instaud.io/hIh

You'll notice how the first mix feels too full: too much reverb, and the cornets sound bad doubled up like that. So what did I change? Less reverb, and the 2nd cornets panned over to the right so that they BALANCE and complement the 1st cornets, who are on the left.
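(Not something from the post itself, just a small aside: a quick way to sanity-check that kind of left/right balance is to compare the per-channel RMS of the rendered mix. The file name is a placeholder.)

[code]
import numpy as np
import soundfile as sf

mix, sr = sf.read("full_mix.wav")               # placeholder stereo bounce, shape (N, 2)

rms = np.sqrt(np.mean(mix ** 2, axis=0))        # per-channel RMS
balance_db = 20.0 * np.log10(rms[0] / rms[1])   # positive means the mix leans left

print(f"L/R balance: {balance_db:+.2f} dB")
[/code]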

Here's another brass case, this time a real ensemble (you've heard this one before):
https://soundcloud.com/samulis/gossnerquintet1

This time the bass voice (tuba) is in the middle, with the trombone slightly left, the horn far left, the 1st trumpet slightly right, and the 2nd trumpet far right. This puts the top (melody) line right next to the bass (good for intonation, for one thing) and keeps the trumpets together so they can play off of each other (although an interesting (and very cool) alternative would be to swap the 1st trumpet and the horn). Each voice is far enough from the others to have its own unique place in the stereo field, but not so far that it feels isolated or lost, so the harmonies can still blend. By the way, this is the first recording I've dug up that uses only a coincident (XY) array, no spaced pair (fun fact).

Well, if I write any more, I will probably break Mattias' nice forum here, so I will stop now. Essentially, the gist of the approach is: (1) use less reverb than you think you need, (2) use less pan than you think you need, and (3) think of the framing of the work: reverb goes on the outside, holding everything together and augmenting the pre-existing sense of space, not creating it; dry audio goes on the inside, providing the punch and the clarity. When one tries to "make" a space using reverb and tools of all sorts, it is like trying to fit a round peg into a square hole: something gets lost in the translation. :)
Thanks for the thoughtful answer!

Well,  that's going to take some time to digest.   Let me do that and get back to you in about two weeks... ;-)

Kurt
(07-02-2016, 01:44 AM)kmlandre Wrote: Thanks for the thoughtful answer! [...]

Water can help digestion too. ;D
(07-02-2016, 01:50 AM)Samulis Wrote: Water can help digestion too. ;D

WATER?!?  WATER?!?!

I DON'T NEED NO STINKIN' WATER!!

I NEEDS ME SOME WHISKEY!! ;-)

K
(07-02-2016, 06:22 AM)kmlandre Wrote: [...] I NEEDS ME SOME WHISKEY!! ;-)

I wouldn't mind that myself... just spent three hours making all the .sfz patches of VSCO: CE.
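(For anyone who hasn't built .sfz patches before, the format is just plain text regions mapping samples to keys. A tiny Python sketch of one articulation looks like this; the sample names and values are made up, not actual VSCO: CE content.)

[code]
# Write a minimal one-articulation .sfz patch: one <region> per sample,
# with key ranges split halfway between neighboring root notes.
notes = [("flute_C4.wav", 60), ("flute_E4.wav", 64), ("flute_G4.wav", 67)]
centers = [c for _, c in notes]

lines = ["<group> ampeg_release=0.4"]
for i, (sample, center) in enumerate(notes):
    lo = (centers[i - 1] + center) // 2 + 1 if i > 0 else center - 2
    hi = (center + centers[i + 1]) // 2 if i + 1 < len(notes) else center + 2
    lines.append(f"<region> sample={sample} pitch_keycenter={center} lokey={lo} hikey={hi}")

with open("flute_sustain.sfz", "w") as f:
    f.write("\n".join(lines) + "\n")
[/code]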