
Recording Classical Piano Part 4 - Mixing In Stereo

I was recently asked to record two pieces of piano music played by my friend Jason Hardink (pianist with the Utah Symphony). In this article, the fourth of five, I’m going to describe the process of making a stereo mix.

Mixing in classical music, and especially in chamber or solo music, isn’t at all like the process of mixing popular music of any form. Mixing in pop (the generic term I use) is absolutely part of the creative process: placement, EQ and effects are all part of that vocabulary. Of course you mix in classical as well, but not as dynamically. We trust that the player got it right: dynamics, timbre and all. So our job is first and foremost not to leave any evidence we were ever there. Mixing for a solo instrument or small chamber group is generally pretty static: once you’ve got the balance, levels and positioning, you shouldn’t really need to do anything.

It’s different for an orchestra. There’s no such thing as a “true” vantage. The players are spread over a large space and no static balance is ever going to work perfectly. There are spots (accent mics) that may need to come in and out. Reverberation levels may need to change as the music moves along. While you could argue that certain changes in balance might contribute something artistically even for solo piano, I felt that Jason had all of that under his fingers.

What do I want out of a stereo mix?

Some of the answers are obvious: clarity, musicality, reasonable mono compatibility. Another thing that’s important to me is robust width. I really do dislike the notion of the ‘sweet spot’. Of course, there’s a place where room calibration and speaker positioning are targeted, and the mixer has to work there. But many mixes collapse if the listener moves off the sweet spot. There need to be decorrelated time differences across the stereo plane so that people around the room have an enveloping experience.

I mixed in the same session that I used for editing. The only difference is that I consolidated (rendered) the tracks so that they no longer had edits I might mistakenly pull apart. I am still working at 192 kHz with all files in floating point. With that, let’s look at the mix window and then a few of the plugins I used. The dark blue tracks on the left are the tracks on disk. They’re all routed to aux channels (purple) for mixing, which simplifies converting this mix to Atmos (the subject of the next article). Here you can see that my A/B pair (flanking omnis) and room mics are panned hard left and right. These give me the robust width I need. The “Decca pair” is spread just a little less, and the close mid/side pair gives me focus and detail, along with good mono if needed.

Here’s the mix window, showing all the tracks. For most Pro Tools users, this will be a very simple mix. There’s one thing here you might find interesting. The M/S decoded output goes to the 5th channel (as seen in the graphic above). You should notice that I have the left channel panned to the right and the right channel panned to the left, with both channels toed in. If you take a look at the video, you’ll see that my M/S mics actually pick up more of the treble strings on the left side and the bass on the right. This is not an exact low-to-high stereo image because the strings in the piano cross, but high notes predominated on the left side and low notes on the right. I decided the listener would enjoy a vantage similar to the player’s, so I simply flipped the channels. Panning hard left and right gave me too much separation; toeing both sides in toward center gave a more natural sound and also gave some of the natural mixing that comes from the piano lid.
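To make the flip-and-toe-in idea concrete, here is a minimal sketch in Python of how it could be expressed as a pair of constant-power pan gains. The pan positions (±0.6) and function names are purely illustrative; they are not the settings used in the Pro Tools session.

```python
import numpy as np

def pan_gains(pos):
    """Constant-power pan gains for pos in [-1, +1]; -1 = hard left, +1 = hard right."""
    theta = (pos + 1.0) * np.pi / 4.0
    return np.cos(theta), np.sin(theta)

def flip_and_toe_in(ms_left, ms_right, toe=0.6):
    """Swap the decoded M/S channels and pan each one partway toward center.

    ms_left / ms_right are mono numpy arrays from the M/S decoder.
    toe=0.6 is hypothetical: it keeps some separation without hard panning."""
    lL, lR = pan_gains(+toe)   # original left channel ends up mostly on the right
    rL, rR = pan_gains(-toe)   # original right channel ends up mostly on the left
    out_left = ms_left * lL + ms_right * rL
    out_right = ms_left * lR + ms_right * rR
    return out_left, out_right
```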

Now let’s look at the plugins I used.

M/S decoding is easy to do with just a bit of bussing in Pro Tools (or any other DAW), but it’s even easier with a plugin. This free decoder from Voxengo is a handy tool to have. It takes a two-channel input (assumed to be mid and side) and gives you a decoded left/right output.
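Under the hood this is just the classic sum-and-difference matrix. A minimal sketch, assuming the mid and side signals are already available as arrays (the exact level-trim convention varies from tool to tool):

```python
def ms_decode(mid, side, side_gain=1.0):
    """Decode a mid/side pair to left/right: L = M + S, R = M - S.

    side_gain widens (>1) or narrows (<1) the image; some tools also apply
    an overall -3 or -6 dB trim, so treat this as a sketch, not the plugin."""
    left = mid + side_gain * side
    right = mid - side_gain * side
    return left, right
```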

I’d mentioned that the room had a little whistle around 16 kHz. It was quite low in level and probably not audible, but it bugged me. Rather than going after it in RX, I simply used a very tight, targeted notch. Neutron 3 gave me great visual feedback so I could see what I was doing. Notice that I did this on the individual tracks rather than on the final mix. That left some room for the reverb to fill any gaps (I doubt there were many to begin with). The tamed whistle sits well under -100 dBFS.
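For readers who want to experiment outside the DAW, a very narrow notch like this can be sketched with SciPy. This is not the Neutron 3 processing itself, and the Q value is only an illustration of “very tight”:

```python
from scipy.signal import iirnotch, filtfilt

def notch_whistle(x, fs=192000, f0=16000.0, q=60.0):
    """Apply a very narrow notch at f0 Hz.

    q=60 gives a bandwidth of roughly f0/q (about 270 Hz here); filtfilt
    runs the filter forward and backward so the notch adds no phase shift."""
    b, a = iirnotch(f0, q, fs=fs)
    return filtfilt(b, a, x)
```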

In the first installment I mentioned that I used an omni pair spaced about 10” apart, off the tail of the piano. This is a technique that was often used by Decca and gives a nice sense of spaciousness. The off-axis pair has a slightly greater spacing than ORTF, while the piano itself is pretty close to omni. It was hard for me to find a place for this pair in the mix, but I tried something that worked out pretty well. In iZotope’s Ozone suite there’s a little tool called the Imager. My best guess is that it looks for intensity differences between the channels and converts them into a time delta.

Imager gave me greater and more robust width while still leaving the central piano with only very minor time differences. Mono testing gave me very satisfactory results.
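Since the article only guesses at what Imager does internally, here is a deliberately generic width sketch instead: a plain mid/side side-gain widener. It illustrates what “more width” means numerically but makes no claim about iZotope’s actual algorithm, and the width value is arbitrary.

```python
def widen(left, right, width=1.3):
    """Generic stereo widener: width > 1 widens, < 1 narrows, 1.0 leaves it alone.

    Works by boosting the side (L-R) component relative to the mid (L+R);
    width=1.3 is an arbitrary illustration, not a setting from this session."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side
```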

The hall itself sounds very nice, but I wanted to extend the tails a little. RX can dry things up as well, so I wanted to correct for that too. Even though this is a stereo mix, I used Stratus 3D so I could make a preset that would also work in Atmos. The reverb level is about 12 dB down from the dry audio.
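As a quick sanity check on what “12 dB down” means in linear terms: 10^(-12/20) is roughly 0.25, so the reverb return sits at about a quarter of the dry amplitude. A one-line sketch with hypothetical names:

```python
def add_reverb_return(dry, wet, wet_db=-12.0):
    """Sum a reverb return under the dry signal at a relative level in dB.

    -12 dB -> 10 ** (-12 / 20) ~= 0.25, about a quarter of the dry amplitude."""
    return dry + wet * 10 ** (wet_db / 20.0)
```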

Mastering

Once I’d completed the mix, it was time for a little mastering. There are some dangers in doing your own mastering, so it’s important to take a breather at the end of the mix and put on a new hat. Since this was a project targeted toward video, I needed to make sure that the overall level (as measured in LUFS or LKFS) met generally accepted standards. I aimed for an integrated level of around -16 LKFS. These pieces were both quite dynamic, so I needed some gentle limiting and compression to get there.
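If you want to double-check an integrated loudness number outside your DAW and metering plugins, one option (my assumption, not a tool mentioned in the article) is the pyloudnorm package, which implements the ITU-R BS.1770 measurement that LUFS/LKFS is based on. The filename below is hypothetical:

```python
import soundfile as sf       # reads the bounced mix into a (frames, channels) array
import pyloudnorm as pyln    # BS.1770-style loudness meter

audio, rate = sf.read("stereo_mix.wav")        # hypothetical filename
meter = pyln.Meter(rate)                       # K-weighted integrated loudness
loudness = meter.integrated_loudness(audio)
print(f"Integrated loudness: {loudness:.1f} LUFS (target around -16)")
```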

Ozone is a great tool that’s relatively easy to begin using. Here’s the EQ panel, showing a couple of very minor tweaks. In mastering, the smallest changes can make a very big difference. In the dynamics section I applied very gentle compression (1.7:1) below -10 dB.
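To see what a 1.7:1 ratio actually does to levels, here is a static gain-computer sketch. It assumes a conventional downward compressor with -10 dB as the threshold (one reading of the “below -10 dB” setting above) and ignores attack and release, so it is not a model of Ozone’s dynamics module.

```python
def compressed_level_db(level_db, threshold_db=-10.0, ratio=1.7):
    """Static curve of a downward compressor.

    Levels at or below the threshold pass unchanged; above it, the overshoot
    is divided by the ratio. E.g. a -4 dB input comes out near -6.5 dB."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio
```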

At the very end of the process, I use Insight 2, which measures power, imaging, maximum levels and so on. Over the course of the entire mix, I aimed for an integrated loudness between -16 and -17 LKFS. This puts the mix into a range that works on desktop, tablet or phone without sounding manipulated. Insight is a passive measurement tool: the actual adjustments are made in Ozone.

To finish up, I did an offline bounce to a 48 kHz sampling rate with a 24-bit word size. This went over to Ashkan for final muxing into the video, which converted the audio to AAC. Unfortunately, lossy compression is the standard for video streaming, but by keeping the mix at the highest possible standard along the way, we preserve as much as possible.
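Since 192 kHz to 48 kHz is an exact 4:1 ratio, the sample-rate conversion in that bounce amounts to a straightforward polyphase decimation. Here is a sketch of the equivalent step outside Pro Tools, using SciPy and soundfile with hypothetical filenames (the actual bounce was done in the DAW):

```python
import soundfile as sf
from scipy.signal import resample_poly

audio, rate = sf.read("stereo_mix_192k.wav")       # hypothetical source file at 192 kHz

# 192 kHz -> 48 kHz is exactly 4:1; resample_poly applies its own anti-alias filter.
audio_48k = resample_poly(audio, up=1, down=4, axis=0)

# Deliver as 24-bit PCM for muxing into the video.
sf.write("stereo_mix_48k.wav", audio_48k, 48000, subtype="PCM_24")
```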

In the final segment I’ll talk about mixing in Atmos. Look for it in about a week. In the meantime, you can watch the videos now. The Eckardt piece is here and the Roens piece is here.
