Production Expert


How To Get Consistent Podcast Dialogue Fast

This is Partner Content

In Summary

Podcast editors often wrestle with a number of disparate-sounding voices, and getting one tonal signature for the whole programme can take time. Here we use an intelligent tool to blend guest audio seamlessly in one window.

Going Deeper

Many Voices, One Show

Engineers understand that all human voices are naturally different, but spoken word content can throw up extra challenges not encountered in other types of programme. In the world of remote recording, voices don’t have the ‘family sound’ of being captured in the same studio, with all the commonality that affords. Aside from varying gear, guests’ technique on the mic can make their sounds diverge even further, laying any shortcomings bare. On top of that, guests (hopefully) take it in turns to speak, with little or nothing going on underneath to distract the listener. All in all, recording and mixing multiple voices for a spoken word programme can be a tall order.

Using Manual Remedies

There are a few things dialogue editors can do to tie voices together using conventional tools. Above all else, setting levels and writing any automation can get a working balance straight away. This might be followed by some very light compression or limiting across the top few dB of each guest to bring the mix further together.
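To make the manual approach concrete, here is a minimal sketch of those two conventional steps in Python with numpy: gaining each voice to a common RMS level, then gently squashing only the top few dB with a soft limiter. The function names, target values and tanh-based limiter curve are illustrative assumptions, not any particular product's processing.

```python
import numpy as np

def match_levels(voice: np.ndarray, target_rms_db: float = -20.0) -> np.ndarray:
    """Gain a mono voice track so its RMS level hits a shared target (dBFS)."""
    rms = np.sqrt(np.mean(voice ** 2))
    gain = 10 ** (target_rms_db / 20) / max(rms, 1e-12)
    return voice * gain

def soft_limit(voice: np.ndarray, threshold_db: float = -3.0) -> np.ndarray:
    """Act only on the top few dB: samples above the threshold are bent back
    with a tanh curve; everything below the threshold passes untouched."""
    t = 10 ** (threshold_db / 20)
    out = voice.copy()
    over = np.abs(voice) > t
    out[over] = np.sign(voice[over]) * (
        t + (1 - t) * np.tanh((np.abs(voice[over]) - t) / (1 - t))
    )
    return out
```

Because the limiter leaves everything under the threshold alone, a well-set level stage does most of the work and the limiter only catches the odd hot syllable.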

However, getting a consistent tonal balance across every voice with EQ takes skill and experience to achieve, and it can eat up time that dialogue editors don’t always have.

Using Accentize SpectralBalance

In the video, we check out Accentize SpectralBalance to automatically bring the tonality of voices together. This is an intelligent dynamic EQ that has been engineered to give voices a common spectral fingerprint. By continuously adapting its own correction curve, a single instance of SpectralBalance can sit across a dialogue bus. With all voices processed through the same instance, podcast editors can instantly make disparate sources sound far more homogenised. Alternatively, users can go for one instance per voice for more bespoke treatments.

In a typical scenario, we see what it can do across three voices using Dynamic Mode for podcast dialogue. Although the voices are well recorded, there are inevitable differences in timbre that could benefit from a tonal workout with SpectralBalance. We also talk about how SpectralBalance’s Static Mode can be deployed to freeze its response, or to form the basis of custom targets that can be re-used as needed.
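The underlying idea of nudging several voices toward a common spectral fingerprint can be sketched in a few lines of numpy. This is not Accentize's algorithm, just an illustrative assumption of the general technique: measure each voice's long-term average spectrum, average those into a shared target, and derive a smoothed, clamped per-bin correction curve for each voice.

```python
import numpy as np

def average_spectrum(voice: np.ndarray, n_fft: int = 2048, hop: int = 512) -> np.ndarray:
    """Long-term average magnitude spectrum of a mono track, in dB per FFT bin."""
    win = np.hanning(n_fft)
    frames = [voice[i:i + n_fft] * win for i in range(0, len(voice) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return 20 * np.log10(np.mean(mags, axis=0) + 1e-12)

def correction_curve(voice_db: np.ndarray, target_db: np.ndarray,
                     max_gain_db: float = 6.0, smooth: int = 9) -> np.ndarray:
    """Per-bin EQ gain (dB) pushing one voice toward the shared target,
    smoothed across neighbouring bins and clamped so corrections stay gentle."""
    diff = target_db - voice_db
    kernel = np.ones(smooth) / smooth
    diff = np.convolve(diff, kernel, mode="same")
    return np.clip(diff, -max_gain_db, max_gain_db)
```

A dynamic processor would re-measure the spectra continuously and re-derive the curves many times per second; this static version only captures the one-shot matching step.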

SpectralBalance Key Features And Benefits:

Time Saver: Drastically reduces the time spent repeatedly listening to dialogue takes and finding the appropriate EQ settings. The automatic equalisation is ready in seconds.

Dynamic Mode: Internal algorithms listen and, if necessary, adapt the processing over 50 times per second. The adaptation amount and speed can be precisely controlled by custom parameterisation.

Custom EQ Targets: Can listen to target audio and automatically create an individual target EQ curve. Suitable for EQ matching tasks such as podcasts or ADR.

Efficient Processing: Artificial neural networks are optimised to keep the required processing power to a minimum. Accentize claim an average workstation can run multiple instances in parallel with ease.
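The Static Mode and Custom EQ Targets features above amount to freezing a correction curve and re-applying it to other takes. As a hedged sketch of what applying such a frozen per-bin gain curve could look like (again an illustrative assumption, not the plugin's implementation), here is a simple overlap-add STFT filter in numpy:

```python
import numpy as np

def apply_static_eq(voice: np.ndarray, curve_db: np.ndarray,
                    n_fft: int = 2048, hop: int = 512) -> np.ndarray:
    """Apply a fixed per-bin gain curve (in dB, one value per rfft bin)
    via windowed overlap-add filtering - a 'frozen' EQ response that can
    be re-used across takes."""
    win = np.hanning(n_fft)
    gains = 10 ** (curve_db / 20)
    out = np.zeros(len(voice))
    norm = np.zeros(len(voice))
    for i in range(0, len(voice) - n_fft, hop):
        spec = np.fft.rfft(voice[i:i + n_fft] * win) * gains
        out[i:i + n_fft] += np.fft.irfft(spec, n=n_fft) * win
        norm[i:i + n_fft] += win ** 2
    # divide out the summed window energy so fully covered samples reconstruct
    return out / np.maximum(norm, 1e-12)
```

With a flat 0 dB curve this reconstructs the input exactly wherever frames fully overlap, which is a handy sanity check before loading a real correction curve.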
