I’m currently working tirelessly in the studio, and on other digital workstations, polishing off the sound design for an animation. As this is my first audio post-production task, without a band in sight, I’ve been somewhat surprised by how many sonic components transfer across. As a general rule, the basics to watch out for in both live band recording and audio post-production are timekeeping, acoustic environments, blend, volume, and overall balance. But the thing that has exasperated me this week has been EQing and making good use of the sonic spectrum. Although equalising has never been my strongest attribute in the audio world, I’d like to say I’m pretty confident balancing out a standard live recording; however, present me with ADR, foley, FX, atmospheric sounds and music within a 5.1 mix-down, and I start to panic. But never underestimate a woman on a mission: I have decided to break down the sonic spectrum, for those who are new to the audio game and those who want to develop a good contrast between sounds.
Below is the audio spectrum; humans can hear between 20 Hz and 20 kHz, a range that gradually decreases with age. Although pitch perception isn’t linear, we recognise octave changes wherever a frequency is doubled or halved. This still leaves us a large scope to work with, but it also means that sounds sitting in different but close frequency bands can easily mask each other or sound messy:
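As a quick illustration of the doubling rule, here’s a small Python sketch (the 440 Hz starting pitch is concert A, chosen purely as an example) that lists every octave of a note falling inside the roughly 20 Hz to 20 kHz audible range:

```python
# Octaves are heard wherever a frequency is doubled or halved.
# Starting from a reference pitch, walk down (halving) and up
# (doubling) until we leave the audible range.

def octaves(start_hz: float, low: float = 20.0, high: float = 20_000.0) -> list[float]:
    """Return all doublings/halvings of start_hz within [low, high]."""
    freqs = []
    f = start_hz
    while f >= low:           # walk down in octaves
        freqs.append(f)
        f /= 2
    f = start_hz * 2
    while f <= high:          # walk up in octaves
        freqs.append(f)
        f *= 2
    return sorted(freqs)

if __name__ == "__main__":
    # Concert A (440 Hz) and its audible octaves:
    print(octaves(440.0))
    # → [27.5, 55.0, 110.0, 220.0, 440.0, 880.0, 1760.0, 3520.0, 7040.0, 14080.0]
```

Each neighbouring pair in that list differs by exactly one octave (a 2:1 ratio), which is why a ten-octave span still fits comfortably inside our hearing range.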