Audio Engineering is dependent upon the key personnel – the engineers – understanding the fundamentals. Signal Flow is one of the core fundamentals of audio engineering. Whatever the studio console, an experienced engineer who understands Signal Flow will adapt quickly to any studio environment, whether its equipment makeup is analogue, digital or digital virtual.
At a very elementary level, there are two phases in Audio Engineering:
The Recording phase, as part of the Production process. [NB: the Recording process is also referred to as tracking]
The Mixing phase, as part of the Post-Production process
For the purposes of introducing aspiring audio engineers to the two phases, I am going to refer to the stages of the process as Stage 1, Stage 2, Stage 3 and so on. To commence this introduction, this week we are going to look at the three stages of the Recording phase.
In a simple signal flow scenario, a single sound is generated by a sound source and captured by a transducer, which converts the acoustic energy into electrical energy. The electrical signal is then passed down a microphone cable towards the single-channel console.
Where there is an eight-channel console or mixing desk, the engineer has the opportunity to capture eight sound sources simultaneously via eight transducers, each converting its acoustic sound source into electrical energy. The eight electrical signals are then passed down eight microphone cables towards the eight-channel console.
This Stage 1 Signal Flow is known as the Channel Path, as the sound source is being routed down the Channel Strip on the console.
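The Channel Path can be pictured as a simple chain from source, to transducer, to channel strip. The sketch below is a toy model of that routing, purely illustrative: the class and function names are hypothetical, not any real console's terminology or API.

```python
from dataclasses import dataclass

# Illustrative only: a toy model of Stage 1 Signal Flow (the Channel Path)
# on a multi-channel console. Names here are hypothetical.
@dataclass
class ChannelStrip:
    number: int   # console channel number, e.g. 1-8
    source: str   # acoustic sound source, e.g. "kick drum"
    mic: str      # transducer capturing that source

def channel_path(strips):
    """Trace each sound source down its channel strip (Stage 1)."""
    return [f"{s.source} -> {s.mic} -> channel {s.number}" for s in strips]

strips = [ChannelStrip(1, "kick drum", "dynamic mic"),
          ChannelStrip(2, "vocal", "condenser mic")]
for route in channel_path(strips):
    print(route)
```

The point of the model is the one-to-one routing: each source travels down its own strip, independently of the other seven.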
Note: On a console that is designed primarily for recording/tracking, such as a Neve C75 or a Behringer Eurodesk SX4882, the Channel Path will by default be routed via the large/primary faders, and the Monitor Path will by default be routed via the small/Mix B faders or pots.
In contrast: On a console that is designed primarily for mixing, such as an Audient ASP8024 or an ASP4816, the Channel Path will by default be routed via the small/secondary faders, and the Monitor Path will by default be routed via the large/primary faders.
Of course, one of the advantages of an in-line console is the flexibility of its routing options: being able to switch which Path is routed to which faders as needed. Therefore, my references to the Channel and Monitor Paths and their respective faders are based on a console designed primarily for recording/tracking.
In the second stage – Stage 2 Signal Flow – the signal is routed from the console to a device that is capable of recording it. Traditionally, this device was a magnetic tape machine. In the current era, it has been largely replaced by a digital virtual tape device: a computer running an audio-specific digital application such as Pro Tools, now known as a Digital Audio Workstation (DAW). For the analogue signal to flow successfully from the console to the digital virtual tape device, it must pass through an interface converting the analogue signal to a digital signal (an A/D converter).
Of course, the signal from the tape device has to return to the console.
In the third stage – Stage 3 Signal Flow, the signal is routed back from the tape device to the console.
This Stage 3 Signal Flow is known as the Monitor Path, as the sound source is routed through the secondary faders, or Mix B, of the console for the purposes of monitoring the signal. This signal is known as post-tape (i.e. after the tape device). Post-tape monitoring is considered the normal monitoring mode for recording/tracking.
The final part of the recording signal flow is to add headphone monitoring in the live room for the artist, so that they can hear themselves as they are being recorded.
This headphone mix is generally routed from the console's auxiliary functions, allowing an independent stereo mix of the eight channels being recorded/tracked. The industry standard for a headphone mix is 'post-tape', 'pre-fader'. Once the headphone mix has been established, it can be tested via the talkback function on the console.
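Why 'pre-fader' matters can be shown with a small sketch: a pre-fader auxiliary send is taken before the channel fader, so the engineer can move faders in the control room without changing the artist's headphone mix. The gain values below are hypothetical linear levels, not calibrated dB.

```python
# Illustrative sketch of a pre-fader vs post-fader auxiliary send.
# All level values are hypothetical linear gains (1.0 = unity).
def headphone_send(input_level, aux_send, channel_fader, pre_fader=True):
    """Return the level this channel contributes to the headphone bus."""
    if pre_fader:
        # Pre-fader: the channel fader is bypassed entirely
        return input_level * aux_send
    # Post-fader: the fader scales the signal before the send
    return input_level * channel_fader * aux_send

# With the channel fader pulled all the way down, a pre-fader send
# still feeds the artist's headphones; a post-fader send goes silent.
print(headphone_send(1.0, 0.8, channel_fader=0.0, pre_fader=True))   # 0.8
print(headphone_send(1.0, 0.8, channel_fader=0.0, pre_fader=False))  # 0.0
```

This independence is exactly what makes pre-fader sends the standard choice for a tracking headphone mix.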
Establishing immediate communication between the engineer in the control room and the artist in the live room is essential in a studio environment. Therefore, achieving signal flow across the three stages, a headphone mix and talkback needs to be the focus of any tracking session, and accomplished as quickly as possible at the beginning of the session.
Audio Engineering is an enjoyable, technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals. Signal Flow is one of the core fundamentals. Understanding and practising these three stages of the Recording phase's Signal Flow until they are committed to muscle memory is essential to the development of the aspiring engineer.
Next week we will develop our fundamental Signal Flow knowledge to the Mixing phase (Stage 3, Stage 4 and Stage 5) [June 2013].
Images courtesy of David L Page. Accessed 2nd June, 2013.
With over 20 years' experience in the arts and post-compulsory education, David has lived, studied and worked internationally, including in Japan, India, Fiji, the US and NZ.
David has extensive interests, as reflected in the many blogs hosted on his site (see below).
Additionally, David has published in both lay texts and academic (peer-reviewed) publications.