Pro Tools 12.4 update

Pro Tools 12 logo
The Pro Tools 12.4 update is a minor update relative to Pro Tools 11, which saw a substantial change to the operating platform it was built upon (from 32-bit to 64-bit). That change had a significant positive impact on the operating efficiency of the DAW; but unfortunately, it also had a significant impact on all users, forcing them to wait on 64-bit updates of their third-party plug-ins. Some plug-in developers were very slow to respond with those upgrades, despite having been given a lead time of several years by the DAW manufacturers (Avid and Apple, to name two). I for one was disappointed at how few third-party developers were ready, and how long many took to finalise their updates. Some to this day have still not upgraded, forcing me to sacrifice my investment in their products and repurchase from alternative 64-bit providers.
However, as that was about twenty months ago, it is now time to welcome Pro Tools 12.

Changes with Pro Tools 12.4

The most exciting change from a Pro Tools user’s perspective is that Pro Tools 12.4 now allows:
  • 128 tracks of audio and 512 instrument tracks (in Pro Tools 12 native)
  • track input monitoring (in Pro Tools 12 native)
  • advanced metering (in Pro Tools 12 native)
  • AFL/PFL solo mode option
  • MIDI to audio commit
However, there are more systemic changes in Pro Tools 12.4 – four levels of changes in fact – that can result in greater efficiencies for Pro Tools users.
The first level of changes include:
  • Pro Tools subscriptions
  • AVID apps manager
  • In application plug-in purchase
Pro Tools 12.4 comes with the option of purchasing or subscribing to Pro Tools in a variety of ways, to accommodate the user’s particular budget. Additionally, Pro Tools now offers a free light version for those aspiring Pro Tools users who either do not have the funds yet, or want to trial Pro Tools prior to purchasing it.
The AVID apps manager is a great innovation that allows your Pro Tools application and your AVID-based plug-ins to be automatically updated when you start up Pro Tools (providing you are connected to the internet). I have found that it is best to let the AVID apps manager complete its updates prior to beginning a session.
PT12 Application Manager.png
The in-application plug-in purchase feature of Pro Tools 12.4 allows you to purchase or rent AVID plug-ins from within your DAW session. This is particularly useful when you are working with other Pro Tools users who send you their session to complete a task, but have used certain AVID plug-ins that you do not own. Via the session drop-down menus, you can rent or purchase the particular plug-in your peer has included in the session, for the period you need it.
The second level of changes include:
  • the use of templates for new sessions
  • new blank sessions
  • opening recent sessions
  • showing or hiding the dashboard at start-up
The new dashboard window in Pro Tools 12 replaces the Quick Start window of Pro Tools 11 and earlier. Whilst it is similar to the Quick Start window in function, the new layout has two tabs – Create and Recent.
The Create tab allows you to create a new Pro Tools session, with the choice of creating a session from a template or opening a new blank session, selecting the session parameters that you need.
Some of the template sessions include Blues, Drum and Bass, and Dubstep.
PTs Dashboard
Additionally, under the Recent tab, you can choose to open one of your recent sessions.
This dashboard window can be bypassed at start-up by de-selecting the check box in the bottom left-hand corner.
The third level of changes include:
  • metadata inspector window
The metadata inspector window allows you to update specific details regarding a Pro Tools session, such as the title, the artist’s name, contributors, and session location. Other information, such as the sample rate, bit depth, date created, date modified and session bpm, is also listed in this window but cannot be edited.
The fourth level of changes include a range of I/O Setup Improvements. These are:
  • changes to the output and Bus pages
  • unlimited Bus Paths
  • subpaths for output paths
  • downmix and upmix output busses to outputs
  • monitor path
  • session interchange and I/O mapping
  • using keyboard modifiers when enabling or assigning output busses
  • audition path improvements
  • AFL/PFL path improvements
  • restore from session
  • I/O Settings files automatically created and reconciled for different playback engines
  • organise track I/O menus by preference
  • importing I/O settings from session files
  • I/O Setup details in session notes
Pro Tools 12.4 introduces unlimited bus paths. 24 bus paths are created by default, but additional paths can be added.
Pro Tools 12 IO Setting window
The I/O Setup mappings are saved to both the system and the session, allowing for quick adaptation when opening a session created with different I/O hardware. Pro Tools 12.4 allows the user to import I/O Settings from either a session file (.ptx) or an I/O Settings file (.pio), and then open the session in either the original I/O Setting configuration or the newly mapped one. Routing subpaths for outputs is now possible in the Setup > I/O window, as is the capacity to quickly downmix or upmix by mapping to alternative outputs when you play back your session on a different interface or console from the one you originally mixed on. Additionally, both the Monitor Path and the Audition Path will be automatically mapped when you play back your session on an alternative system. These changes enable greater mobility of Pro Tools sessions across multiple users and locations.
In Pro Tools 12 HD, AFL/PFL path improvements allow any available output path to be used; for mismatched channel widths, the path is automatically downmixed or upmixed to the selected channel width.
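As a rough illustration of that width-matching step, the sketch below downmixes a stereo path to mono and upmixes a mono path to stereo. The -3 dB pad on the mono sum is a hypothetical choice to avoid centre build-up; Avid’s actual downmix law is not documented here.

```python
def downmix_stereo_to_mono(left, right, pad_db=-3.0):
    """Sum a stereo pair to mono, padded (here by a hypothetical
    -3 dB) so summed centre content does not build up."""
    gain = 10 ** (pad_db / 20.0)
    return [gain * (l + r) for l, r in zip(left, right)]

def upmix_mono_to_stereo(mono):
    """Feed a mono path equally to both sides of a stereo output."""
    return list(mono), list(mono)

mono = downmix_stereo_to_mono([0.5, -0.25], [0.5, 0.25])
left_up, right_up = upmix_mono_to_stereo(mono)
```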
References
AVID. 2015.  What’s new in Pro Tools version 12.4  New York: AVID
All other images courtesy of David L Page  Accessed 12th December, 2015
– ©David L Page 15/12/2015
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Post-Production Instrumental Editing and Processing Options

MIDAS Console_looking left
As a mix engineer, you will at some point receive a tracking session in which you appraise the instrumental elements as being in need of some work: perhaps subtle work, or perhaps extensive work. There is a very large range of accessible resources available to the practitioner for doing this.
However, what you need to do as the mix engineer at that point in time is make a quick decision: what extent of post-production instrumental editing or processing is required in order to achieve the desired musical or sonic effect for this production project? In this example I will focus on one of the essential instruments in contemporary music – the central element of the rhythm section – the drums. However, most of the options I cover below can be applied to other instrumental elements of a session, albeit with different sonic hardware and/or virtual applications.

Sound Repair, Reinforcement, Supplementation, Replacement

Sound repair, sound reinforcement, sound supplementation and sound replacement are terms that I have found aspiring audiophiles use interchangeably. However, they are different, offering different levels of solutions to different production problems at different times. I will introduce the essential differences between each, and outline a particular production scenario where each may be employed.
1. The entry level of post-production drum processing is known as sound repair. The term is usually restricted to minor editing using either manual or DAW-based editing functions. In Pro Tools, minor editing of drum tracks can be done using a combination of Beat Detective, Elastic Audio or manual editing with the standard editing tools provided, your eye and, most importantly, your ear. Elastic Pitch can also be used for minor editing of melodic or harmonic instruments when they are found to be slightly out of tune with the other instrumentation in the session. Whilst the term editing is primarily associated with cutting and moving audio files to address timing issues, I also include audio processing under the category of repair. This can include manipulating the sonic qualities of the audio file in terms of spectral (equalisation, filters), dynamic (compression, limiters, gates and expanders) and time-domain (reverberation, echo, delay, flanging, chorus, etc) qualities.
2. The next level of post-production drum processing is known as sound reinforcement. This solution uses various methods to reinforce the original sound – usually a tone underneath the original signal to make up for a lack of tone within the original signal. This production solution became very popular in the 1980s with disco music, which led into the early stages of EDM. In the 1990s reinforcement was applied via devices such as the dbx 120A sub-harmonic synthesiser to reinforce the sub-harmonic frequencies of the production.

DBX_120A_Subharmonic Synth

  • In the current era, external devices such as the dbx 510 sub-harmonic synthesiser are still used as a means to reinforce the sub-harmonic frequencies (as shown below on the right-hand side of the 500 series rack). This option can be used for either corrective or creative purposes.

AE Project Studio Rig.20160601

  • These days it is usual in many forms of music to perform sound reinforcement with virtual devices, such as layering an in-the-box oscillator under the original signal to reinforce the original tone.
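That oscillator-layering idea can be sketched in a few lines. The 40 Hz sub an octave below an 80 Hz kick tone, the 0.5 blend level and the 48 kHz sample rate are all illustrative choices, not values from any particular plug-in:

```python
import math

SAMPLE_RATE = 48000  # an assumed project sample rate in Hz

def sine(freq, n, amp=1.0):
    """Generate n samples of a sine tone at the given frequency."""
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def reinforce(signal, sub_tone, sub_level=0.5):
    """Layer a sub-harmonic tone under the original signal --
    the virtual equivalent of a sub-harmonic synthesiser."""
    return [s + sub_level * b for s, b in zip(signal, sub_tone)]

kick = sine(80.0, 480)   # original kick fundamental at 80 Hz
sub = sine(40.0, 480)    # reinforcement tone one octave below
reinforced = reinforce(kick, sub)
```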
3. The next level of post-production drum processing is known as sound supplementation. Products such as Wavemachine Labs’ Drumagog and Steven Slate’s Trigger were developed to allow the engineer/producer to add sonic texture to the original recording, supplementing or boosting sonic qualities considered to be deficient – timbre, frequency content or dynamic envelope. The deficiency could be due to one of several reasons: an imperfect recording technique overall (for example, poor microphone placement, a poor or ineffective microphone technique for the desired effect, or a poor or ineffective live room for the desired effect); imperfect or ineffective microphones for the desired effect (the actual quality of the microphone, the condition of the microphone, or an unsuitable microphone type or polar pattern); an imperfect quality instrument or tuning; or even imperfect instrumentalist technique in the original recording. This option of post-production drum processing is usually, but not always, used as a corrective measure, just to bring the original tone home somewhat more. It would be quite unusual in this era for a production not to have some form of sound supplementation incorporated.

Wavemachine Labs Drumagog 5

Steven Slate Trigger 2.jpg
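Supplementation tools of this kind broadly work by detecting hits in the original track and blending a sample underneath each one. The sketch below is a deliberately crude stand-in for what Drumagog or Trigger do, with an invented threshold and blend level; the real products use far more sophisticated detection:

```python
def detect_hits(track, threshold=0.6):
    """Return sample indices where the level first crosses the
    threshold -- a crude stand-in for commercial hit detection."""
    hits, above = [], False
    for i, s in enumerate(track):
        if abs(s) >= threshold and not above:
            hits.append(i)
            above = True
        elif abs(s) < threshold:
            above = False
    return hits

def supplement(track, sample, hits, blend=0.5):
    """Mix a drum sample under the original track at each hit."""
    out = list(track)
    for h in hits:
        for j, v in enumerate(sample):
            if h + j < len(out):
                out[h + j] += blend * v
    return out

snare = [0.0, 0.9, 0.4, 0.1, 0.0, 0.0, 0.8, 0.3, 0.0]
sample = [1.0, 0.5]
hits = detect_hits(snare)
mixed = supplement(snare, sample, hits)
```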
4. The final level of post-production drum processing is known as sound replacement. Sound replacement involves – as it sounds – replacing the original sound source with an alternative sound source. There are many options available in this era: Steven Slate’s SSD, Toontrack’s EZdrummer, AIR Technology’s Strike, and Native Instruments’ many and varied drum instruments could all be useful and suitable for your particular project. All of these virtual instruments use a sample system to replace the original track’s audio file. The underlying reasons to replace the original audio track are much the same as for supplementation: an imperfect recording technique overall (poor microphone placement, technique or live room for the desired effect); imperfect or ineffective microphones, microphone type or polar pattern choice for the desired effect; an imperfect quality instrument or tuning; or imperfect instrumentalist technique in the original recording. This option of post-production drum processing is primarily used as a corrective measure. It is essentially radical surgery, used as an emergency salvage when all has gone wrong and no other options exist – including the time to re-record in the instance of an urgent project – or used to create ‘demos’ prior to actual tracking. Alternatively, with time on your side as a producer, you may choose the best option: to re-record the original sound source. Whilst this is the most obvious option, there may be external factors that prevent it from being valid.

Steven Slate SSD5

Toontrack EZ Drummer

AIR Instrument Strike
I expect that as a mix engineer you will receive a tracking session at some point in your career in which you appraise the drum elements as being in need of some work – perhaps some very subtle repair work, some subtle reinforcement, or perhaps extensive work. With the options available in this era, you will need to make a quick decision: what extent of post-production drum processing is required in order to achieve the desired musical or sonic effect? Whether sound repair, sound reinforcement, sound supplementation or sound replacement – each level of post-production drum processing offers a different level of solution to different production problems at different times. It is up to you as the mix engineer or producer to understand the different stages of production and the needs of the particular mixing session, and to employ the most appropriate level of post-production drum processing with which to realise the desired effect.
References
AE Project Studio’s Rack image courtesy of: https://au.pinterest.com/pin/543739354993444064/ Accessed 10th December, 2015
AE Project Studio’s image courtesy of: https://au.pinterest.com/pin/543739354993444064/ Accessed 21st January, 2015
AIR Instrument’s Strike image courtesy of: http://www.airmusictech.com/product/strike-2  Accessed 21st January, 2015
DBX’s 120A image courtesy of: http://dbxpro.com/en/products/120a  Accessed 21st January, 2015
Pro Tools 12: http://www.avid.com/pro-tools  Accessed 21st January, 2015
Steven Slate’s SSD image courtesy of: http://www.stevenslatedrums.com/products/platinum/  Accessed 21st January, 2015
Steven Slate’s Trigger image courtesy of:  http://www.stevenslatedrums.com/trigger-platinum.php  Accessed 21st January, 2015
Toontrack’s EZ Drummer image courtesy of:  https://www.toontrack.com/product/ezdrummer-2/  Accessed 21st January, 2015
Wavemachine Lab’s Drumagog image courtesy of:  http://www.drumagog.com  Accessed 21st January, 2015
– ©David L Page 22/01/2015
– updated ©David L Page 10/12/2015
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Signal Flow Part 2

As developed in last week’s Signal Flow Part 1 [June 2013], Audio Engineering is an enjoyable technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals. Signal Flow is one of the core fundamentals. Understanding and practicing these three stages of Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
This week we will build upon the three stages of Signal Flow to Part 2, including Stage 4 and Stage 5.
Note: On a console that is designed primarily for recording/tracking, such as a Neve C75 or a Behringer Eurodesk SX4882, the Channel Path will by default be routed via the large/primary faders, and the Monitor Path will by default be routed via the small/Mix B faders or pots.
In contrast: On a console that is designed primarily for mixing, such as an Audient ASP8024 or an ASP4816, the Channel Path will by default be routed via the small/secondary faders, and the Monitor Path will by default be routed via the large/primary faders.
Of course, one of the advantages of an in-line console is the flexibility in routing options, being able to switch which Path is routed to which faders as the need arises. Therefore, my references to the Channel and Monitor Paths and their respective faders are based on a console that is designed primarily for recording/tracking.

Audio Processing

Before we proceed to Stage 4 of Signal Flow, let’s add some external processing into Stage 2 of the Recording process. In the Audio Industry there are accepted ways to route specific types of audio processing when using external audio processing hardware.
Audio Industry 8 Channel Studio Signal Flow.P10
The industry standard is to route signal from either the Insert Send/Return within a Channel strip, or via the Auxiliary Sends/Returns across a number of Channel strips. In circumstances where we only want to process the signal of one particular channel, we use the Insert Send/Return. This is usually the case when we want to apply Spectral or Dynamic processing to a particular signal, such as an individual instrument. For instances when we want to apply particular processing across a number of Channels or instruments, we use the Auxiliary Sends/Returns.
Firstly we must Send the raw signal from the Channel strip or strips (referred to as dry signal) to the external audio processing hardware.

Audio Industry 8 Channel Studio Signal Flow.P11

 Once we have processed the signal (referred to as wet signal), we must return the processed signal back to the particular Channel strip or strips on the console.
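The insert-versus-aux distinction can be sketched in code. Here a toy compressor stands in for the outboard hardware, and its numbers (threshold 0.5, ratio 4:1) are invented for illustration. An insert’s wet signal replaces the dry channel, while an aux bus sums copies of several channels through one shared processor:

```python
def compress(signal, threshold=0.5, ratio=4.0):
    """Toy compressor standing in for outboard hardware: gain-reduce
    anything over the threshold by the given ratio."""
    out = []
    for s in signal:
        over = abs(s) - threshold
        if over > 0:
            s = (threshold + over / ratio) * (1.0 if s >= 0 else -1.0)
        out.append(s)
    return out

def insert_process(channel, processor):
    """Insert send/return: the wet signal REPLACES the dry signal
    in that one channel strip."""
    return processor(channel)

def aux_process(channels, processor, send_levels):
    """Aux send/return: each channel sends a copy (at its send level)
    to one shared bus feeding a single processor; the wet return is
    then mixed back alongside the dry channels."""
    bus = [0.0] * len(channels[0])
    for ch, level in zip(channels, send_levels):
        bus = [b + level * s for b, s in zip(bus, ch)]
    return processor(bus)

wet_insert = insert_process([0.9, 0.2], compress)
wet_aux = aux_process([[0.4, 0.0], [0.4, 0.0]], compress, [0.5, 0.5])
```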

Audio Industry 8 Channel Studio Signal Flow.P13

We now have added processing to either one or a number of the Channel strips on the console, within Stage 2 of the Recording process. Before we proceed, let’s recap on the Signal Flow.

Audio Engineering Signal Flow

Recording phase – Stage 1,2,3: 

Last week, in Audio Signal Flow Stages 1, 2 and 3, I introduced capturing a sound source, routing the signal to the console, monitoring that sound source pre-tape (Channel Path), taking this signal to tape, recording it on the multi-track recorder (MTR), and then returning the signal back to the console where we had the opportunity to monitor the signal once more post-tape (Monitor Path).
Being able to monitor the path both pre-tape and post-tape allows the engineer to check all stages of the signal flow, to ensure that there is good signal at good levels (not too low and not too high), with no extraneous sounds (such as noise, hum, buzz, crackle, hiss, etc) that would be a problem to discover only once the recording phase is complete. Once we have successfully captured and recorded the sound source to the MTR (Recording phase), we progress to the next phase: Mixing, as part of the Post-Production process.
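Those “good levels” checks translate naturally into a dBFS measurement. The sketch below flags a normalised take as too quiet or too hot; the -30 dBFS and -0.1 dBFS bounds are illustrative choices, not industry figures:

```python
import math

def peak_dbfs(track):
    """Peak level of a normalised (-1.0..1.0) track, in dBFS."""
    peak = max(abs(s) for s in track)
    return float("-inf") if peak == 0 else 20.0 * math.log10(peak)

def check_levels(track, too_low=-30.0, too_hot=-0.1):
    """Flag a take whose peak falls outside a working range. The -30
    and -0.1 dBFS bounds are illustrative, not industry figures."""
    level = peak_dbfs(track)
    if level > too_hot:
        return "too hot"
    if level < too_low:
        return "too low"
    return "ok"

status = check_levels([0.5, -0.5, 0.25])
```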

Mixing phase – Stage 3, 4, 5: 

Stage 3:

In the mixing phase, traditionally completed by a specialist engineer other than the recording engineer, the recorded tracks are routed from the tape back to the console. This stage equates to Stage 3 of the Recording phase, but perhaps with a more focussed intention. The purpose of this stage is to commence mixing the various tracks of audio into a blended, organised song, with both corrective and creative processing applied. The fundamentals, as available on most analogue consoles, include manipulating the gain levels, the stereo field via panning, and the spectral qualities via the equalisation and filters on the console. Once we are satisfied that a balanced mix of all of the instruments has been achieved, we need to capture this mix onto tape to be able to play it back at any time in the future.

Stage 4:

In the fourth stage – Stage 4 Signal Flow – the multiple signals are routed from the console to a device that is capable of recording this summed, balanced and processed signal as a single stereo track. This mix of the multiple channels is referred to as a Stereo Mix.
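The summing itself amounts to applying each channel’s fader gain and pan position, then adding the results onto a two-channel bus. The sketch below uses an equal-power pan law, which is one common choice but an assumption here, not something every console necessarily implements:

```python
import math

def pan_gains(pan):
    """Equal-power pan law: pan runs -1.0 (hard left) to 1.0 (hard
    right); left/right gains are the cos/sin of the pan angle."""
    angle = (pan + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)

def stereo_mix(channels, faders, pans):
    """Sum mono channels onto a stereo bus, applying each channel's
    fader gain and pan position."""
    n = len(channels[0])
    left, right = [0.0] * n, [0.0] * n
    for ch, gain, pan in zip(channels, faders, pans):
        gl, gr = pan_gains(pan)
        for i, s in enumerate(ch):
            left[i] += gain * gl * s
            right[i] += gain * gr * s
    return left, right

# Two channels panned hard left and hard right at the same fader level.
left, right = stereo_mix([[1.0], [1.0]], [0.8, 0.8], [-1.0, 1.0])
```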

Audio Industry 8 Channel Studio Signal Flow.P14

In order to do this, we route the signal from the console’s Master fader send…
Audio Industry 8 Channel Studio Signal Flow.P16
… to the DAW via an AD/DA interface (for example, Stereo Input 1+2).
[Note: If the Studio has a patchbay setup, you will more than likely need to route this Stage 4 Signal Flow via that patchbay].

Audio Industry 8 Channel Studio Signal Flow.P18

This completes Stage 4, from the console to the MTR.

Stage 5:

Once we have done this, we need to prepare to Return the signal back to the console for Monitoring the Stereo Mix track. This is Stage 5 of the Mixing phase. So how do we Return the signal to the console to monitor this Stereo Mix track?
You will note that the tape outputs are already in use, returning the signal of the original mix to the console’s Channel Strips (Stage 3 Monitor Path) in order for us to monitor the original mix. So we need an alternative way to route this Stereo Mix track Return back to the console, in order to monitor and confirm that this final Stereo Mix track meets our technical and creative standards.
Whilst each studio will have its own particular routing and naming protocol, most consoles have what we refer to as a 2-Track monitoring function. 2-Track monitoring refers to the monitoring of the final Stereo Mix track Return – Stage 5 of the Mixing phase. As the mono outputs are occupied by Stage 3, we need an alternative routing option from the DAW via the AD/DA interface. This is usually done via a digital output such as an ADAT or an S/PDIF output.
[Note: Before we leave the DAW, we need to confirm the Stereo Mix levels in the virtual MTR are appropriate via Input Monitoring. If the levels are not appropriate (too high or too low), you need to re-check your mix levels, and adjust accordingly]. 

Audio Industry 8 Channel Studio Signal Flow.P20

In order to monitor this Stereo Mix track signal on the console, you need to select the 2-Track function on the console, in order to monitor this final Stereo Mix track Return – Stage 5 of the Mixing phase.
[Note: each console manufacturer is likely to have their own naming protocol for this 2-Track monitoring function. For example, a number of more recent consoles refer to this final Stereo Mix monitoring function as DAW].

Audio Industry 8 Channel Studio Signal Flow.P21

Selecting this 2-Track function on the console will route the Stereo Mix signal to the monitors, for confirming the technical and creative merits of your final Stereo Mix. To confirm that you as the Mixing Engineer are actually monitoring Stage 5 (and not Stage 3), select the mute button on the Stereo Mix track within the DAW: the monitoring signal to the Control Room monitors should be cut. Deselect the mute button to continue monitoring the Stereo Mix signal.

Audio Industry 8 Channel Studio Signal Flow.P22

This completes the 3 stages of the Mixing phase – Stage 3, Stage 4 and Stage 5.
Audio Engineering is an enjoyable, technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals. Signal Flow is one of the core fundamentals. Understanding and practising the three stages of the Recording phase’s Signal Flow and the three stages of the Mixing phase’s Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
Reference
Images courtesy of:  David L Page  Accessed 3rd June, 2013
– ©David L Page 10/06/2013
– updated ©David L Page 14/02/2016
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Signal Flow Part 1

The fundamentals of Audio Engineering

Audio Engineering is dependent upon the key personnel – the engineers – understanding the fundamentals. Signal Flow is one of the core fundamentals of audio engineering. Irrespective of the studio console, an experienced engineer who understands signal flow will adapt very quickly to each and every studio environment, irrespective of whether the studio is analogue, digital or digital virtual in its equipment makeup.
At a very elementary level, there are two phases in Audio Engineering:
  1. The Recording phase, as part of the Production process. [NB: the Recording process is also referred to as tracking]
  2. The Mixing phase, as part of the Post-Production process
For the purposes of introducing aspiring audio engineers to the two phases, I am going to refer to the stages of the process as Stages 1, 2, 3, 4, 5, etc. To commence this introduction, we are going to look this week at the three stages of the Recording phase.

Recording phase

Stage 1: 

In a simple signal flow scenario, a single sound is generated by a sound source and captured by a transducer, which converts the acoustic energy into electrical energy. The electrical signal is then passed down a microphone cable towards the single-channel console.
In the situation where there is an eight-channel console or mixing desk, the engineer has the opportunity to capture eight sound sources simultaneously via eight transducers, each converting its acoustic sound source into electrical energy. The electrical signals are then passed down eight microphone cables towards the eight-channel console.
Audio Industry 8 Channel Studio Signal Flow.P2
This Stage 1 Signal Flow is known as the Channel Path, as the sound source is being routed down the Channel Strip on the console.
Note: On a console that is designed primarily for recording/tracking, such as a Neve C75 or a Behringer Eurodesk SX4882, the Channel Path will by default be routed via the large/primary faders, and the Monitor Path will by default be routed via the small/Mix B faders or pots.
In contrast: On a console that is designed primarily for mixing, such as an Audient ASP8024 or an ASP4816, the Channel Path will by default be routed via the small/secondary faders, and the Monitor Path will by default be routed via the large/primary faders.
Of course, one of the advantages of an in-line console is the flexibility in routing options, being able to switch which Path is routed to which faders as the need arises. Therefore, my references to the Channel and Monitor Paths and their respective faders are based on a console that is designed primarily for recording/tracking.
Audio Industry 8 Channel Studio Signal Flow.P3.png

Stage 2:

In the second stage – Stage 2 Signal Flow – the signal is routed from the console to a device that is capable of recording this signal. Traditionally, this device was a magnetic tape machine. In the current era, it has been largely replaced by a digital virtual tape machine: a computer running an audio-specific digital application such as Pro Tools. This device is now known as a Digital Audio Workstation (DAW). For the analogue signal to flow successfully from the console to the digital virtual tape device, the signal must pass through an interface converting the analogue signal to a digital signal (an A/D converter).
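At its core, that A/D step samples the analogue voltage and quantises each sample to one of a fixed number of integer codes. The sketch below shows the idea at 16-bit resolution; real converters add anti-alias filtering, dither and other refinements not modelled here:

```python
def adc(analog, bits=16):
    """Sketch of an A/D converter: clip each sample to full scale
    (-1.0..1.0) and quantise it to a signed integer code."""
    full_scale = 2 ** (bits - 1) - 1   # 32767 at 16-bit
    codes = []
    for s in analog:
        s = max(-1.0, min(1.0, s))     # clip to full scale
        codes.append(round(s * full_scale))
    return codes

def dac(codes, bits=16):
    """The matching D/A step: scale codes back to -1.0..1.0."""
    full_scale = 2 ** (bits - 1) - 1
    return [c / full_scale for c in codes]

codes = adc([0.0, 1.0, -1.0, 0.5])
restored = dac(codes)
```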
Audio Industry 8 Channel Studio Signal Flow.P4.png
 Of course, the signal from the tape device has to return back to the console.
Audio Industry 8 Channel Studio Signal Flow.P5.png

Stage 3:

In the third stage – Stage 3 Signal Flow, the signal is routed back from the tape device to the console.
Audio Industry 8 Channel Studio Signal Flow.P6.png
This Stage 3 Signal Flow is known as the Monitor Path, as the sound source is routed through the secondary faders or Mix B of the console for the purpose of monitoring the signal. This signal is known as post-tape (i.e. after the tape device). Post-tape monitoring is considered the normal monitoring mode for recording/tracking.
Audio Industry 8 Channel Studio Signal Flow.P7.png
The final part of the recording signal flow is to add some headphone monitoring in the live room for the artist, so that they can hear themselves as they are being recorded.
Audio Industry 8 Channel Studio Signal Flow.P8.png
This headphone mix is generally routed from the console’s auxiliary functions, allowing an independent stereo mix of the eight channels being recorded/tracked. The industry standard for a headphone mix is ‘post-tape, pre-fader’. Once the headphone mix has been established, it can be tested via the talkback function on the console.
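The ‘pre-fader’ part of that standard means the artist’s cue mix is built from the aux send levels alone, so moving the control-room faders never changes what the artist hears. A minimal sketch, with invented send levels:

```python
def headphone_mix(channels, aux_sends):
    """Pre-fader aux mix: the cue blend is built from the aux send
    levels only, so the control-room faders (deliberately not an
    input here) cannot change what the artist hears."""
    n = len(channels[0])
    mix = [0.0] * n
    for ch, send in zip(channels, aux_sends):
        for i, s in enumerate(ch):
            mix[i] += send * s
    return mix

# Eight tracked channels (one sample each) with individual send levels.
channels = [[0.5]] * 8
sends = [0.3, 0.25, 0.25, 0.2, 0.2, 0.15, 0.15, 0.1]
cue = headphone_mix(channels, sends)
```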
Audio Industry 8 Channel Studio Signal Flow.P9.png
Establishing immediate communication between the engineer in the control room and the artist in the live room is essential in a studio environment. Therefore getting signal flow across the three stages, a headphone mix and talkback needs to be the focus of any tracking session, and should be achieved as quickly as possible at the beginning of the session.
Audio Engineering is an enjoyable, technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals. Signal Flow is one of the core fundamentals. Understanding and practising these three stages of the Recording phase’s Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
Next week we will develop our fundamental Signal Flow knowledge to the Mixing phase (Stage 3, Stage 4 and Stage 5)[ June 2013].
Reference
Images courtesy of:  David L Page  Accessed 2nd June, 2013
– ©David L Page 03/06/2013
– updated ©David L Page 13/02/2016
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Introduction to Audio Engineering

Welcome to Audio Engineering and the world of studios. A studio represents different things to different people. Some see it as a technical place to track and mix artists’ expression (Burgess 2014; Burgess 2013; Burgess 1997). Others see a studio as an instrument, in which to develop an artist’s ideas into something more, possibly fusing several musical styles into a new genre (Eno 2004; Eno 1982). Irrespective of your perspective and motivation, one needs to start at the beginning – the fundamentals.

Hans Zimmer Studio

                                          (Hans Zimmer home studio)
Knowledge and Skill base required
The practical knowledge and skillset required of an Audio Engineer/Producer is both vast and very complex. In addition to the knowledge and skillset of recording and mixing – both in themselves very involved, potentially taking years to master – there is a range of other knowledge and skills required: equipment to know, and theory required to understand the studio environment and to be able to succeed in this position on a professional basis.
Whilst there is a common industry view that there is perhaps less onus in this era on being an engineer in the original sense of the word – knowing analogue gear and being able to fix it – I would argue that the extent of knowledge and skills required is no less vast and complex. In fact, I would argue that with the development of audio gear in this digital era, a broader knowledge and skillset base is required than previously. Some of the aspects a budding Audio Engineer/Producer must become quite conversant with are:
Step 1
  • Firstly, one must understand a generic Studio Setup
  • Then one must learn the specifics of the particular Studio Setup. For example: the console, patchbay, interface, computer system, and assorted outboard peripherals
Step 2
  • One must understand the generic Signal Flow of a console
  • Then one must learn the specifics of the particular console in the studio you are going to use. For example: Midas Heritage 1000, Neve VXS, SSL AWS 948, API Legacy, AMEK Media 51, Euphonix System 5, Audient ASP8024, Behringer Eurodesk SX4882 or Behringer X32
Step 3
  • One must understand a generic audio interface and the role it plays in the signal path of a modern studio (AD/DA conversion)
  • Then one must learn the specifics of the particular audio interface (AD/DA). For example: Avid HD I/O 16×16, Apogee Symphony, Antelope Audio Orion 32+, Universal Audio Apollo, Apogee Ensemble, Focusrite Saffire, Focusrite Scarlett, RME Fireface 800 or PreSonus Studio 192
Step 4
  • One must understand a generic Tape Device
  • Then one must learn the specifics of the particular tape device, whether magnetic or virtual tape. For example: Avid Pro Tools, Logic Pro, Reason, Ableton Live, FL Studio, or Reaper.
Step 5
  • One must understand the generic principles behind peripherals for audio processing (outboard gear, etc.): why we use them, when we use them, and how we use them
  • Then one must learn the specifics of the particular peripherals for audio processing in the particular studio. For example: Teletronix LA-2A, UREI 1176, Fairchild 670, Tube-Tech CL 1B, Manley ELOP+, Neve 2254, dbx 160, Empirical Labs Distressor, SSL G-Series Bus Compressor, Manley Variable Mu limiter, Chandler EMI TG1, Alesis 3630, API 3124+, Eventide Reverb 2016, Focusrite OctoPre MkII Dynamic, or Behringer MDX2600 Composer
In addition to the studio knowledge base and skillset outlined above, one will more than likely have to contend with the various one-off technical issues that arise from day to day through electrical or mechanical equipment limitations and/or malfunctions. These can be very prevalent, and can disrupt even the best-laid plans for a mixing or recording session. A range of issues can arise at any point in time in a studio, and the modern-day Audio Engineer/Producer must therefore have a broad knowledge and skillset base in order to problem-solve through them and move on with the object of the session: either to record, or to mix.
I have deliberately overlooked the additional soft skills required of an Audio Engineer/Producer in daily interactions with the people in and around the studio environment. These soft skills include communication, negotiation, patience and social skills. Whilst extremely important, they could be considered beyond the realms of an industry-based subject matter expert (SME) in this discussion.
Additionally, if you are recording and mixing, most assume that the modern Audio Engineer/Producer must have a degree of understanding and skill in the creative arts processes of songwriting, music, arrangement, and/or instrumentation, to draw on as required for the client.
In conclusion to this brief discussion: the practical knowledge and skillset required of a modern-day Audio Engineer/Producer remains, to this day, vast and complex.
A budding Audio Engineer/Producer must develop a very broad knowledge and skillset base across the disciplines of the industry subject matter, the broader relevant Creative Arts and the soft skills, in order to operate within and around the studio environment and maximise their chance of developing a successful professional career as an Audio Engineer/Producer.
References
Burgess, Richard James. 2014. The history of music production. New York: Oxford University Press.
Burgess, Richard James. 2013. The art of music production: the theory and practice. New York: Oxford University Press.
Burgess, Richard James. 1997. The art of record production. London: Omnibus Press.
Eno, Brian. 2004. “The studio as compositional tool.” In Audio culture: readings in modern music, edited by Christoph Cox and Daniel Warner, 127-130. New York: Continuum.
Eno, Brian. 1982. Ambient 4: on land. Editions EG. Compact Disc.
Hans Zimmer’s home studio image courtesy of:  http://www.scpr.org/programs/the-frame/2015/01/20/41178/interstellar-composer-hans-zimmer-says-hollywood-i/?slide=2  Accessed 12th December 2015
– ©David L Page 09/05/2013
– updated ©David L Page 27/01/2016

Pro Tools User Tip #1

Tips when using Pro Tools on a general use computer

Pro Tools 12 image

When you are using someone else’s computer, such as in a Studio or general-use C-Lab environment, you are not likely to be aware of what the previous person used Pro Tools for. They may, for example, have used an alternative interface and customised the routing (I/Os) within Pro Tools for their own specific use. Therefore, whenever you create a new Pro Tools session or re-open an existing one, it is advisable to confirm the following:
  • Audio File Type: wav
  • Sample Rate: whatever your client’s project requires – for example, 48 kHz
  • Bit Depth: whatever your client’s project requires – for example, 24-bit
  • I/O Settings: Stereo Mix (not ‘Last Used’)
 (Pro Tools session settings screenshot)
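The sample rate and bit depth you confirm here have direct, calculable consequences for the session: bit depth sets the theoretical dynamic range of the recording, and together they set how much disk space the audio will consume. A minimal sketch of both calculations (the function names are my own, for illustration only):

```python
def wav_data_rate_mb_per_min(sample_rate_hz, bit_depth, channels=2):
    """Uncompressed PCM data rate in megabytes per minute."""
    bytes_per_second = sample_rate_hz * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 1_000_000

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM: roughly 6.02 dB per bit."""
    return 6.02 * bit_depth

# A stereo 48 kHz / 24-bit session consumes about 17.28 MB per minute
print(round(wav_data_rate_mb_per_min(48000, 24), 2))
# 24-bit audio offers roughly 144.5 dB of theoretical dynamic range
print(round(dynamic_range_db(24), 1))
```

This is why a higher sample rate or bit depth is not automatically "better" for every project: it trades disk space (and CPU load) for resolution your client may or may not need.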
Once you have the Pro Tools session open, can I suggest you check two sections in the Setup drop-down menu:

Setup/Playback Engine

PTs Drop down menu

  • Setup/Playback Engine: does it have ‘Built in Output’ selected, or is ‘Pro Tools Aggregate I/O’ selected because the previous person was using an alternative external interface? Without an interface, you need to have ‘Built in Output’ selected

PTs Playback Engine

  • Setup/Playback Engine: H/W Buffer Size: make sure you have the correct H/W Buffer Size selected for the type of session you are conducting – mixing or recording/tracking
    • 1024 Samples (recommended for Mixing)
    • 64 Samples (recommended for Recording/Tracking)
  • When you have completed this task, close this display window by pressing the ok button in the bottom right-hand corner.
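The reasoning behind these two recommended buffer sizes is a trade-off between latency and CPU headroom: each hardware buffer adds a delay of buffer size divided by sample rate, so a small buffer keeps monitoring latency low while tracking, while a large buffer gives the CPU more time per buffer to run plug-ins while mixing. A rough sketch of the arithmetic (the function name is my own, not part of Pro Tools):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency added by the hardware buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# Tracking at 48 kHz: 64 samples adds only ~1.33 ms, inaudible to a performer
print(round(buffer_latency_ms(64, 48000), 2))
# Mixing at 48 kHz: 1024 samples adds ~21.33 ms, but playback latency
# does not matter when no one is performing against the monitor mix
print(round(buffer_latency_ms(1024, 48000), 2))
```

This is also why a session that plays back cleanly while mixing can start clicking and popping once you drop the buffer to track: the CPU suddenly has a fraction of the time to fill each buffer.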

    Setup/IOs:

PTs IOs
  • Setup/IOs: reset all of the I/O tabs (Input, Output, Bus, etc) by pressing the default button in the bottom left-hand corner of the I/O display window for each of the I/O tabs. If any routing changes had been made in the previous session, this action will reset all of the I/Os back to the original default setup. When you have completed this task, close this display window by pressing the ok button in the bottom right-hand corner.
PTs Setup IO View
 These simple steps should become your standard operating procedure every time you create a new Pro Tools session, or re-open an existing one, on a general-use computer or in a Studio used by others. Having completed them, you can be confident that your Pro Tools session is correctly set up and ready for a successful Mixing or Recording/Tracking session, minimising the chance of experiencing signal flow issues along the way.
If, however, having followed these simple steps you do have signal flow issues during your Pro Tools session, can I also suggest that you check the following:
  • Overall session signal flow. Remember, Pro Tools is no different to a studio, it is just ‘in a box’. All of the rules of Signal Flow still apply
    • are your session inputs (in the ‘mix’ window) routed correctly?
    • are your session outputs (in the ‘mix’ window) routed correctly?
    • are all ‘mute’ buttons off?
    • are there any ‘solo’ buttons selected?
 If you have checked all of the above suggestions, but continue to have issues whilst within the Pro Tools session, I suggest you consult the Studio Supervisors.
References
AVID Pro Tools 12 image courtesy of:  https://www.avid.com/US  Accessed 11th January, 2016
All other images courtesy of David L Page Accessed 11th January, 2016
– ©David L Page 09/07/2012
– updated ©David L Page 13/01/2016