Microphones Part 13 – Other


Current stock

My current studio and live microphone stock comprises a modest range of dynamic microphones, ribbons, condensers, tubes and contact microphones. The range includes: Shure SM57s, SM58s, Beta 52As & SM7Bs; Electro-Voice RE20s; Sennheiser e935s, e945s, e906s & MD441Us; MXL 550s & 551s; Rode NT-USBs, NT1s, NT3s, NTRs, NT4s & NTKs; Audio-Technica AT2020s; AKG C414 XLIIs, P420s & CGN523Es; Royer 121s; Neumann TLM193, OPR87C, OPR87I & OPR84s; Mojave MA101FETs; DPA 4099s; IK reference mics; a Zoom H6 XY and MS mic; Sony lapel mics; and a range of contact mics. This stock allows for versatility in most recording scenarios that have been presented to me – coupled, of course, with great instruments, amplifiers, outboard processing hardware, interfaces, consoles, and above all artists. But sometimes, in certain scenarios, even these are not enough.

Current Research Study Project

In my current doctoral research project, I have designed a composition requiring me to source sonic samples of significant aspects of my life. Water is one of the most significant and influential elements in my life and my life partner's lifestyle [see blog or Media Use Part 1], so I felt a need to record water samples across the range of contexts I have experienced: the ocean, rain, waterfalls, swimming pools, and domestic water use. However, this needed to occur without damaging my current range of microphones. Ready and portable – armed with my Zoom H6 – my research project would not be complete without a range of real water samples gathered out in the environment. I also felt a need to record sonic samples of water from a submerged perspective. Of my current stock of microphones, none allowed me to record while submerged without a further layer between the microphone and the water, such as plastic bags or tubs, duct tape and silicone. An alternative solution was therefore needed.

zoom-h6-01

Hydrophone H2a-XLR

I researched my options, exploring what other audio engineers have used to gather water-based samples. I finally decided to purchase a fully submersible microphone, and have now received the latest addition to my stock of studio and live microphones: an Aquarian Audio Products H2a-XLR hydrophone.
A hydrophone is a microphone designed to be immersed in water – fresh or salt – repeatedly, without degrading from water damage or corrosion.
H2aXLR_9m.P3.jpg
The Aquarian Audio hydrophone is quite compact, measuring just 25 mm wide and 46 mm long, and weighing just 105 grams.
h2axlr_9m-p2
It is a condenser microphone, requiring 48 V phantom power to charge its electrostatic transduction element. As such it is extremely sensitive, with minimal extraneous noise. “The hydrophone sensor is capable of picking up sounds from below 20Hz to above 100KHz” (Aquarian Audio Products 2016a). Designed for deep water, where maximum microphone bandwidth can be achieved, the Aquarian Audio hydrophone boasts a quoted operating depth of up to 80 metres. The model I purchased came with a 9 metre cable, a length I thought more than adequate for the sample events I am looking to capture.
H2aXLR_9m.P1.jpg

Using a Hydrophone – Context

Having just received the microphone, I am yet to venture out into a deep water environment where I can test it to its full capacity. However, I was keen to immediately test how sensitive it was going to be, how accurately it might capture the original sound source, and how much noise it may or may not inherently have. Using my Zoom H6 with the hydrophone to gather a number of preliminary samples, I considered the options immediately around me, and chose the 60,000 litre salt water fibreglass swimming pool in our front garden as my first test environment. A place where my partner and I have spent considerable hours over the past two decades, it is surely a significant part of our lives, and therefore somewhere I will need to gather sample events for my composition. In saying that, I acknowledged this test environment would impose some limitations on trialling the functionality of this condenser microphone, namely the structure of the pool: it is 4 metres wide, 9.5 metres long and 1.9 metres deep (reducing to about 1.4 metres in the shallow end), made of a fibreglass shell with the sides and bottom curved into one continuous surface. In this environment the hydrophone would likely display a narrower bandwidth than it would optimally have in deeper water, and the captured sound was likely to include the original source plus a number of reflections off the hard surfaces of this domestic swimming pool. Irrespective, as I will eventually need samples of this environment, I considered it a useful initial test.

Using a Hydrophone – Part 1

I believe the first five sample events demonstrate the sensitivity this condenser microphone has in underwater situations. I was surprised how sensitive the microphone was, despite the volume of water separating the subject from the microphone capsule during these recordings. As indicated above regarding the reflections, the captured sample events demonstrate a cacophony of sonic textures resulting from a fusion of the intended sound source and its multiple reflections.
Note also the frequency range of each sample event relative to the microphone's depth and proximity to the surface, the bottom, or the sides of the swimming pool. I have been reminded that in a shallower environment, low frequencies are likely to be less fully developed due to the shorter distances between surfaces. Additionally, in calm conditions, sound waves under the surface are likely to rebound off the flat water surface phase-inverted, cancelling the original signal below it. This phenomenon of a varying frequency range is particularly noticeable in Using a Hydrophone – Part 2 sample events 7 and 8, where the capsule is bounced up and down at variable depths under the surface and then breaches the surface of the water. Listen and compare the frequency range and the sonic texture of each sample event as the condenser capsule moves through the water.
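To make that surface-cancellation point concrete, here is a minimal Python sketch – my own illustration, not anything from Aquarian's documentation – of the simplest possible model: a calm surface returns a phase-inverted copy of the sound, delayed by the extra path length, and the two copies combine at the capsule. The speed of sound in salt water (≈1500 m/s) and the 0.3 metre capsule depth are assumed values for illustration.

```python
import numpy as np

# Simplest model of a calm (pressure-release) water surface: the surface
# bounce arrives phase-inverted and slightly delayed, so the capsule hears
# H(f) = 1 - exp(-j*2*pi*f*dt) relative to the direct sound.
C_WATER = 1500.0   # approx. speed of sound in salt water, m/s (assumption)
DEPTH = 0.3        # capsule depth below the surface, m (illustrative)

dt = 2 * DEPTH / C_WATER   # extra travel time of the surface bounce, s
for f in (50, 100, 500, 1250, 2500):
    mag = abs(1 - np.exp(-2j * np.pi * f * dt))
    print(f"{f:5d} Hz -> {20 * np.log10(max(mag, 1e-12)):+7.1f} dB re direct")
```

The printout shows the low end falling away steeply while frequencies around the first comb peak are boosted – consistent with the thinner sound heard when the capsule sits just under a calm surface.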

Sample Event 1  (click to access audio)

In the first sample, the hydrophone was submerged in the swimming pool to a depth of about 1.5 metres. The Zoom H6 track 3 gain level was set to 6 (of 10). My friend (the subject) was in the pool, approximately 2 metres away from the hydrophone, facing it and blowing bubbles under water in the direction of the microphone. The reverberations off the nearby pool surfaces are quite noticeable from about one third of the way into the sample event, providing a minor delay of the original signal until the end of the sample event.

Sample Event 2 (click to access audio)

In the second sample, the hydrophone was maintained in the swimming pool at a depth of about 1.5 metres. The Zoom H6 track 3 gain level was set to 7 (of 10). The subject was in the pool, approximately 3 metres away from the hydrophone, facing it and blowing bubbles. Despite the gain level being increased marginally, the overall levels are softer in this second sample event while she mimicked what she had done previously – with the exception of when the hydrophone capsule was knocked by something (the tall volume spike midway). See Image I below. The reverberations off the nearby pool surfaces are again noticeable from about one third of the way in, providing a minor delay of the original signal through the second third, before the signal decays and releases back to mainly the original signal in the final third of the sample event. As the signal decays, the amplitude reduces; and with the return to the original signal in the final third, there is greater clarity.
pts-sample-event-1-2-comparison-20161117
Image I – Pro Tools 12 Sample Event 1 (top) and Sample Event 2 (bottom)

Sample Event 3  (click to access audio)

In the third sample event, the hydrophone was submerged in the swimming pool to a depth of about 1.5 metres. The Zoom H6 track 3 gain level was maintained at 7 (of 10). The subject was in the pool, approximately 3 metres away from the hydrophone, facing it and trying to talk underwater. Despite her being farther from the capsule than in the first sample event, because she was trying to talk loudly under water toward the microphone, the audio is louder than in both sample events 1 and 2. As you can see in Image II below, the overall mass of the waveform is substantially greater in this third event than in the previous two, the subject's speaking voice producing far greater mass and density than her blowing bubbles underwater. This mass and density represents increased sound pressure levels and reverberant signals, resulting in a cacophony of sonic textures. Had I included a longer sample, you would observe that, as in sample event 2, at a certain point the signal decays and releases back to mainly the original signal, with reduced amplitude but greater clarity.
PTs Sample Event 1 2 3 Comparison.20161117.png
Image II – Pro Tools 12 Sample Event 1 (top), Sample Event 2 (middle), Sample Event 3 (bottom)
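The loudness comparisons above are made by eye from the waveform mass in Pro Tools. For anyone wanting to quantify them, a few lines of Python will print the overall RMS level of each event in dBFS – the file names below are hypothetical stand-ins for the exported sample events, assumed to be mono 16-bit WAVs.

```python
import wave
import numpy as np

def rms_dbfs(path: str) -> float:
    """Overall RMS level of a mono 16-bit WAV file, in dBFS."""
    with wave.open(path, "rb") as wf:
        raw = wf.readframes(wf.getnframes())
    x = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

# Hypothetical exports of the captured sample events.
for name in ("sample_event_1.wav", "sample_event_2.wav", "sample_event_3.wav"):
    print(f"{name}: {rms_dbfs(name):+.1f} dBFS")
```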

Sample Event 4  (click to access audio)

In the fourth sample event, the hydrophone was submerged in the swimming pool to a depth of about 1.5 metres. The Zoom H6 track 3 gain level was reduced to 5 (of 10). The subject was in the pool, approximately 0.5 metres away from the hydrophone, facing it and blowing bubbles. Sonically, this fourth sample event demonstrates a cacophony of sonic textures, resulting from excessive sound pressure levels due to the close proximity of the transducer to the sound source, and the accompanying reverberant signals from the multiple surfaces of the pool. The inherent distortion results from those excessive sound pressure levels over-gaining the signal. For non-audiophiles: note the clean flat line along the top of the waveform, indicating a form of dynamic limiting. Given that no dynamic processing was used to achieve this limiting, it indicates the acceptable gain levels of the equipment were exceeded, resulting in what is referred to as digital (signal) clipping. See Image III below (top waveform).

PTs Sample Event 4 + 5 Comparison.20161117.png

Image III – Pro Tools 12 Sample event 4 (top) and Sample Event 5 (bottom)
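Clipping of this kind is also easy to confirm outside Pro Tools. The sketch below – assuming a mono 16-bit WAV export with a hypothetical file name – flags runs of consecutive full-scale samples, the flat-topped regions visible in Image III. Serious metering tools also check inter-sample peaks, so treat this as the idea only.

```python
import wave
import numpy as np

CLIP_LEVEL = 0.999  # samples at/above this are treated as full scale
MIN_RUN = 4         # flag runs of at least this many consecutive samples

def report_clipping(path: str) -> None:
    """Print the start time and length of flat-topped (clipped) runs
    in a mono 16-bit WAV file."""
    with wave.open(path, "rb") as wf:
        sr = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    x = np.abs(np.frombuffer(raw, dtype=np.int16).astype(np.float64)) / 32768.0
    flags = x >= CLIP_LEVEL
    edges = np.diff(flags.astype(np.int8))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if flags[0]:
        starts = np.r_[0, starts]
    if flags[-1]:
        ends = np.r_[ends, flags.size]
    for s, e in zip(starts, ends):
        if e - s >= MIN_RUN:
            print(f"clipped run at {s / sr:8.3f} s  ({e - s} samples)")

report_clipping("sample_event_4.wav")  # hypothetical export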

Sample Event 5  (click to access audio)

In the fifth sample event, the hydrophone was maintained at a depth of about 1.5 metres, and the Zoom H6 track 3 gain level remained at 5 (of 10). The subject was in the pool, approximately 0.5 metres away from the hydrophone, facing it and trying to talk underwater. As you can see in Image III above (bottom waveform), the overall mass of the waveform is substantially greater in this fifth event than in the previous sample events, the subject's speaking voice producing far greater sound pressure levels than her blowing bubbles. Sonically, this fifth sample event is heavily distorted by the excessive sound pressure levels resulting from the close proximity of the transducer to the sound source; the digital recording is clipped because the amplitude far exceeded the specified gain levels of the equipment. For non-audiophiles: the cleaner, flatter line along the top of the waveform – relative to the previous example – indicates extreme limiting of the audio signal. Again, as no dynamic processing was used, it indicates excessive sound pressure levels at unacceptable gain levels for the equipment, resulting in severe digital (signal) clipping across almost the entire length of the audio file. It is also worth noting the very thin sound of this sample event, a result of the absence of low frequencies at shallow depths; and yet, as in sample event 4, there is a cacophony of sonic textures given the multiple reverberant signals arriving from the numerous surfaces of the pool.

Using a Hydrophone Part 2

In the following examples, I gathered a number of sample events with the hydrophone closer to the waterline. I hope these sample events further show how sensitive the hydrophone is, effectively capturing the sonic qualities of very subtle movements.

Sample Event 6  (click to access audio)

In the sixth sample event, the Zoom H6 track 3 gain level was set at 6 (of 10). The hydrophone was dragged along the surface of the swimming pool at a relatively slow walking pace. The rushing-water sound is the wake that the small condenser capsule (25 mm wide, 46 mm long, weighing 105 grams) creates and captures as it breaches the surface of the water. I think you will agree this confirms both the sensitivity and the low noise levels of this particular microphone. The deeper, boomy quality you hear in the audio file occurs when the transduction surface of the capsule is re-submerged below the surface of the water.

Sample Event 6wp (click to access audio)

Sample event 6wp is the same recording as sample event 6, but with post-production audio processing added. In the studio, following the recording, I chose to add two reverb processing devices – an Eventide and a Lexicon reverb – to the initial audio file. While doing this, and listening to the altered sonic textures of the audio, I began imagining the many applications for such an effect in my sonic compositions and sound design.
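For readers without outboard reverbs, a comparable wet/dry experiment can be sketched in the box with convolution reverb. This is not what the Eventide or Lexicon units do internally; it is a minimal illustration with hypothetical file names, assuming mono WAVs at a matching sample rate and the python-soundfile and scipy libraries.

```python
import numpy as np
import soundfile as sf                  # pip install soundfile (assumption)
from scipy.signal import fftconvolve    # pip install scipy (assumption)

dry, sr = sf.read("sample_event_6.wav")       # hypothetical dry export
ir, sr_ir = sf.read("impulse_response.wav")   # any IR at the same sample rate
assert sr == sr_ir, "resample the IR to match the dry file first"
if dry.ndim > 1:                              # fold to mono for simplicity
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)[: dry.size]        # convolution reverb, truncated
wet /= np.max(np.abs(wet)) + 1e-12            # normalise to avoid clipping
mix = 0.6 * dry + 0.4 * wet                   # simple dry/wet blend
sf.write("sample_event_6wp_sketch.wav", mix, sr)
```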

Sample Event 7 (click to access audio)

The seventh sample event is similar in execution to sample event 6, with the hydrophone dragged along the swimming pool at a relatively slow walking pace, but bounced in and out of the water in an arc of approximately 30 centimetres. The popping and gurgling sounds occur as the capsule breaches the surface of the water (popping) and then re-submerges (gurgling). It is a similar but more exaggerated version of sample event 6, with the sample event's frequency varying depending on where the condenser microphone capsule is relative to the water: just under the surface, at depth (only about 30 cm in this example), breaching the surface, or above the surface of the water.

Sample Event 8 (click to access audio)

The eighth sample event is similar in execution to sample event 7, with the Zoom H6 track 3 gain level remaining at 6 (of 10). The hydrophone was dragged along the surface of the swimming pool at a relatively slow walking pace, but bounced in and out of the water over a much larger arc – approximately 1.5 metres. This is a more exaggerated version of sample event 7, the popping and gurgling sounds associated with breaching and re-submersion being relatively deeper in tone due to the greater depth, speed and height the capsule was dropped from back into the water. Sonically, you may hear what sounds like wind noise in this sample event. I noted at the time that this was due to a combination of the faster movement of the capsule above the surface after breaching, and the local wind picking up towards the end of the test. You will also note that near the end of the sample event you can hear a voice talking, describing my actions. This voice was captured by the capsule after it had breached the surface of the water, with the speaker's mouth about 2 metres away.

Sample Event 9 (click to access audio)

The ninth and last sample event had the hydrophone submerged in the swimming pool to a depth of about 1.5 metres and held stationary. The Zoom H6 track 3 gain level remained at 6 (of 10). The subject was approximately 2 metres away from the hydrophone drop point, swimming up and down the pool in freestyle form. The low frequency plop occurred every time the subject kicked her feet, with training flippers on. The bass frequency was pronounced, reverberating off the surfaces of the pool and producing a sound somewhat like the boom of a deep tom after the skin has been struck. And yet the hydrophone still clearly captured what sounds like running water – the subject's hands and arms entering and breaching the surface of the water with each and every stroke. Again, I am imagining the many applications where I could apply some processing to this sample event and use such an effect in my sonic compositions and sound design.

Summary

The Aquarian Audio Products H2a-XLR hydrophone is an extremely sensitive, fully submersible condenser microphone with minimal extraneous noise. It is well designed and constructed to be impact resistant, using sturdy materials. Whilst it is designed to be submerged at far greater depths than I have tested to date, I believe I have made a good purchase, one that will complement my current stock of studio and live microphones. This microphone will allow me even greater versatility in the range of recording scenarios I can foresee being presented with. I daresay I will now go searching further afield, exploring less predictable outdoor terrain, without the mindfulness my more expensive studio microphones usually demand. I am looking forward to progressing my sonic compositions and sound designs using water samples across the range of contexts I have experienced in my life – the ocean (including boating, body surfing, snorkelling and scuba diving), rivers, waterfalls, natural pools, and domestic water use – in order to capture specific sample events that represent significant events and memories. I look forward to this next chapter in my creative practice.
This series of microphone-related blogs is intended to continue.
References
Aquarian Audio Products. 2016a. H2a-XLR Hydrophone Users Guide. http://www.aquarianaudio.com Accessed 17th November 2016
Aquarian Audio Products. 2016b. http://www.aquarianaudio.com Accessed 15th November 2016
DLP Soundcloud. 2016. DLP Soundcloud. Accessed 17th November 2016
Hydrophone images courtesy of: Aquarian Audio Products. Accessed 16th November 2016
AE Project Studio Microphone Case image courtesy of: DLP Pinterest site. Accessed 16th November 2016
Pro Tools 12 Sample Event images courtesy of: David L Page. Accessed 16th November 2016
Zoom H6 image courtesy of: Sound on Sound. Accessed 16th November 2016
– ©David L Page 17/11/2016
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Pro Tools 12.4 update

Pro Tools 12 logo
The Pro Tools 12.4 update is a minor one relative to the Pro Tools 11 update, which saw a substantial change to the operating platform the DAW was built upon (from 32-bit to 64-bit). That change had a significant positive impact on the operating efficiency of the DAW; unfortunately, it also had a significant impact on all users, forcing them to wait on updates to third-party plug-ins. Some developers were very slow to respond, despite having been given a lead time of several years by the DAW manufacturers (the makers of Pro Tools and Logic Pro, to name two). I for one was disappointed at how few third-party manufacturers responded promptly, and at how long many took to finalise their updates. Some to this day have still not upgraded, forcing me to sacrifice my investment in their products and repurchase from alternative 64-bit providers.
However, as that was about twenty months ago, it is now time to welcome Pro Tools 12.

Changes with Pro Tools 12.4

The most exciting change from a Pro Tools user's perspective is that Pro Tools 12.4 now allows:
  •  128 tracks of audio and 512 instrument tracks  (in Pro Tools 12 native)
  • track input monitoring (in Pro Tools 12 native)
  • advanced metering  (in Pro Tools 12 native)
  • AFL/PFL solo mode option
  • MIDI to audio commit
However, there are more systemic changes in Pro Tools 12.4 – four levels of changes, in fact – that can result in greater efficiencies for Pro Tools users.
The first level of changes include:
  • Pro Tools subscriptions
  • AVID apps manager
  • In application plug-in purchase
Pro Tools 12.4 comes with the option of purchasing or subscribing to Pro Tools in a variety of ways, to accommodate the user's particular budget. Additionally, AVID now offers a free light version for aspiring Pro Tools users who either do not yet have the funds, or want to trial Pro Tools prior to purchasing it.
The AVID apps manager is a great innovation that allows your Pro Tools application and your AVID-based plug-ins to be automatically updated when you start up Pro Tools (provided you are connected to the internet). I have found it is best to let the AVID apps manager complete its updates prior to beginning a session.
PT12 Application Manager.png
The in-application plug-in purchase feature of Pro Tools 12.4 allows you to purchase or rent AVID plug-ins from within your DAW session. This is particularly useful when you are working with other Pro Tools users who send you their session to complete a task, but have used certain AVID plug-ins you do not own. The feature lets you navigate through the session drop-down menus to rent or purchase the particular plug-in your peer has included in the session, for as long as you need it.
The second level of changes include:
  • the use of templates for new sessions
  • new blank sessions
  • opening recent sessions
  • showing or hiding the dashboard at start-up
The new dashboard window in Pro Tools 12 replaces the Quick Start window of Pro Tools 11 and earlier. Whilst similar to the Quick Start menu in function, the new layout has two tabs – Create and Recent.
The Create tab allows you to create a new Pro Tools session, with the choice of starting from a template or from a new blank session, selecting the session parameters you need.
Some of the template sessions include Blues, Drum and Bass, and Dubstep.
PTs Dashboard
Additionally, under the Recent tab, you can choose to open one of your recent sessions.
This dashboard window can be bypassed at start-up by de-selecting it (check box in the bottom left-hand corner).
The third level of changes include:
  • metadata inspector window
The metadata inspector window allows you to update specific details of a Pro Tools session, such as the title, the artist's name, contributors, and session location. Other information, such as the sample rate, bit depth, date created, date modified and session bpm, is also listed in this window but cannot be edited.
The fourth level of changes include a range of I/O Setup Improvements. These are:
  • changes to the output and Bus pages
  • unlimited Bus Paths
  • subpaths for output paths
  • downmix and upmix output busses to outputs
  • monitor path
  • session interchange and I/O mapping
  • using keyboard modifiers when enabling or assigning output busses
  • audition path improvements
  • AFL/PFL path improvements
  • restore from session
  • I/O Settings files automatically created and reconciled for different playback engines
  • organise track I/O menus by preference
  • importing I/O settings from session files
  • I/O setup in session notes
Pro Tools 12.4 introduces unlimited bus paths. 24 bus paths are created by default, but additional paths can be added as needed.
Pro Tools 12 IO Setting window
The I/O Setup mappings are saved to both the system and the session, allowing for quick adaptation when opening a session with different I/Os from those it was created with. Pro Tools 12.4 allows the user to import I/O Settings from either the session file (.ptx) or an I/O Settings file (.pio), and then open the session in either the original I/O Setting configuration or the newly mapped configuration. Routing subpaths for outputs are now possible in the Setup/I/O window, as is the capacity to quickly downmix or upmix by mapping to alternative outputs when you are playing back your session on a different interface or console from the one you originally mixed on. Additionally, both the Monitor Path and the Audition Path will be automatically mapped when you play back your session on an alternative system. These changes enable greater mobility of Pro Tools sessions across multiple users and locations, with greater efficiency.
In Pro Tools 12 HD, there are AFL/PFL path improvements allowing any available output path to be used; for mismatched channel widths, the path is automatically downmixed or upmixed to the selected channel width.
References
AVID. 2015.  What’s new in Pro Tools version 12.4  New York: AVID
All other images courtesy of David L Page  Accessed 12th December, 2015
– ©David L Page 15/12/2015
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Post-Production Instrumental Editing and Processing Options

MIDAS Console_looking left
As a mix engineer, I expect you will receive a tracking session at some point in which you appraise the instrumental elements of the session as being in need of some work: perhaps some subtle work, or perhaps some extensive work. Options to address this are available by the spadeful, given the very large range of resources accessible to the practitioner.
However, what you need to do as the mix engineer at that point is make a quick decision: what extent of post-production instrumental editing or processing is required to achieve the desired musical or sonic effect for this production project? In this example I will focus on one of the essential instruments in contemporary music – the central element of the rhythm section, the drums. However, most of the options I cover below can be applied to other instrumental elements of a session, albeit with different sonic hardware and/or virtual applications.

Sound Repair, Reinforcement, Supplementation, Replacement

Sound repair, sound reinforcement, sound supplementation and sound replacement are terms that I have found aspiring audiophiles use interchangeably. However, they are different, offering different levels of solutions to different production problems at different times. I will introduce the essential differences between each, and outline a particular production scenario where each may be employed.
1. The entry level of post-production drum processing is known as sound repair. The term is usually restricted to minor editing using either manual or DAW-based editing functions. In Pro Tools, minor editing of drum tracks can be done using a combination of Beat Detective, Elastic Audio, or manual editing with the standard editing tools, your eye and, most importantly, your ear. Elastic Pitch can also be used for minor editing of melodic or harmonic instruments found to be slightly out of tune with the other instrumentation in the session. Whilst the term editing is primarily associated with cutting and moving audio files to address timing issues, I also include audio processing under the category of repair: manipulating the sonic qualities of the audio file in the spectral (equalisation, filters), dynamic (compression, limiters, gates and expanders) and time domains (reverberation, echo, delay, flanging, chorus, etc.).
2. The next level of post-production drum processing is known as sound reinforcement. This solution uses various methods to reinforce the original sound – usually by layering a tone underneath the original signal to compensate for its lack of tone. This production solution became very popular in the 1980s with disco music, which led into the early stages of EDM. In the 1990s, reinforcement was applied via devices such as the dbx 120A sub-harmonic synthesiser to reinforce the sub-harmonic frequencies of the production.

DBX_120A_Subharmonic Synth

  • In the current era, external devices such as the dbx 510 sub-harmonic synthesiser are still used as a means to reinforce the sub-harmonic frequencies (as shown below on the right-hand side of the 500 series rack). This option can be used for both corrective and creative purposes.

AE Project Studio Rig.20160601

  • These days it is usual in many forms of music to apply sound reinforcement with virtual devices, such as layering an in-the-box oscillator under the original signal to reinforce the original tone.
    3. The next level of post-production drum processing is known as sound supplementation. Products such as Wavemachine Lab's Drumagog and Steven Slate's Trigger were developed to allow the engineer/producer to add sonic texture to the original recording, supplementing or boosting sonic qualities considered deficient – timbre, frequency content or dynamic envelope. The deficiency could be due to one of several causes: an imperfect recording technique overall (for example, poor microphone placement, an ineffective microphone technique for the desired effect, or an unsuitable live room); an imperfect or ineffective microphone choice (the quality or condition of the microphone, or an unsuitable microphone type or polar pattern); an imperfect quality instrument or its tuning; or even an imperfect instrumentalist technique in the original recording. This option of post-production drum processing is usually used as a corrective measure, though not always – sometimes just to bring the original tone home somewhat more. It would be quite unusual in this era for a production not to incorporate some form of sound supplementation.

Wavemachine Labs Drumagog 5

Steven Slate Trigger 2.jpg
4. The final level of post-production drum processing is known as sound replacement. Sound replacement involves – as it sounds – replacing the original sound source with an alternative sound source. There are many options available in this era: Steven Slate's SSD, Toontrack's EZ Drummer, AIR Technology's Strike, and Native Instruments' many and varied drum instruments could all be suitable for your particular project. All of these virtual instruments use a sample system to replace the original track's audio. The underlying reasons to replace the original audio track are the same as those listed for supplementation above: an imperfect recording technique, microphone choice, instrument or performance. This option is primarily used as a corrective measure. It is essentially radical surgery: an emergency salvage when all has gone wrong and no other options exist, including the time to re-record in an urgent project. It is also used to create 'demos' prior to actual tracking. Alternatively, with time on your side as a producer, you may choose the best option: to re-record the original sound source. Whilst this is the most obvious option, there may be external factors that prevent it from being valid. A toy sketch of the trigger-based approach underlying these tools follows the images below.

Steven Slate SSD5

Toontrack EZ Drummer

AIR Instrument Strike
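Drumagog, Trigger and the virtual instruments above do far more than this (multisampling, velocity layers, phase alignment), but the core idea behind both supplementation and replacement – detect each hit, then layer or substitute a sample at that point – can be sketched in a few lines. This is a toy illustration only, assuming mono WAV files with hypothetical names and the python-soundfile library; adjust the threshold to taste.

```python
import numpy as np
import soundfile as sf   # pip install soundfile (assumption)

def detect_hits(x: np.ndarray, sr: int, threshold: float = 0.3,
                min_gap_s: float = 0.1) -> list[int]:
    """Crude onset detector: the first sample where the rectified signal
    crosses `threshold`, with a refractory gap so one hit fires once."""
    min_gap = int(min_gap_s * sr)
    hits, last = [], -min_gap
    for i in np.flatnonzero(np.abs(x) >= threshold):
        if i - last >= min_gap:
            hits.append(int(i))
            last = i
    return hits

drums, sr = sf.read("snare_track.wav")     # hypothetical original track (mono)
sample, sr2 = sf.read("snare_sample.wav")  # hypothetical replacement hit (mono)
assert sr == sr2

out = 0.5 * drums                          # supplementation: keep the original under
for i in detect_hits(drums, sr):
    end = min(i + sample.size, out.size)
    out[i:end] += 0.5 * sample[: end - i]  # layer the sample at each detected hit

sf.write("snare_supplemented.wav", out / (np.max(np.abs(out)) + 1e-12), sr)
```

Setting the original's gain to zero in the final sum turns the same sketch from supplementation into outright replacement.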
I expect that as a mix engineer you will receive a tracking session at some point in your career in which you appraise the drum elements as being in need of some work – perhaps some very subtle repair, some subtle reinforcement, or perhaps extensive work. With the options available in this era, you will need to make a quick decision: what extent of post-production drum processing is required to achieve the desired musical or sonic effect? Whether sound repair, reinforcement, supplementation or replacement, each level of post-production drum processing offers a different level of solution to a different production problem at a different time. It is up to you as the mix engineer or producer to understand the different stages of production and the needs of the particular mixing session, and to employ the most appropriate level of post-production drum processing to realise the desired effect.
References
AE Project Studio’s Rack image courtesy of: https://au.pinterest.com/pin/543739354993444064/ Accessed 10th December, 2015
AE Project Studio’s image courtesy of: https://au.pinterest.com/pin/543739354993444064/ Accessed 21st January, 2015
AIR Instrument’s Strike image courtesy of: http://www.airmusictech.com/product/strike-2  Accessed 21st January, 2015
DBX’s 120A image courtesy of: http://dbxpro.com/en/products/120a  Accessed 21st January, 2015
Pro Tools 12: http://www.avid.com/pro-tools  Accessed 21st January, 2015
Steven Slate’s SSD image courtesy of: http://www.stevenslatedrums.com/products/platinum/  Accessed 21st January, 2015
Steven Slate’s Trigger image courtesy of:  http://www.stevenslatedrums.com/trigger-platinum.php  Accessed 21st January, 2015
Toontrack’s EZ Drummer image courtesy of:  https://www.toontrack.com/product/ezdrummer-2/  Accessed 21st January, 2015
Wavemachine Lab’s Drumagog image courtesy of:  http://www.drumagog.com  Accessed 21st January, 2015
– ©David L Page 22/01/2015
– updated ©David L Page 10/12/2015
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Signal Flow Part 2

As developed in last week’s Signal Flow Part 1 [June 2013], Audio Engineering is an enjoyable technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals. Signal Flow is one of the core fundamentals. Understanding and practicing these three stages of Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
This week we will build upon the three stages of Signal Flow to Part 2, including Stage 4 and Stage 5.
Note: On a console designed primarily for recording/tracking, such as a Neve C75 or a Behringer Eurodesk SX4882, the Channel Path will by default be routed via the large/primary faders, and the Monitor Path will by default be routed via the small/Mix B faders or pots.
In contrast, on a console designed primarily for mixing, such as an Audient ASP8024 or ASP4816, the Channel Path will by default be routed via the small/secondary faders, and the Monitor Path will by default be routed via the large/primary faders.
Of course, one of the advantages of an in-line console is the flexibility in routing options, being able to switch which Path is routed to which faders as the need arises. Therefore, my references to the Channel and Monitor Paths and their respective faders are based on a console designed primarily for recording/tracking.

Audio Processing

Before we proceed to Stage 4 of Signal Flow, let's add some external processing into Stage 2 of the Recording process. In the Audio Industry there are accepted ways to route specific types of audio processing when using external audio processing hardware.
Audio Industry 8 Channel Studio Signal Flow.P10
The industry standard is to route signal from either the Insert Send/Return within a Channel strip, or via the Auxiliary Sends/Returns across a number of Channel strips. In circumstances where we only want to process the signal of one particular channel, we use the Insert Send/Return. This is usually the case when we want to apply spectral or dynamic processing to a particular signal, such as an individual instrument. When we want to apply particular processing across a number of channels or instruments, we use the Auxiliary Sends/Returns.
Firstly we must Send the raw signal from the Channel strip or strips (referred to as dry signal) to the external audio processing hardware.

Audio Industry 8 Channel Studio Signal Flow.P11

Once we have processed the signal (referred to as wet signal), we must return it to the particular Channel strip or strips on the console.

Audio Industry 8 Channel Studio Signal Flow.P13

We have now added processing to one or a number of the Channel strips on the console, within Stage 2 of the Recording process. Before we proceed, let's recap the Signal Flow.

Audio Engineering Signal Flow

Recording phase – Stage 1,2,3: 

Last week, in Audio Signal Flow stages one, two and three, I introduced capturing a sound source, routing the signal to the console, monitoring that sound source pre-tape (Channel Path), taking the signal to tape, recording it on the multi-track recorder (MTR), and then returning the signal to the console, where we had the opportunity to monitor it once more post-tape (Monitor Path).
Being able to monitor the path both pre-tape and post-tape allows the engineer to check all stages of the signal flow, ensuring there is good signal at good levels (not too low, not too high) and no extraneous sounds (noise, hum, buzz, crackle, hiss, etc.) that would be a problem to discover only after the recording phase is complete. Once we have successfully captured and recorded the sound source to the MTR (Recording phase), we progress to the next phase: Mixing, as part of the Post-Production process.

Mixing phase – Stage 3, 4, 5: 

Stage 3:

In the mixing phase, traditionally completed by a specialist engineer other than the recording engineer, the recorded tracks are routed from the tape back to the console. This stage equates to Stage 3 of the Recording phase, but with a more focussed intention. The purpose of this stage is to commence mixing the various tracks of audio into a blended, organised song, with both corrective and creative processing applied. The fundamentals, as available on most analogue consoles, include manipulating the gain levels, the stereo field via panning, and the spectral qualities via the console's equalisation and filters. Once we are satisfied that a balanced mix of all the instruments has been achieved, we need to capture this mix onto tape to be able to play it back at any time in the future.

Stage 4:

In the fourth stage – Stage 4 Signal Flow – the multiple signals are routed from the console to a device capable of recording this summed, balanced and processed signal as a single stereo track. This mix of the multiple channels is referred to as a Stereo Mix. (A numeric sketch of what this summing amounts to appears at the end of this stage.)

Audio Industry 8 Channel Studio Signal Flow.P14

In order to do this, we route the signal from the console's Master fader send…
Audio Industry 8 Channel Studio Signal Flow.P16
…to the DAW via an AD/DA interface (for example, Stereo Input 1+2)…
[Note: if the Studio has a patchbay setup, you will more than likely need to route this Stage 4 Signal Flow via that patchbay].

Audio Industry 8 Channel Studio Signal Flow.P18

This completes Stage 4, from the console to the MTR.
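For readers who prefer numbers to block diagrams, here is a minimal numpy sketch – my own illustration, not any console's actual circuitry – of what Stage 4's summing amounts to: each channel is scaled by its fader, split left/right by a pan law, and the results are added into a single stereo pair.

```python
import numpy as np

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power pan law: pan in [-1, +1], -1 = hard left, +1 = hard right."""
    theta = (pan + 1) * np.pi / 4
    return float(np.cos(theta)), float(np.sin(theta))

def stereo_mix(tracks, pans, faders):
    """Sum equal-length mono tracks into one stereo pair -- numerically,
    what routing the channels to the Master bus does in Stage 4."""
    left = np.zeros_like(tracks[0])
    right = np.zeros_like(tracks[0])
    for x, pan, gain in zip(tracks, pans, faders):
        gl, gr = pan_gains(pan)
        left += gain * gl * x
        right += gain * gr * x
    return np.stack([left, right], axis=-1)

# Toy example: two sine-wave 'tracks' panned apart at different fader levels.
sr = 48000
t = np.arange(sr) / sr
tracks = [np.sin(2 * np.pi * 110 * t), np.sin(2 * np.pi * 220 * t)]
mix = stereo_mix(tracks, pans=[-0.5, 0.5], faders=[0.8, 0.6])
print(mix.shape)  # (48000, 2): one second of Stereo Mix
```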

Stage 5:

Once we have done this, we need to prepare to return the signal to the console to monitor the Stereo Mix track. This is Stage 5 of the Mixing phase. So how do we return the signal to the console for monitoring this Stereo Mix track?
You will note the outputs of the tape device are already in use, returning the signal of the original mix to the console's Channel Strips (Stage 3 Monitor Path) so we can monitor the original mix. We therefore need an alternate way to route this Stereo Mix track Return back to the console, in order to monitor and confirm the final Stereo Mix track meets our technical and creative standards.
Whilst each studio will have its own particular routing and naming protocol, most consoles have what we refer to as a 2-Track monitoring function. 2-Track monitoring refers to monitoring the final Stereo Mix track Return – Stage 5 of the Mixing phase. As the mono outputs are occupied by Stage 3, we need an alternative routing option from the DAW via the AD/DA interface. This is usually done via a digital output such as ADAT or S/PDIF.
[Note: before we leave the DAW, we need to confirm the Stereo Mix levels in the virtual MTR are appropriate via Input Monitoring. If the levels are not appropriate (too high or too low), re-check your mix levels and adjust accordingly].

Audio Industry 8 Channel Studio Signal Flow.P20

In order to monitor this Stereo Mix signal on the console, you need to select the console's 2-Track function – the monitoring point for the final Stereo Mix track Return in Stage 5 of the Mixing phase.
[Note: each console manufacturer is likely to have their own naming protocol for this 2-Track monitoring function. For example, a number of more recent consoles refer to this final Stereo Mix monitoring function as DAW].

Audio Industry 8 Channel Studio Signal Flow.P21

Selecting this 2-Track function on the console will route the Stereo Mix signal to the monitors, allowing you to confirm the technical and creative merits of your final Stereo Mix. To confirm that you, as the Mixing Engineer, are actually monitoring Stage 5 (and not Stage 3): if you select the mute button on the Stereo Mix track within the DAW, the monitoring signal to the Control Room monitors should be cut. Deselect the mute button to continue monitoring the Stereo Mix signal.

Audio Industry 8 Channel Studio Signal Flow.P22

This completes the three stages of the Mixing phase – Stage 3, Stage 4 and Stage 5.
Audio Engineering is an enjoyable, technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals, and Signal Flow is one of the core fundamentals. Understanding and practicing the three stages of the Recording phase's Signal Flow and the three stages of the Mixing phase's Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
Reference
Images courtesy of:  David L Page  Accessed 3rd June, 2013
– ©David L Page 10/06/2013
– updated ©David L Page 14/02/2016
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Signal Flow Part 1

The fundamentals of Audio Engineering

Audio Engineering is dependent upon the key personnel – the engineers – understanding the fundamentals, and Signal Flow is one of the core fundamentals of audio engineering. Irrespective of the studio console, an experienced engineer who understands signal flow will adapt very quickly to each and every studio environment, regardless of whether the studio is analogue, digital or virtual in its equipment makeup.
At a very elementary level, there are two phases in Audio Engineering:
  1. The Recording phase, as part of the Production process. [NB: the Recording process is also referred to as tracking]
  2. The Mixing phase, as part of the Post-Production process
For the purposes of introducing aspiring audio engineers to the two phases, I am going to refer to the stages of the process as Stages 1, 2, 3, 4, 5 and so on. To commence this introduction, this week we will look at the three stages of the Recording phase.

Recording phase

Stage 1: 

In a simple signal flow scenario, a single sound is generated by a sound source and captured by a transducer, which converts the acoustic energy into electrical energy. The electrical signal is then passed down a microphone cable towards the single-channel console.
Where there is an eight-channel console or mixing desk, the engineer has the opportunity to capture eight sound sources simultaneously via eight transducers, each converting its acoustic sound source into electrical energy. The eight electrical signals are then passed down eight microphone cables towards the eight-channel console.
Audio Industry 8 Channel Studio Signal Flow.P2
This Stage 1 Signal Flow is known as the Channel Path, as the sound source is being routed down the Channel Strip on the console.
Note: On a console designed primarily for recording/tracking, such as a Neve C75 or a Behringer Eurodesk SX4882, the Channel Path will by default be routed via the large/primary faders, and the Monitor Path will by default be routed via the small/Mix B faders or pots.
In contrast, on a console designed primarily for mixing, such as an Audient ASP8024 or ASP4816, the Channel Path will by default be routed via the small/secondary faders, and the Monitor Path will by default be routed via the large/primary faders.
Of course, one of the advantages of an in-line console is the flexibility in routing options, being able to switch which Path is routed to which faders as the need arises. Therefore, my references to the Channel and Monitor Paths and their respective faders are based on a console designed primarily for recording/tracking.
Audio Industry 8 Channel Studio Signal Flow.P3.png

Stage 2:

In the second stage – Stage 2 Signal Flow – the signal is routed from the console to a device capable of recording it. Traditionally, this device was a magnetic tape machine. In the current era, it has largely been replaced by a digital virtual equivalent: a computer running an audio-specific application such as Pro Tools, known as a Digital Audio Workstation (DAW). For the analogue signal to flow successfully from the console to this virtual tape device, it must pass through an interface converting the analogue signal to a digital signal (an A/D converter).
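As an aside for those curious about what the A/D converter actually does to the signal, here is a minimal, idealised sketch – an illustration of the quantisation step only, not of any particular converter – showing how the bit depth sets the fineness of the levels the analogue voltage is rounded to, and hence the noise floor.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Round a float signal in [-1, 1] to the nearest of 2**bits levels,
    as an idealised A/D converter would."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

sr = 48000                                    # sample rate, Hz
t = np.arange(sr) / sr
analogue = 0.5 * np.sin(2 * np.pi * 440 * t)  # stand-in for the console output

for bits in (8, 16, 24):
    err = analogue - quantize(analogue, bits)
    snr = 10 * np.log10(np.mean(analogue ** 2) / (np.mean(err ** 2) + 1e-30))
    print(f"{bits:2d}-bit: quantisation SNR ~ {snr:5.1f} dB")
```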
Audio Industry 8 Channel Studio Signal Flow.P4.png
Of course, the signal from the tape device has to return to the console.
Audio Industry 8 Channel Studio Signal Flow.P5.png

Stage 3:

In the third stage – Stage 3 Signal Flow, the signal is routed back from the tape device to the console.
Audio Industry 8 Channel Studio Signal Flow.P6.png
This Stage 3 Signal Flow is known as the Monitor Path, as the sound source is routed through the secondary faders or Mix B of the console for the purposes of monitoring the signal. This signal is known as post-tape (i.e. after the tape device). Post-tape monitoring is considered the normal monitoring mode for recording/tracking.
Audio Industry 8 Channel Studio Signal Flow.P7.png
The final part of the recording signal flow is to add headphone monitoring in the live room for the artists, so that they can hear themselves as they are being recorded.
Audio Industry 8 Channel Studio Signal Flow.P8.png
This headphone mix is generally routed from the console's auxiliary functions, allowing an independent stereo mix of the eight channels being recorded/tracked. The industry standard for a headphone mix is post-tape, pre-fader. Once the headphone mix has been established, it can be tested via the talkback function on the console.
Audio Industry 8 Channel Studio Signal Flow.P9.png
Establishing immediate communication between the engineer in the control room and the artist in the live room is essential in a studio environment. Therefore, getting signal flow across the three stages, a headphone mix and talkback working needs to be the focus at the beginning of any tracking session, and achieved as quickly as possible.
Audio Engineering is an enjoyable, technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals, and Signal Flow is one of the core fundamentals. Understanding and practicing these three stages of the Recording phase's Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
Next week we will develop our fundamental Signal Flow knowledge into the Mixing phase (Stage 3, Stage 4 and Stage 5) [June 2013].
Reference
Images courtesy of:  David L Page  Accessed 2nd June, 2013
– ©David L Page 03/06/2013
– updated ©David L Page 13/02/2016
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.

Pro Tools User Tip #1

Tips when using Pro Tools on a general use computer

Pro Tools 12 image

When you are using someone else's computer, such as in a studio or general-use C-Lab environment, you are unlikely to be aware of what the previous person used Pro Tools for. They may, for example, have used an alternative interface and customised the routing (I/Os) within Pro Tools for their own specific use. It is therefore always advisable, when creating a new Pro Tools session or re-opening an existing one, to confirm the following:
  • Audio File Type: wav
  • Sample Rate: whatever your client's project requires – for example, 48 kHz
  • Bit depth: whatever your client's project requires – for example, 24 Bit
  • I/O Settings: Stereo Mix (not ‘Last Used’)
 Screen Shot 2016-03-02 at 6.05.59 am
Once you have the Pro Tools session open, can I suggest you check the following two sections in the Setup drop-down menu:

Setup/Playback Engine

PTs Drop down menu

  • Setup/Playback Engine: does it have ‘Built in Output’ selected, or is ‘Pro Tools Aggregate I/O’ selected because the previous person was using an alternative external interface? Without an interface, you need to have ‘Built in Output’ selected

PTs Playback Engine

  • Setup/Playback Engine: H/W Buffer Size: make sure you have the correct H/W Buffer Size selected for the type of session you are conducting – mixing or recording/tracking (the short sketch after this list shows the latency arithmetic behind these recommendations)
    • 1024 Samples (recommended for Mixing)
    • 64 Samples (recommended for Recording/Tracking)
  • When you have completed this task, close this display window by pressing the ok button in the bottom right-hand corner.
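The reasoning behind the two buffer-size recommendations is simple arithmetic. The sketch below is illustrative only – real round-trip latency also includes converter and driver overhead on top of the buffer delay:

```python
# One buffer of audio must be filled before the DAW can pass it on, so each
# pass through the engine adds at least buffer_size / sample_rate of delay.
# Small buffers keep tracking latency inaudible; large buffers give the CPU
# headroom for plug-in-heavy mixing, where latency does not matter.
SAMPLE_RATE = 48000  # Hz -- match your session

for buffer_size in (64, 128, 256, 512, 1024):
    ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:5d} samples -> {ms:5.1f} ms per buffer "
          f"(~{2 * ms:5.1f} ms round trip minimum)")
```

At 48 kHz, 64 samples is about 1.3 ms per buffer, while 1024 samples is about 21.3 ms – easily audible as a delay if you were tracking through it.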

    Setup/IOs:

PTs IOs
  • Setup/IOs: reset all of the I/O tabs (Input, Output, Bus, etc) by pressing the default button in the bottom left-hand corner of the I/O display window for each of the I/O tabs. If any routing changes had been made in the previous session, this action will reset all of the I/Os back to the original default setup. When you have completed this task, close this display window by pressing the ok button in the bottom right-hand corner.
PTs Setup IO View
These simple steps should become your standard operating procedure every time you create a new Pro Tools session or re-open an existing one on a general-use computer, or in a studio used by others. Having completed these simple steps, you can be confident that your Pro Tools session is correctly set up, ready for a successful mixing or recording/tracking session, minimising the chance of experiencing signal flow issues during your session.
If, however, having followed these simple steps you still have signal flow issues during your Pro Tools session, can I also suggest you check the following:
  • Overall session signal flow. Remember, Pro Tools is no different to a studio, it is just ‘in a box’. All of the rules of Signal Flow still apply
    • are your session inputs (in the ‘mix’ window) routed correctly?
    • are your session outputs (in the ‘mix’ window) routed correctly?
    • are all ‘mute’ buttons off?
    • are there any ‘solo’ buttons selected?
If you have checked all of the above suggestions but continue to have issues within the Pro Tools session, I suggest you consult the Studio Supervisors.
References
AVID Pro tools 12 image courtesy of:  https://www.avid.com/US  Accessed 11th January, 2016
All other images courtesy of David L Page Accessed 11th January, 2016
– ©David L Page 09/07/2012
– updated ©David L Page 13/01/2016
Copyright: No aspect of the content of this blog or blog site is to be reprinted or used within any practice without strict permission directly from David L Page.