This blog is a continuation of a series. See here for the previous blog (2016a).
Psychedelic Rock Music
At the outset of my research study, I imagined my creative practice was to be a five (5) track EP of original compositions. The song style was to be roots-based, a style I had been accustomed to writing over many decades. However, I wanted to ensure I was challenging my practitioner self, and therefore reflected on sub-genres I had not yet explored. Over the first few months of the pilot study, I found myself returning to my musical influences to garner some inspiration. As part of this process, I investigated and developed my understanding of the history of music production. As described in my blog here (2016a), I had always loved psychedelic rock but had no experience in producing that style. I therefore turned my focus to learning as much about this style of music as I could.
As part of this process, I began to experiment within the digital virtual environment, processing audio to arrive at a psychedelic aesthetic. This blog is a record of one of those experiments in sound processing techniques performed in software, rather than with the external hardware experimentation practitioners relied upon in the 1950s and 1960s.
I commenced with a recording of a Taylor 815ce acoustic guitar, capturing the sound with both a DI (via the built-in piezo pickup) and several contact microphones attached to the body of the guitar. The first track is a segment of an original acoustic recording – a sample, or sound event – with no processing applied.
In the nineteenth sample, I applied a digital virtual synthesis instrument-based processor to the sound event – Native Instruments’ Absynth 5, with the setting 808 Kick.
Whilst it was a fun practice task applying a range of digital virtual processing to the sound event sample – dynamic, spectral, time-domain and various combinations of these – I noticed that the processing alone did not inspire my creativity for another production project. The processing I applied was merely a set of colourful effects in my mind, not influential sounds that would ultimately shape the direction of a composition. As I continue to delve into this style and experiment with the multi-textured, complex layers of music and sound that characterise it, I will continue to investigate the various technologies and processing techniques advocates of psychedelic rock music used. I will likely explore external hardware technologies, and feel at this time I will need to be more inquisitive with less predictable processing options. I look forward to progressing my sonic compositions and sound designs using a range of technologies, and to this next chapter in my creative practice.
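The time-domain processing mentioned above can be illustrated with a minimal sketch. Below is a hypothetical feedback delay (echo) applied to a mono buffer of sample values, written in plain Python rather than any particular DAW or plug-in environment; the parameter values are illustrative only:

```python
def feedback_delay(samples, delay_samples, feedback=0.5, mix=0.5):
    """Apply a simple feedback delay (echo) to a mono sample buffer.

    delay_samples: delay time expressed in samples.
    feedback: proportion of the delayed signal fed back into the line.
    mix: dry/wet balance (0.0 = fully dry, 1.0 = fully wet).
    """
    out = []
    buf = [0.0] * delay_samples  # circular delay line
    idx = 0
    for x in samples:
        delayed = buf[idx]                  # read the delayed sample
        buf[idx] = x + delayed * feedback   # write input plus feedback
        idx = (idx + 1) % delay_samples
        out.append(x * (1 - mix) + delayed * mix)
    return out
```

Feeding an impulse through the line produces a decaying train of echoes, the basic building block of the tape-echo effects associated with the psychedelic era.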
This blog continues a series of blogs on Mixing (Page 2014).
As a mix engineer, you will at some point receive a tracking session in which you appraise the instrumental elements as being in need of some work: perhaps some subtle work, or perhaps some extensive work. Options to do this are available by the spadeful, given the very large range of resources accessible to the practitioner.
However, what you need to do as the mix engineer at that point in time is make a quick decision: what extent of post-production instrumental editing or processing is required in order to achieve the desired musical or sonic effect for this production project? In this example I will focus on one of the essential instruments in contemporary music – the central element of the rhythm section – the drums. However, most of the options I cover below can be applied to other instrumental elements of a session, albeit with different sonic hardware and/or virtual applications.
Sound repair, sound reinforcement, sound supplementation and sound replacement are terms that I have found aspiring audiophiles use interchangeably. However, they are different, offering different levels of solutions to different production problems at different times. I will introduce the essential differences between each, and outline a particular production scenario where each may be employed.
1. The entry level of post-production drum processing is known as repair. The term sound repair is usually restricted to minor editing using either manual or DAW-based editing functions. In Pro Tools, minor editing to drum tracks can be done using a combination of Beat Detective, Elastic Audio or manual editing using the standard editing tools provided, your eye and, most importantly, your ear. Elastic Pitch can also be used for minor editing of melodic or harmonic instruments when they are found to be slightly out of tune with the other instrumentation in the session. Whilst the term editing is primarily associated with cutting and moving audio files to address timing issues, I also include applying audio processing under the category of repair. This can include manipulating the sonic qualities of the audio file in terms of its spectral (equalisation, filters), dynamic (compression, limiters, gates and expanders) and time-domain (reverberation, echo, delay, flanging, chorus, etc) qualities.
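As a minimal illustration of the dynamic category above, here is a sketch of a noise gate in plain Python; the threshold value is purely illustrative:

```python
def noise_gate(samples, threshold=0.1, attenuation=0.0):
    """Attenuate samples whose absolute level falls below the threshold.

    With attenuation=0.0 the gate fully mutes low-level material
    (such as bleed between drum hits); values between 0 and 1 give
    the gentler behaviour of an expander.
    """
    return [x if abs(x) >= threshold else x * attenuation for x in samples]
```

Real gates add attack, hold and release times so the transition is not audible as a click, but the thresholding decision is the core of the process.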
2. The next level of post-production drum processing is known as sound reinforcement. This solution uses various methods to reinforce the original sound – usually a tone layered underneath the original signal to compensate for a lack of tone within it. This production solution became very popular in the 1980s with disco music, which led into the early stages of EDM. In the 1990s, reinforcement was achieved via devices such as the dbx 120A subharmonic synthesiser, used to reinforce the sub-harmonic frequencies of the production.
In the current era, external devices are still used such as the dbx 510 sub-harmonic synthesiser as a means to reinforce the sub-harmonic frequencies (as shown below on right-hand side of 500 series rack). This option can be used for both corrective or creative purposes.
(AE Project Studio 2015)
These days, with this style of processing – sound reinforcement – it is usual in many forms of music to use virtual reinforcement devices, such as layering an in-the-box oscillator under the original signal to reinforce the original tone.
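That in-the-box layering can be sketched in a few lines. The following illustrative Python generates a sine sub-oscillator at a nominated frequency and sums it under the original signal; the sample rate, frequency and level are hypothetical values:

```python
import math

def reinforce_with_sub(samples, rate=48000, freq=50.0, level=0.3):
    """Layer a sine sub-oscillator under the original mono signal.

    rate: sample rate in Hz; freq: sub-oscillator frequency in Hz;
    level: linear gain of the reinforcing tone.
    """
    return [x + level * math.sin(2 * math.pi * freq * n / rate)
            for n, x in enumerate(samples)]
```

In practice the oscillator would usually be keyed or triggered by the kick rather than running continuously, but the summing principle is the same.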
3. The next level of post-production drum processing is known as sound supplementation. Products such as Wavemachine Lab’s Drumagog and Steven Slate’s Trigger were developed to allow the engineer/producer to add sonic texture to the original recording, supplementing or boosting sonic qualities that are considered deficient – such as timbre, frequency content or dynamic envelope. The deficiency could be due to one of several reasons: an imperfect recording technique overall – for example poor microphone placement, poor or ineffective microphone technique for the desired effect, or a poor or ineffective live room for the desired effect, to name a few; imperfect or ineffective microphones for the desired effect – whether the actual quality or condition of the microphone, or an unsuitable microphone type or polar pattern choice; an imperfect quality instrument or tuning; or even an imperfect instrumentalist technique in the original recording. This option of post-production drum processing is usually, though not always, used as a corrective measure, to bring the original tone home somewhat more. It would be quite unusual in this era for a production not to incorporate some form of sound supplementation.
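The triggering at the heart of tools such as Drumagog and Trigger can be sketched as follows. This illustrative Python detects hits by a simple threshold crossing and layers a hypothetical supplement sample on top of each one; real products use far more sophisticated detection:

```python
def trigger_points(samples, threshold=0.5, hold=4):
    """Return indices where the signal crosses the threshold (a simple drum trigger).

    hold suppresses re-triggering for a few samples after each detected hit.
    """
    hits, cooldown = [], 0
    for n, x in enumerate(samples):
        if cooldown:
            cooldown -= 1
        elif abs(x) >= threshold:
            hits.append(n)
            cooldown = hold
    return hits

def supplement(samples, sample_hit, threshold=0.5):
    """Layer a supplement sample on top of the original at each detected hit."""
    out = list(samples)
    for n in trigger_points(samples, threshold):
        for k, s in enumerate(sample_hit):
            if n + k < len(out):
                out[n + k] += s  # sum under the original, preserving it
    return out
```

Changing the `+=` in the inner loop to a plain assignment discards the original at each hit, which is the essence of outright replacement, the fourth level discussed below.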
4. The final level of post-production drum processing is known as sound replacement. Sound replacement involves – as it sounds – replacing the original sound source with an alternative sound source. There are many options available in this era: Steven Slate’s SSD, Toontrack’s EZdrummer, AIR Technology’s Strike, and Native Instruments’ many and varied drum instruments could all be useful and suitable for your particular project. All of these virtual instruments use a sample system to replace the original track’s audio file. The underlying reasons to replace the original audio track mirror those outlined above for sound supplementation: an imperfect recording technique overall; imperfect or ineffective microphones for the desired effect; an imperfect quality instrument or tuning; or an imperfect instrumentalist technique in the original recording. This option of post-production drum processing is primarily used as a corrective measure. It is essentially radical surgery: an emergency salvage used when all has gone wrong and no other option exists – including the time to re-record, in the instance of an urgent project – or used to create ‘demos’ prior to actual tracking. Alternatively, with time on your side as a producer, you may choose the best option: to re-record the original sound source. Whilst this is the most obvious option, there may be external factors that prevent this obvious choice from being a valid option.
I expect that as a mix engineer you will receive a tracking session at some point in your career in which you appraise the drum elements as being in need of some work – perhaps some very subtle repair work, some subtle reinforcement, or perhaps some extensive work. With the options available in this era, you will need to make a quick decision: what extent of post-production drum processing is required in order to achieve the desired musical or sonic effect? Whether sound repair, sound reinforcement, sound supplementation or sound replacement – each level of post-production drum processing offers a different level of solution to a different production problem at a different time. It is up to you as the mix engineer or producer to understand the different stages of production and the needs of the particular mixing session, and to employ the most appropriate level of post-production drum processing with which to realise the desired effect.
This blog continues a series of blogs on Mixing (Page 2014).
Developing a skillset by following a process
I was presenting to a group of Bachelor of Audio Trimester 2 students, preparing for the mixing stage of their final creative productions. In reflection, I felt a similar sentiment to my blog last year, “Effectively guiding creative artists through a task: process” [May 2014]. In summary, the key elements are:
Mixing is a process…
1) Yes, mixing is individual, but it is an individual process based on developed workflows accumulated over many hours of experience…
2) The ONLY thing that separates different mixing engineers is their perspective – what they are aiming to achieve. What are you aiming to achieve in your mix? Can you articulate that aim clearly? Have you nominated a reference track as your guide through this process?
3) Genre dictates in large part what workflow you choose. The mix process (workflow) you choose should be congruent to the genre.
4) For every workflow choice, there is an upside and a downside. What provides you the most benefit, with the fewest negatives, for your desired aim and outcome?
5) There are no rules, BUT like all things technical and creative, there are fundamentals that you need to develop – learn, and practice – BEFORE you attempt to discard them. In the previous blog (Page 2014), I proposed Owsinski’s mixing elements as a very worthy guide.
6) There is a diverse range of mixing approaches put in front of you, with diverse perspectives, views, and workflows. There is NO single correct workflow. You are shown options, for you to decide for yourself what workflow will work for you. If in doubt, though, because you have not yet had the time to develop this skill thoroughly, then please consult Owsinski’s mixing elements as a very worthy guide (see point 5).
It is intended for this blog to continue in a series of Mixing blogs here.
This blog continues a series of blogs on Mixing (Page 2014a).
Guiding Creative Artists: steps along a path
To walk down any path, it is usual to be sequential in that process. If you want to get to z, it is usual (but not always) appropriate to progress through each of the letters to get there: a, b, c, d, e, f, g, h, etc. By following the suggested steps – a series of steps to follow, observing what is around you – you can generally arrive at your destination in a timely manner. As you develop, it is of course important to sometimes stop, or even deviate from the set path, and to experience what occurs when you attempt a, b, c, d, etc in a natural logical sequence, and when you do not follow such a sequence. What is the result of following a before b, b before c, etc – or not?
As we get to the business end of yet another Trimester, I observe our Creative Media students again getting quite anxious in their attempts to mix their final productions. I observe that most are yearning for what is actually a very straightforward process: following the suggested steps – a series of steps provided across several Modules to follow and experiment with – observing what occurs when they attempt a, b, c, d, etc, and understanding that in many instances, a must come before b, b before c, etc.
Whether myself, my peers or my students – in a studying phase or in our professional lives – it seems common amongst most of us to want some guiding process or sequence that we can initially follow, at least until we get comfortable with the task, then feel confident enough to fly on our own, and then perhaps self-empowered enough to customise the process into our own unique workflow. Not necessarily a process in specifics, or what could be classified as micro-management, but a process in terms of the introduction of a concept for global understanding, with a series of logical steps that allow the task to be realised at least all the way through.
Over recent weeks, as I have again introduced a group of novice audio production students to the art and technique of mixing in a Tri 1 Bachelor’s unit, I was reminded of how overwhelming such a task is. The most common initial questions continue to be: “how do I EQ?”, “what do I compress?”, “what EQ works on a kick drum?”, and “does EQ come before compression?”, to name a few.
Leading audio production author Owsinski outlines in his book “Modern Mixing Techniques” the steps and elements of mixing that he sees as common to all mixes:
session set up,
balance,
panorama,
frequency range,
dimension,
dynamics, and
interest (Owsinski, 2013).
In my observation of those far more experienced in the audio industry, it is usually the fundamentals of: deciding upon a reference track, setting up your session, and setting your gain structure and balance first, that get the most immediate attention. The completion of these fundamentals, allowing the mix engineer to progress through a workflow, seems to be consistent to successful mixing sessions. In fact, it is quite often the case with a well played and recorded session, that once the fundamentals have been completed, the experienced mix engineer may only need to use minimal audio processing at best, because the mix is already sitting nicely where it is, with all of the instruments placed within their own space, at good levels, negating the need for further attention and processing.
In contrast, the novice will generally overlook these fundamentals, eager to dive into what they perceive as mixing: applying audio processing, inserting as many plug-ins as they can, and turning the EQ and Dynamic pots to extremes until they achieve obvious changes in sonic qualities. However, quite often as a result, the overall gain structure and stereo image is adversely affected, presenting a range of other issues within the mix such as clipping, distortion, a raised noise floor, lack of clarity, masking and possibly also unacceptable degradation of the audio quality: quite possibly, the exact opposite of what they were trying to achieve from the outset.
At this point, it is not uncommon for the novice to exhibit a range of responses to their first mix task attempt: confusion, overwhelm, becoming debilitated by the task at hand, or panic – immediately re-entering the deep end, randomly pushing more buttons, ‘knob twirling’, and adding even more audio processing devices in an attempt to fix what they have created.
And yet, when they next have the opportunity of observing an experienced mix engineer approaching a mix task, what they are likely to witness is someone proceeding through a flurry of steps, moving swiftly, effectively and efficiently through a series of sub-conscious moves, as they progress through their customised workflow, developed over many hours, and countless mix session tasks. Essentially, the experienced mix engineer will have a clear goal of what they are trying to achieve and a clear roadmap of how they are going to achieve it.
I can guarantee such experienced activity necessitates commencing with the fundamentals.
In my observation, a novice creative artist may not understand the need for, or function of, having a clear goal of what they are trying to achieve before they commence the task. If you do not know what you are trying to achieve, you will probably end up somewhere other than where you wanted to. And whilst this may not be problematic in a free-flowing creative situation, it will not assist the creative artist if they need to work to a brief for an external client (quite possibly merely their Lecturer, for the achievement of an assessment task).
Additionally, a novice creative artist may not have a clear roadmap of how to get to their goal. The fundamentals provide this: a series of steps that allows the artist to progress sequentially through the task at hand, increasing the possibility of an outcome in the vicinity of what they were attempting to achieve. I have found that the questions that assist at this stage are process-based questions such as: “now, what do I need to do first?”, “then?”, “then?”
I have also found a secondary benefit that this fundamental stage facilitates. It seems to assist the novice creative artist to catch their breath, ground themselves and focus before they immerse themselves into the technical and creative task at hand. Once they commence the task, the goal should be to move effectively and efficiently through a series of steps to a desired outcome (‘goal’).
The fundamentals are an ideal place for the novice creative artist to commence a technical and creative task, at least until they can get comfortable with the task, then feel confident enough to be able to fly on their own, and then self-empowered enough possibly to customise the process into their own individual unique workflow.
It is intended for this blog to continue in a series of Mixing blogs here (Page 2014b).
As developed in last week’s Signal Flow Part 1 [June 2013], Audio Engineering is an enjoyable technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals. Signal Flow is one of the core fundamentals. Understanding and practicing these three stages of Signal Flow until committed to muscle memory is essential to the development of the aspiring engineer.
This week we will build upon the three stages of Signal Flow to Part 2, including Stage 4 and Stage 5.
Note: On a console that is designed primarily for recording/tracking, such as a Neve C75 or a Behringer Eurodesk SX4882, the Channel Path will by default be routed via the large/primary faders, and the Monitor Path will by default be routed via the small/Mix B faders or pots.
In contrast: On a console that is designed primarily for mixing, such as an Audient ASP8024 or an ASP4816, the Channel Path will by default be routed via the small/secondary faders, and the Monitor Path will by default be routed via the large/primary faders.
Of course, one of the advantages of an in-line console is to have the flexibility in routing options and being able to switch what Path is routed to the faders as the need desires. Therefore, my reference to the Channel and Monitor Path and their respective faders are based on a console that is designed primarily for recording/tracking.
Before we proceed to Stage 4 of Signal Flow, let’s add some external processing into Stage 2 of the Recording process. In the Audio Industry there are accepted ways to route specific types of audio processing when using external audio processing hardware.
The industry standard is to route signal from either the Insert Send/Return within a Channel strip, or via the Auxiliary Sends/Returns across a number of Channel strips. In circumstances where we only want to process the signal of one particular channel, we use the Insert Send/Return. This is usually the case when we want to apply Spectral or Dynamic processing to a particular signal, such as an individual instrument. In instances where we want to apply particular processing across a number of Channels or instruments, we use the Auxiliary Sends/Returns.
Firstly we must Send the raw signal from the Channel strip or strips (referred to as dry signal) to the external audio processing hardware.
Once we have processed the signal (referred to as wet signal), we must return the processed signal back to the particular Channel strip or strips on the console.
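The difference between the two routing conventions can be modelled in a few lines of illustrative Python. Here `processor` stands in for any external hardware unit, and the channel buffers are hypothetical sample lists:

```python
def insert_process(channel, processor):
    """Insert routing: the processor sits in-line on one channel strip,
    so the returned signal is 100% wet."""
    return [processor(x) for x in channel]

def aux_process(channels, sends, processor, return_level=1.0):
    """Auxiliary routing: each channel sends a copy of its signal to a
    shared bus feeding one processor; the wet return is summed back in
    alongside the unchanged dry channels."""
    length = len(channels[0])
    bus = [sum(ch[n] * send for ch, send in zip(channels, sends))
           for n in range(length)]
    wet = [processor(x) * return_level for x in bus]
    dry = [sum(ch[n] for ch in channels) for n in range(length)]
    return [d + w for d, w in zip(dry, wet)]
```

This is also why dynamics and EQ usually sit on inserts (you want only the processed signal), while reverbs and delays sit on auxiliaries (you blend the wet return against the dry signal, and one unit can serve many channels).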
We now have added processing to either one or a number of the Channel strips on the console, within Stage 2 of the Recording process. Before we proceed, let’s recap on the Signal Flow.
Audio Engineering Signal Flow
Recording phase – Stage 1,2,3:
Last week in Audio Signal Flow stage one, two and three, I introduced the capturing of a sound source, routing the signal to the console, monitoring that sound source pre-tape (Channel Path), taking this signal to tape, recording it on the multi-track-recorder (MTR), and then returning the signal back to the console where we had the opportunity to monitor the signal one more time post-tape (Monitor Path).
Being able to monitor the signal both pre-tape and post-tape allows the engineer to check all stages of the signal flow, ensuring that there is good signal at good levels (not too low, not too high) and no extraneous sounds (such as noise, hum, buzz, crackle, hiss, etc) that would be a problem to discover only once the recording phase is complete. Once we have successfully captured and recorded the sound source to the MTR (Recording phase), we need to progress to the next phase: Mixing, as part of the Post-Production process.
Mixing phase – Stage 3, 4, 5:
In the mixing phase, traditionally completed by a specialist engineer other than the recording engineer, the recorded tracks are routed from the tape back to the console. This stage equates to Stage three of the Recording phase, but perhaps with a more focussed intention. The purpose of this stage is to commence mixing the various tracks of audio into a blended, organised song, with both corrective and creative processing applied. The fundamentals, as available on most analogue consoles, include manipulating the gain levels, the stereo field via panning, and the spectral qualities via the equalisation and filters on the console. Once we are satisfied that a balanced mix of all of the instruments has been achieved, we need to capture this mix onto tape to be able to play it back at any time in the future.
In the fourth stage – Stage 4 Signal Flow, the multiple signals are routed from the console to a device that is capable of recording this summed balanced and processed signal into a single stereo track. This mix of the multiple channels is referred to as a Stereo Mix.
In order to do this, we route the signal from the console’s Master fader send…
… to the DAW via an AD/DA interface (for example Stereo Input 1+2).
[Note: If the Studio has a patchbay setup, you are more than likely in need of routing this Stage 4 Signal Flow via that patchbay].
This completes Stage four, from the console to the MTR.
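The summing that happens in Stage 4 can be sketched as follows: each mono track is scaled by its fader gain, positioned with constant-power panning, and accumulated into a single stereo pair. This is illustrative Python, not how any particular console implements its mix bus:

```python
import math

def stereo_mix(tracks, gains, pans):
    """Sum mono tracks to a stereo pair using constant-power panning.

    pan ranges from -1.0 (hard left) to +1.0 (hard right).
    """
    n = len(tracks[0])
    left = [0.0] * n
    right = [0.0] * n
    for track, gain, pan in zip(tracks, gains, pans):
        theta = (pan + 1) * math.pi / 4      # map pan to 0..pi/2
        gl, gr = math.cos(theta), math.sin(theta)
        for i, x in enumerate(track):
            left[i] += x * gain * gl
            right[i] += x * gain * gr
    return left, right
```

Constant-power panning keeps a source at roughly the same perceived loudness as it moves across the stereo field, which is why most consoles use it rather than a simple linear crossfade.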
Once we have done this, we need to prepare to Return the signal back to the console for monitoring the Stereo Mix track. This is Stage 5 of the Mixing phase. So how do we Return the signal to the console for monitoring this Stereo Mix track?
You will note the tape’s Outputs are already being used, returning the signal of the original mix to the console’s Channel Strips (Stage 3 Monitor Path) in order for us to monitor the original mix. So we need an alternate way to route this Stereo Mix track Return back to the console, in order to monitor and confirm that this final Stereo Mix track meets our technical and creative standards.
Whilst each studio will have its own particular routing and naming protocol, most consoles have what we refer to as a 2-Track monitoring function. 2-Track monitoring refers to the monitoring of the final Stereo Mix track Return – Stage 5 of the Mixing phase. As the multitrack outputs are occupied by Stage 3, we need an alternative routing option from the DAW via the AD/DA interface. This is usually done via a digital output such as an ADAT or an S/PDIF output.
[Note: Before we leave the DAW, we need to confirm the Stereo Mix levels in the virtual MTR are appropriate via Input Monitoring. If the levels are not appropriate (too high or too low), you need to re-check your mix levels, and adjust accordingly].
In order to monitor this Stereo Mix track signal on the console, you need to select the 2-Track function on the console, in order to monitor this final Stereo Mix track Return – Stage 5 of the Mixing phase.
[Note: each console manufacturer is likely to have their own naming protocol for this 2-Track monitoring function. For example, a number of more recent consoles refer to this final Stereo Mix monitoring function as DAW].
Selecting this 2-Track function on the console will route the Stereo Mix signal to the monitors for confirming the technical and creative merits of your final Stereo Mix. To confirm that you as the Mixing Engineer are actually monitoring Stage 5 (and not Stage 3): if you select the mute button on the Stereo Mix track within the DAW, the monitoring signal to the Control Room monitors should be cut. Deselect the mute button to continue monitoring this Stereo Mix signal.
This completes the 3 stages of the Mixing phase – Stage 3, Stage 4 and Stage 5.
Audio Engineering is an enjoyable technical and creative pursuit. It is dependent upon the engineer understanding the fundamentals, and Signal Flow is one of the core fundamentals. Understanding and practicing the three stages of the Recording phase’s Signal Flow and the three stages of the Mixing phase’s Signal Flow until they are committed to muscle memory is essential to the development of the aspiring engineer.
Images courtesy of: David L Page Accessed 3rd June, 2013
As outlined in my Pre-Production Plan (Page 2010) blog several months ago, let’s return to the basics. It is the goal of the audio industry to facilitate the realisation of recorded artefacts – recorded and then distributed in the mediums of shellac or vinyl records, magnetic tapes, compact discs, and now more commonly as WAV or MP3 files. Within the process of producing these artefacts, there are considered to be three stages of the production process:
The pre-production stage;
The production stage;
The post-production stage.
Whilst there are blurred lines between several of these stages depending upon what musical style (genre) one is working within, the acoustic style recording process essentially adheres to the following three stages:
The pre-production stage is about planning for the production of the artefact
The production stage is the actual recording process of the artefact. This process is commonly referred to as tracking
The post-production stage follows the tracking stage, preparing each track so that it is well balanced in terms of instrumentation, levels, frequency and dynamics – in preparation for the final step of this stage – the mastering process – prior to the artefact being released to the public
At the heart of the production lies the actual reason for the artefact – the song or composition. This focus – the main reason for the production process – needs to be maintained throughout the creation of the artefact. It must not become secondary to the process. It is often said that to make a great song, you need to have:
a great song;
a great performance;
a great recording;
a great mix;
and great mastering.
Step 1: the artist creates a song in terms of melody, harmony, rhythm, counterpoint melodies, counter-point rhythms and instrumentation – creativity following both technical and musical theory guidelines. This is then practiced and adjusted or moulded as required.
Step 2: the song then needs to be arranged, with appropriate instrumentation relative to the musical style (genre) of the song.
Step 3: once the song is considered to be finished, it needs to be recorded. An essential aspect of the production process is the performance of the musicians used in the tracking process. If the musicians are both technically proficient and aesthetically sensitive, then there is hope that the song could be captured as the songwriter or composer had intended the song to be. A great song needs to be performed in the best possible way in order for that song to stand and be considered as it was intended.
Step 4: Whilst there are a number of approaches to the recording process, one usual approach is to capture the instruments as they were intended to sound. In this approach, the recording process is fundamentally a technical one, the objective being to capture the song as close as possible to the pure or natural sound or tone of the instrumentation being recorded. Some creativity may be used to achieve pre-desired effects, but the fundamental task remains capturing the pure or natural tone as played. Note: if your offer to an artist is ‘to record them’, then this is the step at which you provide them whatever assets you have as a result of the recording, or production, process. To go to the next step is to provide them ‘a mixed song = a produced song’, an entirely more complex and time consuming process…
Step 5: Once this recording, or production, process is complete – i.e. no further recording is required – the song needs to be mixed.
The Mixing Process
It is the role of the mixing engineer to take all of the recorded tracks and commit them to a final mixdown, realising in a balanced manner the pre-agreed qualities of the required end goal – the sum total of these tracks as part of the cultural production artefact. This is the goal – the target – of mixing.
Mixing involves working on each track independently. Mixing is a constructive, engineering process, considering each of the recorded sound source elements – usually, but not necessarily, the instruments – that have been captured to tape as part of the tracking process. As outlined by Owsinski (2013), the mixing process includes adjusting the amplitude levels and panning of each instrument within each of these tracks, allowing each instrument to be heard within the balance of the mix – to sit within its own space – spectrally and dynamically. The mixing engineer usually also adds processing – sonic and automation – to the recorded tracks to embellish them sonically and musically.
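The amplitude adjustment described above is usually expressed in decibels on a fader. The conversion to a linear multiplier can be sketched in illustrative Python:

```python
def db_to_gain(db):
    """Convert a fader value in decibels to a linear gain multiplier."""
    return 10 ** (db / 20)

def apply_fader(samples, db):
    """Scale a buffer of samples by a fader setting in dB."""
    g = db_to_gain(db)
    return [x * g for x in samples]
```

Pulling a fader down by 6 dB roughly halves the signal’s amplitude, which is why small fader moves go a long way when balancing a mix.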
The mixing process therefore is both a technical and a creative process.
The mixing process is considered to be a technical process in terms of any auditory or sonic corrections required from the recording (tracking) process. This is usually required if for any reason there are deficiencies in either the recording or performance equipment; or the tracking or performance process. A mix engineer may assess the recorded assets as being deficient in either amplitude, their stereo position, or the actual recorded frequency range of the particular instruments.
The mixing process is considered to be a creative process as the mix engineer can influence the final tracks, determining the relationship between the multiple musical and sonic sound sources captured within the one track. Moylan (1992) agrees that mixing is both a technical and an aesthetic process. Mixing requires the creation of a sound stage – width, depth and height – that is congruent to the genre, but also allows the mix engineer to transport the listener to a land that they imagine, an environment that is congruent for the artist and the cultural production to exist within. A good mixing engineer arranges the sound sources in a way that creates moments of interest for the listener, engaging them without them necessarily being conscious of the manipulation. Sometimes the mixing engineer may decide it is necessary to take what could have been a pure or natural sound or tone, and dress it up for the desired outcome – usually, for an audience’s entertainment. The degree of dress-up used by a mixing engineer should be guided by the pre-agreed qualities of the required end goal; but often an experienced mix engineer is recruited precisely for their experience and creativity. In this case, they will use techniques to create a soundscape – breadth, height, depth and dynamics – along with a range of processing and automation techniques to add colour, interest, texture and space, to highlight the existing or original song hooks, and also to add a range of extra hooks. There are, after all, many options in the creative process.
However, as mixing involves working towards the end-goal of a cultural production in which all of the sounds and tracks work as a homogenous whole, it is important that the mix engineer progressively builds the mix to a pre-agreed aesthetic. In order to facilitate this, it is necessary to use a reference track to guide all of the participants at every step of the production process. The reference track suggests what type of song is being produced, the musical style (genre), the balance the finished tracks should achieve, and the degree of production creativity that is appropriate to include in the particular track. The pre-agreed reference track is especially important within the mixing stage: the mixing engineer will use it as an agreement with the person directing the production process (the key stakeholder – possibly the song-writer, artist, manager or record label owner) as to what the end-goal, the cultural production, will sound similar to – musical style-wise, musically and sonically.
It is intended for this blog to continue in a series of Mixing blogs here (Page 2014).