Experience.
This page shows what I can offer you. It details the experience, knowledge, skills and abilities I have acquired and polished while working in the sound production industry with many different bands and people.
Use of EQ, Side-chain compression and Time Domain Effects on the Teddy Picker Session

On the left you can see a screenshot of a session I mixed: Teddy Picker, by the Arctic Monkeys. If you want to listen to the song it is on the projects page, and any extra information on the session can be found on the blog. During this session I was having trouble with the bass and kick fighting each other. To fix it, I put a filter on the bass to cut everything below 60Hz, which gave the kick more space in the mix. I then boosted 120Hz on the bass for low-end and dipped around 300-400Hz to take out some of the muddiness. For extra string attack, I added a small shelf from 3kHz upwards. This gave me a very clean and deep bass sound, exactly what I wanted for this mix. Next, I EQ'd the kick drum, as seen in the second photo. On the kick I boosted 60Hz, where I had created space in the bass, which made the kick a lot punchier. I then dipped around 100Hz to take out unwanted muddiness, and boosted 3-5kHz to bring out the beater hitting the skin of the drum. This gave me a well-rounded sound and good separation between the two instruments. To further help with the clashing issue, I used side-chain compression on the bass: the bass dips in level whenever the kick hits, letting the kick punch through.
To set this up, I put a compressor on the bass track and fed its side-chain input from the kick drum. I then set a ratio of 4:1 and an attack of 0.39s. I chose this attack as it is fast enough to catch the kick, dipping the bass quickly and letting the kick punch through, without being so fast that it killed off the low-end of the bass. I set my release to 294ms. This held the bass down long enough for the kick to get through, but not so long that the bass would layer over itself and sound muffled, unclear and powerless. Thanks to the combination of side-chain compression and EQ, the balance between the instruments in this track is very good.
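To make what the side-chain is doing concrete, here is a rough Python sketch of the maths a side-chain compressor performs: an envelope follower on the kick, and gain reduction on the bass above a threshold. This is an illustration rather than my actual DAW setup; the threshold and timing values are made up for the example (only the 4:1 ratio matches the session).

```python
import numpy as np

def sidechain_compress(bass, kick, sr, threshold_db=-20.0, ratio=4.0,
                       attack_s=0.005, release_s=0.2):
    """Duck `bass` whenever the `kick` signal exceeds the threshold."""
    # One-pole smoothing coefficients derived from the attack/release times.
    att = np.exp(-1.0 / (sr * attack_s))
    rel = np.exp(-1.0 / (sr * release_s))
    env = 0.0
    out = np.empty_like(bass)
    for i, k in enumerate(np.abs(kick)):
        # Envelope follower: rise with the attack coeff, fall with release.
        coeff = att if k > env else rel
        env = coeff * env + (1.0 - coeff) * k
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Gain reduction above the threshold, scaled by the ratio.
        over = max(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)
        out[i] = bass[i] * 10.0 ** (gain_db / 20.0)
    return out

# Demo: steady bass with the kick hitting in the middle of the clip.
sr = 1000
bass = np.ones(sr)
kick = np.zeros(sr)
kick[200:400] = 1.0
ducked = sidechain_compress(bass, kick, sr)
```

While the kick is silent the bass passes through untouched; while the kick plays, the bass is pulled down, exactly the ducking effect described above.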
A different example of compression can be seen on the kick drum. This is an example of holding compression, which I used to better control the dynamic range of the kick. To set it up, use a fast attack and a slower release, just taking the top off the signal. This will give you a more consistent kick sound.
This is the guitar track from the Teddy Picker session I mixed. I thought I'd explain the post more here, as it is already on my Instagram (link below).
First of all, EQ. On this EQ you can see that I boosted around 100Hz, as this is where most of the bottom end of the guitar sits. After this, to take out some of the mud without sacrificing the warmth of the guitar, I attenuated around 300-500Hz. Finally, I boosted around 2-5kHz for the pick sound, which is quite present throughout, just as I wanted. This boost also pushes the guitars to the front of the mix.
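For anyone curious what a boost or cut like those above means under the hood, here is a sketch of a single peaking EQ band built from the widely published RBJ Audio EQ Cookbook biquad formulas. The sample rate, centre frequency and Q are example values, not the exact settings from this session.

```python
import numpy as np

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a peaking EQ band (RBJ Audio EQ Cookbook)."""
    a = 10.0 ** (gain_db / 40.0)              # square root of the linear gain
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]           # normalise so a0 == 1

# A 3dB boost at 100Hz: the response at the centre frequency is exactly +3dB.
b, a = peaking_eq_coeffs(fs=48000, f0=100.0, gain_db=3.0)
z = np.exp(1j * 2 * np.pi * 100.0 / 48000)
h = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
gain_at_centre_db = 20 * np.log10(abs(h))
```

Evaluating the filter's response at the centre frequency recovers the requested gain, which is the defining property of a peaking band.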
You can see another example of guitar EQ on this track on the GTR 3 highlighted photo.
In this mix I used different types of automation:
- Panning automation
- Level automation
- Aux send automation
- Plugin automation
I used panning automation on guitar one in order to widen the guitar. As explained in the blog, I did this because it left more space for the solo to shine through in the track. I also used level automation to bring the level down on the guitar one track, which let the solo become the focal point of the song.
I used level automation on other parts of the song as well. There are a few small guitar phrases in the song I felt were lead parts and needed to be emphasized, so I used this automation to push them through the track a bit more.
For the aux send automation, I used an aux reverb on the guitar (explained in the time domain effects section below), which activated when the solo came in. I did this as it gives the guitar a nice tone and a bit more space in the mix without interrupting the solo.
Lastly on automation, I used two types of plug-in automation on guitar No.1: an overdrive and a compressor. I used the overdrive because I felt the lead guitar would sound nice slightly broken up while the solo is playing, making guitar one feel grittier and giving the track a nice overall feel. The compressor on top of this was simply to control the dynamic range of the guitar during this section of the song, as the overdrive added gain. To control the dynamic range of the instrument I used a short attack and longer release with a ratio of 4:1. This holds the signal, but not so long that the presence of the guitar is taken away. Essentially, holding compression just ensures nothing peaks and the guitar is always under control.
To actually put the automation on the track first select the arrangement button next to the padlock on the top left of where your tracks are displayed. Then select the track you want to apply automation on. After this use the mixer for things such as sends, level and pans and the other options for plug-in automation. Following this simply draw in what you would like to accomplish and with some trial and error, you will achieve this.
Here is an example of time domain effects used on this track. I used three different reverb sends, all for different groups: the snare, the vox and the guitars. This saved me from putting reverb on each guitar separately, which could slow my session down and would increase the file size. It also lets me choose how much of each guitar I send through the reverb unit, letting me decide the balance between them and giving me a mix of wet and dry signal. You can see the reverb send on guitar 1 not sending very much; however, when it reaches the solo it sends fully, making a bigger impact in the session.
I also use aux return tracks for snare reverb, vox reverb, delay and chorus on most of the tracks I do, such as Creep and Set Me Free, which can be viewed on the projects page. Aux returns let you really control how much signal you route through a given plug-in, giving you a nice sound and saving processing, as all guitars or vocals can route through one reverb unit instead of many.



How Materials Affect Your Recording
There are many different materials in a recording studio, as the walls, floor and ceiling are all made of different materials, and these can all have an adverse effect on your song. Because of this, I looked at the absorption coefficient of the room I was recording in, and I soon realized the room would sound very bright for the Teddy Picker song. The reverb and the way sound was reflecting in the room did not have a good effect on the recording, and the same was happening with the drums in the drum room.
So, to counteract these issues, I decided to use acoustic screens. I used two acoustic screens on the guitar, side by side on the amp; this controlled the overall reverb and ambience of the room, giving me a clearer sound. I also used two acoustic screens in the drum room, both positioned behind the drummer, as I wanted to control the reflections while still being able to see the drummer through the glass to give him the signal that we were live and ready to go. In addition to this, if there are more unwanted frequencies, I could implement things such as Helmholtz resonators, which take certain frequencies out of the room; this comes in useful if a particular frequency is overpowering your mix. A real-time analyser can also be used to measure the frequency response of a room, giving you the information necessary to work towards a flat frequency response. In my opinion, this information is essential when recording a song, as your track could suffer if a room has a bad frequency response.
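The absorption coefficient feeds directly into estimating a room's reverb time. Below is a quick sketch using Sabine's formula; the room dimensions and coefficients are invented for illustration and are not measurements of the rooms I recorded in.

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / A, where A = sum(area * alpha)."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 6m x 5m x 3m live room (coefficients are illustrative):
surfaces = [
    (6 * 5, 0.02),                      # reflective concrete floor
    (6 * 5, 0.60),                      # absorptive treated ceiling
    (2 * (6 * 3) + 2 * (5 * 3), 0.10),  # painted plaster walls
]
rt60 = rt60_sabine(6 * 5 * 3, surfaces)  # roughly 0.6 seconds
```

Adding absorptive screens raises the total absorption A, which is exactly why the reverb time drops and the sound gets clearer.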
Mic Techniques/Positions and Stereo Arrays
Before I explain the mic positions I use when recording a song, I thought it would be best to recap what was said on my projects page about types of microphones. In the songs I record, I use two different types of microphone: dynamic and condenser. The difference between the two is how they operate. A dynamic microphone works on the electromagnetic principle, whereas a condenser works on the principle of capacitance (through the use of 48v phantom power). For a longer explanation, please visit the Teddy Picker session on the projects page, where I describe a scenario in which both types of mic were used for their own separate reasons (snare, ride and guitar). I will also give three extra examples here of why I use different types of mics for certain jobs. Also, feel free to check out this article for more information on mics: https://musicianshq.com/whats-the-difference-between-dynamic-and-condenser-microphones/. Now onto mic positions.
When miking up instruments such as the drum kit, bass or guitars, the main technique I use is spot miking. I use spot mics when recording as they allow me to capture a very tight, present, quality sound from the source of the instrument. In addition, because I am using cardioid mics, I get very minimal spillage from the instruments I am recording, giving me an even cleaner signal to work with. The only real issue with this technique is that when you do get spillage, you can run into phasing problems. Phasing issues happen for one of two reasons: intensity differences or time-of-arrival differences. There are simple fixes for this: using acoustic barriers between instruments, using directional mics (cardioid and hypercardioid), spreading instruments further apart and, finally, ensuring the mic is positioned correctly, in the sense that it is not pointing directly at a different part of the drum kit. This ensures each mic occupies its own space.
Examples Of Spot Miking In a Studio Environment

The following photos are from a session recorded back in October.
This photo shows where I placed an SM57 dynamic cardioid mic on the snare. I positioned the mic about two inches above the hoop of the drum, pointing down towards where the stick hits the centre of the snare. This gave me a clean signal, allowing me to pick up the stick attack as well as the crack of the snare while minimising the chances of the drummer accidentally hitting the microphone when playing.

I used an SM57 because it is a dynamic microphone, meaning it can handle high sound pressure levels. It's also a cardioid mic, meaning it attenuates sound arriving at a right angle to the mic (off-axis spillage) by around 6dB. I also chose this mic for its tailored frequency response: it boosts frequencies around 4-8kHz, which is good because the crack of the snare as well as the stick attack on the drum occur at these frequencies, giving you a very nice snare sound.
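The 6dB figure above falls straight out of the maths of an ideal cardioid polar pattern, g(θ) = ½(1 + cos θ). A quick check (an idealised pattern, not the SM57's exact measured response):

```python
import math

def cardioid_gain_db(angle_deg):
    """Ideal cardioid polar response g(theta) = 0.5 * (1 + cos theta), in dB."""
    g = 0.5 * (1.0 + math.cos(math.radians(angle_deg)))
    return 20.0 * math.log10(g) if g > 0 else float("-inf")

on_axis = cardioid_gain_db(0)    # 0 dB: full sensitivity
side = cardioid_gain_db(90)      # about -6 dB, the figure quoted above
rear = cardioid_gain_db(180)     # total rejection (in theory)
```

At 90 degrees off-axis the pattern passes half the pressure, and 20·log10(0.5) is just over -6dB, which is why spillage arriving from the side is attenuated by roughly that amount.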
This is a picture of the spot mic on the ride cymbal. I used a cardioid condenser, as they are more accurate and pick up higher frequencies better. I positioned the mic under the cymbal: this way it wouldn't get hit when the cymbal is played, and it lets me get a closer, more accurate sound. I positioned it around three inches under the cymbal, pointing between the edge and the bell. This gave me a glassy-sounding ride without picking up too much of the bell sound, which can cause some build-up in the high-mid frequencies. This is the setup for a pop song; for a more jazz-sounding ride, I would position the mic closer to the bell, as that's where the ride is struck in most jazz songs.
For things such as the ride, hi-hats and crash, I use cardioid condenser mics. I do this because they have a wider frequency response, as they work on the principle of capacitance, and because they are cardioid they pick up less spillage, giving me a cleaner sound. In the picture you can see an ATM33a microphone. I selected this specific mic as it let me point directly at the sound source I wanted to pick up and wasn't too big. I also chose it for its tailored frequency response: it boosts frequencies from 2-10kHz by around 4dB and attenuates frequencies around 100Hz and below by 3dB. This lets me achieve the glassy cymbal sound I wanted with minimal spillage from other parts of the kit, as well as cutting out unwanted frequencies. The frequency response is pictured to the right.


This is a photo of the hi-hat in this session. I used a cardioid condenser mic. I positioned the mic over the edge of the hi-hat, pointing downwards. I placed the mic there to give me a glassy cymbal sound as well as give me a nice clashing sound when the hi-hat is forced shut using the pedal, which happened throughout the song. Overall this setup gave me a very glassy as well as bright cymbal sound, allowing me to pick up two different types of sound from one mic position.

This is a photo of the floor tom of the kit in this session, tom 1 is miked up to a similar standard. The mic is positioned over the top of the floor tom, around 2 inches above the hoop, placed about an inch over the hoop, pointing down close to the centre of the drum.
I used a dynamic cardioid mic as the mic would be exposed to a high sound pressure and I knew this mic would be able to cope with it.
I had it pointing just off-centre to capture the stick attack of the drum as well as the deep low frequencies of the floor tom, overall giving me a well-rounded sound.


This is a photo of a kick drum miked up in a different session (all mics from this session are explained on my Instagram, @adamdundasmixing, or click the Instagram logo at the bottom of the page). This spot mic was a dynamic cardioid microphone placed on axis to where the beater hits the skin, around 3 inches from the skin. I placed the mic here as I was recording a pop song; this position gave me good bass development as well as the sound of the beater hitting the skin, overall giving me a well-rounded kick drum sound. This position is my starting point for the songs I record; if I don't like the sound I'm getting, I adjust the position of the mic until I'm happy. A good starting point for a different genre, for instance rock or metal, would be to place the mic closer to the beater, as this gives you a clickier-sounding kick with less bass development, allowing the kick to push through the heavy bass and guitars which occupy most of the low-end in those types of songs.
I also use the spot miking technique on the other side of the kick drum. To do this I placed a rifle mic pointing straight at the centre of the kick drum. This reinforces the kick drum and adds a bit more depth to the recording.
As for guitars and bass, a good starting point is between the cone and the outer edge of one of the speakers in the cabinet, about an inch off the grille. This gives you a good balance of low-end and high-end frequencies. If you want more low-end in your mix, angle your mic so it points more towards the edge of the speaker, as this is where the bass develops; for more high-end, to brighten up the instrument, angle it more towards the centre of the speaker.
Disclosure: I do not have photos from sessions where I miked up a guitar or bass amps so here are two examples. One from Musicradar.com (left) and the other from puremix.net (right).


For miking up guitar amps I typically use two microphones: an SM58/SM57 and an AKG C1000. Which one I use depends on what's being recorded. If it is heavy, I will use an SM58/SM57, as it can handle the high SPL. These mics also have a tailored frequency response that benefits guitars, boosting around 4-8kHz, which is where a lot of the presence of the guitar is, as well as the pick sound. These boosts help give the guitar a brighter, more precise sound.
If it is a solo or something not as heavy, I will use an AKG C1000, as it lets me achieve a very clean and accurate sound thanks to its tailored frequency response. These mics also feature a cardioid polar pattern, rejecting unwanted off-axis noise.
Stereo Arrays
In addition to spot mics, I have also used many different stereo techniques when recording. These include the AB (spaced pair), XY (coincident pair) and ORTF (near-coincident pair) configurations. Each of these has different uses as well as advantages when recording a song. Stereo arrays add a layer of depth and distance, as well as the real left-to-right perspective you would have if you were standing in front of the instruments listening live. They even add natural reverb to your mix, really transforming your song. These things are lost when only using spot mics.
If you intend to use these techniques, you first need two identical microphones with the same frequency response and polar pattern; you should check they are the same model number too, and make sure to match their levels. Some manufacturers, such as Shure and AKG, actually offer matched stereo pairs.
Before going into where I have used these miking techniques, I thought I'd explain what they are for the people that don't know.
First off, the AB configuration, also known as spaced-pair miking. For AB miking you place the two identical microphones a few feet apart and at the same height (usually 1.5m). You do not need a specific polar pattern for this, as any works; omnidirectional is the most popular in the industry. The bigger the gap between the two mics, the bigger the stereo image. This method creates a stereo image because instruments in the centre of the group produce the same signal in both mics, so when monitoring you hear a centre image between both speakers. If an instrument is closer to one mic (off-centre) than the other, the sound reaches the closer microphone before reaching the other. Both mics still produce the same signal; the only difference is the arrival time between them, which creates the stereo image. This technique does not work in mono, as it will phase due to the time-of-arrival differences.
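The time-of-arrival difference described above is easy to put numbers on: it is just the extra path length to the far mic divided by the speed of sound. A small sketch (the spacing and angle are example values, not a specific session setup):

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 degrees C

def ab_delay_ms(spacing_m, angle_deg):
    """Arrival-time difference between a spaced pair for a distant source
    arriving `angle_deg` off-centre (0 = dead centre)."""
    path_difference = spacing_m * math.sin(math.radians(angle_deg))
    return 1000.0 * path_difference / SPEED_OF_SOUND

centre = ab_delay_ms(0.6, 0)   # 0 ms: identical arrival, centre image
side = ab_delay_ms(0.6, 30)    # just under 0.9 ms for a source 30 degrees off
```

A centred source arrives at both mics simultaneously, while an off-centre source picks up a sub-millisecond delay between the channels; that tiny delay is the entire stereo image of an AB pair, and also exactly why summing the pair to mono causes phasing.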
I use this technique for the accurate stereo image it creates. I also use it to capture room sound and give the recording a warm ambience. This enhances the recording and makes it sound bigger, giving it some natural reverb.
The next configuration is XY. To set up this stereo array you need two identical microphones with their capsules placed as close together as possible, angled 90 degrees apart. This array is mono-compatible, so it won't have any phasing issues when listened to in mono. It gets its stereo spread from level differences between the two microphones. This technique gives you a very sharp, clean signal with a precise stereo spread.
The last stereo array I use when recording is the ORTF technique (near-coincident pair).
To set up an ORTF configuration, place the microphones a few inches apart with both grilles pointing outwards at a 110-degree angle. The common rule for this technique is seven inches apart at 110 degrees. The wider apart the microphones are, the bigger the stereo spread you will get. Essentially, this mic position reflects what you would hear if you were standing in front of the band, so the technique gives you accurate localisation. I use this configuration as it makes something like a backing vox group or a band sound more spacious and warmer in your mix than the XY configuration, due to the increase in the stereo field.
The stereo image in this configuration comes from time-of-arrival differences in the lower bass frequencies and level (intensity) differences in the higher frequencies. Overall, this technique gave my mix a bit of added warmth, as well as making the recording sound a lot bigger and less tight.
Stereo mics are essential, in my opinion, when recording live orchestral/traditional music, and useful when recording many genres in the studio.
Now, onto where I actually use these techniques in a real scenario.
Examples of Using Stereo Arrays In the Studio

When I record bands, I use two separate AB configurations. The first is the overheads. I place these around three feet above the kit, over the cymbals, ensuring they are both at equal height and angled the same. I place them over the cymbals as this gives me a stereo image of the glassy cymbals that reinforces the spot mics. It also fills my drum sound with natural reverb, making my mix sound brighter and giving it a warm ambience.
Pictured below is the other AB pair I use when recording drums: the room mics. I typically place these mics three feet back from the kit, both facing the centre of the drum kit. This widens the stereo image of the kit, making it sound bigger, and fills the drums with natural reverb and a warm ambience. Overall, this enhances my drum sound, as mentioned when explaining AB miking in the previous section.



Here is another example of using Stereo Arrays in real life. This is a floor plan I made for a session I was recording called 'My Sweet Lord', it should be available on the projects page of my website. I use floor plans as they give me starting points for all mic positions, it also gives me a picture of how I am going to record the session. Saving the band and myself time before the session has even started.
For this session, I used two separate ORTF setups: one for the backing vocals and one for the band. This made the mix sound as if you're standing right in front of the group of backing vocalists with the band playing just behind them. It gave my mix more space, making the song sound warmer, with natural reverb, and really added a new layer of depth to the recording. Overall it gives the effect that the song was recorded live and not with the use of overdubs, making the song sound pure and not just put together in MIDI, which was exactly what I was going for in this song.
I also used an AB configuration on this song, as it gave me a wider sound and captured the natural reverb of the room. I was very happy with how this recording and mix turned out.
This is from the "Down To The River" track: another floor plan, made for the same reasons as the one above.
You can see that I used a combination of AB, XY and ORTF on this track. My reasons for using ORTF and AB are the same as before: to widen the mix, add a warm ambience and natural reverb, and make the listener feel like they are standing in front of the singers and the band.
There are three different XY configurations used on this track. I used this technique as it gave me a precise stereo spread of the singing groups, adding an extra layer to my recording and making it feel pure and rich.
To listen to both these songs please refer to the projects page.

MIDI/Audio Editing, Recording and Interface Setup
Audio Editing and Interface Setup
When I was given the "Set Me Free" track to remix, I instantly went for the guitars. There were 11 separate guitar tracks all laid out, and I began going through each of them individually, making cuts using "Command+E", which let me highlight and take only the specific sections I wanted. After this, I faded them in and out: I did this by zooming in, highlighting the part of the track that needed to be faded and using the shortcut "Option+Command+F", which inserted my fade. The same shortcut also inserts crossfades when placed over two separate tracks. This ensured that the cuts were smoother and felt more natural in the mix, avoiding the possibility of pops in the track. It also gave me many ideas I could use in the track.
The first thing I did when cutting out the separate guitar parts was make the guitar (GTR) loop in blue, this part plays through the song, essentially the backbone of the track. To make this part I cut it into two bars and duplicated it using "Command+D".
Next, I felt as if it needed something to accompany it throughout the track so I found a chord, broke it out and edited in fade in and outs. I then duplicated it throughout the session (Pink GTR playing throughout).
After this, I used the same techniques of chopping out parts of audio on different tracks in the session and I ended up with a wide range of parts with no real arrangement.

Which took me to my next step, arranging the session. I began taking the different parts of audio and putting them into places such as the pre-chorus and chorus as I wanted the song to have a build-up through the song and go back to square 1 when the verse hit. The song has many different parts of edited audio. However, you wouldn't be able to tell. I used techniques such as crossfades to ensure all the edited parts of audio were smooth. I even recorded my own section on an electric guitar.
To do this I used my audio interface and laptop. To set it up, I plugged two TRS jack cables from the back of the Focusrite into my KRK Rokit 5 monitors, which are powered from the wall using kettle leads. I then connected my interface to my laptop using a USB-C to USB cable: the USB-C end into the Focusrite and the USB end into the side of my laptop. After everything was plugged in, I was ready to power up. I started by opening my laptop and DAW, then turned on the speakers at the back. The next step when recording with a Focusrite is to plug the jack into the audio interface, press the instrument button (INST) and set a good gain.
Gain staging is essential when recording; it is the first step to a good sound. If your gain is too low you will get a weak, powerless signal, and bringing it up in level when mixing will also bring up a lot of spillage. If the gain is too high, you risk the channel peaking. A good starting point on a mixing desk is between -3 and -6dB, as this gives you some headroom if things do go above that level. On a Focusrite, when the light is flashing green you are hitting around -6dB. This is where you want to keep your signal, because if it goes red the channel will peak, causing it to distort.
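Those numbers are levels in dBFS (decibels relative to full scale, where 0dBFS is the clipping point). A quick sketch of the arithmetic behind headroom:

```python
import math

def dbfs(peak, full_scale=1.0):
    """Peak level in dB relative to full scale (0 dBFS = clipping point)."""
    return 20.0 * math.log10(peak / full_scale)

def headroom_db(peak):
    """How many dB louder the signal can get before it clips."""
    return -dbfs(peak)

level = dbfs(0.5)            # a peak at half of full scale is about -6 dBFS
headroom = headroom_db(0.5)  # so about 6 dB of headroom remains
```

Halving the signal amplitude costs just over 6dB, which is why recording with peaks around -6dBFS leaves a comfortable safety margin before clipping.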
Once the gain was set I was ready to play, so I record-armed my channel and began playing. However, I accidentally muted a note.

To fix this muted note, I played the same note again and let it ring out; this is called a drop-in. I then cut out the end of the muted note and replaced it with the note that was ringing out. After this, all I had to do was crossfade the two separate parts of audio together, both at the start where they connect and at the end, just before the next note. This made the separate parts of audio equal in power and fixed the muted note, making it sound as if it had been played in one take. The same drop-in technique can be used in any session where a drummer, guitarist, bassist or vocalist makes a mistake, and crossfades can be used whenever you splice two different takes together to make them sound the same.
I regularly use this technique in my recording sessions.
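The crossfades described above can be sketched numerically. This is a generic equal-power (cos/sin) crossfade, not the exact curve any particular DAW draws, but the idea is the same: fade one take out while the other fades in, over a shared overlap region.

```python
import numpy as np

def equal_power_crossfade(a, b, fade_len):
    """Splice take `b` onto take `a`, overlapping the last/first `fade_len`
    samples with equal-power (cos/sin) fade curves."""
    t = np.linspace(0.0, np.pi / 2.0, fade_len)
    fade_out, fade_in = np.cos(t), np.sin(t)
    overlap = a[-fade_len:] * fade_out + b[:fade_len] * fade_in
    return np.concatenate([a[:-fade_len], overlap, b[fade_len:]])

# Demo: splicing two takes together with a 20-sample crossfade.
take_a = np.ones(100)
take_b = np.ones(100)
joined = equal_power_crossfade(take_a, take_b, 20)
```

Because cos² + sin² = 1 at every point of the overlap, the perceived power stays steady through the splice, which is what makes the join sound like one take.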
Creating Drum Loops/MIDI Editing
During this same session, I decided that I did not like the drums that were already programmed into the song, they were basic and didn't feel right with the song.
Because of this, I decided to programme my own drums into the Ableton session. I had an idea of what I wanted the drums to sound like so I got to work making different loops of different parts of the song. These changing parts can be heard in the song.
The top-left screenshot is the intro drums. I didn't want the kick to come in yet, as I knew it would have a better impact if it came in with the vocals. When they do come in, the drums change to the top-right photo: in this beat the hi-hats go from downbeats to upbeats, giving more life to the drums.
The next part of my drum sound I needed to create was the pre-chorus drums, pictured bottom right: virtually the same, but with crashes and an open hat to indicate the change in the song. Just before the pre-chorus I wanted more of a build-up into the chorus, so I didn't use any open hats and followed the melody of the drums. This gave me a good starting point for adding some sort of fill before the chorus.




For the fill, I used only a hi-hat for a quick build, then used the tom, snare, floor tom, kick and crash to make a fill suited to the song (pictured bottom right).
The last part of the drum programming I did was the chorus drums. I wanted a crash and an open hat in this part, as I felt it would sound great with the different moving parts building up alongside it. It did.
When I was happy with how they sounded I moved on to levels, panning and then processing, adding things such as EQ, compressors (to bring up levels, as MIDI doesn't have a varying dynamic range if the velocity is the same) and reverb plug-ins such as Valhalla.
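On that point about MIDI dynamics: programmed parts with identical velocities have no natural variation, which is part of why they can sound flat. One sketch of a fix, before reaching for a compressor, is to add small random velocity offsets to each hit. This is a generic illustration, not a feature of any specific DAW; the spread value is arbitrary.

```python
import random

def humanise_velocities(velocities, spread=8, seed=None):
    """Add small random offsets to flat MIDI velocities, clamped to 1-127."""
    rng = random.Random(seed)
    return [max(1, min(127, v + rng.randint(-spread, spread)))
            for v in velocities]

hats = [96] * 16  # a bar of identical 16th-note hi-hats
varied = humanise_velocities(hats, spread=8, seed=42)
```

The result stays within the valid MIDI velocity range of 1 to 127 while no longer being perfectly uniform, which makes programmed hats feel closer to a real performance.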


Drum programming is essentially MIDI editing, and my proficiency in MIDI has greatly increased over the last few years. This can be seen in the track "Can't Get U" on my projects page. The session was given to me in a very disorganised state with many issues: the claps were out of time, the strings were playing the wrong notes and the intro synth also had timing issues. In addition, I had to add a snare in time with the clap, playing at the same points apart from the start.
To fix up these MIDI files, I started with the clap, as it was the most apparent mistake in the mix. I cut out the parts that were out of time, selected 10 bars of clap that were in time and duplicated them twice, fixing the out-of-time parts. To add the snare, I simply duplicated the parts where the claps came in, then added a small extension for the intro.

To fix the strings, I found the key of the song and re-played the out-of-key part correctly. Following this, I dragged some of the notes into place timing-wise, fixing the part. The looped section is where the notes were out.

As for the intro synth, all I had to do was select where the notes were descending and move them over, around a bar and a half to the left. I could then duplicate these sections of MIDI and insert them into the places in the track where they were meant to be, using shortcuts such as "Command+C" to copy and "Command+V" to paste.
I have also done different types of MIDI editing when composing the music for an advert, using various plug-ins, making different loops and playing along with the advert to give it a bigger, more dramatic effect. Although, I currently do not have access to this session. A blog post will inform you when it is available.
Health and Safety, Fault Finding and Signal Flow
The most essential thing to remember when working inside a studio is the health and safety of the artists and producers. Therefore, whenever I record a band I always complete a risk assessment of the room. I do this to make sure that every risk is reduced to the lowest level reasonably practicable.
When completing the risk assessment I look for the hazards and the people that could be harmed by them; I then evaluate the risks involved and the existing control methods that prevent them, and finally review my findings. The risks involved can be anything from trip hazards to electrocution. Here is an example of what a typical risk assessment looks like.

Through the use of these assessments I can keep everyone involved safe and minimise injury from the issues listed in the diagram.
Another essential thing to do when working in a new studio is to learn its signal flow. I do this as it tells you exactly where to look for a fault in the setup when something goes wrong, making fixes easier and more efficient and saving time in the session.
When recording within the riverside complex in studio 2, I found the signal went from: MIC - XLR - DIST.BOX - MULTICORE - SOUNDCRAFT - PATCHBAY - MOTU INTO BINARY - THROUGH DAW - MOTU INTO ANALOGUE - PATCHBAY - LEFT SIDE OF SOUNDCRAFT - MONITORS. This is essential information when working in a studio you aren't familiar with, making fault-finding a lot easier.
For instance, if I was getting no signal into the desk I could track backwards. I would first change the channel on the distribution box to see if it was the channel that was the issue. If it wasn't I know I would then check the cable or replace it with a different XLR and if they were not the problem then I would know it's the mic that isn't working and I would then replace the mic.
Another example of how essential it is to know the signal flow of your studio: If the desk was getting signal and the DAW was getting signal but nothing was coming through the monitors, that means everything up until the DAW is fine and there is an issue after that part. Therefore, the first thing I would change is the channel I am routing the signal through on the left side of the desk. If that does not work, I would go back through each part of where the signal goes until I find where it goes wrong.
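The swap-test routine described above can be sketched as a simple walk along the chain. This is only an illustrative sketch: the component names and the passes_signal check are stand-ins for the physical swap-tests performed at each stage.

```python
# Illustrative sketch of systematic fault-finding along a signal chain.
# Stage names loosely follow the Riverside studio 2 path; passes_signal
# stands in for physically testing or swapping each component.

CHAIN = ["mic", "xlr", "dist_box", "multicore", "desk",
         "patchbay", "motu", "daw", "monitors"]

def first_faulty(chain, passes_signal):
    """Walk the chain from source to output and return the first
    stage whose test fails, or None if everything checks out."""
    for stage in chain:
        if not passes_signal(stage):
            return stage
    return None

# If only the XLR is bad, the walk stops there:
# first_faulty(CHAIN, lambda s: s != "xlr")  ->  "xlr"
```

The point of knowing the flow is exactly this: each test eliminates everything before the failing stage, so you never replace components at random.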
Recording Process
I start on a session the day before, as I will know what's due to be recorded through pre-production contact with the band or artist. Because of this, I can make a template of the song session and group the guitars, drum kit, keys and vocals, organizing the session before it has even started. I usually set a sample rate of 48 or 88.2kHz and a bit depth of 24 bits, the industry standard for recording. The sample rate is the number of samples captured per second, and the bit depth is the number of bits stored per sample, which determines how much dynamic range is available.
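As a rough illustration of what those settings mean in numbers, the formulas below are standard rules of thumb for linear PCM audio, not anything specific to my sessions:

```python
# Rule-of-thumb figures for sample rate and bit depth in linear PCM:
# each extra bit of depth adds roughly 6.02 dB of dynamic range.

def dynamic_range_db(bit_depth):
    """Approximate dynamic range of linear PCM audio, in dB."""
    return 6.02 * bit_depth + 1.76

def data_rate_kbps(sample_rate, bit_depth, channels=2):
    """Uncompressed PCM bit rate in kilobits per second."""
    return sample_rate * bit_depth * channels / 1000

# 24-bit gives roughly 146 dB of range, well beyond 16-bit CD audio
```

This is why 24-bit recording leaves so much room for conservative gain staging: the noise floor sits far below anything a mic will pick up.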
From there I can begin routing the parts of the song through the different groups and out to the master fader. I also set up the inputs and outputs for the mics, so all I need to do when the band is in is mic up, soundcheck and record. Then I save the session template; it would look something like this: "SONG BAND NAME DATE SESSION NUMBER".
This lets me easily find the session if we don't have enough time to finish it, or the band needs to go due to unforeseen circumstances.
Onto recording the session: once the mics have been set up in the correct positions, I begin sound checking. The first and most important part of recording is gain staging, which is essential to getting a good signal from your microphone. Too quiet, and you will bring up a lot of spillage when raising the level later in the mixing stage; too hot, and your signal will distort. Typically, I set my gain to peak between -6dB and -3dB; this gives me some headroom in case the signal starts to run hot, giving me a good level throughout, just as mentioned before in the MIDI section.
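As a small sketch of that headroom check, note that the -6dB to -3dB window is my own working target rather than a universal rule; the dBFS conversion itself is standard:

```python
import math

def dbfs(peak_linear):
    """Convert a linear sample peak (0..1 of full scale) to dBFS."""
    return 20 * math.log10(peak_linear)

def gain_ok(peak_dbfs, low=-6.0, high=-3.0):
    """True if the peak sits inside the target gain-staging window.
    The default window matches the -6 dB to -3 dB target above."""
    return low <= peak_dbfs <= high

# A peak at half of full scale sits right at about -6 dBFS
```

Anything peaking near 0 dBFS has no headroom left, which is exactly the "too hot" case described above.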
After this I start putting compressors on channels that need them and adding EQ on each mic. EQ is the act of attenuating and boosting certain frequencies in order to achieve a brighter, more rounded, less muddy, glassier or cleaner sound depending on the instrument. Examples of my use of compression and EQ are at the very start of this page and on my projects page. After this, I can set up a reverb send for the vocals in the track, giving the vocals more space in the mix. Following this, I begin adjusting and setting up a headphone mix for the artists, ensuring they can hear themselves well and can also hear the other parts of the band to keep in time. Once this is complete I start recording.
After the backing track is recorded I can move on to overdubs such as the main vocals and solos, using the drop-in technique if any mistakes are made in the session. Overdubs are simply recordings made over a backing track on a separate channel, as it isn't always possible for a band to record a full song in one go.
Automation
Examples of automation are at the start of the projects page. However, I feel as if I should give a more in-depth example and a visual representation of how I use automation in a mixdown. For this example, I will be referring to the Creep Mix on the projects page and how I used level automation on the vocals.

In the photo, you can see the arrangement page of the Creep session. I used volume automation on the vocals during different parts of the session so that the vocals sounded wider and had a greater effect as the song progressed. You can see there are five vocal takes: the main vocal stays at the same level, and takes two and three come in during the first chorus, making the track feel full when this part arrives.
However, when the song reaches the bridge I wanted it to feel like the climax of the song, so this is when I added the other two vocals to the track, panning them slightly wider than the others. This made the track feel complete, adding a new layer to the song. I am very happy with the final result.
This is just a small example of how I can utilise something such as level automation to keep a track interesting.
Mastering
LUFS
The first thing that is good to know about mastering is what LUFS are. LUFS stands for Loudness Units relative to Full Scale, a unit of measure for the perceived loudness of a piece of audio. It came out of the loudness war: an ongoing trend of raising the volume of recorded music, to the point where the audio simply wasn't as good or as high quality as before.
People began realising this when Guitar Hero 3 was released, as the GH3 version of Death Magnetic was an overall better-sounding record than the actual CD version. To read up more on the loudness war, please use this link https://en.wikipedia.org/wiki/Loudness_war.
Back to LUFS: LUFS are a standardised measurement of how loud a piece of audio is, one that takes into account human perception of loudness alongside electrical signal intensity. Put simply, this makes it a very accurate unit of measure.
Why Use LUFS? Well, LUFS are used to set goals for audio normalization for things such as music streaming, TV and any sort of broadcasting.
When using them to master, that is, to bring a track up to level, the LUFS you aim for vary depending on what you are mastering for. Currently, for a streaming master you want to aim for around -14LUFS with a true peak of -1dB, and for a CD master you want to be aiming for -9LUFS with a true peak of -0.3dB; this is the current standard for loudness level within the music industry.
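The arithmetic behind hitting those targets is simple: loudness units work like decibels, so the make-up gain is just target minus measurement. The target figures below come from the text above; the helper itself is only an illustration, not a real metering tool:

```python
# Loudness targets from the mastering standards discussed above.
TARGETS = {
    "streaming": {"lufs": -14.0, "true_peak_db": -1.0},
    "cd":        {"lufs": -9.0,  "true_peak_db": -0.3},
}

def gain_to_target(measured_lufs, destination):
    """dB of make-up gain needed to move a measured integrated
    loudness onto the destination's LUFS target."""
    return TARGETS[destination]["lufs"] - measured_lufs

# A mix measuring -18 LUFS needs +4 dB for streaming, +9 dB for CD
```

In practice a maximiser applies this gain while its ceiling holds the true peak below the target's limit, which is what the Ozone 9 section below describes.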
Using iZotope RX8
As explained on the projects page, I used RX8 to clean up tracks such as "Another Girl Another Planet", "Here Comes The Sun" and "La Fillie".

On the left you can see screenshots of what I did with the audio.
On "Another Girl Another Planet" I used De-click to take away all the clicks underlying the track. Next was De-crackle, which took out a lot of the noise sitting under the guitars and made the track sound a lot cleaner. To find where the crackle was worst, I activated the output-noise-only mode of the plug-in, then went through the plug-in's bandwidth settings, adjusting it so it would take out as much of the crackle as possible without affecting the track's overall quality. I also used a spectrum analyser, which let me take out more unwanted noise in the track; to do this I used the learn feature of the plug-in and then fine-tuned it myself, which took away a lot of bad frequencies. Finally, I used the Normalize plug-in to bring the track up to level.
On "Here Comes The Sun" I used the same plug-ins but with different parameters, which gave me the best possible sound for the track without taking away the life of the song.


On this track, "La Fillie", I made sure to cut out all the bad frequencies, noise and loud crackles, making the track sound pure and just like the original that was given to me.
In addition, I used the attenuation features, fade features and extra plug-ins such as De-rustle, as these helped make each track cleaner and bring it up to professional industry standard.
Overall, I am happy with the results, as all the tracks were improved to a good extent. Please listen to them on the projects page of the website.
Ozone 9
Another piece of software I have recently worked with is Ozone 9, iZotope's mastering suite, which I used to master three separate tracks for a client. They will be available soon on the projects page; they are currently delayed due to technical issues.
However, I can still talk about how I used this plug-in. Ozone 9 includes many different stock modules, ranging from EQs and imagers to maximisers and spectral shapers.
For my projects, I used exciters, EQs and vintage EQs, imagers and maximisers. The screenshots below are from the "Reflections" session.

To start off I always use an EQ or Vintage EQ. I used the Vintage EQ first, as I felt the song was lacking in both low-end and tops, and also had a bit of muddiness due to the big guitars. This EQ made it very easy to select the frequencies I wanted and instantly made the track cleaner. I also used the mid/side EQ, which let me clean up and adjust more frequencies in the track, making the audio sound crisp.
Next, I used an Exciter. This let me apply slight harmonic distortion to certain frequencies using different modes, such as analogue desk, dual-triode and tube. This gave frequencies in my song a little extra warmth, and also added some saturation and richness, giving me what the EQ couldn't.


I then went back to a graphic EQ. This let me really tone down those low-mids and help clean up the track, then give the track more tops and high-mids to let the cymbals, vocals and guitars stand out a bit more in the mix, making the track feel rich and less muddy.
I then used a compressor to better control the dynamic range of the song.
Compression brings up the quieter parts and brings down the louder parts of a track. I set it so the compressor was only taking off a small margin; this made the track sound more rounded without damaging the overall dynamic effect of the song.
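A downward compressor's static gain curve can be sketched with a little arithmetic: above the threshold, the output level rises at only 1/ratio of the input's rate. The threshold and ratio below are illustrative values, not the exact settings from my session:

```python
def compressed_level(input_db, threshold_db=-20.0, ratio=4.0):
    """Static gain curve of a downward compressor: signal below the
    threshold passes unchanged; signal above it is reduced so the
    output rises at 1/ratio of the input's rate."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# With a -20 dB threshold and 4:1 ratio, a -12 dB peak (8 dB over)
# comes out at -18 dB: only 2 dB of the overshoot survives.
```

Taking off only "a small margin", as described above, means choosing the threshold so most of the signal stays below it and only the loudest peaks are reduced.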


I then used an imager to increase the stereo spread of the frequencies in the song, making it sound wider as well as giving it more depth.
Imagers can also be used to narrow the stereo spread of certain frequencies, which I did in a different track called "Hangover Blues".

This is the imager from "Hangover Blues"; as you can see, the stereo spread is pretty wide in this track. However, I narrowed the low-mids and gave them less depth, as there was a lot of muddiness in the track due to the way it was recorded and the overall style of the song. This fixed some of the mud issue while still keeping the overall feel of the song.
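What an imager's width control does to a stereo signal can be shown with simple mid/side arithmetic. This is a minimal illustration of the principle, not iZotope's actual algorithm, which also splits the signal into frequency bands:

```python
def adjust_width(left, right, width):
    """Mid/side width control for one pair of stereo samples:
    width=1.0 leaves the signal untouched, width<1.0 narrows it,
    and width=0.0 collapses the signal to mono."""
    mid = (left + right) / 2   # the centre (mono) component
    side = (left - right) / 2 * width  # the stereo difference, scaled
    return mid + side, mid - side

# A hard-left sample collapses to equal left/right at width 0
```

Narrowing only the low-mids, as described above, means running this width reduction on just that frequency band while leaving the rest of the spectrum wide.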
Before using the maximiser I still felt the track needed some more low-end. To get the low-end I wanted, I used Low End Focus, which let me home in on the exact frequency range I wanted to boost.


I then used this maximiser on the track to bring the level up to -14LUFS, using the Learn Threshold feature in the bottom-right corner, which set my threshold to -3.6dB.
This let me bring up the level without also bringing up things such as spillage and bad frequencies. I then set a ceiling of -1dB (the true peak of the song), which made sure no peaks in the song exceeded it.

Pictured left are the other maximiser settings used on the track, for the CD master. I set the threshold with Learn to hit -9LUFS and set a true peak of -0.3dB. This ensured there were no clipped samples on the track and brought the level up to the current industry standard.
When I finished these masters, I checked them over using the waveform statistics in RX8, and both showed no clipped samples. They also showed the LUFS level was within 1LUFS of what I set, and both had the true peak I set. An example of this is pictured below.
