Software and Hardware

Understanding the software and hardware involved in the music production process is essential to producing high-quality recordings. This page covers the functions of the DAW and other programming environments, new and emerging software, and the uses of a range of hardware.

Software

Functions of the DAW

A DAW, standing for digital audio workstation, is “a piece of software for recording, editing and mixing audio and MIDI files.”

DAWs include Logic Pro, Pro Tools, Cubase, Ableton Live, Reason, FL Studio and many more.

Here are the main functions of a DAW:

  • To record audio: This is usually done in real time, meaning that there is only a very small delay (latency) between the sound wave being captured and it being recorded.

  • To process MIDI data: This can either come from a physical instrument like a keyboard or from virtual software instruments from within your DAW.

  • To edit audio: DAWs use non-destructive editing, meaning that any changes that you make can be reversed easily (not permanent). DAWs also allow for non-linear editing, which means that you don’t have to edit in chronological order; you can flit between channels, takes and sections of a project as you please.

  • To add reverb: Because DAWs can add reverb to a signal, you don’t have to capture a space’s natural reverb at the recording stage, which can be difficult to control. Convolution reverb is especially good because you can place your signal in a specific space, such as the Royal Albert Hall or the Sydney Opera House (see the short sketch after this list).

  • To support amp modelling: Amp modelling allows you to record electric guitar and bass via direct injection (DI) without the need for an amp. A DI signal is much cleaner than recording an amp, but it can sound a little unnatural, which is where amp modelling comes in. Usually in plugin form in your DAW, amp modelling emulates the sound of different amps, from valve to solid-state.

  • To support software instruments: Software instruments generate sound when triggered by 'drawn in'/programmed MIDI data or when a key on a MIDI controller keyboard is pressed. They create this sound either from samples of real instruments or through synthesis.

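To illustrate the convolution reverb idea mentioned above, here is a minimal sketch in Python of convolving a dry signal with an impulse response (a recording of how a space responds to a short burst of sound). It assumes you already have both as NumPy arrays at the same sample rate; real convolution reverb plugins add controls for pre-delay, damping, stereo width and so on.

    import numpy as np
    from scipy.signal import fftconvolve

    def convolution_reverb(dry, impulse_response, mix=0.3):
        # Convolve the dry signal with the impulse response of a real space,
        # then blend the wet result back in with the original (dry/wet mix).
        wet = fftconvolve(dry, impulse_response)[: len(dry)]
        wet /= max(np.max(np.abs(wet)), 1e-9)   # normalise so the wet signal doesn't clip
        return (1 - mix) * dry + mix * wet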

Other Programming Software

MIDI, standing for Musical Instrument Digital Interface, is “a universal language that is used by musical technology equipment and is used to send instrument and controller information."

OSC, standing for Open Sound Control, is another protocol like MIDI that can be used to transfer instrument and controller information.

As explained on this page on sequencing and MIDI, MIDI does not contain audio. It is purely a way of transferring data, which can then be used in conjunction with a synthesiser to make and manipulate sounds. The same is true of OSC.

The differences between MIDI and OSC lie in their data type, transfer type and transfer methods.

As you will read on this page, MIDI is stored as binary data, it is transferred serially (meaning that the data is sent one bit at a time, sequentially), and it can be sent via MIDI cable or USB.
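As a concrete example of that binary format, a MIDI ‘note on’ message is just three bytes: a status byte combining the message type and channel, then the note number, then the velocity. A small Python sketch (the channel and velocity values here are only illustrative):

    def note_on(note, velocity, channel=0):
        # Status byte 0x90 means "note on"; the low four bits select the MIDI channel
        return bytes([0x90 | channel, note, velocity])

    note_on(69, 100).hex()   # '904564' - note 69 is the A above middle C (440 Hz)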

OSC, on the other hand, uses a structured message containing an address, a type tag string and one or more arguments, rather than a fixed set of binary bytes.

The address controls where the message is sent to. It is modelled after web addresses and is designed to be easier to read compared to binary. The address always starts with a ‘/’ and is followed by a string of characters which can be programmed.

The type tag string states what kind of information is in the ‘arguments’ section. It always begins with a ‘,’. An ‘i’ represents an integer, an ‘f’ a float (a number with a decimal point), an ‘s’ a string of text and a ‘b’ a blob of arbitrary binary data.

The ‘arguments’ part of the message contains a set of bytes representing the data being sent. This could be the pitch of a note, whether sustain is on or off, when a note starts and ends, and so on. It is encoded as binary data.

To put this into context, imagine that you want to change the frequency of a note being played on an oscillator to an A (440 Hz). The message you send might look something like this:

/oscillator/ ,f 440.0

OSC messages are sent over a network, most commonly using UDP, either via ethernet cables or wirelessly over WiFi.
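Below is a minimal sketch of how the example message above could be encoded and sent from Python, following the OSC rules that the address and type tag string are null-terminated and padded to a multiple of four bytes, and that arguments are encoded big-endian. The receiving IP address and port are hypothetical; they would depend on whatever device or software is listening.

    import socket
    import struct

    def osc_pad(data):
        # OSC strings are null-terminated and padded up to a multiple of 4 bytes
        return data + b"\x00" * (4 - len(data) % 4)

    def build_osc_message(address, value):
        packet = osc_pad(address.encode("ascii"))   # address pattern, e.g. /oscillator/
        packet += osc_pad(b",f")                    # type tag string: one float argument
        packet += struct.pack(">f", value)          # argument: 32-bit big-endian float
        return packet

    # Send /oscillator/ ,f 440.0 to a (hypothetical) synth listening on port 9000
    message = build_osc_message("/oscillator/", 440.0)
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(message, ("127.0.0.1", 9000))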


New and Emerging Software

Online DAWs and Remote Working

Being able to work away from the office, or studio as it were, is more important now than it ever has been.

Online DAWs are similar to regular DAWs in that they are used to record audio and sequence MIDI data, but rather than having to download the software onto your hard drive, the software runs in your web browser or in a web-based mobile app.

Online DAWs are not only cheaper than traditional DAWs but they also allow for far greater accessibility and flexibility when it comes to collaboration with others.

The downside to this technology, however, is that you don’t get the same processing power as you do with a traditional DAW, but as browser technology and connection speeds improve, online DAWs may one day become the norm.

Examples of online DAWs:

  • Soundtrap

  • BandLab

  • SoundBridge

  • Soundation


Drum Sample Replacement

Drum sample replacement is the process of replacing drum recordings with sampled drum sounds. The technology can also be used to thicken the sound of the drums by adding drum samples as another layer, rather than replacing the drum recording.

This can be done manually by importing a drum sample every time a part of the recording needs replacing or layering. For example, you might place a hand clap sample at every snare hit in the recording, either replacing the snare or layering on top of it.

The process can also be done automatically via the use of a plugin. The plugin analyses your drum recording, predicts what each sound is (kick, snare, hi-hat etc.) and then replaces it with a sampled sound, all in real-time.
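A very rough sketch of the automatic approach is shown below, assuming the drum recording and the replacement sample are mono NumPy arrays at the same sample rate. Real plugins use far more sophisticated detection and can tell kick, snare and hi-hat apart.

    import numpy as np

    def detect_hits(drums, sample_rate, threshold=0.5):
        # Naive onset detection: trigger when the level jumps above a threshold,
        # then ignore further triggers for 50 ms so one hit isn't counted twice
        hits, last = [], -sample_rate
        for i, level in enumerate(np.abs(drums)):
            if level > threshold and i - last > sample_rate // 20:
                hits.append(i)
                last = i
        return hits

    def layer_samples(drums, sample, hits):
        # Layer (rather than replace): mix the sample on top of the recording at each hit
        out = drums.copy()
        for start in hits:
            end = min(start + len(sample), len(out))
            out[start:end] += sample[: end - start]
        return out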

Amp Modelling

Amp modelling is the process of emulating the sound of an amplifier, such as a guitar amp. This technology can be used when an instrument (most commonly an electric guitar) has been recorded using DI.

A DI recorded electric guitar or bass can sound unnatural, and amp modelling can be used to help with this. Amp modelling technology is highly customisable and it can be used to replicate a range of different amp types and characteristics.
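At its core, amp modelling applies non-linear processing to the clean DI signal. The sketch below shows the simplest possible version of that idea, a tanh waveshaper; real amp models also emulate tone stacks, speaker cabinets and power-supply behaviour.

    import numpy as np

    def soft_clip(di_signal, drive=5.0):
        # Push the clean DI signal into a tanh curve: quiet parts pass almost untouched,
        # louder parts get squashed - the basic non-linearity behind 'amp' distortion
        return np.tanh(drive * di_signal)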

This technology isn’t just limited to plugins and software; hardware amp modelling devices are also available. This is a great advantage because it puts a wide variety of amp sounds and models at your disposal from a single unit.

Kemper profiling, or simply Kemper, refers to a range of devices that also emulate the sounds of various amps. However, they are not classed as amp modelling devices because, rather than building the sound from scratch, the profiler analyses the characteristics of a real amplifier, from its input all the way to its output stage, and then reproduces them.

Hardware

Microphones

The main function of a microphone is to capture sound.

The diaphragm of the microphone vibrates as sound waves hit its surface. These vibrations are then converted into an electrical signal. There are three main types of microphone (dynamic, condenser and ribbon) and they all capture sound in slightly different ways.

More information on microphones can be found here.

Audio Interfaces

An audio interface is “a device that connects a computer to audio peripherals such as microphones, speakers and music instruments."

Audio interfaces contain A/D converters and D/A converters. They sometimes have preamps inside as well.

An A/D converter, standing for analogue-to-digital converter, encodes the analogue signal from a microphone into a digital signal that your DAW can understand.

A D/A converter, standing for digital-to-analogue converter, does the exact opposite. It converts digital output signals from your DAW back into analogue signals for output devices such as speakers or headphones.
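A sketch of what the two converters do to the numbers themselves, assuming a signal normalised to the range -1.0 to 1.0 and a 16-bit converter (real converters also involve sample-and-hold circuitry and filtering):

    import numpy as np

    def analogue_to_digital(voltage, bit_depth=16):
        # Quantise a -1.0..1.0 signal to signed integers, as a 16-bit A/D converter would
        max_code = 2 ** (bit_depth - 1) - 1
        return np.round(np.clip(voltage, -1.0, 1.0) * max_code).astype(np.int16)

    def digital_to_analogue(codes, bit_depth=16):
        # The reverse mapping a D/A converter performs before the signal drives speakers
        return codes.astype(np.float64) / (2 ** (bit_depth - 1) - 1)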

Microphone Preamps

A preamp is “an amplifier for boosting signal to a level suitable for processing and further amplification.”

Preamps boost input signals up to line level, which enables hardware and software further along in the signal chain to process them. Learn more about signal levels here.

A microphone preamp specifically boosts the output signal from the microphone to a usable level for your DAW, mixing desk, or whatever technology comes next in your signal chain. This boost is needed because the output level from a microphone is so small, due to the tiny voltage generated by the movement of the diaphragm inside the microphone.
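As a rough illustration of the numbers involved: gain is specified in decibels, and every 20 dB multiplies the voltage by ten. The millivolt figure below is only an example, not a measurement of any particular microphone.

    def apply_gain(signal_volts, gain_db):
        # Convert a gain in decibels to a linear voltage multiplier
        return signal_volts * 10 ** (gain_db / 20)

    apply_gain(0.001, 60)   # a 1 mV mic signal boosted by 60 dB becomes 1 V, roughly line level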

Most amplifiers, as well as audio interfaces, contain microphone and instrument preamps which allow you to plug your microphone or guitar, for example, straight into the amp. Otherwise, a separate, free-standing preamplifier will be required, adding another step to your signal path.

DI (direct injection) Boxes

A DI box is “a unit that converts high-impedance unbalanced signals (line/instrument level) into low impedance balanced signals."

If you wish to know more about impedance and balanced/unbalanced signals, please see this page on leads and signals.

DI boxes are commonly used to record electric instruments, such as electric guitars, electric bass guitars and keyboards. A DI box takes the output signal from the instrument directly to the next device in your signal path. This can be a good alternative to recording the output of a guitar amp, as the signal is much cleaner, and the sound of the amp can then be replicated using amp modelling software or plugins, so you are not missing out on the authentic ‘amp’ sound.

Mixing Desks

A mixing desk is “a device for changing the relative levels, affecting the EQ and changing the dynamics of a number of audio signals and blending them together.”

Looking at a mixing desk can be a bit intimidating, but it is actually fairly simple to understand: much of the apparent complexity is just the same channel strip repeated many times.

A mixing desk has two sections: the input section, where signals arrive from devices such as microphones and electric instruments, and the monitoring section, where audio that has been recorded is sent to output devices such as speakers.

The input section will consist of a number of mono channels. Each channel will have a control for the gain, EQ, auxiliary sends (see here for info), panning and the overall level.

The monitoring section will normally have two channels, allowing for a stereo output. What’s included will vary between desks, but on most there will be LED lights to show the level of what’s being monitored, a section controlling aux sends for each channel, mix monitoring buttons to determine which mix you want to hear and a master mix level.
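To make the idea of blending signals together concrete, here is a minimal sketch of input channels being gained, panned and summed onto a stereo mix bus. The constant-power pan law used here is one common choice; real desks differ in the details, and EQ and aux sends are left out.

    import numpy as np

    def mix_to_stereo(channels, gains_db, pans):
        # channels: list of mono arrays; pans: -1.0 (hard left) to +1.0 (hard right)
        length = max(len(c) for c in channels)
        bus = np.zeros((length, 2))
        for signal, gain_db, pan in zip(channels, gains_db, pans):
            level = 10 ** (gain_db / 20)        # channel fader, set in decibels
            angle = (pan + 1) * np.pi / 4       # constant-power pan law
            bus[: len(signal), 0] += signal * level * np.cos(angle)   # left
            bus[: len(signal), 1] += signal * level * np.sin(angle)   # right
        return bus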

Outboard Effects

Before software was used in music production, any audio effects that you wanted to add to a recording had to be added using outboard effects units. Even though DAWs with effects plugins are now available, many studios still use outboard equipment in their recordings as they can have a different, sometimes more pleasing sound quality when compared to their software counterparts.

When using software plugins, you can add effects to a recording at the mixing stage, after it has been recorded, but when using outboard equipment the signal is sent through the unit and the effect is printed as the audio is being recorded.

Guitar Pedals

Guitar pedals are used to manipulate the sound of electric guitars and bass guitars. Much like outboard effects, the guitar pedals are part of the signal chain when recording, meaning that whatever effects are added to the guitar sound can’t be changed afterwards.

Controller Keyboard

Controller keyboards only handle MIDI data, meaning that they don’t produce their own sound. The notes you play are sent as MIDI messages that trigger sounds on another synthesiser, such as a software instrument inside your DAW. More information on MIDI can be found on this page on sequencing.