This is part 7 of our series of blog posts with simple explanations for the most common terms in audio, as well as features and functions found on recording studio equipment. The resource to end the confusion and help you focus on creating amazing music!
Fellow audio nerds, please note:
The explanations are as non-technical and simple as possible. I want this to be practical and useful, and I don’t care if everything is 100% scientifically correct. All that matters to me are the results that the people who read this will hopefully get.
It’s not meant to be an academic piece of work on audio technology, and if you are an experienced engineer who thinks this is stupid, then I’m really sorry, but this is not for you. And I can definitely understand your desire for accuracy and your love for nerdy discussions, because I’m a total freak in this regard myself (just ask my wife). I just think inflating our egos by throwing around complicated terms and definitions is not going to help anyone trying to capture a great song.
Computer, Software And Audio Files
“Digital Audio Workstation” - This is your recording software (ProTools, Logic, Cubase, Ableton, Garageband, Reaper, Audacity, Sonar, Reason, etc.).
Driver Software / Routing Mixer
This is the software that comes with most interfaces. It allows you to configure the interface and define which signal is sent where. Some even have built-in effects for monitoring or for recording through.
WAV
Lossless audio file. The professional standard.
AIFF
Lossless audio file. An Apple standard. Not as common, since Apple products can also process WAV files, so there’s not really any reason to use AIFF.
MP3
A compressed, lossy file. Doesn’t sound as good as WAV. Only to be used when you have limited disk space available or need to upload to some limited online storage or service. If you can, always use WAV.
A compressed, lossy file that doesn’t sound as good as WAV, and often even worse than a well-encoded MP3. There’s no real reason to use this format.
FLAC
A compressed but lossless file. Can be used to send and upload good-sounding, lossless audio while using less space than WAV or AIFF. To keep it simple and avoid compression/conversion (which can always affect the audio quality), using WAV is still recommended whenever possible.
AAC
A compressed, lossy file. Doesn’t sound as good as WAV. Only to be used when you have limited disk space available or need to upload to some limited online storage or service. If you can, always use WAV. Apple iTunes uses AAC.
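To see why the compressed formats exist at all, here’s a quick back-of-the-envelope Python calculation of how big an uncompressed WAV file gets. The CD-quality defaults (44.1 kHz, 16 bit, stereo) are just typical example values, and the tiny WAV header is ignored:

```python
def wav_size_mb(seconds, sample_rate=44100, bit_depth=16, channels=2):
    """Size of uncompressed audio data in megabytes (the small WAV header is ignored)."""
    total_bytes = seconds * sample_rate * (bit_depth // 8) * channels
    return total_bytes / 1_000_000

# A 3-minute song in CD quality (44.1 kHz / 16 bit / stereo):
print(f"{wav_size_mb(180):.1f} MB")  # roughly 31.8 MB
```

A lossy MP3 of the same song would typically be a fraction of that size, which is the whole trade-off.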
Audio latency is a delay or lag affecting digital audio playback. You hear something slightly late, because processing a digital signal takes time. The more powerful your computer is and the better and faster your interface is, the lower the latency.
Simply put, the higher the buffer size, the more time your computer has to process stuff. It’s a buffer that saves your computer from making mistakes when it can’t keep up with the amount of data that needs to be processed. The bigger the buffer size, the higher the latency and the lower the CPU usage. Low latency requires a small buffer size and causes high CPU usage. You can set the buffer size in the settings of your DAW (audio software).
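If you like numbers, the relationship between buffer size and latency is simple arithmetic: one buffer’s worth of delay is the buffer size divided by the sample rate. A small Python sketch (the buffer sizes and the 44.1 kHz sample rate are just typical example values):

```python
def buffer_latency_ms(buffer_size, sample_rate=44100):
    """Delay (in milliseconds) that one audio buffer adds at a given sample rate."""
    return buffer_size / sample_rate * 1000

# Typical DAW buffer sizes at 44.1 kHz:
for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):5.1f} ms")
```

So dropping the buffer from 1024 to 128 samples cuts the buffer delay from roughly 23 ms to roughly 3 ms, at the cost of a harder-working CPU.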
Bounce / Render / Mixdown
This describes the process of mixing multiple audio tracks down to one stereo (or mono) track. Or just re-rendering a single track to create a new version of it. Some DAWs (audio software) call it “bounce”, some call it “render”, some call it “export” and some use all of these terms.
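To picture what a mixdown does under the hood, here’s a toy Python sketch that sums a few “tracks” (just short lists of made-up sample values) into one track, the way a bounce combines everything into a single file. Real DAWs handle levels with proper gain staging rather than hard clipping:

```python
def mixdown(tracks):
    """Sum several tracks (lists of sample values) into one track,
    hard-clipping the result to the valid range [-1.0, 1.0]."""
    length = max(len(t) for t in tracks)
    mix = []
    for i in range(length):
        total = sum(t[i] for t in tracks if i < len(t))
        mix.append(max(-1.0, min(1.0, total)))  # crude clipping, just for illustration
    return mix

# Three made-up "tracks", four samples each:
kick  = [0.9, 0.0, 0.0, 0.0]
snare = [0.0, 0.0, 0.8, 0.0]
hihat = [0.2, 0.2, 0.2, 0.2]
print(mixdown([kick, snare, hihat]))
```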
Stems are combinations of individual tracks that get exported from a session. When you combine the stems back together, you hear the whole song again. Although many people use this term in the wrong context, stems are NOT individual tracks! They are groups of tracks that can be used for live backing tracks, or sent to stem mastering. For example: Drums, Guitars, Vocals, etc. NOT: Kick, Snare, Lead Guitar, etc.
A track is a single audio file that is part of a mix. For example: kick, snare, lead vocal, bass guitar, etc.
A multitrack is a folder with multiple audio tracks of the same song. Many people refer to this as “stems”, but this is just wrong, sorry. Stems are groups of tracks. If you send your song to a mixing engineer, you send him/her a multitrack (all the individual tracks), not just stems.
Dither is noise that is added to a digital signal to hide/remove the distortion that happens when converting the signal from one bit depth to another (32-bit float files are the exception). Don’t think about it too much, and just use it whenever you render a track or song to 16 or 24 bit.
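For the curious, here’s a heavily simplified Python sketch of the idea: a little random noise is added just before the signal is rounded down to the lower bit depth, so the rounding error becomes random hiss instead of distortion that follows the music. This is not how your DAW’s dither algorithms literally work (they add noise shaping and more), just the basic principle:

```python
import random

def quantize_with_dither(sample, bits=16):
    """Round a float sample in [-1.0, 1.0] to `bits` of resolution,
    adding triangular (TPDF) dither noise just before the rounding step."""
    steps = 2 ** (bits - 1)  # 32768 levels per polarity at 16 bit
    dither = random.random() + random.random() - 1.0  # triangular noise, +/- 1 step
    return round(sample * steps + dither) / steps

# The result lands within a step or so of the original value,
# but the rounding error is now randomized instead of tracking the signal:
print(quantize_with_dither(0.5))
```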
The waveform is a visual interpretation of a digital audio file. Your DAW (recording software) shows you your audio files as waveforms.
A plugin is a software effect that can be inserted on the tracks, groups and busses of your DAW (recording software) to process the audio in a certain way.
Virtual instruments are software instruments that can be used inside your DAW (recording software) to make music without using “real” acoustic instruments.
A sample is a recording of an instrument that can be played back and added to a song. For example a drum hit, a note on a piano, or an SFX sound like an explosion. A sample player is used to play the samples back. It can be programmed, automated and often played like a real instrument.
Trigger / Triggering
Triggering means sending trigger “commands” to a sample player that cause the player to fire off a certain sample. A common example would be to enhance or replace drum hits with drum samples.
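Conceptually, a trigger is just a name or note that tells the sample player which sound to fire. A toy Python sketch of that idea (the class and the sample data are entirely made up for illustration):

```python
class SamplePlayer:
    """A toy sample player: each trigger name fires one pre-loaded sample."""

    def __init__(self):
        self.samples = {}

    def load(self, name, sample):
        """Store a sample (here: a list of sample values) under a trigger name."""
        self.samples[name] = sample

    def trigger(self, name):
        # A real sampler would start audio playback here;
        # this sketch just hands back the sample data.
        return self.samples[name]

player = SamplePlayer()
player.load("snare", [0.0, 0.8, 0.3, 0.1])  # made-up sample data
print(player.trigger("snare"))
```

Drum replacement software works on the same principle: it detects each hit in your recording and fires the matching sample at that moment.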
An amp sim is a digital simulation of a guitar amp or bass amp (“amp” meaning amplifier).
Software Monitoring / Direct Monitoring
Software monitoring means listening to a signal after it has gone through the DAW (recording software) and all the processing applied there.
Direct monitoring means listening to a signal directly from the input of your interface, or preamp before it hits the DAW (recording software).
Direct monitoring is latency-free, but you never hear exactly what the recorded signal will sound like after processing. Software monitoring causes latency, but you hear exactly what’s going on, all the way through the chain.
You can usually choose between the two in your interface’s routing/driver software or inside your DAW (recording software).
Quantising means aligning audio to a grid at a certain tempo, so that every note or drum hit lands on (or around) a certain beat. This can be done automatically or manually. Mistakes can be corrected and a bad performance can get some flow and groove back.
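The math behind quantising is just “snap to the nearest grid line”. A tiny Python sketch, with a hypothetical strength setting similar to what many DAWs offer so the result isn’t robotically perfect:

```python
def quantize_time(event_time, grid=0.25, strength=1.0):
    """Move an event toward the nearest grid line (times in beats).
    strength=1.0 snaps all the way; lower values keep some human feel."""
    nearest = round(event_time / grid) * grid
    return event_time + (nearest - event_time) * strength

# A sloppy hit at 1.07 beats, on a 16th-note grid (0.25 beats):
print(quantize_time(1.07))                # snaps onto the beat at 1.0
print(quantize_time(1.07, strength=0.5))  # only moves halfway there
```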
Pitch correction is the process of changing and improving the intonation (pitch) of a recorded voice or instrument. Mistakes can be corrected and flat or sharp notes no longer stick out.
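At its core, simple pitch correction snaps a detected frequency to the nearest semitone of the equal-tempered scale. A minimal Python sketch, assuming standard A440 tuning (real pitch correctors also deal with vibrato, scales and how fast the correction kicks in):

```python
import math

A4 = 440.0  # tuning reference in Hz (an assumption; most music uses A440)

def correct_pitch(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones_from_a4) / 12)

# A slightly flat A (435 Hz) gets pulled up to 440 Hz:
print(correct_pitch(435.0))
```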
Consolidating / Comping
Consolidating or comping means combining different audio events on a single track into one single file. So, for example, if you have recorded the bass guitar for the verse, chorus and bridge in individual takes, you can then comp them together into one file that you can send to the mixing engineer.
These are the other posts in this series:
General Audio Terms
The Production Process
Routing And Processing
Microphones And Mic Accessories
Preamps, Converters, Interfaces
Cables and Connectors