Aliasing in audio is one of those concepts you might know intuitively but not have the language for. Most music producers, and even casual listeners, can tell something is off when aliasing occurs, but it's worth having a firm understanding of how the original audio signal is being damaged so that you can avoid this destructive process.
Below, we'll explain what aliasing in audio is, what it sounds like, and share some strategies to help you avoid introducing aliasing into your mix.
Understanding Aliasing in Audio
Aliasing is a destructive process that can occur when converting an analogue signal (or live sound) into digital audio. When an audio signal is captured digitally at a sample rate that's too low for its content, high frequency components become under-sampled and misrepresented as false lower frequencies, which we hear as distortion.
You can hear what aliasing, sometimes called aliasing distortion, sounds like here:
What is the Nyquist Frequency?
When it comes to discussing aliasing in audio, you may come across this concept referred to as the Nyquist theorem or Nyquist frequency. It's named after Harry Nyquist, who showed that to prevent aliasing when converting an analogue signal to digital audio, you must sample the audio at a rate at least twice its highest frequency component.
Let's break that down a little. In simple terms, the highest frequency you can accurately capture is half the sample rate; anything above that will introduce errors when converting an analog signal to digital audio. This conversion happens in your interface, within a mixing console, or whenever signal is run through an analog to digital converter (A/D converter).
We can use the Nyquist theorem to explain why it's commonplace to use a sample rate of 44.1 kHz or 48 kHz in digital audio. Since the frequency range of human hearing is roughly 20 Hz - 20 kHz, doubling the upper end gives us 40 kHz. Add a little buffer and you arrive at the most common sample rates used to process and play back audio in today's digital systems.
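If you'd like to see the arithmetic, here's a quick sketch in plain Python (the function names are just ours for illustration): the first function expresses the Nyquist rule that the sample rate must be at least twice the highest frequency, and the second shows where a tone above the Nyquist limit folds back to.

```python
def min_sample_rate(highest_freq_hz):
    """Nyquist rule: sample at least twice the highest frequency you want to keep."""
    return 2 * highest_freq_hz

def alias_frequency(freq_hz, sample_rate_hz):
    """Where a tone above the Nyquist limit folds back to after sampling."""
    nyquist = sample_rate_hz / 2
    folded = freq_hz % sample_rate_hz              # wrap into one sampling period
    return sample_rate_hz - folded if folded > nyquist else folded

print(min_sample_rate(20_000))          # 40000 -> hence 44.1 kHz with a little buffer
print(alias_frequency(30_000, 44_100))  # a 30 kHz tone shows up falsely at 14100 Hz
```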
Still not sure how to wrap your head around the Nyquist frequency theorem and how it relates to aliasing? Take a look at this explainer:
What Causes Aliasing in Music?
In technical terms, aliasing occurs when a sampling system, such as an A/D converter, cannot sample an analogue signal fast enough, producing misidentified frequency content that shows up as artifacts in the processed audio. In other words, when your sample rate is too low for a particular piece of audio, high frequency components are folded back into the digitized signal as false lower frequencies that weren't in the original.
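To make that concrete, here's a small demonstration, assuming NumPy is available: we digitize a 15 kHz tone at a 20 kHz sample rate, well below the 30 kHz the Nyquist theorem calls for, and the spectrum of the sampled signal shows a false peak at 5 kHz instead of the original 15 kHz.

```python
import numpy as np

sample_rate = 20_000     # too low: the Nyquist limit here is only 10 kHz
tone_freq = 15_000       # the analogue tone we're trying to capture

t = np.arange(sample_rate) / sample_rate        # one second of sample times
digitized = np.sin(2 * np.pi * tone_freq * t)   # the under-sampled signal

spectrum = np.abs(np.fft.rfft(digitized))
freqs = np.fft.rfftfreq(len(digitized), d=1 / sample_rate)
print(f"Strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~5000 Hz, not 15000
```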
Aliasing also occurs in other media, like video, wherever a sample rate is involved. For some, aliasing may be better explained through the example of the wagon wheel effect, which showcases temporal aliasing:
What Does Aliasing Sound Like?
You can look up examples of audio aliasing or listen to the snippet we included above. It tends to sound like high-pitched, atonal frequencies that are somewhat rough on the ears. As we'll discuss below, some downsampling plugins recreate it deliberately for a certain effect. For example, Ableton's stock plugin "Redux" uses downsampling to create artifacts within audio:
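If you're curious what a Redux-style effect is doing under the hood, one common and deliberately crude approach is sample-and-hold downsampling: keep every Nth sample, hold it, and skip the anti-aliasing filter so the discarded high frequencies fold back in as gritty artifacts. The sketch below (NumPy assumed) illustrates the general principle, not Ableton's actual implementation.

```python
import numpy as np

def sample_and_hold(audio, factor):
    """Crude downsampling: keep every Nth sample and repeat it, with no anti-alias filter."""
    held = audio[::factor]                         # discarding samples is what causes aliasing
    return np.repeat(held, factor)[:len(audio)]

# Example: a clean 2 kHz tone at 44.1 kHz, crushed by a factor of 8
sr = 44_100
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 2_000 * t)
crushed = sample_and_hold(clean, 8)                # now full of inharmonic alias content
```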
How to Avoid Aliasing in Audio
To avoid aliasing in audio, stick with sample rates of 44.1 kHz or above when recording or processing audio. This assumes you're converting an analogue signal to digital audio in the context of producing and sharing songs across standard outputs like streaming platforms or social media.
Other, more specialized outputs, like broadcast, TV, and film, may have different sample rate requirements, which should be provided by your respective distribution channel. To avoid aliasing, simply use the right sample rate for the job.
Is Aliasing Ever Used Creatively?
Yes! Aliasing distortion or downsampling has a unique lo-fi character, and it's incorporated in retro emulation plugins, vinyl plugins, and other effects like Ableton's Redux to consciously create artifacts in the audio. This effect is generally used sparingly, but it can add a lot of character to your mixes when applied deliberately.
How Do I Know Which Sample Rate to Use?
According to the Nyquist theorem, you need to use a sample rate that's at least double the highest frequency within your composition. Keeping in mind that the range of human hearing taps out around 20,000 Hz, you should be completely fine using the standard sample rate of 44.1 kHz. Some engineers process audio at 48 kHz to give themselves a little extra headroom and capture the converted signal in full fidelity, but this may be overkill for the average listener.
Remember that the higher the sample rate, the more data will be captured and, therefore, the bigger the file. There is a tradeoff when selecting a higher sample rate, but generally speaking, if you opt for 44.1 kHz or higher, you should be totally fine for reproducing analog audio as a digital signal accurately.
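If you want to see that tradeoff in numbers, here's a rough back-of-the-envelope calculation for uncompressed audio (24-bit stereo assumed):

```python
def wav_megabytes_per_minute(sample_rate, bit_depth=24, channels=2):
    """Approximate uncompressed size: samples/sec * bytes/sample * channels * 60 seconds."""
    bytes_per_minute = sample_rate * (bit_depth // 8) * channels * 60
    return bytes_per_minute / 1_000_000

print(wav_megabytes_per_minute(44_100))   # ~15.9 MB per minute
print(wav_megabytes_per_minute(96_000))   # ~34.6 MB per minute
```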
It stands to reason that 44.1 kHz should cover you in most scenarios anyhow, since it comfortably spans the human range of hearing, but never forget the power and simplicity of using your ears. You can hear when audio signals have unwanted artifacts. Always test your mixes and masters on multiple output channels to make sure you aren't missing stylistic or technical errors, like aliasing, in the audio processing.
Can I Fix Aliasing in Audio?
The only way to eliminate aliasing altogether is to process your analogue signal at a proper sample rate, according to the Nyquist frequency concept detailed above. However, aliasing that's already baked into audio can be made less noticeable with a low pass filter, so that you hear less of the high frequency content that has become distorted.
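As a rough sketch of that damage-control approach, here's how you might roll off the upper range with SciPy; the 12 kHz cutoff is just an illustrative choice, not a universal setting.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(audio, sample_rate, cutoff_hz=12_000, order=4):
    """Attenuate content above cutoff_hz so aliasing artifacts are less audible."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)

# Example on a stand-in signal (white noise used in place of real aliased audio)
noisy = np.random.default_rng(0).standard_normal(44_100)
softened = lowpass(noisy, 44_100)
```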
Ideally, you can change your sample rate settings to avoid the issue altogether. This can be done in the preferences of your digital audio workstation, within your A/D converter, or in any other piece of gear that converts an analog signal to digital audio.
It's also worth noting that some distortion-focused plugins can introduce aliasing of their own. To mitigate this, opt for higher quality settings where they're available, such as the high quality on/off toggle Ableton Live presents on such plugins:
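Those high quality modes typically work by oversampling: the plugin runs its nonlinear processing at a higher internal sample rate, then filters and downsamples back to the original rate so the extra harmonics don't fold into the audible range. Here's a simplified sketch of that pattern using SciPy and a basic tanh waveshaper; real plugins are more sophisticated, but the shape of the process is the same.

```python
import numpy as np
from scipy.signal import resample_poly

def distort_with_oversampling(audio, factor=4):
    """Run a waveshaper at a higher internal rate, then filter and downsample back."""
    upsampled = resample_poly(audio, factor, 1)   # raise the internal sample rate
    shaped = np.tanh(3.0 * upsampled)             # the nonlinear, alias-producing stage
    return resample_poly(shaped, 1, factor)       # anti-alias filter + downsample

# Quick check on a loud 5 kHz tone at 44.1 kHz
sr = 44_100
tone = np.sin(2 * np.pi * 5_000 * np.arange(sr) / sr)
processed = distort_with_oversampling(tone)
```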
It's an engineering cliché for a reason: when in doubt, trust your ears. Process deliberately and A/B test each effect layer as you mix to make sure you're introducing only the frequency content you want. Proper gain staging and recording technique can also help you capture a clean audio signal at the source.
Avoiding Aliasing in Audio is a Good Rule of Thumb
As we've discussed, there are select scenarios in which aliasing is used deliberately to create audio artifacts for a particular effect. Outside of that space, though, audio signals should be recorded, processed, and manipulated at an appropriate sample rate so that their high frequency components stay intact.
So long as you use proper digital audio processing technique and keep your sample rate consistent from one production stage to the next, you won't have to worry too much about running into the harsh frequencies that come with audio aliasing.