A form of data compression designed to reduce the size of audio data files. Audio compression algorithms are typically referred to as audio codecs. As with other forms of data compression, many "lossless" and "lossy" algorithms exist to achieve the compression effect.
Lossless Compression
In contrast to image compression, lossless audio compression algorithms are not nearly as widely used. Their primary users are audio engineers and those consumers who are wary of the quality loss introduced by commercially available lossy techniques such as Vorbis and MP3. There are two main reasons why audio data resists lossless compression.
First, the vast majority of sound recordings are natural sounds, recorded from the real world, and such data doesn't compress well; in the same way, photos compress less efficiently with lossless methods than computer-generated images do. Worse, even computer-generated sounds can contain very complicated waveforms that challenge many compression algorithms. This is due to the nature of audio waveforms, which are generally difficult to simplify without a (necessarily lossy) conversion to frequency information, as performed by the human ear.
The second reason is that the values of audio samples change very quickly, so runs of identical consecutive bytes rarely appear and generic data compression algorithms don't work well for audio. However, convolution with the filter [-1 1] (that is, taking the first difference) tends to whiten the spectrum a bit and allows traditional lossless compression to do its job; integration restores the original signal. More advanced codecs such as Shorten and FLAC use linear prediction to derive an optimal whitening filter.
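The first-difference trick can be sketched in a few lines. The sine-wave samples below are hypothetical stand-in data, not a real recording: differencing shrinks the values that need to be coded, and a cumulative sum inverts the transform exactly.

```python
import numpy as np

# Hypothetical "audio" data: a slowly and smoothly varying waveform.
signal = np.round(1000 * np.sin(2 * np.pi * np.arange(64) / 64)).astype(np.int64)

# Whitening: convolve with [-1, 1], i.e. take the first difference.
# Keeping the first sample verbatim makes the transform invertible.
residual = np.empty_like(signal)
residual[0] = signal[0]
residual[1:] = np.diff(signal)

# The residuals are far smaller in magnitude than the raw samples, so an
# entropy coder (as in Shorten or FLAC) needs fewer bits per value.
print(np.abs(signal).max(), np.abs(residual[1:]).max())

# Integration (a cumulative sum) restores the signal losslessly.
restored = np.cumsum(residual)
```

Linear-prediction codecs generalize this: instead of predicting each sample as simply equal to the previous one, they fit a short filter that predicts it from several previous samples, then code the (even smaller) residuals.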
Lossy Compression
Most lossy audio compression algorithms are based on simple transforms, such as the discrete cosine transform (DCT), that convert sampled waveforms into their component frequencies. Some modern algorithms use wavelets instead, but it is still not certain whether such algorithms will work significantly better than those based on the DCT, because of the inherent periodicity of audio signals, which wavelets seem not to handle well. Some algorithms try to merge the two approaches.
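To make the transform step concrete, here is a naive DCT-II written straight from its defining sum — an O(N²) sketch for illustration only; real codecs use fast algorithms plus windowing and block sizes this example omits. A waveform matching one cosine basis vector collapses to a single frequency coefficient:

```python
import numpy as np

def dct2(x):
    """Naive DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.dot(x, np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

N = 64
# A sampled cosine that matches the k = 5 basis vector of the DCT.
x = np.cos(np.pi / N * (np.arange(N) + 0.5) * 5)

X = dct2(x)
# All of the signal's energy lands in coefficient 5 (magnitude N/2 = 32);
# every other coefficient is zero up to floating-point rounding.
```

This energy compaction is exactly what a lossy codec exploits: once the signal is expressed as frequency coefficients, the unimportant ones can be coarsely quantized or dropped.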
Most algorithms don't try to minimize mathematical error, but instead to maximize the listener's subjective sense of fidelity. Because the human ear cannot analyze all components of an incoming sound, a file can be modified considerably without changing the subjective experience of a listener. For example, a codec can drop some information about very low and very high frequencies, which are almost inaudible to humans. Similarly, frequencies that are "masked" by other frequencies, due to the nature of the human cochlea, are represented with decreased accuracy. Such a model of the human ear is often called a psychoacoustic model, or "psy-model" for short.
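A real psychoacoustic model is far more elaborate, but the basic move — transform, discard what the model deems inaudible, transform back — can be sketched with a toy magnitude threshold standing in for the masking model. The signal, frequencies, and threshold below are illustrative assumptions:

```python
import numpy as np

N = 256
t = np.arange(N) / N
# A loud 5-cycle component plus a much quieter 90-cycle component,
# a toy stand-in for a detail a psy-model would judge inaudible.
loud = np.cos(2 * np.pi * 5 * t)
quiet = 0.001 * np.cos(2 * np.pi * 90 * t)
x = loud + quiet

spectrum = np.fft.rfft(x)
# Crude "psy-model": zero out frequency coefficients whose magnitude
# falls below a fixed threshold, discarding the quiet component entirely.
spectrum[np.abs(spectrum) < 1.0] = 0
y = np.fft.irfft(spectrum, n=N)

# The reconstruction differs from the original only by the quiet
# component, yet far fewer nonzero coefficients remain to be coded.
```

Real codecs compute a masking threshold per frequency band from the signal itself, rather than using one global cutoff, and quantize coefficients more or less coarsely instead of simply zeroing them.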
Due to the nature of lossy algorithms, audio quality suffers each time a file is decompressed and recompressed. This makes lossily compressed files less than ideal for audio engineering applications such as sound editing and multitrack recording.
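This generation loss can be demonstrated with a toy "codec" that simply quantizes samples. The step sizes are arbitrary assumptions, alternated between generations the way real encode settings often differ across editing passes:

```python
import numpy as np

rng = np.random.default_rng(1)
original = rng.standard_normal(1024)

def lossy_roundtrip(x, step):
    # Toy lossy codec: uniform scalar quantization of the samples.
    # (Real codecs quantize transform coefficients, not raw samples.)
    return np.round(x / step) * step

# Re-encode repeatedly with alternating settings, as happens when a
# file is edited and saved several times with different encoders.
signal = original
errors = []
for generation in range(6):
    step = 0.25 if generation % 2 == 0 else 0.3
    signal = lossy_roundtrip(signal, step)
    errors.append(np.sqrt(np.mean((signal - original) ** 2)))

# errors[0] is the loss from a single encode; later entries show how
# distortion accumulates across generations.
```

This is why studios keep masters and intermediate mixes in lossless or uncompressed form, encoding to a lossy format only once, as the final step.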
Examples
Some examples of popular audio codecs:
- MP3 (MPEG-1 Audio Layer III)
- AAC (Advanced Audio Coding)
- Vorbis
- WMA (Windows Media Audio)
Other examples can be found on the codec page.
See also: psychoacoustics, audio file format, audio signal processing, data compression, video file formats, audio storage, codec, digital signal processing, speech coding