I've been wondering this for a while: if FLAC and WAV are lossless formats, why isn't there a set bitrate? Meaning, why is 24-bit WAV superior to 16- or 8-bit WAV? Is anything lost when going from a 24-bit WAV file to a 16-bit one? Or am I comparing different things, i.e. the 24-bit sampling refers to fidelity while MP3 bitrate refers to % compression?
I just made myself far more confused by asking that.
The CD "Red Book" standard is 16 bits per sample. 24 is more/better, and yes, it is technically a loss when you downconvert.
[INDENT]The bit rate is 1411.2 kbit/s:
2 channels × 44,100 samples per second per channel × 16 bits per sample = 1,411,200 bit/s = 1,411.2 kbit/s.
Source: http://en.wikipedia.org/wiki/Red_Book_(audio_Compact_Disc_standard)[/INDENT]
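To make the arithmetic concrete, here's a little Python sketch (the sample value and the shift-by-8 truncation are just for illustration) that reproduces the Red Book bitrate math and shows exactly what disappears when a 24-bit sample is cut down to 16 bits:
[CODE]
# Back-of-the-envelope check of the Red Book bitrate.
channels = 2
sample_rate = 44_100      # samples per second per channel
bit_depth = 16            # bits per sample

bitrate = channels * sample_rate * bit_depth
print(bitrate)            # 1411200 bit/s = 1411.2 kbit/s

# A 24-bit sample truncated to 16 bits loses its low 8 bits for good:
sample_24 = 0x12_34_56          # an arbitrary 24-bit sample value
sample_16 = sample_24 >> 8      # keep the top 16 bits
restored = sample_16 << 8       # pad back to 24 bits
print(hex(sample_24), hex(restored))  # 0x123456 vs 0x123400 -- low byte gone
[/CODE]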
To confuse matters more, you can intentionally add random noise in the process (dither).
[INDENT]
Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as "banding" (stepwise rendering of smooth gradations in brightness or hue) in images, or noise at discrete frequencies in an audio recording, that are more objectionable than uncorrelated noise. Dither is routinely used in processing of both digital audio and digital video data, and is often one of the last stages of audio production to compact disc.
Source: http://en.wikipedia.org/wiki/Dither[/INDENT]
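If it helps, here's a toy Python sketch of the idea (the step size and "signal" values are made up for illustration): without dither, every sample smaller than one quantization step collapses to the same output value, while TPDF dither randomizes which step each sample lands on, turning the correlated error into noise:
[CODE]
import random

STEP = 256  # pretend we're rounding 24-bit values down to 16-bit (2^8 = 256)

def quantize(x):
    # Plain rounding: error is correlated with the signal (banding/distortion)
    return round(x / STEP) * STEP

def quantize_dithered(x):
    # TPDF dither: sum of two uniform randoms, spanning +/- one step
    d = (random.random() - random.random()) * STEP
    return round((x + d) / STEP) * STEP

signal = [i * 0.37 for i in range(10)]          # slowly varying "signal"
print([quantize(x) for x in signal])            # all collapse to 0
print([quantize_dithered(x) for x in signal])   # randomized between steps
[/CODE]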
Compression is lossy when you cannot get back to the exact original. If you use a lossless compression routine, e.g. FLAC, on a 24-bit WAV, you'd get back the same 24-bit WAV file when you decompress it.
I won't quote the article, but there's more here:
http://en.wikipedia.org/wiki/Audio_compression_%28data%29#Lossy_audio_compression
Sorry that I'm using Wikipedia as an authoritative source, but, like lossy compression, it's good enough for this application.
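FLAC itself needs an external library, but the "lossless" guarantee is easy to demonstrate with any lossless codec. This Python sketch uses zlib from the standard library as a stand-in for FLAC, round-tripping some bytes and checking they come back bit-for-bit identical:
[CODE]
import zlib

# Stand-in for raw PCM audio data; any bytes will do for the demo.
original = bytes(range(256)) * 100

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original     # lossless: exactly the original bytes
print(len(original), "->", len(compressed), "bytes")
[/CODE]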