A compression algorithm can be evaluated in several different ways. We could measure the relative complexity of the algorithm, the memory required to implement it, how fast it performs on a given machine, the amount of compression, and how closely the reconstruction resembles the original.

A very logical way of measuring how well a compression algorithm compresses a given set of data is to look at the ratio of the number of bits required to represent the data before compression to the number of bits required to represent the data after compression. This ratio is called the compression ratio.
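
As a quick check of this definition, here is a minimal sketch that computes the compression ratio for a hypothetical 256 x 256 grayscale image; the sizes are illustrative numbers chosen for the example, not data from the text.

```python
# Hypothetical sizes: a 256 x 256 grayscale image stored at 8 bits per
# pixel, compressed down to 16,384 bytes (illustrative numbers only).
bits_before = 256 * 256 * 8   # 524,288 bits before compression
bits_after = 16_384 * 8       # 131,072 bits after compression

compression_ratio = bits_before / bits_after
print(f"compression ratio = {compression_ratio:.0f}:1")  # prints 4:1
```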

Another way of reporting compression performance is to provide the average number of bits required to represent a single sample. This is generally referred to as the rate.
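
Continuing the same hypothetical image, the rate follows directly: divide the number of bits after compression by the number of samples (pixels, in this case).

```python
# Same illustrative example: 65,536 pixels compressed to 131,072 bits.
num_samples = 256 * 256       # pixels in the hypothetical image
bits_after = 16_384 * 8       # compressed size in bits

rate = bits_after / num_samples
print(f"rate = {rate:.1f} bits per pixel")  # prints 2.0 bits per pixel
```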

In lossy compression, the reconstruction differs from the original data. Therefore, to determine the efficiency of a compression algorithm, we need some way of quantifying the difference. The difference between the original and the reconstruction is often called the distortion.
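
The text does not single out a particular measure at this point; one commonly used choice, shown here purely as an illustration, is the mean squared error between the original and reconstructed samples.

```python
def mean_squared_error(original, reconstruction):
    """Quantify distortion as the average squared difference between
    original and reconstructed samples (one common choice, not the
    only one)."""
    assert len(original) == len(reconstruction)
    return sum((x - y) ** 2
               for x, y in zip(original, reconstruction)) / len(original)

# Tiny illustrative signal: the reconstruction is close to, but not
# exactly, the original.
original = [10, 12, 9, 14]
reconstruction = [10, 11, 9, 15]
print(mean_squared_error(original, reconstruction))  # prints 0.5
```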

Lossy techniques are generally used for the compression of data that originate as analog signals, such as speech and video. In the compression of speech and video, the final arbiter of quality is the human observer. Because human responses are difficult to model mathematically, many approximate measures of distortion are used to determine the quality of the reconstructed waveforms.
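
As one concrete instance of such an approximate measure (my example, not one the text names here), the signal-to-noise ratio compares the power of the original signal with the power of the distortion; a higher value suggests, but does not guarantee, better perceived quality.

```python
import math

def snr_db(original, reconstruction):
    """Signal-to-noise ratio in decibels: an easily computed proxy for
    quality that only approximates what human listeners or viewers
    actually perceive."""
    n = len(original)
    signal_power = sum(x * x for x in original) / n
    noise_power = sum((x - y) ** 2
                      for x, y in zip(original, reconstruction)) / n
    return 10 * math.log10(signal_power / noise_power)

original = [10.0, 12.0, 9.0, 14.0]
reconstruction = [10.0, 11.0, 9.0, 15.0]
print(f"SNR = {snr_db(original, reconstruction):.1f} dB")  # about 24.2 dB
```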

Other terms used to describe the difference between the reconstruction and the original data are fidelity and quality. When we say that the fidelity or quality of a reconstruction is high, we mean that the difference between the reconstruction and the original is small.