
Analog to digital

An analog-to-digital converter (ADC) acts as an interface between the continuous values of the analog world and the digital numerical values used in a computing system. An ADC accepts an unknown analog signal (typically, a voltage) and converts it into a digital word of n bits, representing the ratio between the input voltage and the full scale of the converter.

Generally, ADCs include, or must be preceded by, a sample-and-hold circuit to avoid changes in the input voltage during the conversion. The input/output relationship for an ideal three-bit monopolar converter is shown in the following diagram:

Input/output in an ideal 3-bit ADC

The output assumes encoded values between 000 and 111 when the input varies between zero and $V_{fs}$. The amplitude of the code transition of the least significant bit corresponds to an input variation of $\frac{V_{fs}}{2^n}$, which also indicates the converter's resolution. Ideally, for the three-bit converter example, the code transitions should happen at the values of $\frac{V_{fs}}{8}, \frac{2V_{fs}}{8}, \ldots, \frac{7V_{fs}}{8}$.
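
This ideal transfer characteristic is straightforward to reproduce in code. The following is a minimal Python sketch, assuming a 5 V full scale and a simple truncating conversion; the adc_code function and the chosen values are illustrative, not taken from a specific device:

def adc_code(v_in, v_fs=5.0, n_bits=3):
    """Return the ideal n-bit output code for the input voltage v_in."""
    lsb = v_fs / 2 ** n_bits                     # resolution: 1 LSB = Vfs / 2^n
    code = int(v_in / lsb)                       # one code transition every LSB
    return min(max(code, 0), 2 ** n_bits - 1)    # clamp to the 000..111 range

# Code transitions for the three-bit example: one step every Vfs/8
for k in range(8):
    v = k * 5.0 / 8
    print(f"{v:.3f} V -> {adc_code(v):03b}")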

The characteristics of an ADC can be expressed in the same way as those of the DAC. If an analog voltage is applied to the input, the output of the ADC is an equivalent binary word, where $b_1 \ldots b_n$ are the bits of the word, from the most to the least significant:

$$V_{in} \approx V_{fs}\left(\frac{b_1}{2^1} + \frac{b_2}{2^2} + \cdots + \frac{b_n}{2^n}\right)$$

The approximation ($\approx$) is necessary because the input voltage is reproduced from the binary word only to within the converter's resolution. An alternative formula for calculating the output binary word is given by the following equation:

$$N_{10} \approx \frac{V_{in}}{V_{fs}} \cdot 2^n$$

Here, $N_{10}$ is the decimal representation of the binary word.
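
As a quick worked example of this formula, assuming a three-bit converter with a 5 V full scale (values chosen only for illustration):

v_fs, n_bits = 5.0, 3
v_in = 3.1

n10 = int(v_in / v_fs * 2 ** n_bits)        # int(3.1 / 5 * 8) = int(4.96) = 4
v_rec = n10 / 2 ** n_bits * v_fs            # voltage reproduced from the code

print(f"code N10 = {n10} -> {n10:03b}")     # 4 -> 100
print(f"reconstructed = {v_rec} V, error = {v_in - v_rec:.3f} V")

The 0.6 V reconstruction error is within the 1 LSB (0.625 V) resolution of this coarse converter.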

The binary code remains steady for changes in the input voltage smaller than 1 LSB. For an ideal converter, this means that as the input voltage increases, the output code first underestimates the input voltage and then overestimates it. This error, called the quantization error, is represented in the following diagram:

Quantization error
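
A short sketch can verify this bound numerically. Assuming, as before, a truncating three-bit converter with a 5 V full scale, the quantization error over a ramp input never exceeds one LSB:

v_fs, n_bits = 5.0, 3
lsb = v_fs / 2 ** n_bits                    # 1 LSB = 0.625 V

for i in range(50):
    v_in = i * v_fs / 50                    # ramp from 0 V up to (just below) Vfs
    code = int(v_in / lsb)                  # ideal truncating conversion
    error = v_in - code * lsb               # quantization error
    assert 0 <= error < lsb                 # always smaller than one LSB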

The transitions of the encoded values in an ideal converter should lie on a straight line. This does not always happen, so we can define differential and integral linearity errors, just as we did for the DAC. Each increment of the encoded values should correspond to an input voltage variation equal to one LSB. The differential linearity error is the difference between the real input increase that causes a one-bit increment and the ideal increase corresponding to one LSB. The integral linearity error is the sum of the differential linearity errors.
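
These two definitions translate directly into code. The following sketch computes both errors from a list of measured code-transition voltages; the transition values here are hypothetical, not real measurements:

v_fs, n_bits = 5.0, 3
lsb = v_fs / 2 ** n_bits                            # ideal step: 0.625 V

# Hypothetical input voltages at which each code transition was observed
transitions = [0.60, 1.28, 1.85, 2.52, 3.10, 3.78, 4.40]

steps = [b - a for a, b in zip(transitions, transitions[1:])]
dnl = [s - lsb for s in steps]                      # differential linearity errors
inl = []                                            # integral linearity errors
total = 0.0
for d in dnl:
    total += d                                      # running sum of the DNLs
    inl.append(total)

print("DNL (V):", [f"{d:+.3f}" for d in dnl])
print("INL (V):", [f"{e:+.3f}" for e in inl])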

For this type of converter, it is also possible to define a monotonicity condition, an offset error, and a gain error.

With regard to the representation of signals, we can have the following (contrasted in the sketch after this list):

  • Analog signals: These can assume all of the values within a range, so their values change continuously with time
  • Quantized signals: These can assume only a limited number of values, according to the resolution
  • Sampled signals: These are analog signals that are evaluated at specific time instants, separated by the sampling time
  • Digital signals: These are quantized and sampled signals that are coded as binary numbers
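
The following sketch contrasts these four representations on a single waveform; the signal, sampling time, and converter parameters are all assumed for illustration:

import math

v_fs, n_bits = 5.0, 3
lsb = v_fs / 2 ** n_bits
t_s = 0.1                                   # sampling time, in seconds

def analog(t):
    """A continuous-time signal: it can assume any value in its range."""
    return 2.5 + 2.0 * math.sin(2 * math.pi * t)

for k in range(5):
    t = k * t_s                             # sampling instants
    sample = analog(t)                      # sampled, but still continuous-valued
    code = min(int(sample / lsb), 2 ** n_bits - 1)
    quantized = code * lsb                  # quantized amplitude
    print(f"t={t:.1f}s  sampled={sample:.3f} V  "
          f"quantized={quantized:.3f} V  digital={code:03b}")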