# Why ADCs Use Integer Math

### Every Measurement Has Finite Precision

Anyone who makes measurements knows that all measurements are contaminated with noise, so every measurement has finite precision. Suppose one is measuring an electrical potential. The potential may be 1.073 V, or, with sufficient care in the measurement, 1.073182 V. But the measurement is not 1.073182131415161788 V: humanity has not figured out how to measure potential to 19 significant figures, nor could such precision be meaningful, given that the noise in many systems is set by the thermal energy of the electrons.

At room temperature the thermal energy kT is 4.1×10⁻²¹ J, or, divided by the electron charge, 25.7 mV per electron. Averaging over the roughly 6×10¹⁸ electrons in one coulomb of charge reduces this uncertainty by a factor of √(6×10¹⁸) ≈ 2.5×10⁹, to about 10 pV. Measuring smaller amounts of charge (or small currents over short periods of time) increases the noise. If one measures 1 microampere for 1 millisecond, that is 10⁻⁹ C, the averaging factor shrinks by √(10⁹) ≈ 3×10⁴, and the smallest useful voltage increment grows to roughly 10 pV × 3×10⁴ ≈ 300 nV.

It makes no sense to digitize data with resolution significantly finer than the noise amplitude -- the least significant bits will contain only noise, not useful information. It's just like a bathroom scale. For a 150 lb (70 kilogram) person, a resolution of 0.25 lb (about 100 g) may be useful. But would 1 mg resolution make sense? We gain and lose about 0.5 g (roughly 1/1000 lb) each time we breathe. Such resolution obscures the useful information.
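The noise-floor arithmetic can be checked directly. This short Python sketch follows the same back-of-envelope model as the text (independent electrons, √N averaging); the physical constants are standard CODATA values, and the 1 V full scale used for the bit count at the end is a hypothetical choice for illustration, not something specified above:

```python
import math

# Physical constants (CODATA values)
K_BOLTZMANN = 1.380649e-23     # Boltzmann constant, J/K
E_CHARGE    = 1.602176634e-19  # electron charge, C
T_ROOM      = 298.0            # room temperature, K

# Thermal energy per electron, and the equivalent voltage scale kT/e
kT = K_BOLTZMANN * T_ROOM          # ~4.1e-21 J
kT_volts = kT / E_CHARGE           # ~25.7 mV

# Averaging over N independent electrons reduces the uncertainty by sqrt(N).
n_electrons_1C = 1.0 / E_CHARGE    # ~6.2e18 electrons in 1 coulomb
noise_1C = kT_volts / math.sqrt(n_electrons_1C)   # noise floor for 1 C

# A smaller charge packet averages fewer electrons, so the noise rises:
# 1 uA for 1 ms delivers 1e-9 C.
charge_small = 1e-6 * 1e-3                        # coulombs
noise_small = kT_volts / math.sqrt(charge_small / E_CHARGE)

# How many ADC bits are useful against that noise, assuming a
# (hypothetical) 1 V full-scale range?
useful_bits = math.log2(1.0 / noise_small)

print(f"kT              = {kT:.2e} J ({kT_volts * 1e3:.1f} mV per electron)")
print(f"noise, 1 C      = {noise_1C:.2e} V")
print(f"noise, 1e-9 C   = {noise_small:.2e} V")
print(f"useful bits (1 V full scale) = {useful_bits:.1f}")
```

Running it reproduces the chain of estimates: a noise floor on the order of 10 pV for a full coulomb, a few hundred nanovolts for 10⁻⁹ C, and therefore only about 21-22 meaningful bits even for a generous 1 V range. Any extra resolution below that digitizes noise.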