Finally, there is an almost magical device, a multiplying digital-to-analog converter, or MDAC. The reference potential, instead of being a fixed value, can come from anywhere -- an analog signal, or even another DAC. So suppose we want to multiply a potential by some scale factor that changes over time. Perhaps we have incoming data that ranges from 10 mV to 10 V, but we always want an output in the range from 10 mV to 100 mV. That means we need to scale the voltage by 1 (for inputs between 10 and 100 mV), by 0.1 (inputs between 0.1 and 1 V), or by 0.01 (inputs from 1 V to 10 V). What to do? Use the analog signal, scaled by 2^{N}/(2^{N}-1), as the reference input to an MDAC! If we then use straight binary coding to the MDAC, putting in the full-scale code of 2^{N}-1 gives output = V_{observed}. But if we put in a code of (2^{N}-1)/10, we scale the output down by a factor of 10, and if we put in a code of (2^{N}-1)/100, we scale down by a factor of 100. So now we can use a computer to control the magnitude of an analog signal.

"But wait a second," you protest (or at least one hopes you protest). "Suppose I have an 8-bit converter. 2^{8} = 256. If I run at full scale,

V_{out} = ((2^{8}-1)/2^{8}) * (2^{8}/(2^{8}-1)) * V_{observed} = V_{observed}.

But if I want (2^{8}-1)/10, I can't do it exactly; 255/10 = 25.5, and I can't represent that exactly as a digital code! Oops.
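The arithmetic above can be sketched numerically. The following is a minimal model of the technique, not any particular device's interface: an N-bit, straight-binary MDAC is treated as V_out = (code / 2^N) · V_ref, with the reference driven by the observed signal prescaled by 2^N/(2^N - 1). The function name `mdac_out` and the 5 V test value are illustrative assumptions.

```python
N = 8
FULL_SCALE = 2**N - 1  # 255, the all-ones code

def mdac_out(code: int, v_observed: float, n_bits: int = N) -> float:
    """Model an n-bit multiplying DAC (straight binary coding) whose
    reference input is the observed signal prescaled by 2^n/(2^n - 1)."""
    v_ref = v_observed * 2**n_bits / (2**n_bits - 1)
    return (code / 2**n_bits) * v_ref

v = 5.0  # an example observed potential, in volts

# Full-scale code reproduces the observed signal exactly:
#   (255/256) * (256/255) * v = v
full = mdac_out(FULL_SCALE, v)

# Dividing by 10 calls for code 255/10 = 25.5, which does not exist.
# The nearest integer code (26) gives 26*v/255, not exactly v/10.
tenth = mdac_out(round(FULL_SCALE / 10), v)
print(full, tenth)
```

Running this shows the point of the protest: the full-scale code recovers the input, but the divide-by-10 code lands near, not on, v/10, because 25.5 must be rounded to an integer.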