Saturday, April 4, 2020

SAR ADC error budget measured in the number of bits or error percentage

We used the example where we needed to digitize an analog input signal in the range of 0-1V, with the digital samples represented as 8-bit values, and we found that the resolution of our ADC is 0.003922V (1V/255).

Obviously, in our example the ADC number of bits (8 bits) and the analog input signal range of 0-1V drive the resolution of our ADC (0.003922V), and together they represent the most basic error budget of our ADC (or of any ADC, for that matter).
  • So the conclusion here is that the ADC number of bits (8 bits) and the analog input signal range of 0-1V determine how much measurement error is introduced by our ADC.
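
For illustration, here is a minimal C sketch of the resolution calculation above (the variable names are mine, chosen just for this sketch):

    #include <stdio.h>

    int main(void)
    {
        double range_v = 1.0;               /* analog input signal range: 0-1V */
        int    n_bits  = 8;                 /* ADC number of bits */
        int    n_steps = (1 << n_bits) - 1; /* 2^8 - 1 = 255 steps */

        /* resolution = range / (2^N - 1), as in the example above */
        double resolution_v = range_v / n_steps;
        printf("resolution = %.6fV\n", resolution_v); /* prints 0.003922V */
        return 0;
    }
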
It would be nice to represent this ADC measurement error with one single number, for example as a percentage of the 0-1V analog input signal range of our ADC. To do this we can use the following formula:

ADC error [%] = 100 * (analog input signal range) / (2^(ADC number of bits) - 1)

=> for our example: ADC error [%] = 100 * (1V - 0V) / (2^8 - 1) = 0.3922%
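
The same formula written as a small C helper (the function name adc_error_pct is made up for this sketch, not taken from any library):

    #include <stdio.h>

    /* ADC error [%] = 100 * (analog input signal range) / (2^N - 1) */
    double adc_error_pct(double range_v, int n_bits)
    {
        return 100.0 * range_v / ((1L << n_bits) - 1);
    }

    int main(void)
    {
        /* the 8-bit, 0-1V example from above */
        printf("ADC error = %.4f%%\n", adc_error_pct(1.0, 8)); /* prints 0.3922% */
        return 0;
    }
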


Looking at this simple formula, it is clear that we could make the ADC error [%] smaller by doing one of two things (or both at the same time):
  • decrease ADC analog input signal range
  • increase ADC number of bits
e.g. if we decrease the ADC analog input signal range from 0-1V to 0-0.25V, the ADC error will decrease from 0.3922% to about 0.1% (note that this percentage is still taken relative to the original 1V scale, i.e. the LSB shrinks to about 0.98mV):
=> ADC error [%] = 100 * (0.25V - 0V) / (2^8 - 1) = 0.098% ≈ 0.1%
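
Both levers can be checked with the adc_error_pct() helper sketched above; the 12-bit case also anticipates the conclusion below:

    /* reusing adc_error_pct() from the sketch above */
    printf("%.3f%%\n", adc_error_pct(0.25, 8)); /* 0.098%, i.e. ~0.1% */
    printf("%.3f%%\n", adc_error_pct(1.0, 12)); /* 0.024%, since 2^12 - 1 = 4095 */
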


As a conclusion:
  • if you come across somebody talking about, for example, an ADC error budget of 12 bits or 0.024%, what they really mean is that the error budget of a 12-bit ADC for an input signal range of 0-1V is 0.024%, because:


=> ADC error [%] = 100 * (1V - 0V) / (2^12 - 1) = 0.024%

© 2011-2020 ASIC Stoic. All rights reserved

