
Improving ADC Results Through Oversampling and Post-Processing of Data

Today’s mixed-signal programmable system chips (PSCs) include a configurable successive approximation register (SAR) analog-to-digital converter (ADC). These ADCs are often the architecture of choice for medium-to-high resolution applications with sample rates under 5 megasamples per second (Msps) and resolution ranging from 8 to 16 bits. This resolution is sufficient for a variety of applications, such as portable or battery-powered instruments, industrial controls and data or signal acquisition.

The implementation of a sophisticated reconstruction algorithm can enhance the results of an ADC, but in many cases, it is not cost effective or necessary to do so. However, when increased accuracy is required and bandwidth is not of primary concern, digital post-processing techniques, such as oversampling, averaging, and decimation, can be used to increase the effective resolution or sample rate of measurements.

ADC Basics

Used to capture discrete samples of a continuous analog voltage and provide a discrete binary representation of the signal, ADCs are generally characterized by input voltage range, resolution, and bandwidth.

The input voltage range of an ADC is determined by its reference voltage (VREF). For input signal ranges less than or greater than VREF, an analog scaling function may be used to amplify or attenuate the input signal, thus matching the input voltage range of the ADC.

ADC resolution is a function of the number of binary bits in the converter. The ADC approximates the value of the input voltage using 2ⁿ steps, where n is the number of bits in the converter. Each step therefore represents VREF/2ⁿ volts. For an ADC configured for 12-bit operation with a 2.56 V reference voltage, the least significant bit (LSB) = 2.56 V/4,096 = 0.625 mV.
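
As a quick check of the arithmetic, the short C sketch below computes the step size from the reference voltage and bit width used in the example above:

#include <stdio.h>

/* Step size (LSB) of an n-bit ADC with reference voltage vref. */
static double adc_lsb_volts(double vref, unsigned bits)
{
    return vref / (double)(1u << bits);   /* VREF / 2^n */
}

int main(void)
{
    /* 12-bit converter with a 2.56 V reference, as in the text. */
    printf("LSB = %.6f V\n", adc_lsb_volts(2.56, 12));   /* prints 0.000625 V */
    return 0;
}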

Constrained by architecture and several performance characteristics, bandwidth is an indication of the maximum number of conversions the ADC can perform each second. For example, in 12-bit mode, a SAR ADC may be capable of up to 600 ksps.

Conversion

A SAR ADC samples the input onto an array of binary-weighted capacitors. To begin a conversion, all of the capacitors are quickly discharged. Next, VIN is applied to all of the capacitors, which charge to a value very close to VIN. The capacitors are then switched to ground, so that –VIN appears at the input of the comparator.

The conversion process begins with capacitor C switched to VREF. Because of the binary weighting of the capacitors, the voltage at the input of the comparator is –VIN + ½VREF. If VIN is greater than VREF/2, then the comparator output is 1; otherwise the comparator output is 0. A register is clocked to retain this value as the most significant bit (MSB) of the result.

Next, if the MSB is 0, capacitor C is switched to ground. Otherwise, it remains connected to VREF. Then capacitor C/2 is connected to VREF. The result at the comparator input is either –VIN + ¼VREF or –VIN + ¾VREF (depending on the MSB), and the comparator output now indicates the value of the next MSB. This bit is registered, and the process continues for each subsequent bit until conversion is complete.
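
The decision sequence can be modeled behaviorally in a few lines of C (a sketch of the successive-approximation search only; the capacitor network itself is not modeled):

#include <stdint.h>
#include <stdio.h>

/*
 * Behavioral model of the successive-approximation search described
 * above: each step compares the input against the current trial value
 * and keeps or clears the corresponding binary-weighted bit.
 */
static uint16_t sar_convert(double vin, double vref, unsigned bits)
{
    uint16_t code = 0;
    double step = vref / 2.0;          /* weight of the MSB */
    double trial = 0.0;

    for (unsigned i = 0; i < bits; i++) {
        if (vin >= trial + step) {     /* comparator decision */
            code = (uint16_t)((code << 1) | 1u);
            trial += step;             /* keep this bit's weight */
        } else {
            code = (uint16_t)(code << 1);
        }
        step /= 2.0;                   /* next binary-weighted step */
    }
    return code;
}

int main(void)
{
    /* 1.0 V input, 2.56 V reference, 12 bits: code of about 1600 */
    printf("%u\n", (unsigned)sar_convert(1.0, 2.56, 12));
    return 0;
}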

This process results in a binary approximation of VIN. Generally, there is a fixed interval, a sampling period (T), between the samples. The inverse of the sampling period is often referred to as the sampling frequency (fs = 1/T). If the signal changes faster than the sampling rate can accommodate, or if the actual VIN value falls between counts in the result, information is lost during conversion (Figure 1).

Figure 1

To avoid these issues, the sampling rate must be high enough to provide enough samples to adequately represent the input signal. Based on the Nyquist-Shannon Sampling Theorem, the minimum sampling rate must be at least twice the frequency of the highest frequency component in the target signal (Nyquist Frequency). For example, to recreate the frequency content of an audio signal with up to 22 kHz bandwidth, you must sample at a minimum of 44 ksps. As long as the input is sampled at or above the Nyquist Frequency, post-processing techniques can be used to interpolate intermediate values and reconstruct the original input signal within desired tolerances.

Oversampling

Oversampling refers to sampling the signal at a rate significantly higher than the Nyquist Frequency. An increased sampling rate does not directly improve ADC resolution, but by providing more samples, this technique more accurately tracks the input signal by better utilizing the existing ADC dynamic range. Thus, oversampling by itself improves the digital representation of the signal only down to the ADC physical dynamic range limit (minimum step size).

Figure 2

Figure 2 shows that doubling the sampling frequency a second time results in a series of samples that utilize the available dynamic range of the ADC even more fully. Increasing the sampling rate further without additional post-processing simply results in multiple samples of the same value during each step in the waveform, yielding no real improvement in the basic digital representation of the signal.

The high maximum sampling rate of the ADC in a mixed-signal PSC supports oversampling of multiple analog inputs under user control. By controlling the system clock rate, sample acquisition times, and the sampling sequence of the analog subsystem, or by implementing a state machine to control sequencing and timing of the analog block, the sample interval for each analog input in a design can be independently controlled.

Oversampling alone can be sufficient for many applications. However, oversampling is often combined with additional post-processing to further improve the digital representation of an input signal without the need for a high-performance DSP.

Averaging

One common post-processing technique is digital low-pass filtering, or averaging. Unlike other techniques, averaging is not intended to improve the resolution of the result. Rather, it is a simple and effective way to smooth the input waveform and reduce sensor noise or provide damping for a control input. Averaging is typically implemented with EQ 1 and EQ 2:

S(t) = S(t-1) – [S(t-1)/N] + VIN        EQ 1

A(t) = S(t)/N        EQ 2

Figure 3

where S is an accumulator, N is the length of the filter (i.e., the number of samples included in the rolling average), and A is the result of the averaging function. The time constant of the averaging equation is given by τ = N/FS, where N is again the length of the filter and FS is the sampling frequency.

Figure 3 shows the step response of the averaging equation with N = 4. Also shown (as the dotted line) is the familiar waveform associated with the step response of a single-pole RC low-pass filter with time constant τ = RC (shown equal to the averaging equation’s time constant in this case).
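
A minimal fixed-point sketch of EQ 1 and EQ 2, as they might run on an MCU or in FPGA-hosted firmware (the filter length N = 4 and the step input are illustrative):

#include <stdint.h>
#include <stdio.h>

#define N 4   /* filter length: number of samples in the rolling average */

/* EQ 1: S(t) = S(t-1) - S(t-1)/N + VIN, kept in a running accumulator. */
static uint32_t s_accum = 0;

static uint16_t average_update(uint16_t vin)
{
    s_accum = s_accum - (s_accum / N) + vin;  /* EQ 1 */
    return (uint16_t)(s_accum / N);           /* EQ 2: A(t) = S(t)/N */
}

int main(void)
{
    /* Step input: the output ramps toward 1000, mirroring Figure 3. */
    for (int i = 0; i < 8; i++)
        printf("A(%d) = %u\n", i, (unsigned)average_update(1000));
    return 0;
}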

Figure 4

The green values in Figure 4 illustrate the result when the averaging equations are applied to the samples shown in Figure 2.

For designers using mixed-signal PSCs, the analog system builder available in the vendor’s design suite provides easy access to the averaging function: for each analog input, digital averaging can be enabled by selecting this feature in the peripheral configuration window. Alternatively, averaging and other post-processing methods can be performed by user-created functions implemented in the field-programmable gate array (FPGA) fabric. Samples for each analog input can be captured in real time directly from the ADC_RESULT interface of the analog subsystem, and this data can then be post-processed using state machines, an embedded microcontroller (MCU), or digital signal processing (DSP) circuitry.

Oversampling and averaging alone are not always enough. For some applications, it is desirable to enhance the sampling rate or resolution of the ADC in order to represent signal content that might normally require a more costly ADC plus an FPGA or MCU. In practice, simple post-processing techniques can increase the effective sampling rate by up to four times and the effective resolution of the ADC by two to four bits. Beyond these limits, it is generally best to consider upgrading the hardware or moving to a high-end DSP using the Whittaker-Shannon interpolation formula.

Interpolation

Strictly speaking, anything done to arithmetically enhance the effective sample rate or resolution of the sampled results is interpolation. However, the term is generally applied only to techniques that use data from two or more samples to create a new data series with an increased effective sample rate.

The simplest interpolation duplicates each real sample N times to increase the effective sample rate to N × FS. If logic resources are at a minimum and audio quality is not of paramount concern, then you might use this simple technique to adapt an audio signal that was sampled at 8 ksps to a recording device that expects data at 11 ksps (by duplicating 3 of every 8 samples).
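
A sketch of that rate adaptation in C (the function name and block layout are illustrative assumptions; any 3 of the 8 positions could be duplicated):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Crude rate adaptation from 8 ksps to 11 ksps: for every block of 8
 * input samples, emit 11 output samples by repeating 3 of them.
 * dst must hold roughly 11*n/8 + 3 samples; returns the count written.
 */
static size_t duplicate_8_to_11(const int16_t *src, size_t n, int16_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        dst[out++] = src[i];
        /* Duplicate samples 2, 5, and 7 of each 8-sample block. */
        size_t pos = i % 8;
        if (pos == 2 || pos == 5 || pos == 7)
            dst[out++] = src[i];
    }
    return out;
}

int main(void)
{
    int16_t in[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    int16_t out[11];
    size_t n = duplicate_8_to_11(in, 8, out);
    for (size_t i = 0; i < n; i++)
        printf("%d ", out[i]);         /* 11 samples: 0 1 2 2 3 4 5 5 6 7 7 */
    printf("\n");
    return 0;
}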

Figure 5

Figure 5 shows a more common use of interpolation. Using the real samples from Figure 1, we see the effect of simple two-point linear interpolation to double the number of samples and smooth the resulting digital representation of the input signal. In Figure 5, each tick on the T axis represents a real sample value. The intermediate values were calculated by summing two adjacent real samples and then dividing by two, which doubles the effective sample rate and simultaneously increases the effective resolution by one bit. Performing the linear interpolation again results in four times the original number of samples and an increase to 0.25 LSB resolution (an effective two-bit increase).
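
A minimal C sketch of this two-point interpolation (the function name and the one-pad-bit output scaling are illustrative assumptions):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Two-point linear interpolation as in Figure 5: for n real samples,
 * write 2n-1 output samples, inserting the average of each adjacent
 * pair between the originals.  Outputs carry one extra fractional bit
 * (all values are doubled) so the 0.5 LSB midpoints are representable.
 */
static void interpolate_2x(const int16_t *in, size_t n, int16_t *out)
{
    for (size_t i = 0; i + 1 < n; i++) {
        out[2 * i]     = (int16_t)(in[i] * 2);          /* real sample, 1 pad bit */
        out[2 * i + 1] = (int16_t)(in[i] + in[i + 1]);  /* (in[i]+in[i+1])/2, scaled by 2 */
    }
    if (n)
        out[2 * (n - 1)] = (int16_t)(in[n - 1] * 2);    /* last real sample */
}

int main(void)
{
    int16_t in[4] = {10, 20, 15, 30};
    int16_t out[7];
    interpolate_2x(in, 4, out);
    for (int i = 0; i < 7; i++)
        printf("%d ", out[i]);          /* 20 30 40 35 30 45 60 (scaled by 2) */
    printf("\n");
    return 0;
}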

Figure 6

Implementing a simple linear predictive coding (LPC) algorithm in FPGA logic gates is relatively straightforward. Figure 6 contains a block diagram of the logic required to implement an LPC expansion that enhances the sampling rate by four times and the ADC resolution from 12 to 14 effective bits. For each pair of 12-bit real samples from VIN, four 14-bit intermediate samples are computed using simple arithmetic and padding of the original samples. In this example, the computations are performed in parallel and the 14-bit output data is multiplexed to the rest of the processing chain at four times the ADC sample rate. Note that while linear interpolation increases both the effective sample rate and the resolution of the samples by predicting the value of VIN for the times between real samples, it cannot recover lost information. For example, Figure 5 illustrates the loss of information about the peak of the input signal between T– and T+. Whereas the Whittaker-Shannon interpolation formula would recover some information about this peak, interpolating values by simply averaging the two adjacent samples did not.
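
One plausible reading of the Figure 6 data path, sketched in C (names and the exact set of quarter-step points are assumptions; in the FPGA the multiplies reduce to shifts and adds):

#include <stdint.h>
#include <stdio.h>

/*
 * 4x expansion: from a pair of adjacent 12-bit samples a and b,
 * produce four 14-bit values spaced evenly between them.  The two
 * pad bits come from the scale factor of 4.
 */
static void expand_4x_14bit(uint16_t a, uint16_t b, uint16_t out[4])
{
    out[0] = (uint16_t)(4u * a);            /* a, padded to 14 bits   */
    out[1] = (uint16_t)(3u * a + b);        /* a + (b - a)/4, scaled  */
    out[2] = (uint16_t)(2u * a + 2u * b);   /* midpoint, scaled       */
    out[3] = (uint16_t)(a + 3u * b);        /* a + 3(b - a)/4, scaled */
}

int main(void)
{
    uint16_t out[4];
    expand_4x_14bit(100, 104, out);         /* two adjacent 12-bit samples */
    printf("%u %u %u %u\n", (unsigned)out[0], (unsigned)out[1],
           (unsigned)out[2], (unsigned)out[3]);   /* 400 404 408 412 */
    return 0;
}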

Decimation

Another method for increasing the resolution of the ADC is a combination of oversampling and decimation. This technique involves oversampling of the input signal so that a number of samples can be used to compute a virtual result with greater accuracy than a single real sample can provide. Consider oversampling the signal in Figure 1 by a factor of 16 (i.e., the new sample rate is 16 × FS).

Once the oversampling is complete, sum the samples taken during a given sampling interval to derive a value that represents the input during that interval. In this example, adding sixteen 12-bit values produces a 16-bit decimated result. During the green interval, for example, the sum is 119, which represents an intermediate value of 7.4375 (i.e., 119/16). At this point, the choice is whether to 1) retain one or more extra bits of effective resolution, 2) truncate the result by using only the upper 12 bits, or 3) round off the result by adding bit 3 of the 16-bit result (the 13th bit, counting from the MSB) to the upper 12 bits.

With any of these strategies, the resulting value more closely approximates the actual value of the signal during this interval than did the original data. Other choices include retention of the original sample rate of the data by using only the decimated data points, or combining the decimated values with the raw data to effectively double the sample rate, yielding a decimated interpolation.
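
A sketch of the summation and the three output options described above (the sample values are contrived so the sum is 119, as in the example):

#include <stdint.h>
#include <stdio.h>

/*
 * Oversample-and-decimate sketch: sum 16 consecutive 12-bit samples
 * into a 16-bit value, then either keep extra bits, truncate to the
 * upper 12 bits, or round before truncating.
 */
int main(void)
{
    /* 16 illustrative 12-bit samples whose sum is 119 (values near 7). */
    uint16_t raw[16] = {7, 7, 8, 7, 8, 7, 7, 8, 7, 8, 7, 7, 8, 7, 8, 8};

    uint32_t sum = 0;
    for (int i = 0; i < 16; i++)
        sum += raw[i];                                   /* 16-bit decimated result */

    uint16_t extra_bits = (uint16_t)(sum >> 2);                     /* keep 2 extra bits (14-bit value) */
    uint16_t truncated  = (uint16_t)(sum >> 4);                     /* upper 12 bits only */
    uint16_t rounded    = (uint16_t)((sum >> 4) + ((sum >> 3) & 1));/* add the 13th bit to the upper 12 */

    printf("sum=%lu  14-bit=%u  truncated=%u  rounded=%u\n",
           (unsigned long)sum, (unsigned)extra_bits,
           (unsigned)truncated, (unsigned)rounded);
    return 0;
}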

In addition to yielding a more accurate approximation of the signal value during a given sampling interval, decimation also helps to improve the signal-to-noise ratio (SNR) of the input signal. By spreading the effects of random noise over multiple samples and computing a sum, decimation allows the noise to be at least partially cancelled from the final result. Each doubling of the number of samples taken during the sampling interval halves the in-band noise power (a 3 dB improvement). With 16x oversampling, then, the noise power is reduced to 1/16th, a 12 dB improvement in SNR.

Increasing Measurement Accuracy

For low-bandwidth signals, such as power supply voltages, static pressure, and temperature measurements, oversampling and decimation can be used to increase measurement accuracy based on several criteria.

1. During the sampling interval required, the signal must not change more than 0.5 effective LSB of the end result. For example, if using this technique to increase the effective resolution of a 12-bit measurement to 16 bits, the signal of interest must not vary during the sampling interval by more than 1/32 LSB of the ADC.

2. During the sample interval, the ADC must convert the signal 4ⁿ times, where n is the number of virtual bits desired in the result.

3. There must be some noise on the input signal. Most systems have an abundance of electrical noise: it is radiated from building lights, nearby electric motors, radio stations, and the sun, and it is generated on circuit boards by switching power regulators, oscillator chips, and switching digital signals. This noise must have an amplitude greater than 1 LSB of the ADC, it must have a mean value of zero, and it must be randomly distributed. With no noise on the signal, the ADC result for each conversion will be the same, and averaging produces no effective gain in resolution.

4. The accumulated result is truncated by shifting it right n places, where n is again the number of virtual bits added, to yield the desired resolution.

Consider the example of controlling the atmospheric pressure in a reaction chamber for a chemical process. The pressure transducer on the reaction vessel is instrumented to provide an input voltage from 0.3 V to 2.2 V for the expected pressure of 100-800 kilopascals (kPa). The processing system is set up to allow a solenoid to increase or decrease the pressure in the reaction vessel at a rate of 25 pascals (Pa) per millisecond.

During a critical phase of the process, it is important to control the pressure in the reaction vessel to within 250 Pa. Therefore, the pressure should be measured to within ±60 Pa so that the digital control loop has some room to dither within the target range. Using the direct input of the analog I/O into the SAR ADC and 12-bit resolution with the 2.56 V internal reference voltage, we get a full-scale range of 0 to 800 kPa × 2.56 V / 2.2 V = 931 kPa. This yields an LSB = 931 kPa / 4,096 = 227 Pa. Therefore, the resolution of the measurements needs to be enhanced by 2 bits to meet the requirement of 60 Pa resolution, which can be accomplished by oversampling and decimating the transducer value at 4² / 0.00025 s = 64 ksps to yield one decimated sample every 0.25 millisecond. The resulting sum contains 16-bit values that we truncate by using only the upper 14 bits to yield a measurement with an effective LSB = 57 Pa.
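
The arithmetic in this example can be double-checked with a few lines of C (the values simply reproduce the numbers quoted above; nothing here is measured data):

#include <stdio.h>

/* Worked numbers from the pressure-control example. */
int main(void)
{
    double full_scale_kpa = 800.0 * 2.56 / 2.2;                /* ~931 kPa span at VREF */
    double lsb_12bit_pa   = full_scale_kpa * 1000.0 / 4096.0;  /* ~227 Pa per 12-bit LSB */
    double lsb_14bit_pa   = lsb_12bit_pa / 4.0;                /* ~57 Pa after 2 extra bits */
    double sample_rate    = 16.0 / 0.00025;                    /* 4^2 samples every 0.25 ms = 64 ksps */

    printf("full scale  : %.0f kPa\n", full_scale_kpa);
    printf("12-bit LSB  : %.0f Pa\n", lsb_12bit_pa);
    printf("14-bit LSB  : %.0f Pa\n", lsb_14bit_pa);
    printf("sample rate : %.0f sps\n", sample_rate);
    return 0;
}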

To be certain that this scheme will work, check the required criteria. First, the input will change at a maximum rate of 25 Pa per millisecond, or 6.25 Pa per 0.25-millisecond sampling interval, well below the desired 57 Pa effective resolution. Second, the ADC can easily provide oversampling at 64 ksps; in fact, it can monitor up to 8 pressure vessels simultaneously.

Next, random noise on the sensor signal is checked. With some random high-frequency noise of sufficient amplitude to dither the LSB of the conversion, the decimated sum is weighted by the statistical average of the noise plus the previously undetectable change in the sensor input, yielding an increased sensitivity of effectively two bits. Again, it is important to note that the noise amplitude on the signal must exceed the resolution of the LSB of the ADC. There is no upper limit on the noise amplitude as long as the frequency content represents broad-spectrum (white) noise.

In many cases, the sum of these noise sources is a good approximation for white noise. However, users of this technique should be aware that any patterns in the noise can lead to an offset in results. If a particular component of the input noise occurs at the ADC conversion frequency, the sample results will show a DC offset consistent with the non-zero average value of the noise when sampled by the ADC.

Figure 7

In rare cases, environmental noise in the system is not sufficient to effectively dither the input signal to the ADC. In these cases, it may be necessary to inject a noise signal onto the sensor input. This is accomplished with a white noise source and a summing amplifier (Figure 7).

To further refine this scheme, the LFSR output can be combined with a polynomial generator or a coding block (e.g., an 8b/10b encoder) to ensure a minimum number of transitions during a sampling interval. This yields a predictable DC level that can be used to bias the summing circuit and can then be cancelled out, either with an analog offset voltage or during arithmetic post-processing of the sampled data.
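
As a sketch of the digital portion of such a noise source, a 16-bit maximal-length Galois LFSR can supply the pseudo-random bit stream (the taps below are one common choice; the filtering and summing amplifier that turn this bit stream into an analog dither voltage are not shown):

#include <stdint.h>
#include <stdio.h>

/* 16-bit Galois LFSR for x^16 + x^14 + x^13 + x^11 + 1 (taps 0xB400),
 * a common maximal-length choice with a period of 65,535 states. */
static uint16_t lfsr_next(uint16_t state)
{
    uint16_t lsb = state & 1u;
    state >>= 1;
    if (lsb)
        state ^= 0xB400u;
    return state;
}

int main(void)
{
    uint16_t state = 0xACE1u;          /* any nonzero seed */
    for (int i = 0; i < 8; i++) {
        state = lfsr_next(state);
        printf("%u", (unsigned)(state & 1u));  /* pseudo-random dither bit */
    }
    printf("\n");
    return 0;
}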

Summary

For portable or battery-powered instruments and industrial controls, SAR ADCs, like those included in today’s mixed-signal PSCs, typically offer sufficient sample rates and resolution. However, in those cases when increased accuracy is required and bandwidth is not of primary concern, digital post-processing techniques, such as oversampling, averaging, and decimation, can be used to increase the effective resolution or sample rate of measurements.

 

Jim brings over 20 years of ASIC, FPGA, and mixed-signal design experience to Actel’s customer support team. Prior to joining Actel in 2001, Jim was a Field Applications Engineer at Lucent Microelectronics, where he supported the ORCA FPGA product line, and he spent the first decade of his career at Texas Instruments, where he helped to design ASICs, FPGAs, and board-level subsystems. He holds a bachelor’s in electrical engineering from the University of Minnesota Institute of Technology.
