Selecting the Right Series Voltage Reference for Absolute-Accuracy Voltage Output

Load-Regulation Error = 140μA × 0.9mV/mA = 126μV (max)
                      = 10⁶ × 126μV / 2.5V = 50ppm (max)

In general, it is best to be conservative and use the maximum output current directly for the load-regulation calculation. An exception may be if you're trying to extract the last bit of accuracy from a design and both the maximum and minimum DAC reference input resistance values are well specified. This approach results in a smaller load-regulation error because of the smaller ΔIREF.
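
As a quick cross-check of the arithmetic above, the load-regulation conversion reduces to a small sketch (the helper name below is illustrative, not part of any vendor library):

    def load_regulation_error_ppm(delta_i_ref_a, load_reg_v_per_a, v_ref):
        # Convert a reference output-current change into a ppm error at the reference.
        error_v = delta_i_ref_a * load_reg_v_per_a   # volts of reference shift
        return 1e6 * error_v / v_ref                 # referred to VREF, in ppm

    # MAX6102 numbers from the text: 140μA swing, 0.9mV/mA (max), 2.5V output
    print(load_regulation_error_ppm(140e-6, 0.9, 2.5))   # ≈ 50 ppm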

Because the power supply is specified as varying for this example, we need to consider the effects of input line regulation on the MAX6102 reference. The power-supply range is specified as 4.5V to 5.5V. From this, a conservative reference-voltage line-regulation calculation is possible:

Line-Regulation Error = (5.5V - 4.5V) × 300μV/V = 300μV (max)
                      = 10⁶ × 300μV / 2.5V = 120ppm (max)
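
The same conversion applies to line regulation; a minimal sketch using the numbers above:

    # Line regulation for the 2.5V MAX6102 over the 4.5V to 5.5V supply range
    line_reg_error_v = (5.5 - 4.5) * 300e-6   # 300μV/V (max) over a 1V window
    print(1e6 * line_reg_error_v / 2.5)       # ≈ 120 ppm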

The final voltage-reference-related error term to consider is the effect of reference output-noise voltage. Conveniently, Design A has a signal bandwidth (10Hz to 10kHz) that corresponds exactly to the MAX6102 noise-voltage bandwidth, so the wideband-noise-voltage specification of 30μVRMS is used directly (that is, bandwidth scaling is not required). Comparing this against the load- and line-regulation values (126μV and 300μV, respectively), we can see that noise is not a major contributor in this design. Using crude approximations to get numbers for the error analysis, we can assume an effective peak noise value of ~42μV (30μV × √2), which corresponds to 17ppm (10⁶ × 42μV / 2.5V) with the DAC gain of 1. We are purposely keeping the noise calculations simple here; a more detailed analysis can be performed if the relative error of the noise is larger or if the design is marginal. Remember that noise is specified as a typical value when judging design margin.
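
The crude RMS-to-peak approximation used here is easy to reproduce; a small sketch, assuming the 30μVRMS figure applies directly over the 10Hz-to-10kHz signal bandwidth:

    import math

    # Crude wideband-noise estimate: scale RMS by √2 for an effective peak value,
    # then refer it to the 2.5V reference.
    v_noise_peak = 30e-6 * math.sqrt(2)   # ≈ 42μV
    print(1e6 * v_noise_peak / 2.5)       # ≈ 17 ppm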

We will now review the relevant MAX5304 DAC specifications that impact accuracy at or near the upper end of the code range. The DAC INL value of ±4LSB (at 10 bits) is used directly. Treating it as a single-sided quantity, as with the other error terms in our analysis, we arrive at a value of 3906ppm (10⁶ × 4/1024). Similarly, the DAC gain error is specified as ±2LSB and results in an error of 1953ppm (10⁶ × 2/1024). The final MAX5304 DAC specification to be considered is gain-error tempco, which gives us a typical error of 70ppm (70°C × 1ppm/°C). The DAC output noise is not specified for the MAX5304, so it is ignored, most likely without adverse consequences in this 6-bit-accurate system.
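
Converting LSB-denominated DAC specifications into ppm follows the same pattern each time; a short sketch of the MAX5304 terms used above:

    def lsb_to_ppm(n_lsb, n_bits):
        # One LSB is a 1/2^n_bits fraction of full scale.
        return 1e6 * n_lsb / 2**n_bits

    print(lsb_to_ppm(4, 10))   # INL:         ≈ 3906 ppm
    print(lsb_to_ppm(2, 10))   # gain error:  ≈ 1953 ppm
    print(70 * 1)              # gain tempco: 70°C span × 1ppm/°C = 70 ppm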

When all of the error sources are added together, we obtain a worst-case error of 15596ppm, which just barely meets our target-error specification of 15625ppm. When confronted with this marginal situation, we can rationalize that we will probably never see an error of this magnitude, because it assumes worst-case conditions for most parameters. The root sum square (RSS) approach gives an error of 7917ppm, which is valid if the errors are uncorrelated. Some error sources may be correlated, so the truth probably lies somewhere between these two numbers. But regardless of the approach, the Design-A requirements have been met.
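
The two roll-up methods differ only in how the individual terms are combined; a sketch follows (the list holds just the terms computed in this excerpt, whereas the full Design-A budget also includes the reference terms calculated earlier in the article):

    import math

    # Worst-case sum vs. root-sum-square (RSS) of the error terms, in ppm
    terms_ppm = [50, 120, 17, 3906, 1953, 70]   # load reg, line reg, noise, INL, gain, gain tempco

    worst_case_ppm = sum(terms_ppm)
    rss_ppm = math.sqrt(sum(t**2 for t in terms_ppm))
    print(worst_case_ppm, rss_ppm)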

Design B: High Accuracy and Precision

The initial error of the A-grade MAX6325 is 0.04%, or 400ppm, which alone exceeds Design B's entire 122ppm error budget. Because this application has gain calibration, virtually all of this initial reference error can be removed, assuming the calibration equipment has sufficient (~1μV) accuracy and the trim circuit has enough precision. The tempco contribution is calculated as 70ppm (70°C × 1ppm/°C), and the typical temperature-hysteresis value of 20ppm is used directly. The long-term stability specification of 30ppm is also used rather than a more conservative number, because the instrument in this application receives an initial burn-in as well as an annual calibration.
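
A sketch of the Design-B reference drift terms as used above (assuming the same 70°C temperature span as Design A):

    tempco_ppm = 70 * 1      # 1ppm/°C (max) × 70°C span
    hysteresis_ppm = 20      # typical temperature hysteresis, used directly
    stability_ppm = 30       # long-term stability, used directly (annual recalibration)
    print(tempco_ppm, hysteresis_ppm, stability_ppm)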

Applying the same assumptions that were used in Design A, we find Design B's reference output current variation to be 140μA (coincidentally, the same number as in Design A). This leads to the following load-regulation-error calculation:

Load-Regulation Error = 140μA × 6ppm/mA ≈ 1ppm (max)
The power supply is specified as constant in this application, so the line regulation is assumed to be 0ppm. Note that it would be under 1ppm even if the power supply weren't constant, as long as it remained within the specified 4.95V to 5.05V range, because the MAX6325 line-regulation specification is 7ppm/V max.
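
Both Design-B regulation terms reduce to one-line calculations; a sketch using the specifications quoted above:

    # Load regulation: 140μA expressed in mA, times 6ppm/mA (max)
    load_reg_ppm = 0.140 * 6          # ≈ 0.8 ppm, rounded up to 1 ppm
    # Line regulation: 7ppm/V (max) over the 4.95V to 5.05V window
    line_reg_ppm = (5.05 - 4.95) * 7  # ≈ 0.7 ppm
    print(load_reg_ppm, line_reg_ppm)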

Because the bandwidth for Design B is specified as DC to 1kHz, we need to consider both the 1.5μVp-p low-frequency (1/f) noise and the 2.8μVRMS broadband noise, specified from 0.1Hz to 10Hz and from 10Hz to 1kHz, respectively. Using the same crude RMS-to-peak approximation as in Design A, and adding the two peak noise terms together, we get a total noise estimate of 2ppm at the reference output ([(0.75μV + 2.8μVRMS × √2) / 2.5V] × 10⁶). Notice that this is the same value we would obtain if we calculated it at the DAC output, because the equation would be multiplied by 1.638/1.638 to rescale everything to 4.096V. It's worth mentioning that the peak-noise-sum method used here is fairly conservative, yet the total error contribution is still relatively small. An RSS approach is probably more accurate, because the two noise sources are most likely uncorrelated, but this smaller value would be even more "in the noise" (pun intended) compared to the peak-value approach.
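
A sketch of the Design-B noise estimate above, treating half the 1/f peak-to-peak value as a peak and scaling the broadband RMS value by √2:

    import math

    lf_peak_v = 1.5e-6 / 2                      # 1.5μVp-p (0.1Hz to 10Hz) → ≈ 0.75μV peak
    bb_peak_v = 2.8e-6 * math.sqrt(2)           # 2.8μVRMS (10Hz to 1kHz)  → ≈ 4μV peak
    print(1e6 * (lf_peak_v + bb_peak_v) / 2.5)  # ≈ 2 ppm at the reference output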

All that remains for the Design-B analysis is to include the DAC error terms. The INL for the A-grade MAX5170 DAC is specified as ±1LSB, which is 61ppm (10⁶ × 1/16384), exactly half of our ±2LSB (at 14 bits) error budget of 122ppm. The DAC gain error is specified as ±8LSB worst-case, but this error is removed completely by the gain calibration mentioned earlier. The calibration works as follows: the DAC is set to a digital code where the ideal output voltage is known (for example, decimal DAC code 16380 should produce precisely 4.095V at the output). The reference voltage is then trimmed until the DAC output voltage is at this exact value, even if the reference voltage itself is not 2.500V. The MAX5170 DAC does not list a gain tempco, although the gain error is specified over the operating-temperature range. Because the gain error is calibrated out at only one temperature, Design B should be tested to ensure that the gain does not drift excessively over temperature. The final consideration is the MAX5170 DAC output noise, whose typical peak value is roughly estimated as 1ppm ([10⁶ × √(1000Hz × π/2) × 80nVRMS/√Hz × √2] / 4.096V).
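
The DAC output-noise figure above integrates the 80nV/√Hz density over an equivalent noise bandwidth of π/2 × 1kHz (the π/2 factor is the usual first-order noise-bandwidth correction); a sketch of that estimate:

    import math

    noise_density = 80e-9                      # V/√Hz, typical MAX5170 output noise
    enb_hz = 1000 * math.pi / 2                # equivalent noise bandwidth of the 1kHz roll-off
    v_rms = noise_density * math.sqrt(enb_hz)  # ≈ 3.2μVRMS
    print(1e6 * v_rms * math.sqrt(2) / 4.096)  # ≈ 1 ppm at the 4.096V full scale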

In the end, the final worst-case accuracy is 184ppm (~ ±3LSB at 14 bits), which doesn't quite meet our accuracy target of 122ppm, whereas the RSS accuracy is acceptable at 100ppm. Based on these numbers, we consider the design a success, because it has illustrated the important points and is close to the target accuracy with several conservative assumptions. In a real-world application, this design could be accepted as is, or the accuracy requirements could be loosened slightly. Alternatively, a more expensive reference could be used if this design were not acceptable.

Design C: One-Time Calibrated, Low Drift

The initial error of the A-grade MAX6162 is 0.1%, which by itself consumes the entire Design-C error budget of 977ppm. However, as in Design B, this error is (at least partially) calibrated out. Note that the uncalibrated +4.096V MAX5154 DAC full-scale output voltage exceeds the required +4.000V output range, and the DAC has 1mV resolution even though only ±4mV of accuracy is required. It is therefore possible to perform a "digital calibration" on the DAC input digital codes to remove some of the reference's initial error and the DAC's gain error.

The digital gain calibration is best demonstrated with an example: Assume the DAC output voltage needs to be at the full-scale value of 4.000V, but the ideal decimal DAC code of 4000 results in a measured output of only 3.997V due to various errors in the system. Using digital calibration, a correction value is added to the DAC code to produce the desired result. In this example, when the DAC output voltage of 4.000V is required, a corrected DAC code of 4003 is used instead of 4000. This gain calibration is scaled linearly across the DAC codes, so it has little effect at the lower codes and more impact on the upper codes.
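
A minimal sketch of this linear digital gain calibration, assuming the single-point correction measured at decimal code 4000 in the example above (the function name is illustrative only):

    CAL_CODE = 4000        # code used during calibration
    CAL_CORRECTION = 3     # codes needed to move the measured 3.997V up to 4.000V

    def corrected_code(ideal_code):
        # Scale the full-scale correction linearly so low codes are barely affected.
        return ideal_code + round(CAL_CORRECTION * ideal_code / CAL_CODE)

    print(corrected_code(4000))   # 4003 → drives the output to ≈ 4.000V
    print(corrected_code(1000))   # 1001 → the correction shrinks at lower codes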

The digital gain calibration accuracy is limited by the 12-bit resolution of the DAC, so the best we can hope for is ~±1mV, or 244ppm (10⁶ × 1mV / 4.096V), of error after the calibration has been applied. Note that the accuracy is calculated on a 4.096V scale in this example to maintain consistency, but it could be calculated relative to the +4.000V output range if required by the application, and the error would be slightly higher.
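
The residual error after this calibration is bounded by one 12-bit LSB; a quick sketch of the two ways of expressing it:

    lsb_v = 4.096 / 4096        # 1mV resolution of the MAX5154 on a 4.096V scale
    print(1e6 * lsb_v / 4.096)  # ≈ 244 ppm on the 4.096V scale
    print(1e6 * lsb_v / 4.000)  # slightly higher (≈ 250 ppm) on the 4.000V output range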

If the required output range in this example were 4.096V, other techniques could be used to ensure that the uncalibrated DAC output voltage always sits above 4.096V, so that the digital gain-calibration scheme described in this example could still be employed. Such options include the following:
