Electrical Test Fundamentals
Good measurement practice and the collection of high quality data can mean a variety of things to different people. However, most practitioners would agree that the ability to create a test setup suitable for the intended measurement outcome is fundamental. Frequently, this involves a test scenario where electrical characteristics of a device or material are being determined. The test equipment can range from a very simple setup, such as a benchtop digital multimeter (DMM) used to measure resistance values, to more complex systems that involve fixturing, special cabling, etc. When determining the required performance of the test system, important criteria include measurement accuracy, sensitivity, and speed. One must recognize that these criteria involve not only the performance of the measurement instrument, but also the limitations imposed by the effects of cabling, connectors, the test fixture, and even the environment under which tests are carried out.
When considering a specific measurement instrument for an application, the specification or data sheet is the first place to look for information on its performance and how that affects test results. Still, data sheets are not always easy to interpret because they typically use specialized terminology. Additionally, as alluded to above, instrument specifications provide information on only one part of the test system, and shouldn't be the only consideration in determining if a piece of test equipment will meet application requirements. Characteristics of the material or device under test may also have a significant impact on measurement quality.
Four-Step Measurement Process. The process of designing and characterizing the performance of any test setup can be broken down into four essential steps. Following this process will greatly increase the chances of building a system that meets requirements and eliminates unpleasant and expensive surprises.
Step 1: The first step, before specifying a piece of equipment, is to define the system's required measurement performance. This is an essential prerequisite to designing, building, verifying, and ultimately using a test system that will meet a user's requirements. Defining the required level of performance involves understanding terminology like resolution, accuracy, repeatability, rise time, sensitivity, and many others.
Resolution: This is the smallest portion of the signal being measured that can actually be observed. It is determined by the analog-to-digital (A/D) converter in the measurement device. There are several ways to characterize resolution: bits, digits, counts, etc. The more bits or digits there are, the better the device's resolution. The resolution of most benchtop instruments is specified in digits, such as a 6½-digit DMM. Be aware that the ½-digit terminology means that the most significant digit has less than a full range of 0 to 9. As a general rule, ½ digit implies the most significant digit can have the values 0, 1, or 2. In contrast, data acquisition boards are often specified by the number of bits their A/D converters have. Here's how these different resolution specs compare:
12-bit A/D = 4,096 counts ≈ 3½ digits
16-bit A/D = 65,536 counts ≈ 4½ digits
18-bit A/D = 262,144 counts ≈ 5½ digits
22-bit A/D = 4,194,304 counts ≈ 6½ digits
25-bit A/D = 33,554,432 counts ≈ 7½ digits
28-bit A/D = 268,435,456 counts ≈ 8½ digits
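The relationship in this table is easy to reproduce. Below is a minimal Python sketch, assuming the common convention that an n½-digit display spans roughly 2 × 10^n counts (e.g., a 3½-digit meter reads up to ±1,999):

```python
import math

def adc_resolution(bits: int) -> tuple[int, float]:
    """Convert A/D converter bits to total counts and an approximate
    half-digit rating (n.5 digits ~ 2 * 10**n counts)."""
    counts = 2 ** bits
    digits = math.floor(math.log10(counts / 2)) + 0.5
    return counts, digits

for bits in (12, 16, 18, 22, 25, 28):
    counts, digits = adc_resolution(bits)
    print(f"{bits}-bit A/D = {counts:,} counts ~ {digits} digits")
```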
Sensitivity: Although the terms sensitivity and accuracy are often considered synonymous, they do not mean the same thing. Sensitivity refers to the smallest change in the measurement that can be detected and is specified in units of the measured value, such as volts, ohms, amps, or degrees. The sensitivity of an instrument is equal to its lowest range divided by its resolution. Therefore, the sensitivity of a 16-bit A/D on a 2V scale is 2 divided by 65,536, or approximately 30 microvolts. A variety of instruments are optimized for making highly sensitive measurements, including nanovoltmeters, picoammeters, electrometers, and high-resolution DMMs. Here are some examples of how to calculate the sensitivity for A/Ds of varying levels of resolution:
3½ digits (2,000 counts) on 2V range = 1mV
4½ digits (20,000 counts) on 2V range = 100µV
16-bit (65,536 counts) A/D on 2V range = 30µV
8½ digits on 200mV range = 1nV
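A short sketch of the same arithmetic, with the count values implied by the examples above (the digit-based entries again use the ~2 × 10^n counts convention):

```python
def sensitivity(lowest_range_v: float, counts: int) -> float:
    """Sensitivity = lowest measurement range divided by resolution in counts."""
    return lowest_range_v / counts

print(sensitivity(2.0, 2_000))        # 3.5 digits on 2V range   -> 1e-03 (1 mV)
print(sensitivity(2.0, 20_000))       # 4.5 digits on 2V range   -> 1e-04 (100 uV)
print(sensitivity(2.0, 65_536))       # 16-bit A/D on 2V range   -> ~3e-05 (~30 uV)
print(sensitivity(0.2, 200_000_000))  # 8.5 digits on 200mV range -> 1e-09 (1 nV)
```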
Accuracy: There are two types of accuracy to consider: absolute accuracy and relative accuracy. Absolute accuracy indicates the closeness of agreement between the result of a measurement and its true value, as traceable to an accepted national or international standard value. Measurement devices are typically calibrated by comparing them to a known standard value. Most countries have their own standards institute where national standards are kept. Relative accuracy is the extent to which a measurement accurately reflects the relationship between an unknown and a locally established reference value. In the calibration of an instrument to either type of standard, an important consideration is calibration drift. The drift of an instrument refers to its ability to retain calibration over time for a given range of temperatures.
The implications of these terms are demonstrated by the challenge of ensuring the absolute accuracy of a temperature measurement of 100.00°C to ±0.01°C, versus measuring a change in temperature of 0.01°C. Measuring the change is far easier than ensuring absolute accuracy to this tolerance, and often, that is all a user requires.
Repeatability: This is the ability to measure the same signal input and get the same value over and over again. Ideally, the repeatability of measurements should be better than the accuracy. If repeatability is high, and the sources of error are known and quantified, then high-resolution, repeatable measurements are acceptable for many applications. Such measurements may have high relative accuracy with low absolute accuracy.
Step 2: This step gets into the actual process of designing the measurement system, including selection of equipment, fixtures, etc. As mentioned previously, interpreting a data sheet to determine which specifications are relevant to a system can be daunting. The following explanations should help.
Accuracy: Instrument manufacturers do not have a uniform method of specifying accuracy. In the case of Keithley Instruments, accuracy is normally specified in two parts: (1) as a proportion of the value being measured, and (2) as a proportion of the scale on which the measurement is taken. These two elements of accuracy (i.e., measurement uncertainty) can be expressed as ±(gain error + offset error), as ±(% reading + % range), or as ±(ppm of reading + ppm of range). Accuracy specs for high-quality measurement devices can be given for 24 hours, 90 days, one year, two years, or even five years from the time of last calibration. Basic accuracy specs often assume usage within 90 days of calibration.
Temperature coefficient: Accuracy specs are normally guaranteed within a specific temperature range, such as 23°C ±5°C. Within that temperature span, for a given instrument measurement range, the accuracy specification might be given, for example, as ±(50ppm of reading + 35ppm of range). If carrying out measurements where temperatures are outside this range, it's necessary to add temperature-related uncertainty. For the instrument and measurement example just given, the additional temperature-related error might be stated as ±2ppm over 0-18°C and ±6ppm over 28-50°C. Determining measurement uncertainty becomes especially difficult when ambient temperatures are unstable or fall outside the manufacturer's stated temperature ranges.
Instrumentation error: Some measurement uncertainty is a function of instrument design. For a given signal level and measurement range, a 6½-digit DMM with a 22-bit A/D converter will be inherently more accurate than a 3½-digit DMM or a 12-bit A/D data acquisition board. Care must be taken even when comparing, for example, two 6½-digit DMMs from different manufacturers. A manufacturer's abbreviated specs frequently provide only the gain error, but offset error may be the most significant factor when measuring values at the low end of a measurement range. Remember,
Accuracy = ±(% reading + % range) = ±(gain error + offset error).
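For illustration, here is a small sketch of that calculation in Python; the ±(50ppm + 35ppm) spec and the 10V range are hypothetical values, not any particular instrument's figures:

```python
def uncertainty(reading: float, rng: float, gain_ppm: float, offset_ppm: float) -> float:
    """Worst-case uncertainty for a spec of +/-(ppm of reading + ppm of range)."""
    return reading * gain_ppm * 1e-6 + rng * offset_ppm * 1e-6

# Hypothetical spec: +/-(50ppm of reading + 35ppm of range) on a 10V range.
print(uncertainty(5.0, 10.0, 50, 35))  # 6.0e-04 -> +/-600 uV at a 5V reading
print(uncertainty(0.5, 10.0, 50, 35))  # 3.75e-04 -> +/-375 uV at a 0.5V reading
```

Note how the offset (range) term dominates at the low end of the range: at a 0.5V reading, the gain term contributes only ±25µV of the ±375µV total, which is why abbreviated specs that omit offset error can be misleading.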
Noise: Instrument sensitivity (the smallest observable change that can be detected) may be limited either by noise or by the instrument's digital resolution. The level of instrument noise is often specified as a peak-to-peak or RMS value, sometimes within a certain bandwidth. Make sure the sensitivity figures on the data sheet match your application requirements, but also consider the noise figures, as these especially affect low level measurements. Accurate measurements become increasingly difficult as changes in the signal level approach the instrument's noise level.
Measurement Settling Time: For a given level of accuracy, settling time affects test system speed or throughput. Obviously, automated test equipment with PC-controlled instruments enables quicker measurements than taking them manually, which can be especially important in a manufacturing environment. Nevertheless, the instrument reading, which goes from one level (before the signal is measured) to another (the desired measurement value), must be allowed to settle sufficiently toward its final value. Put another way, there is always a tradeoff between the speed at which measurements are made and the accuracy of the measurements.
The rise time of an analog instrument (or analog output) is generally defined as the time necessary for the output to rise from 10% to 90% of the final value when the input signal rises instantaneously from zero to some fixed value. Rise time affects the accuracy of the measurement when it's of the same order of magnitude as the period of the measurement. If the length of time allowed before taking the reading is equal to the rise time, an error of approximately 10% will result, because the signal will have reached only 90% of its final value. To reduce the error, more time must be allowed. To reduce the error to 1%, about two rise times must be allowed; reducing the error to 0.1% would require roughly three rise times (or nearly seven time constants).
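These figures follow from the single-pole (RC) response assumed above, where the unsettled fraction decays as e^(-t/τ) and one 10%-90% rise time equals ln(9) ≈ 2.2 time constants. A quick sketch:

```python
import math

RISE_TIME_IN_TAU = math.log(9)  # a 10%-90% rise time is ~2.2 time constants

def settling_error(rise_times_waited: float) -> float:
    """Fraction of the final value still unsettled, single-pole response."""
    return math.exp(-rise_times_waited * RISE_TIME_IN_TAU)

for n in (1, 2, 3):
    print(f"after {n} rise time(s): ~{settling_error(n):.1%} error")
# after 1 -> ~11.1%, after 2 -> ~1.2%, after 3 -> ~0.1%
```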
Step 3: This step involves the actual building of the test system and verifying its performance. An important part of this process is adopting appropriate measurement techniques that can improve results.
At this point the test system builder has picked appropriate equipment, cables, and fixtures, and has determined that the equipment's specifications can meet the measurement requirements. Now it's time to assemble the test system and verify its performance. It is essential to first check that each measurement instrument has been calibrated and is still within its specified calibration period, which is usually one year.
Pretest checks: If the instrument will be used for making voltage measurements, place a short across the inputs of the meter to check for offset error. This can be compared to the specifications from the data sheet, and usually can be nulled out by using the instrument's ZERO or REL function. Similarly, if the instrument will be used for current measurements, check to see if there is an offset current reading on the meter with an open circuit at the input. Again, this can be compared to specifications, and there may be provisions for zeroing the meter. Next, add the system cabling and repeat the pretest checks. Then do the same after adding the test fixture. Finally, add the device under test (DUT), repeating the pretest checks. This stepwise procedure of assembling and checking the test system can help identify the source of offset errors and other problems within the system. (Pinpointing and correcting sources of errors are covered in more detail later.)
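As a rough illustration, the shorted-input voltage check might be automated along these lines. This is a sketch only: the VISA address is hypothetical, the commands are generic SCPI, and both the command set and the offset spec should be taken from your instrument's own manual and data sheet.

```python
import pyvisa

rm = pyvisa.ResourceManager()
dmm = rm.open_resource("GPIB0::16::INSTR")  # hypothetical instrument address

# With a short across the inputs, take a DC voltage reading (generic SCPI).
offset = float(dmm.query("MEAS:VOLT:DC?"))
print(f"Input-shorted offset: {offset * 1e6:.2f} uV")

OFFSET_SPEC = 5e-6  # hypothetical 5 uV offset spec from the data sheet
if abs(offset) > OFFSET_SPEC:
    print("Offset exceeds spec: check connections, thermal EMFs, calibration.")
# Otherwise the offset can typically be nulled with the ZERO/REL function.
```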
Measurement settling time: Make sure there is sufficient delay between application of the signal and taking a measurement. The goal is to achieve an acceptable tradeoff between measurement accuracy and test system throughput. Overemphasis on speed can lead to insufficient delay time, which is a common source of error in test systems. This is especially evident when running the test at high speed produces a different result than when performing the test manually, or in a step-by-step fashion.
Besides an instrument's settings and inherent design, cabling and other sources of reactance in the test circuit can affect measurement settling time. Generally, capacitance is most likely to be the source of the problem. Large systems with lots of cabling (i.e., high cable capacitance), and/or those measuring high impedances, may require relatively long delay times due to a lengthy system time constant (τ = RC). To address this problem, many instruments have a programmable trigger delay. In a manual system, a delay of 0.25 to 0.5 seconds will seem instantaneous. However, in automated test equipment, steps are typically executed in a millisecond or less. Even the simplest of systems may require delays of five to ten milliseconds after a change in stimulus in order to get accurate results.
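As a rough illustration of where such delays come from, consider the time constant formed by a high-impedance source and ordinary cable capacitance (all values below are hypothetical):

```python
source_resistance = 1e6       # 1 Mohm: a high-impedance DUT
cable_capacitance = 300e-12   # ~3 m of coaxial cable at ~100 pF/m

tau = source_resistance * cable_capacitance   # system time constant, tau = RC
delay = 7 * tau                               # ~7 time constants settles to ~0.1%

print(f"tau = {tau * 1e6:.0f} us; delay for ~0.1% settling = {delay * 1e3:.1f} ms")
# tau = 300 us; delay for ~0.1% settling = 2.1 ms
```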
Minimizing the Effects of Error Sources: Guarding the test leads or cabling is one technique for dealing with capacitance issues; it reduces leakage errors and decreases response time. Guarding consists of surrounding the lead of a high impedance signal with a conductor driven by a low impedance source. The guard voltage is kept at or near the potential of the signal voltage [1]. Some instruments have built-in guard circuits.
Test lead resistance is a common source of error in 2-wire low resistance measurements. This can be minimized by using a 4-wire (Kelvin) test lead setup [2]. Instruments with this type of setup provide one pair of leads that supply a known test current to the unknown resistance, and a second pair of leads to measure the voltage across the resistance. Since very little current flows in the voltage measurement leads, the resistance of those leads has minimal effect on the measurement. The unknown resistance is then determined from Ohm's Law. If, however, the unknown resistance is very high, approaching the input resistance of the voltmeter circuit, then an electrometer or specialized meter with extremely high input resistance may be required.
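The sketch below illustrates the principle with hypothetical lead and DUT resistances, showing the error a 2-wire measurement incurs and how the 4-wire arrangement avoids it:

```python
test_current = 1e-3    # 1 mA forced through the current leads
lead_resistance = 0.5  # ohms per lead (hypothetical), 1 ohm round trip
dut_resistance = 10.0  # the unknown resistance

# 2-wire: the meter measures the DUT plus both current-carrying leads.
r_2wire = test_current * (dut_resistance + 2 * lead_resistance) / test_current

# 4-wire: the sense leads carry negligible current, so only the drop across
# the DUT is measured; R is recovered from Ohm's Law, R = V / I.
r_4wire = (test_current * dut_resistance) / test_current

print(f"2-wire: {r_2wire:.2f} ohm (10% high), 4-wire: {r_4wire:.2f} ohm")
```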
Thermoelectric EMFs are likely to be present in any measurement system. These create voltage offsets, which result from connections between dissimilar metals that act as a thermocouple. The magnitude of the resulting offset voltage error depends on the Seebeck coefficient of the two metals and the ambient temperature. For example, the connection between a clean copper lead and a copper test fixture that has become oxidized (i.e., a Cu-CuO connection) has a Seebeck coefficient of 1mV/°C. Therefore, at a room temperature of 25°C, the thermoelectric EMF generated is 25mV, which could be significant in comparison to the value to be measured. For this reason, it is highly desirable to use only clean Cu-Cu connections in a test circuit, which have a Seebeck coefficient of less than 0.2µV/°C. For dissimilar metal connections that can't be avoided, some instruments provide an offset-compensated ohms measurement technique that minimizes the error from thermal EMFs.
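Following the example's own arithmetic (offset voltage ≈ Seebeck coefficient × temperature), a quick sketch:

```python
def thermoelectric_emf(seebeck_v_per_degc: float, temp_degc: float) -> float:
    """Offset voltage from a dissimilar-metal junction, per the example above."""
    return seebeck_v_per_degc * temp_degc

print(thermoelectric_emf(1e-3, 25.0))    # Cu-CuO junction at 25 C -> 0.025 V (25 mV)
print(thermoelectric_emf(0.2e-6, 25.0))  # clean Cu-Cu at 25 C     -> 5e-06 V (5 uV)
```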
RFI/EMI: Radio frequency interference (RFI) and electromagnetic interference (EMI) can introduce AC noise and DC offsets into a measurement. AC noise can act directly to obscure low level AC measurements. DC offset errors can result from the rectification of RFI/EMI in the test circuit or instrument. The most common source of external noise is 50Hz or 60Hz power line pick-up, depending on where in the world the measurements are being made. Picking up millivolts of noise is not uncommon, especially when measurements are made near fluorescent lights.
The noise components superimposed on a DC signal being measured may result in highly inaccurate and fluctuating measurements. To avoid this, many modern instruments allow users to set the integration period of the A/D converter in relation to the number of power line cycles (NPLC). For example, a setting of 1 NPLC will result in the measurement being integrated for 20 milliseconds (for 50Hz power) or 16.67 milliseconds (for 60Hz). A 1 NPLC integration period will reject noise induced by the power line. While the performance improvement from this feature can be dramatic, it also limits the system measurement speed to a certain degree.
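The NPLC arithmetic is simply integration time = NPLC / line frequency, as this short sketch shows:

```python
def integration_time_s(nplc: float, line_freq_hz: float) -> float:
    """A/D integration period for a given number of power line cycles."""
    return nplc / line_freq_hz

print(f"{integration_time_s(1, 50) * 1e3:.2f} ms")  # 1 NPLC at 50 Hz -> 20.00 ms
print(f"{integration_time_s(1, 60) * 1e3:.2f} ms")  # 1 NPLC at 60 Hz -> 16.67 ms
```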
Step 4: Once the test system has been built using appropriate instruments and measurement techniques, and its performance has been verified in Step 3, it can produce reliable measurement results. However, it's important to recheck the performance of any test setup on a regular basis. Because of component and temperature drifts, the accuracy of an instrument will vary over time, and it should be recalibrated periodically.
References. The following references provide additional information on guarding, 4-wire measurements, and other techniques to minimize sources of error in electrical measurements:
1. "Low Level Measurements Handbook", 6th Edition, 2004, pp2-5 to 2-10; available online at http://www.keithley.com/knowledgecenter/knowledgecenter_pdf/LowLevMsHandbk_1.pdf.
2. MacLachlan, Derek, "Getting Back to the Basics of Electrical Measurements", Keithley Instruments White Paper, available online at http://www.keithley.com/data?asset=54359.