Measurement Calibration
Measurement calibration is an accuracy enhancement procedure that effectively removes the system
errors that cause uncertainty when measuring a test device. During calibration, the analyzer measures known
standard devices and uses the results of these measurements to characterize the system.
This section discusses the following topics:
• definition of accuracy enhancement
• causes of measurement errors
• characterization of microwave systematic errors
• effectiveness of accuracy enhancement
• ensuring a valid calibration
• modifying calibration kits
• TRL*/LRM* calibration
What Is Accuracy Enhancement?
A perfect measurement system would have infinite dynamic range, isolation, and directivity
characteristics, no impedance mismatches in any part of the test setup, and flat frequency response. In any
high frequency measurement there are measurement errors associated with the system that contribute
uncertainty to the results. Parts of the measurement setup such as interconnecting cables and
signal-separation devices (as well as the analyzer itself) all introduce variations in magnitude and phase
that can mask the actual performance of the test device. Vector accuracy enhancement, also known as
measurement calibration or error-correction, provides the means to simulate a nearly perfect measurement
system.
For example, crosstalk due to the channel isolation characteristics of the analyzer can contribute an error
equal to the transmission signal of a high-loss test device. For reflection measurements, the primary
limitation of dynamic range is the directivity of the test setup. The measurement system cannot distinguish
the true value of the signal reflected by the test device from the signal arriving at the receiver input due to
leakage in the system. For both transmission and reflection measurements, impedance mismatches within
the test setup cause measurement uncertainties that appear as ripples superimposed on the measured
data.
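To see numerically how directivity limits reflection dynamic range, the short sketch below (an illustration added here, not part of the analyzer firmware) sums a leakage signal with the true reflection of a test device. The 30 dB directivity and 25 dB return-loss figures are arbitrary example values.

```python
import numpy as np

# Assumed example values (not from the manual): directivity of the test
# setup and the return loss of the device being measured.
directivity_dB = 30.0
return_loss_dB = 25.0

leakage = 10 ** (-directivity_dB / 20)      # linear magnitude of leakage signal
reflection = 10 ** (-return_loss_dB / 20)   # linear magnitude of true reflection

# As the relative phase of the two signals varies with frequency, the measured
# magnitude ripples between the sum and the difference of the two vectors.
worst_high = 20 * np.log10(reflection + leakage)
worst_low = 20 * np.log10(abs(reflection - leakage))
print(f"True return loss: {-return_loss_dB:.1f} dB")
print(f"Measured value can ripple between {worst_low:.1f} dB and {worst_high:.1f} dB")
```

With these example numbers the indicated return loss can fall anywhere from roughly −32 dB to −21 dB even though the true value is −25 dB, which is the ripple behavior described above.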
Error-correction simulates an improved analyzer system. During the measurement calibration process, the
analyzer measures the magnitude and phase responses of known standard devices, and compares the
measurement with actual device data. The analyzer uses the results to characterize the system and
effectively remove the system errors from the measurement data of a test device, using vector math
capabilities internal to the network analyzer. When you use a measurement calibration, the dynamic range
and accuracy of the measurement are limited only by system noise and stability, connector repeatability,
and the accuracy to which the characteristics of the calibration standards are known.
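As an illustration of the vector math involved, the following sketch (a simplified model, not the analyzer's internal algorithm) solves the standard one-port error model (directivity, source match, and reflection tracking) from three measured standards and then corrects a raw device measurement. The ideal standard values and the raw readings shown are hypothetical, for a single frequency point only.

```python
import numpy as np

# Simplified one-port error model at a single frequency:
#   measured = Ed + Ert * actual / (1 - Es * actual)
# where Ed = directivity, Es = source match, Ert = reflection tracking.

def solve_error_terms(gamma_actual, gamma_measured):
    """Solve Ed, Es, Ert from three (known actual, raw measured) pairs."""
    a = np.asarray(gamma_actual, dtype=complex)
    m = np.asarray(gamma_measured, dtype=complex)
    # Rearranged model, linear in the unknowns: m = Ed + Es*(a*m) + (Ert - Ed*Es)*a
    A = np.column_stack([np.ones(3), a * m, a])
    e1, e2, e3 = np.linalg.solve(A, m)
    Ed, Es = e1, e2
    Ert = e3 + Ed * Es
    return Ed, Es, Ert

def correct(gamma_measured, Ed, Es, Ert):
    """Remove the error terms from a raw reflection measurement."""
    x = gamma_measured - Ed
    return x / (Ert + Es * x)

# Hypothetical example: ideal standards (short = -1, open = +1, load = 0)
# and fictitious raw readings, for illustration only.
actual = [-1.0, 1.0, 0.0]
measured = [-0.93 + 0.05j, 1.02 - 0.03j, 0.04 + 0.02j]
Ed, Es, Ert = solve_error_terms(actual, measured)

raw_dut = 0.30 + 0.12j                  # raw reflection of a test device
print(correct(raw_dut, Ed, Es, Ert))    # error-corrected reflection coefficient
```

A full measurement calibration also accounts for the finite accuracy of the standards themselves (for example, open-circuit fringing capacitance), which is why the corrected result is ultimately limited by how well the calibration standards are characterized.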
What Causes Measurement Errors?
Network analysis measurement errors can be separated into systematic, random, and drift errors.
Correctable systematic errors are the repeatable errors that the system can measure. These are errors due