Evaluating Battery Test Equipment – Part 5/12 – Accuracy

Accuracy represents the trueness of a test equipment measurement: how close the average of repeated samples is to the true value.  Specifying this metric requires comparison against a known reference, such as a high-performance meter.
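This definition can be sketched in a few lines of code. The readings below are hypothetical values (not from the plot in this article), assumed for illustration: a tester sampling a known -200 A source, with accuracy taken as the offset of the sample mean from the true value.

```python
import statistics

# Hypothetical readings (amps) from a tester measuring a known -200 A source.
true_value = -200.0
readings = [-199.8, -200.1, -199.9, -200.2, -199.7, -200.0, -199.9, -200.1]

# Accuracy (trueness): how close the average of the samples is to the true value.
mean_reading = statistics.mean(readings)
accuracy_error = mean_reading - true_value

print(f"mean reading: {mean_reading:.4f} A")
print(f"accuracy error (offset from true value): {accuracy_error:+.4f} A")
```

With these sample numbers the average lands 0.0375 A above the true value; that offset, not the scatter of individual samples, is what an accuracy specification describes.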

Plot of -200A discharge pulse on two different testers.

Accuracy should not be confused with noise. The distinction between “accuracy” and “precision” comes down to measurement noise: as stated above, accuracy is how close the average measurement is to the true value, largely ignoring noise, while precision specifies how much noise will be present.  An instrument with good accuracy can therefore still produce a noisy measurement if it has poor precision, and both can be affected by temperature fluctuations.
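The accuracy/precision distinction above can be illustrated numerically. The two sample sets below are invented for this sketch: instrument A is accurate but noisy, instrument B is precise but carries a constant offset.

```python
import statistics

true_value = -200.0  # amps, hypothetical known source

# Instrument A: accurate (average is on target) but imprecise (noisy).
inst_a = [-199.5, -200.6, -199.3, -200.4, -200.2, -200.0]
# Instrument B: precise (low noise) but inaccurate (constant ~1 A offset).
inst_b = [-199.01, -199.00, -198.99, -199.01, -199.00, -198.99]

for name, samples in (("A", inst_a), ("B", inst_b)):
    offset = statistics.mean(samples) - true_value   # accuracy error
    noise = statistics.stdev(samples)                # precision (spread)
    print(f"Instrument {name}: offset {offset:+.3f} A, noise (1-sigma) {noise:.3f} A")
```

Instrument A averages exactly on the true value yet scatters by about half an amp per sample; instrument B repeats to within 10 mA but every reading is roughly 1 A off. An accuracy specification alone would favor A and say nothing about its noise.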

Translating this into test results…
Note the plot and description above.  Many types of battery test equipment carry similar accuracy specifications, and while accuracy is important, it should be evaluated in combination with the instrument’s resolution and precision.  The accuracy metric alone can hide the true performance difference between instruments.

Calibration is another related factor to question during evaluation.  Battery test equipment for which a simple hand-held meter is sufficient during calibration will not produce results any better than that meter.  Arbin calibration requires a 6.5-digit or better digital multimeter, and some equipment requires 8.5 digits or better.  NIST traceability is maintained for the meters used in all factory calibrations.
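A quick sketch of why the reference meter's digit count matters: a calibration can be no finer than the smallest step the meter can resolve. The voltage and step sizes below are assumptions for illustration only (roughly 1 mV steps for a hand-held meter and 10 µV steps for a 6.5-digit DMM on a ~10 V range; actual resolution depends on the meter and range).

```python
def quantize(value, step):
    """Simulate a meter reading: round to the meter's smallest resolvable step."""
    return round(value / step) * step

true_v = 7.1234567  # hypothetical reference voltage

# Nominal resolutions, assumed for illustration (~10 V range):
handheld = quantize(true_v, 1e-3)  # hand-held meter, ~1 mV steps
bench = quantize(true_v, 1e-5)     # 6.5-digit DMM, ~10 uV steps

print(f"hand-held reads {handheld:.3f} V  (error {handheld - true_v:+.7f} V)")
print(f"6.5-digit reads {bench:.5f} V (error {bench - true_v:+.7f} V)")
```

The hand-held meter's reading is off by nearly half a millivolt purely from its display resolution, so any tester calibrated against it inherits at least that much error; the 6.5-digit meter's quantization error is two orders of magnitude smaller.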

Inadequate meter for calibration
A 6.5-digit (or better) meter should be used for calibration.

(1) Resolution | (2) Precision | (3) Temperature | (4) Robustness | (5) Accuracy | (6) Software



Arbin Team
