Precision determines the level of noise/fluctuation present in the test equipment’s measurement, and it also indicates the consistency and repeatability of the instrument’s measurement circuitry. A measurement with very little noise/fluctuation is considered precise. A precision specification of “100 ppm” indicates that the measurement will vary by no more than 0.01% (100/1,000,000).
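As a quick numeric illustration, the sketch below converts a ppm precision figure into an absolute fluctuation bound. The 100 ppm spec and 5 V full-scale range are hypothetical example values, not any particular instrument’s datasheet numbers.

```python
# Illustrative sketch: converting a ppm precision spec into an absolute
# fluctuation bound. The 100 ppm figure and 5 V range are example values.

def ppm_to_fraction(ppm: float) -> float:
    """Convert a parts-per-million figure to a dimensionless fraction."""
    return ppm / 1_000_000

precision_ppm = 100      # example spec: 100 ppm
full_scale_volts = 5.0   # example full-scale voltage range

fraction = ppm_to_fraction(precision_ppm)
print(f"{precision_ppm} ppm = {fraction:.6f} = {fraction * 100:.2f}%")
# -> 100 ppm = 0.000100 = 0.01%

max_fluctuation = fraction * full_scale_volts
print(f"Worst-case fluctuation on a {full_scale_volts} V range: "
      f"{max_fluctuation * 1e3:.1f} mV")
# -> 0.5 mV
```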
Many instruments do not specify their precision at all, which is a warning sign, or they report it improperly, through multiple averaged calculations or very low-frequency data logging that hides the noise. Another common tactic is to report the precision of battery coulombic-efficiency calculations instead of a hardware specification. These practices are misleading and reflect poorly on the company’s reputation.
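To see why averaging hides noise, here is a minimal simulation using assumed example values (a 3.700 V reading with 1 mV of instrument noise): averaging every 100 samples before logging shrinks the apparent scatter by roughly √100 = 10×, making the hardware look ten times quieter than it actually is.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate a noisy 3.700 V reading: 1 mV (1-sigma) of instrument noise.
true_volts, noise_sigma = 3.700, 0.001
raw = true_volts + rng.normal(0.0, noise_sigma, size=10_000)

print(f"raw std dev:      {raw.std():.6f} V")      # ~0.001 V

# Averaging every 100 samples before logging shrinks the apparent
# scatter by ~sqrt(100) = 10x, hiding the true hardware noise.
averaged = raw.reshape(-1, 100).mean(axis=1)
print(f"averaged std dev: {averaged.std():.6f} V")  # ~0.0001 V
```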
The test equipment’s resolution, quality of materials, and thermal management all play a significant role in providing superior precision. Precision should be specified directly for the voltage, current, time, and sometimes temperature measurements of the test equipment.
Translating this into test results…
Arbin defines the measurement precision for voltage, current, and time for each class of test equipment; these are the three parameters measured by the equipment that define its performance. Quartz timing crystals are used for measurement and timestamping, representing the state-of-the-art method for such measurements.
The benefits of high-precision measurements in battery research have been widely discussed in academia for almost a decade. Coulombic efficiency and differential capacity are two metrics that have been shown to require extremely high-precision test equipment before meaningful, confident conclusions can be drawn. These analytical techniques can miss or obscure signatures in the data if experiments are conducted with low-precision test equipment, which generates noisy and inconsistent (unrepeatable) results.
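A rough sketch of why this matters for coulombic efficiency, using entirely hypothetical numbers (a 1 Ah capacity, a true CE of 99.95%, and 100 ppm vs. 10,000 ppm relative measurement noise): the cycle-to-cycle CE scatter from a low-precision instrument swamps the sub-0.1% inefficiency signal a researcher is trying to resolve.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical cell: a true coulombic efficiency (CE) of 99.95%.
q_charge_true = 1.000    # Ah, example capacity
ce_true = 0.9995
q_discharge_true = q_charge_true * ce_true

def measured_ce(rel_noise: float, n_cycles: int = 200) -> np.ndarray:
    """Simulate CE measurements given a relative (1-sigma) charge-
    measurement noise, e.g. 1e-4 for a ~100 ppm instrument."""
    qc = q_charge_true * (1 + rng.normal(0, rel_noise, n_cycles))
    qd = q_discharge_true * (1 + rng.normal(0, rel_noise, n_cycles))
    return qd / qc

for ppm in (100, 10_000):  # high-precision vs. low-precision hardware
    ce = measured_ce(ppm / 1e6)
    print(f"{ppm:>6} ppm: CE scatter (1-sigma) = {ce.std():.6f}")
# With 10,000 ppm (~1%) noise the scatter buries the 0.05% CE signal;
# at 100 ppm the signal remains resolvable.
```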