After the uncertainty in a measurement has been estimated and the measurement has been made, one of two common methods typically follows: i) the measurement is repeated and the individual measured values are combined into an average final value, or ii) the measured value is combined mathematically with other measured values, either through combining equations or through a curve fit and graphical analysis, to find a final measured value.
The final measured value is then typically compared to an accepted or otherwise known value in order to evaluate the relevance of the experimental results. A discussion of why the final measured value differs from the accepted or known value is called error analysis.
There are two common comparison methods, called percent error and percent difference.
The percent error is used when comparing the final measured value to a well-accepted or well-known value. The percent error is defined as:

percent error = |measured value - accepted value| / |accepted value| × 100%
The percent difference is used when comparing a measured value to another measured value. The percent difference is defined as:

percent difference = |A - B| / [(A + B)/2] × 100%

where A and B are the two measured values, so the comparison is made relative to their average.
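These two comparisons are easy to compute directly. The following is a minimal Python sketch using the standard textbook definitions; the sample values are hypothetical.

```python
def percent_error(measured, accepted):
    """Compare a measured value to a well-accepted value."""
    return abs(measured - accepted) / abs(accepted) * 100

def percent_difference(value_a, value_b):
    """Compare two measured values, using their average as the reference."""
    return abs(value_a - value_b) / ((value_a + value_b) / 2) * 100

# Hypothetical example: measured g = 9.7 m/s^2 vs. accepted 9.8 m/s^2
print(round(percent_error(9.7, 9.8), 2))       # → 1.02
print(round(percent_difference(9.7, 9.9), 2))  # → 2.04
```

Note that percent difference is symmetric in its two arguments, whereas percent error treats the accepted value as the reference.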
Once the percent error or percent difference is known, the error analysis may proceed. Error analysis is the description of why two measured quantities differ. Usually, this discussion is about a final measured value and why it differs from a well-accepted or known value. Two main contributors factor into accounting for the discrepancy between the final measured value and the known value – these are accuracy and precision.
ACCURACY VERSUS PRECISION
After performing measurements, it is important to consider both the accuracy and precision of the measurements. If your work is accurate, this means that your experimental situation matches your mathematical theories (or your measurements match your predictions). If your work is precise, this means that, after multiple measurements, your measurements are similar (or have low uncertainty). Countless factors can affect both the accuracy (correctness) and the precision (uncertainty) of your experiment and should be considered and discussed at the end of any laboratory exercise, particularly when comparing a final measured value to a known or accepted value. If you only have one measurement, you cannot discuss precision. Similarly, if you do not have a prediction, you cannot discuss accuracy.
An example of accuracy versus precision is shown in the figure below. The idea here is that you are aiming the small yellow arrow at the central red bulls-eye. If your yellow arrows always hit close to one another, you have achieved high precision, whether they are close to the bulls-eye or not. If the arrows average evenly around the bulls-eye, then you have achieved high accuracy, whether they are close together or not. The best situation is when the arrows hit both close together (high precision) and in the bulls-eye (high accuracy).
Consider another example, in which a researcher is collecting gravitational acceleration data. If five measurements are collected, then the following scenarios are possible.
1) high accuracy and high precision: 9.83 m/s² ± 0.01 m/s²
The measured value is close to the known value of 9.8 m/s² and has a low uncertainty of 0.01 m/s².
2) high accuracy and low precision: 9.7 m/s² ± 0.1 m/s²
The measured value is close to the known value of 9.8 m/s² but has a comparatively high uncertainty of 0.1 m/s², ten times larger than 0.01 m/s².
3) low accuracy and high precision: 12.64 m/s² ± 0.01 m/s²
The measured value is far from the known value of 9.8 m/s² but has a low uncertainty of 0.01 m/s².
4) low accuracy and low precision: 12.64 m/s² ± 0.1 m/s²
The measured value is far from the known value of 9.8 m/s² and has a comparatively high uncertainty of 0.1 m/s².
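A set of repeated measurements like those above can be reduced to a final value with an uncertainty in a few lines. This sketch uses hypothetical data, takes the mean as the final value, and uses the standard deviation of the mean as one common uncertainty estimate:

```python
import statistics

# Hypothetical set of five free-fall acceleration measurements (m/s^2)
measurements = [9.81, 9.86, 9.79, 9.84, 9.80]

mean = statistics.mean(measurements)
# Standard deviation of the mean: sample stdev divided by sqrt(N)
uncertainty = statistics.stdev(measurements) / len(measurements) ** 0.5

accepted = 9.8
percent_error = abs(mean - accepted) / accepted * 100

print(f"{mean:.2f} m/s^2 +/- {uncertainty:.2f} m/s^2")  # → 9.82 m/s^2 +/- 0.01 m/s^2
print(f"percent error: {percent_error:.1f}%")           # → percent error: 0.2%
```

This hypothetical data set would fall into scenario 1: the mean is close to 9.8 m/s² (high accuracy) and the uncertainty is small (high precision).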
Both accuracy and precision will play a role in explaining the discrepancy between a final measured value and a known or accepted value. Both must always be considered in an error analysis discussion.
If a final measured value is found using combinations of several individual measured values, the uncertainty or precision of the final value is dependent on the uncertainty or precision of the individual measured values. The final uncertainty tells us how well the final value is known, that is, how tightly the measurements pin it down.
If a final measured value is found using combinations of several individual measured values, the accuracy of the final value is dependent on the accuracy of the individual measured values. The final accuracy tells us how correct the final value is.
TYPES OF ERRORS: POSSIBLE CONTRIBUTORS TO DISCREPANCIES BETWEEN THE FINAL MEASURED VALUE AND THE KNOWN OR ACCEPTED VALUE
As explained above, both accuracy and precision must be considered when attempting to explain the error, or discrepancy, between the final measured value and the accepted or known value. Identifying the factors that contribute to this discrepancy is important because those are precisely the factors that should be corrected if the experiment is to be improved, yielding an improved final measured value.
Precision is determined by the uncertainty in the measurement, which is due to observer capabilities, apparatus capabilities, and technique or method.
Accuracy is determined by using correct methods and assuming correct models to describe the physical system. When the experimental conditions do not match the assumptions of the model, accuracy will not be high.
Two types of errors are defined in order to help organize the contributors to the discrepancy between the final measured value and the accepted or known value: systematic errors and random errors.
Systematic errors: These are errors which affect all measurements alike, and which can be traced to an imperfectly made instrument or to the personal technique and bias of the observer. They are reproducible inaccuracies that are consistently in the same direction. Systematic errors cannot be detected or reduced by increasing the number of observations, but they can be reduced by applying a correction or correction factor to compensate for the effect.
Random errors: These are errors for which the causes are unknown or indeterminate, but are usually small and follow the laws of chance. Random errors can be reduced by averaging over a large number of observations.
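The distinction between the two error types can be illustrated numerically. In this sketch (all parameter values are assumed purely for illustration), each simulated measurement carries both a fixed systematic offset and a random Gaussian error; averaging more and more measurements shrinks the random scatter but leaves the systematic offset untouched:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

true_value = 9.80          # the quantity being measured (assumed)
systematic_offset = 0.30   # e.g., a miscalibrated instrument (assumed)
random_spread = 0.20       # std. dev. of the random error (assumed)

def measure():
    """One simulated measurement: truth + systematic + random error."""
    return true_value + systematic_offset + random.gauss(0, random_spread)

for n in (5, 500, 50000):
    avg = sum(measure() for _ in range(n)) / n
    print(f"n={n:6d}  average={avg:.3f}")
# The averages converge toward 10.10 (true value + offset), not 9.80:
# averaging removes random scatter but cannot remove the systematic shift.
```

Only a correction for the offset (for example, recalibrating the instrument) would bring the average back to the true value.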
The following errors are examples of systematic and random errors, and should be considered as possible contributors when explaining the discrepancy between your final measured value and the accepted or known value.
Incomplete definition (may be systematic or random) – One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same rope, they would probably get different results because each person may stretch the rope with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.
Failure to account for a factor (usually systematic) – The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For instance, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may fail to account for the effect of the Earth's magnetic field when measuring the field of a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorm should be done before beginning the experiment so that arrangements can be made to account for the confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected.
Environmental factors (systematic or random) – Be aware of errors introduced by your immediate working environment. You may need to account for, or protect your experiment from, vibrations, drafts, changes in temperature, electronic noise, or other effects from nearby apparatus.
Instrument resolution (random) – All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case). One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced and the magnitude of the unknown quantity can be found by comparison with the reference sample. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.
Failure to calibrate or check zero of instrument (systematic) – Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. When making a measurement with a micrometer, electronic balance, an electrical meter, or some of the LoggerPro hardware, always check the zero reading first. Re-zero the instrument if possible, or measure the displacement of the zero reading from the true zero and correct any measurements accordingly. It is a good idea to check the zero reading throughout the experiment.
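Correcting for a nonzero zero reading is a simple subtraction applied to every measurement. A minimal sketch, with hypothetical balance readings:

```python
# Zero-offset correction: subtract the reading shown with nothing measured
zero_reading = 0.35                    # grams displayed on the empty balance (hypothetical)
raw_readings = [12.71, 12.68, 12.70]   # grams, raw measurements (hypothetical)

corrected = [round(r - zero_reading, 2) for r in raw_readings]
print(corrected)  # → [12.36, 12.33, 12.35]
```

Because a zero offset shifts every reading by the same amount in the same direction, it is a textbook systematic error, and this correction is exactly the kind of compensation described above.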
Physical variations (random) – It is always wise to obtain multiple measurements over the entire range being investigated. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.
Parallax (systematic or random) – This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or low (some analog meters have mirrors to help with this alignment).
Instrument drift (systematic) – Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant and should be considered.
Lag time and hysteresis (systematic) – Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is generally too low. The most common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis where the instrument readings lag behind and appear to have a "memory" effect as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.
Personal error (neither systematic nor random), not a real error – Personal error is a result of carelessness, ignorance, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, or may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome. Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, gross personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term "human error" should also be absolutely avoided in error analysis discussions because it is too general to be useful, and because any data acquired under the effects of personal error should be excluded from error analysis.