Table of Contents

- Accuracy and Precision in Measurements
- Errors
- Representation of Errors
- Least Count
The error in measurement refers to the discrepancy between the actual value and the measured value of an object. Errors can arise from various sources and are typically categorized as follows:
Systematic or Controllable Errors
Systematic errors have identifiable causes and can be either positive or negative. Since the causes are understood, these errors can be minimized. Systematic errors are further classified into three types: instrumental errors (due to faulty construction or calibration of the instrument), environmental errors (due to external conditions such as temperature, pressure, or humidity), and personal or observational errors (due to the observer's habits, such as parallax while reading a scale).
Random Errors
Random errors occur for unknown reasons and happen irregularly, varying in size and sign. While it is impossible to completely eliminate these errors, their impact can be reduced through careful experimental design. For example, when the same person repeats a measurement under identical conditions, they may obtain different results at different times.
Note: When a measurement is repeated n times and the mean is taken, the random error decreases to (1/n) times its original value.
Example: If the random error in the average of 100 measurements is x, then the random error in the average of 500 measurements will be x/5.
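The 1/n scaling above can be sketched in a few lines of Python (the function name and the sample error value are illustrative, not from the text):

```python
def random_error_of_mean(error_ref: float, n_ref: int, n_new: int) -> float:
    """Scale a known random error from n_ref measurements to n_new,
    using the 1/n rule quoted in the note above."""
    return error_ref * n_ref / n_new

# Assumed: random error of 1.0 units in the mean of 100 measurements.
# Going to 500 measurements reduces it by a factor of 5.
print(random_error_of_mean(1.0, 100, 500))  # 0.2
```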
Gross Errors
Gross errors are caused by human mistakes and carelessness in taking measurements or recording results. Typical examples include reading an instrument without setting it up properly, recording an observation incorrectly, and using a wrong reading in subsequent calculations.
These errors can be reduced through proper training and by paying careful attention to detail during the measurement process.
Representation of Errors
When we measure a quantity, there can be a difference between the measured value and the true value. These differences, called errors, can be represented in several ways.
Absolute Error is the magnitude of the difference between the true value and the measured value of a quantity.
Suppose a physical quantity is measured n times and the measured values are a1, a2, a3, ..., an. The arithmetic mean (am) of these values is
am = (a1 + a2 + a3 + ... + an)/n
If the true value of the quantity is not given then mean value (am) can be taken as the true value. Then the absolute errors in the individual measured values are –
Δa1 = am – a1
Δa2 = am – a2
.........
Δan = am – an
The arithmetic mean of the magnitudes of all the absolute errors is defined as the final or mean absolute error (Δa)m of the value of the physical quantity a:
(Δa)m = (|Δa1| + |Δa2| + ... + |Δan|)/n
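The definitions above can be checked numerically. This sketch uses assumed sample readings (the values are hypothetical, not from the text):

```python
measurements = [2.63, 2.56, 2.42, 2.71, 2.80]  # assumed repeated readings

n = len(measurements)
a_m = sum(measurements) / n                        # arithmetic mean, taken as true value
abs_errors = [a_m - a for a in measurements]       # Δa_i = a_m − a_i
mean_abs_error = sum(abs(e) for e in abs_errors) / n   # (Δa)_m

print(round(a_m, 3), round(mean_abs_error, 4))  # 2.624 0.1072
```

So the result would be quoted as a = 2.624 ± 0.107 in the same units as the readings.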
So if the measured value of a quantity be 'a' and the error in measurement be Δa, then the true value (at) can be written as at = a ± Δa
Rule I: When adding or subtracting two quantities, the maximum absolute error in the result is the sum of the absolute errors of each quantity.
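A quick numeric sketch of Rule I, using assumed measured values:

```python
A, dA = 12.4, 0.1   # A = 12.4 ± 0.1 (hypothetical)
B, dB = 7.9, 0.2    # B = 7.9 ± 0.2 (hypothetical)

Z = A + B           # the same error bound applies to A − B
dZ = dA + dB        # maximum absolute error in the sum
print(round(Z, 1), "±", round(dZ, 1))  # 20.3 ± 0.3
```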
Rule II: When multiplying or dividing quantities, the maximum fractional or relative error in the result is the sum of the fractional or relative errors of the individual quantities.
Rule III: The maximum fractional error in a quantity raised to a power (n) is n times the fractional error in the quantity itself, i.e.
If X = A^n, then ΔX/X = n (ΔA/A)
If X = A^p B^q C^r, then ΔX/X = p (ΔA/A) + q (ΔB/B) + r (ΔC/C)
If X = A^p B^q / C^r, then ΔX/X = p (ΔA/A) + q (ΔB/B) + r (ΔC/C)
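Rules II and III can be folded into one small helper (the function and its example values are illustrative assumptions, not from the text):

```python
def max_fractional_error(terms):
    """terms: iterable of (value, abs_error, power) tuples for
    X = product of A_i^p_i; returns the maximum fractional error,
    summing |p_i| * (ΔA_i / A_i) per the rules above."""
    return sum(abs(p) * (da / a) for a, da, p in terms)

# Example: X = A^2 B / C with A = 10 ± 0.1, B = 5 ± 0.05, C = 2 ± 0.02
rel = max_fractional_error([(10.0, 0.1, 2), (5.0, 0.05, 1), (2.0, 0.02, -1)])
print(round(rel, 6))  # 0.04, i.e. 2(0.01) + 0.01 + 0.01
```

Note the power in the denominator enters as −r, but only its magnitude matters for the worst-case error, which is why the helper takes the absolute value of each power.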
Least Count
The least count (L.C.) refers to the smallest measurement that an instrument can accurately make. It is a crucial aspect of precision measuring devices.
Suppose the size of one main scale division (M.S.D.) is M units and that of one vernier scale division (V.S.D.) is V units. Also let the length of a main scale divisions equal the length of b vernier scale divisions, so that aM = bV, i.e. V = (a/b)M.
The quantity (M − V) = M(1 − a/b) is called the vernier constant (V.C.) or least count (L.C.) of the vernier callipers.
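A minimal sketch of the vernier least count, assuming the aM = bV relation above (the 1 mm / 9-in-10 example is the typical calliper configuration, used here as an assumption):

```python
def vernier_least_count(M: float, a: int, b: int) -> float:
    """Least count of a vernier: V = a*M/b from a·M = b·V,
    so L.C. = M − V = M(b − a)/b."""
    V = a * M / b
    return M - V

# Typical calliper: 1 mm main divisions, 10 vernier divisions spanning 9 mm
print(round(vernier_least_count(1.0, 9, 10), 4))  # 0.1 (mm)
```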
For a screw gauge,
Least Count = Pitch / Total number of divisions on the circular scale
where the pitch is the distance moved by the screw head in one complete rotation of the circular scale, i.e.
Pitch = Distance moved by the screw on the linear scale / Number of full rotations given
Note: With the decrease in the least count of the measuring instrument, the accuracy of the measurement increases and the error in the measurement decreases.
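The screw-gauge formulas above can be combined directly (the 5 mm / 5 rotations / 100 divisions figures are an assumed, typical example):

```python
def screw_gauge_least_count(distance_moved: float, full_rotations: int,
                            circular_divisions: int) -> float:
    """Least count of a screw gauge from the two definitions above."""
    pitch = distance_moved / full_rotations   # distance per rotation
    return pitch / circular_divisions

# Assumed: screw advances 5 mm in 5 full rotations; 100 circular divisions
print(screw_gauge_least_count(5.0, 5, 100))  # 0.01 (mm)
```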
Error of a Product or a Quotient
Suppose Z = AB and the measured values of A and B are A ± ΔA and B ± ΔB. Then
Z ± ΔZ = (A ± ΔA)(B ± ΔB) = AB ± B ΔA ± A ΔB ± ΔA ΔB
Dividing the LHS by Z and the RHS by AB, we have
1 ± (ΔZ/Z) = 1 ± (ΔA/A) ± (ΔB/B) ± (ΔA/A)(ΔB/B)
Since ΔA and ΔB are small, we shall ignore their product. Hence the maximum relative error is
ΔZ/Z = ΔA/A + ΔB/B
You can easily verify that this is true for division also.
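The approximation made in the derivation (dropping the ΔA·ΔB term) can be checked numerically with assumed small errors:

```python
A, dA = 4.0, 0.04   # A = 4.0 ± 0.04 (hypothetical)
B, dB = 2.5, 0.05   # B = 2.5 ± 0.05 (hypothetical)

Z = A * B
dZ_exact = (A + dA) * (B + dB) - Z   # worst-case exact error in the product
dZ_rule = Z * (dA / A + dB / B)      # rule: ΔZ/Z ≈ ΔA/A + ΔB/B

print(round(dZ_exact, 4), round(dZ_rule, 4))  # 0.302 0.3
```

The two results differ by exactly ΔA·ΔB = 0.002, the second-order term the derivation discards.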