# 24 Working with Uncertainties

# Error in Science

The purpose of science is to discover new things, so we usually don’t have an accepted answer to compare with the results of an experiment. Attempting to measure something that already has an accepted standard value before performing our experiment can expose systematic errors so that they can be fixed or taken into account. This is known as *calibration*. Even after calibration we can’t be certain that a systematic error has not affected our accuracy. “Students in science classes are in an artificial situation. Their experiments are necessarily repetitions of previous work, so the results are known. Because of this students learn a poor lesson about science. The good scientist [works hard to minimize possible sources of error and then] *assumes the experiment is not in error*. It is the only choice available. Later research, attempts by other scientists to repeat the result, will hopefully reveal any problems, but the first time around there is no such guide.”^{[1]} Bellevue College provides a more in-depth discussion of uncertainty and error.

# Uncertainty

Even assuming we have eliminated systematic errors from our measurement or experiment, the accuracy of our result could still be affected by random errors. Averaging many measurements reduces the effect of random error, and analyzing the spread of those measurements allows us to define the measurement uncertainty. *The uncertainty of a measured value defines an interval that allows us to say with some defined level of confidence that a repetition of the measurement will produce a new result that lies within the interval.* Sometimes the uncertainty is determined primarily by the precision of an instrument, and sometimes other factors come into play.

### Everyday Examples: Uncertainty in Tyler’s Pupillary Distance Measurement

There are various statistical methods^{[2]} to determine the uncertainty in Tyler’s set of measurements, but we will just look at the range of values to get a quick idea of the precision in the measurement and use that for the uncertainty. We look at the seven values and the average and we notice that the values go up to 2 **mm** above the average and down to 2 **mm** below the average.
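This quick look at the spread can be sketched in a few lines of code. The seven values below are hypothetical (Tyler’s actual measurements are not listed here); they were chosen so that, as described above, they reach 2 mm above and 2 mm below their average:

```python
# Hypothetical set of seven pupillary-distance measurements, in mm
measurements = [57, 56, 55, 58, 54, 57, 56]

average = sum(measurements) / len(measurements)
# Half of the full spread serves as a rough uncertainty estimate
half_range = (max(measurements) - min(measurements)) / 2

print(f"{average:.1f} mm ± {half_range:.0f} mm")
```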

We will use 2 **mm** as a rough estimate of the uncertainty. This method is known as the *half-range method* because it uses half of the difference between the maximum and minimum measured values as the uncertainty. If we wanted to show the final result of Tyler’s measurements including uncertainty in the standard way then we would write:

To complete our uncertainty statement we need to provide some kind of confidence. We could say that *most of the time a new measurement will be within 2 mm of the average.*

With only seven values it will be difficult to further quantify the uncertainty. A common rule of thumb that can be cautiously applied when we have taken many measurements is that *about 70% of the time a new measurement will be less than 1/4 of the full range away from the average*. The full range in our example spans 4 **mm**, so that would imply that roughly 70% of the time a new measurement will fall within 1 **mm** of the average. However, in our example we shouldn’t put much weight on the quoted percentage because we have only seven measurements.
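The quarter-range rule of thumb is a one-line calculation. Using the same hypothetical seven values as before (chosen to match the 4 mm spread described in the text):

```python
# Hypothetical pupillary-distance measurements, in mm
measurements = [57, 56, 55, 58, 54, 57, 56]

full_range = max(measurements) - min(measurements)  # 4 mm in this example
# Rule of thumb: ~70% of new measurements land within this of the average
quarter_range = full_range / 4
```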

### Examples: Rulers

What is the uncertainty in measuring the length of a piece of paper with a ruler?

The precision of a ruler typically determines the uncertainty in the measurement. If we have checked the length of the ruler against other standard rulers then we can assume it is accurate. A ruler with markings at a 1 **mm** interval will allow you to decide if the paper edge is closer to one mark or another. In other words, you will be able to tell if the paper edge is more or less than half-way between one mark and the next. We could then estimate the precision in the measurement to be half of one **mm** (0.5 **mm**) under ideal conditions, because measurements would likely indicate the paper edge being closest to the same mark each time. To make a statement about our uncertainty we would then need a confidence level; in this case it would be qualitative: *We are very confident that repeated measurements will fall within 0.5 mm above or below the average value.*

Getting a quantitative uncertainty typically requires statistical analysis of the measurement values. An example would be calculating the standard deviation and stating that *68% of the time a repeated measurement will fall within one standard deviation of the mean.* Applying this type of statistical analysis requires making many repeated measurements, and in this class we usually won’t make enough, so we will just estimate our uncertainties.
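With enough measurements, the standard deviation is easy to compute with Python’s standard library. Reusing the hypothetical seven values from earlier (far fewer than this method really warrants, which is exactly the point made above):

```python
import statistics

# Hypothetical pupillary-distance measurements, in mm
measurements = [57, 56, 55, 58, 54, 57, 56]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)  # sample standard deviation

# Interpretation: ~68% of repeated measurements are expected
# to fall within mean ± sigma
print(f"mean = {mean:.1f} mm, standard deviation = {sigma:.1f} mm")
```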

### Everyday Examples

What is the uncertainty in the mass measurement if you place a quarter on a standard electronic balance and obtain a reading of 6.72 **g**?

The scale is indicating the uncertainty in the measurement using the number of decimals it displays. The digits 6 and 7 are certain, and the 2 indicates that the mass of the quarter is likely between 6.71 and 6.73 grams. The quarter weighs *about* 6.72 grams, with a nominal uncertainty in the measurement of ± 0.01 gram. If the coin is weighed on a more sensitive balance, the mass might be 6.723 grams. This means its mass lies between 6.722 and 6.724 grams, an uncertainty of 0.001 gram. ^{[3]}
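The “one unit of the last displayed digit” idea can be sketched as a small helper. The function name is my own invention, not a standard library routine; it simply reads the decimal exponent of the displayed value:

```python
from decimal import Decimal

def implied_uncertainty(reading: str) -> Decimal:
    """Nominal uncertainty implied by a display reading:
    one unit of the last displayed digit (e.g. '6.72' -> 0.01)."""
    exponent = Decimal(reading).as_tuple().exponent
    return Decimal(1).scaleb(exponent)  # 10 ** exponent, as a Decimal

print(implied_uncertainty("6.72"))   # balance reading ±0.01 g
print(implied_uncertainty("6.723"))  # more sensitive balance, ±0.001 g
```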

If wind currents in the room were causing the last digit to fluctuate between 6.77 grams and 6.67 grams then we would know the uncertainty was greater than the instrument precision. In that case we would have to average many values to ensure accuracy and then examine how those values were spread between 6.67 grams and 6.77 grams in order to determine the uncertainty.

Scientists try to reduce uncertainty as much as is practical and then use a variety of methods, some simple and some very sophisticated, to determine the size of the uncertainty to be reported along with the results. In this textbook we will stick to the simple methods, but if you decide to continue studying science you will learn some of the more sophisticated methods^{[4]}^{[5]}.

### Reinforcement Exercises

# Significant Figures

Notice that in the previous example we rounded the result to drop the decimal places. This is because it would be meaningless to include decimals in the hundredth-of-a-**mm** place if we don’t even know the answer to within 2 **mm**, which is in the ones place. Dropping the decimal places changes the number of significant figures in our result to match our uncertainty. *The significant figures in a result are those digits that contribute to showing how precisely we know the result*.

Special consideration is given to zeros when counting significant figures. The zeros in 0.053 are not significant, because they are only placeholders that locate the decimal point. There are two significant figures in 0.053. The zeros in 10.053 are not placeholders but are significant—this number has five significant figures. The zeros in 1300 may or may not be significant depending on the style of writing numbers. They could mean the number is known to the last digit, or they could be placeholders. So 1300 could have two, three, or four significant figures. Typically when you see a value like 1300 meters the zeros don’t count, but we can avoid the ambiguity by using scientific notation and writing 1.3 x 10^{3} meters or by using a metric prefix and writing 1.3 kilometers^{[6]}. The table below will help you deal with zeros.

| Result | Number of Placeholder Zeros | Number of Significant Figures |
| --- | --- | --- |
| 300.0 | 0 | 4 |
| 0.0003 | 4 | 1 |
| 0.000300 | 4 | 3 |
| 300.07 | 0 | 5 |
| 300.0700 | 0 | 7 |
| 375 | 0 | 3 |
| 3,750,000 | 4 (typically) | 3 (typically) |
| 3.75 x 10^{3} | 0 | 3 |
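The zero-counting conventions in the table can be captured in a short function. This is a sketch (the function name and string-based approach are my own), following the text’s convention that trailing zeros in a number without a decimal point are treated as placeholders:

```python
def count_sig_figs(value: str) -> int:
    """Count significant figures in a numeric string, per the
    conventions in the table above."""
    s = value.replace(",", "").lstrip("+-").lower()
    if "e" in s:  # scientific notation: every mantissa digit counts
        s = s.split("e")[0]
    if "." in s:
        # Leading zeros are placeholders; trailing zeros after the
        # decimal point are significant
        return len(s.replace(".", "").lstrip("0"))
    # No decimal point: treat trailing zeros as placeholders (typical case)
    return len(s.lstrip("0").rstrip("0"))

print(count_sig_figs("0.000300"))   # 3
print(count_sig_figs("3,750,000"))  # 3
```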

### Reinforcement Activity

Determine how many significant figures are in each of these reported results:

### Reinforcement Activity

Use the reported uncertainties to adjust each of the following results to the correct number of significant figures:

# Method of Significant Figures

Sometimes values are reported without uncertainty, but the level of uncertainty is still implied by the number of significant figures. When we express measured values, we can only list as many digits as we initially measured with our measuring tool. Tyler reported his first PD measurement as 56 mm, but he could not express this value as 56.31 mm because his measuring tool lacked the precision to measure down to the hundredth of a millimeter. Tyler had to decide which millimeter marking lined up with his pupil, so the 1 mm digit has uncertainty. The last digit in a measured value has always been estimated in some way by the person performing the measurement. Using the method of significant figures, *the last digit written down in a measurement is the first digit with some uncertainty.*^{[7]} In this way significant figures indicate the precision of the measuring tool that was used to measure a value.

Whether uncertainties are written out or implied, we still need to account for the fact that measured values have uncertainty when we use those values in calculations. We will use four general rules to determine the number of significant figures in our final answers.

- 1) For multiplication and division, the result should have the same number of significant figures as the least number of significant figures in any of the values being multiplied or divided.

- 2) For addition and subtraction, the result should have the same number of decimal places as the least number of decimals in any of the values being added or subtracted.

- 3) Counting discrete objects may have zero uncertainty. For example, sitting at a table with three oranges on it, you can measure the number of oranges on the table to be 3 with full certainty.

- 4) Definitions can have zero uncertainty. For example, a kilometer is defined to be 1000 meters, so using this conversion factor in a calculation does not contribute to adjusting your significant figures.
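Rules 1 and 2 can be sketched as small helpers. The function below is my own illustration, not a standard library routine; it rounds a result to a chosen number of significant figures for rule 1, while rule 2 is just ordinary rounding to the fewest decimal places among the inputs:

```python
from math import floor, log10

def round_sig(x: float, n: int) -> float:
    """Rule 1 sketch: round x to n significant figures."""
    if x == 0:
        return 0.0
    # Position of the leading digit determines how many decimals to keep
    return round(x, n - 1 - floor(log10(abs(x))))

# Rule 1: the quotient keeps 3 significant figures, like 393
quotient = round_sig(393 / 7, 3)

# Rule 2 sketch: the sum keeps one decimal place, because 18.0
# has the fewest decimals of the three addends
total = round(12.11 + 18.0 + 1.013, 1)
```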

### Everyday Examples

Each of Tyler’s PD measurements is reported to the ones place due to his ruler’s precision. He took the average to get the final result:

We see that to take the average Tyler had to add up the values:

Applying the rule for addition (rule # 2), the result must have its last digit in the ones place because that was the least number of decimals in any number we used.

Tyler then divided by seven to get the average, but because this is just a count of how many measurements were made it has no uncertainty and doesn’t affect the significant figures. Applying the rule for division, the final result should have the same number of significant figures as the least number in the division, which in this case is the three significant figures in 393 **mm**. Therefore our final result is 56.1 **mm**, which implies that we are certain of the 56 but unsure about the 0.1, because we have uncertainty in the tenths-of-a-millimeter place. This result has more significant figures than were produced by simply looking at the range of values to roughly estimate the uncertainty; but remember we expected that quick method to overestimate the uncertainty, so this result makes sense.
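The worked example above translates directly into code. As before, the seven values are hypothetical, chosen so they sum to 393 mm as described:

```python
# Hypothetical pupillary-distance measurements, in mm
measurements = [57, 56, 55, 58, 54, 57, 56]

total = sum(measurements)            # 393 mm; rule 2 keeps the ones place
average = total / len(measurements)  # dividing by the exact count 7 (rule 3)
result = round(average, 1)           # 3 significant figures, matching 393 (rule 1)
```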

### Reinforcement Exercises

- "Accuracy vs. Precision, and Error vs. Uncertainty" by Physics Resources and References, Bellevue College ↵
- "Uncertainty in Measurement Results" by NIST Reference on Constants Units and Uncertainty, National Institute of Standards and Technology ↵
- "Measurement Uncertainty, Accuracy, and Precision" by Paul Flowers, Klaus Theopold, Richard Langley, William R. Robinson, PhD, Chemistry 2e, OpenStax is licensed under CC BY 4.0. ↵
- "Uncertainty in Measurement Results" by NIST Reference on Constants Units and Uncertainty, National Institute of Standards and Technology ↵
- "Experimental Uncertainty" by EngineerItProgram, California State University, Chio ↵
- OpenStax, College Physics. OpenStax CNX. Jul 6, 2018 http://cnx.org/contents/031da8d3-b525-429c-80cf-6c8ed997733a@11.20. ↵
- OpenStax, College Physics. OpenStax CNX. Jul 6, 2018 http://cnx.org/contents/031da8d3-b525-429c-80cf-6c8ed997733a@11.20. ↵

an error having a nonzero mean (average), so that its effect is not reduced when many observations are averaged; usually occurring because there is something wrong with the instrument or how it is used

refers to the closeness of a measured value to a standard or known value

random errors are fluctuations (in both directions) in the measured data due to the precision limitations of the measurement device. Random errors usually result from the experimenter's inability to take the same measurement in exactly the same way to get exactly the same number

Amount by which a measured, calculated, or approximated value could be different from the actual value

refers to the closeness of two or more measurements to each other

describing what happens, but not how much happens

describing what and how much happens

each of the digits of a number that are used to express it to the required degree of accuracy, starting from the first nonzero digit

using the number of digits provided in a measurement value to indicate the measurement uncertainty