Unit 2: Scientific Measurement, Methods, and Models
The first lab activity associated with this textbook guides you through the process of performing an experiment to determine human reaction time. That lab required you to apply a kinematic equation to model the motion of a falling ruler. As with the application of any scientific model, certain assumptions were made; for example, we assumed that air resistance had no noticeable effect on the falling ruler. Any rigorous scientific investigation requires an understanding of the uncertainty in making measurements and of the assumptions made in modeling the data that result from those measurements. The following chapters in this unit introduce those concepts. The learner objectives for this unit are listed below, followed by some related key terms to watch for as you work through the chapters.
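To make that model concrete, here is a minimal sketch (in Python, not part of the original lab handout) of the free-fall kinematic relation used in the ruler-drop experiment: assuming the ruler starts from rest and air resistance is negligible, the drop distance is d = ½gt², so the reaction time is t = √(2d/g). The function name and the 0.20 m example distance are illustrative choices, not values from the lab.

```python
import math

G = 9.81  # acceleration due to gravity, m/s^2

def reaction_time(drop_distance_m):
    """Reaction time (s) implied by a free-fall drop distance (m): t = sqrt(2d/g)."""
    return math.sqrt(2.0 * drop_distance_m / G)

# Example: a ruler caught after falling 20 cm implies a reaction time of about 0.20 s.
print(f"reaction time ≈ {reaction_time(0.20):.2f} s")
```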
Learner Objectives
- Explain the scientific cycle and the role of empirical models, physical models, hypotheses, observations, experiments, theories, laws, and principles. [5]
- Explain how systematic and random errors affect precision, accuracy, and uncertainty. [4]
- Estimate and report uncertainties in specific measurements and calculated results. [4]
- Take actions to reduce the uncertainty in specific measurements or results. [4]
Key Terms
- Measurement error (also called observational error): the difference between a measured quantity and its true value. It includes random error and systematic error.
- Random error: naturally occurring fluctuations (in both directions) in the measured data due to the precision limitations of the measurement device. Random errors usually result from the experimenter's inability to take the same measurement in exactly the same way and get exactly the same number.
- Systematic error: a consistent error that affects all measurements in the same way, for example one caused by a mis-calibrated instrument.
- Precision: the closeness of two or more measurements to each other.
- Accuracy: the closeness of a measured value to a standard or known value.
- Uncertainty: the amount by which a measured, calculated, or approximated value could differ from the actual value.
- Significant figures: the digits of a number that are used to express it to the required degree of accuracy, starting from the first nonzero digit.
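The short Python sketch below (an illustrative example with invented trial data, not part of the original text) ties several of these terms together for a set of repeated reaction-time measurements: the scatter among trials reflects random error and limits precision, the mean is the best estimate, the standard error of the mean serves as one common estimate of the uncertainty, and the final result is reported with a matching number of digits.

```python
import statistics

# Hypothetical repeated reaction-time trials from the ruler-drop lab (seconds).
trials = [0.21, 0.19, 0.22, 0.20, 0.18, 0.21]

mean = statistics.mean(trials)             # best estimate of the reaction time
spread = statistics.stdev(trials)          # sample standard deviation: scatter due to random error
uncertainty = spread / len(trials) ** 0.5  # standard error of the mean, one common uncertainty estimate

# Report mean ± uncertainty; two decimal places keeps one significant figure
# in the uncertainty for this data set and rounds the mean to the same place.
print(f"reaction time = {mean:.2f} s ± {uncertainty:.2f} s")
```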