The Skinfold Method
The skinfold (caliper) method is one way to determine body composition. The method uses specially designed calipers to measure the thickness of skinfolds that are pinched from several specific locations on the body, as seen in this skinfold demonstration video. The skinfold thicknesses are correlated with body fat percentage using tables or equations that were produced by making both displacement and skinfold body composition measurements on many people.
The skinfold method is quick, easy, and requires minimal equipment; however, there are many possible ways for error to enter the measurement. Analyzing the skinfold method will help us understand the concepts of error, precision, accuracy, and uncertainty, which apply to all measurements. Watching the short skinfold demonstration video will help you follow the discussion of these concepts.
Skinfold Measurement Error
Let’s say a physical therapist (PT) measures a particular skinfold thickness one time. The result might not be very accurate, or close to the actual value, for a variety of reasons. For example, measuring above or below the center of the skinfold would produce an error that would affect the accuracy of the results.
The PT could then make many measurements of each skinfold. If the collection of measurements were all relatively close together, then the measurement would have high precision. On the other hand, if the measurements were all relatively far apart, then the measurement would have low precision. The measurement precision can be affected by the measurement method and/or by the equipment, so improving the method or the equipment can improve precision. For example, the PT might draw a mark on the skin to be sure the measurement is made in the same place every time. A caliper with a larger dial will make it easier to see which mark is closest to the needle position.
Low precision is not desirable, but it doesn’t have to ruin the measurement if the error causing the lack of precision is a random error. For example, if the PT happens to randomly measure at various distances above or below the actual skinfold center in equal amounts, then this error is random. In this case, averaging all of the measurements should give a result that is relatively close to the actual value. The effect of random error on accuracy can be reduced by averaging more measurements.
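The effect of averaging on random error can be illustrated with a short simulation. This sketch is not part of the original text, and the numbers in it are illustrative assumptions: a hypothetical 20 mm true skinfold thickness and zero-mean random noise with a 2 mm standard deviation.

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

true_thickness = 20.0  # mm, hypothetical actual skinfold thickness

def measure():
    """One caliper reading: the true value plus zero-mean random error."""
    return true_thickness + random.gauss(0, 2.0)

# A single reading may land well away from the true value, but the
# average of many readings settles closer and closer to it.
for n in (1, 10, 1000):
    avg = sum(measure() for _ in range(n)) / n
    print(f"average of {n:4d} measurement(s): {avg:.2f} mm")
```

Because the simulated error is equally likely to fall above or below the true value, the average of 1000 readings lands much nearer 20 mm than any single reading is guaranteed to.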
Systematic errors cannot be reduced by averaging because they bias the result away from the actual value in the same direction every time. For example, if the PT made a mark on the skin to improve precision, but the mark was actually in the wrong spot, then every measurement would be inaccurate in the same way. In this case, averaging the results would not produce an accurate result. Instead, systematic errors must be reduced by improving methods or equipment. For example, using the displacement method instead of calipers would improve the accuracy of the body fat percentage measurement. These issues are part of why the caliper method is slowly going out of favor for determining body fat percentage. Another reason is that this specific method might embarrass a patient and/or lower their motivation to visit with their health care provider about their health, and that negative outcome is not worth the body fat percentage information that might be gained from the measurement (uncertainty is typically 3% body fat).
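A companion simulation shows why averaging does not help with a systematic error. Again the numbers are illustrative assumptions, not from the text: a hypothetical 20 mm true thickness, a 3 mm constant offset (the mark drawn in the wrong spot), and 2 mm random noise.

```python
import random

random.seed(2)  # fixed seed so the simulation is repeatable

true_thickness = 20.0  # mm, hypothetical actual skinfold thickness
bias = 3.0             # mm, systematic error: every reading is offset the same way

def measure():
    """One caliper reading: true value, shared offset, and random noise."""
    return true_thickness + bias + random.gauss(0, 2.0)

# Averaging many readings cancels the random noise, but the result
# stays biased away from the true value by the full offset.
avg = sum(measure() for _ in range(10_000)) / 10_000
print(f"average: {avg:.2f} mm (true value: {true_thickness} mm)")
```

No matter how many readings are averaged, the result converges to the biased value (about 23 mm here), not the true one; only fixing the method or equipment removes the offset.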
skinfold method: a method for measuring body fat percentage that uses specially designed calipers to measure the thickness of skinfolds pinched from several specific locations on the body as inputs to empirical equations
accuracy: refers to the closeness of a measured value to a standard or known value
measurement error (also called observational error): the difference between a measured quantity and its true value. It includes random error (naturally occurring errors that are to be expected with any experiment) and systematic error (caused, for example, by a mis-calibrated instrument that affects all measurements)
precision: refers to the closeness of two or more measurements to each other
random errors: fluctuations (in both directions) in the measured data due to the precision limitations of the measurement device. Random errors usually result from the experimenter’s inability to take the same measurement in exactly the same way and get exactly the same number
systematic error: an error having a nonzero mean (average), so that its effect is not reduced when many observations are averaged; usually occurring because there is something wrong with the instrument or how it is used