Relocating

I am moving this blog to ngholmes.wordpress.com to get a more permanent blog home (on the off chance I ever finish grad school and leave UBC). I will disable this page in a few weeks.

18. December 2013 by ngholmes

Errors vs Uncertainty

Having made lots of changes to our intro physics lab at UBC, conducted a series of interviews, marked a lot of lab reports, and analyzed even more, I’ve had lots of time to think about how students deal with errors and uncertainties. Two weeks ago, I listened to the recording of the Global Physics Department’s discussion on error analysis and propagation (see the Global Physics Department blog here and the very extensive, detailed, and excellent error analysis and propagation document by John Denker here), which motivated me to get some of my thoughts and ideas out in public. Below is an edited excerpt from my PhD Research Proposal about [some of] my issues with the words “error” and “true value” and the way we teach and represent uncertainties. Here goes.

There’s a lot of research out there describing how students struggle with understanding measurement and uncertainty (especially Allie, et al., 1998; Allie, et al., 2003). Most of the issues that I’ve come across in my research centre around the notion of set versus point paradigms for understanding measurement (Buffler, et al., 2001; Buffler, et al., 2009). What are set and point paradigms, you ask? Well, Buffler and colleagues define the terms as follows: “The point paradigm is characterized by the notion that each measurement could in principle be the true value. … The set paradigm is characterized by the notion that each measurement is only an approximation to the true value and that the deviation from the true value is random.” (p. 1139) That is, the point paradigm emphasizes the importance of any single piece of data, placing special value on the individual measured value. In contrast, the set paradigm emphasizes the importance of the collection of data as a whole, recognizing that an individual measured value is only an estimate of the physical quantity being measured. I have issues with the use of the words “true value,” but we’ll get to that shortly. And yes, I’m going to be putting it in “scare quotes” from now on.
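
To make the set paradigm concrete, here’s a minimal sketch in Python (the trial values are invented for illustration): no single trial is treated as “the” value; the collection as a whole provides a best estimate and an uncertainty.

```python
import math
import statistics

# Five repeated measurements of, say, a pendulum period in seconds
# (values invented for illustration).
trials = [1.42, 1.38, 1.45, 1.40, 1.39]

mean = statistics.mean(trials)                 # best estimate of the quantity
spread = statistics.stdev(trials)              # spread of individual trials
uncertainty = spread / math.sqrt(len(trials))  # standard uncertainty of the mean

print(f"best estimate = {mean:.3f} s")
print(f"standard uncertainty of the mean = {uncertainty:.3f} s")
```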

First let’s focus on the use of the term “error.” Experts generally use the terms error and uncertainty interchangeably. For novices, though, the literal meaning of the word can create serious pedagogical issues, especially when combined with point-like reasoning. As Buffler and colleagues describe, students may hold a point-like understanding of measurement in which measurement “errors” are actual mistakes that have caused the measured value to differ from the “true value” (Buffler, et al., 2009). The authors describe how this notion extends into a belief that measurement errors could be reduced to zero and a perfect measurement of the “true value” could be made (presumably by scientists in a lab). Thus, point-like thinking encourages students to interpret errors (synonymously with uncertainties) as measures of accuracy, rather than precision. In fact, many students in the lab have indicated that they were previously taught to determine the percent error through the following formula:

% error = |“Actual” − Measured| / “Actual” × 100%
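
As a minimal sketch of that formula (in Python, with invented numbers), note that computing it requires an assumed-known “Actual” value: it measures accuracy, not uncertainty.

```python
def percent_error(actual, measured):
    """Percent error as students report being taught: the deviation of a
    measured value from an assumed-known 'actual' value."""
    return abs(actual - measured) / abs(actual) * 100

# Hypothetical example: a measurement of g against the accepted 9.81 m/s^2.
print(f"{percent_error(9.81, 9.72):.1f}% error")  # prints: 0.9% error
```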

Students have also told us in interviews and in class that they were taught that a 1% error was a pretty good (or pretty accurate) measurement. Here, the use of the word “error” indeed refers to a quantitative measure of accuracy or systematic error. It is clear, then, that students can subsequently misinterpret measurement “errors,” intended to mean uncertainties, as literal errors expressing the deviation of a measurement from its theoretical value (Séré, et al., 2001). When instructors use the word “error” to mean uncertainty, these values are then interpreted as providing the range in which the “actual” or “true” value lies. While this reasoning is not entirely incorrect, it can propagate into much more problematic lines of reasoning. In particular, students may believe that the “true value” is one that expert physicists have measured with perfect precision (no uncertainty).

As an anecdote, I remember being blown away in my undergrad when a prof put some fundamental constant on the board (I think it was alpha?) and then gave it an uncertainty. Up to that point, I hadn’t made the connection that many of the fundamental constants I worked with in class (mass or charge of an electron, gravitational acceleration, etc etc etc) are measured quantities that have limits on how well we know them. What I was missing was that, in experimental physics, it is impossible to make a perfect measurement with no uncertainty, and so no value for these fundamental constants can ever be fundamentally “true”… well, except maybe the speed of light, whose value is now exact by definition, since the metre is defined in terms of it. Of course, comparisons can be made to theoretical, uncertainty-less values, but in that case the word “error” is not serving as a definition of uncertainty. In many cases of authentic experimental research, a theoretical value may not yet exist. As such, the uncertainty assigned to a measurement is not a measure of accuracy, since there is no known “true value” with which to compare. Exclusively using the term uncertainty in place of error is a small step towards moving students’ interpretations of measurement uncertainties into more set-like paradigms (Allie, et al., 2003).

The traditional treatment of comparing measurements also poses pedagogical problems. When students enter the first-year lab, most of their experience comparing two measurements involves checking whether their uncertainty ranges overlap (or whether the theoretical value falls within the uncertainty range of the measurement). This binary comparison (either the ranges overlap, indicating agreement, or they do not, indicating disagreement) differs significantly from authentic scientific practice. In the behavioural sciences, the difference between two means is typically only called statistically significant at the 5% level, which corresponds roughly to a separation of about two units of uncertainty (a 2σ difference). In particle physics, a discovery is not declared unless the effect stands out at the 5σ level. While the threshold for agreement may differ depending on the research question, rarely is the comparison a binary one at the 1σ level.

The binary comparison also reinforces point-like thinking, since it ignores the probability distribution that characterizes each measurement. Checking whether values agree within one unit of uncertainty does not capture the full range over which the measurement may lie, since one unit of uncertainty only covers about 68% of the probability distribution associated with the measurement (assuming the data are normally distributed). This issue is reinforced by the standard use of error bars, which draw a range around the measured value that suggestively represents the full distribution, when in fact the uncertainties and error bars most often represent the 68%, or 1σ, confidence level. Overlapping uncertainty bars or uncertainty ranges thus mislead students into the binary notion of agreement.

Even the mathematical notation itself (the ± symbol) is guilty of reinforcing this idea, and it supports even more extreme versions of point-like thinking if the language (plus or minus) is interpreted literally. While marking lab books, I found a student who wrote that a particular value was the measured value plus its uncertainty or the measured value minus its uncertainty, placing the importance on the extremes of the range (again, a 1σ range) rather than on the central value. This interpretation of the ± symbol is not so unreasonable when you consider that its most familiar use is in the quadratic formula. There, the ± symbol produces the two roots of the equation, and the solution is one extreme or the other (or both), rather than the values in between.
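
To make the contrast concrete, here’s a minimal sketch (in Python, with invented numbers) of a more set-like comparison: rather than asking whether two 1σ ranges overlap, compute how many units of combined uncertainty separate the two values, and interpret that number against a threshold appropriate to the question.

```python
import math

def sigma_difference(x1, u1, x2, u2):
    """How many combined standard uncertainties separate two
    independent measurements x1 ± u1 and x2 ± u2."""
    return abs(x1 - x2) / math.sqrt(u1**2 + u2**2)

# Two hypothetical measurements of g (values invented for illustration).
d = sigma_difference(9.70, 0.05, 9.81, 0.03)
print(f"the measurements differ by {d:.1f} units of uncertainty")
# A difference near 1 is unremarkable; a difference of 3 or more starts
# to suggest genuine disagreement (the threshold depends on the field).
```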

At UBC, we have developed some strategies for addressing these issues in the lab, but in the interest of suspense and brevity, I’ll save these for another post. I highly, highly recommend checking out the references below for more information in the meantime!

References:

Allie, S., Buffler, A., Campbell, B., and Lubben, F. (1998). First year physics students’ perceptions of the quality of experimental measurements. International Journal of Science Education, 20(4):447–459.

Allie, S., Buffler, A., Campbell, B., Lubben, F., Evangelinos, D., Psillos, D., and Valassiades, O. (2003). Teaching measurement in the introductory physics laboratory. The Physics Teacher, 41(7):394.

Buffler, A., Allie, S., and Lubben, F. (2001). The development of first year physics students’ ideas about measurement in terms of point and set paradigms. International Journal of Science Education, 23(11):1137–1156.

Buffler, A., Lubben, F., and Ibrahim, B. (2009). The relationship between students’ views of the nature of science and their views of the nature of scientific measurement. International Journal of Science Education, 31(9):1137–1156.

Séré, M.-G., Fernandez-Gonzalez, M., Gallegos, J. A., Gonzalez-Garcia, F., Manuel, E. D., Perales, F. J., and Leach, J. (2001). Images of science linked to labwork: A survey of secondary school and university students. Research in Science Education, 31(4):499–523.

17. December 2013 by ngholmes

Recent Media Coverage

When my co-coordinator was feeling a little under the weather, I was tagged in to help with some media coverage of the UBC Let’s Talk Science All Science Challenge. Here’s what came out of it: a few awkward moments of me describing the activity, but lots of great words from the students themselves! Click on the image above to view the YouTube video!

30. May 2013 by ngholmes
