I pass by a table where three students are huddled around a dry-erase board that is covered in calculations.
"Mr Fazio, can you help us with this question?"
I glance at the students' board and notice that their work is tedious and convoluted. The students have failed to recognize the simplicity that lies at the heart of the question.
I pick up the marker. Uncapping it, I begin to speak, "Okay, so let’s try to approach this intuitively..."
The students look at each other and snicker.
"What is it? What's funny?"
"It's just that you always say that," one of them replies. “You always tell us to use our intuitions, but when we're stuck on something and you explain it to us, it’s totally not intuitive!”
***
This interaction made me realize that I throw around the word "intuition" a LOT in my classroom, but I had never paused to think about what I mean when I use it. A quick Google search for "intuition" yields the following definition: the ability to understand something immediately, without the need for conscious reasoning.
This is really not what I meant by intuition in the earlier scenario with my students. Maybe I was trying to say: "Let's try to solve it without using any calculations" or "this question is simpler than it looks" or "once you determine which variables are relevant, the physics isn't hard."
None of these statements really have anything to do with intuition. The fact that the students were asking for help is actually an indicator that their intuitions have failed and they need to rely on logical reasoning from axioms to reach an answer.
Maybe I was hoping to say the right thing or show them the right equation--to bump them in the right direction so that their intuitions could catch on and suddenly illuminate the right answer. I really don't know what I meant.
This encounter got me thinking about the role of intuition in the physics classroom. Surely it has a role to play, but what exactly is that role? What does it mean to have a good physics intuition? What does it look like when that intuition is deployed? How can we help students develop it? Most interesting of all: how is intuition supposed to work in tandem with logical reasoning?
But scientific thinking is really not about intuition. Science is about using logical reasoning to make claims from evidence. Whether it’s using inductive reasoning to build models empirically or using deductive reasoning to generate predictions, sound reasoning is arguably synonymous with sound science.
It's clear that developing a "physics intuition" and being able to logically reason using physical models are both important, but these two things seem to be at odds with each other. Aren't they opposite modes of thinking? What does it look like to strike the right balance between them? Can you strike a balance between them!? Maybe the role of intuition is to provide that initial creative leap—a spark that shows you a fresh perspective—and then reasoning picks up and finishes the work.
All of these ideas were bouncing around in my head for a while, but it wasn't until I read The Enigma of Reason by the psychologists Hugo Mercier and Dan Sperber that they began to coalesce into some valuable (even practical!) insights. In the book, the authors review many psychological studies related to human reasoning with the goal of identifying (1) why we evolved to reason and (2) why we are so darn bad at it.
What's that? You're surprised to hear that humans are bad at reasoning? Um, are you sure that you teach physics to teenagers...?
Seriously though, for decades, pop psychology has told us that we often use emotions (or intuitions) to make decisions—even when we think we're being logical. Advertisement writers have known this for a long time and they make it their business to exploit it. As teachers, we must seek to understand this relationship too. How can we hope to teach students to have good scientific intuitions and to reason like scientists if we don't even understand the interplay between these mental processes?
Hold on to your buns, folks! The psychology of human reasoning reveals some striking truths about the way we think. We’ll take a quick look at a few variations of one particularly interesting study and then tie them back to what they can tell us about physics instruction.
The Four-Card Wason Selection Task
In 1966 Peter Wason developed a test that has become popular for studying the process of human reasoning. The test has been employed in endless variations, but the original version went something like this:
Four cards are placed on a table in front of you. Each card has a letter printed on one side and a number printed on the other. The cards lie on the table with one side facing up, showing E, K, 2, and 7.
Which cards must be turned over to find out whether the following rule is true or false for these four cards?
If there is an E on one side of a card, then there is a 2 on the other side.
Before reading on, try to answer the question.
Seriously, give it a try. Scroll down for the solution.
Solution
You chose E and 7, right? Congratulations! You figured it out, you smartypants! If not, don't be too upset; most people don't make the right selection. In fact, only about 10% of people get it right.
Let's walk through the solution without using any philosophy of logic jargon:
E: We need to turn this card over. If there is not a 2 on the other side, we have proved that the rule is false. If there is a 2 on the other side, we have evidence that the rule is true.
K: This card is irrelevant. Remember, the statement says if there is an E then there is a 2. This means an E must lead to a 2, but a K could also lead to a 2 without violating our rule. You don't need to turn over the K to check the validity of the rule.
2: You actually don't need to turn over the 2; it is also irrelevant. Most people think you need to flip this card to see if there is an E on the other side, but it doesn't matter what letter is there. Remember, the statement is "If E, then 2": an E must lead to a 2, but a 2 doesn't have to lead to an E. Say you turn over the 2 and find an A. You now have an instance where an A leads to a 2. That doesn't demonstrate the rule we're testing ("If E, then 2"), but it doesn't violate it either.
7: You need to turn over this card to check that there is not an E on the other side. Say you turn it over and find an E. You've just proven the rule false: there is an E on one side and not a 2 on the other, a direct violation of the rule. If you turn over the 7 and see any other letter, everything is fine.
Don't feel too bad if you have to walk yourself through this a couple of times. It can be a real brain-bender.
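The walkthrough above is just material implication: the rule is violated only by a card with an E on one side and something other than a 2 on the other, so a card is worth flipping only if its hidden side could produce a violation. Here's a minimal Python sketch of that logic (the function names and the pools of possible hidden values are my own illustrative assumptions, not part of the original task):

```python
# The rule "if E, then 2" is violated only by a card with an E on one
# side and something other than 2 on the other.
def violates(letter, number):
    return letter == "E" and number != 2

LETTERS = ["E", "K", "A"]  # hypothetical pool of possible hidden letters
NUMBERS = [2, 7, 4]        # hypothetical pool of possible hidden numbers

def must_flip(visible):
    """A card must be flipped iff some hidden value could violate the rule."""
    if isinstance(visible, str):  # letter face up; the hidden side is a number
        return any(violates(visible, n) for n in NUMBERS)
    return any(violates(l, visible) for l in LETTERS)  # number face up

cards = ["E", "K", 2, 7]
print([c for c in cards if must_flip(c)])  # -> ['E', 7]
```

Running the check over all four cards selects exactly E and 7: no hidden value behind the K or the 2 can ever break the rule.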
What Is the Role of Reasoning in Human Thought?
On its own, the four-card selection task is a fun little problem, but it doesn't tell us much (except that people are, in general, pretty bad at reasoning). There have, however, been a few variations of the experiment with startling results.
We use intuition, not reasoning, to determine what is relevant to a problem.
A few years after Wason's initial experiment, Jonathan Evans conducted an interesting follow-up study. Evans took the four-card selection task and made one seemingly unremarkable change: he introduced the word "not" into the rule. Everything else about the task remained identical to Wason's original study.
For Evans's study, the rule became:
"If there is an E on one side of a card, then there is not a 2 on the other."
No big deal, right?
Wrong!
With only this small change, Evans found that the majority of participants now answered correctly! Just take a minute to let that sink in. Only about 10% of people get the original question right. Why would adding a simple negation suddenly make the task so much more doable?
Well, it turns out that with or without the "not," most participants make exactly the same selection of cards (E and 2). Adding the word "not" changes the correct answer: E and 2 become the right cards to flip over. You still need to flip the E (now to check that there is not a 2 on the other side), and now you do need to flip the 2, to check that there is not an E on its opposite side (an E behind the 2 would violate the new rule). The 7, meanwhile, becomes irrelevant: it isn't a 2, so no letter behind it can break the rule.
Evans argued that his results were due to the fact that participants didn't actually use logical reasoning, but instead relied heavily on their intuitions regarding what is relevant to the task. This is a sort of heuristic approach: participants see that E and 2 are explicitly mentioned in the rule, intuit that those cards are the most relevant, and so select them.
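Evans's matching heuristic is simple enough to state in code: just select whichever cards are literally mentioned in the rule. A small sketch (the function name `matching_pick` is my own) shows why the heuristic gives identical picks for both versions of the task, and so happens to be right only for the negated rule:

```python
def matching_pick(cards, mentioned):
    # Matching heuristic: choose any card whose face appears in the rule's wording
    return [c for c in cards if c in mentioned]

cards = ["E", "K", 2, 7]
# Both rules ("if E then 2" and "if E then not 2") mention E and 2,
# so the heuristic selects the same cards either way:
print(matching_pick(cards, {"E", 2}))  # -> ['E', 2]
# Logically, the original rule requires flipping E and 7, while the
# negated rule requires E and 2 -- the heuristic is right only by luck.
```

The heuristic's output never changes, but the logically correct answer does, which is exactly the pattern Evans observed in participants.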
This is not logical reasoning--this is pure intuition.
We use reasoning to justify intuitive assumptions (even when our intuitions are nonsense).
Evans and Wason later collaborated on another study in which they asked participants to explain their selections. This follow-up used both the original task and the version with "not" in the rule (there may have been other variations as well).
What they observed was that people did reason through their selections, but the reasoning served to justify their choice, not to make a correct one. When their selection was right (typically in the version with "not"), the logic in their explanation was correct as well. However, even when participants made incorrect selections, they offered logically flawed explanations with equal confidence.
These results imply that we tend to use reasoning to justify decisions we've already made using our intuitions. We struggle to use logical reasoning in our decision-making process and instead apply it after the fact, to convince ourselves and others that we're right. As a species, we're pretty terrible at wielding logical reasoning. You really have to study logic to get good at it—it does not come naturally. Instead, we rely heavily on intuitive assumptions and even go so far as to deploy faulty logic to convince ourselves that we are not using those intuitions.
If people are so terrible at reasoning, why did we evolve to do it at all?
Antonio Damasio, one of the pioneering psychologists to study the role of emotion in decision making, defines emotion as changes in body and brain states in response to stimuli (1994). Physiological changes like shifts in heart rate and endocrine activity occur in the body; these changes are sensed by the brain, interact with memories in the subconscious, and produce an "emotion" in response to the stimulus that triggered them. Over time, these body/brain state changes and the "emotions" they produce become associated with particular situations and their past outcomes. This process of emotional (I've been calling it intuitive) decision making occurs subconsciously and requires no conscious reasoning.
Based on all of this evidence, it seems like we barely use reasoning at all. If this is the case, why did we even evolve to reason? If all of our decisions are made using intuitive assumptions, why do we bother to try to reason through them after we've already made our choice? What is the fitness benefit of this sort of phony, after-the-fact pseudo reasoning that we tend to do? Why do we find the need to invent rationalizations (no matter how flawed) for what our intuitions tell us? Why aren't we just intuition-led zombies?
I suppose we actually could have ended up as intuition zombies if we were solitary creatures, but we are social creatures and we accrue fitness benefits through successful cooperation with others.
Mercier and Sperber make the following statement in their book:
We have rejected the intellectualist view that reason evolved to help individuals draw better inferences, acquire greater knowledge, and make better decisions. We favor an interactionist approach to reason. Reason, we will argue, evolved as a response to problems encountered in social interaction rather than in solitary thinking. Reason fulfills two main functions. One function helps solve a major problem of coordination by producing justifications. The other function helps solve a major problem of communication by producing arguments.
-From The Enigma of Reason (p. 142)
Okay, there's a lot to unpack here.
First off, we did not evolve reasoning capacities to make valid inferences as individuals. (The science tells us that we tend to rely on intuitions for this.) The "interactionist approach" argues that reasoning capabilities evolved to facilitate interactions between individuals.
Mercier and Sperber argue that this happens in two capacities. The first is production of “justifications.” By this, the authors are referring to the way you get other humans to trust you and to acknowledge that you accept the norms of the group. Cooperation comes at a risk and we need methods for identifying “cheaters.” Mercier and Sperber use big game hunting as an example. If a group of hunters work together they can bring down large prey, but this is a risky endeavor. How am I to know that you’re not going to flake out and leave me in a risky situation when the crucial moment comes? You need to justify yourself using reasoning to convince me that your motivations align with our group norms. You use reasoning to convince me to trust you--to justify your actions (especially if they might be perceived as questionable). Similarly, I use reasoning to evaluate your justifications. This is all necessary for us to successfully coordinate our behavior.
The second role of reasoning is to produce "arguments." (And this is where things become more relevant to us as teachers.) An argument is a line of reasoning deployed by an individual to convince others that something is true. Let's stick with the big game hunting example. Say you think there are mammoths in the next valley. You can't hunt them on your own; you need help from the group. This means you need to produce reasons to convince us that it's a good idea to follow you there. If you tell us, "Let's go to the next valley; my intuition tells me a mammoth herd is there," we have no reason to believe you. You need to produce an argument to convince us that you're right. The better your reasons, the more likely we are to follow. But logic also helps us make good choices, so the better we are at evaluating your reasoning, the more likely we are to find mammoths without expending unnecessary energy.
Note that if we were solitary hunters, this explicit reasoning process would be completely unnecessary. We could have an unconscious intuition of where to find prey and head in that direction. Being able to produce reasons for why prey is there would be superfluous, because I wouldn't need to convince anyone to follow me. My intuition is all I need to find the prey on my own. If I'm solitary, the whole computing process could (from a fitness perspective) occur subconsciously.
It’s this “reasoning to produce arguments” notion that is really important for us to recognize as teachers. We have evolved to reason not to find the right answer, but to convince others of what the right answer is. This means that if we really want students to use reasoning, we need to constantly put them in situations where they are required to produce and evaluate arguments with one another.
The Moral of the Story
Reasoning is a Social Endeavor and Whiteboarding is Key
We’ve learned that humans are terrible at reasoning on their own. Even when we think we’re using logical reasoning we’re often just whipping up some half-baked justifications for intuitive assumptions. Asking students to reason through physics problems on their own seems to run counter to everything psychology and evolutionary theory have to teach us about human reasoning.
Most physics teachers embrace the whiteboarding philosophy. Kelly O'Shea has a whole section on her blog about different whiteboarding protocols. Of course, cooperative reasoning doesn't need to happen around whiteboards, but for physics teachers "whiteboarding" is often used synonymously with "collaborative problem solving." As teachers, we accept a long list of benefits of collaboration in the classroom. It teaches communication. It allows struggling students to learn from students with stronger understanding. It gives students a chance to learn through teaching. It allows for differentiation. Collaboration is a collection of skills that need to be developed.
All of these things are great benefits of whiteboarding, but the implications of the studies I've discussed run so much deeper! If we take these results seriously, they mean that we need to completely rethink what reasoning looks like in our classrooms. If reasoning is a social endeavor, we need to strive to create social contexts that maximize student interaction. We must also cultivate these contexts to produce the type of interaction that encourages good reasoning.
To do this, we must be purposeful with our classroom norms. What are ideal group sizes for students to engage in creating and evaluating arguments? What protocols provide adequate structure to allow students equal opportunities to practice creating and evaluating reasoned arguments? How can we scaffold lessons to guide students toward producing reasoned arguments? I don't propose answers to these questions—I don't think the research is really there, and I'm sure they are quite context-dependent. It is clear, however, that we should be doing our best to get students creating and evaluating each other's arguments as much as possible.
Intuition is Inevitable, But We Can Learn to Master It
We've seen that intuitions tend to rule our individual decision-making processes, but this isn't necessarily a bad thing. If our intuitions are wrong, we are easily led astray, but if our intuitions are right, we are pushed towards a successful outcome without much mental exertion. We are subject to our intuitions. We can't escape them, no matter how logical we hope to be. What we can do is "hack" our biology to make these intuitions work for us.
Without training, we have terrible physics intuitions, because our everyday experience provides little opportunity to develop them. However, through repeated use of physical models in different (but similar) scenarios, useful intuitions begin to emerge.
I don't think there are any tricks to this, and it really doesn't come as a surprise. It's just about putting in time. The more physics we do, the more useful our physics intuitions become (assuming each new scenario is sufficiently familiar). As teachers, we can help students develop good physics intuitions by giving them ample practice and ensuring that the familiarity of materials is carefully tuned to building and refining intuitions.
Maybe you're reading this and thinking, "Duh! The more you practice, the better you get!"
You wouldn't be wrong, but you're missing the point! Repetitive, independent practice has an important role to play, but perhaps it’s not what you’ve assumed it to be. Consider the possibility that independent practice may not be helping students develop reasoning skills, but instead serves only to help them build intuitions. Both are important, but if your goal with a certain assignment is to give students a chance to practice reasoning, you may want to rethink what you've asked them to do.
There's another side to this as well. Now that we know our intuition can fail us, perhaps we can learn to recognize when we're about to fall into an intuition trap. Generally, when we're navigating a situation that is similar to a previous situation our intuition is pretty reliable. It's when we find ourselves in new territory that we have problems.
Simple awareness of this fact can go a long way! We should teach students to recognize when a problem is sufficiently different from what they've seen before. If they are able to do this, they will be able to recognize that they are in a situation where they need to be very careful about what their intuition is telling them. I hypothesize that this is a learnable skill and if practiced can potentially yield valuable benefits.
What Am I Going to Do with All of This?
Student Metacognition and Whiteboarding Protocols
Next school year I hope to be explicit with my students about all this information. At the beginning of the year, I'm going to take the time to give them the four-card Wason selection task and teach them about the faults of human reasoning. I want them to understand why my course places such a huge emphasis on whiteboarding. We will also discuss the importance of developing good physics intuitions and that this can only be achieved through repetition. I want students to learn to be metacognitive about the interplay between intuition and reasoning as they attempt to navigate the class.
I've always reserved more difficult tasks for in-class cooperative work (rather than HW), but now I have a reason for why this is the case. More "difficult" tasks tend to rely heavily on logical reasoning and less on intuitions students have developed. This is probably because most of these tasks involve systems, scenarios, or techniques that are sufficiently different from what the students have seen before. In these situations, intuitions become unreliable and the task becomes challenging or even impossible for students to complete on their own. (Remember, we're not good at using logical reasoning on our own.) Armed with this knowledge, I hope to reserve most higher-order reasoning tasks for cooperative lessons.
I will also aim to be much more intentional about whiteboarding protocols, carefully watching which protocols really cause students to craft and evaluate arguments (as opposed to protocols where students just sort of discuss and work alongside each other).
Intuition-Building vs. Intuition-Evaluating
I define these two types of tasks as follows:
Intuition-Building Task: A task designed for the development and refinement of intuitions that will be reliable in the physics classroom.
Intuition-Evaluating Task: A task that is intentionally designed for student intuitions to fail (at least partially). This is to teach students to recognize when their intuitions are unreliable.
As physics teachers, we utilize both of these types of tasks on a regular basis, but there is still value in recognizing the distinction. By doing so, we can ensure that we strike a balance between them and that we're not giving intuition-evaluating tasks before students are prepared to reap their benefits.
A set of intuition-building tasks must be similar enough to each other that intuitions can provide useful insights, but varied enough that they cause intuitions to be refined. If tasks are too similar, they just reinforce the same intuitions, without developing them to become more transferrable. On the other hand, if tasks are too varied, intuitions fail miserably and students get "stuck." I'm not exactly sure what the right level of variation is, but hopefully our teacher intuitions can give us an idea.
Intuition-evaluating tasks must be used prudently. In order for an intuition-evaluating task to have benefits, some intuitions need to be in place to begin with and the task needs to be scaffolded in a manner that is appropriate. Early in the year, students may need a lot of support in learning to recognize that their intuition is failing. They will also need tips on what strategies to use when their intuition doesn't provide a path forward. As students move through the year, they will (hopefully) get better and better at deploying strategies in these situations.
The psychology of reasoning is poorly understood and the field still has so much to tell us about what physics instruction should look like. I've barely scratched the surface and I'm eager to see what new research emerges in the coming decades. I hope you found this post useful or at least found yourself questioning some assumptions you have about the way we think and the way we learn!
References
Damasio, A.R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York, NY: Grosset/Putnam.
Evans, J. (1972). Interpretation and matching bias in a reasoning task. Quarterly Journal of Experimental Psychology, 24(2): 193–199.
Mercier, H. and Sperber, D. (2017). The Enigma of Reason. Cambridge, MA: Harvard University Press.
Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3): 273–281.
Wason, P. C. and Evans, J. (1975). Dual processes in reasoning? Cognition, 3: 141–154.