The restless relationship between science and teaching

“Science is more than a body of knowledge. It is a way of thinking; a way of skeptically interrogating the universe with a fine understanding of human fallibility.
If we are not able to ask skeptical questions, to interrogate those who tell us that something is true, to be skeptical of those in authority, then, we are up for grabs for the next charlatan (political or religious) who comes rambling along.”

Carl Sagan

Science poses difficult questions to those who claim expert authority. It offers a systematic way of asking questions that forces us to reject or refine our beliefs over time. With a resurgence of interest in how psychology can be applied to education, there is bound to be tension: our beliefs about how the world works are important to us, and it is natural to defend them when others say they are incorrect.

Where science can’t help

That’s not to say that science can guide beliefs about education entirely. As Willingham readily concedes:

“… physics (and other basic sciences) strive to describe the world as it is, and so strive to be value-neutral. Education is an applied science; it is in the business of changing the world, making it more like it ought to be. As such, education is inevitably saturated with values.”

The purpose of education – whether it is to ‘gain qualifications and skills for the workplace’, ‘holistically guide individuals towards becoming citizens of a 21st century global community’, ‘radicalise young people to question (and overthrow) the status quo that maintains the inequities of society’, and so on – is not something that science can answer for you. That’s a debate about political values and beyond the brief of this blog post.

That’s not to say it’s an unimportant debate, but at the end of the day we live in a democratic society which imposes values upon educational institutions through politics. We should all argue and debate this (go! … get elected / join a union / lobby your MP), but whilst we have an exam-based system that assesses knowledge and understanding of my subject, it’s my responsibility to help my students achieve all they can. In this, science can help.

Human fallibility

Whilst people are worthy of respect and consideration, and I’ll “defend to the death” their right to argue the values of the education system, people’s ideas about how children learn deserve no special protection. The reason we need scientific input into teaching is that without it the profession will simply continue to be vulnerable to bad ideas (VAK, Brain Gym, the Pyramid of lies), ineffective instructional techniques (minimally-guided instruction), and charismatic gurus and evangelist ‘experts’ (who promise a panacea but are actually selling us snake-oil). We can’t rely on experience alone to keep the profession free of pseudoscientific ideas. As important as our ideas may be to us, we’re genuinely not very good at evaluating them.

How flawed is human reasoning? Let me count the ways!

An important bias in our reasoning is that rather than trying to test our own ideas about the world, we tend to try to confirm them – usually by selecting evidence that supports our ideas and ignoring or dismissing the relevance of evidence that challenges them. We all have a bias towards making type 1 (false positive) errors – seeing connections and causes where none exist – at the expense of type 2 (false negative) errors.

Interestingly, there are some possible evolutionary explanations as to why:

Quick – survival question!
Do you step on this patch of grass or not?
[Image: snake]

The suggestion is that, throughout our evolutionary history, making a T2 error has been rather more dangerous than making a T1 error. If I step on the grass and there is a snake present (false negative – I wrongly thought there wasn’t a snake) then the outcome is potentially much worse than if I step around a patch of grass and there wasn’t a snake present (false positive – I wrongly thought there was a snake).

[Image: Type I and Type II errors]

Should I eat this funny-looking mushroom? Eating a poisonous mushroom thinking it safe is quite dangerous (false negative – I wrongly thought it wasn’t poisonous). Better to avoid the novel food even though it’s actually safe to eat (false positive – I wrongly thought it was poisonous).

It’s also an important part of conditional learning: we perceive causal connections between events that correlate. I found water here last time – should I go back? It’s typically better to revisit on the off chance there was a legitimate cause for the water being there (e.g. a hollow where rain collects) than to search from scratch for a new source of water.

The trade-off (in evolutionary terms) for a system that is primed for pattern recognition and errs on the side of caution is that by reducing false negative errors we increase the number of false positive errors we make. One explanation for the persistence of superstitions, magical thinking and many anomalous experiences is that we are adapted to make false positive errors more readily (seeing a connection between events when there isn’t one) in order to avoid the occasionally fatal false negative ones (ignoring important causal connections between events).
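To make that asymmetry concrete, here is a minimal sketch (not from the original post) using made-up costs and probabilities, showing why a rule biased towards false positives can still be the cheaper strategy when false negatives are catastrophic:

```python
# Illustrative numbers only: the cost of a false negative (stepping on a hidden
# snake) is assumed to be far larger than the cost of a false positive
# (a needless detour around harmless grass).
P_SNAKE = 0.05              # snakes are rare
COST_FALSE_NEGATIVE = 100   # bitten because I wrongly assumed there was no snake
COST_FALSE_POSITIVE = 1     # wasted effort because I wrongly assumed there was one

def expected_cost(p_step: float) -> float:
    """Expected cost of a rule that steps onto the grass with probability p_step."""
    cost_if_step = P_SNAKE * COST_FALSE_NEGATIVE          # risk the rare, severe error
    cost_if_detour = (1 - P_SNAKE) * COST_FALSE_POSITIVE  # pay the small, common error
    return p_step * cost_if_step + (1 - p_step) * cost_if_detour

for p_step in (0.0, 0.5, 1.0):
    print(f"step with probability {p_step:.1f}: expected cost {expected_cost(p_step):.2f}")
# Always detouring (p_step = 0) minimises expected cost under these assumptions,
# even though it maximises the number of false positive 'errors' made.
```

Change the assumed costs and the ‘rational’ bias changes with them; the point is simply that asymmetric costs push a decision-maker towards asymmetric errors.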

The problem is that there are lots of correlations in the world – and many of them are not causal.

Here’s one of my favourite examples:
[Image: the ‘pirates’ correlation]
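To see how easily a strong correlation can appear between unrelated things, here is a small sketch (with invented numbers, not the actual pirate data) correlating two independent series that merely share a trend over time:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Two invented series that merely share a trend over time:
# one drifts steadily down, the other steadily up, each with independent noise.
pirates = 50_000 - 450 * (years - 1900) + rng.normal(0, 2_000, years.size)
temperature = 13.5 + 0.008 * (years - 1900) + rng.normal(0, 0.15, years.size)

r = np.corrcoef(pirates, temperature)[0, 1]
print(f"correlation between the two unrelated series: r = {r:.2f}")
# A strong negative correlation appears, yet neither series drives the other:
# both simply change with time. Correlation alone cannot establish causation.
```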

This readiness to perceive causal connections between unrelated events is compounded by the fact that we also prefer to hang around with people who believe the same things as we do – something psychologists call the ‘false consensus’ effect.

We typically prefer to interact with people who validate (rather than challenge) our beliefs, leading us to believe that our beliefs are more common or more valid than they actually are. As a consequence, individual and even collective untested beliefs about teaching are invariably flawed. What we need to do is regularly put our precious ideas about effective teaching into the crucible of science.

Skeptical Questions

No amount of experimentation can ever prove me right; a single experiment can prove me wrong.

Albert Einstein*

An important principle within science is falsification. The philosopher Karl Popper suggested that every genuine test of a theory is an attempt not to confirm it (which is easy if you look hard enough) but to falsify it; to refute it.

In this way, the process of science can be seen as an attempt to collectively correct for the essential biases in the way we form beliefs about the way the world works. Experiments are not conducted to simply confirm theories, but to test them to see where they fall down. In essence the scientific process isn’t one of discovering the ‘truth’ – but a way of weeding out the T1 errors that tend to accumulate within a culture over time.

That’s not to say that qualitative and quantitative research methods don’t both have a role to play in developing better ideas about how children learn. Qualitative research is a great method for delving deep and generating hypotheses about learning; to actually test those hypotheses, quantitative research plays a powerful role. Controlled conditions and processes like ‘blinding’ help reduce the biases inherent in observation. But always remember that the purpose of RCTs is not to confirm theories but to falsify them. That’s what laboratory experiments (RCTs and the like) are really about: they tell us which ideas fail.
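As a rough illustration of that testing logic (a sketch with invented scores, not a real trial), here is what a minimal randomised comparison might look like when the intervention genuinely adds nothing:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented data: test scores for pupils randomly assigned to two groups.
# By construction the 'intervention' has no real effect (both groups ~ N(60, 10)).
control = rng.normal(loc=60, scale=10, size=100)
intervention = rng.normal(loc=60, scale=10, size=100)

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The test asks whether the data are surprising under the hypothesis of
# 'no difference'. With no real effect, p is usually large and the claimed
# benefit fails to survive the test -- the trial weeds the idea out rather
# than confirming it.
```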

Popper thought scientists should be dispassionate about their theories, but whilst we may endeavour to be impartial, in practice it’s really, really hard to give up on our precious ideas (even when they fail). Individually, we are always likely to be wedded to some ideas more than others. Indeed, if we didn’t champion our ideas then no one would ever hear about them! Thus, in practice, science evolves through a collaborative ‘battle of ideas’; our theories have to survive the scepticism of our peers.

In this battle, meta-analysis can be a useful tool: a quick way of spotting what has survived the crucible (so far) and what has failed. However, the evidence from meta-analysis is never cut-and-dried, and I’ve argued that education research would do well to adopt the consistent language of uncertainty used in reporting climate change when it summarises research. Otherwise, there is a danger of reducing teaching to mechanical tick-lists of ‘best practice’, or of regulators seeking to restrict teachers to “approved practices”. In short:

Give me suggestions – great! Recommendations – ok.
Required features of every OFSTED lesson – no thanks!**

Where science can help teachers

A cherished mentor once told me, ‘there’s nothing as practical as a good theory’. From robust theories, applications leap out. Our task as educators is to take the best ideas that survive the crucible of science and then exploit them in imaginative ways; adapting them and exapting the capabilities of our students to their best advantage.

Psychology can inform teaching – through identifying general principles that we can try to apply. For example, there were a couple of great examples tweeted by Joe Kirby and David Didau recently:

Applying Science of Learning in Education
Principles of Instruction

Teaching needs to steal the best ideas and apply them in novel and ingenious ways. The best teaching requires subject knowledge, experience, ingenuity, creativity, empathy, humour… and an understanding of the science of learning.

Teaching can also inform psychology – by asking challenging questions, pushing at the boundaries of theory and providing important insights that could form the basis of a scientific hypothesis. The tension between theory and practice is not only inevitable, but necessary.

The restless relationship between scientists seeking to understand how people learn and teachers applying their understanding in real classrooms is one that is important and adaptive. In the meantime, we teachers shouldn’t jump to the conclusion that because a meta-analysis or an RCT doesn’t tell us everything that science has nothing to offer us. Humanity cracked rocket science years ago … cracking the science of something really difficult like learning will take a bit longer.

———————————————

*Calaprice, Alice (2005). The New Quotable Einstein. USA: Princeton University Press and Hebrew University of Jerusalem. p. 291. ISBN 0-691-12074-9. Calaprice denotes this not as an exact quotation, but as a paraphrase of a translation of A. Einstein’s “Induction and Deduction”. Collected Papers of Albert Einstein Vol. 7, Document 28. The Berlin Years: Writings, 1918–1921. A. Einstein; M. Janssen, R. Schulmann, et al., eds.
http://en.wikipedia.org/wiki/Falsifiability
** that’s the polite version


