With a renewed interest in cognitive science within teaching, are we at risk of “conflating hypothetical models with proven neuroscience since accepted facts can quickly become ‘neuro-myths’ when new research contradicts popular theories” as Ellie Mulcahy warns in “Forgetting everything we know about memory”, her recent blog post for LKMco?
As evidence for this concern, she relates a new piece of neuroscientific research examining the formation of memory engrams in genetically-modified mice. There are two things particularly interesting about this study: first, the technique the researchers developed to activate groups of neurons using light signals; second, that their examination of engram formation in the hippocampus and pre-frontal cortex appears to challenge a fairly long-standing neurological theory called the ‘Multiple Trace model’. In essence, neuroscientists thought that for engrams to be formed in the pre-frontal cortex, multiple retrievals of that engram were required from the hippocampus. This new study (though yet to be replicated) found that for an aversive stimulus (an electric shock), engrams formed in the pre-frontal cortex at the same time as in the hippocampus.
However, after one day the engrams were not naturally retrievable (the engram didn’t appear to be activated when the mouse was placed in the chamber where it had received an electric shock), but they could be activated by using light signals to directly stimulate the neurons in the pre-frontal cortex. After 14 days, the pattern was reversed: natural recall occurred and the engram in the pre-frontal cortex was active, while the engram in the hippocampus remained dormant (though it could still be artificially activated using light signals).
Despite the fact that it is “not yet clear what the implications for teaching, learning and pupils’ memory are”, Mulcahy argues that this new research throws “everything we thought we knew into question”. Of cognitive theories of memory, she says that this new research “serves to demonstrate that unproven models should not be taken at face value” and that we “risk charging headlong into the territory of new neuromyths and VAK revisited”.
Five claims and counterarguments
Mulcahy makes a series of very strong and, in my opinion, unwarranted and in some cases unscientific claims in her post:
- Neuroscience represents ‘proven’ facts.
- This study contradicts popular cognitive theories of memory.
- We should be ‘cautious’ of ‘unproven’ cognitive models of memory.
- This study has implications for teaching and learning (which are not yet clear).
- The findings of this study illustrate the risk of charging headlong into the territory of new neuromyths when applying cognitive psychology to the classroom.
These arguments, I contend, are erroneous: I’ll try to explain why in this post.
Does neuroscience represent ‘proven’ facts?
Well, no – and without wishing to be unkind, it’s not the sort of statement I would expect from someone who has made any great study of psychology. Even if you weren’t aware of the many methodological and theoretical issues within neuroscience generally, its contingent (scientific) status is self-evident from Mulcahy’s post. Neuroscientists used to believe that engrams were formed first in the hippocampus and only later in the pre-frontal cortex. This was based on neurological studies – but it turns out that this theory may not be accurate. In essence, a neurological theory about memory formation has been challenged by new evidence: neuroscience is as contingent and uncertain as the rest of science – it does not represent some inviolable set of facts about the world (any more or less than cognitive science does). The fact that this study represents a new finding and hasn’t yet been replicated might also lead us to wonder why Mulcahy considers it a ‘proven’ fact rather than merely an interesting new piece of evidence.
Does this study contradict cognitive theories of memory?
It’s difficult to say from Mulcahy’s post, as the only cognitive finding she relates it to is retrieval practice (though it might be more accurate to call retrieval practice an observed behavioural finding rather than a theory of learning). Mulcahy explicitly relates the neurological study to retrieval practice, implying that its status depends upon the Multiple Trace model (which the study contests). However, this simply isn’t the case:
The first behavioural evidence relating to retrieval practice is often taken to be a study by Arthur Gates in 1917, so the science is more-or-less 100 years old this year. The effect has been observed and replicated many, many times over the intervening years – so it represents a highly reliable observation about learning. Is it ‘proven’ though? Well, this is a philosophical question about science – and my answer would be ‘no’ – because like all scientific ideas it is contingent upon evidence. However, is it the sort of finding you can bet the house on? Well, yes I think it is.
That’s not to say that cognitive scientists won’t refine or improve our understanding of retrieval practice. The key application within this branch of cognitive science isn’t simply that opportunities for practice produce better results, but that retrieval practice produces better results than other forms of studying. Judge the evidence to support this view for yourself – here are a few examples (from decades of research in this field!):
Does the neurological study reported in Mulcahy’s post contradict this evidence? Well, no – the study casts new light on the brain mechanisms involved in forming memories, but this is essentially irrelevant to the status of retrieval practice as a secure behavioural finding. Indeed, a new neurological model of learning which directly contradicted the secure observation of retrieval practice would be a neurological model in considerable difficulty!
Is cognitive science based on ‘unproven theories’?
Unfortunately, Mulcahy makes this claim but then doesn’t specify which theories are ‘unproven’ (I’m going to treat this as meaning “untested or lacking reliable evidence” from here on in). I wouldn’t call retrieval practice (the only cognitive finding mentioned explicitly) a cognitive theory of memory; the Working Memory Model, the New Theory of Disuse, or Cognitive Load Theory would have a better claim to that status in my opinion. The model of memory that has perhaps been written about most recently in relation to teaching is the Working Memory Model; in the context of the post, then, is this an example of the sort of ‘unproven’ theory we should be concerned about?
The Working Memory Model was essentially constructed based on behavioural evidence from laboratory studies and ‘real-life’ settings rather than neurological studies. Much is sometimes made of the fact that Baddeley reviewed a great number of brain injury cases when formulating the theory. However, the evidence-base for the theory isn’t restricted to the behavioural deficits apparent in these cases, but has also been rigorously tested in an enormous number of experimental studies. There’s a great clip of Alan Baddeley talking about the development of Working Memory if you’re interested.
The model has undergone many minor revisions, but the basic architecture (e.g. as described in Willingham’s ‘Why don’t students like school?’) has remained remarkably robust in the face of experimental and – yes – neurological findings. There’s a good review of some of the neuroscientific evidence relating to working memory – and I’d argue that the Working Memory Model isn’t some untested hypothesis, but a robust scientific theory.
That isn’t to say that there aren’t further questions and refinements of the model to come: How is the focus of attention organised? How can we better account for capacity limitations in visual STM? What is the role of other neurotransmitters and hormones, in addition to dopamine, in working memory function?
With the technology involved in imaging getting cheaper and new techniques being developed, like the one described in Mulcahy’s post, neuroscience has the potential to make a phenomenal contribution to cognitive science by helping to explicitly test the process models describing behavioural observations. However, what neuroscience won’t be doing is re-writing the behavioural observations upon which cognitive theories are based. For example, we know there are capacity limitations to visual STM from behavioural-level observations; the question is how we can explain these. A new neurological study is extremely unlikely to overturn the evidence that we have limited visual STM in the first place.
All scientific theories remain contingent, but sometimes the evidence-base supporting them is strong enough that we can have good confidence that some new observation won’t happen along to upset them anytime soon. The heliocentric model of the solar system, evolution through natural selection, etc. still hold theoretical status, but flat Earthers and creationists are not exercising reasonable or scientific scepticism when they deny them. Likewise, working memory represents a theory from cognitive science in which we can have strong confidence.
Does this study have implications for teaching and learning (which are not yet clear)?
No. In fact, with all the advances in neuroimaging and our understanding of the brain, it is highly unlikely that we’ll see many applications for classroom practice arising from neuroscience. The reasons for this are explained really well in these three articles:
Willingham, D. Neuroscience applied to education: Mostly unimpressive.
Bishop, D. What is Educational Neuroscience?
Bishop, a neuroscientist specialising in child development, argues that whilst there are a few instances of neuroscience being applied through techniques like neurofeedback, neuropharmacology and brain stimulation, in the main we should focus on cognitive and behavioural evidence to understand teaching:
“If our goal is to develop better educational interventions, then we should be directing research funds into well-designed trials of cognitive and behavioural studies of learning, rather than fixating on neuroscience”
Bowers argues that neuroscience is often conflated with cognitive psychology and takes credit for its applications. There’s perhaps a reasonable degree of evidence to support this accusation – as even a recent EEF news report mislabelled research into sleep and spaced retrieval practice as neuroscience. In terms of whether neuroscience will give rise to useful applications for teachers:
“More importantly, regarding the assessment of instruction, the only relevant issue is whether the child learns, as reflected in behavior.”
Willingham, now Professor of Psychology at the University of Virginia, was formerly involved in cognitive neuroscience research looking at the brain mechanisms involved in learning. He makes a couple of useful points to bear in mind when assessing the connection between neuroscience and classroom practice.
Firstly, neuroscience isn’t an appropriate level of description for understanding learning in the classroom. Neurological explanations of teaching or learning would be examples of ‘greedy reductionism’. The distance between the actions of a group of neurons in the brain and those of a group of children in a classroom means that trying to pin classroom behaviours to neurological foundations skips whole levels of theory and description in between. Willingham describes this as the ‘vertical’ problem of educational neuroscience.
Secondly, and in my opinion this represents a deeper problem for Mulcahy’s post, Willingham describes the ‘horizontal’ problem of educational neuroscience:
“Consider that in schools, the outcomes we care about are behavioral; reading, analyzing, calculating, remembering. These are the ways we know the child is getting something from schooling. At the end of the day, we don’t really care what her hippocampus is doing, so long as these behavioral landmarks are in place.”
Cognitive theories are formulated and applied based on these behavioural landmarks. For neuroscience to have useful implications for classroom practice, we’d need to translate findings from the behavioural side to the neural side, and then back to behaviour again. Mulcahy appears to conflate cognitive models of memory, based on the behaviour of people, with neurological models, based on the behaviour of neurons. The former is a promising source of implications for practice; the latter very rarely is.
Do the findings of this study illustrate the risk of charging headlong into new neuro-myths when applying cognitive psychology to the classroom?
No.
Unlike neuro-myths like ‘right and left brained learners’ or ‘only using 10% of our brains’ – cognitive models of memory have been tested against a lot of evidence produced from decades of research. Cognitive theories typically applied in teaching, most notably working memory, aren’t ‘flash-in-the-pan’ untested ideas, but well-evidenced theories which (in their applicable form) are extremely unlikely to be overturned any time soon: We can have reasonable confidence in the status of these theories when trying to think how we might make use of them in our teaching.
Now, of course, it’s possible that some future finding will cause significant revision to one or more cognitive theories of memory. I’m not convinced this is true of the study reported by Mulcahy. However, it’s a general truism of any branch of science (indeed, something that separates scientific ideas from neuro-myths). A degree of uncertainty is situation normal in science.
However, it’s also possible that neurological evidence will provide additional support for behavioural models (further increasing the confidence we can have in them), or new understanding of how cognitive models are instantiated in the brain. But it’s extremely unlikely that neurological evidence will fundamentally change the sort of classroom applications arising from behavioural models of learning. I’m hugely in favour of teachers developing their professional scepticism, and in this age of ‘alternative facts’ there’s more reason than ever to apply it. However, rational scepticism isn’t about making hyperbolic claims or misrepresenting the scientific status of theories.
No one – perhaps least of all the writers listed at the start of Mulcahy’s blog – would suggest that teachers look to psychology credulously or unquestioningly. Cognitive psychological models of memory aren’t a ‘magic bullet’ or the answer to all (or indeed that many) problems in the classroom. They have some useful applications in teaching – but the potential benefits are not instantaneous or automatic. Teachers interested in helping their students develop more effective independent learning, or looking to implement low-stakes quizzes in their lessons, shouldn’t forget everything we know about memory. Instead, they should apply the ideas thoughtfully and use informed professional judgement to check whether they are having the intended effects in practice.