It’s interesting to see how cognitive science has recently captured the attention of teachers. The field has some useful models and findings when it comes to understanding memory and motivational processes, some of which are quite applicable to teaching. It’s worth remembering, always, that cognitive science is only ‘what we know so far’; as Piaget said, “Scientific knowledge is in perpetual evolution; it finds itself changed from one day to the next.”
However, one of the more robust theories within cognitive science has proven to be the Working Memory Model (WMM), originally developed in 1974 by Alan Baddeley and Graham Hitch. As you’d expect for a scientific theory, it has been refined a number of times since the original model, but the basic elements of the model have survived the process of empirical attempts to falsify them very well.
Up to then, the Multi-Store Model (MSM) had been the dominant theory of memory. The MSM presented memory processes as a fairly linear progression from a sensory store (which momentarily holds perceptual information – quickly lost unless attended to), to a Short Term Memory (STM) Store (which held information through rehearsal – e.g. silently repeating it to yourself) to a Long Term Memory (LTM) store (information held indefinitely after sufficient or elaborated rehearsal).
Baddeley and Hitch had reviewed a large number of brain injury cases and identified a number of findings that simply couldn’t be explained by the MSM. One example is the case of KF, a patient who was able to process and recall visual information but unable to retain some types of verbally presented information. From these various case studies*, Baddeley and Hitch concluded that the STM component was not a unitary store, but a more complex, multi-component process. They called their re-working of the model Working Memory.
Misconception: Teachers sometimes talk about STM and WM as separate things, whereas WM is a replacement theory for the idea of a single short-term store.
Overall, the WMM proposes that what we call STM is actually the active part of LTM – the two are not separate stores. It’s often described as the ‘active’ part of memory – the part that is engaged when doing mental arithmetic or reading a sentence.
There’s a more detailed summary of the WMM here, but below is a whistle-stop tour of the theory, some interesting stops along the way and some misconceptions to avoid on Twitter.
The Central Executive (CE) acts a bit like a supervisor. As information comes into WM, the CE focuses attention on the information – selectively attending to one stimulus over another where there are competing stimuli (e.g. listening to the teacher’s voice rather than watching the people playing football outside the window). It has three ‘fluid’ or ‘dynamic’ sub-components that it can call on to help process the information coming into WM.
The Phonological Loop (PL) deals with speech and sometimes other kinds of auditory information. It acts as a temporary store for such information, holding it briefly (about 2 seconds’ worth) before it decays or is over-written. This store is itself divided into sub-components, the phonological store and the articulatory process (the linked pdf above gives lots more info).
Misconception: Teachers sometimes invoke the WMM to suggest that rehearsal doesn’t work. The Phonological Loop is the component that rehearses verbal information to hold it in memory. The PL (specifically the articulatory mechanism) also converts some visually presented material into an auditory form (e.g. graphemes to phonemes), and this is helpful because visual material is typically more difficult to retain through rehearsal than verbal information.
The Visuo-Spatial Sketchpad (VSS) briefly holds visual information and the spatial relationships between objects. It may also hold on to some kinaesthetic information. Again, this is simply a store and the information quickly decays unless it is rehearsed.
Misconception: Teachers sometimes suggest that the different sub-components in Working Memory support the idea of VAK learning preferences. Quite wrong. The mixed modalities within these sub-components are precisely the reason why cognitive scientists dismiss claims about VAK learning preferences. Quite simply, we use all these modalities to process information, and there’s no evidence to support a) that people have an advantage in one specific modality over any other or b) that teaching to that specific modality improves learning.
Mnemonics: One interesting suggestion is Dual Coding Theory: attempting to process the same information both visually and verbally leads to a ‘doubling-up’ of the coding and therefore increases the chance of retrieving the memory in future. It’s one of the explanations for how some mnemonic strategies work.
Cognitive Load Theory: On the other hand, Cognitive Load Theory suggests that sometimes this additional processing demand isn’t helpful. At the end of the day, to successfully learn new material we typically need to encode it semantically (what it means) and so presenting lots of unrelated images and text to process at the same time may place an extraneous (unnecessary) demand on the CE (e.g. overwhelming selective attention).
The Episodic Buffer (EB) was added to the model by Baddeley in 2000 after evidence suggested that there was a need for an additional component to deal with the information from the different stores and with what we already know. Its principal job is to integrate new information with information already stored in LTM. Cognitive science is still learning more and developing ideas about how this component works in WM.
Misconception: Some teachers seem to think that the claim that prior knowledge helps you learn is a neoliberal plot to destroy education. It’s not. The EB is one of the reasons why cognitive scientists point to the importance of prior knowledge when it comes to learning new material. By activating the relevant information in LTM, it allows new material to be encoded in a meaningful way and successfully stored. The role of LTM in helping Working Memory is well established and very easy to demonstrate (e.g. compare the retention of a random sequence of letters – DPL OAM IGGB – with the same letters arranged into meaningful chunks – DOG PIG LAMB).
So, there’s a quick run through the Working Memory Model. It’s a robust theory, with a wealth of support from experimental studies (some of them quite elegant) and cognitive neuroscience. There are refinements to be made – no scientific model is ever really finished – but the basics are reliable and established enough that they’re unlikely to change radically any time soon.
*Indeed, this is the principal use of case studies in cognitive science. They are terrible sources of support for a theory, but pretty good ways of falsifying a psychological theory. As Popper noted, a theory that ‘all swans are white’ can be falsified by the observation of even one black swan.