Has the marshmallow melted? Interventions involving executive functioning may have little effect.

What are executive functions?

Executive functioning is, in some ways, a pesky construct to define, as it’s implicated in so many different abilities. It’s a hypothesised capacity underlying things like problem solving, reasoning, planning and organisation, inhibiting action or speech to fit context-appropriate norms, and managing attentional control (amongst others).

These functions develop rapidly in early childhood, then more slowly through adolescence and early adulthood – reaching a peak in our mid-twenties before gradually beginning to decline.

[Image: when do executive function skills develop? Source: http://developingchild.harvard.edu/key_concepts/executive_function/]

The development of executive functioning is frequently related to (though not exclusively limited to) the development of the prefrontal cortex of the brain.

[Images: the prefrontal cortex (left hemisphere), lateral and medial views. Source: http://en.wikipedia.org/wiki/Prefrontal_cortex#Additional_images]

It’s one of the areas of the brain that is much larger (relative to the rest of the brain) in human beings than in other primates and other hominid species. The main reason appears to be greater myelination of neurones (i.e. a greater volume of white matter), which provides more connectivity between the prefrontal cortex and the other areas of the brain in humans than in other species.

The prefrontal cortex plays a significant role in what psychologists call ‘working memory’, and the idea of ‘executive functioning’ is related to the ‘central executive’ component of Baddeley and Hitch’s model of working memory. Executive functioning is associated with a number of SEND conditions which teachers will have encountered or heard about, for example ADHD (attention deficit hyperactivity disorder). There’s some evidence to suggest that deficits in working memory, potentially related to poor executive functioning, underlie some of the difficulties children may face in school. For example, Gathercole and Alloway (2007) report:

“Approximately 70% of children with learning difficulties in reading obtain very low scores on tests of working memory that are rare in children with no special educational needs.”

There may be considerable variance in the working memory function of children in a particular classroom. For example, Gathercole and Alloway (2007) suggest that:

“Differences in working memory capacity between different children of the same age can be very large indeed. For example, in a typical class of 30 children aged 7 to 8 years, we would expect at least three of them to have the working memory capacities of the average 4-year-old child and three others to have the capacities of the average 11-year-old child, which is quite close to adult levels.”
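
As a quick back-of-envelope illustration of what those figures imply (my own sketch, not the authors’): if we assume working memory scores in an age cohort are roughly normally distributed, ‘at least three’ children out of 30 corresponds to roughly the bottom 10% of the distribution, i.e. scores more than about 1.28 standard deviations below the mean:

```python
from statistics import NormalDist

# Back-of-envelope check on the Gathercole and Alloway figures. The
# normality assumption is mine, purely for illustration; it is not a
# claim from their paper.
class_size = 30
tail_count = 3                             # children at each extreme, per the quote
tail_proportion = tail_count / class_size  # 0.10 in each tail

# z-score cutting off the bottom 10% of a standard normal distribution
z_cutoff = NormalDist().inv_cdf(tail_proportion)
print(f"Bottom {tail_proportion:.0%} cutoff: z = {z_cutoff:.2f}")  # z = -1.28

# Sanity check: expected number of children below that cutoff in one class
expected = class_size * NormalDist().cdf(z_cutoff)
print(f"Expected children below cutoff: {expected:.1f}")  # 3.0
```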

Perhaps the most famous example of a test of executive functioning is Walter Mischel’s ‘Marshmallow Test’. In these studies a child was offered a choice: a small immediate reward (e.g. a marshmallow), or double the reward if they could wait for 15 minutes. What Mischel found in the follow-up studies was that the children who deferred gratification (i.e. waited for the bigger reward) rather than opting for immediate gratification (i.e. couldn’t wait) showed different characteristics even years later.

Children who deferred gratification were rated as better able to handle stress, engage in planning and exhibit self-control as adolescents 10 years later, and went on to obtain higher SAT scores. These differences were still apparent when participants were in their 40s.

Can we train executive functioning?

Given the importance of executive function in emotional regulation and higher cognitive abilities like memory and attention, there’s been considerable interest in whether such abilities can be trained in children. Certainly there have been attempts to train children’s working memory in the hope that it might help them achieve more in school, but these interventions are not straightforward.

For example, Melby-Lervåg and Hulme (2013) examined the claims of training programmes designed to boost working memory function. They report that some of these working memory training packages made fairly confident claims regarding their effectiveness; for example, that they could help children with ADHD, dyspraxia and ASD, that they could boost IQ and that they could improve school grades. The programmes themselves appeared to involve numerous computerised memory trials:

“However, these programs do not appear to rest on any detailed task analysis or theoretical account of the mechanisms by which such adaptive training regimes would be expected to improve working memory capacity. Rather, these programs seem to be based on what might be seen as a fairly naïve “physical–energetic” model such that repeatedly “loading” a limited cognitive resource will lead to it increasing in capacity, perhaps somewhat analogously to strengthening a muscle by repeated use.” *

The outcomes of the meta-analysis were not so supportive of these impressive claims. Melby-Lervåg and Hulme suggest that although there appeared to be short-term improvements on both verbal and nonverbal working memory tasks, these gains did not last very long, nor did they generalise to things like the ability to do arithmetic or decode words. For attentional control, the effects were small to moderate immediately after training, but had reduced to nothing at follow-up.

* Incidentally, this is one reason why I personally dislike the ‘growth mindset’ analogy of the brain being ‘like a muscle’. In many, many ways, it simply isn’t!

OK – so ‘brain training’ programmes don’t appear to have lasting or generalisable effects on working memory, but what about other interventions specifically aimed at improving executive functioning? There’s certainly been a recent surge of interest in the idea of developing executive functioning in our pupils – linked with the whole notion of ‘character education’.

However, as the authors of a recent review, Jacob and Parkinson (2015), point out:

“Yet, despite this enthusiasm, there is surprisingly little rigorous empirical research that explores the nature of the association between executive function and achievement and almost no research that critically examines whether the association is causal. From the existing research it is not clear whether improving executive functioning skills among students would cause their achievement to rise as a result.”

The authors of the review suggest that interventions to increase executive functioning probably have little value unless they also help children achieve greater success within school. Thus they focused their meta-analysis on whether interventions designed to improve executive functioning actually cause improvements in achievement.

Interestingly, they found no significant difference between attention/inhibition measures and working memory measures in their correlation with student achievement. Both appeared to correlate with achievement at around the 0.30 level. However, this relationship did not appear to be a directly causal one:

“… there is substantial evidence that academic achievement and measures of executive function are correlated—both at a single point in time and as predictors of future achievement, and for a variety of different constructs and age groups. Despite this, there is surprisingly little evidence that a causal relationship exists between the two. High levels of executive function may simply be a proxy for other unobserved characteristics of the child.”
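
To put that figure in perspective (a bit of my own arithmetic, not the review’s): a correlation of around 0.30 means that executive function measures share only about 9% of their variance with achievement, leaving the great majority unexplained:

```python
# The ~0.30 correlation reported by Jacob and Parkinson, squared to give
# the proportion of variance in achievement shared with EF measures.
# This is my arithmetic, not a figure quoted in the review itself.
r = 0.30
shared_variance = r ** 2  # coefficient of determination
print(f"Variance in achievement shared with EF measures: {shared_variance:.0%}")  # 9%
```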

So what might be the factor underlying both executive functioning and school achievement? The authors explore a range of possible factors:

“Once child background characteristics and IQ are accounted for, the association between executive function and achievement drops by more than two thirds in most of these studies and in most cases the conditional associations are close to zero.”

This suggests that school-based interventions focused on improving executive functioning will have a disappointing impact on achievement:

“The most effective school-based interventions designed to influence executive function have only had an impact on measures of executive function equal to around half a standard deviation (e.g., Raver et al., 2011). This means that under the best case scenario … interventions designed to improve executive function would only have the potential to increase future achievement by less than a tenth of a standard deviation (half of 0.15).”
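
The arithmetic behind that ‘best case scenario’ is worth spelling out (my restatement, taking 0.15 as the review’s estimate of the EF-achievement association once background and IQ are controlled for):

```python
# Jacob and Parkinson's 'best case scenario' arithmetic, restated:
# the largest intervention effect on EF measures, multiplied by the
# conditional EF-achievement association once background and IQ are
# controlled for.
ef_effect_sd = 0.5        # best-case impact on EF measures, in SDs (Raver et al., 2011)
conditional_assoc = 0.15  # EF-achievement association after controls
best_case_gain = ef_effect_sd * conditional_assoc
print(f"Best-case achievement gain: {best_case_gain} SD")  # 0.075 SD, i.e. less than 0.1 SD
```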

As well as the regression analyses, they also looked at randomised controlled trials which had attempted to assess the impact of executive function interventions. They found only five studies which specifically looked at the effects of training on achievement and had a randomised design. They describe a number of programmes which have been evaluated, for example ‘Tools of the Mind’, ‘Head Start REDI’ and the ‘Chicago School Readiness Project’.

These programmes varied in content, but tended to be taught as stand-alone, ‘skills-based’ approaches. For example, the REDI programme was delivered to pre-school children in weekly lessons and extension activities in which children were taught language skills, social skills, emotional understanding, self-regulation and aggression control by teachers trained in the ‘Promoting Alternative THinking Strategies’ (PATHS) curriculum. The review found that none of these approaches appeared to directly improve student outcomes:

“The few random assignment studies which rigorously evaluate interventions designed to impact executive function provide some evidence that executive function can be influenced by intervention (most of the studies we reviewed showed some positive impacts on measures of executive function) but provide no compelling evidence that impacts on executive function lead to increases in academic achievement.”

One of the problems with these training programmes was that they targeted multiple factors at the same time. For instance, the REDI intervention targeted both executive functioning and school achievement. Jacob and Parkinson make the point that:

“… if the intervention improved children’s ability to take tests, then children would perform better on both measures of executive function and on measures of achievement. If the improved ability to take tests was not accounted for in the analyses, the improvement in executive function would be correlated with the improvement in achievement.”

The problems with applying psychological research in schools

Children vary in many ways – so it should come as no great surprise that we find psychological differences between kids who do well at school and those who struggle. However, the fact that children’s school attainment correlates with cognitive ability ‘X’ or attribution ‘Y’ doesn’t tell us whether trying to train ability ‘X’ or change attribution ‘Y’ will actually help.

That’s one of the problems when trying to apply psychological findings to education: simply identifying cognitive or affective differences between children isn’t actually all that useful. This kind of purely psychological research is a different kettle of fish from the applied psychology of designing effective ‘interventions’ to raise achievement. At the moment there’s a lot of hype around cognitive and attributional variables which correlate with school outcomes.

As usual, the cart ends up before the horse – and interventions are implemented in schools before there’s good evidence about whether they do any good. It’s important to remember that interventions based on identified psychological differences may not necessarily lead to benefits for children. For instance, an intervention may be costly and irrelevant if there’s a third factor which causes both the differences detected and the improved outcomes.

Of course, when schools have invested a great deal of time, effort and training in such an intervention scheme, it becomes easy for them to convince themselves that they are seeing a genuine difference. But we can’t rely on anecdotal evidence or professional experience alone here! The evidence to date suggests that teachers should be highly sceptical of training or intervention programmes which claim success in raising achievement by targeting executive functioning.


8 Responses to Has the marshmallow melted? Interventions involving executive functioning may have little effect.

  1. dodiscimus says:

    This is a slightly tired and not very carefully-considered comment but I’m wondering whether the cognitive acceleration programmes – CASE, CAME, Let’s Think! – could be considered as trying to develop executive functioning. Their rationale is based on trying to move children on between Piagetian stages, which is to do with executive functioning, I think. I’m not familiar enough with the evaluations to be certain but I understand that there have been studies showing good long-term effects. I don’t know how independent those studies are though, and I haven’t looked at the methodology or analysis so have no opinion on the quality of the research. Any thoughts?


    • Good question. Takes me back! CASE was something I was encouraged to use when I started my career as a teacher (initially I trained as a science teacher). Certainly the version I saw tried to break up learning into separate ‘thinking processes’ to be exercised, so I suspect it might fall foul of trying to load a limited cognitive resource like a muscle.

      Another problem with it, in my opinion, is the same as with many other ‘skills’-based approaches – I don’t think the ‘thinking skills’ taught transfer very well, and its effectiveness would likely be limited for the same reasons that other ‘minimally-guided’ approaches tend to fail. Hattie rated ‘Piagetian programmes’ (which I assume includes things like CASE) quite highly – but I’ve not seen an RCT evaluating them. Most of the studies I’ve seen have been small-scale and lacking robust controls.


      • dodiscimus says:

        That’s interesting. Having slept, I can manage a more informed response (possibly).
        I also assumed Hattie’s ‘Piagetian programmes’ referred to things like CASE but it doesn’t. My post here http://wp.me/p44DHA-8V investigates a bit, and I do now have the Jordan and Brownlee (1981) paper (kindly posted from the USA by one of the authors); the abstract is an accurate summary. Hattie does use a meta-analysis (Higgins et al., 2005) https://eppi.ioe.ac.uk/eppi/Evidence/EPPI_reviews/Thinking_skills/Review2/R2summary.htm that includes CASE etc., but this is under ‘Creativity programmes’, of all places. I think you are right that the quality of evidence for CA is not unimpeachable. The majority of research has not been independent (done by the KCL team that developed the programme) and, after the first study, further research in the UK has compared CASE/CAME and non-CASE/CAME schools – but the problems with comparing ‘innovative’ schools with others are well-known. There is one small RCT from Finland but, although the intervention groups made significant progress, so did the control group, so that’s inconclusive. So overall, none of this is irrefutable, but on the other hand there is a fair weight of evidence there, and maybe most importantly the effect is being shown over several years and across core subjects.
        I briefly taught at a CASE school and I think you are wrong about it being a ‘skills-based’ approach in the way you mean it. It’s very specific to the science contexts even though it doesn’t link to curriculum topics directly. So, for example, I recall a lesson about the relationship between weight, volume, and density. It was directly addressing the ‘pound of feathers’ misconception in science. The lessons are closely scripted; I think it’s actually got quite a bit in common with Direct Instruction (the version with capitals).
        There is a 50-school RCT underway http://educationendowmentfoundation.org.uk/projects/cognitive-acceleration-through-science-education-case-lets-think-forum/ so we may know more next year, although I can’t see how the evaluation takes into account the difference between the detailed planning of CA lessons and lessons planned normally. Best wishes


      • Thanks for your lucid reply! 🙂
        Some good points about CASE. As with many of these things, how they are implemented is clearly a vital factor in their efficacy – and my brief experience may not represent the way the programme was intended to be used. Indeed, the way you describe it, I wouldn’t classify it as an ‘executive function’ approach. Yes, it would definitely benefit from more robust evaluation to establish its effectiveness. Glad to hear there’s an RCT in progress!


  2. Josie M says:

    Nick,
    Can I just check… I’ve tried to explore the links you’ve added here. I think you’d suggest that interventions to enhance executive functioning or metacognition are not supported by any strong evidence. Would you say, therefore, that modelling metacognitive strategies that are deeply rooted in subject domains is not effective either? Or is that entirely separate? The research I’ve done has suggested that so long as strategies are grounded in deep knowledge, they can be useful in enhancing learning. Would you agree?


  3. Interesting question! I wouldn’t necessarily agree that metacognition and executive functioning are the same thing. It’s likely that metacognition relies upon executive functioning (like anything involving attention) – and the two will almost certainly correlate – but my doubt would be about the ‘cause and effect’ relationship: the claim that teaching these strategies necessarily improves executive functioning (whereas it might simply be that children with good executive functioning get the most benefit out of these strategies).

    For me, metacognitive strategies involve improving judgements of learning (an important part of self-assessment – cf. AfL). For example, acting upon the difference between recognition (e.g. being superficially familiar with material) and recall (e.g. when testing themselves); or knowing which elements of the learning the student can do for themselves and where they need help from their teacher (e.g. where there’s a lack of understanding); or general strategies like error checking and proofreading.

    I would expect these sorts of metacognitive strategies – appropriate to the subject domain, explicitly modelled at first and then with scaffolding gradually reduced so that students take more responsibility – to be quite effective.

