Is teaching a ‘natural ability’?

What characteristics does a teacher need to be effective?

The answer is elusive: various reviews find that most teacher characteristics have only a marginal impact on student attainment.

For example, looking at maths teaching, Rockoff et al (2004) examined the relationships between student outcomes and a range of teacher characteristics, including graduate education, general cognitive ability, content knowledge, personality traits (like introversion or extraversion) and self-efficacy. They found no significant relationship between graduate education and teacher effectiveness and only a marginal relationship with cognitive ability; maths knowledge for teaching was more strongly related to maths achievement, traits like conscientiousness and extraversion were not significantly related, and general self-efficacy was only marginally related. All in all, the correlations between teacher characteristics and student outcomes are typically very small or non-existent.

The summary of research published by the Sutton Trust (2014) suggested that there were two main factors linked to improving student outcomes:

  • teachers’ content knowledge, including their ability to understand how students think about a subject and identify common misconceptions
  • quality of instruction, which includes using strategies like effective questioning and the use of assessment

Clearly, ‘content knowledge’ is a technical competence. We’re not born knowing scientific theories, the rules of grammar or mathematical laws; it must therefore be something that teachers develop prior to (or during) their classroom practice.

However, subject knowledge appears necessary but not sufficient. For example, looking at science teaching, Sadler et al (2013) found that subject knowledge alone did not secure improved outcomes for students when the material involved common science misconceptions. They suggested that a teacher’s ability to identify students’ common misconceptions was also required for students to make gains (and even then, this only helped where children had strong prior maths and reading ability).

Effective teachers appear to anticipate how students think about their subject and to use this insight to ask effective questions. However, to what extent does effective teaching involve a technical or professional set of knowledge and skills, developed through professional development or classroom practice, and to what extent is it a natural ability?

Theory of Mind and the ability to teach

The ability to infer how other people think and feel is referred to by psychologists as ‘Theory of Mind’ (ToM). ToM enables a person to explain and predict the behaviour of other people by inferring the mental states which cause that behaviour. The philosopher Daniel Dennett calls this the ‘Intentional Stance’ – understanding that other people’s actions are goal-directed and arise from their beliefs or desires. From his studies of imitation in infants, Andrew Meltzoff suggests ToM is an innate understanding that others are “like me” – allowing us to recognize the physical and mental states apparent in others by relating them to our own actions, thoughts and feelings. In essence, ToM is a bit like the ability to use your own mind to simulate and predict the states of others.

Strauss, Ziv and Stein (2002) proposed that ToM is an important prerequisite for teaching. A few other animals, for example chimpanzees, appear to teach conspecifics in a limited way, but only humans appear to teach using the ability to anticipate the mental states of the individual being taught. As evidence that teaching is an innate ability, they point to the fact that it arises spontaneously at an early age, without any apparent instruction, and is common to all human cultures. Essentially, they suggest that despite its complexity, teaching is a natural cognition that evolved alongside our ability to learn.

They taught pre-school children how to play a board game, and then observed each child’s behaviour when teaching another child. They identified a range of teaching strategies:

  • Demonstration — teacher actively shows learner what to do, e.g., moves the train on the track and stops at a station
  • Specific directive — teacher tells the learner what to do right now, e.g., “Take this”
  • Verbal explanation — teacher explains to the learner a rule or what he/she should be doing, e.g., “You got green. You can take the cube”
  • Demonstration accompanied by a verbal explanation
  • Questions aimed at checking the learner’s understanding — e.g., “Do you understand?” “Remember?”
  • Teacher talk about own teaching — teacher shares with the learner his/her teaching strategies, e.g., “I will now explain to you how to play”
  • Responsiveness — teacher responds to utterances or actions of the learner, e.g., answers questions when a learner errs and demonstrates or verbally repeats a rule

One or more of these has likely been the basis of a CPD session you recently attended!

They also found that 5-year-olds appeared to have a more advanced understanding of teaching compared to 3-year-olds: relying more on verbal explanations, responding more to the learner’s difficulties, and asking questions aimed at checking the learner’s understanding.

Implications

Firstly, if teaching is a natural ability arising from a competent ToM, it might have implications for teacher recruitment. Given the very limited correlations between teacher effectiveness and academic qualifications (beyond a degree in a relevant subject), cognitive ability and various personality traits, might some sort of advanced ToM test better predict teacher effectiveness?

ToM tests for adults on the autistic spectrum have been developed, for example by Baron-Cohen et al (2001). These involve identifying emotional or mental states from pictures of people’s eyes:

[Image: item from Baron-Cohen’s ‘Reading the Mind in the Eyes’ ToM test]

However, Baron-Cohen has suggested that a functioning ToM involves both affective and cognitive components: the ability to respond emotionally to another’s mental states and the ability to understand another’s mental state. People likely vary on a spectrum across both of these components. Baron-Cohen has suggested that psychopaths, for example, probably have a very high-functioning cognitive ToM (required to be able to deceive and manipulate people) but ‘zero-negative’ empathy for others.

I think great teachers probably need both: the ability to model other people’s thought processes (e.g. how students think about a subject), balanced by an empathetic concern for others.

Secondly, teaching involves the ‘impossible task’ of mind-reading – not only identifying gaps in a student’s knowledge, beliefs or skills, but also whether they hold incomplete or distorted ideas. In addition, great teachers make countless, unconscious inferences about students’ emotional and motivational states (are they attentive, tired, bored or confused?) and react intuitively to these states. Teaching is such a complex task that it is probably impossible to ‘do it consciously’.

If teaching is essentially a natural ability, then potentially a great deal of the CPD available to teachers is a waste of time! It could be argued that a great deal of teacher professional development (e.g. on questioning and providing feedback) involves developing the sorts of skills demonstrated by the average 5-year old. Perhaps this is why teachers fail to attract the sort of respect granted to other professionals! Therefore, an important question needs an answer – by ITE providers (of all types) or the proposed College of Teaching – exactly what is the ‘technical’ or ‘professional’ body of knowledge or set of skills required of an effective teacher, which can actually be taught?

 


Perpetual motion machines do not exist

[Image: Robert Fludd’s perpetual motion machine, 1618]

Robert Fludd’s description of a perpetual motion machine from the 17th century: water held in a tank above the apparatus drives a water wheel which, through a complex set of gears, rotates an Archimedes screw that draws the water back up to the tank.

 


The idea of creating a machine which can run indefinitely without any source of energy to power it has fascinated inventors since the astronomer and mathematician Bhāskara II described a wheel which could run forever in the 12th century. The failure to build such a machine hasn’t stopped people from trying, or even from filing patent applications, whether using magnets, gravity or buoyancy as the basis for perpetual motion. However, no attempt to create one has ever worked.

Perpetual motion machines do not exist, because no one has built a machine which can continue indefinitely without some external source of energy to keep it going.

It would be very, very odd for someone to claim that they did exist, simply because inventors periodically try to create one. I’d certainly accept that they have tried to create a perpetual motion machine (and thus far failed), or created a machine which they claimed possessed perpetual motion (but didn’t really) – but to say that perpetual motion machines ‘exist’ surely implies that someone has built one that actually works.

I recently read a short series of blogs defending the idea of ‘learning styles’. The idea at the heart of learning styles is that information provided to a student in a form that matches their ‘style of learning’ will lead to improved learning.

Coffield et al (2004) review over a dozen attempts to measure differences in learning ability so that instruction can be matched to this ‘style of learning’. Certainly, all of the systems have tried to define learning styles, but the question is whether any of them actually work. They found that whilst a few of them provided some relatively valid measures of differences between people, none of them demonstrated that attempting to match teaching to this style would have any benefit.

They conclude that whilst learning style theorists have conducted small-scale, weakly controlled studies to support their claims, none of them produce systems with any clear evidence that using them will advantage learners. None of the proposed systems work.

Pashler et al (2008) helpfully define what a learning style is supposed to be.

“The term ‘‘learning styles’’ refers to the concept that individuals differ in regard to what mode of instruction or study is most effective for them. Proponents of learning-style assessment contend that optimal instruction requires diagnosing individuals’ learning style and tailoring instruction accordingly.”

This makes it clear that merely identifying some differences between people isn’t sufficient for the label of a ‘learning style’ to be applied. As well as being able to measure some sort of psychometrically reliable differences, a learning style also needs to show what mode of instruction would be most effective for an individual. They also note that very few studies have actually tested whether tailoring instruction to a proposed learning style improves learning. Where these studies have been conducted, several found results which contradicted their claims. The data so far do not support the idea of learning styles.
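Pashler et al’s required test amounts to a two-by-two experiment: classify learners by purported style, randomly assign each group to matched or mismatched instruction, and look for a crossover interaction. Here’s a minimal sketch of that check in Python (all scores invented for illustration):

```python
# Illustrative 2x2 check for the crossover interaction the learning-styles
# hypothesis requires: 'visual' learners should do better with visual teaching
# AND 'verbal' learners better with verbal teaching. All mean scores invented.
mean_scores = {
    ("visual_learner", "visual_teaching"): 62,
    ("visual_learner", "verbal_teaching"): 60,
    ("verbal_learner", "visual_teaching"): 61,
    ("verbal_learner", "verbal_teaching"): 59,
}

visual_benefit = (mean_scores[("visual_learner", "visual_teaching")]
                  - mean_scores[("visual_learner", "verbal_teaching")])
verbal_benefit = (mean_scores[("verbal_learner", "verbal_teaching")]
                  - mean_scores[("verbal_learner", "visual_teaching")])

# A genuine crossover requires BOTH groups to benefit from their matched mode.
print("Crossover interaction:", visual_benefit > 0 and verbal_benefit > 0)
# -> False here: everyone did slightly better with visual teaching,
#    which is evidence against matching, not for it.
```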

No attempt at ‘learning styles’ has ever succeeded. It would seem bizarre, therefore, to claim that learning styles exist. I’d certainly accept that people have tried to describe learning styles (and failed) or that some people claim a system of learning styles is effective (when they don’t have evidence to support that view).

Cognitive scientists like Daniel Willingham and teachers like Tom Bennett seem on pretty safe ground making the claim that they don’t exist. The burden of proof is on those claiming that learning styles exist – let them produce the data showing both valid measures which differentiate learners and that matching instruction to these differences enhances learning. If robust evidence to support this comes to light in the future, then I am certain both would change their position (as would I) – that’s the nature of science.

In the meantime, however, claiming that ‘learning styles exist’ almost smacks of what Irving Langmuir called pathological science: ‘an area of research where “people are tricked into false results … by subjective effects, wishful thinking or threshold interactions”’. Langmuir saw pathological science, like perpetual motion machines, as a class of fruitless ideas that simply will not “go away” despite repeated failure.

Given that attempts to identify effective learning styles are hardly new (dating at least to the 1980s), I have sympathy with the frustration in this article by Tom Bennett, which argues that VAK – the most notorious attempt at learning styles in UK education – is a ‘zombie’ idea in education that simply fails to die.


How Should Students Revise? by @Nick_J_Rose

Starter for Five

Name: Nick Rose
Twitter name: @Nick_J_Rose
Sector: Secondary
Subject taught (if applicable): Psychology
Position: Leading practitioner for psychology and research
What is your advice about? How should students revise?

1: Practice testing: Use low-stakes tests, quizzes or reviews on a regular basis. Encourage students to test themselves frequently as part of their revision.

2: Distributed practice: Revision over time leads to better recall than cramming. Consider opportunities to revisit previously taught material when planning schemes of work.

3: Interleaving: Consider encouraging students to alternate their practice of different kinds of items or problems when revising rather than sticking to one topic.

4: Elaborative interrogation: Consider encouraging explanatory questioning to promote learning; for example by prompting students to answer “Why?” questions.

5: Self-explanation: Consider encouraging students to explain to themselves how new information is related to known information, or the steps required to solve a problem.



What do UK teachers think of some common arguments about pedagogy?


An informal survey about what UK teachers think about some of the more contentious arguments surrounding pedagogy.

If you’d like to take the survey you can click the link below. The responses to this second survey will be analysed at a later date:

https://www.surveymonkey.com/r/DDPJ9RC

Broadly inspired by this paper by Juhani Tuovinen, the purpose of the survey is to explore some of the views and values teachers hold about teaching. The survey will ask you to provide some basic demographic info, then the following page will ask you to rate your view of some opinions about teaching and learning.

The original survey ran from 5:45 pm (GMT) Mon 26th Oct 2015 until 5:45 pm Tuesday 27th. The results of the original survey can be found here:

Results and analysis – part 1

Results and analysis – part 2

Results and analysis – part 3

Results and analysis – part 4


The science of learning


Here’s a really clear, short and applicable summary of the key areas of cognitive science which can be applied to the classroom:

The Science of Learning

The summary looks at six questions about learning, giving a quick overview of the science and some ideas about how it might apply in schools and classrooms. It effectively summarises, in six pages, a great deal of what I’ve written about over the last couple of years! Here are some links for further reading on some of the key points of the summary:

1. How do students understand new ideas?
2. How do students learn and retain new information?
3. How do students solve problems?
4. How does learning transfer to new situations?
5. What motivates students to learn?
6. What are some common misconceptions about how students think and learn?

I’m looking forward to seeing future work by Deans for Impact – and I’ll be keeping an eye on their blog for more excellent resources!


How do we develop teaching? A journey from summative to formative feedback

researchED: Research leads network day, Brighton. April 18th 2015

The beginning of the new term means it’s taken a little while to get around to blogging about the great event on Saturday. This tardiness is additionally poor given that I was one of the presenters! However, there are some great summaries of the network day already out there. Two that caught my eye were:

ResearchED Brighton: inside out not bottom up   via @joeybagstock

#rEDBright – summary of a research lead conference by a faux research lead!   via @nikable

One of the themes that emerged from the day, for me, was the growing dissatisfaction with unreliable summative judgements of teacher quality and the view that schools would be better off looking at ways of formatively evaluating teaching through some sort of disciplined inquiry process.

From judgements of teaching quality …

Daniel Muijs opened the day with the provocative question ‘Can we (reliably) measure teacher effectiveness?’ His answer, which drew upon evidence from the MET project, suggested that we could, though each of the tools for measuring teacher effectiveness has strengths and limitations. He analysed the reliability of VA data, observations and student surveys in turn.

Muijs suggested that the focus on student outcomes had liberated teachers to experiment more with their teaching – which is true, but it’s clear that a naïve treatment of this data has created problems of its own. For example, this focus on outcomes presupposes that there is a straightforward relationship between ‘teacher input’ and ‘student output’ (something Jack Marwood takes issue with here). Indeed, Muijs quoted Chapman et al (in press) saying that teaching probably only accounts for around 30% of the variance in such outcome measures.

In summative data of teacher performance, the inherent uncertainty within the measurement is expressed in the form of confidence intervals. A range of teacher VAM scores might look like this:

[Chart: illustrative teacher VAM scores with confidence intervals]

The vertical bars represent the confidence interval associated with each teacher’s score.

In essence, they suggest that we can only have reasonable certainty that a teacher’s score lies somewhere between the top and bottom of each line. The marginal differences between the midpoints of these lines are not a reliable comparison (even though intuitively they may appear so). In the example above it is reasonable to say that Teacher B produced higher value-added scores than Teacher A, but the overlap in the confidence intervals for Teachers C and D means that we cannot readily distinguish between them.
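To make the overlap point concrete, here is a minimal sketch in Python (scores and standard errors invented for illustration; treating non-overlapping intervals as the bar for a reliable difference is a deliberately conservative test):

```python
# Illustrative VA scores and standard errors for four teachers (made-up numbers).
scores = {"A": -2.0, "B": 3.0, "C": 0.5, "D": 1.0}
ses = {"A": 0.8, "B": 0.9, "C": 1.2, "D": 1.1}

def ci95(teacher):
    """Approximate 95% confidence interval: score +/- 1.96 standard errors."""
    s, se = scores[teacher], ses[teacher]
    return s - 1.96 * se, s + 1.96 * se

def distinguishable(t1, t2):
    """Only treat two teachers as reliably different if their CIs don't overlap."""
    lo1, hi1 = ci95(t1)
    lo2, hi2 = ci95(t2)
    return hi1 < lo2 or hi2 < lo1

print(distinguishable("A", "B"))  # True: B's interval sits wholly above A's
print(distinguishable("C", "D"))  # False: intervals overlap, so no reliable ranking
```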

However, this uncertainty can get ignored when teachers are ‘held to account’. The use of VAM scores has led to some pretty egregious practice in the US, e.g. ranking teachers by their VAM scores in a kind of ‘league table’ of teachers. In one instance the LA Times appeared to completely ignore the presence of confidence intervals and published the data like this:

[Chart: LA Times teacher rankings published as precise point scores, without confidence intervals]

This implies that the estimate of teaching quality is somehow a perfectly precise point rather than a range, and it creates spurious comparisons between teachers.

It struck me that schools in the UK risk falling into the same trap, for example when interpreting the sorts of VA graphs UK teachers might be familiar with:

[Chart: CEM value-added analysis by year]

In the graph above, we can reasonably say that value-added was higher in 2005 than in 2012 (for what that’s worth), but can we readily distinguish between the scores for 2006 and 2011? The presence of a ‘dot’ above or below a mid-line may encourage the same sort of simplistic judgement as the LA Times: tiny variations in scores being interpreted as indicating something about teacher quality.

Indeed, even where a statistically significant deviation in VA scores is found, it doesn’t necessarily tell us whether the result is educationally important. Jack Marwood identifies this problem with the statistics used within RAISEonline to make judgements about schools:

The explanations of significance testing in RAISE are misleading and often completely wrong. In the current version of RAISE, readers are told that, “In RAISEonline, green and blue shading are used to demonstrate a statistically significant difference between the school data for a particular group and national data for the same group. This does not necessarily correlate with being educationally significant. The performance of specific groups should always be compared with the performance of all pupils nationally.”

The key phrase used to say, “Inspectors and schools need to be aware that this does not necessarily correlate with being educationally significant.” But even this does not make it clear how different statistical significance is to everyday significance. Everyday significance roughly translates as ‘important’. Statistical significance does not mean ‘importance’.
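To see why statistical significance need not mean importance, here is a toy calculation of my own (not taken from RAISE): with enough pupils, even a difference far too small to matter educationally registers as ‘significant’.

```python
# Toy example: the same tiny difference in mean scores, tested at two sample sizes.
from statistics import NormalDist

def two_sided_p(diff, sd, n):
    """Two-sided p-value for a mean difference, given an assumed population SD."""
    se = sd / n ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(diff / se)))

tiny_effect = 0.05  # an educationally trivial difference
print(round(two_sided_p(tiny_effect, sd=1.0, n=30), 3))       # 0.784: one class, not significant
print(round(two_sided_p(tiny_effect, sd=1.0, n=100_000), 3))  # 0.0: 'significant' at scale, still trivial
```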

The worst influence of this focus on the summative judgement of ‘teacher quality’ is that policy discussion falls into the ‘McNamara Fallacy’ – as described brilliantly in a recent blog by Carl Hendrick:

… there is a deeply hubristic arrogance in the reduction of complex human processes to statistics, an aberration which led the sociologist Daniel Yankelovich to coin the term the “McNamara fallacy”:

1. Measure whatever can be easily measured.
2. Disregard that which cannot be measured easily.
3. Presume that which cannot be measured easily is not important.
4. Presume that which cannot be measured easily does not exist.

Sadly, some of these tenets will be recognisable to many of us in education – certainly the first two are consistent with many aspects of standardised testing, inspections and graded lesson observations. This fiscal approach has been allowed to embed itself in education with the justification given often to ‘use data to drive up standards.’ What we should be doing is using “standards to drive up data”.

The problem of using data to drive up standards was further highlighted in Rebecca Allen’s presentation. Drawing on her work with Education Datalab, she presented the problem of judging schools or teachers using the concept of expected progress.

I’ve written about this report before, but it’s worth reiterating the major points raised by their analysis.

[Chart: “Trajectory” – the assumed linear path of pupil progress]

[Chart: “Reality” – the actual variety of pupil progress paths]

From KS1 only about 9% of children take the expected ‘trajectory’ to KS4 outcomes and the assumption of linear progress becomes progressively weaker as you move from primary to secondary schools.

“Our evidence suggests that the assumptions of many pupil tracking systems and Ofsted inspectors are probably incorrect. The vast majority of pupils do not make linear progress between each Key Stage, let alone across all Key Stages. This means that identifying pupils as “on track” or “off target” based on assumptions of linear progress over multiple years is likely to be wrong.

This is important because the way we track pupils and set targets for them:

• influences teaching and learning practice in the classroom;
• affects the curriculum that pupils are exposed to;
• contributes to headteacher judgements of teacher performance;
• is used to judge whether schools are performing well or not.

Allen suggested that we shouldn’t give students an attainment target grade to reach, but a range – to reflect the inherent uncertainty in predicting a student’s ‘expected progress’.
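A minimal sketch of what such a range-based target could look like (all grades invented; in practice the outcome distribution would come from national transition data):

```python
# Illustrative: KS4 grades previously achieved by pupils with the same KS2
# starting point (invented data), used to report a range rather than a point.
historic_outcomes = [3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8, 9]

def target_range(outcomes, lower_pct=20, upper_pct=80):
    """Central range of past outcomes: a band the pupil is 'likely' to land in."""
    ranked = sorted(outcomes)
    lo = ranked[int(len(ranked) * lower_pct / 100)]
    hi = ranked[int(len(ranked) * upper_pct / 100) - 1]
    return lo, hi

print(target_range(historic_outcomes))  # (4, 7): 'likely between grades 4 and 7'
```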

So, given all the problems with reliability, why are we trying to measure effective teaching? One answer is so that schools can identify and sack incompetent teachers, and presumably reward effective teachers through performance-related pay (PRP). However, I’ve argued that the lack of reliability in the measures that exist risks perpetuating a ‘cargo cult’ approach to school improvement.

It may be possible, through a rigorous application of some combination of aggregated value-added scores, highly systematised observation protocols (Muijs suggested we’d need around 6-12 a year) and carefully sampled student surveys, to give this summative judgement the degree of reliability it would need to be fair rather than arbitrary. Surely the problem is that, for summative measures of effective teaching to achieve that rigour and reliability, they would become so time-consuming and expensive that the opportunity costs would far outweigh any benefits.
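The arithmetic behind needing that many observations can be sketched with the Spearman-Brown formula: if a single observation correlates only modestly with a teacher’s underlying quality, averaging more observations raises the reliability of the combined judgement, but slowly (the single-observation reliability below is an assumed figure, purely for illustration):

```python
# Spearman-Brown prophecy formula: reliability of the average of k observations,
# given reliability r for a single observation (r = 0.35 is an assumed figure).
def combined_reliability(r, k):
    return k * r / (1 + (k - 1) * r)

for k in (1, 2, 6, 12):
    print(f"{k:>2} observations: reliability ~{combined_reliability(0.35, k):.2f}")
# 1 -> 0.35, 2 -> 0.52, 6 -> 0.76, 12 -> 0.87
```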

Therefore, it seems to me that these summative measures are unlikely to result in significant improvements to schools. It’s a cliché for politicians to announce that ‘the quality of an education system cannot exceed the quality of its teachers’. One retort to this might be:

‘Useful judgements of teacher quality cannot exceed the reliability of the data’

The stakes attached to statistical analysis of school or teacher data need to be moderated in line with the reliability of that data.

… to developing teaching through ‘disciplined inquiry’.

After coffee, the network day turned away from the issues of evaluation and assessment towards exploring ways in which teachers could use research evidence within, what Dylan Wiliam has called, ‘disciplined inquiry’.

Andy Tharby led a thought-provoking session discussing the work that Durrington High School have been doing with Dr Brian Marsh at the University of Brighton. He made the point that inquiry projects were nothing new within the school, but that previous versions of teacher-led projects had been overly reliant upon reflection as the sole source of evaluation. Through the partnership with Dr Marsh, they have developed more evidence-informed CPD opportunities (like an Edu Book club), started to disseminate blogs and bulletins through teachers’ pigeonholes and three teachers had taken on year-long projects aligned with the school’s improvement plan.

There’s no doubt that these partnerships between schools and HEIs can provide mutual benefits but, as Tharby was quick to point out, the sort of informal relationship that can be struck up between individual schools and university-based academics isn’t really scalable in a way that could transform schools.

Given the difficulty in accessing the evidence base and the problems for teachers trying to sort myths and neuro-nonsense from useful insights into learning, Lia Commissar presented an interesting resource that could be developed for teachers.

The trial, involving psychologists and neuroscientists answering questions from classroom teachers, runs until the 9th May.

James Mannion presented another resource which teachers might wish to explore: Praxis, a professional development platform designed to help school-based practitioners develop their practice through small-scale research inquiry.

Again, the issue of moving practitioner-led research beyond reflection was suggested as a way in which teachers could regain ‘agency’ within their professional development. Mannion expressed his hope that Praxis could become a forum for teachers to collaborate in their efforts to optimise the outcomes for their students within their individual contexts.

He also proposed ‘Praxis’ as a new term to encapsulate all the various forms of teacher inquiry: lesson study, action research, disciplined inquiry, etc. (though I’d prefer a less value-laden term, even if it appears ‘less exciting’). However, I dare say that teachers will continue to use a plethora of terms to describe essentially the same thing regardless of what anyone proposes!

Developing research tools for teacher inquiry

My session drew on the recent Sutton Trust report:

Developing Teachers: Improving professional development for teachers

You can download the presentation here: Research tools for teacher inquiry.

My argument was that the drive to find measures of effective teaching might be better focused upon developing reasonably valid ways for teachers to investigate their own teaching than upon pure accountability. I made the point that most of the measures developed for accountability purposes don’t necessarily provide very useful information for teachers trying to improve!

Observations: The value of these is often lost where schools use Ofsted-style grading structures – which are principally summative. To improve teaching, we need focused and formative observation protocols that provide challenging yet supportive feedback.

Value-added data: This data, however reliable it may or may not be, appears too infrequently in the academic year to provide regular feedback on teaching. In general, the assessment data which teachers report to line managers and parents – whilst more frequently collected – isn’t often useful as a formative form of feedback. There’s also the problem that we may inadvertently distort the data we generate about students if there’s a target attached to it – what’s sometimes called Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” A better bet might be to encourage teachers to use question analysis to identify areas where their teaching or understanding of assessment could be developed.
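As a sketch of what that question analysis might look like in practice (all marks invented; the re-teach threshold is arbitrary):

```python
# Illustrative item analysis: what fraction of the class got each question right?
# Each pupil's marks are 1 (correct) or 0 (incorrect) for questions Q1..Q4.
results = {
    "pupil_1": [1, 0, 1, 0],
    "pupil_2": [1, 0, 1, 1],
    "pupil_3": [1, 1, 0, 0],
    "pupil_4": [1, 0, 1, 0],
}

for q in range(4):
    facility = sum(marks[q] for marks in results.values()) / len(results)
    flag = "  <- possible misconception: re-teach?" if facility < 0.5 else ""
    print(f"Q{q + 1}: {facility:.0%} correct{flag}")
```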

Student surveys: These are potentially a cheap and easy source of reasonably valid feedback. However, even a good instrument can deliver poor data and surveys need strong protocols to provide effective feedback. I’ve written quite a lot about how the MET survey can be used as a formative tool within coaching here: Investigating teaching using a student survey

Reflection / self-report: Cognitive biases mean that pure reliance on ‘reflection’ is likely to have minimal impact on teaching practice; there likely needs to be an element of external challenge. However, I suggested using a behaviour log, based on Hayden’s research into behaviour in schools, as a self-report tool for teachers developing behaviour management or classroom climate. There’s more information about this here: Talking about the behaviour in our lessons

Work scrutiny: I made the point that techniques like content analysis, which might be used to gain insights into changes over time in books, are difficult and therefore tend to be conducted in a fairly superficial way (e.g. what colour pen the teacher is using). It may be possible, however, to create some simple protocols to look at specific features of students’ work to see whether they are responding to changes you’ve made in teaching.

Finally, I discussed the problems inherent in evaluating whether mentoring interventions were having the desired effect on student behaviour or effort in lessons.

Rob Coe’s ‘poor proxies for learning’ will likely be familiar to many who read this blog.

I added a few other proxies which I suspect may be unreliable when it comes to establishing whether a mentoring intervention is having the desired effect.

Discipline points: The problem is that a reduction may simply reflect the teacher ‘losing faith’ in the behaviour management system rather than an improvement in behaviour.

Reward points: Where intrinsic motivation is lacking we instinctively employ extrinsic motivators. Where intrinsic motivation is ok, we shouldn’t (and usually don’t) use extrinsic motivators (which is why ‘naughty’ kids tend to get more reward points than ‘good’ kids).

Effort grades: Attitudes to learning scores may simply reflect teacher bias – e.g. Rausch, Karing, Dörfler and Artelt (2013) found that personality similarity between teachers and their students influences teacher judgements of student achievement.

Attitude surveys: Can be easily distorted by social desirability bias. There is also a big gap between a change in reported attitudes and actual changes in behaviour.

However, I ended by wondering whether there might be some behaviour that is ‘proximal enough’ that we could use simple structured observation techniques in lessons to evaluate changes in student behaviour. Drawing on the idea of a ‘time on task’ analysis, I asked the group to think about some behaviours which might indicate the presence or absence of ‘thinking hard’ in a lesson.
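One simple, concrete version of this is momentary time-sampling: at a fixed interval, glance at the target pupil and record whether the indicator behaviour is present, then report the proportion of sampled moments. A sketch (behaviour codes and data invented):

```python
# Illustrative momentary time-sampling: 'T' = apparently on-task, 'O' = off-task,
# recorded for one pupil every 30 seconds across a ten-minute activity.
observations = list("TTTOTTOTTTTTOOTTTTTT")  # 20 invented interval codes

on_task = observations.count("T") / len(observations)
print(f"On-task for {on_task:.0%} of sampled intervals")  # 80% in this example
```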

Measuring teaching: The need for a shift of focus

Developing simple research tools could help teachers move beyond ‘pure reflection’ as the basis of challenging their teaching practice and provide an empowering way for teachers to improve the quality of their own teaching.

However, it can be difficult to develop and validate these tools within the context of a single school and the initial time demands can be high. I commented that I had barely got started in piloting these with teachers (which created some very flattering spontaneous laughter from the audience), but even adapting the MET student survey to a formative purpose is a tricky task if you don’t want to lose whatever validity it possesses.

Nevertheless, I think it’s worth trying. It seems a plausible hypothesis that developing teacher inquiry tools which provide reasonably valid, developmental feedback could improve outcomes for students and foster greater professional autonomy for teachers.


Research tools for teacher inquiry

Excellent day out in Brighton for the ResearchED: Research Leads network day.

I’ll blog about the day properly later in the week, but the presentation is here:


Developing research tools for teacher inquiry v1.3


Ethical issues in teacher-led research

At the last research leads event at Cambridge, I raised the issue of ethical considerations where teachers engage in research. Here are some thoughts:

First of all, it’s important not to over-state the seriousness of ethical issues in teacher-led research within schools. The sorts of things that most teachers ‘investigate’ are close enough to the activities which take place as part of ordinary teaching in schools that ethical issues are not a major concern: Trying out a new writing frame, creating a formative assessment to identify misconceptions, developing questioning techniques, adapting materials to help EAL students access learning, implementing low-stakes quizzes to help consolidate previously taught topics, etc. Teachers already have a duty of care towards their students and should simply exercise professional judgement when planning innovations in their teaching. Thus, in my opinion, the vast majority of teacher inquiry projects likely require no additional ethical permissions or protections.

What probably is important, given that most teachers don’t have a background in social science or educational research, is that teachers have some guidance on ethical issues they might not have considered and a clear pathway to seek advice if ethical issues arise as part of their inquiry projects. Thus, I think it would be useful for schools to have sensible guidelines laid out in advance: A straightforward policy on research ethics – agreed by the school and shared with teachers who are engaged in research.

There’s a great starting point for discussion about this on Gary Jones’ blog:
Jones (2015) We need to talk about researchEDthics – School Research Leads and Ethical Research and Evidence-Based School Cultures

I’ll mainly be drawing from two other resources which research leads might find useful to look at:
BERA: Ethics and Educational Research (Hammersley and Traianou, 2012)

BPS: Code of Human Research Ethics (The British Psychological Society, 2010)

I’m going to talk about the three main areas of ethics in research that teachers may need to consider when planning research projects: Minimising harm, informed consent and confidentiality.

Minimising Harm

Jones suggests we should consider both ‘Beneficence’ and ‘Non-maleficence’ when planning research in schools: That the action or intervention is intended for the benefit of the individuals involved and that we have considered carefully any broader negative consequences which might arise. For the overwhelming majority of teacher inquiry projects, a teacher’s evidence-informed professional judgement will suffice, I would argue. However, there may be circumstances where a greater or more formal consideration of potential harm should be carried out.

In my view, this becomes a potential issue when considering psychological interventions with students – rather than teacher inquiry related to ordinary classroom activities. For example, I recently wrote about some of the concerns raised by teachers and academics about the rise of therapeutic interventions within schools. Little is known about the potential for adverse reactions or contraindications for psychological interventions.

The BPS guidelines state that psychological researchers should always consider research from the standpoint of the participants; with the aim of “avoiding potential risks to psychological well-being, mental health, personal values, or dignity.”

The BERA article makes the point that consideration of harm may include reputational damage for the teachers and potentially the institution involved:

“Minimising Harm. Is a research strategy likely to cause harm, how serious is this, and is there any way in which it could be justified or excused? Note that harm here could include not just consequences for the people being studied (financial, reputational, etc) but for others too, and even for any researchers investigating the same setting or people in the future.”

I would argue that there need to be some additional safeguards in place when school-based interventions are based on therapeutic models like cognitive behavioural therapy or mindfulness. Whilst ‘side-effects’ are unlikely to be as serious as they would be for a medical intervention, there’s a question as to whether children suffering from clinical depression or an anxiety disorder are benefited or harmed by some of these interventions. Given that the Annual Report of the Chief Medical Officer (2012) suggested that as many as 1 in 10 children have a clinically diagnosable mental disorder, some monitoring throughout such projects might be ethically justified, along with possibly some sort of screening.

Another aspect of education research broadly encompassed by ‘harm’ might be where children are removed from regular lessons as part of a project. There is an ‘opportunity cost’ to a child’s education if they are removed from subject lessons. Do children benefit from missing maths lessons to partake in a character education project? What about history lessons? Or RE lessons? Good intentions alone are not enough here, I would argue. Schools should engage in a more formal cost-benefit analysis when considering such major changes to the curriculum – and demand up front the evidence for proposed benefits and the monitoring arrangements during the project which will identify whether intended benefits are being delivered compared to the ‘costs’ imposed on that child’s regular education.

Informed consent

These issues are unlikely to feature in the majority of teacher inquiry projects, as the activities students undertake are part of lessons within the normal context of classroom practice. Where a research project involves appreciable risk or significant opportunity cost, however, the issue of informed consent may become more important.

In psychological research, participants should be given sufficient information about the research so that they can make an informed decision as to whether they want to take part. Part of the protection for participant autonomy is their right to withdraw from a research study. With children under the age of 16, there’s an additional protection: the consent of a parent or guardian should normally also be sought.

However, the BPS guidelines make an exception when it comes to research conducted in schools.

“In relation to the gaining of consent from children and young people in school or other institutional settings, where the research procedures are judged by a senior member of staff or other appropriate professional within the institution to fall within the range of usual curriculum or other institutional activities, and where a risk assessment has identified no significant risks, consent from the participants and the granting of approval and access from a senior member of school staff legally responsible for such approval can be considered sufficient.”

In schools this will likely be the head teacher.

BERA frames this aspect as respecting autonomy.

“Does the research process show respect for people in the sense of allowing them to make decisions for themselves, notably about whether or not to participate? This principle is often seen as ruling out any kind of deception, though deception is also sometimes rejected on the grounds that it causes harm.”

Where permission of parents is considered a justified step, there is a further question of whether passive or active consent should be obtained. For example: Noret (2012) Guidelines on Research Ethics for Projects with Children and Young People

“Active consent refers to the use of a consent form, whereby parents/guardians are required to sign and return a form indicating their consent for their child to participate in the study. Non-return of this slip is taken as an indication that the parent(s)/ guardian(s) do not want their child to participate in the study.
“Passive consent on the other hand, requires participants to return the slip only if they do not want their child to participate in the study. Non-return of the slip is then taken as consent for the child/young person to participate in the study (Ellickson & Hawes, 1989).”

Where research involves very young children, sensitive topics or intrusive methods, or where the research will involve students travelling outside the school environment, then active consent may be more ethical than passive.

Confidentiality

This is an area where schools already have some policy in place. For example, schools have to work within the Data Protection and Freedom of Information legislation. There’s a summary of how these impact on schools by BECTA here: Data protection and security – a summary for schools

As such, most teacher inquiry projects will simply need to comply with these existing policies. Teachers need to collect information about students as part of their professional role. Things like data security and filming in the classroom likely already have guidelines which teachers are expected to follow. Research which involves the collection of data regarding students and falls within these existing rules is unlikely to raise additional ethical considerations.

However, there are some general principles which might inform professional judgement about confidentiality. For example, the BPS suggests that information obtained from and about participants during an investigation is confidential unless otherwise agreed in advance.

“Participants in psychological research have a right to expect that information they provide will be treated confidentially and, if published, will not be identifiable as theirs. In the event that confidentiality and/or anonymity cannot be guaranteed, the participant must be warned of this in advance of agreeing to participate.”

On the other hand, the BERA article argues that confidentiality may not always be desirable in educational research:

“Protecting Privacy. A central feature of research is to make matters public, to provide descriptions and explanations that are publicly available. But what should and should not be made public? What does it mean to keep data confidential, and is this always possible or desirable? Can and should settings and informants be anonymised in research reports?”

This becomes a potentially important issue when collecting data from and about teachers within a school. In this case, I find myself agreeing with the BPS on the issue of confidentiality: it should be a reasonable expectation that data generated by research will be treated as confidential and only published in forms where individuals cannot be identified.

For example, in piloting student surveys as a way of investigating teaching, there were several issues relating to confidentiality which I considered before starting out.

Firstly, it seemed important to the process that student responses to the survey should be anonymous – therefore, student names were not collected on the questionnaires and the classroom teacher would step out of the room whilst students completed the surveys. This helps ensure that students give a more honest reflection of their views than they might if they could be identified. The second issue relates to the use of this data. The issue of confidentiality needed to be agreed with the teachers concerned. In this case, the decision was taken to ensure the confidentiality of student feedback on teaching – in order to build trust and confidence in the process.

In summary

In the main, ethical considerations in teacher-led research can rely upon the evidence-informed professional judgement of the teachers involved. There are barriers enough already without adding lengthy ethical procedures to the list! It’s worth remembering that schools are likely to already have policies relating to some of these issues that teachers must follow – and there’s no point over-complicating or duplicating these policies. However, in my view, there are some types of school projects which may be worthy of more formal ethical consideration. Plans involving aspects of psychological therapy or significant changes in a child’s curriculum should be reasonably interrogated for the expected benefits versus the opportunity costs involved.

Given that few teachers have a background in social science research, it might be reasonable for research leads to think about running a session, or distributing a summary about ethical issues in education research, to better inform those judgements.

Ethical issues in research cannot always be anticipated, however. Whilst the vast majority of the research projects teachers undertake are going to be unproblematic, it might be wise to have some simple guidelines and procedures laid out so teachers know where they can seek advice during their research. This will serve to protect both students and the teachers involved in research, should any ethical problems emerge as they carry out their investigations.


Developing research leads in schools: The Janus-faced role of a research lead

researchED: Research leads network day, Cambridge. March 14th 2015

[Image: Roman coin depicting Janus. Source: commons.wikimedia.org/wiki/File:Janus_coin.png]

In a brief stop-over between Sydney and Dubai, Tom Bennett was surprised and delighted to discover so many teachers prepared to ‘give up a Saturday’ to come and explore the role of a research lead. It shouldn’t come as such a surprise. The event, expertly organised by Helene Galdin O’Shea, involved a combination of thought-provoking speakers and the opportunity to meet exceptional colleagues – making it extraordinary CPD. Here’s my reflection on the day and a (dreadfully inadequate) summary of some of the talks I attended.

Philippa Cordingley opened with a keynote identifying some effective ways to lead research within schools. Schools are increasingly creating directed time for professional inquiry (whether it’s called coaching, lesson study or action research) and she related some interesting case studies of how schools were structuring these opportunities. She related the importance of having a research base which reflected ‘the things that wake teachers in the middle of the night’ – and this is an important issue. Finding, accessing and disseminating the sort of educational research which can be applied by classroom teachers is one of the great challenges.

Philippa referred to the bottle-neck between research and practice based knowledge and the work that CUREE has been doing to overcome this:

A number of resources which may be useful to research leads: http://www.curee.co.uk/browse-resources
… and a host of Education research links: http://www.curee.co.uk/category/5/27
… she also described some of the tools and research resources, like ‘route maps’, which CUREE use to help teachers engage with research: http://www.curee.co.uk/node/2908

Philippa emphasised the need for professional learning to involve creating new ideas and strategies with a clear focus on student outcomes. It wasn’t so important whether teachers engaged in their own research or the research of others, rather that it was the process of challenging prevailing orthodoxies and supporting teachers as learners which had the greatest impact.

One of the most interesting items she raised was the importance of school leaders actively participating in and modelling this process. To support the argument, Philippa referred to research by Robinson, Hohepa and Lloyd (2009). Their meta-analysis of school leadership and student outcomes identified that the factor which had the greatest influence was promoting and participating in teacher learning and development:

[Chart: influence of leadership dimensions on student outcomes, from Robinson, Hohepa and Lloyd (2009). Source: http://www.educationcounts.govt.nz/__data/assets/pdf_file/0015/60180/BES-Leadership-Web-redacted.pdf]

The fact that this appeared to have greatest influence on student outcomes raised a few eyebrows. It might be expected that school leaders participating and promoting professional development would have an impact on outcomes for students, but the fact it appeared to be the most influential factor was surprising.

The day then presented some difficult choices! I missed Jude Enright’s talk on enquiry-based practice (which she has very kindly blogged about here), Beth Grenville-Giddings’ talk on setting up a journal club and Robert Loe’s talk on how we might explore and measure ‘relationships for learning’. I hope someone will blog on these.

Gary Jones led the second session with a focus on how we can help teachers ask better questions about their teaching. Drawing parallels with evidence-based medicine and clinical practice, he related a number of ways that we can move from ‘background questions’, which tend to be poorly formulated and difficult to evaluate, towards ‘foreground questions’. In essence, he gave some useful ideas on how teachers might operationalise their own research in more productive ways:

One model, which I really liked, was the PICO format:

P — Pupil or Problem. How would you describe the group of pupils or problem?
I — Intervention. What are you planning to do with your pupils?
C — Comparison. What is the alternative to the intervention/action/innovation?
O — Outcomes. What are the effects of the intervention/action/innovation?
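To illustrate with an invented example of my own: P – Year 9 pupils who struggle to retain key subject vocabulary; I – a weekly low-stakes retrieval quiz; C – the current practice of re-reading glossaries; O – scores on an end-of-term vocabulary test.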

There is a host of interesting ideas and resources for research leads on Gary’s blog. I might also recommend his discussion on some of the pitfalls and misconceptions that research leads would be wise to avoid:

The school research lead and pitfalls to be avoided.

The SUPER partnership (the ‘Power Rangers’ of researchED) is a school-university partnership for educational research between schools in the East of England and the Faculty of Education at the University of Cambridge. The SUPER partnership was bringing researchers and schools together years before researchED was even a twinkle in Tom Bennett’s eye. If you haven’t seen their blog, there’s a pretty comprehensive list of research resources that research leads would find useful:

SUPER blog: access to research

The talks by Clare Hood and Abi Thurgood were a fascinating insight into the challenges of the research lead role. Clare contrasted the fast-paced culture of accountability and ‘evidence’ that currently exists in many schools with the slower pace of academic research. She also talked of the value of having the Cambridge team as a ‘critical friend’, especially when formulating research questions across a teaching alliance. Abi identified some of the core aspects of her role: Formulating teacher and subject department research questions, the dissemination of practitioner research through a ‘marketplace’ format, links to the Cambridge MEd programme and the very interesting idea of bringing in student researchers through EPQ projects. Both talks emphasised the ‘sense of re-professionalism’ that came from teachers having opportunities to choose their own development goals rather than working to imposed targets.

One question that arose in my mind, as research leads become more commonplace in schools, is how we ensure an appropriate ethical framework for teacher-led research. University-based researchers have a range of ethical checks and guidelines to ensure research is conducted in a responsible way and minimises risk to participants, but few schools appear to have a research ethics policy. I have some ideas about this which I’ll endeavour to blog about before the next researchED research leads conference at Brighton in April. If people think it’s an area worth exploring, I’d be happy to present something and facilitate a discussion at a future event.

The issue of access to research was an enduring theme across the day. Vincent Lien made a reasoned and impassioned argument for teachers to have free access to education research. If you haven’t signed his petition yet, it’s available here:

Free access to research ejournals for teachers

Jonathan Sharples and Caroline Creaby gave examples of connecting teachers and researchers together. Jonathan related some of the barriers which exist in sharing and promoting the use of evidence in schools. He made the point that this process was likely to be slow going – NICE estimates it takes up to 15 years for research evidence to become embedded within medical practice. The ‘push’ of evidence-based research coming from universities will be slow to change education on its own. One of the roles for a research lead might be to help ‘pull’ evidence-based research into schools and foster the ‘links’ between universities and schools. Caroline gave some excellent examples of how teachers had been able to draw upon the expertise of university researchers through mechanisms as simple as emailing questions. However, I suspect these informal channels of communication – whilst excellent – are not really scalable across a large number of schools.

Academics tend to be very generous with their time and keen to talk to teachers about research, but without a more formal framework it’s difficult to see how this can genuinely make an impact across a school system. However, plans are in motion. Caroline is about to project manage ‘evidence for the frontline’ – involving the Coalition for Evidence-Based Education (CEBE) and the Institute for Effective Education (IEE) at York.

Essentially E4F appears to be a brokering service, linking up teachers with research expertise and resources. One of the things that they want to create is a map of expertise showing practitioners, researchers and other providers:

[Image: example map of practitioners, researchers and other providers. Source: http://cebenetwork.org/projects/evidence-frontline-%E2%80%93-research-hands-practitioners]

Ffion Eaton took up the role of research lead in 2013 and talked about embedding a research culture, through a whole-staff action research programme, within her school and teaching school alliance. Her school is part of the RISE project – EEF-funded research examining whether research leads help improve student outcomes in schools – and she described a little of the training, resources and support this had provided. One of the key challenges she related was maintaining communication of research across the school – it’s easy for research to go on in isolated pockets. One interesting idea was the development of a teaching and learning bulletin and mini ‘research conferences’ to help disseminate some of the research and findings across the alliance.

There was a sense of convergence in many of the arguments raised across the day: a key focus upon research leads playing a role in strengthening professional development and using evidence-based research as the starting point for improving student outcomes. This aligns well, I think, with the recent Sutton Trust report on improving professional development. I think the six principles of teacher feedback listed in that report serve as an effective summary of some of the major themes arising from the network day:

Developing Teachers: Improving professional development for teachers. January 2015

Six principles of teacher feedback

Sustained professional learning is most likely to result when:
• the focus is kept clearly on improving student outcomes;
• feedback is related to clear, specific and challenging goals for the recipient;
• attention is on the learning rather than on the person or on comparisons with others;
• teachers are encouraged to be continual independent learners;
• feedback is mediated by a mentor in an environment of trust and support;
• an environment of professional learning and support is promoted by the school’s leadership.

The Janus-faced role of a research lead

The metaphor of bottlenecks and bridges between research and practice-based knowledge emerged more than once over the course of the day. Janus was the Roman god of doorways, gateways and passageways. What emerged from the day, for me, was the ‘Janus-faced’ role of a research lead in schools: outward-looking towards the extensive and sometimes difficult-to-access research evidence that might inform practice; and inward-looking towards facilitating teachers investigating their own practice.

University researchers and classroom teachers expressed frustration at the ‘closed doors’ of each other’s institutions. At the moment, many of these links appear informal and rather haphazard, working through personal connections and chance encounters. Scaling these mutually profitable relationships across the school system will likely require more formal mechanisms by which schools can network with university researchers. Teachers need access to the broader evidence base to stimulate ideas, help formulate questions, gain research tools and provide a valid foundation for their own professional inquiry into their teaching. Researchers need access to schools, and sometimes encouragement to focus their research on the applied problems teachers face in trying to improve outcomes for their students. Janus was also associated with travel and trade, and research leads might adopt this aspect too: coordinating closer links and a greater trading of ideas between school-based and university researchers.


Has the marshmallow melted? Interventions involving executive functioning may have little effect.

What are executive functions?

Executive functioning is, in some ways, a pesky cognitive ability to define, as it’s implicated in so many different functions. It’s a hypothesised capacity underlying things like problem solving, reasoning, planning and organisation, inhibiting action or speech to fit context-appropriate norms, and controlling attention (amongst others).

These functions develop rapidly in early childhood, then slowly throughout adolescence and early adulthood – reaching a peak in our mid-twenties before gradually beginning to decline.

[Chart: when do executive function skills develop?] Source of image: http://developingchild.harvard.edu/key_concepts/executive_function/

The development of executive functioning is frequently related to (though not exclusively limited to) the development of the prefrontal cortex of the brain.

[Images: the prefrontal cortex (left hemisphere), lateral and medial views] Source of images: http://en.wikipedia.org/wiki/Prefrontal_cortex#Additional_images

It’s one of the areas of the brain that is much larger (relative to the rest of the brain) in human beings than in other primates and other hominid species. The main difference appears to be the greater myelination of neurones (i.e. the volume of white matter), which gives the prefrontal cortex greater connectivity with other areas of the brain in humans than in other species.

The prefrontal cortex plays a significant role in what psychologists call ‘working memory’, and the idea of ‘executive functioning’ is related to the ‘central executive’ component of that model of memory. Executive functioning is associated with a number of SEND conditions which teachers will have encountered or heard about, for example ADHD (attention deficit hyperactivity disorder). There’s some evidence to suggest that deficits in working memory, potentially related to poor executive functioning, underlie some of the difficulties children face in school. For example, Gathercole and Alloway (2007) report:

“Approximately 70% of children with learning difficulties in reading obtain very low scores on tests of working memory that are rare in children with no special educational needs.”

There may be considerable variance in the working memory function of children in a particular classroom. For example, Gathercole and Alloway (2007) suggest that:

“Differences in working memory capacity between different children of the same age can be very large indeed. For example, in a typical class of 30 children aged 7 to 8 years, we would expect at least three of them to have the working memory capacities of the average 4-year-old child and three others to have the capacities of the average 11-year-old child, which is quite close to adult levels.”
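To get a feel for what that spread looks like, here’s a minimal sketch of my own (an illustration, not Gathercole and Alloway’s data), assuming working memory standard scores are roughly normally distributed with a mean of 100 and a standard deviation of 15 – the usual convention for standardised ability tests:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Assumption: working memory standard scores are roughly normal with
# mean 100 and SD 15 (the usual convention for standardised tests).
# None of these figures come from Gathercole and Alloway's own data.
CLASS_SIZE = 30
N_CLASSES = 10_000  # simulate many classes to get stable averages

classes = rng.normal(100, 15, size=(N_CLASSES, CLASS_SIZE))

# Typical lowest and highest scorer in a randomly drawn class of 30
print(f"Typical lowest scorer:  {classes.min(axis=1).mean():.0f}")  # ~69
print(f"Typical highest scorer: {classes.max(axis=1).mean():.0f}")  # ~131

# About 1 pupil in 10 falls beyond +/-1.28 SD (the 10th/90th percentiles),
# i.e. roughly three pupils at each extreme in a class of 30.
bottom = (classes < 100 - 1.28 * 15).mean() * CLASS_SIZE
print(f"Pupils per class below the 10th percentile: {bottom:.1f}")  # ~3
```

Even in this idealised model, the gap between the strongest and weakest pupil in a typical class is over four standard deviations – before considering any SEND-related difficulties at all.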

Perhaps the most famous example of a test of executive functioning is the ‘Marshmallow Test’ by Walter Mischel. In these studies a child is offered a choice: a small immediate reward (e.g. one marshmallow) or double the reward if they can wait around 15 minutes. What Mischel found in the follow-up studies was that the children who deferred gratification (i.e. waited for the bigger reward) rather than opting for immediate gratification showed different characteristics even years later.

Children who deferred gratification were rated as better able to handle stress, engage in planning and exhibit self-control as adolescents 10 years later, and went on to obtain higher SAT scores. These differences were still apparent when participants were followed up in their 40s.

Can we train executive functioning?

Given the importance of executive function in emotional regulation and higher cognitive abilities like memory and attention, there’s been considerable interest in whether such abilities can be trained in children. Certainly there have been attempts to train children’s working memory in the hope that it might help them achieve more in school, but these interventions are not straightforward.

For example, Melby-Lervåg and Hulme (2013) examine the claims of training programmes designed to boost working memory function. They report that some of these working memory training packages made fairly confident claims regarding their effectiveness: for example, that they could help children with ADHD, dyspraxia or ASD, boost IQ and improve school grades. The programmes themselves appeared to involve numerous computerised memory trials:

“However, these programs do not appear to rest on any detailed task analysis or theoretical account of the mechanisms by which such adaptive training regimes would be expected to improve working memory capacity. Rather, these programs seem to be based on what might be seen as a fairly naïve “physical– energetic” model such that repeatedly “loading” a limited cognitive resource will lead to it increasing in capacity, perhaps somewhat analogously to strengthening a muscle by repeated use.” *

The outcomes of the meta-analysis were not so supportive of these impressive claims. The authors suggest that although there appeared to be short-term improvements on both verbal and nonverbal working memory tasks, these gains did not last long, nor did they generalise to skills like arithmetic or word decoding. For attentional control, the effects were small to moderate immediately after training but had reduced to nothing at follow-up.

* Incidentally, this is one reason why I personally dislike the ‘growth mindset’ analogy of the brain being ‘like a muscle’. In many, many ways, it simply isn’t!

Ok – so ‘brain training’ programmes don’t appear to have lasting or generalisable effects on working memory, but what about other interventions specifically aimed at improving executive functioning? There’s certainly been a recent surge of interest in the idea of developing executive functioning in our pupils, linked with the whole notion of ‘character education’.

However, as the authors of a recent review, Jacob and Parkinson (2015), point out:

“Yet, despite this enthusiasm, there is surprisingly little rigorous empirical research that explores the nature of the association between executive function and achievement and almost no research that critically examines whether the association is causal. From the existing research it is not clear whether improving executive functioning skills among students would cause their achievement to rise as a result.”

The authors of the review suggest that interventions to increase executive functioning probably have little value unless they also help children achieve greater success in school. Thus they focused their meta-analysis on whether interventions designed to improve executive functioning actually cause improvements in achievement.

Interestingly, they found no significant difference between attention/inhibition and working memory measures in their correlation with student achievement: both correlated with achievement at around the 0.30 level. However, this relationship did not appear to be a directly causal one:

“… there is substantial evidence that academic achievement and measures of executive function are correlated—both at a single point in time and as predictors of future achievement, and for a variety of different constructs and age groups. Despite this, there is surprisingly little evidence that a causal relationship exists between the two. High levels of executive function may simply be a proxy for other unobserved characteristics of the child.”
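To make that ‘proxy’ idea concrete, here’s a toy simulation of my own (not from the review): a single unobserved trait drives both executive function scores and achievement, with no causal arrow between the two. The raw correlation comes out at around the 0.30 the review reports, yet controlling for the trait wipes it out:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200_000  # large sample so the estimates are stable

# Hypothetical unobserved trait that drives BOTH variables;
# neither variable causes the other.
trait = rng.normal(size=n)
exec_fn = 0.65 * trait + rng.normal(size=n)  # EF score: trait plus noise
achieve = 0.65 * trait + rng.normal(size=n)  # achievement: trait plus noise

print(f"Raw EF-achievement correlation: "
      f"{np.corrcoef(exec_fn, achieve)[0, 1]:.2f}")  # ~0.30

def residualise(y, x):
    """Strip the linear component of x out of y."""
    slope = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - slope * x

# Partial correlation controlling for the trait: correlate the residuals.
partial = np.corrcoef(residualise(exec_fn, trait),
                      residualise(achieve, trait))[0, 1]
print(f"Correlation after controlling for the trait: {partial:.2f}")  # ~0.00
```

In this toy world, training executive function would do nothing for achievement, because executive function isn’t doing any causal work – which is precisely the worry the review raises.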

So what might be the factor underlying both executive functioning and school achievement? The authors explore a range of possible factors:

“Once child background characteristics and IQ are accounted for, the association between executive function and achievement drops by more than two thirds in most of these studies and in most cases the conditional associations are close to zero.”

This suggests that school-based interventions focused on improving executive functioning will have a disappointing impact on achievement:

“The most effective school-based interventions designed to influence executive function have only had an impact on measures of executive function equal to around half a standard deviation (e.g., Raver et al., 2011). This means that under the best case scenario … interventions designed to improve executive function would only have the potential to increase future achievement by less than a tenth of a standard deviation (half of 0.15).”
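It’s worth spelling out the back-of-envelope arithmetic in that passage, using the figures as reported in the review (a best-case 0.5 SD intervention impact on executive function, and a conditional EF–achievement association of around 0.15 SD):

```python
# Figures as reported by Jacob and Parkinson (2015)
ef_gain_sd = 0.5      # best-case intervention impact on executive function (SD)
assoc_per_sd = 0.15   # SD of achievement per SD of EF, after controls

best_case = ef_gain_sd * assoc_per_sd
print(f"Best-case achievement gain: {best_case:.3f} SD")  # 0.075 SD
```

An effect of 0.075 standard deviations is indeed ‘less than a tenth of a standard deviation’ – a very modest return even under the best-case scenario.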

As well as regression analyses, they looked at randomised controlled trials that had attempted to assess the impact of executive function interventions. They found only five studies which specifically examined the effects of training on achievement and had a randomised design. These evaluated programmes such as ‘Tools of the Mind’, ‘Head Start REDI’ and the ‘Chicago School Readiness Project’.

These programmes varied in content, but tended to be taught as stand-alone, ‘skills-based’ approaches. For example, the REDI programme was delivered to pre-school children through weekly lessons and extension activities in which children were taught language skills, social skills, emotional understanding, self-regulation and aggression control by teachers trained in the ‘Promoting Alternative THinking Strategies’ (PATHS) curriculum. The review finds that none of these approaches appeared to directly improve student outcomes.

“The few random assignment studies which rigorously evaluate interventions designed to impact executive function provide some evidence that executive function can be influenced by intervention (most of the studies we reviewed showed some positive impacts on measures of executive function) but provide no compelling evidence that impacts on executive function lead to increases in academic achievement.”

One of the problems with evaluating these programmes is that they target multiple factors at the same time: the REDI intervention, for instance, targeted both executive functioning and school achievement. The authors make the point that:

“… if the intervention improved children’s ability to take tests, then children would perform better on both measures of executive function and on measures of achievement. If the improved ability to take tests was not accounted for in the analyses, the improvement in executive function would be correlated with the improvement in achievement.”

The problems with applying psychological research in schools

Children vary in many ways, so it should come as no great surprise that we find psychological differences between kids who do well at school and ones who struggle. However, the fact that children’s school attainment correlates with cognitive ability ‘X’ or attribution ‘Y’ doesn’t tell us whether trying to train ability ‘X’ or change attribution ‘Y’ will actually help.

That’s one of the problems when trying to apply psychological findings to education: simply identifying cognitive or affective differences between children isn’t actually all that useful. This kind of purely psychological research is a different kettle of fish from the applied psychology of designing effective interventions to raise achievement. At the moment there’s a lot of hype around cognitive and attributional variables which correlate with school outcomes.

As usual, the cart ends up before the horse, and interventions are implemented in schools before there’s good evidence about whether they do any good. It’s important to remember that interventions based on identified psychological differences may not necessarily lead to benefits for children. For instance, an intervention may be costly and irrelevant if another factor causes both the differences detected and the improved outcomes.

Of course, when schools have invested a great deal of time, effort and training in such an intervention scheme, it becomes easy for them to convince themselves that they are seeing a genuine difference. But we can’t rely on anecdotal evidence or professional experience alone here! The evidence to date suggests that teachers should be highly sceptical of training or intervention programmes which claim to raise achievement by targeting executive functioning.
