The ‘artificiality’ of teaching

In my last post, I argued that the universality and spontaneous development of teaching lead to the conclusion that teaching is a natural ability. The post generated some really interesting responses, but one from @informed_edu made a direct attempt to answer the question I posed to ITE providers: What is the ‘technical’ or ‘professional’ body of knowledge or set of skills required of an effective teacher, which can actually be taught?

Whilst teaching may have evolved as a natural cognition (based on a functioning theory of mind) there are many aspects of modern teaching which are artificial. I use the term ‘artificial’ not in a pejorative sense but in the same sense as Herbert Simon in ‘The Sciences of the Artificial’.

“Natural science is knowledge about natural objects and phenomena. We ask whether there cannot also be “artificial” science – knowledge about artificial objects and phenomena.” (p. 3)

The modern context of teaching – and education more widely – is a cultural phenomenon, created by human beings rather than emerging directly from natural selection. Whilst, at its core, teaching may be a ‘natural ability’, it operates through artificial, culturally derived systems. David Geary, in ‘Educating the Evolved Mind’, suggests that these systems have emerged to meet a specific cultural demand.

He suggests that secondary cultural knowledge (e.g. science, literature, art) emerged from cognitive and motivational systems evolved to support what he calls primary or ‘folk knowledge’: things like folk psychology (interest in people), folk biology (interest in living things) and folk physics (interest in inanimate objects), which directly aided survival and reproduction in our evolutionary past. As humans developed ways to retain these cultural artefacts across generations, he proposes, an ever-growing gap opened between ‘folk knowledge’ (which people rapidly and easily acquire) and the theories and knowledge base of secondary knowledge (which people need to be explicitly taught).

Geary argues that where this gap between ‘folk knowledge’ and secondary cultural knowledge becomes large enough, schools emerge as cultural institutions. The function of schools, he suggests, is to close the gap between the biologically primary knowledge children rapidly learn for themselves and the secondary knowledge needed for living in society.

“The need for explicit instruction will be a direct function of the degree to which the secondary competency differs from the supporting primary systems.” (p. 35)

Teaching, I argued in the last post, largely involves ‘folk psychology’ – a rapidly acquired ability to pass on cultural knowledge across generations. Beyond knowledge of secondary culture itself (subject knowledge), the question is whether there is a body of secondary knowledge required for teaching. What are the ‘technical’ or ‘professional’ elements of knowledge or sets of skills required of an effective teacher?

David’s response to ‘Is teaching a natural ability?’

You can read @informed_edu’s response in the comments to my last post here. He argued that:

“some form of teaching comes naturally to most people, but that doesn’t mean that the version that comes most naturally is always most effective”

To summarise (I hope fairly), David suggested a range of secondary knowledge required of a teacher which benefits from formalised instruction:

  1. Planning lessons
  2. Teaching strategies
  3. Curriculum design
  4. Assessment design
  5. How children learn
  6. Differences between students: e.g. special educational needs
  7. Behaviour management
  8. Drawing upon and evaluating research evidence
  9. Mentoring, coaching and leading teachers

Lastly he says:

“In most professions, the acquisition of this is significantly more formalised and then you achieve recognition for having learned it. You also get the options to delve into more specialist areas and receive well-planned learning and the opportunity to be recognised for that. This helps professions both build, recognise and then use the knowledge – easier to identify who to turn to for advice if there is a better system for recognising expertise.”

A body of knowledge for teaching

I agree with some of David’s points; others I think are a function of subject knowledge (which I readily concede is learnt); and others I don’t think can be argued to form part of a professional body of knowledge or a technical competence.

Planning:

How useful is lesson planning? It seems to me an empirical question – what sort of planning actually improves student outcomes? Over my career, I’ve been asked to use a wide variety of formats to record what I intended to teach. I must confess I’ve often found it easier to write the lesson plan after I’d taught the lesson!

That’s somewhat flippant, but there are a couple of reasons I’m sceptical about the value of lesson planning. Firstly, the ‘impossible task of mind reading’ means I cannot always anticipate where students will have difficulty or achieve understanding easily. I concede that very early on in my PGCE year I needed to sketch thoughts on paper before trying things out in the classroom. However, great teaching, in my opinion, requires responsive flexibility rather than explicit planning. Such flexibility undoubtedly comes from practice rather than any formal instruction – thus I reject the notion that planning forms a technical competence. I strongly suspect that teachers are using their subject knowledge and theory of mind to actually teach, and that a great deal of ‘planning’ is done merely for appearances or for accountability.

Secondly, I’m far from certain that planning a lesson is even the right level of focus. Two great blogs exploring this:

A lesson is the wrong unit of time via @BodilUK

The problem with lesson planning via @LearningSpy

Teaching strategies

This, for me, is the most problematic area in CPD. Throughout my career, I’ve been told that one strategy or another was effective or necessary: I’ve been told I need to differentiate for kinaesthetic learners; told to limit the amount of teacher talk I use; told to use lollipop sticks as a way of randomly sampling the class for questioning; told to make children write lesson objectives; told to divide lessons into starters, mains and plenaries; etc. The problem is that, beyond whatever evolved natural ability a teacher possesses, many teaching strategies passed on through CPD are little more than gimmicks.

Even much more plausible, research-based strategies – e.g. Assessment for Learning (AfL) – tend to have devolved to the level of bureaucratic box-ticking rather than any useful strategy: for example, requiring teachers to report ‘progress’ through (non-existent) sub-levels and to generate targets by which a student will reach the next (non-existent) sub-level. Beyond the observation that great teachers formatively assess learning as they teach (which I argue is natural ability), how useful were any of the ‘strategies’ that arose from AfL? This argument is explored in more depth by David Didau:

AfL: Cargo cult teaching? via @LearningSpy

Five-year-olds ask questions to check understanding whilst teaching, and great teachers do this too. I have no doubt that AfL provides a description of great teaching – my question is: does explicit teaching of AfL strategies (or any other teaching strategy) actually improve teaching?

Curriculum design

I don’t doubt that curriculum design involves a strong understanding of a subject – thus, I concede this involves secondary knowledge. However, teachers do not receive any explicit training in curriculum design, so it’s hard to argue that it forms a pillar of professional status.

I think there are some useful things teachers can learn about curriculum design. Two that were most useful for me were these:

Trivium 21st Century via @Trivium21c

One scientific insight for curriculum design via @joe_kirby

The first of these provided an interesting model for thinking about curriculum design. Martin’s many examples of grammar, dialectic and rhetoric across different subject areas were based firmly within tradition, but I think the model works because it reflects the inheritance, selection and variation which drive cultural evolution (though this is opinion). Joe’s ideas, based upon our profession’s nascent understanding of how children learn, are an excellent example of where a body of professional knowledge might exist to be exploited.

Assessment design

There’s certainly a body of technical knowledge required for effective assessment design. I guess my major issue is that teachers aren’t taught it! A great starting point, in my opinion, is the work of Daisy Christodoulou.

Guide to my posts about assessment via @daisychristo

How children learn

This is another area where I’m in complete agreement with David. Education appears only recently to have discovered that scientific ideas about how humans learn didn’t stop in the 1920s and 30s with Jean Piaget and Lev Vygotsky.

There really is a body of knowledge here that teachers might benefit from. For an accessible starting point, teachers could do far worse than Deans for Impact’s ‘The Science of Learning’:

The Science of Learning

Obviously, I’m greatly biased in this regard – having a background in psychology and also writing a blog principally about trying to apply psychological research to the classroom! However, the sheer number of myths circulating in education regarding how children learn makes it something of a justifiable priority in my opinion.

Differences between students

There’s certainly a body of technical knowledge related to SEND. Merely understanding the overwhelming number of labels applied to children requires some explicit explanation.

Personally, I’m cautious of many of the labels that underlie differentiation strategies in lessons. There’s some evidence that such labels may not always benefit the children involved – for example:

Does the dyslexia label disable teachers?

There are certainly children who struggle within the large classes and the (inevitably) limited personal attention of mainstream education, but differentiation strategies have suffered from the same myths and unevaluated claims as other teaching strategies. There are some areas of SEND where technical knowledge about how children learn might be very applicable, for example from Susan Gathercole and Tracy Alloway:

Understanding Working Memory: A Classroom Guide

However, the best starting point for understanding individual differences in learning is probably understanding how children learn in the first place (see above). Otherwise, I’d argue the majority of day-to-day classroom differentiation is running off the same ‘subject-knowledge-mediated-through-theory-of-mind’ as the rest of teaching.

Behaviour management

This is an interesting area of current debate. Many schools run behaviour management systems entirely along operant conditioning lines (a branch of behaviourism which includes using rewards and/or punishments). These behaviourist approaches have some fundamental limitations, however: for example, older children typically see through attempts to manipulate their behaviour with rewards, and praise can undermine effort if used carelessly:

Praise and rewards – use thoughtfully!

For the most part, dealing with children employs a teacher’s theory of mind more than it applies an explicit body of technical knowledge, in my opinion. There are certainly helpful starting points for new teachers – mainly that children respond to clear and consistent expectations:

Tom Bennett’s top ten tips for maintaining classroom discipline via @tombennett71

Some of the behaviour challenges faced by teachers are due merely to being a new face. Contrary to the saying ‘familiarity breeds contempt’, there’s evidence that repeated interaction with the same person tends to bring about more positive attributions about that person (in psychology this is called the ‘mere-exposure’ effect). I’ve often wondered how many behaviour issues in schools stem from staff turnover and timetable instability.

On balance, most of this involves ‘folk psychology’, and I remain unconvinced that there is an explicit ‘body of knowledge’ underlying a positive classroom climate which teachers need to learn in order to be effective. Much more important, in my view, is the identification of effective school-wide systems for supporting teachers in developing relationships and routines in classrooms.

Drawing upon and evaluating research evidence

There’s certainly a great deal of secondary cultural knowledge within research methods (including things David mentioned, like statistics and how to assess validity and reliability).

Whilst organisations like the EEF were founded to provide teachers with better information about effective interventions and teaching strategies, it’s not clear how effectively this information is being used in schools. Pilot projects like ‘Evidence for the Frontline’ seek to overcome the gap between research and teaching by brokering partnerships – and perhaps this will help teachers access and implement more effective interventions. Finally, a fantastic grassroots organisation is achieving this dialogue between researchers and teachers at an international level, and I’d encourage any teacher to get involved with researchED.

At the very least, we might hope that greater professional understanding of research would help teaching avoid the gimmicks and myths which bedevil education. Therefore, perhaps some level of understanding of research methods – and most certainly statistics – would be useful for teachers. A big question is the degree to which we might expect all teachers to be ‘research literate’ and whether/what sort of teacher ‘research’ has demonstrable practical value in developing effective teaching.

Mentoring, coaching and leading teachers

David mentioned a range of things related to developing teaching – from ITT to school leadership. I’m going to be very sceptical and suggest a null hypothesis: none of the systems for developing teachers is more or less effective than any other; it is merely that some teacher trainers, coaches and school leaders have a well-functioning theory of mind which makes them effective (regardless of the system they use). In essence, like an argument put forward regarding counselling, I think the ‘Dodo bird verdict’ applies to different models of mentoring, coaching and leadership.

In conclusion

I’ve argued that, beyond knowledge and understanding of the subject to be taught – and some experience at teaching it – teaching is essentially a natural ability arising from a well-functioning Theory of Mind. David mounted an interesting challenge to this, setting out a list of essential knowledge and skills required for effective teaching which require explicit teaching and practice.

I disagree with some of his suggestions. As a profession, I think we have a whole swathe of questionable teaching strategies and interventions, debatable behaviour management guidance and uncertain differentiation advice – much of whose apparent success probably reflects a natural ability to teach (plus a bit of practice) rather than the effectiveness of the strategies themselves. I remain unconvinced that these require or benefit from formalised, explicit instruction.

However, for some of the areas he raised, I agree: how children learn, curriculum and assessment design, and some level of statistics and research methods would appear especially fertile ground for developing a formalised body of professional knowledge and skills. What’s remarkable, perhaps, is the relative absence of these features from teacher training and professional development.


Is teaching a ‘natural ability’?

What characteristics does a teacher need to be effective?

The answer appears elusive, as various reviews find that most teacher characteristics have only a marginal impact on student attainment.

For example, looking at maths teaching, Rockoff et al (2004) examined the relationships between student outcomes and a range of teacher characteristics, including graduate education, general cognitive ability, content knowledge, personality traits (like introversion or extraversion) and self-efficacy. They found no significant relationship between graduate education and teacher effectiveness; cognitive ability and general self-efficacy were only marginally related; traits like conscientiousness and extraversion were not significantly related; only maths knowledge for teaching was more strongly related to maths achievement. All in all, the correlations between teacher characteristics and student outcomes are typically very small or non-existent.

The summary of research published by the Sutton Trust (2014) suggested that there were two main factors linked to improving student outcomes:

  • teachers’ content knowledge, including their ability to understand how students think about a subject and identify common misconceptions
  • quality of instruction, which includes using strategies like effective questioning and the use of assessment

Clearly, ‘content knowledge’ is a technical competence. We’re not born knowing scientific theories, the rules of grammar or mathematical laws, therefore it must be something that teachers need to develop prior to (or during) their classroom practice.

However, subject knowledge appears necessary but not sufficient. For example, looking at science teaching, Sadler et al (2013) found that subject knowledge alone did not secure improved outcomes for students when the material involved common science misconceptions. They suggested that a teacher’s ability to identify students’ common misconceptions was also required for students to make gains (and even then this only helped where children had strong prior maths and reading ability).

Effective teachers appear to anticipate how students think about their subject and to use this insight to ask effective questions. However, to what extent does effective teaching involve a technical or professional set of knowledge and skills, developed through professional development or classroom practice, and to what extent is it a natural ability?

Theory of Mind and the ability to teach

The ability to infer how other people think and feel is referred to by psychologists as ‘Theory of Mind’ (ToM). ToM enables a person to explain and predict the behaviour of other people by inferring the mental states which cause that behaviour. The philosopher Daniel Dennett calls this the ‘Intentional Stance’ – understanding that other people’s actions are goal-directed and arise from their beliefs or desires. From his studies of imitation in infants, Andrew Meltzoff suggests ToM is an innate understanding that others are “like me” – allowing us to recognise the physical and mental states apparent in others by relating them to our own actions, thoughts and feelings. In essence, ToM is a bit like the ability to use your own mind to simulate and predict the states of others.

Strauss, Ziv and Stein (2002) proposed that ToM is an important prerequisite for teaching. A few other animals, for example chimpanzees, appear to teach conspecifics in a limited way, but only humans appear to teach using the ability to anticipate the mental states of the individual being taught. They point to the fact that the ability to teach arises spontaneously at an early age without any apparent instruction and that it is common to all human cultures as evidence that it is an innate ability. Essentially, they suggest that despite its complexity, teaching is a natural cognition that evolved alongside our ability to learn.

They taught pre-school children how to play a board game, and then observed each child’s behaviour when teaching another child. They identified a range of teaching strategies:

  • Demonstration – teacher actively shows learner what to do, e.g., moves the train on the track and stops at a station
  • Specific directive – teacher tells the learner what to do right now, e.g., “Take this”
  • Verbal explanation – teacher explains to the learner a rule or what he/she should be doing, e.g., “You got green. You can take the cube”
  • Demonstration accompanied by a verbal explanation
  • Questions aimed at checking learner’s understanding – “Do you understand?” “Remember?”
  • Teacher talk about own teaching – teacher shares with the learner his/her teaching strategies, e.g., “I will now explain to you how to play”
  • Responsiveness – teacher responds to utterances or actions of the learner, e.g., answers questions when a learner errs and demonstrates or verbally repeats a rule

One or more of these has likely been the basis of a CPD session you recently attended!

They also found that 5-year-olds appeared to have a more advanced understanding of teaching compared to 3-year-olds: relying more on verbal explanations, responding more to the learner’s difficulties, and asking questions aimed at checking the learner’s understanding.

Implications

Firstly, if teaching is a natural ability functioning from a competent ToM, it might have implications for teacher recruitment. Given the very limited correlations with academic qualifications (beyond a degree in a relevant subject), cognitive ability and various personality traits – might some sort of advanced ToM test better predict teacher effectiveness?

ToM tests for adults on the autistic spectrum have been developed, for example by Baron-Cohen et al (2001). These involve identifying emotional/mental states from pictures of people’s eyes:

[Image: Eyes ToM test]

However, Baron-Cohen has suggested that a functioning ToM involves both affective and cognitive components – the ability to respond emotionally to another’s mental states and the ability to understand another’s mental state. People likely vary along a spectrum on both of these components. Baron-Cohen has suggested that psychopaths, for example, probably have a very high-functioning cognitive ToM (required to be able to deceive and manipulate people) but ‘zero-negative’ empathy for others.

I think great teachers probably need both: the ability to model other people’s thought processes (e.g. how students think about a subject), balanced by an empathetic concern for others.

Secondly, teaching involves the ‘impossible task’ of mind reading – not only identifying gaps in a student’s knowledge, beliefs or skills but also whether they hold incomplete or distorted ideas. In addition, great teachers make countless, unconscious inferences about students’ emotional and motivational states (are they attentive, tired, bored or confused) and react intuitively to these states. Teaching is such a complex task it is probably impossible to ‘do it consciously’.

If teaching is essentially a natural ability, then potentially a great deal of the CPD available to teachers is a waste of time! It could be argued that a great deal of teacher professional development (e.g. on questioning and providing feedback) involves developing the sorts of skills demonstrated by the average 5-year-old. Perhaps this is why teachers fail to attract the sort of respect granted to other professionals! Therefore, an important question needs an answer – from ITE providers (of all types) or the proposed College of Teaching – exactly what is the ‘technical’ or ‘professional’ body of knowledge or set of skills required of an effective teacher, which can actually be taught?

 


Perpetual motion machines do not exist

[Image: Fludd machine, 1618]

Robert Fludd’s description of a perpetual motion machine from the 17th century. The idea involved water held in a tank above the apparatus driving a water wheel which, through a complex set of gears, rotates an Archimedes screw which draws the water back up to the tank.

 


The idea of creating a machine which can continue indefinitely without any source of energy to power it is one that has fascinated inventors since the astronomer and mathematician Bhāskara II described a wheel which could run forever in the 12th century. The failure to build such a machine hasn’t stopped people from trying – or even from filing patent applications – whether using magnets, gravity or buoyancy as the basis for perpetual motion. However, no attempt to create one has ever worked.

Perpetual motion machines do not exist, because no one has built a machine which can continue indefinitely without some external source of energy to keep it going.

It would be very, very odd for someone to claim that they did exist, simply because inventors periodically try to create one. I’d certainly accept that they have tried to create a perpetual motion machine (and thus far failed), or created a machine which they claimed possessed perpetual motion (but didn’t really) – but to say that perpetual motion machines ‘exist’ surely implies that someone has built one that actually works.

I recently read a short series of blogs defending the idea of ‘learning styles’. The idea at the heart of learning styles is that information provided to a student in a form that matches their ‘style of learning’ will lead to improved learning.

Coffield et al (2004) reviewed over a dozen attempts to measure differences in learning ability so that instruction can be matched to this ‘style of learning’. Certainly, all of the systems have tried to define learning styles, but the question is whether any of them actually work. They found that whilst a few of them provided some relatively valid measures of differences between people, none demonstrated that attempting to match teaching to this style would have any benefit.

They conclude that whilst learning style theorists have conducted small-scale, weakly controlled studies to support their claims, none of them produce systems with any clear evidence that using them will advantage learners. None of the proposed systems work.

Pashler et al (2008) helpfully define what a learning style is supposed to be.

“The term ‘‘learning styles’’ refers to the concept that individuals differ in regard to what mode of instruction or study is most effective for them. Proponents of learning-style assessment contend that optimal instruction requires diagnosing individuals’ learning style and tailoring instruction accordingly.”

This makes it clear that merely identifying some differences in people isn’t sufficient for the label of a ‘learning style’ to be applied. As well as being able to measure some sort of psychometrically reliable differences, a learning style also needs to show what mode of instruction would be most effective for an individual. They also note that very few studies have actually tested whether proposed learning styles actually improve learning when instruction is tailored to them. Where these studies have been conducted, several found results which contradicted their claims. The data so far do not support the idea of learning styles.
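
Pashler et al’s standard can be made concrete. It demands a crossover interaction: classify learners by supposed style, randomly assign each group to each mode of instruction, and show that each group learns best under its matched mode. Here’s a minimal sketch of that check in Python – all the scores are invented for illustration, and deliberately show the pattern the literature keeps finding (differences between people, but no matching benefit):

```python
# A toy check of the Pashler et al. (2008) evidential standard for learning
# styles: a genuine 'style' requires a crossover interaction, i.e. each
# group of learners does best under its matched mode of instruction.
# All numbers below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical test scores from a 2x2 design: (learner group) x (instruction mode)
scores = {
    ("visual_learners", "visual_instruction"): [72, 68, 75, 70],
    ("visual_learners", "verbal_instruction"): [73, 70, 74, 71],
    ("verbal_learners", "visual_instruction"): [63, 66, 61, 64],
    ("verbal_learners", "verbal_instruction"): [62, 65, 63, 66],
}

cell = {k: mean(v) for k, v in scores.items()}

# A crossover requires BOTH groups to beat their own mismatched condition
visual_matched = (cell[("visual_learners", "visual_instruction")]
                  > cell[("visual_learners", "verbal_instruction")])
verbal_matched = (cell[("verbal_learners", "verbal_instruction")]
                  > cell[("verbal_learners", "visual_instruction")])

if visual_matched and verbal_matched:
    print("Pattern consistent with a learning-styles crossover interaction")
else:
    # Here one group simply scores higher across the board: a real
    # difference between people, but NOT evidence for learning styles.
    print("No crossover: group differences, but no benefit from style-matching")
```

Even a genuine crossover in a sample this small would need a proper significance test, of course; the point is simply what shape the evidence has to take before ‘learning styles exist’ becomes a defensible claim.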

No attempt at ‘learning styles’ has ever succeeded. It would seem bizarre, therefore, to claim that learning styles exist. I’d certainly accept that people have tried to describe learning styles (and failed) or that some people claim a system of learning styles is effective (when they don’t have evidence to support that view).

Cognitive scientists like Daniel Willingham and teachers like Tom Bennett seem on pretty safe ground making the claim that they don’t exist. The burden of proof is on those claiming that learning styles exist – let them produce the data showing both valid measures which differentiate learners and that matching instruction to these differences enhances learning. If robust evidence to support this comes to light in the future, then I am certain both would change their position (as would I) – that’s the nature of science.

In the meantime, however, claiming that ‘learning styles exist’ smacks almost of what Irving Langmuir called pathological science: ‘an area of research where “people are tricked into false results … by subjective effects, wishful thinking or threshold interactions”’. Langmuir identified pathological science, like perpetual motion machines, as fruitless ideas that simply will not “go away” despite repeated failure.

Given that attempts to identify effective learning styles are hardly new (they go back at least to the 1980s), I have sympathy with the frustration in this article by Tom Bennett, which argues that VAK – the most notorious attempt at learning styles in UK education – is a ‘zombie’ idea in education that simply fails to die.


How Should Students Revise? by @Nick_J_Rose

Starter for Five

Name: Nick Rose
Twitter name: @Nick_J_Rose
Sector: Secondary
Subject taught (if applicable): Psychology
Position: Leading practitioner for psychology and research
What is your advice about? How should students revise?

1: Practice testing: Use low-stakes tests, quizzes or reviews on a regular basis. Encourage students to test themselves frequently as part of their revision.

2: Distributed practice: Revision over time leads to better recall than cramming. Consider opportunities to revisit previously taught material when planning schemes of work.

3: Interleaving: Consider encouraging students to alternate their practice of different kinds of items or problems when revising rather than sticking to one topic.

4: Elaborative interrogation: Consider encouraging explanatory questioning to promote learning; for example by prompting students to answer “Why?” questions.

5: Self-explanation: Consider encouraging students to explain to themselves how new information is related to known information, or the steps required to solve a problem.



What do UK teachers think of some common arguments about pedagogy?


An informal survey about what UK teachers think about some of the more contentious arguments surrounding pedagogy.

If you’d like to take the survey you can click the link below. The responses to this second survey will be analysed at a later date:

https://www.surveymonkey.com/r/DDPJ9RC

Broadly inspired by this paper by Juhani Tuovinen, the survey aims to explore some of the views and values teachers hold about teaching. It will ask you to provide some basic demographic info; the following page will then ask you to rate your view of some opinions about teaching and learning.

The original survey ran from 5:45 pm (GMT) on Monday 26th October 2015 until 5:45 pm on Tuesday 27th. The results of the original survey can be found here:

Results and analysis – part 1

Results and analysis – part 2

Results and analysis – part 3

Results and analysis – part 4


The science of learning

[Image: Deans for Impact]

Here’s a really clear, short and applicable summary of the key areas of cognitive science which can be applied to the classroom:

The Science of Learning

The summary looks at six questions about learning, giving a quick overview of the science and some ideas about how it might apply in schools and classrooms. It effectively summarises, in six pages, a great deal of what I’ve written about over the last couple of years! Here are some links for further reading on some of the key points of the summary:

1. How do students understand new ideas?
2. How do students learn and retain new information?
3. How do students solve problems?
4. How does learning transfer to new situations?
5. What motivates students to learn?
6. What are some common misconceptions about how students think and learn?

I’m looking forward to seeing future work by Deans for Impact – and I’ll be keeping an eye on their blog for more excellent resources!

Posted in Psychology for teachers | Tagged , , , | 7 Comments

How do we develop teaching? A journey from summative to formative feedback

researchED: Research leads network day, Brighton. April 18th 2015

The beginning of the new term means it’s taken a little while to get around to blogging about the great event on Saturday. This tardiness is additionally poor given that I was one of the presenters! However, there are some great summaries of the network day already out there. Two that caught my eye were:

ResearchED Brighton: inside out not bottom up via @joeybagstock

#rEDBright – summary of a research lead conference by a faux research lead! via @nikable

One of the themes that emerged from the day, for me, was the growing dissatisfaction with unreliable summative judgements of teacher quality and the view that schools would be better off looking at ways of formatively evaluating teaching through some sort of disciplined inquiry process.

From judgements of teaching quality …

Daniel Muijs opened the day with the provocative question ‘Can we (reliably) measure teacher effectiveness?’ His answer, which drew upon evidence from the MET project, suggested that we could, though each of the tools for measuring teacher effectiveness has strengths and limitations. He analysed the reliability of VA data, observations and student surveys in turn.

Muijs suggested that the focus on student outcomes had liberated teachers to experiment more with their teaching – which is true, but it’s clear that naïve treatment of this data has created problems of its own. For example, the focus on outcomes presupposes a straightforward relationship between ‘teacher input’ and ‘student output’ (something Jack Marwood takes issue with here). Indeed, Muijs quoted Chapman et al (in press) as saying that teaching probably accounts for only around 30% of the variance in such outcome measures.

In summative data of teacher performance, the inherent uncertainty within the measurement is expressed in the form of confidence intervals. A range of teacher VAM scores might look like this:

[Image: teacher VAM scores plotted with confidence intervals]

The vertical bars represent the confidence interval associated with each teacher’s score.

In essence, they indicate that we can only have reasonable certainty that a teacher’s score lies somewhere between the top and bottom of each line. The marginal differences between the midpoints along these lines are not a reliable comparison (even though intuitively they may appear so). In the example above, it is reasonable to say that Teacher B produced higher value-added scores than Teacher A, but the overlap in the confidence intervals for Teachers C and D means that we cannot readily distinguish between them.
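
To see how few comparisons actually survive the confidence intervals, here’s a small Python sketch. The scores and standard errors are invented, chosen to mirror the example above (A clearly below B; C and D indistinguishable), and the overlap rule is a rough screen rather than a formal test:

```python
# Rough sketch: which pairs of teachers can we distinguish once confidence
# intervals are taken into account? Scores and standard errors are invented.
from itertools import combinations

teachers = {
    "A": (-0.30, 0.10),  # (VAM score, standard error)
    "B": (0.25, 0.12),
    "C": (0.05, 0.15),
    "D": (0.10, 0.14),
}

def ci95(score, se):
    # Approximate 95% confidence interval: score +/- 1.96 standard errors
    return (score - 1.96 * se, score + 1.96 * se)

for (n1, (s1, se1)), (n2, (s2, se2)) in combinations(teachers.items(), 2):
    lo1, hi1 = ci95(s1, se1)
    lo2, hi2 = ci95(s2, se2)
    overlap = not (hi1 < lo2 or hi2 < lo1)
    # Overlapping intervals are a conservative screen, not a formal test,
    # but they are enough to show that mid-table rankings are mostly noise.
    print(f"Teacher {n1} vs {n2}: {'cannot distinguish' if overlap else 'distinguishable'}")
```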

However, this uncertainty can get ignored when teachers are ‘held to account’. The use of VAM scores has led to some pretty egregious practice in the US, e.g. ranking teachers by their scores in a kind of ‘league table’ of teachers. In one instance, the LA Times appeared to completely ignore the presence of confidence intervals and published the data like this:

[Image: LA Times ranking of teachers by VAM score, without confidence intervals]

This implied that the estimate of teaching quality was somehow a perfectly precise point rather than a range, creating spurious comparisons between teachers.

It struck me that schools in the UK risk falling into the same trap – for example, when interpreting the sorts of VA graphs UK teachers might be familiar with:

[Image: CEM value-added analysis by year]

In the graph above, we can reasonably say that value-added was higher in 2005 than in 2012 (for what that’s worth), but can we readily distinguish between the scores for 2006 and 2011? The presence of a ‘dot’ above or below a mid-line may encourage the same sort of simplistic judgement as the LA Times made: tiny variations in scores being interpreted as indicating something about teacher quality.

Indeed, even where a statistically significant deviation in VA scores is found, it doesn’t necessarily tell us whether the result is educationally important. Jack Marwood identifies this problem with the statistics used within RAISEonline to make judgements about schools:

The explanations of significance testing in RAISE are misleading and often completely wrong. In the current version of RAISE, readers are told that, “In RAISEonline, green and blue shading are used to demonstrate a statistically significant difference between the school data for a particular group and national data for the same group. This does not necessarily correlate with being educationally significant. The performance of specific groups should always be compared with the performance of all pupils nationally.”

The key phrase used to say, “Inspectors and schools need to be aware that this does not necessarily correlate with being educationally significant.” But even this does not make it clear how different statistical significance is to everyday significance. Everyday significance roughly translates as ‘important’. Statistical significance does not mean ‘importance’.
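
The gap between the two kinds of significance is easy to demonstrate: with enough pupils, a trivial difference reliably produces a ‘significant’ p-value. A toy illustration in Python (invented data; assumes scipy is installed):

```python
# Statistical significance is not importance: with a large enough sample,
# a tiny difference yields a small p-value. All data below are invented.
import random
from scipy import stats

random.seed(1)
# Two 'schools' whose true means differ by 0.5 marks on a test with sd ~15
school_a = [random.gauss(100.0, 15.0) for _ in range(20000)]
school_b = [random.gauss(100.5, 15.0) for _ in range(20000)]

t_stat, p_value = stats.ttest_ind(school_a, school_b)
diff = sum(school_b) / len(school_b) - sum(school_a) / len(school_a)

print(f"p = {p_value:.4f}")                  # comfortably 'significant'
print(f"difference = {diff:.2f} marks")      # educationally negligible
print(f"effect size d = {diff / 15.0:.2f}")  # roughly 0.03
```

A difference of half a mark on a test with a standard deviation of fifteen marks matters to nobody, however small the p-value gets.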

The worst influence of this focus on the summative judgement of ‘teacher quality’ is that policy discussion falls into the ‘McNamara Fallacy’ – as described brilliantly in a recent blog by Carl Hendrick:

… there is a deeply hubristic arrogance in the reduction of complex human processes to statistics, an aberration which led the sociologist Daniel Yankelovitch coining the term the “McNamara fallacy”:

1. Measure whatever can be easily measured.
2. Disregard that which cannot be measured easily.
3. Presume that which cannot be measured easily is not important.
4. Presume that which cannot be measured easily does not exist.

Sadly, some of these tenets will be recognisable to many of us in education – certainly the first two are consistent with many aspects of standardised testing, inspections and graded lesson observations. This fiscal approach has been allowed to embed itself in education with the justification often given to ‘use data to drive up standards’. What we should be doing is using “standards to drive up data”.

The problem of using data to drive up standards was further highlighted in Rebecca Allen’s presentation. Drawing on her work with Education Datalab, she presented the problem of judging schools or teachers using the concept of expected progress.

I’ve written about this report before, but it’s worth reiterating the major points raised by their analysis.

[Image: “Trajectory”]

[Image: “Reality”]

From KS1 only about 9% of children take the expected ‘trajectory’ to KS4 outcomes and the assumption of linear progress becomes progressively weaker as you move from primary to secondary schools.

“Our evidence suggests that the assumptions of many pupil tracking systems and Ofsted inspectors are probably incorrect. The vast majority of pupils do not make linear progress between each Key Stage, let alone across all Key Stages. This means that identifying pupils as “on track” or “off target” based on assumptions of linear progress over multiple years is likely to be wrong.

This is important because the way we track pupils and set targets for them:

• influences teaching and learning practice in the classroom;
• affects the curriculum that pupils are exposed to;
• contributes to headteacher judgements of teacher performance;
• is used to judge whether schools are performing well or not.

Allen suggested that we shouldn’t give students an attainment target grade to reach, but a range – to reflect the inherent uncertainty in predicting a student’s ‘expected progress’.
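
One way a tracking system might implement Allen’s suggestion: instead of the single grade implied by ‘expected progress’, report the middle 50% of outcomes actually achieved by previous pupils with the same starting point. A minimal sketch, with invented outcome data:

```python
# Sketch of reporting a target *range* rather than a single expected grade,
# reflecting the real spread of outcomes for pupils with the same starting
# point. The outcome data below are invented for illustration.

# GCSE points achieved by previous pupils who shared this pupil's KS2 score
historical_outcomes = [28, 34, 40, 40, 46, 46, 46, 52, 52, 58, 58, 64]

def percentile(sorted_xs, p):
    # Simple nearest-rank percentile; fine for a sketch
    k = max(0, min(len(sorted_xs) - 1, round(p * (len(sorted_xs) - 1))))
    return sorted_xs[k]

outcomes = sorted(historical_outcomes)
low, mid, high = (percentile(outcomes, p) for p in (0.25, 0.5, 0.75))

print(f"Point 'expected progress' target: {mid}")
print(f"More honest target range (25th-75th percentile): {low}-{high}")
```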

So, given all the problems with reliability, why are we trying to measure effective teaching? One answer is so that schools can identify and sack incompetent teachers, and presumably reward effective teachers through performance-related pay (PRP). However, I’ve argued that the lack of reliability in the existing measures risks perpetuating a ‘cargo cult’ approach to school improvement.

It may be possible – through a rigorous combination of aggregated value-added scores, highly systematised observation protocols (Muijs suggested we’d need around 6-12 observations a year) and carefully sampled student surveys – to give this summative judgement the degree of reliability it would need to be fair rather than arbitrary. Surely the problem is that, for summative measures of effective teaching to achieve that rigour and reliability, they would become so time-consuming and expensive that the opportunity costs would far outweigh any benefits.

Therefore, it seems to me that these summative measures are unlikely to result in significant improvements to schools. It’s a cliché for politicians to announce that ‘the quality of an education system cannot exceed the quality of its teachers’. One retort to this might be:

‘Useful judgements of teacher quality cannot exceed the reliability of the data’

The stakes attached to statistical analysis of school or teacher data need to be moderated in line with the reliability of that data.

… to developing teaching through ‘disciplined inquiry’.

After coffee, the network day turned away from the issues of evaluation and assessment towards exploring ways in which teachers could use research evidence within what Dylan Wiliam has called ‘disciplined inquiry’.

Andy Tharby led a thought-provoking session discussing the work that Durrington High School have been doing with Dr Brian Marsh at the University of Brighton. He made the point that inquiry projects were nothing new within the school, but that previous versions of teacher-led projects had been overly reliant upon reflection as the sole source of evaluation. Through the partnership with Dr Marsh, they have developed more evidence-informed CPD opportunities (like an Edu Book club), started to disseminate blogs and bulletins through teachers’ pigeonholes, and three teachers have taken on year-long projects aligned with the school’s improvement plan.

There’s no doubt that these partnerships between schools and HEIs can provide mutual benefits but, as Tharby was quick to point out, the sort of informal relationship that can be struck up between an individual school and university-based academics isn’t really scalable in a way that could transform schools.

Given the difficulty in accessing the evidence base and the problems for teachers trying to sort myths and neuro-nonsense from useful insights into learning, Lia Commissar presented an interesting resource that could be developed for teachers.

The trial, involving psychologists and neuroscientists answering questions from classroom teachers, runs until the 9th May.

James Mannion presented another resource which teachers might wish to explore: Praxis, a professional development platform designed to help school-based practitioners develop their practice through small-scale research inquiry.

Again, the issue of moving practitioner-led research beyond reflection was suggested as a way in which teachers could regain ‘agency’ within their professional development. Mannion expressed his hope that Praxis could become a forum for teachers to collaborate in their efforts to optimise the outcomes for their students within their individual contexts.

He also proposed ‘Praxis’ as a new term to encapsulate all the various forms of teacher inquiry: lesson study, action research, disciplined inquiry, etc. (though I’d prefer a less value-laden term, even if it appears ‘less exciting’). However, I dare say that teachers will continue to use a plethora of terms to describe essentially the same thing regardless of what anyone proposes!

Developing research tools for teacher inquiry

My session drew on the recent Sutton Trust report:

Developing Teachers: Improving professional development for teachers

You can download the presentation here: Research tools for teacher inquiry.

My argument was that the drive to find measures of effective teaching might be better focused upon developing reasonably valid ways for teachers to investigate their own teaching than upon pure accountability. I made the point that most of the measures developed for accountability purposes don’t necessarily provide very useful information for teachers trying to improve!

Observations: The value of these is often lost where schools use Ofsted-style grading structures – which are principally summative. To improve teaching, we need focused and formative observation protocols that provide challenging yet supportive feedback.

Value-added data: This data, however reliable it may or may not be, appears too infrequently in the academic year to provide regular feedback on teaching. In general, the assessment data which teachers report to line managers and parents – whilst more frequently collected – isn’t often useful as a formative form of feedback. There’s also the problem that we may inadvertently distort the data we generate about students if there’s a target attached to it – what’s sometimes called Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” A better bet might be to encourage teachers to use question analysis to identify areas where their teaching or understanding of assessment could be developed.
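
For a sense of what question analysis might look like in practice, here’s a minimal sketch (with invented marks): compute the ‘facility’ of each question across the class and flag the weakest as candidates for re-teaching:

```python
# Question-level analysis of a class test: per-question facility
# (proportion of available marks gained) flags topics that may need
# re-teaching. Question names and marks below are invented.

# question -> (marks available, marks gained by each student)
results = {
    "Q1 (cells)":       (3, [3, 2, 3, 3, 2, 3]),
    "Q2 (osmosis)":     (4, [1, 0, 2, 1, 1, 0]),
    "Q3 (enzymes)":     (3, [2, 3, 2, 3, 2, 2]),
    "Q4 (respiration)": (5, [2, 1, 2, 3, 1, 2]),
}

for question, (available, marks) in results.items():
    facility = sum(marks) / (available * len(marks))
    flag = "  <-- review this topic" if facility < 0.5 else ""
    print(f"{question}: facility {facility:.0%}{flag}")
```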

Student surveys: These are potentially a cheap and easy source of reasonably valid feedback. However, even a good instrument can deliver poor data and surveys need strong protocols to provide effective feedback. I’ve written quite a lot about how the MET survey can be used as a formative tool within coaching here: Investigating teaching using a student survey

Reflection / self-report: Cognitive biases mean that pure reliance on ‘reflection’ is likely to have minimal impact on teaching practice; there likely needs to be an element of external challenge. However, I suggested using a behaviour log, based on Hayden’s research into behaviour in schools, as a self-report tool for teachers developing behaviour management or classroom climate. There’s more information about this here: Talking about the behaviour in our lessons

Work scrutiny: I made the point that techniques like content analysis, which might be used to gain insights into changes over time in books, are difficult and therefore tend to be conducted in a fairly superficial way (e.g. what colour pen the teacher is using). It may be possible, however, to create some simple protocols to look at specific features of students’ work to see whether they are responding to changes you’ve made in teaching.

Finally, I discussed the problems inherent in evaluating whether mentoring interventions were having the desired effect on student behaviour or effort in lessons.

Rob Coe’s ‘poor proxies for learning’ will likely be familiar to many who read this blog.

I added a few other proxies which I suspect may be unreliable when it comes to establishing whether a mentoring intervention is having the desired effect.

Discipline points: The problem is that a reduction may simply reflect the teacher ‘losing faith’ in the behaviour management system rather than improvements in behaviour.

Reward points: Where intrinsic motivation is lacking, we instinctively employ extrinsic motivators. Where intrinsic motivation is fine, we shouldn’t (and usually don’t) use extrinsic motivators – which is why ‘naughty’ kids tend to get more reward points than ‘good’ kids.

Effort grades: Attitude-to-learning scores may simply reflect teacher bias – e.g. Rausch, Karing, Dörfler and Artelt (2013) found that personality similarity between teachers and their students influences teacher judgement of student achievement.

Attitude surveys: Can be easily distorted by social desirability bias. There is also a big gap between a change in reported attitudes and actual changes in behaviour.

However, I ended by wondering whether there might be some behaviour that is ‘proximal enough’ that we could use simple structured observation techniques in lessons to evaluate changes in student behaviour. Drawing on the idea of a ‘time on task’ analysis, I asked the group to think about some behaviours which might indicate the presence or absence of ‘thinking hard’ in a lesson.
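
One simple structured technique is momentary time-sampling: at fixed intervals the observer scans the room and codes what each sampled student is doing. A sketch of how the tally might work – the behaviour codes and observations here are hypothetical, and the real difficulty (as the group discussed) is deciding which observable codes indicate ‘thinking hard’:

```python
# Momentary time-sampling sketch: at each scan the observer records a
# behaviour code for each sampled student. Codes and data are hypothetical.

ON_TASK = {"writing", "answering", "discussing_task"}

# One scan = list of (student, behaviour_code) pairs
scans = [
    [("S1", "writing"), ("S2", "chatting"), ("S3", "writing")],
    [("S1", "answering"), ("S2", "writing"), ("S3", "staring")],
    [("S1", "writing"), ("S2", "discussing_task"), ("S3", "writing")],
]

observed = on = 0
for scan in scans:
    for _, behaviour in scan:
        observed += 1
        on += behaviour in ON_TASK  # True counts as 1

print(f"On-task across {len(scans)} scans: {on}/{observed} ({on/observed:.0%})")
# Caveat: 'time on task' is itself one of the poor proxies; the codes would
# need to target behaviours plausibly linked to thinking hard.
```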

Measuring teaching: The need for a shift of focus

Developing simple research tools could help teachers move beyond ‘pure reflection’ as the basis of challenging their teaching practice and provide an empowering way for teachers to improve the quality of their own teaching.

However, it can be difficult to develop and validate these tools within the context of a single school, and the initial time demands can be high. I commented that I had barely got started in piloting these with teachers (which created some very flattering spontaneous laughter from the audience), but even adapting the MET student survey to a formative purpose is a tricky task if you don’t want to lose whatever validity it possesses.

Nevertheless, I think it’s worth trying. It seems a plausible hypothesis that developing teacher inquiry tools which provide reasonably valid, developmental feedback could improve outcomes for students and foster greater professional autonomy for teachers.


Research tools for teacher inquiry

Excellent day out in Brighton for the ResearchED: Research Leads network day.

I’ll blog about the day properly later in the week, but the presentation is here:


Developing research tools for teacher inquiry v1.3


Ethical issues in teacher-led research

At the last research leads event at Cambridge, I raised the issue of ethical considerations where teachers engage in research. Here are some thoughts:

First of all, it’s important not to over-state the seriousness of ethical issues in teacher-led research within schools. The sorts of things that most teachers ‘investigate’ are close enough to the activities which take place as part of ordinary teaching in schools that ethical issues are not a major concern: Trying out a new writing frame, creating a formative assessment to identify misconceptions, developing questioning techniques, adapting materials to help EAL students access learning, implementing low-stakes quizzes to help consolidate previously taught topics, etc. Teachers already have a duty of care towards their students and should simply exercise professional judgement when planning innovations in their teaching. Thus, in my opinion, the vast majority of teacher inquiry projects likely require no additional ethical permissions or protections.

What probably is important, given that most teachers don’t have a background in social science or educational research, is that teachers have some guidance on ethical issues they might not have considered and a clear pathway to seek advice if ethical issues arise as part of their inquiry projects. Thus, I think it would be useful for schools to have sensible guidelines laid out in advance: A straightforward policy on research ethics – agreed by the school and shared with teachers who are engaged in research.

There’s a great starting point for discussion about this on Gary Jones’ blog:
Jones (2015) We need to talk about researchEDthics – School Research Leads and Ethical Research and Evidence-Based School Cultures

I’ll mainly be drawing from two other resources which research leads might find useful to look at:
BERA: Ethics and Educational Research (Hammersley and Traianou, 2012)

BPS: Code of Human Research Ethics (The British Psychological Society, 2010)

I’m going to talk about the three main areas of ethics in research that teachers may need to consider when planning research projects: Minimising harm, informed consent and confidentiality.

Minimising Harm

Jones suggests we should consider both ‘Beneficence’ and ‘Non-maleficence’ when planning research in schools: That the action or intervention is intended for the benefit of the individuals involved and that we have considered carefully any broader negative consequences which might arise. For the overwhelming majority of teacher inquiry projects, a teacher’s evidence-informed professional judgement will suffice, I would argue. However, there may be circumstances where a greater or more formal consideration of potential harm should be carried out.

In my view, this becomes a potential issue when considering psychological interventions with students – rather than teacher inquiry related to ordinary classroom activities. For example, I recently wrote about some of the concerns raised by teachers and academics about the rise of therapeutic interventions within schools. Little is known about the potential for adverse reactions or contraindications for psychological interventions.

The BPS guidelines state that psychological researchers should always consider research from the standpoint of the participants; with the aim of “avoiding potential risks to psychological well-being, mental health, personal values, or dignity.”

The BERA article makes the point that consideration of harm may include reputational damage for the teachers and potentially the institution involved:

“Minimising Harm. Is a research strategy likely to cause harm, how serious is this, and is there any way in which it could be justified or excused? Note that harm here could include not just consequences for the people being studied (financial, reputational, etc) but for others too, and even for any researchers investigating the same setting or people in the future.”

I would argue that there need to be some additional safeguards in place when school-based interventions are based on therapeutic models like cognitive behavioural therapy or mindfulness. Whilst ‘side-effects’ are unlikely to be as serious as they would be for a medical intervention, there’s a question as to whether children suffering from clinical depression or an anxiety disorder are benefited or harmed by some of these interventions – particularly given that the Annual Report of the Chief Medical Officer (2012) suggested that as many as 1 in 10 children have a clinically diagnosable mental disorder. Some consideration of monitoring throughout such projects might be ethically justified here, along with possibly some sort of screening.

Another aspect of education research broadly encompassed by ‘harm’ might be where children are removed from regular lessons as part of a project. There is an ‘opportunity cost’ to a child’s education if they are removed from subject lessons. Do children benefit from missing maths lessons to partake in a character education project? What about history lessons? Or RE lessons? Good intentions alone are not enough here, I would argue. Schools should engage in a more formal cost-benefit analysis when considering such major changes to the curriculum – and demand up front the evidence for proposed benefits, and the monitoring arrangements during the project which will identify whether intended benefits are being delivered compared to the ‘costs’ imposed on that child’s regular education.

Informed consent

These issues are unlikely to feature in the majority of teacher inquiry projects, as the activities students undertake are part of lessons within the normal context of classroom practice. Where a research project involves appreciable risk or significant opportunity cost however, the issue of informed consent may become more important.

In psychological research, participants should be given sufficient information about the research so that they can make an informed decision as to whether they want to take part. Part of the protection for participant autonomy is also their right to withdraw from a research study. With children under the age of 16, there’s a further protection: the consent of the individual’s parents or guardians should normally also be sought.

However, the BPS guidelines make an exception when it comes to research conducted in schools.

“In relation to the gaining of consent from children and young people in school or other institutional settings, where the research procedures are judged by a senior member of staff or other appropriate professional within the institution to fall within the range of usual curriculum or other institutional activities, and where a risk assessment has identified no significant risks, consent from the participants and the granting of approval and access from a senior member of school staff legally responsible for such approval can be considered sufficient.”

In schools this will likely be the head teacher.

BERA frames this aspect as respecting autonomy.

“Does the research process show respect for people in the sense of allowing them to make decisions for themselves, notably about whether or not to participate? This principle is often seen as ruling out any kind of deception, though deception is also sometimes rejected on the grounds that it causes harm.”

Where permission of parents is considered a justified step, there is a further question of whether passive or active consent should be obtained. For example: Noret (2012) Guidelines on Research Ethics for Projects with Children and Young People

“Active consent refers to the use of a consent form, whereby parents/guardians are required to sign and return a form indicating their consent for their child to participate in the study. Non-return of this slip is taken as an indication that the parent(s)/ guardian(s) do not want their child to participate in the study.
“Passive consent on the other hand, requires participants to return the slip only if they do not want their child to participate in the study. Non-return of the slip is then taken as consent for the child/young person to participate in the study (Ellickson & Hawes, 1989).”

Where research involves very young children, sensitive topics or intrusive methods, or where students will travel outside the school environment, active consent may be more ethical than passive consent.
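
To illustrate the difference (my own hypothetical wording, not drawn from Noret): an active consent slip might read ‘Please sign and return this form if you are happy for your child to take part’, with non-return treated as a refusal; a passive consent slip might read ‘Please return this form only if you do not wish your child to take part’, with non-return treated as consent.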

Confidentiality

This is an area where schools already have some policies in place. For example, schools have to work within Data Protection and Freedom of Information legislation. There’s a summary of how these impact on schools by BECTA here: Data protection and security – a summary for schools

As such, most teacher inquiry projects will simply need to comply with these existing policies. Teachers collect information about students as part of their professional role, and things like data security and filming in the classroom likely already have guidelines which teachers are expected to follow. Research which involves collecting data about students and falls within these existing rules is unlikely to raise additional ethical considerations.

However, there are some general principles which might inform professional judgement about confidentiality. For example, the BPS suggests that information obtained from and about participants during an investigation is confidential unless otherwise agreed in advance.

“Participants in psychological research have a right to expect that information they provide will be treated confidentially and, if published, will not be identifiable as theirs. In the event that confidentiality and/or anonymity cannot be guaranteed, the participant must be warned of this in advance of agreeing to participate.”

On the other hand, the BERA article argues that confidentiality may not always be desirable in educational research:

“Protecting Privacy. A central feature of research is to make matters public, to provide descriptions and explanations that are publicly available. But what should and should not be made public? What does it mean to keep data confidential, and is this always possible or desirable? Can and should settings and informants be anonymised in research reports?”

This becomes a potentially important issue when collecting data from and about teachers within a school. In this case, I find myself agreeing with the BPS on the issue of confidentiality: it should be a reasonable expectation that data generated by research will be treated as confidential and only published in forms where participants’ identities cannot be ascertained.

For example, in piloting student surveys as a way of investigating teaching, there were several issues relating to confidentiality which I considered before starting out.

Firstly, it seemed important to the process that student responses to the survey should be anonymous – therefore, student names were not collected on the questionnaires and the classroom teacher would step out of the room whilst students completed the surveys. This helps ensure that students give a more honest reflection of their views than they might if they could be identified.

The second issue relates to the use of this data: how it would be treated needed to be agreed with the teachers concerned. In this case, the decision was taken to keep student feedback on teaching confidential, in order to build trust and confidence in the process.

In summary

In the main, ethical considerations in teacher-led research can be left to the evidence-informed professional judgement of the teachers involved. There are barriers enough already without adding lengthy ethical procedures to the list! It’s worth remembering that schools are likely to already have policies relating to some of these issues that teachers must follow – and there’s no point over-complicating or duplicating these policies. However, in my view, there are some types of school projects which may be worthy of more formal ethical consideration. Plans involving aspects of psychological therapy, or significant changes to a child’s curriculum, should be reasonably interrogated for the expected benefits versus the opportunity costs involved.

Given that few teachers have a background in social science research, it might be reasonable for research leads to run a session, or distribute a summary, on ethical issues in education research to better inform those judgements.

Ethical issues in research cannot always be anticipated, however. Whilst the vast majority of the research projects teachers undertake are going to be unproblematic, it might be wise to have some simple guidelines and procedures laid out so teachers know where they can seek advice during their research. This will serve to protect both students and the teachers involved in research, should any ethical problems emerge as they carry out their investigations.


Developing research leads in schools: The Janus-faced role of a research lead

researchED: Research leads network day, Cambridge. March 14th 2015

[Image: Janus coin. Source: commons.wikimedia.org/wiki/File:Janus_coin.png]

In a brief stop-over between Sydney and Dubai, Tom Bennett was surprised and delighted to discover so many teachers prepared to ‘give up a Saturday’ to come and explore the role of a research lead. It shouldn’t come as such a surprise. The event, expertly organised by Helene Galdin O’Shea, involved a combination of thought-provoking speakers and the opportunity to meet exceptional colleagues – making it extraordinary CPD. Here’s my reflection on the day and a (dreadfully inadequate) summary of some of the talks I attended.

Philippa Cordingley opened with a keynote identifying some effective ways to lead research within schools. Schools are increasingly creating directed time for professional inquiry (whether it’s called coaching, lesson study or action research) and she related some interesting case studies of how schools were structuring these opportunities. She stressed the importance of having a research base which reflected ‘the things that wake teachers in the middle of the night’ – and this is an important issue. Finding, accessing and disseminating the sort of educational research which can be applied by classroom teachers is one of the great challenges.

Philippa referred to the bottleneck between research and practice-based knowledge and the work that CUREE has been doing to overcome this:

A number of resources which may be useful to research leads: http://www.curee.co.uk/browse-resources
… and a host of Education research links: http://www.curee.co.uk/category/5/27
… she also described some of the tools and research resources, like ‘route maps’, which CUREE use to help teachers engage with research: http://www.curee.co.uk/node/2908

Philippa emphasised the need for professional learning to involve creating new ideas and strategies with a clear focus on student outcomes. It wasn’t so important whether teachers engaged in their own research or the research of others; rather, it was the process of challenging prevailing orthodoxies and supporting teachers as learners which had the greatest impact.

One of the most interesting points she raised was the importance of school leaders actively participating in and modelling this process. To support the argument, Philippa referred to research by Robinson, Hohepa and Lloyd (2009). Their meta-analysis of school leadership and student outcomes identified that the factor which had the greatest influence was promoting and participating in teacher learning and development:

[Figure from Robinson, Hohepa and Lloyd (2009). Source: http://www.educationcounts.govt.nz/__data/assets/pdf_file/0015/60180/BES-Leadership-Web-redacted.pdf]

The fact that this appeared to have the greatest influence on student outcomes raised a few eyebrows. It might be expected that school leaders participating in and promoting professional development would have an impact on outcomes for students, but the fact that it appeared to be the most influential factor was surprising.

The day then presented some difficult choices! I missed Jude Enright’s talk on enquiry-based practice (which she has very kindly blogged about here), Beth Grenville-Giddings’ talk on setting up a journal club, and Robert Loe talking about how we might explore and measure ‘relationships for learning’. I hope someone will blog about these.

Gary Jones led the second session with a focus on how we can help teachers ask better questions about their teaching. Drawing parallels with evidence-based medicine and clinical practice, he related a number of ways we can move from ‘background questions’, which tend to be poorly formulated and difficult to evaluate, towards ‘foreground questions’. In essence, he gave some useful ideas on how teachers might operationalise their own research in more productive ways:

One model, which I really liked, was the PICO format:

P — Pupil or Problem. How would you describe the group of pupils or the problem?
I — Intervention. What are you planning to do with your pupils?
C — Comparison. What is the alternative to the intervention/action/innovation?
O — Outcomes. What are the effects of the intervention/action/innovation?
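
To make this concrete, here is a hypothetical question of my own in PICO format (not one of Gary’s examples): P — Year 9 pupils who struggle to structure extended writing; I — introducing weekly paragraph-planning scaffolds; C — regular lessons without the scaffolds; O — the quality of extended answers in end-of-term assessments. The vague background question ‘How do I improve writing in my class?’ becomes something specific enough to evaluate.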

There is a host of interesting ideas and resources for research leads on Gary’s blog. I might also recommend his discussion on some of the pitfalls and misconceptions that research leads would be wise to avoid:

The school research lead and pitfalls to be avoided.

The SUPER partnership (the ‘Power Rangers’ of researchED) is a school-university partnership for educational research between schools in the East of England and the Faculty of Education at the University of Cambridge. The SUPER partnership was bringing researchers and schools together years before researchED was even a twinkle in Tom Bennett’s eye. If you haven’t seen their blog, there’s a pretty comprehensive list of research resources that research leads would find useful:

SUPER blog: access to research

The talks by Clare Hood and Abi Thurgood were a fascinating insight into the challenges of the research lead role. Clare contrasted the fast-paced culture of accountability and ‘evidence’ that currently exists in many schools with the slower pace of academic research. She also talked of the value of having the Cambridge team as a ‘critical friend’, especially when formulating research questions across a teaching alliance. Abi identified some of the core aspects of her role: formulating teacher and subject-department research questions, disseminating practitioner research through a ‘marketplace’ format, linking to the Cambridge MEd programme, and the very interesting idea of bringing in student researchers through EPQ projects. Both talks emphasised the ‘sense of re-professionalism’ that came from teachers having opportunities to choose their own development goals rather than working to imposed targets.

One question that arose in my mind, as research leads become more commonplace in schools, is how we ensure an appropriate ethical framework for teacher-led research. University-based researchers have a range of ethical checks and guidelines to ensure research is conducted responsibly and risks to participants are minimised, but few schools appear to have a research ethics policy. I have some ideas about this which I’ll endeavour to blog about before the next researchED research leads conference at Brighton in April. If people think it’s an area worth exploring, I’d be happy to present something and facilitate a discussion at a future event.

The issue of access to research was an enduring theme across the day. Vincent Lien made a reasoned and impassioned argument for teachers to have free access to education research. If you haven’t signed his petition yet, it’s available here:

Free access to research ejournals for teachers

Jonathan Sharples and Caroline Creaby related examples of connecting teachers and researchers. Jonathan outlined some of the barriers which exist in sharing and promoting the use of evidence in schools. He made the point that this process was likely to be slow going – NICE estimate it takes up to 15 years for research evidence to become embedded within medical practice. The ‘push’ of evidence-based research coming from universities will be slow to change education on its own. One of the roles for a research lead might be to help ‘pull’ evidence-based research into schools and foster the links between universities and schools. Caroline gave some excellent examples of how teachers had been able to draw upon the expertise of university researchers through mechanisms as simple as emailing questions. However, I suspect these informal channels of communication – whilst excellent – are not really scalable across a large number of schools.

Academics tend to be very generous with their time and keen to talk to teachers about research, but without a more formal framework it’s difficult to see how this can genuinely make an impact across a school system. However, plans are in motion. Caroline is about to project-manage ‘Evidence for the Frontline’ (E4F) – involving the Coalition for Evidence-Based Education (CEBE) and the Institute for Effective Education (IEE) at York.

Essentially E4F appears to be a brokering service, linking up teachers with research expertise and resources. One of the things that they want to create is a map of expertise showing practitioners, researchers and other providers:

[Image: example map of expertise (preview). Source: http://cebenetwork.org/projects/evidence-frontline-%E2%80%93-research-hands-practitioners]

Ffion Eaton took up the role of research lead in 2013 and talked about embedding a research culture within her school and teaching school alliance through a whole-staff action research programme. Her school is part of the RISE project – EEF-funded research examining whether research leads help improve student outcomes in schools – and she described a little of the training, resources and support this had provided. One of the key challenges she related was maintaining communication of research across the school – it’s easy for research to go on in isolated pockets. One interesting response was the development of a teaching and learning bulletin, and mini ‘research conferences’, to help disseminate research and findings across the alliance.

Many of the arguments raised across the day seemed to converge on a key theme: research leads playing a role in strengthening professional development and using evidence-based research as the starting point for improving student outcomes. This aligns well, I think, with the recent Sutton Trust report on improving professional development, and the six principles of teacher feedback listed in that report might serve as an effective summary of some of the major themes arising from the network day:

Developing Teachers: Improving professional development for teachers. January 2015

Six principles of teacher feedback

Sustained professional learning is most likely to result when:
• the focus is kept clearly on improving student outcomes;
• feedback is related to clear, specific and challenging goals for the recipient;
• attention is on the learning rather than to the person or to comparisons with others;
• teachers are encouraged to be continual independent learners;
• feedback is mediated by a mentor in an environment of trust and support;
• an environment of professional learning and support is promoted by the school’s leadership.

The Janus-faced role of a research lead

The metaphor of bottlenecks and bridges between research and practice-based knowledge emerged more than once over the course of the day. The Roman god of bridges, doorways and passageways was Janus. What emerged from the day, for me, was the ‘Janus-faced’ role of a research lead in schools: outward-looking towards the extensive and sometimes difficult-to-access research evidence that might inform practice; and inward-looking towards facilitating teachers investigating their own practice.

University researchers and classroom teachers expressed frustration at the ‘closed doors’ of each other’s institutions. At the moment, many of these links appear informal and rather haphazard – working through personal connections and chance encounters. Scaling these mutually profitable relationships across the school system will likely require more formal mechanisms by which schools can network with university researchers. Teachers need access to the broader evidence base to stimulate ideas, help formulate questions, gain research tools and act as a valid foundation for their own professional inquiry into their teaching. Researchers need access to schools, and sometimes encouragement to focus their research on some of the applied problems teachers face in trying to improve outcomes for their students. Janus was also associated with travelling and trading, and research leads might adopt this aspect too – coordinating closer links, and a greater trading of ideas, between school-based and university researchers.
