Developing research leads within schools: ‘the good we oft might win’

‘Our doubts are traitors, and make us lose the good we oft might win, by fearing to attempt.’

Measure for Measure, Act I, Scene IV

ResearchED Research Leads Network Day, 13th December 2014

It is perhaps indicative of the character of the researchED movement that the ‘Research Leads Network Day’ keynote was a ‘reality check’ delivered by Professor Rob Coe – warning that we risk being little more than another education fad and reminding us that there is, as yet, no robust evidence supporting the idea of research leads in schools.

That might have been the end of the whole event – we might have marched off, heads low, back to our schools in shame (which would have been a pity given the large turnout and the distance some people had come) – but fortunately, having warned us against hubris, he offered some plausible suggestions for ways forward.

Lost in translation

There is a growing body of evidence which can usefully inform pedagogical practice and the interventions schools implement. Whilst the conclusions we draw must remain tentative, we can at least start by selecting strategies which have a higher probability of succeeding. However, making good use of this evidence requires some understanding of the methodologies used in the studies and meta-analyses of education research, and of their strengths and limitations. An example of how easily the nuance can be lost was recently related by Sam Freedman in ‘A tale of two classrooms’, published by Demos:

“The dangers of this approach were illustrated with the toolkit entry on teaching assistants. Initially, teaching assistants were rated as having no impact. This was picked up by various newspapers, unsurprisingly given that around £4 billion a year is spent on teaching assistants. As a result the EEF was forced to put out a clarifying statement explaining that, while research suggests that on average teaching assistants do not have a positive effect on attainment, other studies showed that if deployed in certain ways teaching assistants can have a very significant impact.”

Attempts to summarise ‘what works’, like the EEF toolkit or Hattie’s Visible Learning, are the starting point rather than the final word in discussions about school improvement or teacher development. What research evidence can tell us is that ‘something worked for the researchers’ – but it still requires interpretation and significant adaptation to fit within the context of a particular school. Indeed, the circumstances where those positive or negative effects were found may be radically different from our own ‘real life’ setting.

That’s not to say such evidence lacks value. Where we have a choice of approaches we might take (even if one option is to do nothing new), where the context is similar enough to our own setting that the effect might plausibly generalise, and where we agree with the outcome measures used to establish the effectiveness of a strategy – then these high-impact strategies represent ‘good bets’. It’s just that they aren’t ‘plug and play’ compatible.

These ‘toolkits’ of education research require significant self-assembly – and that requires some additional tools not supplied in the box – not least, an understanding of research methods.

This was a point highlighted by Alex Quigley and Carl Hendrick – who announced themselves the ‘Cannon and Ball’ of researchED (they are a great double act, though I can’t repeat Carl’s catchphrase related to individuals who haven’t read King Lear). In a briefly serious segment of their presentation, they stressed the role of a research lead as something of a translator. Teachers enter the profession from a wide range of disciplines. Some come with an understanding of the methodologies used in education research, but most do not. There’s also a lot of research out there. Some of it relates to the sorts of questions that teachers and school leaders have – but finding it and assessing its quality is no mean feat. The task of triaging research papers and reviews, pointing towards and summarising the ones that might best provide the starting points for useful discussions within schools, is a role that a research lead might plausibly assume.

The point was raised, though, that this aspect of a research lead’s role is significantly hampered by the inaccessibility of research papers. Opening up access to education journals for teachers should be a priority – especially where the research has been conducted using public money. Perhaps this is something a National College could usefully champion.

Mind the gap

The gap between theory and practice was a key issue highlighted in David Weston, Sam Freedman and Keven Bartle’s talk on the role of research in ITT provision. ‘Front-loaded’ theory, as featured in traditional ITT provision, tends to go unheeded by beginning teachers as they are swamped by the ‘survival instinct’ of proving competence in the classroom. On the other hand, in the absence of theory, teaching becomes a sterile set of routines – a set of ‘signature pedagogies’ which lack flexibility and form the basis of a false consensus about ‘what works’ in teaching.

This divorce between theory and practice risks further undermining the professionalism of teaching. Somehow, we need to marry the best of theory (which Sam identified as a combination of cognitive science, behavioural psychology and educational research such as Wiliam’s or the EEF’s) with the practising of basic skills and strategies across the early career development of teachers. The desired outcome is that teachers emerge into the profession as ‘informed consumers of research’ – able to articulate questions about their practice and to engage critically with future developments in education research. One possible route forward involves greater collaboration between HE and schools – and perhaps this is an area which research leads can help facilitate.

An example of this was related by Daniel Harvey, who talked about the evolving CPD offer within his school, supported by Dr Phil Wood at the University of Leicester. He explained how teachers have engaged in regular sessions to support ‘deliberate enquiry’ – small-scale research projects which are presented back to the whole staff and written up as reports. He identified some of the pitfalls of engaging teachers with this kind of research model; not least how contrived groupings and short timescales undermined the first attempt (both features have been changed in the adapted model they are currently trialling). The ambition appeared to be to produce high-quality research reports, broadly equivalent to Master’s level, perhaps with a view to publishing them as a collection of work.

In truth, however, for all the virtue of taking part in such deliberate enquiry – whether that is formal lesson study or the more informal coaching model we use here at Turnford – I suspect that the utility of trying to raise these projects to publishable standard is limited.

Firstly, the informal link to HE that Daniel’s school enjoys isn’t scalable across a school system (I suspect even Dr Phil’s beneficence would be exhausted eventually). Secondly, there is the problem of the time demands of producing high-quality written research reports, especially against the backdrop of the high teacher workload in England presented in Emily Knowles’ session. Lastly, if a teacher is going to invest time reading research, large numbers of context-specific, small-scale case studies or case series undertaken by teachers are unlikely to have the sort of generalisable impact we want.

At best, teachers focussing on the action research of other teachers might provide ideas for research projects they might undertake themselves; at worst, I wonder whether this might simply embed the problem of ‘signature pedagogies’ identified by Keven Bartle. One doesn’t have to look very far to find examples where such self-contained professional practice appears merely to embed some of the worst misconceptions of the profession.

Many of the problems within our profession have arisen, I believe, because teachers have been talking to themselves rather too much – reflecting on and recycling current practice rather than looking outward towards the wider body of evidence emerging from psychology and education research. Indeed, if teachers are going to find the time to read, they could do worse than read ‘Why don’t students like school?’ by Daniel Willingham (which has been the focus of our newly formed ‘Book Club’ this term).

Alternatively, ‘What makes great teaching?’ by Rob Coe, Cesare Aloisi, Steve Higgins and Lee Elliot Major, would be a worthwhile read for any teacher – and this review formed the focus of the final session of the Research Leads Network day.

The report itself deserves more space than I can give it here (Sean Allison has written a good summary of the highlights).

For me, the issue that the report underlines is the need for a shift of emphasis when it comes to making assessments of teachers: from high-stakes, summative judgements towards low-stakes, formative and developmental feedback. To that end, might a research lead’s role be to develop a toolkit of ways for teachers to investigate their own teaching (whether for self-evaluation, peer coaching or mentoring)?

One suggestion was using student surveys as a way of investigating teaching. We’ve been developing the use of the MET student survey as a source of insight into teaching. Within the confidential framework of coaching, a number of teachers have used an adapted version of the survey with their classes – and used the results as the basis for selecting a focus for whole-school coaching.

A question Rob Coe raised was whether we could test our (near universal) confidence in our high expectations of students. A question like ‘How often is your teacher satisfied with your work?’ might provide some interesting student feedback on how well our high expectations are communicated.

Subject knowledge is an area frequently overlooked within school CPD programmes. Rob Coe posed the challenge of how many of us would get 100% if we sat the exam ourselves (though I did point out a slight flaw with this scheme).

Another interesting suggestion was to look at ‘time on task’ for selected students in lessons. This is, perhaps, a feasible way to measure the impact of intervention programmes like mentoring. A question like ‘What fraction of a lesson does a student appear to be thinking hard about the learning material?’ would be problematic to measure in any objective sense – but a suitably designed behavioural checklist might allow an observer to gauge this approximately, and to see whether it improves over time.
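
For illustration only, here is a minimal sketch of how such a checklist might be operationalised as momentary time-sampling – the interval length, the scoring and the code itself are my assumptions, not anything proposed on the day:

```python
# A minimal sketch of momentary time-sampling for estimating 'time on task'.
# Assumption (mine, not from the session): an observer glances at the target
# student at fixed intervals and records, against a behavioural checklist,
# whether they appear on-task at that moment.

from dataclasses import dataclass


@dataclass
class Observation:
    minute: int      # time into the lesson when the glance was made
    on_task: bool    # observer's judgement against the checklist


def time_on_task(observations: list[Observation]) -> float:
    """Estimate the fraction of the lesson the student spent on task."""
    if not observations:
        raise ValueError("No observations recorded")
    return sum(o.on_task for o in observations) / len(observations)


# Example: glances every 2 minutes across a 20-minute segment,
# with the student appearing off-task at minutes 6 and 14.
obs = [Observation(minute=m, on_task=m not in (6, 14))
       for m in range(0, 20, 2)]
print(f"Estimated time on task: {time_on_task(obs):.0%}")  # 80%
```

Repeating the same sampling schedule before and after an intervention would give a rough but comparable estimate of any change – though, of course, the observer’s judgements remain subjective.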

Rob Coe also drew attention to the benefits of examining and practising specific routines. Short video clips, of say five minutes, of a teacher engaged in questioning might provide useful feedback for development, or a good comparison with our own classroom practice.

Interestingly, one strategy that didn’t appear very high on the list was the current practice of work scrutiny. It seems possible that in convincing Ofsted that observations were an unreliable measure of teaching quality, we may have invited an even less valid method for making such judgements. Ho hum …

Developing professional scepticism

One abiding theme which emerged from the day was the need for a critical perspective within schools. We need someone, most likely a teacher outside of line management, to ask ‘where is the evidence?’ and act as a devil’s advocate. I think that teaching, as a profession, can only really move forward if we foster a greater professional scepticism – and perhaps this can provide some measure of our impact as research leads.

We’ve got some ‘good bets’, but uncertainty about ‘what works’ in any robust or generalisable sense is the honest position currently. This, not to mention the general difficulty of establishing anyone’s effectiveness in a school environment, makes it hard to assess whether research leads are improving the quality of teaching within a school.

That may change as the body of evidence regarding good teaching and good schools matures – but in the meantime the prevalence of ‘nonsense’ within education is something that could be tackled. There’s evidence of a need for this: Dekker et al (2012) and, more recently, Howard-Jones (2014) report that teaching is veritably saturated with misconceptions and myths.

Neuromyths

According to the Coe et al (2014) review, teachers’ beliefs about how children learn, and about which teaching strategies to use when, have a measurable influence on student outcomes. So an operationalised question would be: is the incidence of these pedagogical misconceptions reduced in schools which have developed a research lead role?

Building a plane while flying it

We were asked to suggest questions or areas of focus for the next research leads event. Here are two that have occurred to me (so far):

Firstly, how can we facilitate the communication and dissemination of good-quality research between research leads? Tom Bennett once suggested creating a researchED peer-reviewed journal – a typically ambitious project for one of the ‘world’s greatest teachers’.

At the moment, I mainly hear about new pieces of research by haunting Twitter – which means it is somewhat left to chance. Most teachers don’t have the time or access to trawl through journal articles looking for pearls – but collectively we might have a better chance. Would there be any merit in setting up a researchED ‘research digest’ – highlighting some of the best sources of evidence and new papers that research leads might translate into their schools? What would it look like? How would we filter what went into it? Would we want to divide it by educational sector or keep it general? Would it focus principally on pedagogy and/or school leadership, or relate also to education policy? Who would be prepared to contribute to it, and how would quality control be handled?

Secondly, there’s no formal role or definition of what a research lead does – we’re building the plane while flying it – so I’d be interested to hear other experiences of teachers developing research lead roles within their schools. How closely does the role interact with CPD and ITT/NQT provision? Do research leads sit outside, alongside or within school leadership? What are they looking to develop next in terms of disseminating evidence and research? How much of their role is facilitating teachers doing research? What sort of relationships exist between the schools that research leads are involved with (e.g. teaching alliances)? What are they doing that their school feels ‘earns their keep’? Sharing the various models of what research leads do (or are planning to try) within their schools might offer approaches we can adopt and adapt as we develop our own roles.


