Evaluating the impact of professional development is remarkably difficult.
Typically, evaluation stops at the forms handed out at the end of an INSET session, but it’s difficult to know whether even highly positive ratings translate into any change in teacher practice or student outcomes. In the past we have used follow-up lesson observations, but such measures are qualitative and frequently subjective. Analysis of exam results can give some indication, but waiting a year (or two) to see whether changes have worked is simply too long to provide useful feedback on the quality of CPD.
Recently I blogged on how we had run a small pilot of the MET student survey to help teachers investigate their own teaching. I’m also interested in the potential of student feedback data to inform school leaders about the strengths of the school, to inform CPD planning, and to provide a quantitative measure which can be used (alongside other methods) to evaluate the impact of our professional development programme.
To illustrate how this might work, back in December I created a meta-analysis of survey results based on a small volunteer sample of teachers (surveying 90 students).
The classroom ‘control’ items scored the lowest: 32% of responses were positive whereas 34% were negative. This analysis, as unrepresentative as it was, suggested that behaviour management might be a useful coaching focus for at least some of our teachers. In response, we decided to tweak the options available for peer-coaching (teachers self-select the groups they wish to work with over one or more half-termly cycles). We created a ‘Behaviour for learning’ group that would run from January, and this turned out to be a popular choice with our teachers. All of the teachers in the sample group selected this coaching topic.
In March, the follow up survey for this small group showed that whilst scores for the other factors measured on the questionnaire had remained more-or-less static over the ~4 months, the scores for classroom ‘control’ had changed:
In the follow-up survey, 46% of student responses to the ‘Control’ items were positive and negative ratings fell to 24%.
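For anyone curious how percentage summaries like these are produced, the calculation is straightforward. The sketch below is purely illustrative: the function name, the 1–5 Likert coding (1–2 negative, 3 neutral, 4–5 positive), and the sample data are my own assumptions, not the actual MET survey items or our students’ responses.

```python
from collections import Counter

def summarise(responses):
    """Return (% positive, % negative) for a list of 1-5 Likert ratings.

    Assumed coding: 1-2 = negative, 3 = neutral, 4-5 = positive.
    """
    counts = Counter(responses)
    total = len(responses)
    positive = counts[4] + counts[5]
    negative = counts[1] + counts[2]
    return round(100 * positive / total), round(100 * negative / total)

# Made-up example data, not the real survey results
baseline  = [1, 2, 2, 3, 3, 3, 4, 4, 5, 2]
follow_up = [2, 3, 3, 4, 4, 4, 5, 5, 3, 4]

print(summarise(baseline))   # (30, 40)
print(summarise(follow_up))  # (60, 10)
```

Reporting the positive and negative shares separately (rather than a single mean score) makes shifts at both ends of the scale visible, which is why the figures above are quoted that way.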
Whilst this appears to suggest that our peer-coaching programme had a positive effect, we cannot draw that conclusion: this was a small, volunteer sample of teachers, and many other factors can influence students’ perceptions of behaviour in lessons (the shift might even simply be regression to the mean). We therefore can’t draw any strong conclusions about the effectiveness of our peer-coaching programme from these data alone. However, it does provide an example of how meta-analysis of survey results could be used to inform CPD planning and school self-evaluation.
These kinds of data can serve a useful purpose so long as they are not used to make superficial judgements about teacher performance. Confidentiality is a vital part of coaching, and analysis like this must preserve teacher anonymity in order to maintain trust (incidentally, permission was sought from the teachers involved before I posted this).
However, with adequate protocols, data like these could guide school leaders’ decisions about which professional development to offer, making it genuinely useful to teachers, and provide one form of evidence (alongside others) by which we can evaluate whether the time and money we invest in CPD are working.