Excellent day out in Brighton for the ResearchED: Research Leads network day.
I’ll blog about the day properly later in the week, but the presentation is here:
Spot on, but the MET percentages for upper-quartile value-added teachers should be a better guide to which areas to improve than just looking at the reds.
I am also introducing the 7Cs to my school. Are you cross-referencing against your teachers' value-added? I'd love to know what impact this student voice has on progress and on changing teacher behaviour.
Hi Dominic, thanks for posting.
I recognise that the presentation lacks context, so I'll try to give some here. I'm not interested in the surveys as a summative evaluation of teacher performance, but rather for their formative value as a way that teachers can investigate their own teaching within the confidentiality of our coaching framework.
Firstly, I strongly suspect that scores are influenced by the age-group (y7 vs y11) and subject (maths or English vs drama or art). Therefore, I’m not confident I can reliably say how an individual teacher’s scores compare to other teachers (i.e. whether they fall within the upper or lower quartile of normative values). But actually, I’m not trying to compare a teacher’s performance to some established ‘norms’ but rather help that teacher identify areas where they might focus in order to improve. Thus, the analysis compares the scores for that teacher within the context of that group rather than highlighting scores that fall within the upper or lower quartile for a population of teachers. There are some problems with this, I accept. For instance, some items are clearly discriminators and therefore typically score quite low compared to other items. However, as the analysis is merely the starting point of a conversation about teaching, I don’t think this is a huge issue.
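The distinction drawn above can be sketched in code. This is purely illustrative: the item names echo the Tripod 7Cs, but every number is invented, and the survey itself uses its own scoring. It contrasts a normative reading (flag items below a population lower quartile) with the within-teacher reading described in the comment (pick the teacher's own lowest-scoring items as a starting point for conversation):

```python
from statistics import quantiles

# Hypothetical mean class scores (% favourable) for one teacher.
# Item names echo the 7Cs; all values are invented for illustration.
teacher = {
    "Care": 78, "Control": 64, "Clarify": 70, "Challenge": 72,
    "Captivate": 48, "Confer": 52, "Consolidate": 66,
}

# Hypothetical population of class means per item (many teachers).
# "Captivate" is deliberately low for everyone, standing in for the
# 'discriminator' items the comment mentions.
population = {
    "Care":        [60, 70, 75, 80, 85],
    "Control":     [50, 55, 60, 65, 70],
    "Clarify":     [55, 65, 70, 75, 80],
    "Challenge":   [60, 68, 72, 78, 84],
    "Captivate":   [40, 45, 50, 55, 60],
    "Confer":      [42, 48, 53, 58, 63],
    "Consolidate": [55, 62, 66, 70, 75],
}

def below_population_lower_quartile(teacher, population):
    """Normative reading: flag items scoring under the population Q1."""
    flagged = []
    for item, score in teacher.items():
        q1 = quantiles(population[item], n=4)[0]  # lower quartile
        if score < q1:
            flagged.append(item)
    return flagged

def weakest_within_profile(teacher, n=2):
    """Within-teacher reading: that teacher's own lowest-scoring items."""
    return sorted(teacher, key=teacher.get)[:n]
```

With these made-up numbers the normative check flags nothing (the teacher's lowest item, "Captivate", is low for everyone, so it sits above the population's lower quartile), while the within-teacher reading surfaces "Captivate" and "Confer" as areas to discuss in coaching.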
I’m also not trying to validate this survey against value-added or progress measures (I’m relying on the fact that Bill and Melinda Gates have already spent quite a lot of money doing that with more rigour than I could realistically manage!). In terms of impact, we run a follow-up survey and look at how the areas the teacher was working on have changed over time. Remember, the purpose we’re putting it to is to provide the teacher with feedback rather than for performance management.
Pingback: ResearchED Brighton: inside out not bottom up | must do better…
Pingback: How do we develop teaching? A journey from summative to formative feedback | Evidence into practice
Pingback: Finding the middle ground between reflection and inquiry | Reflecting English
Pingback: Can Checklists Improve Teacher Development? | must do better…