In October I blogged on how student perception surveys might be used to provide a fairly reliable measure of teaching effectiveness. Since then, I have been piloting a version of the MET survey to investigate my own teaching (along with a number of other teachers here at Turnford).
I randomly sequenced the questions listed in the MET survey and drew up a simple set of protocols for how the survey would be used. These included strict confidentiality regarding the data – each teacher's results would be shared only with that teacher (though they could, of course, share those data with whomever they wished) – and a guarantee that any feedback on the pilot study would preserve teacher anonymity. They also included the standard instructions given to students about completing the survey (again, ensuring their contributions were anonymous).
A list of the question items can be found here: Student perception survey
I employed a 5-point Likert scale to record student responses to each item.
Each teacher received a graphical summary of the results – showing the positive and negative responses across the 7 categories. Here’s mine from October:
Teachers also received a breakdown of scores for each item. Strengths and areas for development were highlighted like this:
Teachers could then select one or more areas to focus on over the following term. I chose 7 areas (because I’m greedy like that).
06 My teacher in this class makes me feel s/he really cares about me
21 My teacher seems to know if something is bothering me
20* Student behaviour in this class makes the teacher angry
30* When s/he is teaching us, my teacher thinks we understand when we don’t
23 This class does not keep my attention – I get bored
03 Students get to decide how activities are done in this class
16 My teacher takes the time to summarise what we learn each day
* indicates a reversed scoring item
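In case anyone wants to replicate the scoring, here’s a minimal sketch of the sort of calculation involved – my own illustration, not the MET project’s actual method – assuming responses are coded 1–5 on the Likert scale and that reverse-scored items are flipped before counting favourable (4 or 5) responses:

```python
# Illustrative only: my own sketch of the scoring, not the MET project's code.
# Assumes each response is coded 1-5 (strongly disagree .. strongly agree) and
# that reverse-scored items (marked * above) are flipped before counting.

REVERSED_ITEMS = {20, 23, 30}  # item numbers treated as reverse-scored

def adjust(item: int, score: int) -> int:
    """Flip a 1-5 response for reverse-scored items so higher always means 'better'."""
    return 6 - score if item in REVERSED_ITEMS else score

def percent_positive(responses: dict[int, list[int]]) -> dict[int, int]:
    """Percentage of favourable responses (adjusted score of 4 or 5) per item."""
    return {
        item: round(100 * sum(adjust(item, s) >= 4 for s in scores) / len(scores))
        for item, scores in responses.items()
    }

# Hypothetical example: item 16 ('summarises what we learn') and item 23 ('I get bored')
sample = {16: [5, 4, 4, 3, 5, 2], 23: [1, 2, 2, 4, 1, 3]}
print(percent_positive(sample))  # -> {16: 67, 23: 67}
```

The [+/-] figures below are simply the difference between the follow-up percentage and the October percentage for each item.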
Earlier this month I ran a follow-up survey with the same group – in order to see whether the areas I had targeted showed any improvement:
Care:
61% 06 My teacher in this class makes me feel s/he really cares about me [+0%]
33% 21 My teacher seems to know if something is bothering me [-6%]
Clearly my Y11s weren’t very impressed with my efforts to be more empathetic and express professional warmth! However, other areas showed some improvement …
Control:
78% 20* Student behaviour in this class makes the teacher angry [+39%]
I’m happy about this – and I think it reflects a genuine improvement in the atmosphere and learning culture of the classroom.
Clarify:
67% 30* When s/he is teaching us, my teacher thinks we understand when we don’t [+7%]
This worried me in the original survey – I’m glad to see my spot questions and encouragement to seek appropriate help are moving this in the right direction, but I still see work to do here.
Captivate:
72% 23* This class does not keep my attention – I get bored [+11%]
A lot of this was about monitoring the class and being ready to shift my pace (up or down) to suit the learning. Glad to see a positive shift here.
Confer:
50% 03 Students get to decide how activities are done in this class [+28%]
A big increase – probably because I give more flexibility with revision and review tasks when I’m able. I’ve also tried to offer some choice about how specific studies are consolidated (e.g. using mnemonics, writing frames or visual maps), and I was able to make some of the practical experiments open to student choice (within a limited range).
Consolidate:
72% 16 My teacher takes the time to summarise what we learn each day [+22%]
I was doing this before; I just needed to be more consistent with it.
Students aren’t fooled by a ‘show-off’ lesson that might be put on for an observation, and by asking the whole class you get a range of student experience you might not get by simply asking one or two students. These are probably some of the reasons why a recent Sutton Trust report rated the reliability of student surveys higher than that of observation. At the end of the day, students see all of my teaching – warts and all, good days and bad – so they can provide some useful feedback on what I’m doing well and where I might look to improve.
As an accountability measure, I think student surveys would be stressful and unfair – but as a coaching tool for getting inside your own teaching, I think they have merit.
Just to point out that this pilot was carried out by teachers self-assessing their own teaching and under strict confidentiality (as part of a coaching agreement). I want to make very clear that I’m not advocating this process as an accountability stick with which to beat teachers!
Thanks for this post. I absolutely agree that this sort of mechanism should be only for self-assessment rather than part of a formal appraisal system. I think perhaps there might be a way to create ‘meta-data’ for a whole school, where it gives some indication as to the general strengths of a team. That is the only way I see it being viable for school leaders like myself.
You might be interested in this post: https://evidenceintopractice.wordpress.com/2014/03/24/using-student-surveys-to-measure-the-impact-of-coaching/
Yes, I’m glad to say our HT takes the same view as you and recognises that for coaching to have any chance of working well it has to be totally separate from the appraisal system. Anonymous meta-data could provide a useful insight into the strengths of the school. Where low scores were found in such data, it might be used to plan CPD provision or to select a useful focus for our whole-school peer-coaching programme.
Pingback: Using student surveys to measure the impact of coaching | Evidence into practice @ Turnford
Pingback: What can students really tell us? : eddiekayshun
Pingback: What Improves Teacher Quality? | Evidence into practice @ Turnford
Pingback: Great teacher talk | Evidence into practice
Pingback: Radical collegiality – PedagooSW. : eddiekayshun
Pingback: Talking about the behaviour in our lessons | Evidence into practice
Pingback: Ethical issues in teacher-led research | Evidence into practice
Pingback: How do we develop teaching? A journey from summative to formative feedback | Evidence into practice
Pingback: How can we evaluate teacher effectiveness? | David Didau: The Learning Spy
Pingback: Developing research leads within schools: ‘the good we oft might win’ | Evidence into practice
I like the way you have made concerted efforts in your teaching practice – a reminder that we can pivot our focus and really make great changes to our instructional practice. I wonder: did your student survey help students think more deeply about what helped them learn? Did an increase in ‘teaching effectiveness’ result in greater student progress? What strategies did you use? Thanks for sharing.
Hi – thanks for the questions. It’s always difficult to assess the ‘impact’ of these kinds of inquiry projects – there isn’t a comparison group, and it’s very easy to convince oneself of the value of things we put a lot of effort into. However, the MET project spent a fair amount of money validating the survey and establishing whether high scores corresponded with higher VAM scores (which, to a degree, they appear to). There are also some reasons to believe that the feedback from the MET survey (when properly administered) may be more reliable than observations.
As a coaching tool, I felt it had value – providing low stakes feedback about teaching and a prompt for discussing areas of practice you might alter or adapt. I shared some of the results with my group – to get some insight into what their responses might mean – and perhaps that helped prompt them to think about what helps them learn.
The main strategies I used after the first survey were providing a bit of choice when it came to consolidating recall of the major studies and theories, not letting ‘lower attainment’ students off the hook too quickly with questions (to better check their understanding), and simply making sure that activities didn’t start to ‘drift’ (a bit of work on transition routines, effectively).
Pingback: Are student surveys windmills of the mind? | David Didau: The Learning Spy
Pingback: Observations of teaching are probably biased | Evidence into Practice