The Dim Post has a great guest post up that demonstrates what researchers drily refer to as ‘ability bias’. I’m not going to reproduce it all here, but it’s worth reading if you’re not familiar with the problem.
The bit that I find really depressing about the article is this:
So, after all this has been done, Ministry Faktdrones come in and look at one piece of paper that has all of our grades on it. They don’t look at the students as humans, don’t look at any of our processes, and often don’t even look at previous results which would let them know things like value-added results. They look at one sheet of results and say “Look here – the class with 30 all got Achieved, and the class with 15 all got Achieved too. That means, statistically, class size doesn’t make a difference. Let’s cram forty of the little firestarters in there next year!”
Something I constantly and tiresomely have to harp on about when I’m talking to (at?) people about this and other education issues, is that the students are not statistics, and sometimes can’t be pigeonholed to fit a statistically clean model.
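To make the ability bias concrete, here is a minimal toy simulation (my own sketch, not from the guest post; the numbers and the streaming rule are invented purely for illustration). It assumes schools tend to put weaker students in the smaller class and that bigger classes genuinely hurt achievement, yet a naive comparison of raw averages by class size still shows almost no difference.

```python
# Toy illustration of ability bias (invented numbers, not real data).
# Weaker students are streamed into the smaller class, so a naive
# comparison of average results by class size hides a real effect.
import random

random.seed(0)

students = []
for _ in range(10_000):
    ability = random.gauss(0, 1)            # prior ability, unobserved by the analyst
    class_size = 15 if ability < 0 else 30  # streaming: weaker students get the small class
    # True data-generating process: each extra classmate costs half a point.
    score = 50 + 5 * ability - 0.5 * class_size + random.gauss(0, 5)
    students.append((class_size, score))

def mean_score(size):
    scores = [s for c, s in students if c == size]
    return sum(scores) / len(scores)

print("class of 15:", round(mean_score(15), 1))  # roughly 38.5
print("class of 30:", round(mean_score(30), 1))  # roughly 39.0
```

The raw averages come out about the same even though the simulated class-size effect is real; conditioning on prior ability (the ‘value-added’ comparison the author mentions) is what would recover it.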
I’ve worked with people in the Ministry of Education’s research division a fair bit and they’re all hugely knowledgeable and competent. They’re well aware of these problems with the statistics and highlight them in their research papers. They even work with the people who gather the data to try to improve the way it’s collected and overcome some of the measurement problems. So why do teachers still perceive Ministry researchers as ‘Faktdrones’? Either the research is being poorly interpreted by the policy teams, or the outcomes of the research are being poorly communicated to the teachers. Possibly both.
Whatever the problem is, the author’s response seems to be to reject the use of statistics to inform policy because “students are not statistics”. Unfortunately for the author, reliance on statistics to determine policy is unlikely to fall; from what I can tell, the trend is towards more statistics, not less. Given that, the solution to problems such as ability bias is to collect better data, including the prior-achievement measures that make value-added comparisons possible. Sadly, the experience of teachers such as the author of this piece is likely to deter them from helping researchers who want to collect more detailed data about their students.
In policy circles people often say that some data, however rough, is better than no data at all. Unfortunately, a little bit of data coupled with a little bit of knowledge can be a very dangerous thing. If it alienates practitioners in the profession from policy makers, it may actually be detrimental. It’s sometimes hard to remember that the ultimate goal of research is to help the people being studied, not simply to study and experiment on them for its own sake. At times that may require a bit of caution and restraint.