Robots Are Grading Your Papers!
A just-released report confirms earlier studies showing that machines score many short essays about the same as human graders do. Once again, panic ensues: We can’t let robots grade our students’ writing! That would be so, uh, mechanical. Admittedly, this panic isn’t about Scantron grading of multiple-choice tests but about an ideological, market- and foundation-driven effort to automate assessment of that exquisite brew of rhetoric, logic, and creativity called student writing. And without question, the most recent study was performed by people with huge financial stakes in the results, driven by the non-educational motive of personal gain. But the real question isn’t whether the machines can deliver scores similar to those of human raters; it’s why they can.
It seems possible that what really troubles us about the success of machine assessment of simple writing forms isn’t the scoring but the writing itself: forms of writing that don’t exist anywhere in the world except school. It’s reasonable to say that the forms of writing successfully scored by machines are already mechanized: writing designed to be mechanically produced by students, mechanically reviewed by parents and teachers, and then, once transmuted into grades and a sorting of the workforce, quickly recycled. As Evan Watkins has long pointed out, the grades generated in relation to this writing stick around, but the writing itself is made to disappear. Like magic? Or like concealing the evidence of a crime?