Yet Another Testing Fail

Anyone who has ever worked for any of the Pearson education enterprises (and I have) knows that its labyrinth of departments and sub-groups is rivaled only by the federal bureaucracy. Because of that, the decisions that get made often don't make sense, and it is astounding when they do.

That’s why the situation with the nonsensical reading comprehension question on yet another state exam, this time in New York, didn’t surprise me at all. It is the classic testing fail: a stupid item goes unchallenged in one state, so the Pearson bureaucrats pass it along to another, until it finally creates a spectacular PR explosion. Then Pearson spokespersons act as if it’s a first-time problem, you know, one that “slipped in.” “Won’t happen again.”

As we would say in Oklahoma, boshat.

In yesterday’s post on her Washington Post blog, The Answer Sheet, Valerie Strauss perfectly nails the bigger problem:

The problem, of course, isn’t one test question that people think was badly drawn, or the strong likelihood that other questions on these exams make little sense or actually assess only a small bandwidth of skills, concepts and knowledge that we want students to know.

The problem is that the results of standardized tests are being used in New York and other states to assess not only students but teachers, principals and schools through complicated formulas that purport to show how much “value” a teacher adds to a student’s achievement. Researchers say that “value-added” assessment models can’t do what supporters say they do and are unreliable accountability measures.

Teachers in public schools, or even private schools, have no more desire to be evaluated by private bureaucrats than by public agency bureaucrats, especially when the line between the two is increasingly blurred. There is no rational justification for this arrangement. The only people who can evaluate a teacher are the ones who see that teacher work every day.