The Happy Scientist took a look at the test questions for Florida’s FCAT exam, used to assess whether fifth graders have achieved expected levels of scientific literacy for their age group, and found some problems.
I expected the Test Item Specifications to be a tremendous help in writing simulated FCAT questions. What I found was a collection of poorly written examples, multiple-choice questions where one or more of the wrong responses were actually scientifically correct answers, and definitions that ranged from misleading to totally wrong.
Click on the link to see some specific examples (the predatory cows are my favorite). But you know what’s even worse? The response he got when he pointed out the problems.
He wrote to FLDOE’s Test Development Center and received a reply that enumerated each error and declared the material “deemed appropriate.” When he pressed for a more personal reply, he received this response.
“we need to keep in mind what level of understanding 5th graders are expected to know according to the benchmarks. We cannot assume they would receive instruction beyond what the benchmark states… We cannot assume that student saw a TV show or read an article.”
This argument is being used to defend the practice of marking correct answers as wrong whenever they aren’t the answer the test developers had in mind. I’ll grant you, students should not be expected to know more than was presented in the classroom, but if it happens that they are curious, well-read, and well-informed, that’s no reason to penalize them for being better at science than their peers—and/or, apparently, the people who wrote this test.