When Good Students Get Bad Standardized Test Scores 

 
 
Ameer is a good student.  


 
He takes notes in class, does all his homework and participates in discussions.  


 
He writes insightful essays and demonstrates a mastery of spelling and grammar.  


 
He reads aloud with fluency and inflection. He asks deep questions about the literature and aces nearly all of his classroom reading comprehension tests. 


 
However, when it is standardized test time, things are very different.  


 
He still arrives early, takes his time with the questions and reviews his work when he’s done – but the results are not the same.  


 
His grades are A’s. His test scores are Below Basic. 


 
How can that be? 


 
How can a student demonstrate mastery of a subject in class but fail to do the same on a standardized test?  


 
And which assessment should I, his teacher, take seriously?  


 
After all, they can’t BOTH be correct. 


 
This is a problem with which most classroom teachers are forced to contend.  


 
Bureaucrats at the administrative or state level demand that teachers assess students with standardized tests, but the results often contradict a year or more of classroom observation.


 
Take the Measures of Academic Progress (MAP) test.  


 
This year at my Western Pennsylvania district, the administration decided to use this computer-based standardized assessment as a pre-test or practice assessment before the state-mandated Pennsylvania System of School Assessment (PSSA).


 
I’ve already written about what a waste of time and money this is. A test before the test!? 


 
But after I reluctantly subjected my classes to the MAP and was instructed to analyze the results with my colleagues, we noticed this contradiction.


 
In many cases, scores did not match up with teacher expectations for our students.  


 
In about 60-80% of cases, students who had demonstrated high skills in the subject were given scores below the 50th percentile – many below the 25th percentile. 

These were kids with average to high grades whom the MAP scored as if they were in the bottom half of their peers across the state.


 
Heck! A third of my students are in the advanced class this year – but the MAP test would tell me most of them need remediation!


 
If we look at that data dispassionately, there are possible explanations. For one, students may not have taken the test seriously. 


 
And to some degree this is certainly the case. The MAP times student responses, and when answers are entered too quickly, it stops the test taker until the teacher can unlock the test after warning them against rapid guessing.


 
However, the sheer number of mislabeled students is far too great to be accounted for in this way. Maybe five of my students got the slow-down sloth graphic. Yet far more were mislabeled as failures despite strong classroom academics.


 
The other possibility – and one that media doom-mongers love to repeat – is that districts like mine routinely inflate mediocre achievement so that bad students look like good ones.  


 
In other words, they resolve the contradiction by throwing away the work of classroom teachers and prioritizing what standardized tests say.


 
Nice for them. However, I am not some rube reading this in the paper. I am not examining some spreadsheet for which I have no other data. I am IN the classroom every day observing these very same kids. I’ve been right there for almost an entire grading period of lessons and assessments – formative and summative. I have many strong indications of what these kids can do, what they know and what they don’t know.  


 
Valuing the MAP scores over weeks of empirical classroom data is absurd.  


 
I am a National Board Certified Teacher with more than two decades of experience. But the Northwest Evaluation Association (NWEA), a testing company out of Portland, Oregon, wants me to believe that after 90 minutes it knows my students better than I do after six weeks!


 
Time to admit the MAP is a faulty product. 


 
But it’s not just that one standardized test. We find the same disparity with the PSSA and other similar assessments. 


 
Nationally, students’ classroom grades run higher than their scores on these tests. 


 
In the media, pundits tell us this means our public school system is faulty. Yet that conclusion is merely an advertisement for these testing companies and a host of school privatization enterprises offering profit-making alternatives predicated on that exact premise.  


 
So how to resolve the contradiction? 


 
The only logical conclusion one can draw is that standardized assessments are bad at determining student learning.  


 
In fact, that is not their primary function. First and foremost, they are designed to compare students with each other. How they make that comparison – based on what data – is secondary.  


 
The MAP, PSSA and other standardized tests are primarily concerned with sorting and ranking students – determining which are best, second best and so on. 


 
By contrast, teacher-created tests are just the opposite. They are designed almost exclusively to assess whether learning has taken place and to what degree. Comparability isn’t really something we do. That’s the province of administrators and other support staff.  


 
The primary job of teaching is just that – the transfer of knowledge, offering opportunities and a conducive environment for students to learn.  


 
That is why standardized tests fail so miserably most of the time. They are not designed for the same function. They are about competition, not acquisition of knowledge or skill. 


 
That’s why so many teachers have been calling for the elimination of standardized testing for decades. It isn’t just inaccurate and a waste of time and money. It gets in the way of real learning.  


 
You can’t give a person a blood transfusion if you can’t accurately measure how much blood you’re giving her. And comparing how much blood was given to a national average of transfusions is not helpful. 


 
You need to know how much THIS PERSON needs. You need to know what would help her particular needs.  


 
When good students get bad test scores, it invariably means you have a bad test.  


 
 
An entire year of daily data points is not invalidated by one mark to the contrary.  


 
Until society accepts this obvious truth, we will never be able to provide our students with the education they deserve.  

Good students will continue to be mislabeled for the sake of a standardized testing industry that is too big to fail.


Like this post?  You might want to consider becoming a Patreon subscriber. This helps me continue to keep the blog going and get on with this difficult and challenging work.

Plus you get subscriber only extras!

Just CLICK HERE.


I’ve also written a book, “Gadfly on the Wall: A Public School Teacher Speaks Out on Racism and Reform,” now available from Garn Press. Ten percent of the proceeds go to the Badass Teachers Association. Check it out!

 

15 thoughts on “When Good Students Get Bad Standardized Test Scores”

  1. There is only one benefit of Standardized Tests produced by nonprofit or for-profit organizations/corporations, and that is the public money flowing into already greedy, corrupt, and rich people’s bank accounts.

    David Coleman is the CEO of the College Board. How much does Coleman earn to force the College Board’s tests down the throats of OUR children and their public school teachers?

    “The College Board’s CEO, David Coleman, has a nearly $2 million salary while other top executives make $300,000 to $500,000 per year. Since no other corporation distributes PSAT, SAT and AP exams, the College Board has a monopoly on education, controlling tests and test preparation prices.”

    https://dailytrojan.com/2021/09/16/the-college-board-profits-off-students-anxieties-about-college-admissions/

    FACT: Standardized tests do not predict future success in school or life as much as GPA (grade point average) does.

    “It’s GPAs Not Standardized Tests That Predict College Success”

    https://www.forbes.com/sites/nickmorrison/2020/01/29/its-gpas-not-standardized-tests-that-predict-college-success/?sh=2008cd9c32bd

    “UChicago Consortium study finds high-school GPAs outweigh ACTs for college readiness (GPA is 5 times stronger at predicting success than those damned tests)”

    https://news.uchicago.edu/story/test-scores-dont-stack-gpas-predicting-college-success

    And anyone interested in learning how much the College Board donates to political campaigns and spends on lobbying annually to keep that public river of cash flowing into David Coleman’s bank account, click the next link (of course, Coleman isn’t the only greedy SOB that benefits from testing and stressing our children and public school teachers into mental illness).

    https://www.opensecrets.org/orgs/college-board/summary?id=D000049487

    And with that, I want to end my comment by repeating my opening statement:

    There is only one benefit of Standardized Tests produced by nonprofit or for-profit organizations/corporations, and that is public money flowing into already greedy, corrupt, and rich people’s bank accounts.


    • Lloyd, I couldn’t agree more. I am currently tutoring an excellent 10th grade student who is taking an AP World History Course. Not only is the curriculum tied to the Coleman recipe, but the tests given during the year are drawn from AP test questions. The students are graded harshly on these Olympic-equivalent tests, which brings about unnecessary, harmful anxiety about GPA and college acceptance. Why can’t teachers just teach, discuss, ask for student feedback, provide teacher feedback, and move on–without creating a pressure-cooker of stress about their futures, and having bright, hard-working students deem themselves failures? Oh, yes, Follow the money.


  2. Steven, excellent comment, as usual.

    I don’t have a copy handy, but Nick Lemann in The Big Test also used a blood analogy.  It was slightly different – the psychometricians back in the day (and probably some today) thought that the SAT was like a blood test, and its results couldn’t be altered by test prep just as a blood test’s results couldn’t be altered short of something nefarious.  They probably also thought that their SAT was as accurate as a blood test.

    Just sayin …

    Keep up the good work,

    Jay

    Jay Rosner

    The Princeton Review Foundation


  3. You spend a lot of space saying that MAP is bad because standardized tests don’t capture classroom performance. We can agree, but more nuance would be good. There are at least four ways that classroom results and MAP might not accord. The ways in which this could happen matter, and it would be better to get into the details than to rail against only one way in which they might not agree.

    Possibility 1a. MAPy skills and classroom skills differ, and we should value skills better aligned to broad performance than to a multiple-choice test. This is your point, and it is well taken. The counterpoint, of course, is that if you are not a highly skilled teacher providing rigorous classroom work, you could very easily conclude that MAP tells you nothing when it actually is telling you something. Policymakers, obviously, feel there are many more such teachers than ones who are smarter than MAP. Your editorial isn’t going to get them to change their minds.

    Possibility 1b. MAPy skills and classroom skills differ, but the level of rigor does not – merely the presentation format. Student performance would concur if we taught students to recognize presentation-format differences, i.e., if we taught them test-taking skills. One edge of that sword is that teaching multiple presentation formats IS good teaching, so maybe that’s a real area for growth. The other edge is: who cares about multiple choice? Then we are back to point 1a. Still, students in almost every walk of life will encounter standardized testing later on. They should arguably be just as prepared for that presentation format as for one’s preferred classroom modes.

    Possibility 2a. MAP isn’t testing state standards, but classroom work is. This is true to a point. NWEA likes to tout correlations with something like 70% of variance explained vs. state tests. IMO it’s more like 50-60%. This would explain, easily, why a narrow majority of individual student MAP results concur with classroom work, while a lot of results do not.

    Possibility 2b. MAP isn’t really testing state standards at grade level. It’s an adaptive test. In theory this helps get an accurate level out of the test because it will back up into lower-grade material if the initial questions’ results warrant. But then how the heck are you going to concord MAP with classroom results?

    For these reasons districts should view MAP with a jaundiced eye, but it’s not all for reason 1a. That does not mean MAP can’t be instructionally useful. It can. As you well know, policymakers look out over a sea of what they fear are morons – not NBCTs. Alas, there is too much slipperiness in it for argument 1a, alone, to change the minds that matter.

