The Six Biggest Problems with Data-Driven Instruction

“On the dangers of being data-driven: Imagine driving from A to B ignoring the road, the weather, the traffic around you… only staring at the gauges on the dashboard.”

 – Educator Dan McConnell

 

 

“Make your instruction data-driven.”

 

If you’re a public school teacher, you’ve probably heard this a hundred times.

 

In the last week.

 

Principals and administrators use that word – “data-driven” – as if it were inscribed in stone over the front doors of the schoolhouse.

 

The idea goes like this: All lessons should be based on test scores.

 

Students take the federally mandated standardized test. Your job is to make sure they get the best possible score. Your class is nothing but a way station between standardized tests.

 

Pretest your students and then instruct them in such a way that when they take the test again, they’ll get the best possible score.

 

It’s total nonsense. And it doesn’t take much to see why.

 

No teacher should ever be data-driven. Every teacher should be student-driven.

 

You should base your instruction around what’s best for your students – what motivates them, inspires them, gets them ready and interested in learning.

 

To be sure, you should be data-informed – you should know what their test scores are and that should factor into your lessons in one way or another – but test scores should not be the driving force behind your instruction, especially since standardized test scores are incredibly poor indicators of student knowledge.

 

No one really believes that the be-all and end-all of student knowledge is children’s ability to choose the “correct” answer on a multiple-choice test. No one sits back in awe at Albert Einstein’s test scores – it’s what he was able to do with the knowledge he had. Indeed, his understanding of the universe could not be adequately captured in a simple choice among four possible answers.

 

As I see it, there are at least six major problems with this dependence on student data at the heart of the data-driven movement.

 

So without further ado, here is a sextet of major flaws in the theory of data-driven instruction:

 

 

 

  1. The Data is Unscientific

    When we talk about student data, we’re talking about statistics. We’re talking about a quantity computed from a sample or a random variable.

    As such, it needs to be a measure of something specific, something clearly defined and agreed upon.

    For instance, you could measure the brightness of a star or its position in space.

    However, when dealing with student knowledge, we leave the hard sciences and enter the realm of psychology. The focus of study is not and cannot be as clearly defined. What, after all, are we measuring when we give a standardized test? What are the units we’re using to measure it?

    We find ourselves in the same sticky situation as those trying to measure intelligence. What is this thing we’re trying to quantify and how exactly do we go about quantifying it?

    The result is intensely subjective. Sure, we throw numbers up there to represent our assumptions, but – make no mistake – these are not the same numbers that measure distances on the globe or the density of an atomic nucleus.

    These are approximations made up by human beings to justify deeply subjective assumptions about human nature.

    It looks like statistics. It looks like math. But it is neither of these things.

    We just get tricked by the numbers. We see them and mistake what we’re seeing for the hard sciences. We fall victim to the cult of numerology. That’s what data-driven instruction really is – the deepest type of mysticism passed off as science.

    The idea that high-stakes test scores are the best way to assess learning and that instruction should center around them is essentially a faith-based initiative.

    Before we can go any further, we must understand that.

  2. It Has Never Been Proven Effective

    Administrators and principals want teachers to base their instruction around test scores.

    Has that ever been proven an effective strategy for teachers planning lessons or the allocation of resources? Can we prove a direct line from data to better instruction to better test scores?

    The answer is an unequivocal NO.

    In a 2007 study by Gina Schuyler Ikemoto and Julie A. Marsh published in the Yearbook of the National Society for the Study of Education, data-driven instruction was actually found to have harmful effects on educator planning and, ultimately, student learning.

    Researchers looked at 36 instances of data use in two districts – in 15 of them, teachers used annual tests to target weaknesses in professional development or to schedule double periods of language arts for English language learners. Far less common were instances of collective, sustained, deeper inquiry in which groups of teachers and administrators used multiple data sources – test scores, district surveys, and interviews – to reallocate funds for reading specialists or begin an overhaul of district high schools.

    Teachers found the data less useful if it was not timely – standardized test scores are usually a year old by the time they get to educators. Moreover, the data was of less value if it did not come with district support and if instructors did not already buy into its essential worth.

    In short, researchers admitted they could not connect student achievement to the 36 instances of basic to complex data-driven decisions in these two districts.

    But that’s just one study.

    In 2009, the federal government published a report (IES Expert Panel) examining 490 studies where schools used data to make instructional decisions.

    Of these studies, the report could only find 64 that used experimental or quasi-experimental designs. Of these it could find only six – yes, six – that met the Institute of Education Sciences standard for making causal claims about data-driven decisions to improve student achievement.

    And when examining these six studies, the panel found “low evidence” to support data-driven instruction. They concluded that the theory that data-driven instructional decisions improve student test scores has not been proven in any way, shape or form.

  3. It’s Harmful – The Stereotype Threat and Motivation

    Data-driven instruction essentially involves grouping students based on their performance on standardized tests.

    You put the low scorers HERE, the students on the bubble who almost reached the next level HERE, and the advanced students HERE. That way you can easily differentiate instruction and help meet their needs.

    However, there is a mountain of psychological research showing that this practice is harmful to student learning. Even if you don’t put students with different test scores in different classes, simply informing them that they belong to one group or another has intense cognitive effects.

    Simply being told that you are in a group with lower test scores depresses your academic outcomes. This is known as the stereotype threat.

    When you focus on test scores and inform students of where they fall on the continuum down to the percentile – of how far below average they are – you can trigger this threat. Simply tracking students in this way can actually make their scores worse.

    It can create negative feelings about school, threatening students’ sense of belonging, which is key to academic motivation.

    But it’s not just the low scorers who are harmed. Even the so-called “advanced” students can come to depend on their privileged status. They define themselves by their achievement, collecting prizes, virtual badges and stickers. These extrinsic rewards then transform their motivation from being driven by the learning and the satisfaction of their curiosity to depending on what high achievement gets them, researchers have found.

    In short, organizing all academics around test scores is a sure way to lower them.

  4. The Data Doesn’t Capture Important Factors

    Data-driven instruction is only as good as the data being used. But no data system can be all-inclusive.

    When we put blinders on and say only these sorts of factors count, we exclude important information.

    For instance, two students do the same long-term project and receive the same grade. However, one student overcame her natural tendency to procrastinate and learned more than in past projects. The other did not put forth his best effort and performed below his usual level.

    If we only look at the data, both appear the same. However, good teachers can see the difference.

    Almost every year I have a few students who are chronically tardy to class. A good teacher finds out why – whether it’s because they aren’t making the best use of the time between classes or because they have a greater distance to travel than other students. However, if we judge solely on the data, we’re supposed to penalize students without considering mitigating factors. That’s being data-driven – a poor way to be a fair teacher.

    It has been demonstrated repeatedly that student test scores are highly correlated with parental income. Students from wealthier parents score well and those from more impoverished families score badly. That does not mean one group is smarter or even more motivated than the other. Living in poverty comes with its own challenges. Students who have to take care of their siblings at home, for instance, have less time for homework than those who have nothing but free time.

    A focus solely on the data ignores these factors. When we’re admonished to focus on the data, we’re actually being told to ignore the totality of our students.

  5. It’s Dehumanizing

    No one wants to be reduced to a number or a series of statistics.

    It is extremely insulting to insist that teachers best serve their students by treating them as anything other than human beings.

    They are people with unique needs, characteristics, and qualities, and should be treated accordingly.

    When one of my students does an amazing job on an assignment or project, my first impulse is not to reduce what they’ve done to a letter grade or a number. I speak my approbation aloud. I write extensive comments on their papers or conference with them about what they’ve done.

    Certainly, I have to assign them a grade, but that is merely one thing educators do. To reduce the relationship to that – and only that – is extremely reductive. If all you do is grade the learner, you jeopardize the learning.

    Every good teacher knows the importance of relationships. Data-driven instruction asks us to ignore these lessons in favor of a mechanistic approach.

    I’m sorry. My students are not widgets and I refuse to treat them as such.

    I am so sick of going to conferences or faculty meetings where we focus exclusively on how to get better grades or test scores from our students. We should, instead, focus on how to see the genius that is already there! We should find ways to help students self-actualize, not turn them into what we think they should be.

    At this point, someone inevitably says that life isn’t fair. Our students will have to deal with standardized tests and data-driven initiatives when they get older. We have to prepare them for it.

    What baloney!

    If the real world is unfair, I don’t want my students to adjust to that. I want to make it better for them.

    Imagine telling a rape victim that that’s just the way the world is. Imagine telling a person brutalized by the police that the world is unfair and you just have to get used to it.

    This is a complete abdication not just of our job as teachers but our position as ethical human beings.

    Schools are nothing without students. We should do everything we can to meet their needs. Period.

  6. It’s Contradictory – It’s Not How We Determine Value in Other Areas

    Finally, there is an inherent contradiction in the insistence that all instruction be justified by data.

    We don’t require this same standard for so many aspects of schooling.

    Look around any school and ask yourself if everything you see is necessarily based on statistics.

    Does the athletic program exist because it increases student test scores? Does each student lunch correlate with optimum grades? Do you have computers and iPads because they have a measurable impact on achievement?

    Some administrators and principals DO try to justify these sorts of things by reference to test scores. But it’s a retroactive process.

    They are trying to connect data with things they already do. And it’s completely bogus.

    They don’t suddenly believe in football because they think it will make the team get advanced scores. They don’t abruptly support technology in the classroom because they think it will make the school achieve adequate yearly progress.

    They already have good reasons to think athletics helps students learn. They’ve seen participation in sports help students remain focused and motivated – sometimes by reference to their own lives. Likewise, they’ve seen the value of technology in the classroom. They’ve seen how some students turn on like someone flipped a switch when a lesson has a technological component.

    These aren’t necessarily quantifiable. They don’t count as data but they are based on evidence.

    We come to education with certain beliefs already in place about what a school should do and others are formed based on the empiricism of being there, day-in, day-out. “Data” rarely comes into the decision making process as anything but a justification after the fact.


    And so we can firmly put the insistence on data-driven instruction in the trash bin of bad ideas.

    It is unscientific, unproven, harmful, reductive, dehumanizing and contradictory.

    The next time you hear an administrator or principal pull out this chestnut, take out one of these counterarguments and roast it on an open fire.

    No more data-driven instruction.

    Focus instead on student-driven learning.

 

Don’t let them co-opt you into the cult of numerology. Remain a difference-maker. Remain a teacher.


 

Like this post? I’ve written a book, “Gadfly on the Wall: A Public School Teacher Speaks Out on Racism and Reform,” now available from Garn Press. Ten percent of the proceeds go to the Badass Teachers Association. Check it out!

WANT A SIGNED COPY?

Click here to order one directly from me to your door!


Top 10 Reasons You Can’t Fairly Evaluate Teachers on Student Test Scores


 

I’m a public school teacher.

 

Am I any good at my job?

 

There are many ways to find out. You could look at how hard I work, how many hours I put in. You could look at the kinds of things I do in my classroom and examine if I’m adhering to best practices. You could look at how well I know my students and their families, how well I’m attempting to meet their needs.

 

Or you could just look at my students’ test scores and give me a passing or failing grade based on whether they pass or fail their assessments.

 

It’s called Value-Added Measures (VAM) and at one time it was the coming fad in education. However, after numerous studies and lawsuits, the shine is fading from this particularly narrow-minded corporate policy.

 

Most states that evaluate their teachers using VAM do so because under President Barack Obama they were offered Race to the Top grants and/or waivers.

 

Now that the government isn’t offering cash incentives, seven states have stopped using VAM and many more have reduced the weight given to these assessments. The new federal K-12 education law – the Every Student Succeeds Act (ESSA) – does not require states to have educator evaluation systems at all. And if a state chooses to enact one, it does not have to use VAM.

 

That’s a good thing because the evidence is mounting against this controversial policy. An evaluation released in June of 2018 found that a $575 million push by the Bill and Melinda Gates Foundation to make teachers (and thereby students) better through the use of VAM was a complete waste of money.

 

Meanwhile a teacher fired from the Washington, DC, district because of low VAM scores just won a 9-year legal battle with the district and could be owed hundreds of thousands of dollars in back pay as well as getting his job back.

 

But putting aside the waste of public tax dollars and the threat of litigation, is VAM a good way to evaluate teachers?

 

Is it fair to judge educators on their students’ test scores?

 

Here are the top 10 reasons why the answer is unequivocally negative:

 

 

1) VAM was Invented to Assess Cows.

I’m not kidding. The process was created by William L. Sanders, a statistician in the college of business at the University of Tennessee, Knoxville. He thought the same kinds of statistics used to model genetic and reproductive trends among cattle could be used to measure growth among teachers and hold them accountable. You’ve heard of the Tennessee Value-Added Assessment System (TVAAS) – or TxVAAS in Texas, PVAAS in Pennsylvania, or the more generically named EVAAS in states like Ohio, North Carolina, and South Carolina. That’s his work. The problem is that educating children is much more complex than feeding and growing cows. Not only is it insulting to assume otherwise, it’s incredibly naïve.

 

2) You can’t assess teachers on tests that were made to assess students.

This violates fundamental principles of both statistics and assessment. If you make a test to assess A, you can’t use it to assess B. That’s why many researchers have labeled the process “junk science” – most notably the American Statistical Association in 2014. Put simply, the standardized tests on which VAM estimates are based have always been, and continue to be, developed to assess student achievement and not growth in student achievement nor growth in teacher effectiveness. The tests on which VAM estimates are based were never designed to estimate teachers’ effects. Doing otherwise is like assuming all healthy people go to the best doctors and all sick people go to the bad ones. If I fail a dental screening because I have cavities, that doesn’t mean my dentist is bad at his job. It means I need to brush more and lay off the sugary snacks.

 

3) There’s No Consistency in the Scores.

Reliable assessments produce consistent results. This is why doctors often run the same medical test more than once. If the first try comes up positive for cancer, let’s say, they’re hoping the second time will come up negative. However, if multiple runs of the same test produce the same result, that diagnosis gains credence. Unfortunately, VAM scores are notoriously inconsistent. When you evaluate teachers with the same test (but different students) over multiple years, you often get divergent results. And not just by a little. Teachers who do well one year may do terribly the next. This makes VAM estimates extremely unreliable. Teachers who should be (more or less) consistently effective are being classified in sometimes highly inconsistent ways over time. A teacher classified as “adding value” has a 25 to 50% chance of being classified as “subtracting value” the next year, and vice versa. This can make the probability of a teacher being identified as effective no different from the flip of a coin.
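The coin-flip point above can be illustrated with a toy simulation (my own sketch, not drawn from any of the studies cited here; the function name and parameter values are illustrative assumptions): when year-to-year noise swamps a teacher’s stable “true effect,” an above-the-median/below-the-median classification flips nearly half the time between years.

```python
import random

def vam_flip_rate(n_teachers=10000, signal=0.3, noise=1.0, seed=42):
    """Toy model of VAM instability: each teacher has a fixed 'true
    effect' (signal) plus independent year-to-year noise. A teacher is
    classified as 'adding value' when their yearly score beats the
    cohort median; return the fraction whose label flips between years."""
    rng = random.Random(seed)
    true_effect = [rng.gauss(0, signal) for _ in range(n_teachers)]
    year1 = [t + rng.gauss(0, noise) for t in true_effect]
    year2 = [t + rng.gauss(0, noise) for t in true_effect]
    median1 = sorted(year1)[n_teachers // 2]
    median2 = sorted(year2)[n_teachers // 2]
    flips = sum((a > median1) != (b > median2)
                for a, b in zip(year1, year2))
    return flips / n_teachers
```

With noise several times larger than the stable effect, the flip rate lands in the 40–50% range – the same ballpark as the reclassification rates reported for real VAM systems – while a measurement dominated by a stable effect would flip only rarely.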

 

4) Changing the test can change the VAM score.

If you know how to add, it doesn’t matter whether you’re asked to solve 2 + 2 or 3 + 3. If both tests evaluate the same learning at the same level of difficulty, changing the test shouldn’t change the result. But when you change the tests used in VAM assessments, scores and rankings can change substantially. Using a different model or a different test often produces a different VAM score. This may indicate a problem with value-added measures or with the standardized tests used in conjunction with them. Either way, it makes VAM scores invalid.

 

5) VAM measures correlation, not causation.

Sometimes A causes B. Sometimes A and B simply occur at the same time. For example, most people in wheelchairs have been in an accident. That doesn’t mean being in a wheelchair causes accidents. The same goes for education. Students who fail a test didn’t learn the material. But that doesn’t mean their teacher didn’t try to teach them. VAM does not measure teacher effectiveness. At best it measures student learning. Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model. For instance, the student may have a learning disability, the student may have been chronically absent or the test, itself, may be an invalid measure of the learning that has taken place.

 

6) VAM Scores are Based on Flawed Standardized Tests.

When you base teacher evaluations on student tests, at the very least the student tests have to be valid. Otherwise, you’ll have unfairly assessed BOTH students AND teachers. Unfortunately, standardized tests are narrow, limited indicators of student learning. They leave out a wide range of important knowledge and skills, testing only the easiest-to-measure parts of the math and English curriculum. Test scores are not universal, abstract measures of student learning. They greatly depend on a student’s class, race, disability status and knowledge of English. Researchers have been decrying this for decades – standardized tests often measure the life circumstances of the students, not how well those students learn – and therefore, by extension, they cannot assess how well teachers teach.

 

7) VAM Ignores Too Many Factors.

When a student learns or fails to learn something, there is so much more going on than just a duality between student and teacher. Teachers cannot simply touch students’ heads and magically make learning take place. It is a complex process involving multiple factors, some of which are poorly understood by human psychology and neuroscience. There are inordinate amounts of inaccurate or missing data that cannot be easily replaced or disregarded, as well as variables that cannot be statistically controlled for, such as differential summer learning gains and losses, prior teachers’ residual effects, the impact of school policies such as grouping and tracking students, the impact of race and class segregation, etc. When so many variables cannot be accounted for, any measure returned by VAMs remains essentially incomplete.

 

8) VAM Has Never been Proven to Increase Student Learning or Produce Better Teachers.

That’s the whole purpose behind using VAM. It’s supposed to do these two things but there is zero research to suggest it can do them. You’d think we wouldn’t waste billions of dollars and generations of students on a policy that has never been proven effective. But there you have it. This is a faith-based initiative. It is the pet project of philanthrocapitalists, tech gurus and politicians. There is no research yet which suggests that VAM has ever improved teachers’ instruction or student learning and achievement. This means VAM estimates are typically of no informative, formative, or instructional value.

 

9) VAM Often Makes Things Worse.

Using these measures has many unintended consequences that adversely affect the learning environment. When you use VAMs for teacher evaluations, you often end up changing the way the tests are viewed and, ultimately, the school culture itself. This is actually one of the intents of using VAMs. However, the changes are rarely positive. For example, this often leads to a greater emphasis on test preparation and specific tested content to the exclusion of content that may lead to better long-term learning gains or increased student motivation. VAM incentivizes teachers to wish for the most advanced students in their classes and to push the struggling students onto someone else so as to maximize their own personal VAM score. Instead of a collaborative environment where everyone works together to help all students learn, VAM fosters a competitive environment where innovation is hoarded and not shared with the rest of the staff. It increases turnover and job dissatisfaction. Principals stack classes to make sure certain teachers are more likely to get better evaluations – or worse ones. Finally, being unfairly evaluated disincentivizes new teachers from staying in the profession and discourages the best and the brightest from ever entering the field in the first place. You’ve heard about that “teacher shortage” everyone’s talking about. VAM is a big part of it.

 

10) An emphasis on VAM overshadows real reforms that actually would help students learn.

Research shows the best way to improve education is through system-wide reforms – not targeting individual teachers. We need to equitably fund our schools. We can no longer segregate children by class and race and give the majority of the money to the rich white kids while withholding it from the poor brown ones. Students need help dealing with the effects of generational poverty – food security, psychological counseling, academic tutoring, safety initiatives, a wide curriculum and anti-poverty programs. A narrow focus on teacher effectiveness overshadows all these other factors and sweeps them under the rug. Researchers calculate teacher influence on student test scores at about 14%. Out-of-school factors are the most important. That doesn’t mean teachers are unimportant – they are the most important single factor inside the school building. But we need to realize that what happens outside the school has a greater impact. We must learn to see the whole child and all her relationships – not just the student-teacher dynamic. Until we do so, we will continue to do these children a disservice with corporate privatization scams like VAM, which demoralize and destroy the people who dedicate their lives to helping them learn – their teachers.

 


NOTE: Special thanks to the amazingly detailed research of Audrey Amrein-Beardsley whose Vamboozled Website is THE on-line resource for scholarship about VAM.


 



Pennsylvania GOP Lawmakers Demand Seniority For Themselves But Deny It For Teachers


Seniority.

Somehow it’s great for legislators, but really bad for people like public school teachers.

At least that was the decision made by Republican lawmakers in the Pennsylvania House Tuesday. They voted along party lines to allow schools to furlough educators without considering seniority.

But the House’s own leadership structure is largely based on seniority!

Hypocrisy much?

Most legislative bodies in the United States from the federal government on down to the state level give extra power to lawmakers based on how long they’ve been there.

Everything from preferential treatment for committee assignments to better office space and even seating closer to the front of the assembly is often based on seniority. Leadership positions are usually voted on, but both Republicans and Democrats traditionally give these positions to the most senior members.

And these same folks have the audacity to look down their noses at public school teachers for valuing the same thing!?

As Philadelphia Representative James Roebuck, ranking Democrat on the House Education Committee, said, “If it’s wrong for teachers, why is it right for us?”

If passed by the state Senate and signed by the Governor, the law would allow public schools to lay off teachers based on the state’s new and highly controversial teacher evaluation system.

Teachers with a “failing” ranking would go first, then those with a “needs improvement” label.

This system is largely untested and relies heavily on student standardized test scores. There is no evidence it fairly evaluates teachers, and lawsuits certainly would be in the wings if furloughs were made based on such a flimsy excuse.

Value-Added Measures such as these have routinely been criticized by statisticians as “junk science.”

It’s kind of like giving legal favor to the management practices of Darth Vader. In “The Empire Strikes Back,” when one of his minions displeased him, he choked them to death with the Force.

No second chances. No retraining. No due process. One misplaced foot and you’re gone.

Pennsylvania’s proposed method isn’t quite so harsh, but it’s essentially the same. You’re fired because of this flimsy teaching evaluation that has no validity and can really say whatever management wants it to say.

Technically, things like salary are not allowed to be considered, but given the unscientific and unproven nature of this evaluation system, management could massage evaluations to say anything. Administrators didn’t mean to fire the teachers with the highest salaries, but those voodoo teaching evaluations said they were “failing.” What are you gonna do? OFF WITH THEIR HEADS!

While seniority is not a perfect means of selecting who gets laid off, at least it’s impartial. Moreover, teachers who have lasted in the classroom longest almost always are highly skilled. You don’t last in the classroom if you can’t hack it.

Being a public school teacher is a highly political job. Your boss is the school board and members are elected by the community. While many school directors have the best interests of their districts at heart, favoritism, nepotism and political agendas are not unknown. Teachers need protections from the ill-winds of politics so they can be treated fairly and best serve their students. Otherwise, it would be impossible – for instance – to fairly grade a school director’s child in your class without fear of reprisal.

As it stands, state school code specifically mandates layoffs to be made in reverse seniority order, also known as “first in, last out.” Pennsylvania is one of six states that call for this to be the sole factor in school layoff decisions.

It’s unclear how the legislature could pass a law that contradicts the school code without specifically voting to alter the code which governs the Commonwealth’s public schools.

Moreover, it may be illegal on several additional counts. Public school districts have work contracts with their teachers unions. The state can’t jump in and void those contracts between two independent parties when both agreed to the terms of those contracts. Not unless there was some legal precedent or unconstitutionality or violation of human rights or SOMETHING!

Get out your pocketbooks, Pennsylvanians. If this law is somehow enacted, you’re going to be paying for years of court challenges.

And speaking of flushing money down the toilet, the law also allows school districts to furlough employees for financial reasons. At present, layoffs are allowed only when enrollment drops or by cutting programs wholesale.

This is especially troubling given the legislature’s failure the past four years to fairly fund its public schools. Ninety percent of school districts have had to cut staff in recent years, either through attrition or furlough, according to the Pennsylvania Association of School Administrators.

So this law makes it easier to rob poorer schools of funding. If it were enacted, districts could fire teachers and reduce programs to pinch pennies. Currently, they are constrained to keep the highest possible level of quality for students regardless of funding shortfalls. This puts them at odds with the legislature and forces them to demand fair funding for their districts. Under this new law, school boards could more easily ensure that some students get a higher-quality education than others in the same district!

Oh! We increased class size for the struggling students (most of whom are poor and minorities) but decreased it for the advanced classes (most of whom are rich and white).

Finally, we get to the issue of viability. Will the state Senate pass this bill?

Maybe.

The House passed it without a single Democrat voting in favor. The Senate is likewise controlled by the GOP. However, Gov. Tom Wolf is a Democrat and has said he’s against it. Seniority issues, he said, should be negotiated through the local collective bargaining process.

So once again we have partisan politics reigning over our public schools – Republicans actively trying to sabotage our public schools and fire their way to the top! Democrats vainly trying to hold the line.

Couldn’t we all just agree to value our public schools and public school teachers?

Or at very least couldn’t we all agree to give others the same benefits we demand for ourselves?

You know. Things like seniority!