
Value-Added: A Shot in the Dark

on Thu, 01/16/2014 - 13:38

Charleston's BRIDGE program uses “Value-Added Modeling” (VAM) to judge teachers. 


Originally developed to compare crop yields, VAM uses a statistical formula to predict what each individual child should have scored.

That prediction is then compared to what the child actually scored.
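In rough terms, the arithmetic looks something like the sketch below: a minimal illustration with made-up numbers and a single predictor, not Charleston's actual formula, which layers on many more controls.

    # A bare-bones sketch of the value-added idea. Illustrative only,
    # not Charleston's actual (far more elaborate) model.
    from statistics import linear_regression, mean

    # (prior_score, actual_score, teacher) for a handful of students
    students = [
        (72, 75, "A"), (85, 88, "A"), (60, 58, "A"),
        (72, 70, "B"), (85, 84, "B"), (60, 65, "B"),
    ]

    # Step 1: predict each child's score from last year's score.
    slope, intercept = linear_regression(
        [prior for prior, _, _ in students],
        [actual for _, actual, _ in students],
    )

    # Step 2: the gap between actual and predicted is credited (or
    # charged) to the teacher; a teacher's VAM score is the average
    # gap across his or her students.
    gaps = {}
    for prior, actual, teacher in students:
        predicted = intercept + slope * prior
        gaps.setdefault(teacher, []).append(actual - predicted)

    for teacher, g in sorted(gaps.items()):
        print(f"Teacher {teacher}: value-added = {mean(g):+.1f} points")

Everything downstream rides on how good that prediction is, and that's where the trouble starts.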
 

Some policymakers are excited by the fact that VAM appears to hit the mark sometimes.
 

Sort of like a monkey with a shotgun. 

Teachers and experts are more concerned about the misses.
 


It's wildly unreliable when used to evaluate individual teachers. 

Charleston’s own consultant found a 36% error rate for VAM.

Research consistently shows that a teacher's VAM score varies by 50-80% from one year to the next.
 

That's worse than a coin toss.
 

"VAM estimates of teacher effectiveness...are far too unstable to be considered fair or reliable."
National Academy of Sciences

 

Charleston is paying the Mathematica research group $2.9 million to develop our value-added model.

In 2010, Mathematica reported that VAM systems have a 36% error rate in identifying teachers as effective or ineffective.

  • Using three years of data? 26% error rate.
  • Want the error rate down to 12%? You’ll need ten years of data for that teacher (a quick simulation below shows why more years help).
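Here's a toy simulation of the effect. The noise level is my assumption, chosen for illustration; it is not Mathematica's model, though it lands in the same ballpark:

    # Toy model: each teacher has a stable "true" effect, but any one
    # year's VAM score is that effect plus a lot of noise.
    import random

    random.seed(1)
    N_TEACHERS = 100_000
    NOISE = 2.0  # assumed year-to-year noise relative to true effect

    for years in (1, 3, 10):
        misclassified = 0
        for _ in range(N_TEACHERS):
            true_effect = random.gauss(0, 1)
            scores = [random.gauss(true_effect, NOISE) for _ in range(years)]
            estimate = sum(scores) / years
            # "Effective" means an above-average estimate; an error is
            # a teacher labeled on the wrong side of average.
            if (estimate > 0) != (true_effect > 0):
                misclassified += 1
        print(f"{years:2d} year(s) of data: "
              f"{100 * misclassified / N_TEACHERS:.0f}% misclassified")

A run of this prints roughly 35%, 27%, and 18% for one, three, and ten years. The exact figures shift with the assumed noise, but the pattern, slow improvement and never certainty, is the point.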

They really don't like to talk about that now.

Federal grants like ours have made VAM-based merit pay a major cash cow ($800 million and counting).

 

They’re not the only ones with concerns about VAM…

National Academy of Sciences:
    …VAM estimates of teacher effectiveness should not be used to make operational decisions because such estimates are far too unstable to be considered fair or reliable.

RAND Corporation:
The research base is currently insufficient to support the use of VAM for high-stakes decisions.


American Institutes for Research:
We cannot at this time encourage anyone to use VAM in a high stakes endeavor.

Educational Testing Service and the Economic Policy Institute reached similar conclusions.

Dr. Edward Haertel, former president of the National Council on Measurement in Education, Chair of the National Research Council’s Board on Testing and Assessment, and former chair of the committee on methodology of the National Assessment Governing Board:

"Teacher VAM scores should emphatically not be included as a substantial factor with a fixed weight in consequential teacher personnel decisions.

The information they provide is simply not good enough to use in that way."

 

A VAM study of five large urban districts found that only 1/3 of top-ranked teachers kept that standing in the second year. 

A larger group plummeted to the bottom 40%.

Teachers at the bottom of the rankings showed the exact same pattern in reverse, with 1/3 rising to the top 40%.

Other than that, Mrs. Lincoln, how was the play?
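For what it's worth, that churn is exactly what a noisy measure produces on its own. Reusing the same toy model and assumed noise level as above (not the study's actual data):

    # Rank teachers by one year's noisy score, then re-rank them the
    # next year. Same illustrative noise assumption as the sketch above.
    import random

    random.seed(2)
    N, NOISE = 50_000, 2.0

    true = [random.gauss(0, 1) for _ in range(N)]
    year1 = [random.gauss(t, NOISE) for t in true]
    year2 = [random.gauss(t, NOISE) for t in true]

    # Percentile rank of each teacher's year-two score.
    order = sorted(range(N), key=lambda i: year2[i])
    pctile = {i: pos / N for pos, i in enumerate(order)}

    # Follow the teachers who ranked in the top 20% in year one.
    top = sorted(range(N), key=lambda i: year1[i], reverse=True)[: N // 5]
    stayed = sum(pctile[i] >= 0.8 for i in top) / len(top)
    fell = sum(pctile[i] < 0.4 for i in top) / len(top)

    print(f"Stayed in the top 20%:  {stayed:.0%}")
    print(f"Fell to the bottom 40%: {fell:.0%}")

Roughly a third of the "top" teachers stay on top and about as many sink to the bottom 40%, even though, in the simulation, no teacher's true effectiveness changed at all.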


None of this has stopped newspapers (including The Post & Courier) from publishing VAM ratings. 

Recent court rulings deem them public information, even when used for evaluation. 

One Los Angeles teacher committed suicide after a poor VAM rating was published.

 

NY officials pointed out that 75% of their teachers saw their category ratings go up or stay the same in the second year of their VAM scheme.
 

Um… that means 25% of their ratings went down.


Are one-fourth of their teachers really worse this year than they were last year?


In other news, twenty percent of all NC teachers flunked their VAM ratings this year as the Common Core curriculum was phased in.

Somehow their students scored above national averages on the NAEP, the National Assessment of Educational Progress.

Don't worry: Mathematica says Common Core won't be a problem here.

 

Want to know more about VAM?

Here’s a comprehensive report from some of the most prominent names in education research, and a definitive technical one by Dr. Edward Haertel of Stanford that is truly devastating.

Among other things, it slices and dices the oft-cited MET Study, which cost Bill Gates $50 million and "proved" that "value-added really works!" You know, like Windows 8.

Turns out, the main thing MET proved is that the customer is always right.

Heck, Mathematica could have told him that for $2.9 million. Want more? It’s ALL here.