Climate Change and Value-Added: New Evidence Requires New Thinking

October 23, 2013

CEPR Faculty Director Thomas Kane discusses the use of value-added estimates in teacher evaluations in a Brookings Institution paper.

Anyone who has participated in the education policy debate for five years or more probably staked out a position on the use of value-added (or student achievement growth) in teacher evaluations long ago.  That’s unfortunate, because, as has happened with research on climate change, there has been a slew of new research, especially in the last three years, on the strengths and weaknesses of such measures.  Given what we have learned, one wonders whether there would be more consensus by now on the appropriate use of test-based measures in teacher evaluation if the debate had not started out so polarized.

On statistical volatility (or reliability) of value-added

Remarkably, there is no disagreement about the facts regarding volatility: the correlation in teacher-level value-added scores from one year to the next is in the range of .35 to .60.  For those teaching English Language Arts, the results tend toward the bottom end of the range.  For those teaching math, the results tend toward the top end of the range.  Also, in middle school and high school, where the number of students taught in a given subject is larger, the stability of the measures tends to be higher.

