Educational interventions are often evaluated and compared on the basis of their impacts on test scores. Decades of research have produced two empirical regularities: interventions in later grades tend to have smaller effects than the same interventions in earlier grades, and the test score impacts of early educational interventions almost universally “fade out” over time. This paper explores whether these empirical regularities are an artifact of the common practice of rescaling test scores in terms of a student’s position in a widening distribution of knowledge. If a standard deviation in test scores in later grades translates into a larger difference in knowledge, an intervention’s effect on normalized test scores may fall even as its effect on knowledge does not. We evaluate this hypothesis by fitting a model of education production to correlations in test scores across grades and with college-going using both administrative and survey data. Our results imply that the variance in knowledge does indeed rise as children progress through school, but not enough for test score normalization to fully explain these empirical regularities.
Teacher Effectiveness
(2013). How Well Do Teacher Observations Predict Value-Added? Exploring Variability Across Districts. In Association for Public Policy Analysis & Management Fall Research Conference. Washington, DC.

(2012). Validating Arguments for Observational Instruments: Attending to Multiple Sources of Variation. Educational Assessment, 17, 1-19.