Teacher Effectiveness

Lynch, K., Chin, M., & Blazar, D. (2013). How Well Do Teacher Observations Predict Value-Added? Exploring Variability Across Districts. In Association for Public Policy Analysis & Management Fall Research Conference. Washington, DC.

In this study we ask: Do observational instruments predict teachers' value-added equally well across different state tests and district/state contexts? And, to what extent are differences in these correlations a function of the match between the observation instrument and tested content? We use data from the Gates Foundation-funded Measures of Effective Teaching (MET) Project (N=1,333) study of elementary and middle school teachers from six large public school districts, and from a smaller (N=250) study of fourth- and fifth-grade math teachers from four large public school districts. Early results indicate that estimates of the relationship between teachers' value-added scores and their observed classroom instructional quality differ considerably by district.

Kelcey, B., McGinn, D., Hill, H. C., & Charalambous, C. (Working Paper). The Generalizability of Item Parameters Across Lessons.

The purpose of this study is to investigate three aspects of construct validity for the Mathematical Quality of Instruction classroom observation instrument: (1) the dimensionality of scores, (2) the generalizability of these scores across districts, and (3) the predictive validity of these scores in terms of student achievement.

Kane, T. J., & Staiger, D. O. (2008). Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation.

The authors used a random-assignment experiment in Los Angeles Unified School District to evaluate various non-experimental methods for estimating teacher effects on student test scores. Having estimated teacher effects during a pre-experimental period, they used these estimates to predict student achievement following random assignment of teachers to classrooms. While all of the teacher effect estimates considered were significant predictors of student achievement under random assignment, those that controlled for prior student test scores yielded unbiased predictions and those that further controlled for mean classroom characteristics yielded the best prediction accuracy. In both the experimental and non-experimental data, the authors found that teacher effects faded out by roughly 50 percent per year in the two years following teacher assignment.

Hill, H. C., Charalambous, C. Y., Blazar, D., McGinn, D., Kraft, M. A., Beisiegel, M., Humez, A., et al. (2012). Validating Arguments for Observational Instruments: Attending to Multiple Sources of Variation. Educational Assessment, 17, 1-19.

Measurement scholars have recently constructed validity arguments in support of a variety of educational assessments, including classroom observation instruments. In this article, we note that users must examine the robustness of validity arguments to variation in the implementation of these instruments. We illustrate how such an analysis might be used to assess a validity argument constructed for the Mathematical Quality of Instruction instrument, focusing in particular on the effects of varying the rater pool, subject matter content, observation procedure, and district context. Variation in the subject matter content of lessons did not affect rater agreement with master scores, but the evaluation of other portions of the validity argument varied according to the composition of the rater pool, observation procedure, and district context. These results demonstrate the need for conducting such analyses, especially for classroom observation instruments that are subject to multiple sources of variation.

Taylor, E. S., & Tyler, J. H. (2011). The Effect of Evaluation on Performance: Evidence from Longitudinal Student Achievement Data of Mid-career Teachers.

The effect of evaluation on employee performance is traditionally studied in the context of the principal-agent problem. Evaluation can, however, also be characterized as an investment in the evaluated employee's human capital. We study a sample of mid-career public school teachers in which we can consider these two types of evaluation effects separately. Employee evaluation is a particularly salient topic in public schools, where teacher effectiveness varies substantially and where teacher evaluation itself is increasingly a focus of public policy proposals. We find evidence that a quality classroom-observation-based evaluation and performance measures can improve mid-career teacher performance both during the period of evaluation, consistent with the traditional predictions, and in subsequent years, consistent with human capital investment. However, the estimated improvements during evaluation are less precise. Additionally, the effect sizes represent a substantial gain in welfare given the program's costs.

Kane, T. J., Jacob, B., Rockoff, J., & Staiger, D. O. (2011). Can You Recognize an Effective Teacher When You Recruit One? Education Finance and Policy, 6(1), 43-74.

The authors administered an in-depth survey to new math teachers in New York City and collected information on a number of non-traditional predictors of effectiveness: teaching-specific content knowledge, cognitive ability, personality traits, feelings of self-efficacy, and scores on a commercially available teacher selection instrument. They find that a number of these predictors have statistically and economically significant relationships with student and teacher outcomes. The authors conclude that, while there may be no single factor that can predict success in teaching, using a broad set of measures can help schools improve the quality of their teachers.

Blazar, D., Litke, E., Barmore, J., & Gogolen, C. (Working Paper). What Does It Mean to be Ranked a "High" or "Low" Value-Added Teacher? Observing Differences in Instructional Quality Across Districts.

Education agencies are evaluating teachers using student achievement data. However, very little is known about the comparability of test-based or "value-added" metrics across districts and the extent to which they capture variability in classroom practices. Drawing on data from four urban districts, we find that teachers are categorized differently when compared within versus across districts. In addition, analyses of scores from two observation instruments, as well as qualitative viewing of lesson videos, identify stark differences in instructional practices across districts among teachers who receive similar within-district value-added rankings. Exploratory analyses suggest that these patterns are not explained by observable background characteristics of teachers and that factors beyond labor market sorting likely play a key role.

Hill, H. C., & Grossman, P. (2013). Learning from Teacher Observations: Challenges and Opportunities Posed by New Teacher Evaluation Systems. Harvard Educational Review.

In this article, Heather Hill and Pam Grossman discuss the current focus on using teacher observation instruments as part of new teacher evaluation systems being considered and implemented by states and districts. They argue that if these teacher observation instruments are to achieve the goal of supporting teachers in improving instructional practice, they must be subject-specific, involve content experts in the process of observation, and provide information that is both accurate and useful for teachers. They discuss the instruments themselves, raters and system design, and timing of and feedback from the observations. They conclude by outlining the challenges that policy makers face in designing observation systems that will work to improve instructional practice at scale.

2013 May 15

Beyond the Numbers Convening 2013

Wed May 15 (All day) to Fri May 17 (All day)

The third annual SDP Beyond the Numbers Convening, From Classroom to Boardroom: Analytics for Strategy and Performance, was held in Boston, MA on May 15–17, 2013. Participants included SDP Fellows (current and alumni); SDP Fellows' Supervisors; members of ActivateED, a collaborative effort with Education Pioneers and the Broad Center; SDP Faculty Advisors; Bill and Melinda Gates Foundation representatives; Center for Education Policy Research staff; and key leaders in K–12 education.

2012 Apr 23

Beyond the Numbers: The Power of Analytic Leaders

(All day)


Boston, MA

Two hundred education leaders from across the country gathered in Boston, MA to attend the 2nd annual Strategic Data Project (SDP) Spring Convening, Beyond the Numbers: The Power of Analytic Leaders. Participants engaged in discussions about improving the use of data and analysis in their respective partner agencies, and partook in a rich set of sessions focused on college-going success and the human capital pipeline. Speakers included Amy Briggs, Chief Operating Officer at Student Achievement Partners; John Friedman, Assistant Professor of Public Policy at Harvard Kennedy School; and...

2012 Sep 13

CEPR Faculty Director Thomas Kane on Panel at The New York Times Schools for Tomorrow Conference

(All day)


New York, NY

The New York Times hosted the second annual Schools for Tomorrow Conference with a focus on "Building a Better Teacher." Four hundred educators, government officials, philanthropists, and investors participated in workshops and debates to explore ways government, the private sector, and parents can create and support the best teachers possible. CEPR Faculty Director Thomas Kane discussed teacher evaluation and measurement along with fellow panelists...

2013 Jan 30

Thomas Kane "Measures of Effective Teaching" lecture at Clark University

2:00pm to 3:00pm


Clark University

On Wednesday, January 30, CEPR Faculty Director Thomas Kane spoke at Clark University for the annual Dr. Lee Gurel '48 Lecture. His lecture, "Measures of Effective Teaching," focused on the questions: How do we tell who the great teachers are, and how can we produce more of them and more effective teaching? A video recording of the lecture is available on the university's website.

2013 Mar 12

Reinventing Teacher Evaluation: Bureaucratic Intrusion or Vital Educational Infrastructure?

12:00pm to 1:30pm


Penn Graduate School of Education, University of Pennsylvania

CEPR Faculty Director Tom Kane will speak at the Penn Graduate School of Education (Penn GSE) at the University of Pennsylvania as part of the lecture series held for the Pre-Doctoral Training Program in Interdisciplinary Methods for Field-Based Research in Education, sponsored by the Institute of Education Sciences (IES). He will discuss findings from the MET project and the importance of providing regular, reliable feedback to teachers.

2013 Apr 27

CEPR-Affiliated Sessions at AERA 2013

Sat Apr 27 (All day) to Wed May 1 (All day)


San Francisco, CA

CEPR affiliates will present at the American Educational Research Association (AERA) Annual Meeting 2013 from April 27–May 1 in San Francisco, CA. The theme of the event this year is "Education and Poverty: Theory, Research, Policy, and Praxis." CEPR presenters include researchers; steering committee members; and SDP Fellows, Alumni, and Faculty Advisors. The sessions will be an opportunity to learn more about CEPR research, as well as research done outside of the center by its affiliates.

2013 Oct 17

SDP Toolkit for Effective Data Use Webinar

(All day)

The Strategic Data Project has released the human capital edition of the SDP Toolkit for Effective Data Use. The SDP Toolkit is a free resource guide for education agency analysts who collect and analyze data on student achievement. Completing the toolkit produces a set of basic, yet essential, human capital and college-going analyses that every education agency should have as a foundation to inform strategic management and policy decisions. The addition of human capital analyses will give users insight into teacher recruitment, placement, evaluation, development, and retention.
