National Center for Teacher Effectiveness Conference 2011


On May 2-3, 2011, education leaders from 11 states and 30 districts gathered at Harvard University to attend the first NCTE conference, Putting the Pieces Together: Taking Improved Teacher Evaluation to Scale. The conference was designed to help states and districts learn strategies for taking improved teacher evaluation to scale.

  • Conference Agenda
  • Speaker Biographies
  • Presentations, Videos, and Resources

Welcome & Introduction

Objective:  During this opening session, Jason Kamras of the District of Columbia Public Schools (DCPS) spoke about the “operational lift” of bringing classroom observation to scale in DCPS. Kamras reviewed the district’s strategy and tools and shared his experiences implementing classroom observation in DCPS.

Key takeaways from this session were:

  • The changing role of the school principal: To free up the time it takes to conduct effective teacher evaluation and professional development, school principals need to focus on instructional leadership.  Non-instructional work should shift to a business manager, operations manager, high-level administrative staff member, or other school-level support staff.

  • Doing value-added means more than developing a model: Districts need to build a team that can support the ongoing nature of this work, from analytics and technology support to professional development for principals and other evaluators, and communications.

  • Invest in technology: To turn the information gathered during classroom observations into meaningful data, invest in technology and tools that allow you to analyze and act on that data.

  • Don’t underestimate the power of ongoing and effective communication: Communication between districts and schools, and within schools, is critical to helping everyone understand their role in teacher evaluation and the purpose of classroom observation, and to creating a community where people work together to champion this work.

Presentations & Speakers:

  • Corinne Herlihy, Project Director, National Center for Teacher Effectiveness
  • Thomas Kane, Professor of Education and Economics, Harvard Graduate School of Education; Deputy Director – US Program, Bill & Melinda Gates Foundation
  • Jason Kamras, Chief, Office of Human Capital, District of Columbia Public Schools

Measuring Practice

Objective:  The purpose of this session was for state/district teams to share information about current district practices in teacher observation and to learn about upcoming state mandates that may change those practices.  Special attention was given to the tensions that arise when shifting from using observation data for low-stakes purposes to using it for high-stakes decisions (e.g., teacher merit pay, teacher evaluation, termination, and tenure decisions). Teams also set goals for the conference, identified their “burning questions,” and discussed the challenges they expect to encounter as they implement new teacher evaluation systems.


Goals identified by teams included:

  • Learn how to leverage classroom observation as a professional development tool
  • Learn about how to incorporate student input into teacher evaluation
  • Learn the value and limitations of student achievement data as part of evaluation
  • Learn how to move from training to implementation for all new aspects of teacher evaluation in their districts, including classroom observation
  • Get advice on how to navigate and respond to the political pressures that make it harder to do this work, let alone do it well
  • Learn how to communicate effectively about this work


Anticipated challenges included:

  • Working with unions to get input and support for the teacher evaluation process
  • Incorporating teachers of non-tested grades/subjects into the teacher evaluation process
  • Training principals and other raters to ensure fidelity of process and system
  • Structuring time during the school year to train observers
  • Principal workload—how to handle the increased responsibilities that expanded classroom observation requires
  • Budget and staffing constraints
  • Explaining value added as part of the evaluation process
  • Principal accountability—what’s fair?

Observational Tools I: Considerations & Options

Objective:  During this session, several tool developers presented four classroom observation tools (see below).  The purpose was to give states and districts a fuller understanding of what each tool measures and what training on it involves.  Each speaker demonstrated their tool in action, provided opportunities for participants to interact with it, and discussed implementation challenges.

At the start of the session, state/district teams responded to a key initial question: “What makes a ‘good’ observation system?”
See the responses

Presentations & Speakers

The Framework for Teaching (FFT) 
Developed by Charlotte Danielson

Presentation by Kate Dickson

The Protocol for Language Arts Teaching Observation (PLATO)
Developed at Stanford University

Presentation by Pam Grossman

The Classroom Assessment Scoring System (CLASS)
Developed at the University of Virginia

Presentation by Bridget Hamre

The Mathematical Quality of Instruction (MQI)
Developed at the University of Michigan and Harvard University

Presentation by Heather Hill

Additional Resources:

Taking Classroom Observation to Scale: Lessons from the Field

Objective:  During this session participants heard “on-the-ground” perspectives and key lessons learned from leaders of three districts that have implemented expanded teacher evaluation systems.  Jason Kamras shared his experiences implementing IMPACT, DCPS’ new teacher evaluation system, which contains an observational component as well as links to student growth (i.e., value-added).  A team from Cincinnati discussed their experiences as one of the few districts that has run a successful, formal teacher observation and evaluation system for the past ten years, including the implementation and sustainability challenges they have faced and the positive impact their program has had on teachers.  A team from Hillsborough County, Florida, spoke about the unique partnership between district leadership and the teachers’ union that has enabled them to communicate effectively and build a movement for expanded teacher evaluation in Hillsborough County.

Presentations & Speakers:

  • District of Columbia Public Schools, Washington, DC
    • Jason Kamras, Chief, Office of Human Capital
  • Cincinnati Public Schools, Ohio
    • Julia Indalecio, Teacher Programs Manager
    • Wellyn Collins, Facilitator, Peer Assistance and Evaluation Program, Teacher Evaluation System, and Career in Teaching Program
    • Susan Ankenbauer
  • Moderator: Sarah Glover, Executive Director, Strategic Data Project

Additional Resources:

Creating a System for Classroom Observation

Objective:  During this session participants learned about the legal and practical implications of implementing expanded teacher evaluation systems and received tools to help them successfully frame and accomplish this work, including a sample “pacing guide.” Catherine McClellan of ETS shared her experiences overseeing human scoring of various types of tests and the challenges of training raters and maintaining fidelity in the system. Sara Heyburn discussed Tennessee’s approach to implementing expanded teacher evaluation and rolling out training for observers, and presented the Tennessee pacing guide, which most participants found incredibly helpful as they began to think about the school-level implications of implementation. During the discussion, participants engaged actively with the speakers, asking questions related to their own state/district contexts.

Catherine McClellan
Director of Human Constructed-Response Scoring, Educational Testing Service

Sara Heyburn
Policy Advisor, Race to the Top, Tennessee Department of Education

Panel Discussion

Corinne Herlihy (Moderator)
Project Director, National Center for Teacher Effectiveness

Additional Resources:

Teacher-Student Data Linkage: Lessons from the Field

Objective:  During this session participants learned about a more technical component of implementing effective teacher evaluation in a high-stakes context: accurate teacher-student data links.  John Hussey of Battelle for Kids spoke about the need for accurate teacher-student data links and presented his organization’s system for verifying rosters in order to track this information accurately.  Beth Gleason spoke about this aspect of her work in Louisiana, giving participants an example of how this technical piece is incorporated and viewed from the perspective of state implementation.  Hella Bel Hadj Amor spoke about her work verifying rosters in DCPS as part of her efforts to incorporate value-added measures into DCPS’ IMPACT teacher evaluation system.

John Hussey
Chief Strategy Officer, Battelle for Kids

Beth Gleason
Research Scientist, Strategic Research and Analysis, Louisiana Department of Education

Hella Bel Hadj Amor
Director of Teacher Effectiveness Research and Evaluation, District of Columbia Public Schools

Doug Staiger (Moderator)
Co-Principal Investigator, National Center for Teacher Effectiveness and John French Professor of Economics, Dartmouth College

Additional Resources:

Developing Fair & Reliable Measures of Effective Teaching

Objective:  During this session, Tom Kane and Denis Newman spoke in depth about the Measures of Effective Teaching (MET) project currently being conducted by the Bill & Melinda Gates Foundation, as well as the National Center for Teacher Effectiveness at Harvard. The presentation focused on the project’s initial research findings and their potential policy implications. There was also a demonstration of the MET “validation engine.” Participants were highly engaged in learning about implementation takeaways from the MET project that they might apply in their own states and districts, as well as how the validation engine might be used as a training tool.

Thomas Kane
Professor of Education and Economics, Harvard Graduate School of Education
Deputy Director – US Program, Bill & Melinda Gates Foundation

Denis Newman
President, Empirical Education

Additional Resources:

Using Student Surveys to Learn about Teaching

Objective:  During this session participants learned about another key tool that can be incorporated into expanded teacher evaluation: student surveys.  Rob Ramsdell spoke about the student survey developed by Ron Ferguson, known as the Tripod Project.  The survey has been used extensively by large districts as well as in the Gates Foundation’s Measures of Effective Teaching project, and early analysis suggests that student responses are solid predictors of teacher effectiveness.  David Osborne was on hand to talk about the rollout of the Tripod survey in New York as well as in the Gates MET project; he shared lessons learned from the implementation of the survey in New York City.

Rob Ramsdell
Vice-President, Cambridge Education 
(standing in for Ron Ferguson, Senior Lecturer in Education and Public Policy, Harvard University; Project Founder, The Tripod Project)

David Osborne
NYC Director, Measures of Effective Teaching (MET) Project 

Corinne Herlihy (Moderator)
Project Director, National Center for Teacher Effectiveness

Binder Materials:

  • Ferguson, R. F. (October 2010).  “Student Perceptions of Teaching Effectiveness.”  Discussion Brief.
  • “NYC School Survey 2009-2010 Report.”  NYC Department of Education. 

Additional Resources: