The Strategic Data Project partnered with the Colorado Education Initiative (CEI) to produce the SDP College-Going Diagnostic. The diagnostic analyses focus on describing college enrollment and persistence rates of high school graduates across Colorado; describing patterns in students’ participation in college-level coursework (in particular, AP classes and concurrent enrollment) in high school; and investigating the extent to which students’ participation in AP classes and concurrent enrollment programs is associated with persistence at both two-year and four-year colleges.
The Delaware Department of Education collaborated with SDP to produce the SDP College-Going Diagnostic. The diagnostic analyses examine students’ high school performance, college enrollment, and college persistence patterns, and compare these patterns across a variety of student characteristics and academic experiences.
The Massachusetts Department of Elementary and Secondary Education collaborated with the Strategic Data Project (SDP) to conduct the SDP College-Going Diagnostic. This report examines key findings related to students’ college enrollment and college persistence patterns, and compares these patterns across a variety of student characteristics and academic experiences. The report highlights results primarily at the state level and, for illustrative purposes, for a handful of school districts and high schools.
Denver Public Schools (DPS) collaborated with the Strategic Data Project to conduct the SDP Human Capital Diagnostic. The analysis focused on teacher recruitment, placement, development, evaluation/compensation, and retention/turnover. Because of the uniqueness of the data from DPS's ProComp teacher compensation system and its relevance to current policy discussions, we describe the evaluation/compensation analyses in depth and summarize the key findings from the other components of the diagnostic. These analyses have the potential to inform education policy both in Denver and across the nation as education agencies consider revising traditional “lockstep” pay systems.
The New York State Education Department collaborated with SDP to produce the SDP Human Capital Diagnostic. The diagnostic is designed to identify patterns of teacher effectiveness and areas for policy change that could leverage teacher effectiveness to improve student achievement. It is also intended to demonstrate how education agencies can capitalize on existing data to understand their current performance, set future goals, and strategically plan responses.
Although wide variation in teacher effectiveness is well established, much less is known about differences in teacher improvement over time. We document that average returns to teaching experience mask large variation across individual teachers, and across groups of teachers working in different schools. We examine the role of school context in explaining these differences using a measure of the professional environment constructed from teachers’ responses to state-wide surveys. Our analyses show that teachers working in more supportive professional environments improve their effectiveness more over time than teachers working in less supportive contexts. On average, teachers working in schools at the 75th percentile of professional environment ratings improved 38% more than teachers in schools at the 25th percentile after ten years.
We extend this line of research by investigating teacher career and background characteristics, personal resources, and school and district resources that predict an array of instructional practices identified on a mathematics-specific observational instrument, MQI, and a general instrument, CLASS. To understand these relationships, we use correlation and regression analyses. For a subset of teachers for whom we have data from multiple school years, we exploit within-teacher, cross-year variation to examine the relationship between class composition and instructional quality that is not confounded with the sorting of "better" students to "better" teachers. We conclude that multiple teacher- and school-level characteristics--rather than a single factor--are related to teachers' classroom practices.
The authors used self-report surveys to gather information on a broad set of non-cognitive skills from 1,368 eighth-grade students attending Boston Public Schools and linked this information to administrative data on their demographics and test scores. At the student level, scales measuring conscientiousness, self-control, grit, and growth mindset are positively correlated with attendance, behavior, and test-score gains between fourth and eighth grade. Conscientiousness, self-control, and grit are unrelated to test-score gains at the school level, however, and students attending over-subscribed charter schools with higher average test-score gains score lower on these scales than do students attending district schools. Exploiting charter school admissions lotteries, the authors replicate previous findings indicating positive impacts of charter school attendance on math achievement, but find negative impacts on these non-cognitive skills. The authors provide suggestive evidence that these paradoxical results are driven by reference bias, or the tendency for survey responses to be influenced by social context. The results therefore highlight the importance of improved measurement of non-cognitive skills in order to capitalize on their promise as a tool to inform education practice and policy.
Using data from elementary mathematics teachers, we examine the correspondence between self-reports and observational measures of two instructional dimensions--reform-orientation and classroom climate--and the relative ability of these measures to predict teachers' contributions to student learning.
While research has generated substantial information regarding the characteristics of effective mathematics teachers and classrooms, scholars have rarely tested multiple aspects of teachers or teaching within a single study. Without testing multiple variables simultaneously, it is difficult to identify specific aspects of mathematics teachers and teaching that may be particularly impactful on student learning, and to understand the degree to which these characteristics are related to one another. This plenary draws on data from a three-year study measuring multiple components of teacher and teaching quality to investigate these issues.
The SDP Toolkit for Effective Data Use is a resource guide for education agency analysts who collect and analyze data on student achievement. Completing the toolkit produces a set of basic, yet essential, human capital and college-going analyses that every education agency should have as a foundation to inform strategic management and policy decisions.
The SDP Summer Melt Handbook is a resource for education leaders interested in examining whether summer melt is occurring in their agency. The handbook not only serves to diagnose the phenomenon, but also helps leaders understand what they can do to address it. Learn more about the Summer Melt Handbook on the SDP website.
Boston Public Schools collaborated with SDP to produce the SDP College-Going Diagnostic for its district. The diagnostic is designed to identify potential areas for action to increase students’ levels of academic achievement, preparedness for college, and postsecondary attainment. It is also intended to demonstrate how districts can capitalize on existing data to understand their current performance, set future goals, and strategically plan responses.
The School District of Philadelphia partnered with SDP to produce the SDP College-Going Diagnostic. The diagnostic analyses summarized in this report focus on 1) student performance in the district during high school and into college, 2) critical junctures along the way that affect student success, and 3) student characteristics and other factors that are most strongly related to college enrollment and persistence.
Los Angeles Unified School District (LAUSD) partnered with SDP to produce the SDP College-Readiness Diagnostic for its district. The diagnostic analyses focus on 1) how students across the district progress toward high school graduation, 2) whether and how students who fall off track for graduation recover and go on to graduate, and 3) the progress of students toward the completion of A-G requirements.
Boston Public Schools collaborated with SDP to produce the SDP Human Capital Diagnostic for its district. The diagnostic is designed to identify patterns of teacher effectiveness and areas for policy change that could leverage teacher effectiveness to improve student achievement. It is also intended to demonstrate how districts can capitalize on existing data to understand their current performance, set future goals, and strategically plan responses.
In this study, we use value-added scores and video data to mount an exploratory study of high- and low-VAM teachers' instruction. Specifically, we seek to answer two research questions: First, can expert observers of mathematics instruction distinguish between high- and low-VAM teachers solely by observing their instruction? Second, what instructional practices, if any, consistently characterize high-VAM but not low-VAM teachers' classrooms? To answer these questions, we use data generated by 250 fourth- and fifth-grade math teachers and their students in four large public school districts. Preliminary analyses indicate that a teacher's value-added rank was often not obvious to this team of expert observers.
This toolkit provides useful resources for designing and rolling out a high school graduate exit survey in a school district, as well as effectively analyzing survey results. Anyone who is interested in implementing a high school exit survey, reworking a current exit survey, or analyzing survey results can leverage this resource.
In this study we ask: Do observational instruments predict teachers' value-added equally well across different state tests and district/state contexts? And, to what extent are differences in these correlations a function of the match between the observation instrument and tested content? We use data from the Gates Foundation-funded Measures of Effective Teaching (MET) Project (N=1,333) study of elementary and middle school teachers from six large public school districts, and from a smaller (N=250) study of fourth- and fifth-grade math teachers from four large public school districts. Early results indicate that estimates of the relationship between teachers' value-added scores and their observed classroom instructional quality differ considerably by district.
In this article, Heather Hill and Pam Grossman discuss the current focus on using teacher observation instruments as part of new teacher evaluation systems being considered and implemented by states and districts. They argue that if these teacher observation instruments are to achieve the goal of supporting teachers in improving instructional practice, they must be subject-specific, involve content experts in the process of observation, and provide information that is both accurate and useful for teachers. They discuss the instruments themselves, raters and system design, and timing of and feedback from the observations. They conclude by outlining the challenges that policy makers face in designing observation systems that will work to improve instructional practice at scale.