In 2011–12, Newark launched a set of educational reforms supported by $200 million in private philanthropy. Using data from 2009 through 2016, we evaluate the change in Newark students’ achievement growth relative to similar students and schools elsewhere in New Jersey. We measure achievement growth using a “value-added” model, controlling for prior achievement, demographics, and peer characteristics. By the fifth year of reform, Newark saw statistically significant gains in English and no significant change in math achievement growth. Perhaps due to the disruptive nature of the reforms, growth declined initially before rebounding in recent years. Aided by the closure of low value-added schools, much of the improvement was due to shifting enrollment from lower- to higher-growth district and charter schools. Shifting enrollment accounted for 62 percent of the improvement in English. In math, such shifts offset what would have been a decline in achievement growth.
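As a rough illustration of the value-added setup (a sketch, not the paper’s exact estimator), the growth model can be written as a regression of current scores on prior scores, demographics, and peer characteristics, with Newark-by-year terms tracing out the reform’s trajectory. All file and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical statewide student panel with one row per student-year.
df = pd.read_csv("nj_student_panel.csv")

# Current score on prior score, demographics, and peer characteristics;
# the newark-by-year interactions capture Newark's year-specific growth
# relative to observationally similar students elsewhere in New Jersey.
model = smf.ols(
    "score ~ prior_score + frpl + ell + sped"
    " + peer_mean_prior_score + C(grade) + C(year)"
    " + newark:C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

print(model.summary())
```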
Researchers have identified many characteristics of teachers and teaching that contribute to student outcomes. However, most studies investigate only a small number of these characteristics, likely underestimating the overall contribution. In this paper, we use a set of 28 teacher-level predictors drawn from multiple research traditions to explain teacher-level variation in student outcomes. These predictors collectively explain 28% of teacher-level variability in state standardized math test scores and 40% in a predictor-aligned math test. In addition, each individual predictor explains only a small, relatively unique portion of the total teacher-level variability. The first finding highlights the importance of choosing predictors and outcomes that are well aligned; the second suggests that the phenomena underlying teacher effects are multidimensional.
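The variance-decomposition logic can be illustrated with a hedged sketch: regress a teacher-level outcome on the full predictor set to get the jointly explained share, then drop each predictor in turn to see its unique contribution. File and variable names are illustrative, not the study’s actual data or estimator.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical teacher-level file: one row per teacher, with an outcome
# (e.g., a value-added estimate) and predictor columns named pred_*.
teachers = pd.read_csv("teacher_predictors.csv")
predictors = [c for c in teachers.columns if c.startswith("pred_")]

# Full model: share of teacher-level variability explained jointly.
X_full = sm.add_constant(teachers[predictors])
full = sm.OLS(teachers["va_math"], X_full).fit()
print(f"Jointly explained: {full.rsquared:.2f}")

# Each predictor's unique contribution: the R-squared lost when it is
# dropped from the full model (a semipartial decomposition).
for p in predictors:
    rest = sm.add_constant(teachers[[q for q in predictors if q != p]])
    reduced = sm.OLS(teachers["va_math"], rest).fit()
    print(f"{p}: unique R2 = {full.rsquared - reduced.rsquared:.3f}")
```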
The purpose of this study is to investigate three aspects of construct validity for the Mathematical Quality of Instruction (MQI) classroom observation instrument: (1) the dimensionality of scores, (2) the generalizability of these scores across districts, and (3) the predictive validity of these scores in terms of student achievement.
With many states slated to use scores derived from classroom observation instruments in high-stakes decisions, developers need methods for improving the functioning of these instruments. We show how multidimensional, multilevel item response theory (IRT) models can yield information critical for improving the performance of observational instruments.
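As one illustration of the kind of diagnostic such models can provide (a simplified sketch, not the paper’s multidimensional, multilevel specification), the two-parameter logistic (2PL) item information function shows the range of teacher quality over which each rubric item discriminates well. All parameter values below are hypothetical.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a high rating given latent quality theta,
    item discrimination a, and item difficulty b (2PL model)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by one item at theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)

# Two hypothetical rubric items: a sharply discriminating item centered
# on average quality, and a weakly discriminating item targeting the top.
for a, b in [(1.8, 0.0), (0.6, 1.0)]:
    info = item_information(theta, a, b)
    print(f"a={a}, b={b}: peak information {info.max():.2f} "
          f"at theta={theta[info.argmax()]:.1f}")
```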
Education agencies are evaluating teachers using student achievement data. However, very little is known about the comparability of test-based or "value-added" metrics across districts and the extent to which they capture variability in classroom practices. Drawing on data from four urban districts, we find that teachers are categorized differently when compared within versus across districts. In addition, analyses of scores from two observation instruments, as well as qualitative viewing of lesson videos, identify stark differences in instructional practices across districts among teachers who receive similar within-district value-added rankings. Exploratory analyses suggest that these patterns are not explained by observable background characteristics of teachers and that factors beyond labor market sorting likely play a key role.
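The within- versus across-district comparison can be sketched as follows, assuming a hypothetical teacher-level file: rank teachers by value-added within their own district, re-rank them in the pooled sample, and tabulate how category assignments shift.

```python
import pandas as pd

# Hypothetical file with columns: teacher_id, district, va.
df = pd.read_csv("teacher_va.csv")

df["pct_within"] = df.groupby("district")["va"].rank(pct=True)
df["pct_across"] = df["va"].rank(pct=True)

def quartile(p):
    return pd.cut(p, bins=[0, 0.25, 0.5, 0.75, 1.0], labels=[1, 2, 3, 4])

df["q_within"] = quartile(df["pct_within"])
df["q_across"] = quartile(df["pct_across"])

# Off-diagonal cells count teachers whose category changes when they are
# compared across rather than within districts.
print(pd.crosstab(df["q_within"], df["q_across"]))
```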
Aided by $200 million in private philanthropy, city and state leaders launched a major school reform effort in Newark, New Jersey, starting in the 2011–2012 school year. In a coinciding National Bureau of Economic Research (NBER) working paper, we assessed the impact of those reforms on student achievement growth, comparing students in Newark Public Schools (NPS) district and charter schools to students with similar prior achievement, similar demographics, and similar peers elsewhere in New Jersey. This report summarizes the key findings.
The project team is still awaiting student test data to complete the evaluation, but this brief provides a short update on survey results. Compared with students of control teachers, students of MQI-coached teachers report that their teachers ask more substantive questions and require more use of mathematical vocabulary. Students in MQI-coached classrooms also report more student talk in class. Teachers who received MQI coaching found their professional development significantly more useful than control teachers did and were more likely to report that their mathematics instruction improved over the course of the year.
Against the backdrop of a contentious ballot question, charter schools in Massachusetts have faced scrutiny across multiple dimensions. This event brings together several of the preeminent researchers on the topic to share their findings, followed by a period of directed questions and audience Q&A.
In Massachusetts, the charter school debate has centered on four concerns:
that the achievement of the high-scoring charter schools is due to selective admission and retention policies and not the education that the charter schools provide,
that charter schools are underserving English language learners and special education students,
that charter schools are disciplining students at higher rates in order to drive troublesome students back to traditional schools, and
that charter schools are undermining traditional public schools financially.
This report summarizes the evidence pertaining to these four concerns.
Achievement Network (ANet) was founded in 2005 as a school-level intervention to support the use of academic content standards and assessments to improve teaching and learning. Initially developed within the Boston charter school sector, it has expanded to serve over 500 schools in nine geographic networks across the United States. The program is based on the belief that teachers will become more effective at identifying and addressing gaps in student learning if they receive timely data on student performance from interim assessments tied to state standards, if school leaders provide support and create structures that help teachers use those data to identify student weaknesses, and if teachers know how to improve the performance of students who are falling behind. This, in turn, should improve student performance, particularly for high-need students.
In 2010, ANet received a development grant from the U.S. Department of Education’s Investing in Innovation (i3) Program. The grant funded both the expansion of the program to serve up to 60 additional schools in five school districts and an external evaluation of that expansion. The Center for Education Policy Research (CEPR) at Harvard University partnered with ANet to design a matched-pair, school-randomized evaluation of the program’s impact on educator practice and student achievement in schools participating in the i3-funded expansion.
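A minimal sketch of matched-pair school randomization of the sort the design describes, assuming a hypothetical file of applicant schools and a single baseline matching covariate:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2010)  # fixed seed so the draw is reproducible

# Hypothetical file with columns: school_id, baseline_score.
schools = pd.read_csv("applicant_schools.csv")
schools = schools.sort_values("baseline_score").reset_index(drop=True)
schools["pair"] = schools.index // 2  # adjacent schools form matched pairs

# Flip a coin within each pair: one school gets ANet, its match is control.
schools["anet"] = schools.groupby("pair")["school_id"].transform(
    lambda ids: rng.permutation([1, 0])[: len(ids)]
)
print(schools)
```

Pairing on a baseline covariate before randomizing keeps treatment and control groups balanced on that covariate by construction, which tightens the impact estimates in small school samples.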
With the debate over the federal role in education settled for now by the passage of the Every Student Succeeds Act (ESSA), it is time to refocus attention on how to help states move forward and succeed using the Common Core State Standards (CCSS). In this Askwith Forum, Professor Thomas Kane will share findings about CCSS implementation strategies from the Center for Education Policy Research at Harvard University. He will be followed by a panel of educators, who will share their experiences, pain points, and successes with the CCSS over the past year.
Subtle policy adjustments can induce relatively large “ripple effects.” We evaluate a College Board initiative that increased the number of free SAT score reports available to low-income students and changed the time horizon for using these score reports. Using a difference-in-differences analytic strategy, we estimate that targeted students were roughly 10 percentage points more likely to send eight or more reports. The policy improved on-time college attendance and 6-year bachelor’s completion by about 2 percentage points. Impacts were realized primarily by students who were competitive candidates for 4-year college admission. The bachelor’s completion impacts are larger than would be expected based on the number of students driven by the policy change to enroll in college and to shift into more selective colleges. The unexplained portion of the completion effects may result from improvements in nonacademic fit between students and the postsecondary institutions in which they enroll.
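The difference-in-differences strategy can be sketched in a few lines. The variable names, outcome, and policy cutoff below are illustrative rather than the study’s actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-by-cohort file with a fee_waiver indicator for
# targeted low-income students.
df = pd.read_csv("sat_cohorts.csv")

POLICY_COHORT = 2008  # illustrative cutoff, not the actual policy year

df["post"] = (df["cohort"] >= POLICY_COHORT).astype(int)
df["did"] = df["post"] * df["fee_waiver"]  # targeted students after the change

# Outcome: indicator for sending eight or more score reports.
m = smf.ols("sent_eight_plus ~ fee_waiver + post + did", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["high_school_id"]}
)
print(m.params["did"])  # difference-in-differences estimate
```

The coefficient on the interaction term compares the before-after change for targeted students with the before-after change for everyone else, which is what nets out secular trends in score-sending behavior.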
In 2011, the Strategic Data Project (SDP) began a partnership with the Wake County Public School System (WCPSS). As part of this partnership, SDP collaborated with WCPSS to analyze patterns of high school students’ on-track status, graduation, college enrollment, and college persistence. This set of high-leverage, policy-relevant analyses constitutes the SDP College-Going Diagnostic.