In 2011–12, Newark launched a set of educational reforms supported by a $200 million gift. Using data from 2009 through 2016, we evaluate the change in Newark students’ achievement growth relative to similar students and schools elsewhere in New Jersey. We measure achievement growth using a “value-added” model, controlling for prior achievement, demographics, and peer characteristics. By the fifth year of reform, Newark saw statistically significant gains in English and no significant change in math achievement growth. Perhaps due to the disruptive nature of the reforms, growth declined initially before rebounding in recent years. Aided by the closure of low value-added schools, much of the improvement was due to shifting enrollment from lower- to higher-growth district and charter schools. Shifting enrollment accounted for 62 percent of the improvement in English. In math, such shifts offset what would have been a decline in achievement growth.
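A value-added model of the kind described can be sketched as follows (a hedged illustration; the notation and exact covariates are assumptions, not the paper's published specification):

```latex
A_{ist} = \beta A_{i,t-1} + \gamma' X_{it} + \delta' \bar{X}_{-i,st} + \mu_{s} + \varepsilon_{ist}
```

Here $A_{ist}$ is the achievement of student $i$ in school $s$ in year $t$, $A_{i,t-1}$ is prior achievement, $X_{it}$ holds demographic controls, $\bar{X}_{-i,st}$ holds peer (leave-one-out classmate) characteristics, and $\mu_{s}$ is the school growth effect whose change over time the evaluation tracks.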
Researchers have identified many characteristics of teachers and teaching that contribute to student outcomes. However, most studies investigate only a small number of these characteristics, likely underestimating the overall contribution. In this paper, we use a set of 28 teacher-level predictors drawn from multiple research traditions to explain teacher-level variation in student outcomes. These predictors collectively explain 28% of teacher-level variability in state standardized math test scores and 40% in a predictor-aligned math test. In addition, each individual predictor explains only a small, relatively unique portion of the total teacher-level variability. The first finding highlights the importance of choosing predictors and outcomes that are well aligned, and the second suggests that the phenomena underlying teacher effects are multidimensional.
The purpose of this study is to investigate three aspects of construct validity for the Mathematical Quality of Instruction classroom observation instrument: (1) the dimensionality of scores, (2) the generalizability of these scores across districts, and (3) the predictive validity of these scores in terms of student achievement.
As many states are slated to soon use scores derived from classroom observation instruments in high-stakes decisions, developers must cultivate methods for improving the functioning of these instruments. We show how multidimensional, multilevel item response theory models can yield information critical for improving the performance of observational instruments.
Education agencies are evaluating teachers using student achievement data. However, very little is known about the comparability of test-based or "value-added" metrics across districts and the extent to which they capture variability in classroom practices. Drawing on data from four urban districts, we find that teachers are categorized differently when compared within versus across districts. In addition, analyses of scores from two observation instruments, as well as qualitative viewing of lesson videos, identify stark differences in instructional practices across districts among teachers who receive similar within-district value-added rankings. Exploratory analyses suggest that these patterns are not explained by observable background characteristics of teachers and that factors beyond labor market sorting likely play a key role.
Over the past two decades, education underwent a “big data” revolution as states began tracking individual student performance, and interim assessments and educational software allowed for greater granularity of data on students, teachers, and schools.
Despite this plethora of new data, considerable gaps remain in data on early childhood education, school spending, student programs and interventions, and postsecondary outcomes.
Dissatisfaction with education data will never fully disappear due to technical gaps between what policymakers and researchers would like to measure and what can be measured, as well as normative disagreements about what data should be collected.
Policymakers should focus on closing the gaps they can while also recognizing the technical and normative constraints on educational measurement.
Given the major disruptions to students’ daily lives as well as the education field more generally caused by the COVID-19 pandemic, NCRERN was interested in learning how its partner districts navigated mandatory school closures and the shift to online learning, as well as identifying ways that NCRERN could support the short- and long-term needs of rural educators. Throughout April 2020, NCRERN staff conducted semistructured phone interviews with district officials and other leaders from 40 out of its 49 partner rural districts in Ohio and New York. The majority of interviews took place when schools were 3–5 weeks into shutdown. Notes from each interview were coded by two graduate research assistants to identify major themes that emerged from the conversations. Because interviews were semistructured, not all districts answered each question; as a result, counts should be interpreted with caution.
Like many other elements of the American economy, higher education is working to realize the potential of sophisticated data analytics to inform and transform how it operates. In August 2019, the Association for Institutional Research (AIR), EDUCAUSE (the association of campus information technology professionals), and the National Association of College and University Business Officers (NACUBO) released a joint statement with the provocative title “Analytics can save higher education. Really.” Its purpose was to inspire a sense of urgency and provide direction for higher education leaders to harness data as a strategic organizational asset. The statement features the following rationale for investment in data analytics:
“We strongly believe that using data to better understand our students and our own operations paves the way to developing new, innovative approaches for improved student recruiting, better student outcomes, greater institutional efficiency and cost-containment, and much more.”
However, progress has been uneven, with some state higher education agencies, university and college systems, and individual institutions leading the way while many others struggle to adapt. Why?
The Strategic Data Project (SDP) at the Center for Education Policy Research at Harvard University has a ten-year track record of developing data capacity in state and local PK-12 agencies and organizations. To understand why some colleges and university systems are excelling in using data while others have yet to fully realize the potential of their data to inform strategic decisions that transform student success in school and the workforce, SDP interviewed 40 leaders and analysts at 29 institutions of higher education and postsecondary organizations about their data needs.
Our key finding is that the missing link is not in the technical infrastructure but in human capacity. If higher education is to take advantage of data analytics to improve student outcomes and increase organizational effectiveness, it will have to find better ways to attract, train, and retain strategic data professionals who can inform policy and practice.
Teacher evaluation reform has been among the most controversial education reforms in recent years. It is also one of the costliest in terms of the time teachers and principals must spend on classroom observations. We conducted a randomized field trial at four sites to evaluate whether substituting teacher-collected videos for in-person observations could improve the value of teacher observations for teachers, administrators, or students. Relative to teachers in the control group who participated in standard in-person observations, teachers in the video-based treatment group reported that post-observation meetings were more “supportive” and that they were more able to identify a specific practice they changed afterward. Treatment principals were able to shift their observation work to noninstructional times. The program also substantially increased teacher retention. Nevertheless, the intervention did not improve students’ academic achievement or self-reported classroom experiences, either in the year of the intervention or for the next cohort of students. Following from the literature on observation and feedback cycles in low-stakes settings, we hypothesize that to improve student outcomes, schools may need to pair video feedback with more specific supports for desired changes in practice.
Aided by $200 million in private philanthropy, city and state leaders launched a major school reform effort in Newark, New Jersey, starting in the 2011–2012 school year. In a coinciding National Bureau of Economic Research (NBER) working paper, we assessed the impact of those reforms on student achievement growth, comparing students in Newark Public Schools (NPS) district and charter schools to students with similar prior achievement, similar demographics, and similar peers elsewhere in New Jersey. This report includes key findings.
The project team is still awaiting student test data to complete the evaluation, but this brief provides a short update on survey results. Students of MQI-coached teachers reported that their teachers asked more substantive questions and required more use of mathematical vocabulary than did students of control teachers. Students in MQI-coached classrooms also reported more student talk in class. Teachers who received MQI Coaching tended to find their professional development significantly more useful than control teachers did, and were also more likely to report that their mathematics instruction improved over the course of the year.
Against the backdrop of a contentious ballot question, charter schools in Massachusetts have faced scrutiny across multiple dimensions. This event brings together several of the preeminent researchers on the topic to share their findings, followed by a period of directed questions, and audience Q&A.
Achievement Network (ANet) was founded in 2005 as a school-level intervention to support the use of academic content standards and assessments to improve teaching and learning. Initially developed within the Boston charter school sector, it has expanded to serve over 500 schools in nine geographic networks across the United States. The program is based on the belief that if teachers are provided with timely data on student performance from interim assessments tied to state standards, if school leaders provide support and create structures that help them use that data to identify student weaknesses, and if teachers have knowledge of how to improve the performance of students who are falling behind, then they will become more effective at identifying and addressing gaps in student learning. This will, in turn, improve student performance, particularly for high-need students.
In 2010, ANet received a development grant from the U.S. Department of Education’s Investing in Innovation (i3) Program. The grant funded both the expansion of the program to serve up to 60 additional schools in five school districts, as well as an external evaluation of the expansion. The Center for Education Policy Research (CEPR) at Harvard University partnered with ANet to design a matched-pair, school-randomized evaluation of their program’s impact on educator practice and student achievement in schools participating in its i3-funded expansion.
With the debate over the federal role in education settled by the passage of the Every Student Succeeds Act (ESSA), it is time to refocus attention on how to help states move forward and succeed using the Common Core State Standards (CCSS). In this Askwith Forum, Professor Thomas Kane will share findings about CCSS implementation strategies from the Center for Education Policy Research at Harvard University. A panel of educators will follow, sharing their experiences, pain points, and successes with the CCSS over this past year.