The MQI was developed by Heather Hill and colleagues at the University of Michigan and Harvard University to reliably measure several dimensions of the work teachers do with students around mathematical content. The MQI is based on a theory of instruction, existing literature on effective instruction in mathematics, and an analysis of the teaching of hundreds of diverse U.S. teachers.
The MQI is based on the perspective that the mathematical work that occurs in classrooms is distinct from classroom climate, pedagogical style, or the deployment of generic instructional strategies. Accordingly, the MQI provides separate teacher scores for different dimensions of the mathematical work teachers do; for instance, the presence of mathematical explanations and practices is scored separately from student participation in those explanations and practices. This makes the MQI unique among instruments that measure mathematics instruction, many of which prioritize novel practices over a more balanced view of the numerous elements that comprise a mathematics lesson.
The MQI was developed and piloted between 2003 and 2012. During that time, its authors examined the relationships between teachers’ mathematical knowledge for teaching, MQI scores, and student outcomes, often finding significant and sometimes substantial relationships. The MQI has also been subject to several studies that examine the best conditions for arriving at accurate and generalizable scores for specific teachers.
For more information:
Hill, H. C., Kapitula, L., & Umland, K. (2011). A validity argument approach to evaluating teacher value-added scores. American Educational Research Journal, 48(3), 794–831.
Hill, H. C., Umland, K., Litke, E., & Kapitula, L. R. (2012). Teacher quality and quality teaching: Examining the relationship of a teacher assessment to practice. American Journal of Education, 118(4), 489–519.
Hill, H. C., Blunk, M., Charalambous, C., Lewis, J., Phelps, G. C., Sleep, L., & Ball, D. L. (2008). Mathematical Knowledge for Teaching and the Mathematical Quality of Instruction: An exploratory study. Cognition and Instruction, 26, 430–511.
Hill, H. C., Charalambous, C. Y., Blazar, D., McGinn, D., Kraft, M. A., Beisiegel, M., Humez, A., Litke, E., & Lynch, K. (2012). Validating arguments for observational instruments: Attending to multiple sources of variation. Educational Assessment, 17(2–3), 88–106.
Hill, H. C., Charalambous, C. Y., & Kraft, M. A. (2012). When rater reliability is not enough: Teacher observation systems and a case for the generalizability study. Educational Researcher, 41(2), 56–64.
Kelcey, B., McGinn, D., & Hill, H. (2014). Approximate measurement invariance in cross-classified rater-mediated assessments. Frontiers in Psychology, 5, 1469.
Blazar, D., Braslow, D., Charalambous, C. Y., & Hill, H. C. (2017). Attending to general and content-specific dimensions of teaching: Exploring factors across two observational instruments. Educational Assessment, 22(2), 71–94.