The insights gained from teacher value-added reports have the potential to benefit schools, students, and communities. However, because these reports are generated from complex statistical methods that rely on inaccurate or incomplete data and have wide margins of error, more responsible use of these reports is needed to reap their benefits—and minimize their risks. My own story is just one example of how this process can go terribly wrong and, in my view, provides a good entry point for reflection on ways to better align our practice of using these reports with the broader purpose of ensuring an excellent education for all students.
For the past three years, I have worked as a sixth- and seventh-grade math teacher in Brooklyn, N.Y. I have had two value-added scores published on the New York Times SchoolBook website, which received the scores from the New York City Department of Education through a Freedom of Information Act request. One of these reports omitted all of the special education students I taught in an inclusion classroom. The other included forty-eight students I never taught. The New York City Department of Education was informed of these mistakes but refused to generate corrected reports and knowingly released inaccurate data to the media. Unfortunately, I know there are many teachers with similar stories whose professional reputations may have been unduly harmed by the publication of this data.
This negative experience prompted me to think deeply about the potential negative consequences of publishing this data—on teachers and on the school system as a whole. I worry about whether I will be able to get a teaching job with this incorrect low-performing label attached to my name. I worry that teachers will avoid working in schools that are under-resourced and under-supported because this may limit their ability to receive high value-added scores. I worry that publicly reporting teachers’ effectiveness will be one more reason among many why talented young people will avoid entering the teaching profession, or will leave just as they are becoming effective teachers.
As someone who is passionate about improving education, I believe we need to improve the quality of value-added reports. We should continue to support research and development so that the statistical models that are used are as reliable and accurate as possible. We should ensure that our data systems have the capacity to store and sort the information that is needed for these reports to be generated. We must allow teachers a chance to identify mistakes in the reports, and we must fix the errors before reports are used.
But we also must consider the ways data can be used responsibly to strengthen the school system. First and foremost, value-added scores must be used in conjunction with other indicators of a teacher’s performance, such as observations or student portfolios. One test, given on one day, should not alone determine a teacher’s or student’s ability level. A comprehensive judgment about a teacher’s ability (informed in part by value-added scores) could be used to improve educational outcomes in the following ways: identifying weak teachers and providing them with targeted professional development, identifying highly effective teachers to serve as mentors for novice or struggling teachers, placing students into classes in ways that ensure they do not have a less effective teacher for two years in a row, and relocating effective teachers to schools that do not have enough strong teachers.
All of these data-driven decisions can positively impact our schools, minimize the negative effects that individual teachers experience from flawed value-added scores, and do not require the public release of teachers’ reports. As my story highlights, value-added data holds the potential both to enhance student learning and to harm the very individuals and schools we are trying to improve.
This blog post continues the conversation from the Harvard Educational Review special symposium “By What Measure?: Mapping and Expanding the Teacher Effectiveness Debate.”