by Douglas N. Harris on March 14, 2012
I love newspapers. I really do. I subscribe to both the New York Times and the Wall Street Journal. But their recent decisions to publish teachers' names along with their "value-added" ratings show the newspapers at their very worst: focusing on what sells papers rather than on the public good. In the process, they may single-handedly bring down what could be one of the more positive developments in K-12 education in recent decades.
Value-added measures attempt to estimate what educators contribute to their students' test scores. Rather than focusing only on end-of-year scores, they take into account prior student achievement, class size, and other factors outside educators' control. Since the early 1990s, states like Tennessee have been experimenting with this approach to measuring the performance of entire schools, and there is little debate anymore that it is better than the point-in-time snapshots that form the basis of No Child Left Behind.
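For readers who want the intuition in concrete terms, the basic calculation can be sketched in a few lines of code. This is a deliberately simplified illustration with made-up data, not any state's actual model; real models typically adjust for class size and other factors as well, and use far more elaborate statistics:

```python
# A minimal, hypothetical sketch of the value-added idea: predict each
# student's end-of-year score from prior achievement, then credit each
# teacher with the average amount by which her students beat (or missed)
# that prediction. All data below are simulated.
import random
from statistics import mean

random.seed(0)

# Simulated data: 200 students split among 10 teachers.
teacher_effect = {t: random.gauss(0, 5) for t in range(10)}  # unobserved
students = []
for _ in range(200):
    t = random.randrange(10)
    prior = random.gauss(500, 50)                 # prior-year score
    score = 0.9 * prior + teacher_effect[t] + random.gauss(0, 10)
    students.append((t, prior, score))

# Simple least-squares fit of end-of-year score on prior score.
priors = [p for _, p, _ in students]
scores = [s for _, _, s in students]
mp, ms = mean(priors), mean(scores)
slope = (sum((p - mp) * (s - ms) for p, s in zip(priors, scores))
         / sum((p - mp) ** 2 for p in priors))
intercept = ms - slope * mp

# A teacher's value-added estimate: her students' mean residual,
# i.e. how far they landed, on average, above or below prediction.
value_added = {}
for t in range(10):
    resids = [s - (intercept + slope * p)
              for tt, p, s in students if tt == t]
    value_added[t] = mean(resids)
```

The point of the residual step is exactly the point of the policy: a teacher whose students started far behind can still rate highly if those students gained more than predicted.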
In the process of studying school value-added measures, researchers (myself included) found something important: average performance is actually fairly similar across schools. The real action is with individual teachers. Even in a school we might deem highly effective, some teachers are below average. Both national teachers' unions, meanwhile, have long complained that teachers receive little useful feedback on how to improve, and others emphasize that important decisions about hiring, tenure, promotion, and compensation largely ignore teacher performance. All these facts point in one direction: as a nation, we really need to improve teacher evaluation and accountability. President Obama's Race to the Top has focused on just that need.
In 2011, I published a book about value-added measures to improve understanding among policymakers and practitioners as they consider and embark on these significant reforms. I could see the potential benefits of using teacher value-added measures to improve teacher evaluation, as well as what could go wrong. Indeed, just as I was finishing the book, the Los Angeles Times decided to put teachers' names and value-added measures on its web site. So I rewrote part of the book to emphasize why this was a terrible idea. That, of course, did not stop the New York City papers from recently doing exactly the same thing.
What's the problem? There are three: (1) value-added measures rely only on student test scores, which are useful but still limited measures of student learning; (2) even if the tests measured everything, value-added measures are not very accurate indicators of teachers' contributions to those scores; and (3) even if the measures were accurate, what good does putting them on a web site do? What company or organization publicizes this kind of information about its employees?
To this last question, some might answer that sports teams make individual players' performance public. That's true, but imagine if the Boston Herald published the fielding percentages of Red Sox players, calculated from an intern's rough guess. On top of that, suppose the newspaper decided not to publish other, more telling statistics such as batting averages and pitchers' earned run averages. It's nonsense, but that about sums up what the newspapers are doing with teacher value-added measures, except that the stakes in the classroom are so much higher than they are on the ball field.
Now, as I travel around the country talking about the book, I hear fear in the voices of schoolteachers, principals, and superintendents. They want to improve teacher evaluation and accountability, but they absolutely, and rightly, do not want these measures in newspapers. The newspapers' argument about the public's "right to know" may well bring down the public's right to have good schools.