Publication of Teacher Data Distracts from Real Evaluation
By UCLA IDEA staff
Themes in the News for the week of Aug. 16-20, 2010
The Los Angeles Times created an uproar in the education community with the publication of a story—the first in a series—that analyzed teacher effectiveness using a value-added model. Later this month, the Times plans to follow up by releasing information about 6,000 third- through fifth-grade teachers, ranking them on a scale from least to most effective. Reactions of shock and deep concern are coming from many corners of the education community, even those rarely in agreement.
Diane Ravitch, who opposes the use of standardized tests as the sole tool for evaluation, called the public outing “disgraceful.” In a blog post, Rick Hess of the American Enterprise Institute think tank, himself a proponent of using data in teacher evaluations, said he too had serious problems with the plan (Education Week).
Value-added analysis measures the movement, up or down, in a student’s test scores from one year to the next. According to the Times, the higher the jump, the more effective the teacher.
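In its simplest form, the idea can be reduced to arithmetic: subtract last year’s score from this year’s, then average the gains by teacher. The sketch below illustrates that simplified gain-score logic only; the Times’ actual model is a far more elaborate statistical regression, and the teacher names and scores here are invented.

```python
# Simplified illustration of the intuition behind a value-added ranking:
# average each teacher's students' year-over-year score gains, then sort.
# This is NOT the Times' model, which controls for many other factors;
# all names and numbers below are hypothetical.

from statistics import mean

# (teacher, prior-year score, current-year score) for each student
records = [
    ("Teacher A", 620, 650),
    ("Teacher A", 580, 600),
    ("Teacher B", 640, 635),
    ("Teacher B", 610, 620),
]

# Collect each student's gain under the teacher who taught them
gains_by_teacher = {}
for teacher, prior, current in records:
    gains_by_teacher.setdefault(teacher, []).append(current - prior)

# Rank teachers by average student gain, highest ("most effective") first
ranking = sorted(
    ((teacher, mean(gains)) for teacher, gains in gains_by_teacher.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranking)
```

Even this toy version makes the critics’ point visible: the number attached to each teacher depends entirely on which students, and which scores, happen to be in the records.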
Most expert reports on this method, including one by the National Academy of Sciences, point out that the value-added metric alone is insufficient for evaluating teacher effectiveness. In agreement is Los Angeles Unified School District Superintendent Ramon Cortines, who asked, “Would a person be diagnosed with diabetes solely on the basis of a high blood pressure reading?” (Color Lines)
Barnett Berry, president of the Center for Teaching Quality, offered another medical analogy. It is “the equivalent of a newspaper indiscriminately listing the names of doctors, in rank, based on mortality rates, irrespective of the type of medicine they practice or the context in which they practice” (Christian Science Monitor).
A Sacramento high school teacher distinguished between data-driven and data-informed. “In schools that are data-informed, test results are just one more piece of information that can be helpful in determining future directions” (Washington Post).
And the data the Times intends to publish are limited in several ways. They purport to identify the value added by particular teachers, but do not take into account student mobility, absenteeism, the role of tutors or team teachers, summer school programs, or after-school programs (Christian Science Monitor, Washington Post).
Parents are being invited to act on powerful conclusions they draw from the teacher data, but it is difficult to find positive steps they can take. In the short term, parents might compete among themselves to win their child a spot in the “most effective” teacher’s classroom. School morale, already low after a season of pink slips, could receive another blow. Teachers might do their best to avoid teaching grades 3, 4 and 5. Or they might turn inward and narrow their curriculum to teach solely to the test.
Public disclosure and a culture of blame could create a chilling effect on teacher collaboration. It could also dry up the pool of people willing to enter the teaching profession.
People love rankings, sorting, and surveys; it’s hard to resist the appeal of the “10 Best” or “10 Worst.” But the facile display of numbers and rankings can be misleading for a public that is not well acquainted with nuanced statistical models or with critiques of how such data should be used. That is why the National Academy of Sciences worries about the “considerable limitations to the transparency” of value-added analysis.
The Times, by focusing on a narrow and underdeveloped measure of teacher effectiveness, distracts attention from the real reform need: a comprehensive teacher evaluation system that provides ample support to improve student learning.