
What determines a good school? Usually the quantitative data — most frequently standardized test scores and graduation rates — carry the most weight in answering that question. On the qualitative side, there's student and parent satisfaction and teacher morale, among other factors. The key, as in all professions, is determining metrics — figuring out how all of these different factors should be measured and calibrated. This past month, however, brought several stumbles in education's attempts at that task.

In New York last week, much buzz — and fallout — ensued over the release of "report cards" for the city's schools. Each school received a letter grade (A-F) based upon student performance (weighted 30 percent), student improvement (weighted 55 percent), and school environment (weighted 15 percent). Because student improvement accounted for a majority of the grade, some schools with high test scores but scant gains from one year to the next were deemed to be failing. Conversely, some schools whose test scores continued to flag but had made significant gains were given A's.
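The component weights above explain the surprising results: because improvement counts for more than half the grade, a school with stellar scores but flat gains can score below a weaker but improving one. A minimal sketch of that arithmetic — the weights come from the article, but the 0–100 component scales, the sample scores, and any letter-grade cutoffs are illustrative assumptions, not the city's actual formula:

```python
# Weights for NYC's school "report card" composite, per the article.
WEIGHTS = {
    "performance": 0.30,   # student performance
    "improvement": 0.55,   # year-over-year gains (the majority weight)
    "environment": 0.15,   # school environment
}

def composite(scores):
    """Weighted sum of component scores, each assumed to be on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A high-scoring but stagnant school vs. a lower-scoring but improving one
# (hypothetical numbers, chosen to illustrate the weighting effect).
stagnant = composite({"performance": 95, "improvement": 30, "environment": 80})
improving = composite({"performance": 55, "improvement": 90, "environment": 70})
print(stagnant, improving)  # 57.0 vs. 76.5 -- the improving school ranks higher
```

The stagnant school's strong test scores (95) are swamped by its weak improvement number, which is exactly the pattern that left some high-scoring schools with failing grades.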

This methodology undoubtedly confused many parents, who were accustomed to hearing their children's schools labeled "outstanding" or "failing," only to see grades that suggested the opposite. The New York Times covered one particularly interesting aspect of this confusion: parents who had moved to certain neighborhoods so that their children could attend supposedly A-list schools, only to find out that, according to the "report cards," they weren't. While in most cases the difference was only a single letter (a B instead of an A), the frustration remained: how could a school be exceptional one day and merely above par, or even plainly average, the next?

Similar confusion and frustration arose several weeks ago, at the end of October, when research from Johns Hopkins labeled approximately 1,700 high schools as "dropout factories." Local newspapers added fuel to the fire by listing area schools so designated — providing instant embarrassment, sometimes without publishing accompanying statistics. The media-ready term "dropout factory" surely provoked instant curiosity among readers, myself included, eager to see whether familiar schools had made the list.

My hometown, Norfolk, Virginia, had the ignominy of having all five of its high schools included on the list of "dropout factories." What a shame, considering that just two years ago, Norfolk's school district won the Broad Prize in Urban Education, an annual award that recognizes school districts that "demonstrate the greatest overall performance and improvement in student achievement while reducing achievement gaps among poor and minority students." (By the way, New York City won the prize this year.) Of course, these two distinctions stem from different metrics of school quality.

To the public, however, they're usually presented as gross oversimplifications. Winning a prestigious education prize = good school district. Having all your high schools labeled "dropout factories" = bad school district.

Other schools and districts faced discrepancies arising from the metric itself. The Johns Hopkins research used only raw enrollment numbers, taking no account of relocation or reassignment of students, both of which are considered in many states' data. As a result, the "dropout factory" news forced schools to contend with conflicting data and gave parents conflicting information about their children's schools. As with the New York City report cards, it seemed as if many schools were considered passable one day and unsatisfactory the next.

While such discrepancies must be confusing to educators, they're probably even more so to parents and students. If the Johns Hopkins research had been conducted while I was still in high school, my high school might well have been on the "dropout factory" list then. Yet, I not only graduated but received enough preparation, including AP and IB classes, to do well at an Ivy League school — an advantage that many schools don't offer. Again, these observations stem from two different metrics. But "dropout factory" labels and "F" grades for schools constantly pull spectators from one metric to another without offering adequate analysis or integration. As a result, it's extremely difficult to see the full picture.

Dispelling this confusion requires, of course, consistency of evaluation — a colossal challenge in our educational system. But while teachers, education consultants, and politicians work to iron out the kinks in defining and assigning value to different metrics in education, they should strive to present as clear and complete a picture as possible, minus the flip-flopping extremes. Continual inconsistency in data and analysis, coupled with media hype, does a grave disservice to the parents who simply want the best for their children and the students who simply want to be prepared for adulthood.