Bear Left!

Willful Disregard
Tim Francis-Wright

In his syndicated column last week, George Will tried to link single-parent families to the decline of public schools in America. His evidence for this link was flimsy. His methods were egregiously sloppy. Even the originality of his column was suspect. But worst of all, he missed the opportunity for a worthwhile column on the best use of education spending. I don't care if he is America's most famous political pundit. George Will needs to do his homework quite a bit better.

Will has been down this path before. In a column in September 1993, he claimed that higher spending on education led to lower, not higher, average SAT scores. His evidence then included the revelation that four of the states with the lowest spending per pupil were among the 10 states with the highest SAT scores. More recently, he wrote a column in early 2001 that was essentially a first draft of his column of last week. That earlier column, however, had even fewer facts to bolster his arguments against education spending.

The basic flaw in Will's arguments in both 1993 and 2002 was his use of state-level SAT data to compare states against one another. The College Board web site has at least three pages that warn reporters and pundits about using state-level data. One page entitled "Cautions on the use of aggregate SAT scores" warns reporters that "[u]sing these scores in aggregate form as a single measure to rank or rate teachers, educational institutions, districts, or states is invalid because it does not include all students. In being incomplete, this use is inherently unfair." Another page entitled "A word about comparing states and schools" has a more detailed warning. It reads, in part, "[t]he SAT is a strong indicator of trends in the college-bound population, but it should never be used alone for such comparisons because demographics and other nonschool factors can have a strong effect on scores. If ranked, schools and states that encourage students to apply to college may be penalized because scores tend to decline with a rise in percentage of test takers."

Finally, the site provides an informative chart with the average SAT scores by state for 2001, 2000, 1996, and 1991. At the top of the page, the College Board stresses that it "strongly discourages the comparison or ranking of states on the basis of SAT scores alone." On that same chart are the participation rates for the test. Not surprisingly, the top 20 states in terms of average scores are the 20 states with the lowest participation. George Will, in a recurring role as reporter manqué, ignored these warnings both in 1993 and 2002.
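For readers who want to check that relationship themselves, the test takes only a few lines of code. The sketch below is mine, not the College Board's: it assumes the chart has been transcribed by hand into a CSV file, and the file name and column names are placeholders.

    # Does the set of 20 highest-scoring states match the set of 20
    # lowest-participation states? Assumes the College Board chart has been
    # transcribed into sat_by_state_2001.csv with columns "state", "score"
    # (average combined SAT), and "participation" (percent of seniors tested).
    import pandas as pd

    chart = pd.read_csv("sat_by_state_2001.csv")
    top_scorers = set(chart.nlargest(20, "score")["state"])
    least_participation = set(chart.nsmallest(20, "participation")["state"])
    print(top_scorers == least_participation)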

Many states use the ACT, an alternative to the SAT, for admission to their state universities. SAT scores for those states show massive selection biases, because the students who take the SAT are those with the best chances for admission to selective colleges. Students with narrower college prospects often take the ACT instead. While Will cited Daniel Patrick Moynihan's quip that proximity to the Canadian border is the best predictor of SAT performance, he ignored a very real reason why state-level SAT scores come out the way they do. For example, does he really believe that Mississippi's rank (17th) among states reflects anything brilliant about the Mississippi educational system?
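The mechanics of that selection effect are easy to see in a toy simulation, sketched below. It uses no College Board data at all: every simulated "state" draws its students from the same ability distribution, and the only thing that varies is the fraction of students who sit for the exam, under the crude assumption that the test-takers are the students with the best college prospects.

    # Toy simulation of self-selection -- not College Board data. One cohort of
    # simulated students; only the participation rate differs between runs.
    import numpy as np

    rng = np.random.default_rng(0)
    ability = rng.normal(500, 100, size=100_000)  # simulated scores for a full cohort

    for participation in (0.05, 0.25, 0.50, 0.75, 1.00):
        # Crude assumption: the test-takers are the top `participation` share of
        # the cohort, i.e., the students likeliest to apply to selective colleges.
        cutoff = np.quantile(ability, 1 - participation)
        takers = ability[ability >= cutoff]
        print(f"participation {participation:4.0%}: average score {takers.mean():.0f}")

The average falls steadily as participation rises, even though nothing about school quality changes from one run to the next.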

Will dissembled when he equated socioeconomic status with single parenthood. He first quoted a 1966 government study that found that the socioeconomic background of students was the determining factor in how they did in school. He concluded from this finding that "[t]he crucial predictor of a school's performance is the quality of the children's families." In reality, researchers use socioeconomic background to mean whether students and their neighbors live in households beneath the poverty line. And the College Board's national data lend credence to the notion that students from more affluent families do better on the SAT. See this chart for the monotonic relationship between income and SAT scores.

Will's 1993 column was such a travesty of statistical analysis that it became the subject of an article on how not to analyze SAT data. An article in the Journal of Statistics Education shows that once the participation rate is controlled for, spending more on schools is associated with higher, not lower, scores. (Accompanying the article are both the raw data and some additional variables, so readers can try their own hands at the statistical models.)
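Readers who download that data set can reproduce the comparison with a pair of regressions. The following is a minimal sketch, assuming the data have been saved locally as a CSV; the file name and column names here are placeholders for whatever the journal actually supplies.

    # State-level SAT data (file and column names are illustrative): regress the
    # average score on spending alone, then add the participation rate as a control.
    import pandas as pd
    import statsmodels.formula.api as smf

    states = pd.read_csv("sat_data.csv")  # one row per state

    naive = smf.ols("sat_total ~ spending", data=states).fit()
    adjusted = smf.ols("sat_total ~ spending + participation", data=states).fit()

    # The article's point: the coefficient on spending is negative in the naive
    # model but positive once participation is held constant.
    print(naive.params["spending"], adjusted.params["spending"])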

In his most recent column, Will can no longer find that the states with the highest SAT scores have the lowest spending on education. He shows the top five and bottom five states in terms of SAT scores, along with their ranks in per-pupil spending, and does not even try to analyze the relationship. No wonder: one of the top ten spenders and one of the bottom ten spenders are in the top five by SAT scores, and only one of the top ten spenders is in the bottom five by SAT scores. Instead, Will compares the top state (North Dakota) and the 49th state (the District of Columbia). Because North Dakota had the highest proportion of two-parent families in 1992 and the District of Columbia the lowest, he produces a new Theory of Education based on one of the sparsest samples ever: two data points. According to Will, traditional nuclear families are the only reason for high SAT scores.

Will did not even come out with a truly original column last week. In 2001, he wrote a remarkably similar column in which he also claimed that there was no link between spending and educational quality. That column, however, refrained from providing any current data on educational achievement. To "prove" his thesis then, Will merely claimed that Catholic schools were superior to public schools. This argument was apparently so compelling that Will left it out of last week's column.

Will missed a golden opportunity for an illuminating column on spending on education. The District of Columbia, if it were a state, would be the only purely urban state. The problems that its schools face are similar to the problems that many urban schools face: endemic poverty among the students, private schools that attract the best students away from the public schools, crumbling infrastructure, and large student-teacher ratios.

The links between educational performance and family income are striking. Links between educational spending and educational performance show up in education journals, but all sorts of factors confound them. Where the cost of living is higher, teachers' salaries are higher. Where special education requirements are greater, schools spend more money on students for whom standard educational norms are moot. Districts with older school buildings necessarily spend more on maintenance than do districts with newer, more efficient buildings. Finally, affluent school districts simply have more money to spend. The truly unanswered question for educators is which dollars spent on public education do the most good.

In a perfect world, every school would have the resources of Trinity, Oxford, or Princeton (all of which George Will attended), or Phillips Academy and Yale (attended not only by both Presidents Bush but also by this author). None of these schools faces serious accusations of merely throwing money at educational problems. All of them have well-paid and well-educated faculty, small student-faculty ratios, and the ability to select their students. None has the mandate of public schools to educate all comers. What makes money spent at Phillips Academy or Princeton so effective and money spent on the Washington schools so ineffective? It probably has a lot to do with family income. It might have something to do with single-parent families. But distorting SAT data to claim that the nuclear family is the sine qua non of education is ignorant at best and mendacious at worst.



© 2002 Bear Left!