Who’s wonking who?

About a third of the way into Ezra Klein’s new essay “How Politics Makes Us Stupid,” I met a stumbling block. Klein begins his essay by describing a 2013 study that tested whether political affiliation could compromise people’s ability to solve a simple statistical problem. In an experiment, researchers gave some subjects a stats problem about the efficacy of a skin-rash lotion, and others a structurally identical problem about the efficacy of a gun-control law. Here’s Klein’s summary of the results:

Being better at math didn’t just fail to help partisans converge on the right answer. It actually drove them further apart. Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology. The smarter the person is, the dumber politics can make them.

Consider how utterly insane that is: being better at math made partisans less likely to solve the problem correctly when solving the problem correctly meant betraying their political instincts. People weren’t reasoning to get the right answer; they were reasoning to get the answer that they wanted to be right.

Something’s not quite right with Klein’s inferences here, I’m pretty sure. Here’s a link to the research paper that Klein is describing: “Motivated Numeracy and Enlightened Self-Government” by Dan Kahan, Ellen Peters, Erica Dawson, and Paul Slovic. And here’s how the original authors phrase the results that have caught Klein’s eye:

On average, the high Numeracy partisan whose political outlooks were affirmed by the data, properly interpreted, was 45 percentage points more likely (± 14, LC = 0.95) to identify the conclusion actually supported by the gun-ban experiment than was the high Numeracy partisan whose political outlooks were affirmed by selecting the incorrect response. The average difference in the case of low Numeracy partisans was 25 percentage points (± 10)—a difference of 20 percentage points (± 16).

Klein has reported the numbers accurately, but his interpretation of them is fallacious. As you can see by comparing Kahan et al.'s words with Klein's, Klein is correct when he writes that "Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology." But Klein is in error when he adds, "The smarter the person is, the dumber politics can make them." If higher-numeracy subjects are 45 percentage points more likely to identify the correct answer when they find it congenial, and lower-numeracy subjects are only 25 percentage points more likely to do the same under the same conditions, then math skills improve the ability to solve the problem under those conditions by 20 percentage points, as Kahan, Peters, Dawson, and Slovic note. Smarter people are in fact smarter (the trouble is that they only bother to use their smarts to confirm their political bias—more on that in a moment).

Klein also writes, "Being better at math made partisans less likely to solve the problem correctly when solving the problem correctly meant betraying their political instincts." That's not an accurate report of Kahan et al.'s results. In their study, being better at math did make partisans a tiny bit more likely to solve the stats problem correctly even when the correct answer contradicted their partisan druthers. (For the evidence, see the dotted blue and solid red curves in the lower graph of figure 6 in Kahan et al.'s paper; the drift is upward in both cases, though only modestly so. That is, when solving a puzzle that declares that gun control increases crime, a liberal's odds go up very slightly as his math skills improve, and so do a conservative's odds when solving a puzzle that declares that gun control lowers crime.) Kahan et al. didn't discover that math hurt problem solving. They discovered that math skills helped disproportionately more when the correct answer confirmed the subject's political biases.

Klein writes that “People weren’t reasoning to get the right answer; they were reasoning to get the answer that they wanted to be right.” In fact, the original researchers’ explanation was a bit more subtle. They noted that an easy wrong answer tempts anyone who first glances at the type of statistics puzzle they chose, and they suggested that when the easy wrong answer confirmed a partisan’s bias, he was more likely to fall for it. Partisans resorted to brain-taxing math skills only when the easy wrong answer contradicted what they hoped to hear.

Kahan et al. did discover that math skills increased polarization. Not polarization in political bias, though: within the experiment’s sample of subjects, polarity in political bias was a given. The polarization that worsened was between likelihood of solving the problem correctly when it confirmed biases and likelihood of solving it correctly when it contradicted biases. Intriguingly, that polarization was not only higher when math skills were higher. It was also higher among conservatives than among liberals. (The evidence is in the lower two graphs in figure 7 of Kahan et al.’s paper. In both graphs, the red bumps are much further from one another than the blue bumps are, which suggests that conservatives’ ability to solve the problem diverges more according to bias than liberals’ ability does.)
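The arithmetic behind this distinction is easy to lose in prose, so here is a small sketch with made-up accuracy figures (my own illustrative numbers, not Kahan et al.'s actual data) showing how accuracy can rise with numeracy in both conditions while the congenial-versus-uncongenial gap still widens:

```python
# Hypothetical accuracy rates (NOT Kahan et al.'s data) illustrating how
# math skill can raise accuracy on BOTH kinds of problems while still
# widening the congenial-vs-uncongenial gap.
accuracy = {
    # (numeracy, answer fits ideology?): proportion solving correctly
    ("low",  True):  0.40,
    ("low",  False): 0.15,   # low-numeracy gap: 25 points
    ("high", True):  0.65,
    ("high", False): 0.20,   # high-numeracy gap: 45 points
}

for skill in ("low", "high"):
    gap = accuracy[(skill, True)] - accuracy[(skill, False)]
    print(f"{skill} numeracy: congenial advantage = {gap:.0%}")

# High-numeracy subjects beat low-numeracy subjects in BOTH conditions
# (0.65 > 0.40 and 0.20 > 0.15): math skill helps; it just helps far
# more when the right answer is the welcome one.
```

On these toy numbers the congenial "advantage" grows from 25 to 45 percentage points as numeracy rises, even though the high-numeracy group is never worse at the problem. That is the polarization the study measured: a gap between conditions, not a decline in ability.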

Child’s play

In “Playing for All Kinds of Possibilities,” a very fun science article in yesterday’s New York Times, reporter David Dobbs describes how four-year-olds easily beat grown-ups at Blickets, a game invented by child psychologists Alison Gopnik and David Sobel. There seem to have been many versions of Blickets over the years, each designed to ferret out a different nuance of children’s understanding of the world, but in his article Dobbs is describing two that he calls “or” and “and”:

The “or” version is easier: When a blicket is placed atop the machine, it will light the machine up whether placed there by itself or with other pieces. It is either a blicket or it isn’t; it doesn’t depend on the presence of any other object.

In the “and” trial, however, a blicket reveals its blicketness only if both it and another blicket are placed on the machine.

Adults are usually stumped by the “and” version, but it gives children no trouble. Researchers believe that children succeed because they aren’t constrained by “prior biases.” Children don’t have such biases because they simply don’t know much about the world yet, and in their effort to understand, they’re willing to try out all kinds of wild ideas. As they age, they learn that some kinds of hypotheses are less commonly successful than others, and they become less willing to risk belief in these low-probability hypotheses. They grow up to be adults who lose at Blickets. They learn, Dobbs writes, that “‘or’ rules apply far more often in actual life, when a thing’s essence seldom depends on another object’s presence.”
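The two rules Dobbs describes are, at bottom, a pair of boolean functions. Here is a minimal toy model (my own sketch, not Gopnik and Sobel's actual protocol, with hypothetical piece names) of the two versions of the Blicket Detector:

```python
# A toy model of the two Blicket Detector rules (my own sketch, not the
# researchers' code). A "blicket" is just membership in a set.

def lights_up_or(pieces, blickets):
    """'Or' rule: the machine lights up if at least one blicket is on it."""
    return any(p in blickets for p in pieces)

def lights_up_and(pieces, blickets):
    """'And' rule: it takes two or more blickets together to light it up."""
    return sum(p in blickets for p in pieces) >= 2

blickets = {"rectangle", "triangle"}  # hypothetical assignment

print(lights_up_or({"rectangle"}, blickets))               # True: one is enough
print(lights_up_and({"rectangle"}, blickets))              # False: alone, nothing
print(lights_up_and({"rectangle", "triangle"}, blickets))  # True: together
print(lights_up_and({"rectangle", "bridge"}, blickets))    # False: only one blicket
```

Under the "and" rule, no single placement ever reveals a blicket, which is why the hypothesis is so hard to entertain: the evidence only makes sense in combinations.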

This last claim stuck in my head, and this morning I realized why: I’m not sure it’s true, at least not about a very important category of thing, namely, people. Suppose, instead of playing Blickets with a rectangle, a triangle, and a bridge, we play Lovers with Rilke, Lou, and Gumby. And suppose, instead of placing clay tokens on top of a Blicket Detector, we play the game by leaving our three contestants alone in a room in pairs, to see if they happen to get busy. Rilke + Gumby = nothing. Lou + Gumby = also a blank. But Rilke + Lou = sonnets! Even adults are able to understand that these facts reveal that Rilke and Lou are Lovers, and that Gumby isn’t.

In the psychology experiment, the children were instructed that “the ones that are blickets have blicketness inside,” a somewhat confusing thing to say, given that the property of blicketness is completely fictional and doesn’t correspond to shape, color, weight, or any other physical trait. But adults are able to overcome a similar red herring, in the form of a word for the (also perhaps fictional) essence that qualifies a person as a Lover, namely, love.

As near as I can figure it, any mutually defined, nonhierarchical relationship between people operates in the real world by the same logic as the “and” version of Blickets. You can play Brothers, for example, with Henry James, William James, and William Dean Howells. You can play Rivals with Henry James, William Dean Howells, and Saint Francis of Assisi. You can play Friends with Emerson, Thoreau, and Jefferson Davis. (Note that for all these relationships, we have words for the relevant essence: brotherhood, rivalry, and friendship.)

None of this may matter for Gopnik and Sobel’s conclusion, because it doesn’t alter their finding that children are more willing to try out an unlikely hypothesis about clay triangles than adults are. But I’m not convinced that what children learn when they grow up is that “or” rules apply more often in real life than “and” rules do. It may be that they learn merely that “and” rules tend to be limited to human relationships.

Or it may be that they learn that grown-ups aren’t supposed to think of their toys as living creatures with thoughts and emotions . . .