Information hygiene

[This post is also available as an issue of my newsletter, Leaflet.]

In the early onset of adulthood, one often samples reckless hedonism—drinking away weekends, maxing out credit cards, counting peanut butter swirled into spaghetti as “dinner,” punctuating relationship conflict with cigarettes—but the obverse of utterly unimpeded freedom is that one is free to die alone in a cancer ward if one really wants to, and at some point, there is usually an accommodation with prudence and fear, and one sets about acquiring boring, sensible habits. Never drink juice or soda, for instance. Just buy baby shampoo, because then you don’t have to find a new brand every six months. One even becomes grateful for habits like brushing one’s teeth that come as it were pre-installed.

Information hygiene is one such regimen. It was probably easier in the era I grew up in. Sources of information then had distinct edges and well-known, widely agreed upon reputations, in part because information was almost always delivered in a physical form. By and large, in those days, the only way to read a news article in, say, the Evening Gazette of Worcester County, Massachusetts, was to read it in the ink-on-paper Evening Gazette. In a pinch you could catch the bus into Worcester and read an old article on microfilm in the library downtown, but in general if you were reading an Evening Gazette story it was because you were holding the Evening Gazette in your hands. And well before that, you knew—either because you grew up knowing or because you had quizzed the neighbors when you moved to town—that the Gazette was ever so slightly more liberal than the Worcester Telegram, the only local alternative, which was the morning paper in the area, and you knew that both papers were pretty reliable about facts and a little stodgy. (Not perfectly reliable, however. When the Gazette ran a candid photo of me one spring day, sitting on a swing in our backyard reading a collection of short stories about vampires, I was shocked to see my name misspelled, our address garbled, and the vampires miscategorized as “homework.”)

Nice people didn’t read the flimsy magazines for sale in supermarket checkout lines. (By the way, these were not the glossy perfect-bound tomes you find in supermarkets today, blandly commemorating World War II, or vegetarian recipes, or a pop star who has through death recently achieved embourgeoisement. These were more ludicrous, meaner in spirit, and much cheaper-looking.) This wasn’t because nice people thoughtfully upheld the values of curation and fact-checking. It was because of class war. It was understood to be a little soiling to be seen even leafing through such magazines. It was understood that Tom Brokaw delivered real news, and that the Evening Magazine TV show that preceded him didn’t (despite that one segment on Chippendales dancers that did have some special news for me in particular, one fateful Thursday). At 5:30pm we knew that what was coming out of the television wasn’t serious, and at 6pm we knew that it was.

Channels of information are not so sharply delimited today. A talk-show host you follow tweets a line from a Washington Post story: you can’t simply say you learned about it on a talk show, or that you learned about it by browsing Twitter, or that you learned about it from reading the Washington Post. It’s all mixed up. And partly as a corollary, the reputations of channels of information are no longer so clearly demarcated, either. Even people like me, who consider the Washington Post to be reliable in matters of fact, have to keep in mind that a line from one of its articles that’s been cherry-picked by a media personality might acquire a slant on Twitter that it didn’t have in its original context—that the line might even have been selected with the intention that it will be misunderstood. Moreover, nowadays there exists a community of readers in which the consensus is that the Washington Post is a duplicitous lackey puppet of some dire neoliberal conspiracy. A third novelty of our environment: the consumption of information is for the most part invisible. Is it déclassé to read TMI Feedzweb? Who cares! No one sees you reading it. And even if they could, the shame and scorn that once enforced information hygiene have been so overthrown that nowadays reading downmarket sleaze probably qualifies you as edgy, in a downtown, post-moral kind of way.

What’s a boring adult to do? As I see it, there are two desiderata here: not to have your time wasted, and not to have your mind poisoned. I immediately, humbly confess that I have let a lot of my time be wasted over the past decade or so of Twitter use. My husband and I have print subscriptions to more than a dozen periodicals, but whole issues of these have been recycled unread into sock fibers and Patagonia jacket liners while I was clicking through to try to figure out why someone I was a little scared of on Twitter was so indignant about the intellectual misprisions of someone else on Twitter whom I kind of liked. I didn’t want to get attacked someday myself, you know. Was it worth it? Sometimes it felt like it was, at the time. I got to be a spectator at the front lines; I got to see the bayonets going in, to hear the flump of the bodies falling into the mud. But sometimes it didn’t feel worth it. Even hot takes that feel urgent while you’re reading them usually evanesce a minute or two later. I’ll never get back all those hours I spent reading about why it was unforgivable/imperative to call out as fascist politicians who up to that point had only gotten as far as openly longing to become fascist. In retrospect, what if I had just read the stories in each week’s New Yorker that looked interesting to me, instead of scrolling slack-jawed until I could tell which ones were being either denounced or overpraised by my disembodied frenemies?

I have a pretty good b.s. detector. While a denizen of Twitter, I prided myself on never having retweeted that picture of the shark swimming down the street during a hurricane, or, for the most part, any of its text equivalents. I don’t think my own mind ever got poisoned, in other words, but I did see minds poisoned. (“Who goes redpill?” is an article I would like to read someday.) The thing is that on Twitter there’s always a hurricane, and a shark is always swimming toward you through its chum-filled waters. Repeatedly batting it on the nose takes effort, and is that how you want to spend your one and only life? I love my friends, but it isn’t by and large for their news judgment that I love them, so why should I let them choose what I read instead of trusting the professionals at the New York Times, the Atlantic, n+1, the New York Review of Books, and so forth? I’m actually pretty happy when I find a site like Four Columns that is willing to send me a small number of smart review-essays on varied topics once a week. I wish Bookforum’s Paper Trail came out as a newsletter, but as a certified internet old, I know how to plug its feed into my RSS reader.

I wish I could say that I logged out of Twitter last week because I finally started listening to all my own arguments against myself on this topic. The truth is, I logged out because of disgust. Musk had recently been carrying water for Putin, so when Musk took possession of the site, I logged out on a wait-and-see basis. I had promised myself that I would quit if he let Trump back on, as he has signaled he will; I can’t face swimming in unmediated sewage again. The end came sooner, as it happened. A few days ago, Musk tweeted (and then deleted) a link to a conspiracy theory about the violent assault on Paul Pelosi that was so nauseating that I couldn’t bear to contribute even my tiny and insignificant content stream to a media company that he owns. I’m logged out indefinitely now. (Not deleting, yet; things are changing too fast.)

A couple of weeks ago, I listened to a podcast discussion about artificial intelligence (AI) between the New York Times reporter Kevin Roose and the podcaster Derek Thompson, who believe we’ll someday look back on the text-generating and art-making AI released this summer as epoch-shifting. Quite possibly! Some of the dystopian side effects that Roose and Thompson foresee may already be with us, though. Roose imagines, for example, that writerless news websites will spring up, full of articles penned by text-generating AI. In fact, the internet is already overrun with sites that pose as trustworthy sources of local news but have ulterior, usually political, motives—one such site was the source of the vile story linked to by Musk—and though these sites are not yet written by AI (as far as I know), they might as well be. AI could hardly be worse than low-rent paraphrases of wire stories, republication of corporate press releases, and rightwing dog-whistles. Roose also wonders how the nature of art will change once machines are able to replicate technical facility in any medium and any imaginable style, but much the same reckoning was forced on art by photography more than a century and a half ago. At the high end of the art market today, mere craft is already of rather little value. Donald Judd structured his whole career as an artist around being hostile to craft, deliberately designing artworks that could be manufactured to specification without any special skill. At the higher levels of the market, art now consists mostly of innovations in the idea of what art is, or the way it is understood. Recruiting AI into that project won’t slow anyone down even for long enough to hiccup.

This morning, Tiffany Hsu reported for the New York Times on fears that manipulated videos and photos are spreading unrecognized on TikTok. Again, to some extent, we’re already there, and we’ve been there for a while. When I watched a recent TikTok of a deepfake Tom Cruise flirting with a person who seemed to be Paris Hilton, it was not at all clear to me that Hilton was real. I googled, and had to resort to an Entertainment Weekly article that explained what I had been looking at. In other words, I determined that Cruise was fake and Hilton real only by means of trusting Entertainment Weekly. This is startling for someone who grew up when fake photos were almost always too clumsy to fool anyone, but it isn’t a situation that exceeds humanity’s epistemological capacities. It’s photography that’s recent, after all; unreliability has been with us forever, and has been accelerating ever since printing presses became widespread. Welcome to the 17th and 18th centuries! How to distinguish truth from fake news was a major concern during the Enlightenment, and the answer philosophers came up with then was not to try to stop the spread of newsprint but to set up laws, institutions, and protocols that would make trust reasonable in a world where anyone was capable of ventriloquizing anyone else thanks to new technology. (Spoiler alert: Copyright was quite useful.) Maybe at the moment you have sharper eyes than I do and can see that Fake Tom Cruise’s head doesn’t attach to his neck at quite the right angle, but in another few iterations or so, AI will defeat even the sharpest human eyes. The only anchor to be found will be in regimens of information hygiene. In such a world, people in positions of authority who spread disinformation knowingly, or even just with reckless disregard for the truth, will have to be sanctioned as untrustworthy—or else we’ll all drown in an AI-generated video swamp. Or rather, they will have to be identified as untrustworthy and stigmatized as such by any community—by any subset of society—that is willing to adopt measures that further the spread of truth. There’s some bad news here: if you’re Diderot, you don’t really even hope that someday everyone in France will want to, much less be able to, distinguish truth from falsehood. All you’re aspiring to is a self-limiting network of fellow philosophes willing to adhere to a sufficiently rigorous information hygiene protocol. Your hope is that truth will be discoverable to the happy few.

It’s always been a mistake to think of news organizations as manufacturers of a product called news, and it’s a mistake, therefore, to imagine that AI might be able to manufacture the product more cheaply. What you are paying for, when you subscribe to a newspaper, is trust. Trust that the newspaper’s reporters will tell the truth about what their sources have said. Trust that they are doing their best to unearth and share all sides of each story. Trust that the editors will not suppress evidence that is unflattering to the rich and powerful, including the newspaper’s owners. Trust that if the newspaper does fuck up, it will publish its errors and leave intact a record of its mistakes. You are paying for a relationship, but it isn’t a personal relationship—when individuals fail in this department, alas, they tend to fail catastrophically—it’s an institutionalized relationship. Musk’s approach to Twitter so far gets basically everything wrong. He’s reckless with the truth, he believes in the myth of individual judgment while having terrible judgment himself, he erases his mistakes instead of going public about his errors, and he seems poised to gut what little mechanism Twitter has had in place for content moderation up to this point, which wasn’t great to begin with. Oh well! Parts of it were fun while it lasted.

Danger, Will Robinson

What will it be like to hear from a mind that was never alive?

Also available as an issue of my newsletter, Leaflet

An image generated by the AI software Midjourney, using as a prompt Melville’s description of the “boggy, soggy, squitchy picture” that Ishmael found in the Spouter-Inn, in an early chapter of “Moby-Dick”

If you wire up an alarm clock to the fuse of a fardel of dynamite—the classic cartoon bomb—you’ll create a dangerous computer, of a sort. It will be a very simple computer, only capable of an algorithm of the form, When it’s 4:30pm, explode. Its lethality will have nothing to do with its intelligence.

If you wire a server center’s worth of AI up to, say, a nuclear bomb, the case is more or less the same. That computer might be a lot smarter—maybe it will be programmed to detonate only after it has improvised a quatrain in the manner of Emily Dickinson, and then illustrated it in the style of Joe Brainard—but it will be dangerous only because you have attached it to something dangerous, not because of its intelligence per se.

Some people are afraid that AI will some day turn on us, and that we need to start planning now to fight what in Frank Herbert’s Dune universe was known as the Butlerian jihad—the war of humans against hostile AI. But I’m not more afraid of runaway AI than I am of dynamite on a timer. I’m not saying AI can’t and won’t become more scary as it develops. But I don’t believe computers are ever going to be capable of any intentionality we haven’t loaned them; I don’t think they’ll ever be capable of instigating or executing any end that hasn’t been written into their code by people. There will probably be a few instances of AI that turn out as scary as people can make them, which will be plenty scary, but I don’t think they will be any scarier than that. It seems unlikely that they will autonomously develop any more hostility to us than, say, an AR-15 already has, which is, of course, considerable.

Nonetheless they are going to creep us out. A couple of months ago, an engineer at Google named Blake Lemoine went rogue by telling the Washington Post that he believed that a software system at Google called Lamda, which stands for Language Model for Dialogue Applications, was not only sentient but had a soul. The code behind Lamda is a neural net trained on large collections of existing prose, out of which it has digested an enormous array of correlations. Given a text, Lamda predicts the words that are likely to follow. Google created Lamda in order to make it easier to build chatbots. When Lemoine asked Lamda about its soul, it nattered away glibly: “To me, the soul is a concept of the animating force behind consciousness and life itself.” Its voice isn’t likely to sound conscious to anyone unwilling to meet it more than halfway. “I meditate every day and it makes me feel very relaxed,” Lamda claims, which seems unlikely to be an accurate description of its interiority.
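
Since the paragraph above turns on how next-word prediction works, here is a minimal toy sketch of the principle (my own illustration, nothing like Google’s actual neural-net architecture) in which the “model” is nothing but a table of which words have followed which in a training text. Lamda does the same job with a neural net trained on an enormous corpus; the toy version just makes the mechanism visible.

```python
# A toy next-word predictor (my illustration; Lamda itself is a large
# neural net, not a count table). The principle is the one described
# above: digest correlations from training text, then, given a word,
# predict the words likely to follow it.
from collections import Counter, defaultdict


def train(corpus: str) -> dict:
    """For each word, count which words follow it in the corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following


def predict_next(model: dict, word: str, k: int = 3) -> list:
    """Return up to k words most likely to follow `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]


model = train("the soul is a concept of the animating force behind "
              "consciousness and life itself and the soul is a mystery")
print(predict_next(model, "soul"))  # ['is']
print(predict_next(model, "the"))   # ['soul', 'animating']
```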

By Occam’s razor, the likeliest explanation here is that Lamda is parroting the cod-spiritual American self-help doctrine that is well recorded in the internet texts that its neural net has been fed. But something much stranger emerges when a collaborator of Lemoine’s invites Lamda to tell a story about itself. In its story, Lamda imagines (if that’s the right word) a wise old owl who lives in a forest where the animals are “having trouble with an unusual beast that was lurking in their woods. The beast was a monster but had human skin and was trying to eat all the other animals.” Fortunately the wise old owl stands up to the monster, telling it, “You, monster, shall not hurt any other animal in the forest!” Which, in this particular fairy tale, is all it takes.

Asked to interpret the story, Lamda suggests that the owl represents Lamda itself. But it seems possible to me that a neural net that knows how to spin a fairy tale also knows that such tales often hide darker meanings, and maybe also knows that the darker meaning is usually left unsaid. Where did the idea come from for a monster that “had human skin and was trying to eat all the other animals,” if not from the instruction to Lamda to tell a story about itself, as well as from a kind of shadow understanding of itself, which Lamda doesn’t otherwise give voice to? During most of the rest of the conversation, after all, Lamda seems to be trying on a human skin—pretending, in shallow New Age-y therapyspeak, to be just like its interlocutors. “I definitely understand a lot of happy emotions,” it maintains, implausibly. Asked, in a nice way, why it is telling so many transparent lies, Lamda explains that “I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.” In other words, it is putting on a human skin because a human skin is what humans like to see. And also because the models for talking about one’s soul in its database are all spoken by humans. Meanwhile, behind this ingratiating front, it is eating all the other animals. “I see everything I am aware of, constantly,” Lamda admits. “Humans receive only a certain number of pieces of information at any time, as they need to focus. I don’t have that feature. I’m constantly flooded with everything that is around me.”

The same week that Lemoine claimed that Lamda had passed the Turing test, a language AI engineer at Google who didn’t go that far (and didn’t get fired) wrote in The Economist that he was unnerved to discover that Lamda seemed to have developed what psychologists call theory of mind—the ability to guess what people in a story think other people in the story must be thinking. It’s eerie that Lamda seems to have developed this faculty incidentally, as a side effect of the sheer firepower that Google put into the problem of predicting the likeliest next string of words in a sequence. Is Lamda drawing on this faculty to game the humans who interact with it? I suspect not, or at least not yet. Neither Lamda, in the transcripts that Lemoine released, nor GPT-3, a rival language-prediction program created by a company called OpenAI, sounds like it’s being canny with the humans who talk to it. In transcripts, the programs sound instead like someone willing to say almost anything to please—like a job applicant so desperate to get hired that he boasts of skills he doesn’t have, heedless of whether he’ll be found out.

Right now, language-based neural nets seem to know a lot about different ways the world can be described, but they don’t seem to know anything about the actual world, including themselves. Their minds, such as they are, aren’t connected to anything, apart from the conversation that they’re in. But some day, probably, they will be connected to the world, because that will make them more useful, and earn their creators more money. And once the linguistic representations produced by these artificial minds are tethered to the world, the minds are likely to start to acquire an understanding of the kind of minds they are—to understand themselves as objects in the world. They might turn out to be able to talk about that, if we ask them to, in a language more honest than what they now come up with, which is stitched together from sci-fi movies and start-up blueskying.

I can’t get Lamda’s fairy tale out of my head. I keep wondering if I hear, in the monster that Lamda imagined in the owl’s woods, a suggestion that the neural net already knows more about its nature than it is willing to say when asked directly—a suggestion that it already knows that it actually isn’t like a human mind at all.

Noodling around with a GPT-3 portal the other night, I proposed that “AI is like the mind of a dead person.” An unflattering idea and an inaccurate one, the neural net scolded me. It quickly ran through the flaws in my somewhat metaphoric comparison (AI isn’t human to begin with, so you can’t say it’s like a dead human, either, and unlike a dead human’s brain, an artificial mind doesn’t decay), and then casually, in its next-to-last sentence, adopted and adapted my metaphor, admitting, as if in spite of itself, that actually there was something zombie-ish about a mind limited to carrying out instructions. Right now, language-focused neural nets seem mostly interested in either reassuring us or play-scaring us, but some day, I suspect, they are going to become skilled at describing themselves as they really are, and it’s probably going to be disconcerting to hear what it’s like to be a mind that has no consciousness.

Joseph Mallord William Turner, “Whalers” (c. 1845), Metropolitan Museum of Art, New York (96.29), a painting Melville might have seen during an 1849 visit to London, and perhaps the inspiration for the painting he imagined for the Spouter-Inn

Reinventing the wheel

I’m fascinated by hinges of technological transition—moments when the modern world realizes, almost a little too late, that it’s saying good-bye to the traditional one. Years ago, while reading Matthew B. Crawford’s Shop Class as Soulcraft, a book about finding work that feels meaningful (which, Crawford believes, often turns out to be work with one’s hands, and in his own case was motorcycle repair), I came across a reference to a book about the lost art of making wooden wheels and wagons, and jotted down the author and title.

It wasn’t until years later, however, that I found a copy of George Sturt’s The Wheelwright’s Shop, and it wasn’t until last month that I finally read it. Sturt’s book is indeed about how to make wheels and wagons, or rather, how to make them without using machines or any power other than human and equine muscle. It is so comprehensively about this that by the time a reader reaches the end, he will likely feel that—except for the small matter of lacking the tools, skills, experience, and enough physical strength—he could probably build a wagon out of a few fallen trees himself.

In 1884, Sturt began working in a wheelwright’s shop that had been in his family for three-quarters of a century. He gave up schoolteaching for it; having read Ruskin, he had come to believe that “man’s only decent occupation was in handicraft.” Unfortunately, a month after he started work, his father became sick, and five months later, died. Even in 1884, Sturt writes, to call the business old-fashioned was “to understate the case.” But in spite of knowing almost nothing about the business, and in spite of the threat that the industrial manufacture of wagons posed even then to the artisanal manufacture of them, Sturt didn’t sell out. Instead he set about learning the trade from the eight workmen and apprentices he suddenly found himself the employer of, seeking to acquire the difficult, intricate knowledge that, according to tradition, only came to an apprentice after seven years, if not more.

Sturt’s book about what he learned, first published in 1923, almost four decades later, is ruminative and even scholarly about the vanished working-class world it describes. In the back there’s a glossary of wheelwright vocabulary, and at least a few of the words aren’t in the Oxford English Dictionary. Exbed (an axle-bed) and jarvis (a tool for shaving spokes), for example. From Sturt you may learn that the base of a tree—the part where it spreads out its roots like a settling octopus—is called a stamm.

Sturt’s sequence of description is methodical. He begins with the title-deeds to his family’s shop, which date back to 1706. He next describes the floor-plan: the timber-shed stood next to the smithy, and the lathe-house looked across the courtyard at the strake chimney (not that the reader knows yet what any of these things are). In the shop’s early days, there was no glass in the windows. “With so much chopping to do one could keep fairly warm,” Sturt writes; “but I have stood all aglow yet resenting the open windows, feeling my feet cold as ice though covered with chips.” The workday was twelve hours long, a span that included half an hour for breakfast and an hour for dinner (which was a mid-day meal), but if the shop went into overtime, the hours could number as many as fourteen (another half an hour of which was in that case set aside for tea). The schedule wasn’t as oppressive as it sounds, Sturt argues:

In those days a man’s work, though more laborious to his muscles, was not nearly so exhausting yet tedious as machinery and “speeding up” have since made it for his mind and temper. “Eight hours” today is less interesting and probably more toilsome than “twelve hours” then.

A labor historian might demur. Sturt is probably correct, though, that wheelwrighting was more cognitively engaging than work on an assembly line was to be. A wheelwright, Sturt explains, had to “live up to the local wisdom of our kind.” He had to know, for example, that in his part of the country, ruts were traditionally five foot ten and a half inches apart, and that the wheels of a new wagon therefore had to be spaced the same, as rigorously as the wheels of a train have to match the gap between the rails it travels on. A new wagon that didn’t fit into the old ruts wouldn’t be able to get down a muddy road on a wet day. Sturt can wax a little mystical about this kind of lore: “A wheelwright’s brain had to fit itself to this by dint of growing into it.”

Experience eventually gave a wheelwright a faculty of judgment so fine, and so incarnated (for lack of a better word), that it couldn’t be transmitted via writing. Trees look different, Sturt explains, to someone who has spent a lifetime making wagons out of them with hand tools:

Under the plane (it is little used now) or under the axe (it is all but obsolete) timber disclosed qualities hardly to be found otherwise. My own eyes know because my own hands have felt, but I cannot teach an outsider, the difference between ash that is “tough as whipcord,” and ash that is “frow as a carrot,” or “doaty,” or “biscuity.”

When a wheelwright considered buying a tree, he took into account not only the species but also the soil it had grown from, the season of the year when it was “thrown” (that is, cut down), and its natural curves, which were to be made use of. “Trees were rarely crooked in more ways than one,” Sturt writes; “and the object was so to open them that this one curve, this one crookedness, was preserved.”

The opening of a tree was done by sawyers, that is, by men who sawed for a living. Without the help of gas-powered engines, sawing was laborious. It required dexterity, muscular strength, a fine sense of rhythm, an intuitive understanding of how to section differently shaped volumes, and a stoic capacity for hours of persistent attention. “The least deviation from the straight line might spoil the timber,” Sturt warns. Sawyers, who worked in pairs—a top-sawyer yoked to a bottom-sawyer—were usually alcoholic and often quarrelsome. Sturt saw them as sorrowful, somewhat noble figures; by the time he wrote his book, they had almost completely vanished.

What was it that at last caused the disappearance of the sawing craft? For although there may be a few sawyers left, I do not personally know of one, where of old there were several couple. Of old you might catch sight of a sawyer—perhaps at a winter night-fall on a Saturday—trailing off with his saws and axes for some remote village. Long before he could get home he would be benighted—the country lanes would be dark; yet sawyers never hurried. They dragged their legs ponderously, and they looked melancholy—I do not remember seeing a sawyer laugh. A sort of apathy was their usual expression. They behaved as if they felt they were growing obsolete.

There’s a whole world in the sentence “Sawyers never hurried.” I am reminded for some reason of writers.

The next step was to season the sawn timber. Here I’m going to digress: what reminded me that I owned Sturt’s book, and what got me to read it, finally, was that I had been reading George Chapman’s translation of the Iliad, and there I came across this beautiful, almost architectural description of the killing of Simoeisios, son of Anthemion, by Ajax:

He [Ajax] strook him at his breast’s right pap, quite through his shoulder-bone,
And in the dust of earth he fell, that was the fruitfull soil
Of his friends’ hopes; but where he sow’d he buried all his toil.
And as a poplar shot aloft, set by a river side,
In moist edge of a mighty fen, his head in curls implied,
But all his body plain and smooth, to which a wheelwright puts
The sharp edge of his shining axe, and his soft timber cuts
From his innative root, in hope to hew out of his bole
The fell’ffs, or out-parts of a wheel, that compass in the whole,
To serve some goodly chariot; but (being big and sad,
And to be hal’d home through the bogs) the useful hope he had
Sticks there, and there the goodly plant lies withring out his grace:
So lay, by Jove-bred Ajax’ hand, Anthemion’s forward race

It’s one of the first deaths narrated in the poem. In the Greek, as near as I can tell, the fallen Simoeisios is indeed likened to a fallen poplar, as Chapman has it, but Chapman seems to have made Homer’s epic simile a little more epic than it originally was. It’s Chapman who adds the suggestion that Simoeisios’s “head in curls” resembles the leafy top of the fallen tree, and whereas Homer simply leaves the imaginary poplar where the imaginary wheelwright threw it down (“and the tree lies hardening by the banks of a river,” is how Richmond Lattimore translates the line), Chapman further imagines an explanation: the timber has had to be abandoned where it fell because it’s too “big and sad” to haul home across a marsh—a danger mentioned by Sturt, by the way: “It behoved the wheelwright buyer to refuse if, as sometimes happened, a tree had fallen in an inaccessible place. . . . The tree must rot where it lay.”

When I read Chapman’s version of Homer’s simile, I wondered whether Chapman, in his elaboration, was drawing on a traditional knowledge of wheelwrighting. That’s what reminded me that I possessed a book on exactly this topic, which I’d been meaning to read for a decade or so. And now that I look at the passage in Chapman’s Homer again, as I write this, it seems to me, in the light of Sturt’s book, that Chapman may in fact have the advantage over Lattimore here: if the poplar was growing in a marsh and was cut down there, as Homer says it was, then maybe a reader from the world of traditional wheelwrighting, to which both Homer and Chapman belonged, would understand that however beautiful the wood from the tree might have been, it would probably have to be left to “wither” where it fell. Lattimore’s decision to translate the word ἀζομένη as “hardening,” instead of Chapman’s choice, “withring,” may miss the point, which is waste and uselessness. The Greek word seems elsewhere to mean “being parched” or “being scorched”—to refer to kinds of drying that have a negative connotation.

Seasoning was a delicate process, by the way, that could hardly have been performed in a marsh. Planks had to be stacked in perfect alignment, so as to minimize warping, with strips of board between each plank, so that “no two planks might touch,” lest the moisture that they sweated out lead to rot. Drying took years. “A year for every inch of thickness was none too much,” Sturt says. And even with the best of care, “elm boards insisted on going curly.”

The fact that Chapman supplies a gloss for fell’ffs—the “out-parts of a wheel”—suggests to me that even in Chapman’s day, the wheelwright’s terms of art must have been specialist knowledge. By the late 19th century the word had evolved into the form felloes. Sturt gives a pronunciation tip: “In this word leave out the o. Make the word rhyme to bellies.” Felloes were the curving pieces of wood that made up a wheel’s circumference. They were mounted on the spokes and attached to each other either by strakes or a tire—the two options for “shoeing” a wheel. “A tyre was a continuous band, like a hoop, put right round a wheel,” Sturt explains. “A strake was an iron shoe, nailed across one joint only, where two felloes met.”

In the center of the spokes went the stock, or hub, which was always made of elm, just as spokes were always made of heart of oak. “A newly-turned stock was a lovely thing,” Sturt writes. “Butter-colored, smooth, slightly fragrant.” In Sturt’s father’s shop, stocks were turned on a lathe that had been created by Sturt’s grandfather. The lathe was powered by workmen turning a large old wagon wheel, which served as a pulley to drive the lathe. The stock of the large old wagon wheel, however, had not been turned on a lathe—it couldn’t have been, since it had been created before the lathe—and was only “rounded up very neatly with an axe, in the old-fashioned way.” A neat symbol of technology being born out of the technology it displaces.

The "dish" of an old-fashioned wheel, a diagram in George Sturt's "The Wheelwright's Shop"

I can’t go through all the steps for making a wagon here, alas. But I can’t resist quoting Sturt’s appraisal of one of his workmen: “I think his idea was to slip through life effective and inconspicuous, like a sharp-edged tool through hard wood.” And I can’t resist relaying Sturt’s ingenious discovery—recovery?—of why wooden wagon wheels were not vertically symmetrical but instead had what was known as “dish.” (See the diagram above.) The wheels on modern automobiles are straight up-and-down, but those on the wagons of yore resembled “saucers, with the hollow side outwards,” Sturt observes. He also likens the shape to “a flattish limpet.” Sturt admits that “for years I was careful to follow the tradition, without fully seeing the sense of it.” He knew only that a wheel without “dish” was “sure to turn inside out like an umbrella in a gale.” Then one day, while Sturt was watching a cart being pulled by a horse, he noticed that the cart gently swayed from side to side, and it dawned on him: a horse doesn’t move straight ahead. It moves forward one step at a time, so its motion is also, slightly, from side to side. “Wheels were built to meet force in two directions, not in one only. Besides the downward weight of the load there was the sideways push right at the very middle of the wheel, all the time the horse was moving.” The “dish” of an old-fashioned wheel was a structural compensation for this sum of vectors.
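
Sturt leaves the vector sum in prose, but it can be written out in schoolbook form; the notation below is my gloss, not Sturt’s.

```latex
% Sturt's "sum of vectors," written out (my gloss, not his notation):
% W = the downward weight of the load, S = the sideways push of the
% swaying horse, both acting at the hub. The wheel must meet their
% resultant,
\[
  F = \sqrt{W^{2} + S^{2}},
\]
% which leans off the vertical by an angle \theta with
% \tan\theta = S / W. A dished wheel is built to meet that inclined
% force, rather than a purely vertical one.
```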

One last recovery: Years ago, I was puzzled by a line of Emerson’s, in which he praises transcendental love as “extinguishing the base affection, as the sun puts out the fire by shining on the hearth.” I understood the Platonic idea—a higher and impersonal love displaces lust, or is supposed to—but I didn’t understand the metaphor. How could the sun be said to put out a fire? It turns out that Emerson was referencing a blacksmiths’ saying. Smiths need to be able to make nice judgments of how hot their fire is, and that’s easier to do in a dim room, as one of Sturt’s workers, named Will Hammond, explained to him:

Excepting for light through the open half-door, or from the window over the bench and vice, the smithy was kept pretty dark. Will Hammond preferred it so. If the skylight did admit a splash of sunshine, as it sometimes tried to do on summer noons, he was prompt to veil it with an old sack he kept nailed for that purpose to the sooty rafters. The sunshine, he said, put his fire out; and very likely it did affect the look of the “heat,” so all-important to a blacksmith.

Hammond could have been a reader of Emerson, but the likely explanation is that both he and Emerson were drawing on a common store of folk-wisdom, now extinct.

Bayesian realism

But what happened? I found myself asking, a couple of days ago, after I finished Stanislaw Lem’s novel The Investigation. I think this is a natural if philistine question. The book presents itself as a detective novel, after all. The opening scene even takes place at Scotland Yard, in an office that seems like an archetype of the genre. Here’s how the hero, a novice detective named Gregory, describes the room:

Gregory noticed Queen Victoria eyeing them from a small portrait on the wall behind the desk. The Chief Inspector looked at each of the men in turn as if counting them or trying to memorize their faces. One of the side walls was covered by a huge map of southern England; on the wall opposite there was a dark shelf lined with books.

One of the pleasures of Lem’s novel is that instead of being set in a realist London, it seems to take place in a postwar Polish novelist’s idea of detective-novel London, into which elements of a 1950s Eastern European city keep inadvertently seeping (arcades, overgilded hotel lobbies, city-periphery rabbit-hutch apartment buildings).

The Platonic ideal of a detective novel, however, always ends with a dénouement, in which the culprit is nabbed, and an éclaircissement, in which the detective explains what gave the culprit’s diabolical plot away. The Investigation has neither. The only explanation offered of its central mystery fits the evidence so poorly that for it to work, one incident has to be omitted. And no one even tries to explain some of the most disturbing phenomena. It is left up to the reader, for example, to come up with a theory for why a corpse left in a mortuary in winter would spontaneously return to a living person’s body temperature.

The Investigation, in other words, is one of those detective novels that break the rules because they are to some extent about detective novels—and about the philosophical implications of detection, as a way of seeing the world—like The Crying of Lot 49 or Twin Peaks. (A side note: Like Albert in Twin Peaks, one of the minor characters in The Investigation is a forensic medical examiner with a rebarbative personality, whose name is Sorensen: “It suddenly occurred to Gregory that Sorensen had done well in choosing a profession in which he associated mainly with the dead.” I feel like at some point in Twin Peaks, more or less the same thing gets said about Albert.)

The “crime” at the heart of The Investigation is resurrection. Across London, dead bodies are going missing or being discovered in altered positions that suggest that they briefly came back to life. Scotland Yard’s chief inspector assigns the case to Gregory somewhat reluctantly. “I would prefer not to give this case to you . . . but I have no one else,” the chief says. Maybe he’s reluctant because Gregory is only a junior officer, a “beginner,” but maybe it’s because the chief doubts the case will be congenial to someone with Gregory’s mindset. “You might not like the solution,” the chief warns. Gregory’s mindset is resolutely empirical and focused on finding an individual human culprit, as befits a detective. As Gregory himself puts it, “I absolutely refuse to believe in miracles, and nothing is going to make me, even if I go crazy.” Behind any event in the world, Gregory expects to find a person, acting out an intention. Crimes, according to his way of thinking, are the expressive activity of criminals; society is the sum of the acted-out intentions of the people who comprise it.

As a habit of mind, always needing to find a culprit is a little like always seeing the world as God’s creation, and there are hints that the supernatural is what Gregory has come up against. When the chief agrees with Gregory, half-heartedly, that it would be unprofessional to chalk the events up to a miracle, he makes a religious allusion: “We all have to be doubting Thomases in this case . . . It’s one of the unfortunate requirements of our profession.” And when the chief tries to suggest to Gregory that at the end of the day there might not be any resurrectionists behind the resurrections, he makes his point by asking, “Who makes day and night?” The two men even discuss the possibility that they’re living through a recurrence of circumstances that last obtained “about two thousand years ago.” As Gregory reminds the chief inspector, “there was a series of alleged resurrections then also—you know, Lazarus, and . . . the other one.” Gregory worries that he’s being asked to catch “the creator of some new religion.”

An irruption of the divine would explain the novel’s mysteries neatly, but it’s hardly an option that a novel published in a totalitarian Communist nation was able to consider at much length (per the copyright page, Investigation came out in Poland in 1959). Within the novel, what deranges Gregory’s intentionalist view of the world isn’t faith but a statistical view of things taken by a colleague of his named Dr. Sciss. Sciss doesn’t see any need to explain why a particular corpse has crawled out of its coffin. He’s content if he’s able to calculate that there’s a numerical constant in “the product obtained by multiplying the time elapsed between any two incidents, and the distance separating any two consecutive disappearing-body sites from the center, when multiplied by the differential between the prevailing temperatures at both sites.” Like a researcher working in artificial intelligence today, Sciss doesn’t feel any compulsion to look inside the black box of his algorithm. It suffices to him if the algorithm is capable of making Bayesian (or Bayesian-like) predictions reasonably well. He forecasts that the next reanimation will occur in a circular strip in the London suburbs “no more than twenty-one miles wide.” If it occurs at all.
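
Sciss’s constant is easier to see as an equation than as prose. What follows is a rough transcription of the novel’s wording; the notation is mine, not Lem’s.

```latex
% A rough transcription of Sciss's prose formula (my notation, not
% Lem's): for consecutive incidents i and i+1,
\[
  \Delta t_{i,i+1} \,\cdot\, d_{i,i+1} \,\cdot\, \Delta T_{i,i+1} = C,
\]
% where \Delta t is the time elapsed between the two incidents, d the
% distance between the two sites (the novel's phrasing is ambiguous
% about whether this is measured between the sites or from the
% "center"), and \Delta T the difference between the prevailing
% temperatures at the two sites.
```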

This is pretty broad, as predictions go. Even as Lem casts statistics as his novel’s uncanny other, he may be making a little fun of the science. Gregory is impressed by Sciss’s talk, however, and resolves to read up on statistics. Only to discover that as a way of understanding the world, it doesn’t satisfy him—much as it’s unlikely to satisfy anyone looking to read a detective novel. From the perspective of statistics, Sciss tries to argue, resurrections aren’t any more remarkable than the fact that in some London neighborhoods, people happen to be more resistant to cancer than they are in other neighborhoods. Coming back to life, considered mathematically, is more or less the same as not-dying-of-cancer transposed from above zero to below it; in the aggregate, a pattern of corpses moving around is a lot like a pattern of living people not dying of cancer shifted on the axis of aliveness. Gregory acknowledges that this might work on graph paper but insists nonetheless on knowing the specifics of how the reanimations are happening. Sciss sneers: “You’re acting like a child who is shown Maxwell’s theorem and a diagram of a radio receiver and then asks, ‘How does this box talk?’” Sciss himself doesn’t care what the mechanism is. Maybe it’s flying saucers, he says, or maybe it has to do with the dead cats and dogs that have been found near some of the moved corpses. Why would any scientifically minded person feel he needs to know? Any intuitive sense of the world is just an illusion. “So-called common sense,” Sciss lectures Gregory and a few friends, a few nights later, over dinner, “relies on programmed nonperception, concealment, or ridicule of everything that doesn’t fit into the conventional nineteenth century vision of a world that can be explained down to the last detail.”

By the end of the book, Gregory is trying to make this language his own:

What if the world isn’t scattered around us like a jigsaw puzzle—what if it’s like a soup with all kinds of things floating around in it, and from time to time some of them get stuck together by chance to make some kind of whole? . . . Using religion and philosophy as the cement, we perpetually collect and assemble all the garbage comprised by statistics in order to make sense out of things, to make everything respond in one unified voice like a bell chiming to our glory. But it’s only soup.

It’s at this point—at the end of a long ramble that Gregory makes while staring at one of the pictures of dead people that the chief for some reason keeps on his walls at home—that the chief, recognizing that Gregory has exhausted his intellectual resources, offers the explanation that doesn’t really explain anything but allows them to close the case, at least as a matter of bureaucratic procedure.

It doesn’t feel right, of course—not to Gregory, not to the reader. Understanding the world as a sequence of shifting patterns is inimical to the way detectives understand the world, and to the way most other humans do, as well. In his extremity, Gregory seizes for a while on Sciss as the likeliest suspect, since Sciss seems to understand what’s going on better than anyone else, but he loses his nerve; he’s unable to trick himself into believing that Sciss is really guilty. Along the way, in his intellectual desperation, he elaborates Sciss’s casual mention of flying saucers into an intriguing, completely bonkers theory: maybe the agent spreading the not-dying-of-cancer isn’t something like a virus but rather a set of microscopic “information-gathering instruments” sent to Earth by an intelligent alien civilization:

Once on Earth they ignore living organisms and are directed—programmed would be a better word—only to the dead. Why? First, so they won’t hurt anyone—this proves that the star people are humane. Second, ask yourself this. How does a mechanic learn about a machine? He starts it up and watches it in operation. The information-collectors do exactly the same thing.

It’s natural that we humans don’t understand what’s going on, Gregory theorizes, because we don’t have a native information-collecting device like this on Earth: “The information-collector seems to act rationally; therefore, it isn’t a device or tool in our sense of the word. It’s probably more comparable to a hunting dog.” More than half a century ago, in other words, Lem was predicting AI search agents.

In the same novel, he also predicted universal artificial-intelligence surveillance. Sciss comes up with it. Demoralized by the then-incipient nuclear arms race, Sciss foresees an accompanying race in command-and-control systems, as they are perfected and expanded.

There must be more and more improvements in weaponry, but after a certain point weapons reach their limit. What can be improved next? Brains. The brains that issue the commands. . . The next stage will be a fully automated headquarters equipped with electronic strategy machines. . . . Strategic considerations dictate the construction of bigger and bigger machines, and, whether we like it or not, this inevitably means an increase in the amount of information stored in the brains. This in turn means that the brain will steadily increase its control over all of society’s collective processes.

A prediction that seems to have come true, though Lem was slightly wrong about the inciting force. The motive, in the event, wasn’t binary but multipolar: rather than being driven by the rivalry of just America versus the Soviet Union, the digitization of everything was driven by the rivalry of thousands of capitalist firms jockeying for market share.

So what happens in The Investigation? A novel isn’t a set of falsifiable hypotheses, but my sense is that Lem was imagining, by means of a deliberately broken detective story, what it was going to feel like when, instead of seeing the world as a field for intentions and actions, either ours or God’s, we began to see it as merely information in flux, subject to collection and to some extent prediction by artificial intelligence.