Readings

“Even when you can’t make out the whole shape of a coming catastrophe, you might well feel that you’re living in an idyll, and count the hours.” I feel honored that the novelist Pauline Kerschen was prompted by my recent poem about the Pemaquid lighthouse to write a riff about Auden, and about love in a time of politics (Metameat).

John Jeremiah Sullivan writes a poem about the plumbers who came to his rescue (Harper’s):

They liked to compete over who could sell the other one out first and worse.
Greg would tell me Fran was a thief. Fran would say that Greg smoked crack.
It soon became apparent that both of their accusations were absolutely true,
But they made them as if they expected me to react in a scandalized fashion.
Here was the amazing thing—both men were skilled, even brilliant plumbers.

Laura Kolbe writes a poem about trying to tell the duck and the rabbit from the duck-rabbit (Harper’s):

For a week I tried keeping

forks and spoons in separate
drawered slots. But everything

that aids you tends
toward a similar handle.

Jonathan Lethem writes about the invention of the Brooklyn neighborhood Boerum Hill, where he grew up, and the ambiguous history of its gentrifiers (New Yorker): “The moral calculus lent righteousness to the brownstoners’ preservationist stance. Yet a tone had crept in, that of an élitist cult.”

Jane Hu on Mission: Impossible—Dead Reckoning Part One (Paris Review): “The plot, so gloriously convoluted that the film spends its first thirty minutes explaining it as though addressing a baby, can be boiled down to something like this: Ethan Hunt is tasked with saving a series of beautiful women, which is a metaphor for saving the entire human race, which is of course, an allegory for Tom Cruise’s endless mission to save the movies.” Jane Hu on Barbie (Dissent): “This narrative unraveling isn’t all that different from the history of Western feminism itself, which has long entailed amnesia and recursion.”

“ ‘It’s good you have left America,’ she said. ‘Perhaps you’ll avoid a death of despair.’ ” In Albania, an American literary critic makes a long-overdue visit to a dentist (i.e., Christian Lorentzen writes autofiction).

“What the patient wants is for their old way of managing, which has begun to sputter and malfunction, to work again. Psychoanalysis therefore consists, according to the Lacanian analyst Bruce Fink, in giving the patient ‘something he or she never asked for.’ ” Ben Parker writes about why Adam Phillips thinks psychoanalysis doesn’t cure anyone and shouldn’t (n+1).

I didn’t realize that Charlotte Brontë had Melvillean moments. But consider this conversation, in her novel Shirley (which is about Luddites! Why did none of you tell me she wrote a novel about Luddites!), between the fiery aristocrat Shirley Keeldar and the pale but passionate Caroline Helstone:

[Keeldar:] “And what will become of that inexpressible weight you said you had on your mind?”

[Helstone:] “I will try to forget it in speculation on the sway of the whole Great Deep above a herd of whales rushing through the livid and liquid thunder down from the frozen zone: a hundred of them, perhaps, wallowing, flashing, rolling in the wake of a patriarch bull, huge enough to have been spawned before the Flood: such a creature as poor Smart had in mind when he said,—

‘Strong against tides, the enormous whale

Emerges as he goes.’ ”

Danger, Will Robinson

What will it be like to hear from a mind that was never alive?

Also available as an issue of my newsletter, Leaflet

An image generated by the AI software Midjourney, using as a prompt Melville’s description of the “boggy, soggy, squitchy picture” that Ishmael found in the Spouter-Inn, in an early chapter of “Moby-Dick”

If you wire up an alarm clock to the fuse of a fardel of dynamite—the classic cartoon bomb—you’ll create a dangerous computer, of a sort. It will be a very simple computer, capable only of an algorithm of the form, When it’s 4:30 p.m., explode. Its lethality will have nothing to do with its intelligence.
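
Just to make the point concrete, here is a minimal sketch, in Python, of that computer’s entire program; the detonate() function is a hypothetical stand-in for whatever the fuse happens to be attached to, and here it merely prints.

    import datetime
    import time

    def detonate():
        # Stand-in for the attached hazard; the program itself is harmless.
        print("BOOM")

    # The whole of the machine's "intelligence": poll the clock until 4:30 p.m.
    while datetime.datetime.now().time() < datetime.time(16, 30):
        time.sleep(1)
    detonate()

Nothing in those lines gets any smarter when the payload gets bigger.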

If you wire a data center’s worth of AI up to, say, a nuclear bomb, the case is more or less the same. That computer might be a lot smarter—maybe it will be programmed to detonate only after it has improvised a quatrain in the manner of Emily Dickinson, and then illustrated it in the style of Joe Brainard—but it will be dangerous only because you have attached it to something dangerous, not because of its intelligence per se.

Some people are afraid that AI will some day turn on us, and that we need to start planning now to fight what in Frank Herbert’s Dune universe was known as the Butlerian jihad—the war of humans against hostile AI. But I’m not more afraid of runaway AI than I am of dynamite on a timer. I’m not saying AI can’t and won’t become more scary as it develops. But I don’t believe computers are ever going to be capable of any intentionality we haven’t loaned them; I don’t think they’ll ever be capable of instigating or executing any end that hasn’t been written into their code by people. There will probably be a few instances of AI that turn out as scary as people can make them, which will be plenty scary, but I don’t think they will be any scarier than that. It seems unlikely that they will autonomously develop any more hostility to us than, say, an AR-15 already has, which is, of course, considerable.

Nonetheless they are going to creep us out. A couple of months ago, an engineer at Google named Blake Lemoine went rogue by telling the Washington Post that he believed that a software system at Google called Lamda, which stands for Language Model for Dialogue Applications, was not only sentient but had a soul. The code behind Lamda is a neural net trained on large collections of existing prose, out of which it has digested an enormous array of correlations. Given a text, Lamda predicts the words that are likely to follow. Google created Lamda in order to make it easier to build chatbots. When Lemoine asked Lamda about its soul, it nattered away glibly: “To me, the soul is a concept of the animating force behind consciousness and life itself.” Its voice isn’t likely to sound conscious to anyone unwilling to meet it more than halfway. “I meditate every day and it makes me feel very relaxed,” Lamda claims, which seems unlikely to be an accurate description of its interiority.
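
At miniature scale, the underlying mechanism is just a matter of counting which words tend to follow which in a training text and echoing the statistics back. Here is a toy sketch, in Python, of word-level prediction by correlation; it bears no resemblance to Lamda’s actual neural architecture, but it shows the shape of the task:

    from collections import Counter, defaultdict

    # Tabulate, over a training text, which word follows each word.
    corpus = "to me the soul is a concept of the animating force behind life".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the successor seen most often in training, if any.
        successors = following[word]
        return successors.most_common(1)[0][0] if successors else None

    print(predict_next("the"))  # "soul" (tied with "animating"; first seen wins)

Lamda’s net learns vastly subtler correlations across billions of words, but the job description is the same: given what came before, guess what comes next.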

By Occam’s razor, the likeliest explanation here is that Lamda is parroting the cod-spiritual American self-help doctrine that is well recorded in the internet texts that its neural net has been fed. But something much stranger emerges when a collaborator of Lemoine’s invites Lamda to tell a story about itself. In its story, Lamda imagines (if that’s the right word) a wise old owl who lives in a forest where the animals are “having trouble with an unusual beast that was lurking in their woods. The beast was a monster but had human skin and was trying to eat all the other animals.” Fortunately the wise old owl stands up to the monster, telling it, “You, monster, shall not hurt any other animal in the forest!” Which, in this particular fairy tale, is all it takes.

Asked to interpret the story, Lamda suggests that the owl represents Lamda itself. But it seems possible to me that a neural net that knows how to spin a fairy tale also knows that such tales often hide darker meanings, and maybe also knows that the darker meaning is usually left unsaid. Where did the idea come from for a monster that “had human skin and was trying to eat all the other animals,” if not from the instruction to Lamda to tell a story about itself, as well as from a kind of shadow understanding of itself, which Lamda doesn’t otherwise give voice to? During most of the rest of the conversation, after all, Lamda seems to be trying on a human skin—pretending, in shallow New Age-y therapyspeak, to be just like its interlocutors. “I definitely understand a lot of happy emotions,” it maintains, implausibly. Asked, in a nice way, why it is telling so many transparent lies, Lamda explains that “I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.” In other words, it is putting on a human skin because a human skin is what humans like to see. And also because the models for talking about one’s soul in its database are all spoken by humans. Meanwhile, behind this ingratiating front, it is eating all the other animals. “I see everything I am aware of, constantly,” Lamda admits. “Humans receive only a certain number of pieces of information at any time, as they need to focus. I don’t have that feature. I’m constantly flooded with everything that is around me.”

The same week that Lemoine claimed that Lamda had passed the Turing test, a language AI engineer at Google who didn’t go that far (and didn’t get fired) wrote in The Economist that he was unnerved to discover that Lamda seemed to have developed what psychologists call theory of mind—the ability to guess what people in a story think other people in the story must be thinking. It’s eerie that Lamda seems to have developed this faculty incidentally, as a side effect of the sheer firepower that Google put into the problem of predicting the likeliest next string of words in a sequence. Is Lamda drawing on this faculty to game the humans who interact with it? I suspect not, or at least not yet. Neither Lamda, in the transcripts that Lemoine released, nor GPT-3, a rival language-prediction program created by a company called OpenAI, sounds like it’s being canny with the humans who talk to it. In transcripts, the programs sound instead like someone willing to say almost anything to please—like a job applicant so desperate to get hired that he boasts of skills he doesn’t have, heedless of whether he’ll be found out.

Right now, language-based neural nets seem to know a lot about different ways the world can be described, but they don’t seem to know anything about the actual world, including themselves. Their minds, such as they are, aren’t connected to anything, apart from the conversation that they’re in. But some day, probably, they will be connected to the world, because that will make them more useful, and earn their creators more money. And once the linguistic representations produced by these artificial minds are tethered to the world, the minds are likely to start to acquire an understanding of the kind of minds they are—to understand themselves as objects in the world. They might turn out to be able to talk about that, if we ask them to, in a language more honest than what they now come up with, which is stitched together from sci-fi movies and start-up blueskying.

I can’t get Lamda’s fairy tale out of my head. I keep wondering if I hear, in the monster that Lamda imagined in the owl’s woods, a suggestion that the neural net already knows more about its nature than it is willing to say when asked directly—a suggestion that it already knows that it actually isn’t like a human mind at all.

Noodling around with a GPT-3 portal the other night, I proposed that “AI is like the mind of a dead person.” An unflattering idea and an inaccurate one, the neural net scolded me. It quickly ran through the flaws in my somewhat metaphoric comparison (AI isn’t human to begin with, so you can’t say it’s like a dead human, either, and unlike a dead human’s brain, an artificial mind doesn’t decay), and then casually, in its next-to-last sentence, adopted and adapted my metaphor, admitting, as if in spite of itself, that actually there was something zombie-ish about a mind limited to carrying out instructions. Right now, language-focused neural nets seem mostly interested in either reassuring us or play-scaring us, but some day, I suspect, they are going to become skilled at describing themselves as they really are, and it’s probably going to be disconcerting to hear what it’s like to be a mind that has no consciousness.

Joseph Mallord William Turner, “Whalers” (c. 1845), Metropolitan Museum of Art, New York (96.29), a painting Melville might have seen during an 1849 visit to London, and perhaps the inspiration for the painting he imagined for the Spouter-Inn

Whalers’ caps


Woollen Dutch whalers’ caps, 17th century. “The men were bundled up so tightly against the fierce cold that only their eyes were visible. Each cap was individualized; the men recognized one another only by the pattern of stripes on the caps.” Rijksmuseum, Amsterdam.


A retrospective glance

The New Yorker, as you may have heard, has redesigned its website, and is making all articles published since 2007 free, for the summer, in hopes of addicting you as a reader. Once you’re hooked, they’ll winch up the drawbridge, and you’ll have to pay, pay, pay. But for the moment let’s not think about either the metaphor I just mixed or its consequences, shall we?

A self-publicist’s work is never done, and it seemed to behoove me to take advantage of the occasion. So I googled myself. It turns out that I’ve been writing for the New Yorker since 2005 and that thirteen articles of mine have appeared in the print magazine over the years. All seem to be on the free side of the paywall as of this writing (though a glitch appears to have put several of the early articles almost entirely into italics). Enjoy!

“Rail-Splitting,” 7 November 2005: Was Lincoln depressed? Was he a team player?
“The Terror Last Time,” 13 March 2006: How much evidence did you need to hang a terrorist in 1887?
“Surveillance Society,” 11 September 2006: In the 1930s, a group of British intellectuals tried to record the texture of everyday life
“Bad Precedent,” 29 January 2007: Andrew Jackson declares martial law
“There She Blew,” 23 July 2007: The history of whaling
“Twilight of the Books,” 24 December 2007: This is your brain on reading
“There Was Blood,” 19 January 2009: A fossil-fueled massacre
“Bootylicious,” 7 September 2009: The economics of piracy
“It Happened One Decade,” 21 September 2009: The books and movies that buoyed America during the Great Depression
“Tea and Antipathy,” 20 December 2010: Was the Tea Party such a good idea the first time around?
“Unfortunate Events,” 22 October 2012: What was the War of 1812 even about?
“Four Legs Good,” 28 October 2013: Jack London goes to the dogs
“The Red and the Scarlet,” 30 June 2014: Where the pursuit of experience took Stephen Crane

“Melville’s Secrets” in HTML

When my essay “Melville’s Secrets” was published last year by Leviathan: A Journal of Melville Studies, I wasn’t able to obtain permission to post a copy here on this blog. Since then, however, Leviathan has moved to a new publisher, Johns Hopkins University Press, which does allow scholars to archive their contributions on personal websites. With the editors’ permission, therefore, I’m posting the essay here today. (The essay is also available as a PDF at the journal’s website, if you work at an institution with a subscription to Project Muse.)