The most dangerous intelligence

There’s been concern lately about the dangers of artificial intelligence (AI), and famously the concern has been expressed even by AI’s makers and proponents, such as Sam Altman of OpenAI. One term of art used when discussing the danger is alignment, as in, Will the interests of AI remain aligned with those of humanity? Or: Will the interests of AI turn out to be aligned with the interests of some humans, at the expense of the well-being of others?

New tools often do serve some people’s interests better than others, and usually the some in question turns out to be the rich. But the concern about AI is not just that it will put whole classes of people out of work. There’s fear that it could amount to a kind of apocalypse—that humans will be outsmarted by the new intellectual entities we are unleashing, maybe even before we realize that the entities are coming into their own. Faster than ChatGPT can write a college essay, power plants will be induced to melt down, pathogens will be synthesized and released, and military software will be hacked, or maybe will self-hack.

Is this possible? The idea seems to be that AI could develop intentions of its own, as it acquires general (rather than task-specific) intelligence and becomes a free-ranging, self-directed kind of mind, like the minds you and I have. Is that possible? Altman has described his GPT-4 engine as “an alien intelligence.” The phrase I found myself resorting to, when I played with it not long ago, was “a dead mind.” It can be uncanny how closely its operation resembles human thinking, but there’s something hollow and mechanical about it. The thoughts seem to be thought by someone disembodied, or by someone who has never been embodied. It isn’t clear how the one thing needful could be added to this. Among the surprises of AI’s development, however, have been its emergent skills—things it has learned incidentally, on the way to learning how to write paragraphs. Without its creators having set about teaching it to, AI became able to write software code, solve plumbing problems, translate from one human language to another, and construct on the fly what psychologists call “theory of mind,” i.e., mental models of what other minds are thinking. I think what most unnerves me about interacting with ChatGPT is how seamlessly it manages all the things a human takes for granted in a conversation: the AI seems to understand that you and it are different mental entities, taking turns expressing yourselves; that when you ask a question, it is supposed to answer, and vice versa; that when you give instructions, it is supposed to carry them out. It acts as though it understands, or even believes, that it may have information you don’t, and vice versa. That’s a very rudimentary kind of self, but it’s not nothing. Five years from now, will AI have a kind of self that approaches that of living human consciousness?
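That turn-taking, for what it’s worth, is not something the model has to infer from scratch. In the chat systems built on these models, the whole conversation is handed back to the model on every turn as a list of role-tagged messages. A minimal sketch of the shape of that data, assuming the widely used chat-message convention (the sample contents here are invented for illustration):

```python
# Sketch of the role-tagged transcript a chat-style model receives on each
# turn. The model has no memory between turns; the alternation of "user" and
# "assistant" roles in this list is what encodes "we are two entities taking
# turns." (Sample contents invented for illustration.)
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Do you know anything I don't?"},
    {"role": "assistant", "content": "Possibly. What subject do you have in mind?"},
    {"role": "user", "content": "Cuneiform."},
]

for message in conversation:
    print(f"{message['role']:>9}: {message['content']}")
```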

It’s dangerous to bet against a technology, especially one that is advancing this fast, but I think I do see a couple of limits, which I’d like to try to articulate.

First, on the matter of possible apocalypses, I’m not sure that any large-language-model artificial intelligence will ever be smarter than the smartest human. In fact I think it’s likely that AIs created from large language models will always be a little dumber than the smartest human. Language is not the world. It’s a description of the world; that is, it’s a remarkably supple and comprehensive representation of the mental model that humans have developed for understanding what has happened and is happening in the world and for predicting what will happen in it next. Behind the new AIs are neural nets—multidimensional matrices modeled on the interacting layers of neurons in a brain—and as the neural nets grow larger, and are fed on bigger and bigger tranches of human writing, it seems likely that they will approach, at the limit, existing human expertise. But it isn’t clear to me how they could ever exceed that expertise. How could they become more accurate or more precise than the description of the world they are being trained to reproduce? And since the nets need to be trained on very large corpuses of text, those corpuses are likely going to contain a fair amount of mediocrity if not just plain inaccuracy. So a bright, well-informed human—someone with an intuitive sense of what to ignore—will probably always have an edge over an AI, which will necessarily be taking a sort of average of human knowledge. That John Henry edge might get very thin if the AIs are taught how to do second-order fact-checks on themselves. But I think that’s as far as this process could go. I don’t think it’s likely that the kind of training and model-making currently in use will ever lead to an intellectual entity so superior to human intellect as to be qualitatively different. An AI will probably be able to combine more varieties of high-grade expertise than any single human ever could; a knowledge of plumbing and a knowledge of cuneiform don’t often appear together in a single human mind, for example, given the slowness of human learning, and maybe there’s something that a world-class plumber would immediately notice about cuneiform that a straight-and-narrow Assyriologist isn’t likely to see. That kind of synoptic look at human knowledge could be very powerful. But I suspect that the AI’s knowledge of plumbing will not be greater than that of the best human plumbers, and that the same will be true of cuneiform and the best Assyriologists. To be clear: having world-class expertise on tap in any smartphone may indeed disrupt society. I don’t think it will lead to our enslavement or annihilation, though, and I’m not sure how much more disruptive it will be to have that expertise in the form of paragraph-writing bots, rather than just having it in downloadable Wikipedia entries, as we already do. (Altman seems excited by the possibility that people will sign up to be tutored by the AIs, but again, we already live in a world where a person can take online courses inexpensively and download textbooks from copyright-violating sites for free, and I’m not sure we’re living through a second Renaissance. The in-person classroom is an enduring institution because there’s nothing like it for harnessing the social impulses of humans—the wish to belong, the wish to show off, the wish not to lose face before others—in order to focus attention on learning.)
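To make the “multidimensional matrices” a little more concrete, here is a toy sketch, in Python with NumPy, of what a stack of neural-net layers amounts to at bottom. The layer widths are arbitrary choices for illustration; a real language model has many more layers and billions of learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "schematic" a programmer fixes in advance: three layers of
# pseudoneurons, of widths 8, 16, and 4. Each pair of adjacent layers is
# connected by a matrix of weights; the numbers inside those matrices are
# what training adjusts.
layer_widths = [8, 16, 4]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_widths, layer_widths[1:])]

def forward(x):
    """Run an input through the net: multiply by each weight matrix,
    then apply a simple nonlinearity (ReLU), layer after layer."""
    for w in weights:
        x = np.maximum(0.0, x @ w)
    return x

print(forward(rng.normal(size=8)))  # a 4-number output vector
```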

A second limit: unfortunately, we already live in a world populated with billions of dangerous, unpredictable, largely unsupervised intelligences. Humans constantly try to cheat, con, and generally outmaneuver one another. Some are greedy. Some are outright malicious. Many of these bad people are very clever! Or anyway have learned clever tricks from others. And so sometimes you and I are tempted to loan a grand or two to a polite, well-spoken man our age in another country who has an appealing (but not too obviously appealing) profile pic and a really plausible story, and sometimes our credit cards get maxed out by strangers buying athleticwear in states we’ve never been to, and sometimes a malignant narcissist leverages the racist grievances of the petty bourgeoisie to become President of the United States, but humanity is not (immediately or completely) destroyed by any of these frauds. It isn’t clear to me that AIs wielded by bad actors, or even AIs that develop malicious intentionality of their own, would be harder for humans to cope with than the many rogues we already have on our hands. I’m not saying there’s no new danger here. Criminals today are limited in their effectiveness by the fact that most of them aren’t too bright. (If they were bright, they would be able to figure out how to get what they want, which is usually money, without running the risk of imprisonment and shame. Thus the phrase “felony stupid,” i.e., the level of stupid that thinks it’s a bright idea to commit a felony.) If, in the new world, criminals are able to rent intelligence, that could be a problem, but again, I wonder how much more of a problem that would be than the one we have to live with now, where criminals can copy one another’s scam techniques.

The last limit I can think of is that the AIs aren’t animals like us, with a thinking process powered by drives like lust, hunger, social status anxiety, and longing for connection, and therefore aren’t experiencing the world directly. There seems to be a vague idea that an artificial general intelligence derived from large language models could be attached post hoc to a mechanical body and thereby brought into the world, but I’m not sure that such a chimera would ever function much like a mind born in a body, always shaped by and sustained in it. It’s not clear to me that in any deep sense a large-language-model-derived intelligence could be attached to a robotic body except in the way that I am attached to a remote-controlled toy tractor when you hand me the remote control. Maybe I’m being mystical and vague myself here, but as I understand it, the genius of the large language models is that programmers devise the idea of them and, in individual cases, design the schematics (i.e., how many layers of how many pseudoneurons there will be), but leave all the particular connections between the pseudoneurons up to the models themselves, which freely alter the connections as they learn. If you train up an intelligence on language corpuses, and attach it to a robot afterwards, there isn’t going to be the same purity of method—it won’t be spontaneous self-organization of pseudoneurons all the way down. It’ll just be another kludge, and kludges don’t tend to produce magic. I think it’s unlikely that AIs of this centaur-like sort will experience the world in a way that allows them to discover new truths about it, except under the close supervision and guidance of humans, in particular domains (as has happened with models of protein folding, for example). Also, unless you develop a thinking machine whose unit actions of cognition are motivated by drives—rather than calculated as probabilities, in an effort to emulate a mental model that did arise in minds powered by such drives—I don’t think you’re ever going to produce an artificial mind with intentions of its own. I think it’s got to be love and hunger all the way down, or not at all. Which means that the worst we’ll face is a powerful new tool that might fall into the hands of irresponsible, malignant, or corrupt humans. Which may be quite bad! But, again, is the sort of thing that has happened before.
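The division of labor described above (programmers fix the schematic; the model alters its own connections) can be seen in miniature in any training loop. A toy sketch, again in Python with NumPy, of a single layer learning an arbitrary linear mapping by gradient descent; nothing here resembles a real language model except the principle that the connection strengths are never set by hand:

```python
import numpy as np

rng = np.random.default_rng(1)

# The programmer fixes the schematic: one layer, 4 inputs, 3 outputs.
w = rng.normal(size=(4, 3)) * 0.1

# A target behavior to learn: here, an arbitrary linear mapping,
# standing in for the patterns in a text corpus.
w_true = rng.normal(size=(4, 3))
xs = rng.normal(size=(256, 4))
ys = xs @ w_true

# The connections themselves (the numbers in w) are left to the model:
# each step nudges them toward whatever values reduce the error on the data.
learning_rate = 0.1
for step in range(200):
    pred = xs @ w
    grad = xs.T @ (pred - ys) / len(xs)  # gradient of the squared error
    w -= learning_rate * grad

print("remaining error:", float(np.mean((xs @ w - ys) ** 2)))  # near zero
```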

All of my thoughts on this topic should be taken with a grain of salt, because the last time I programmed a line of code was probably ninth grade, and I haven’t looked under the hood of any of this software. And really no one seems to know what kinds of change AI will bring about. It’s entirely possible that I’m telling myself a pleasant bedtime story here. Lately I do have the feeling that we’re living through an interlude of reprieve, from I’m not sure what (though several possibilities come to mind). Still, my hunch is that any harms we suffer from AI will be caused by the human use of it, and that the harms will not be categorically different from challenges we already face.

Readings

“Even when you can’t make out the whole shape of a coming catastrophe, you might well feel that you’re living in an idyll, and count the hours.” I feel honored that the novelist Pauline Kerschen was prompted by my recent poem about the Pemaquid lighthouse to write a riff about Auden, and about love in a time of politics (Metameat).

John Jeremiah Sullivan writes a poem about the plumbers who came to his rescue (Harper’s):

They liked to compete over who could sell the other one out first and worse.
Greg would tell me Fran was a thief. Fran would say that Greg smoked crack.
It soon became apparent that both of their accusations were absolutely true,
But they made them as if they expected me to react in a scandalized fashion.
Here was the amazing thing—both men were skilled, even brilliant plumbers.

Laura Kolbe writes a poem about trying to tell the duck and the rabbit from the duck-rabbit (Harper’s):

For a week I tried keeping

forks and spoons in separate
drawered slots. But everything

that aids you tends
toward a similar handle.

Jonathan Lethem writes about the invention of the Brooklyn neighborhood Boerum Hill, where he grew up, and the ambiguous history of its gentrifiers (New Yorker): “The moral calculus lent righteousness to the brownstoners’ preservationist stance. Yet a tone had crept in, that of an élitist cult.”

Jane Hu on Mission: Impossible—Dead Reckoning Part One (Paris Review): “The plot, so gloriously convoluted that the film spends its first thirty minutes explaining it as though addressing a baby, can be boiled down to something like this: Ethan Hunt is tasked with saving a series of beautiful women, which is a metaphor for saving the entire human race, which is of course, an allegory for Tom Cruise’s endless mission to save the movies.” Jane Hu on Barbie (Dissent): “This narrative unraveling isn’t all that different from the history of Western feminism itself, which has long entailed amnesia and recursion.”

“ ‘It’s good you have left America,’ she said. ‘Perhaps you’ll avoid a death of despair.’ ” In Albania, an American literary critic makes a long-overdue visit to a dentist (i.e., Christian Lorentzen writes autofiction).

“What the patient wants is for their old way of managing, which has begun to sputter and malfunction, to work again. Psychoanalysis therefore consists, according to the Lacanian analyst Bruce Fink, in giving the patient ‘something he or she never asked for.’ ” Ben Parker writes about why Adam Phillips thinks psychoanalysis doesn’t cure anyone and shouldn’t (n+1).

I didn’t realize that Charlotte Brontë had Melvillean moments. But consider this conversation, in her novel Shirley (which is about Luddites! why did none of you tell me she wrote a novel about Luddites!), between the fiery aristocrat Shirley Keeldar and the pale but passionate Caroline Helstone:

[Keeldar:] “And what will become of that inexpressible weight you said you had on your mind?”

[Helstone:] “I will try to forget it in speculation on the sway of the whole Great Deep above a herd of whales rushing through the livid and liquid thunder down from the frozen zone: a hundred of them, perhaps, wallowing, flashing, rolling in the wake of a patriarch bull, huge enough to have been spawned before the Flood: such a creature as poor Smart had in mind when he said,—

‘Strong against tides, the enormous whale

Emerges as he goes.’ ”

In praise of granite

Pemaquid Point Lighthouse, Maine

The Atlantic is publishing a new poem of mine on their website today. It’s called “Pemaquid lighthouse revisited,” and it’s about Peter and me revisiting a geologically striking promontory in Maine last year. It’s also about being married, and about being gay and being married, and about time. It’s sort of a knockoff of “Tintern Abbey” and sort of an answer to Auden’s “In Praise of Limestone,” a poem that also discusses homosexuality in rocks (which I wrote about for The Atlantic years ago, as it happens). At one point, Auden describes the young men in his poem as “at times / Arm in arm, but never, thank God, in step,” and as kind of a riposte to that anxiety of Auden’s (what would be so terrible about being in step, Wystan?), I wrote in alternating five-beat and six-beat lines, so that every pair of lines is, as I put it in the poem, “in step the way one always is in time / and differing the way one always does in time.”

Please take a look!

Other means

Early in the federal indictment of former President Trump that was released yesterday, special counsel Jack Smith admits that Trump, “like every American,” has the right to say whatever he wants about the 2020 presidential election—and even has the right to lie about it. But it was a crime, Smith asserts, for Trump to use lies to obstruct and distort the tallying and certifying of election results. Smith goes on to indict Trump for conspiracy to defraud the United States, conspiracy to obstruct the certification of presidential election results, and conspiracy to deprive Americans of their right to vote.

The distinction between lying that is free of legal consequences and lying in order to commit fraud and obstruction isn’t a terribly subtle one, but there are going to be people who will pretend they don’t understand it. If Trump has the right to lie to NBC News, they will ask, why doesn’t he also have the right to lie to Georgia’s Secretary of State about Georgia’s election results? So let’s get this out of the way: If I announce at my favorite local gay bar that Ryan Gosling and I have just gotten married, and I succeed in making all my friends jealous, I’m not committing a crime. But if Gosling and I file our taxes together, falsely claiming on the forms that we’re married, in an attempt to pay less tax than we would otherwise have to, it’s fraud. And it’s still fraud even if we don’t get away with it.

There are probably also going to be people who claim that Trump and his conspirators may not have been aware that the claims they were making were untrue. Smith’s indictment shivs that defense pretty brutally. In paragraph 30 (¶30) of the indictment, to take just one bald-faced example, John Eastman, aka “Co-Conspirator 2,” acknowledges in an email that he and Trump have learned that some of the allegations in a verification they have signed are “inaccurate” and that signing a new verification “with that knowledge (and incorporation by reference) would not be accurate”—and then he and Trump go ahead and put Trump’s signature on the new verification anyway.

Yesterday’s indictment isn’t as much fun to read as Smith’s earlier indictment of Trump for withholding classified security documents, partly because a more serious matter is at stake (national security secrets are important, but they’re not as important as the right to vote, and Trump seems to have been treating the secret documents as memorabilia, anyway, a motivation so entertainingly venal that it’s hard to treat the earlier matter with the gravity it deserves) and partly because the way Trump and his allies lied—over and over again, shamelessly—is exhausting. The catalog of their lies in Smith’s indictment is practically Homeric. They lie, are told they are lying, and then tell the same lie again. Remember the years we spent trying to argue in good faith with people who were repeating lies in bad faith? These are those people. “It’s all just conspiracy shit beamed down from the mothership” (¶25), admits a senior advisor to the Trump campaign, in a private email, dismayed by the campaign’s repeated losses in court and exasperated that its political strategy obliges him or her to pretend publicly to believe in repeatedly debunked claims.

The particular lie that pushed this senior advisor into venting was about election workers at the State Farm Arena in Atlanta. Giuliani (“Co-Conspirator 1”) told the lie to Georgia state senators on December 3, 2020 (¶21), the lie was publicly debunked by the Georgia secretary of state’s chief operating officer on December 4 (¶23), Georgia’s attorney general told Trump there was no evidence for the claim on December 8 (¶24), Giuliani told the lie again in a public hearing before a committee of Georgia’s state representatives on December 10 (¶26), Trump’s acting attorney general and acting deputy attorney general told Trump the actions at State Farm Arena had been “benign” on December 15 (¶27), Trump’s chief of staff told him the election tallying at State Farm Arena had been “exemplary” on December 22 (¶28), Trump nonetheless tweeted that Georgia’s election officials were “terrible people” who were hiding evidence of fraud on December 23 (¶28), Trump repeated the lie to his acting attorney general and acting deputy attorney general on December 27 (¶29), Trump signed a verification incorporating the lie on December 31 (¶30), and Trump repeated the lie one more time on January 2, 2021, to Georgia’s secretary of state, during the infamous conversation when Trump said he was looking to “find” 11,780 more votes (¶31).

After Giuliani told the lie in Georgia’s House of Representatives on December 10, “the two election workers received numerous death threats,” Smith observes (¶26). The identities of the people who made those death threats are very likely unknown, but almost certainly neither Trump nor any of his co-conspirators made the threats.

Why are they nonetheless part of Smith’s indictment? If the case ever reaches trial, Trump’s lawyers may try to argue that he shouldn’t be held responsible for threats made by a third party. But keep in mind the distinction that is the crux of the case, between lying for the sake of vanity or entertainment and lying in order to obstruct or impede the workings of democracy. A death threat is not an innocuous speech act. It is a promise to use violence. A public lie about a government employee or official, if a reasonable person would expect the lie to trigger death threats, is therefore a kind of force, applied to a government employee or official with respect to the performance of their duties. “An act of force to compel our enemy to do our will”: that’s Clausewitz’s first (if less famous) definition of war. With good reason, the laws in any well-ordered republic forbid acts of war between politicians and/or citizens. Hobbes writes, in Leviathan, that “because all signs of hatred, or contempt, provoke to fight, . . . we may . . . , for a law of nature, set down this precept, that no man by deed, word, countenance, or gesture, declare hatred, or contempt of another.” In a state of war, one isn’t necessarily bound by the laws of nature, Hobbes writes, and we don’t want to be living in a state of war.

On November 11, 2020, Trump disparaged a Philadelphia City Commissioner who had said there was no evidence of voter fraud in Philadelphia, and the commissioner and his family were sent death threats (¶42). And on January 6, 2021, famously, Trump tweeted that “Mike Pence didn’t have the courage to do what should have been done to protect our Country and our Constitution,” and one minute later, the Secret Service felt obliged to evacuate Pence to a secure location. Rioters who broke into the Capitol that afternoon chanted, “Hang Mike Pence!” (¶111–13). If Trump knows anything about himself, and it may be the only thing about himself he knows, it is that he has a gift for summoning and directing the rage of his followers. It is his instinct in a crisis, almost a reflex. Words for him are instrumental, not representative. He knew what he was doing.

The prospect of violence recurs at two other moments in the indictment. On January 3, a deputy White House counsel warned Jeffrey Clark (“Co-Conspirator 4”) that if Trump were kept in power on the basis of false claims of voter fraud, there would be “riots in every major city in the United States.” Clark replied, “Well, . . . that’s why there’s an Insurrection Act.” Clark, in other words, looked forward to repressing with military force any protest of the power grab he and his conspirators were trying to effect.

In its legal specifics, the scheme to keep Trump in power depended on the theory that Pence had the authority to reject or return to the states their slates of legitimate electors. On January 4, John Eastman acknowledged to one of Trump’s senior advisors that no court was likely to back the theory, and the advisor warned Eastman that by drumming up public fury on the strength of a theory that could never be put into effect legally, Trump and his allies were “going to cause riots in the streets.” Eastman replied that it wouldn’t be the first moment in American history when violence was needed to protect the republic (¶94). Eastman, in other words, looked forward to bolstering with street violence a legal theory he conceded was unjustifiable.

Clark looked forward to putting down rioters, and Eastman looked forward to being backed by them, but both knew that through lies they were welcoming violence into politics. Clausewitz’s second, more famous definition of war is “a continuation of political activity by other means”—the implication being that politics has its own means. To maintain the rule of law, politicians who go beyond those means must be kept out of politics, if not sent to jail.