Does media violence lead to real violence, and do video games impair academic performance?

Cross-posted from the University of Michigan Press blog.

"Twilight of the Books," an essay of mine published in The New Yorker on 24 December 2007, has been honored by inclusion in The Best of Technology Writing 2008, edited by Clive Thompson. When The New Yorker published my essay, I posted on my blog a series of mini-bibliographies, for anyone who wanted to dig into the research behind my article and try to answer for themselves whether television impaired intellect or whether literacy was declining (here's an index/overview of all these research posts). A month or so ago, when the University of Michigan Press, the publisher of The Best of Technology Writing 2008, invited me to write about my essay for their blog, I was afraid I didn't have any more to say. Also, alas, I was under deadline. But I have a breather now, and looking over my year-old notes, I realize that there were a couple of categories of research that I never posted about at the time, because the topics didn't happen to make it into my article's final draft.

This research tried to answer the questions, Does exposure to violence on television or in video games lead to aggressive behavior in the real world? and Do video games impair academic performance? I still think the questions are very interesting, though I must now offer my summaries with the caveat that they are somewhat dated. In fact, I know of some very interesting research recently published on the first question, some of which you can read about on the blog On Fiction. I'm afraid I haven't kept up with video games as closely, but I'm sure there's more research on them, too. I hope there is, at any rate, because when I looked, I found very little. (By research, in all cases, I meant peer-reviewed studies based on experimental or survey data, and not popular treatments.)

A few words of introduction. The historian Lynn Hunt has suggested in her book Inventing Human Rights that in the eighteenth century, the novel helped to change Europe's mind about torture by encouraging people to imagine suffering from the inside. As if in corroboration, some of the research summarized below suggests that the brain responds less sympathetically when it perceives violence through electronic media. As you'll see, however, there is some ambiguity in the evidence, and the field is highly contested.

1. Does exposure to violence on television or in video games lead to aggressive behavior in the real world?

  • In a summary of pre-2006 research, John P. Murray pointed to experiments in the 1960s by Albert Bandura, showing that children tend to mimic violent behavior they have just seen on screen, and to a number of studies in the early 1970s that found correlations between watching violence and participating in aggressive behavior or showing an increased willingness to harm others. In 1982, a panel commissioned by the Surgeon General to survey existing research asserted that "violence on television does lead to aggressive behavior," and in 1992, a similar panel commissioned by the American Psychological Association reported "clear evidence that television violence can cause aggressive behavior." One mechanism may be through television's ability to convince people that the world is dangerous and cruel, in what is known as the "mean world syndrome." Murray claims that a twenty-two-year longitudinal study in Columbia County, New York, run by Huesmann and Eron, which was begun under the auspices of the Surgeon General's office, has linked boys' exposure to television violence at age eight to aggressive and antisocial behavior at age eighteen and to involvement in violent crime by age thirty; in fact, a 1972 study by Huesmann et al. did link boys' exposure at eight to aggressive behavior at eighteen, but the 1984 study cited by Murray linked violent crime at age thirty to aggressive behavior at age eight and said nothing about exposure to televised violence. In an unrelated study, when television was introduced in Canada, children's levels of aggression increased. [John P. Murray, "TV Violence: Research and Controversy," Children and Television: Fifty Years of Research, Lawrence Erlbaum Associates, 2007. L. Rowell Huesmann, Leonard D. Eron, Monroe M. Lefkowitz, and Leopold O. Walder, "Stability of Aggression Over Time and Generations," Developmental Psychology, 1984. For a synopsis of Huesmann's 1972 study, see Steven J. Kirsh, Children, Adolescents, and Media Violence: A Critical Look at the Research, Sage Publications, 2006, p. 208.]
  • A longitudinal study of 450 Chicago-area children was begun in 1977, when the children were between six and eight years old, and continued in 1992-1995, when they were between twenty-one and twenty-three years old. As children, the subjects were asked about their favorite television programs, whether they identified with the characters, and how true-to-life they thought the shows were. Fifteen years later, it emerged that watching violent shows, identifying with aggressive characters of the same sex, and believing that the shows were realistic correlated with adult aggression, including physical aggression. The effect was present even after controlling for such factors as initial childhood aggression, intellectual capacity, socioeconomic status, and parents' level of emotional support. (Note that the researchers classified the Six Million Dollar Man as a "very violent" show and the heroine of the Bionic Woman as an aggressive character.) [L. Rowell Huesmann, Jessica Moise-Titus, Cheryl-Lynn Podolski, and Leonard D. Eron, "Longitudinal Relations between Children's Exposure to TV Violence and Their Aggressive and Violent Behavior in Young Adulthood, 1977-1992," Developmental Psychology, 2003. Cf. Kirsh, p. 209.]
  • In a 2006 textbook about the relation between media violence and aggressive behavior, author Steven J. Kirsh notes that a 1994 meta-analysis of the link between television violence and aggression estimated the size of the effect to be r = .31. "The effect sizes for media violence and aggression are stronger than the effect sizes for condom use and sexually transmitted HIV, passive smoking and lung cancer at work, exposure to lead and IQ scores in children, nicotine patch and smoking cessation, and calcium intake and bone mass," Kirsh wrote. A 2004 meta-analysis found that the correlation between video game violence and aggressive behavior was r = .26. To put the effect sizes in perspective, Kirsh notes that they are greater than the link between testosterone levels and aggression, but weaker than the link between having antisocial peers and delinquency. In surveying the research on video games, Kirsh makes the point that there is little research as yet, and that most of it was done in what he calls the "Atari age," when the games were fairly innocuous; almost no one has experimentally tested the effects on children and teens of the new-generation, highly realistic and gory first-person shooter games. [Steven J. Kirsh, Children, Adolescents, and Media Violence: A Critical Look at the Research, Sage Publications, 2006.]
  • In a 2007 summary of research, three scientists asserted that there was "unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts," and noted that the link between television violence and aggression had been proved by studies in both the laboratory and the field, and by both cross-sectional and longitudinal studies. Video games were not as well documented, but in the opinion of the scientists, the preliminary evidence suggested that their effect would be similar. Playing violent video games has been shown to increase physiological arousal. Measurements of skin conductance and heart rate show that people have less of an aversion to images of real violence if they have previously been exposed to violent television or violent video games. Measurements of event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) allow researchers to look with new precision at the magnitude of brain processes that occur at particular times and at the activation of specific regions of the brain. A 2006 study by Bartholow et al., for example, showed that exposure to violent video games reduces aversion to scenes of real violence, as measured by a blip of voltage that typically occurs 300 milliseconds after sight of a gory image. A 2006 study by Murray et al. (see below) showed that violent scenes of television activated parts of the brain associated with emotion, memory, and motor activity. Yet another 2006 study, by Weber et al., showed that while players were engaged in violence during a video game, a brain region associated with emotional processing was suppressed, and one associated with cognitive processing was aroused, perhaps in order to reduce empathy and thereby improve game performance.
In a 2005 study by Matthews et al., chronic adolescent players of violent video games scored the same as adolescents with disruptive behavior disorders on a test designed to assess a brain region responsible for inhibition and error correction. Attempting to explain the results of the various studies under review, the authors write: "Initial results suggest that, although video-game players are aware that they are engaging in fictitious actions, preconscious neural mechanisms might not differentiate fantasy from reality." [Nicholas L. Carnagey, Craig A. Anderson, and Bruce D. Bartholow, "Media Violence and Social Neuroscience," Current Directions in Psychological Science, 2007.]
  • While a functional magnetic resonance imaging (fMRI) device monitored their brain activity, eight children watched a video montage that included boxing scenes from Rocky IV and part of a National Geographic animal program for children, among other clips. The violent scenes activated many brain regions that the nonviolent scenes did not, mostly in the right hemisphere. These regions have been associated by other researchers with emotion, attention and arousal, detection of threat, episodic memory, and the fight-or-flight response. The authors of the study speculate that "though the child may not be aware of the threat posed by TV violence at a conscious level . . . a more primitive system within his or her brain (amygdala, pulvinar) may not discriminate between real violence and entertainment fictional violence." In the activation of regions associated with long-term memory, the researchers saw a suggestion that the television violence might have long-term effects on the viewer. [John P. Murray et al., "Children's Brain Activations While Viewing Televised Violence Revealed by fMRI," Media Psychology, 2006.]
  • In a 2005 study, 213 video-game novices with an average age of twenty-eight were divided into two groups, and one group spent a month playing an average of 56 hours of a violent multi-player fantasy role-playing video game. Participants completed questionnaires to assess their aggression-related beliefs before and after the test month, and were asked before and after whether they had argued with a friend and whether they had argued with a romantic partner. The data showed no significant correlation between hours of game play and the measures of aggression, once the results were controlled for age, gender, and pre-test aggression scores. The authors note that there might be an effect too small for their study to detect, and that adults might be less sensitive to the exposure than children or adolescents. [Dmitri Williams and Marko Skoric, "Internet Fantasy Violence: A Test of Aggression in an Online Game," Communication Monographs, June 2005. Andrea Lynn, "No Strong Link Seen Between Violent Video Games and Aggression," News Bureau, University of Illinois at Urbana-Champaign, 9 August 2005.]
  • A 2007 book presented three studies of video-game violence's effect on school-age children. In the first study, 161 nine- to twelve-year-olds and 354 college students were asked to play one of several video games—either a nonviolent game, a violent game with a happy and cartoonish presentation, or a violent game with a gory presentation—and then to play a second game, during which they were told they could punish other players with blasts of noise (the blasts were not, in fact, delivered). Those who played violent games, whether cartoonish or gory, were more likely to administer punishments during the second game; playing violent games at home also raised the likelihood of punishing others. Children and college students behaved similarly. In the second study, 189 high school students were given questionnaires designed to assess their media usage and personality. The more often the students reported playing violent video games, the more likely they were to have hostile personalities, to believe that violence was normal, and to behave aggressively, and the less likely they were to feel forgiving toward others. The correlation between game playing and violent behavior held even when the researchers controlled for gender and aggressive beliefs and attitudes. The more time that students spent in front of screens (whether televisions or video games), the lower their grades. In the third study, 430 elementary school children were surveyed twice, at a five-month interval, about their exposure to violent media, beliefs about the world, and whether they had been in fights. Students were asked to rate one another's sociability and aggressiveness, and teachers were asked to comment on these traits and on academic performance. In just five months, children who played more video games darkened in their outlook on the world, and peers and teachers noticed that they became more aggressive and less amiable.
The effect was independent of gender and of the children's level of aggression at the first measurement. Screen time impaired the academic performance of these students, too; they became more aggressive, however, only when the content they saw during the screen time was violent. [Craig A. Anderson, Douglas A. Gentile, and Katherine E. Buckley, Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy, Oxford University Press, 2007.]

2. Do video games impair academic performance?

  • In a 2004 survey of 2,032 school-age children, there were statistically significant differences in print and video-game use between students earning As and Bs and those earning Cs and below. On average, A-B students had read for pleasure 46 minutes and played video games for 48 minutes the previous day; C-and-below students had read for pleasure 29 minutes and played video games for 1 hour 9 minutes. Television watching seemed constant between the groups. [Donald F. Roberts, Ulla G. Foehr, and Victoria Rideout, Generation M: Media in the Lives of 8-18 Year-Olds, The Henry J. Kaiser Family Foundation, March 2005, page 47.]
  • A 2007 book presented results of a study in which 189 high school students were given questionnaires designed to assess their media usage and personality. The more time that students spent in front of screens (whether televisions or video games), the lower their grades. In a related and similar study, 430 elementary school children were surveyed twice, at a five-month interval, and screen time impaired the academic performance of these students, too. [Craig A. Anderson, Douglas A. Gentile, and Katherine E. Buckley, Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy, Oxford University Press, 2007.]

UPDATE (27 Feb. 2009): For ease in navigating, here's a list of all the blog posts I wrote to supplement my New Yorker article "Twilight of the Books":

Notebook: "Twilight of the Books" (overview)
Are Americans Reading Less?
Are Americans Spending Less on Reading?
Is Literacy Declining?
Does Television Impair Intellect?
Does Internet Use Compromise Reading Time?
Is Reading Online Worse Than Reading Print?
I also later talked about the article on WNYC's Brian Lehrer Show and on KUER's Radio West.
And, as a bonus round: Does media violence lead to real violence, and do video games impair academic performance?

Group and Church

The Prime of Miss Jean Brodie, The Girls of Slender Means, The Driver’s Seat, The Only Problem.
By Muriel Spark.
Everyman’s Library Contemporary Classics. 608 pp. $23.

IN 1948, A PSYCHIATRIST in London named W. R. Bion invited volunteers to help him study the psychic life of groups. This was in the days before supervisory committees began to deter researchers from inflicting anguish on human subjects, and the volunteers had no idea what they were in for. In Bion’s account, it sounds as if his experimental technique extended the neutrality of Freud and anticipated the surrealism of Monty Python:

At the appointed time members of the group begin to arrive; individuals engage each other in conversation for a short time, and then, when a certain number has collected, a silence falls on the group. After a while desultory conversation breaks out again, and then another silence falls. It becomes clear to me that I am, in some sense, the focus of attention in the group. Furthermore, I am aware of feeling uneasily that I am expected to do something. At this point I confide my anxieties to the group, remarking that, however mistaken my attitude might be, I feel just this.

I soon find that my confidences are not very well received. . . .

The disappointment was only the first in a series. The group continued to expect Bion to tell them what to do, and he continued not to. As frustration and confusion set in, he persisted in offering nothing but mild-mannered observations of what he thought they thought of him, until his abstinence, perceived as sabotage, had so charged the room with anger and resentment that the group was “almost devoid of intellectual content.” Success!

Bion’s account is fun to read. The hapless are having their reality messed with, and an authority figure is indulging in mischief. It is not unlike the fun to be had by watching Ashton Kutcher’s Punk’d. Yet with his prank Bion was posing important questions. Why does a group insist on having a leader? If deprived of the leader it expects, how does it choose a new one? Bion, who fought with distinction in World War I, began his psychiatric practice during World War II, when he developed a group therapy for British soldiers, and perhaps he felt challenged to understand Britain’s ideological enemy, fascism. His questions about groups and leaders soon led him to a broader inquiry: How does a group enhance and compromise the individuals that compose it?

He developed a complex theory. Though he considered Aristotle’s Politics “an extremely dreary work,” he agreed with Aristotle that the human is a political animal, or “group animal,” as he preferred to phrase it. According to Bion, adults are always ready to participate in three groups: the dependent group, who require a master for nourishment and protection; the fight-flight group, who demand a leader to take them into battle or away from danger; and the pairing group, who hope for a messiah and sort themselves into couples (they will create him by mating, if other searches fail). Bion’s experiences suggested to him that in the absence of rules and traditions, humans revert to these primitive groups, which feel vital but not always pleasant. Unfortunately, in selecting a master, leader, or messiah, a primitive group tends to choose a person good at evacuating himself of his individuality and poor at apprehending reality. On this point Bion’s sense of humor was nicely bleak: “In its search for a leader the group finds a paranoid schizophrenic or malignant hysteric if possible; failing either of these, a psychopathic personality with delinquent trends will do; failing a psychopathic personality it will pick on the verbally facile high-grade defective. I have at no time experienced a group of more than five people that could not provide a good specimen of one of these.”

Like Freud, Bion was pessimistic about the human predicament. And yet, like Freud, he was not despairing. In order to resist the atavistic pull, sophisticated groups could balance the three primitive arrangements against one another and regulate them with artifices, such as laws. As for the individual, Bion saw him as “a group animal at war, not simply with the group, but with himself for being a group animal.” Nonetheless, no individual truly lived apart from the group. “You cannot understand a recluse living in isolation,” Bion wrote, “unless you inform yourself about the group of which he is a member.”

IN MURIEL SPARK’S novel The Prime of Miss Jean Brodie, when visitors ask Sister Helena about the influences that led her to convert to Roman Catholicism and write a treatise on morals, she gently corrects the tendency of their questions. No, she wasn’t a fan of Auden or Eliot in the 1930s. No, she wasn’t reacting against Calvinism. “But there was a Miss Jean Brodie in her prime,” she volunteers. Sister Helena may have set herself apart from the world by joining a convent, but she cannot be understood unless the reader informs himself about the group of which she is a member. In her case, that group consists of an unorthodox teacher and the women who were fascinated by her as schoolgirls. (Like Bion, Spark recognizes that a group may survive its dispersal and even the death of some of its members.)

The fictions of Muriel Spark are wonderfully unlike one another in their premises. The four novels collected in the new Everyman’s Library omnibus, for example, are concerned with a teacher’s influence on her protégées at an Edinburgh girls’ school (The Prime of Miss Jean Brodie, 1961), a tragedy in a boarding house for young women in wartime London (The Girls of Slender Means, 1963), an accountant who makes her vacation the occasion of an antisocial spree (The Driver’s Seat, 1970), and a man whose inherited wealth enables him to study the Book of Job full-time (The Only Problem, 1984). Elsewhere she has written about an amateur autobiographers’ club, anonymous phone calls that disconcert a circle of elderly friends, and a ghost’s account of haunting her murderer. And this list is only a sample of her range.

But for all its disparity of subject, Spark’s fiction is consistent in one aspect–its focus on the group. On occasion an individual receives a close-up and the narrative follows that person’s train of thought, but the story never belongs to a single person. In fact, individuals are expendable, and Spark’s indifference to their fate can be shocking. Early in The Prime of Miss Jean Brodie, the reader learns that the stupidest of the girls under Jean Brodie’s spell, Mary Macgregor, “a silent lump, a nobody whom everybody could blame,” will die at age twenty-three in a fire in a hotel. There is no indication that her death is regrettable.

Back and forth along the corridors ran Mary Macgregor, through the thickening smoke. She ran one way; then, turning, the other way; and at either end the blast furnace of the fire met her. She heard no screams, for the roar of the fire drowned the screams; she gave no scream for the smoke was choking her. She ran into somebody on her third turn, stumbled and died.

Groups of two do not fix Spark’s attention, either. There are romantic affairs and even marriages in her fiction, but they feel incidental. In Memento Mori (1958), Godfrey Colston perks up considerably when he learns that his eighty-five-year-old wife was, decades earlier, as unfaithful as he; his relief is highly Sparkian. More Sparkian still is the daisy chain of broken confidences and lingering fondnesses that leads to Godfrey’s revelation. It is where three or more are gathered that Spark takes an interest. Her favorite material is the tissue between characters, composed of their acknowledged and unacknowledged exchanges–the shared secrets and the purloined, the matched moods and the resisted. In a different genre, Muriel Spark has asked questions about group life as probing as Bion’s, and her books have defied expectation as freakishly as his experiments.

SPARK CAME OF AGE during World War II. She was in Southern Rhodesia (now Zimbabwe) when it began. “Believe it or not, . . . I wanted to ‘experience’ the war,” she writes in her memoir, Curriculum Vitae (1993). In 1944 she placed her five-year-old son in a boarding school and returned to Great Britain alone. She also left behind a husband whom she was divorcing; he had become violent and was being treated in a mental hospital.

In wartime London, she took a job in “the dark field of Black Propaganda or Psychological Warfare.” She assisted at a top-secret radio station run by British intelligence, which broadcast in German and pretended to be Nazi in order to demoralize and mislead unsavvy German listeners. German POWs were the announcers. Sometimes the lies Spark helped to manufacture boomeranged and were reported as fact in British newspapers. “We were constantly in danger of deceiving our own side,” she writes in her memoir, “and sometimes, at least for a while, we did.” It would be hard to imagine a more appropriate training for a novelist. As Godfrey’s wife, a novelist, remarks in Memento Mori, “The art of fiction is very like the practise of deception.” In order to save the country from fascism, Spark helped to make up stories with a fascist point of view. She received an education in the moral hazards of fiction into the bargain.

After the war, Spark edited a trade magazine for jewelers, which she enjoyed, and a poetry journal, which she loathed (“In no other job have I ever had to deal with such utterly abnormal people”). For the next few years she worked as an independent literary scholar. “No one . . . has ever been as poor as you were in those days,” a friend later recalled; “I mean someone of education, culture and background. . . . You had one dress, and your shoes had holes in them.” She again became entangled with a man who suffered from mental illness.

She then suffered a series of transformative experiences. Intense reading of John Henry Newman led to her conversion to Roman Catholicism in 1954. Shortly afterward, malnourishment and overuse of Dexedrine caused her to have hallucinations. As she recovered, it was her good fortune to receive a small subsidy from Graham Greene and a commission from Macmillan to write her first novel–then as now “a thing unheard of,” as she admits in her memoir. The Comforters, which made literary use of her hallucinations and her conversion, was published in 1957 and brought her success; here her memoir ends.

HER MOST FAMOUS WORK, The Prime of Miss Jean Brodie, seems slighter than it is. It is more a novella than a novel in length; it takes up only 123 pages in the Everyman’s Library edition. Published in 1961–the same year as Bion’s collection of papers Experiences in Groups–it is concerned with a schoolteacher named Jean Brodie and five girls who were her pupils in the junior grades and who “remained unmistakably Brodie” in the senior grades and beyond. Inside the Brodie circle, life is charmed. “All my pupils are the crème de la crème,” Jean Brodie assures her followers.

As a sort of schoolgirl romance, the book scarcely seems serious enough in genre to take up the subject of fascism. Yet the subject is present on the first page, in an inventory of the miscellaneous knowledge that distinguishes the Brodie set from their peers:

These girls were discovered to have heard of the Buchmanites and Mussolini, the Italian Renaissance painters, the advantages to the skin of cleansing cream and witch-hazel over honest soap and water, and the word ‘menarche’; the interior decoration of the London house of the author of Winnie-the-Pooh had been described to them, as had the love lives of Charlotte Brontë and of Miss Brodie herself.

At the name Mussolini (and at the name Buchman, if you know that he was an early admirer of Hitler, a fact I had to look up), a reader may be startled. But it is difficult to take the startle seriously. The reader doesn’t yet know what Miss Brodie thinks of Mussolini and Buchman, and the other items in the list are so prepossessing and harmless. Witch hazel! A. A. Milne! The reader suppresses his qualms and is thereby inoculated. By the time he learns, a chapter further on, that Miss Brodie has returned from a vacation in Italy with a picture “showing the triumphant march of the black uniforms in Rome,” which she displays to her students admiringly, he is willing to regard her politics as an eccentricity. He takes it as he might take the news that an admired musician has been caught shoplifting, or that a favorite actor belongs to what is probably a cult. The misbehavior doesn’t seem to taint what he admires–her regally announced preferences (Giotto over Leonardo; goodness, truth, and beauty before safety), her unchecked flair for personal drama. People like her tend to break rules, after all.

What sort of group is Jean Brodie’s? That she breaks rules is one of the first clues. “Hold up your books,” she instructs her pupils; “prop them up in your hands, in case of intruders. If there are any intruders, we are doing our history lesson.” It is the sort of group that lies to outsiders. But such a flat, moralizing description feels unfair. Brodie encourages her students to deceive the headmistress so that she may narrate to them the romantic death of her fiancé in World War I. Hers is a seductive badness; it brings access to secret knowledge. And the secret knowledge leads to personal developments that might not otherwise be possible. Inspired by Brodie’s tale, two of the girls collaborate on a kind of slash-fiction sequel, in which Brodie’s fiancé survives, returns to Scotland, and tries to hold them prisoner in a mountain hut.

Here are the two girls at work:

Jenny wrote: With one movement he flung her to the farthest end of the hut and strode out into the moonlight and his strides made light of the drifting snow.

‘Put in about his boots,’ said Sandy.

Jenny wrote: His high boots flashed in the moonlight.

‘There are too many moonlights,’ Sandy said, ‘but we can sort that later when it comes to publication.’

‘Oh, but it’s a secret, Sandy!’ said Jenny.

‘I know that,’ Sandy said. ‘Don’t worry, we won’t publish it till our prime.’

The reader feels the lure. He may even feel as if he too belongs, because Spark has written so as to induce that feeling. She uses words, such as prime, that have a private meaning in the Brodie set. The sequence of ideas is jumpy, forcing the reader to supply context as if from his own experience as a member. Is Sandy or Jenny the primary creator of the fantasy about Jean Brodie’s fiancé? It is left purposely vague, as if no individuals were responsible. It is also unclear whether Sandy’s wish to publish the fantasy is in earnest or is itself a component of the fantasy.

It is worth looking closely at that last ambiguity. Probably Sandy does wish to share the tale with others, and Jenny’s response is an admonishment: As members of the Brodie group, they mustn’t. Sandy and Jenny have split off from the main group in order to fantasize about Brodie’s love life. The fantasy is loyal to Brodie, in object if not in spirit, but if it were to issue in a work made public–if, in pairing up, Sandy and Jenny were to give birth, so to speak–they would no longer belong to Brodie’s group but to their own, led not by her but by the fiction they had created. They would betray her. Because Sandy is still loyal, she expresses her wish to publish ambiguously, so that it need not become a confrontation. It does not, and neither does Jenny’s correction. At this stage of the novel, Sandy is willing to defer, if not sacrifice, her personal wishes.

Am I reading too much into a couple of lighthearted lines of dialogue? The more bonded you are to Jean Brodie’s group, the more likely you are to think so, and Spark is at pains to have the reader bonded. Inside a group, simple disavowals–I didn’t mean “here” but “in Neverland”; I didn’t mean “next week” but “in our prime”–may be enough to suppress the awareness of dissent, because the group doesn’t want to know about it in the first place. “Group mentality,” Bion writes, “is the unanimous expression of the will of the group, contributed to by the individual in ways of which he is unaware, influencing him disagreeably whenever he thinks or behaves in a manner at variance with the basic assumptions” governing the group. Paramount among the basic assumptions is the assumption that the survival of the group trumps the welfare of mere individuals.

The ambiguity of Sandy’s challenge reflects not only insight into groups but also a stylistic innovation. Spark did not write The Prime of Miss Jean Brodie from an omniscient point of view. Nor did she write it in free indirect style, the technique that intermittently sinks a bird’s-eye perspective into the consciousness of a particular character. The book is not written from Sandy’s perspective or from that of the Sister Helena she grows up to become. It is not written from the perspective of Jenny, Mary, or any of the other girls, and it is not written from Jean Brodie’s perspective, either. It certainly isn’t written as if by an outsider. It is written, rather, from the point of view of the Brodie group. It descends at will into their group mentality and participates in all the limitations of understanding and banishments of conflict that belonging enforces.

THE BRODIE GROUP is responsible for the sadism in the narrative toward the longanimical Mary Macgregor. Individually, none of the girls are so heartless. Even Jean Brodie voices a regret: “Perhaps I should have been kinder to Mary,” she speculates at one point.

But the group has no remorse. It knows that Mary “was too stupid ever to tell a lie,” and therefore it ought to realize that she must be telling the truth when she says she doesn’t know who spilled ink on the floor. But it blames her nonetheless. “I dare say it was you,” Jean Brodie pronounces. “I’ve never come across such a clumsy girl.” Mary is the scapegoat. The group requires her to play the role, in spite of fact and logic, and she accepts it for the sake of belonging. She accepts it deeply. “These were the days,” Spark writes, with killing irony, “that Mary Macgregor, on looking back, found to be the happiest days of her life.”

It is when Sandy feels the “temptation to be nice to Mary Macgregor” that she first appreciates the power of the Brodie group. On a field trip, Sandy joins in mockery of Mary’s awkward gait but then feels sorry. While Miss Brodie is praising the girls as “heroines in the making,” Sandy considers treating Mary kindly. At once she experiences something she calls “group-fright.” If she were kind to Mary, she “would separate herself, and be lonely, and blameable in a more dreadful way than Mary who, although officially the faulty one, was at least inside Miss Brodie’s category of heroines in the making.” Kindness would probably frighten Mary, too. If the group did not blame her, it might have no use for her at all.

We know the group that requires a scapegoat; everyone has been in it at some point. How do you leave? How do you change it? Told as it is from the perspective of the Brodie group, the answer in Spark’s novel is betrayal. It would please the dreary Aristotle to know that she consistently writes novels with a recognition and a reversal, which betrayal neatly provides.

TO MAKE A CONNECTION between Bion’s ideas, Spark’s novels, and the relation of fascism to group life, I’ve been focusing on just the first few chapters of The Prime of Miss Jean Brodie. Before finishing, I’d like to survey several of Spark’s novels quickly, in order to suggest their connection to religion.

When the four novels in this omnibus are read one after the other, one sees that it is Spark’s pattern to follow an idyll with a catastrophe. The idyll is in some cases less than idyllic. It might be more accurate to call it a time of license, which the catastrophe to some extent punishes. Miss Brodie defies the pedagogical norms of Edinburgh and turns her classroom into something like a salon. In The Girls of Slender Means, wartime poverty seems to bind a group of beautiful young women into a community, who share dresses and ration coupons. The vacationing heroine of The Driver’s Seat has mysteriously excused herself from taking into account any consequences that might take more than a day to reach her. And in The Only Problem, after Harvey Gotham is abandoned by his wife, he takes up with his sister-in-law, somewhat improvisationally. In each case, the lawlessness of the idyll accounts in large part for its charm. And the trick of it is that in each case, beneath the charm of the idyll is evil, in the person or the conditions that have made it possible. In the catastrophe, the evil surfaces, and the novel turns on its own pleasures.

There is evil, in other words, in one of the most attractive aspects of Spark’s fiction, because the pleasure of the idyll is a pleasure of this world, which is led by the prince of this world. The pleasures of the novel are also subject to him. This is an awfully dark aesthetic. Transposed from the key of religion to that of politics, an equivalent might be a radio program broadcast from the Nazi point of view but intended to demoralize Nazis.

In The Prime of Miss Jean Brodie, Spark pokes fun at the perversity of John Calvin in “having made it God’s pleasure to implant in certain people an erroneous sense of joy and salvation, so that their surprise at the end might be the nastier.” But Calvinism has no monopoly on such twists. Is it any less perverse to suppose that God sent his spirit into a body as a means for teaching that the body must be mortified? In The Comforters, Caroline Rose, a recent convert to Roman Catholicism, makes a revealing joke. Attending mass always puts her in a bad mood, she admits. “It’s evidence of the truth of the Mass, don’t you see?” she tells her boyfriend. “The flesh despairs.” In Loitering with Intent (1981), Spark offers a piece of advice to novelists: “To make a character ring true it needs must be in some way contradictory, somewhere a paradox.” Maybe the advice also applies to religious doctrines.

It takes just such a perversity to write novels that convict novelizing. Like the mortification of the flesh, the thwarting of novelists has a religious flavor in Spark’s fiction. In The Comforters, Caroline is convinced by her hallucinations that someone on another plane is trying to write her into a novel, and she feels obliged to resist. “I intend to stand aside and see if the novel has any real form apart from this artificial plot,” she says. “I happen to be a Christian.” (Soon after she says this, whoever is writing the novel puts her into a car wreck.)

To be more precise, Spark’s novels convict wild novelizing. They warn against the stunted artist who devotes her storytelling to the fascination and control of weaker personalities–who turns her powers on life rather than art. In Spark’s fiction she is usually a woman, but not always. Like Spark herself, the wild novelizer specializes in configurations of at least three people, such as blackmail. She is almost always opposed by another storyteller, younger and more gifted. Is the difference between them merely a matter of talent?

Maybe the difference is like that between a cult and a church. People in a church are likely to be more talented at leading a religion than those in a cult, but it seems inadequate to say that the presence of talent is what distinguishes the two groups. Those in a church have found a way to check dangerous personalities. Bion noted that priests handle the issue of leadership as tenderly as if it were dynamite. “The attempt is constantly and increasingly made to ensure that the leader . . . is not a concrete person–the commonest way in which this is done is of course by making a god the leader; and when that, for a variety of reasons, turns out still to be not sufficiently immaterial, by striving to make him God, a spirit.” (By this logic, Bion would probably have seen real mischief in the carnalizing conceit of Dan Brown’s novel The Da Vinci Code, which posits that Christ wed Mary Magdalene and that his descendants survive today.) The expertise of the priests matters less, perhaps, than the methods they rely on: the development, systematization, and perpetuation of a set of beliefs. The beliefs feel arbitrary in proportion as they restrain the group from following its nature, known to Bion as the group’s basic assumption and to the church as original sin.

A novel and a religion might both be described as ways of giving structure to groups of people. Where religions have orthodoxy, novels have something else, a kind of aesthetic principle. Spark does not confuse the two; she is quite worldly about crime and sex, for example. The ultimate lawfulness of her novels is quiet. But it is only in her confidence of the law that she borrows so unabashedly from the vitality of lawlessness. Mere abstinence from vitality is not enough, her novels suggest; that only gets you into car wrecks. It is only by the dangerous art of writing novels that novelizing may be put right.

Innocence and Experience: Peter Terzian’s interview with Alan Hollinghurst

“It would be nice if more blogs would link to my Alice Munro profile,” said Peter, wistfully. While I am obliging, here’s a more permanent link to Peter’s Alan Hollinghurst interview, too.

And while I’m in sales mode, here in one place are all five of the recent numbered posts in which I come perilously close to arguing that people who voted for Bush probably cannot read. The hope is that the posts make more sense if read in order, rather than backwards. But perhaps not. I share with Scott McLemee doubts about the “use” of writing, doubts reminiscent of those I felt three years and two months ago.

The Un-Jefferson

A review by Caleb Crain of Alexander Hamilton: Writings, ed. Joanne B. Freeman. Originally published in the New York Times Book Review, 11 November 2001.

Thomas Jefferson has long been the founding father most popular with American writers. He had writerly faults: a tinkerer’s curiosity about nearly everything, the inability to resist a fancy French idea and the misconception that these were intellectual strengths. He also believed that scholarly ladies and gentlemen could live happily and virtuously on farms distant from the corrupting metropolis—an ideal that lurks behind such ventures as Brook Farm, Walden and the MacDowell Colony. Even today, writers under Jefferson’s influence augustly depart for the mountains or the plains, vowing to remain pure, free from the vulgarizing marketplace.

They ought to reconsider. Under the influence of Jefferson’s enemy Alexander Hamilton, they would stay and figure out how to be paid better. Hamilton thought about money in a way that Jefferson, who lived beyond his means, did not and perhaps could not. And Hamilton’s value as a writerly model extends beyond finance. Even more so than Jefferson, who inherited slaves and land, Hamilton made his way in the world by his pen. “Hamilton is really a colossus to the anti-republican party,” Jefferson conceded on one of the many occasions when Hamilton had defeated him in the court of public opinion with a hail of pamphlets. “Without numbers, he is an host within himself.” For a dozen years, Hamilton stood in relation to the presidency roughly as Laurence Tribe or Richard Posner stands to some of the justices in the Supreme Court today: without holding the office, he was kind enough to help do the thinking necessary for those who did. (Washington was grateful; Adams resentful.) His power consisted merely in his words.

“The manner in which a thing is done,” Hamilton once advised a future mayor of New York, “has more influence than is commonly imagined.” Today he is remembered for his deeds—arguing New York into ratifying the Constitution, financing the Revolutionary War debt, founding the first national bank—as the patron saint of capitalism and social orderliness. But it is his manner, at once methodical and dashing, that comes to life in “Alexander Hamilton,” the generous and intelligent anthology of his writings, edited by Joanne B. Freeman, an assistant professor of history at Yale University. The reader finds him, at 17, calculating for his employer whether a shipment of undernourished mules is worth the cost of the pasturage needed to recuperate them, and describing pious humility in a hurricane with boyish glee: “Where now, oh! vile worm, is all thy boasted fortitude and resolution?”

Hamilton was born in the West Indies in 1755 to a couple who were socially respectable but not legally married. When he was 10, his father went bankrupt and abandoned the family. “It was his fault to have had too much pride and too large a portion of indolence,” Hamilton later explained. Three years later, Hamilton’s mother died, and the boy became a clerk in a trading house. For the rest of his life, he would consider capitalism a safety net and honor something to be prickly about. The account of the hurricane was in effect his college application essay; it induced friends to send him to school in America. While an undergraduate at Kings College (later Columbia University), he argued that the colonies ought to be free because enslavement to Britain “relaxes the sinews of industry, clips the wings of commerce, and introduces misery and indigence in every shape.” (Note the distinctive fusion of insurrection and fiscal prudence.) When the Revolution came, he enlisted.

Among the disconcerting facts about Hamilton, when considered in a gallery of founders, is his beauty. Exhibit A would have to be the delicate, precise, off-center sweep of his right arm in John Trumbull’s 1792 full-length portrait. (Exhibit B would have to be his calves in the same painting.) In uniform he must have been irresistible. Here he is, an aide-de-camp to General Washington, explaining England’s tyranny to the woman he was courting: “She is an obstinate old dame, and seems determined to ruin her whole family, rather than to let Miss America go on flirting it with her new lovers, with whom, as giddy young girls often do, she eloped in contempt of her mother’s authority.” Of the difference between the sexes, he told his future bride: “We are full of vices. They are full of weaknesses.” Perhaps she ought to have listened, but one can see why she wouldn’t have.

Hamilton was passionate. When Washington rebuked him for being late, he resigned from the general’s staff on the spot. On behalf of a friend who hoped to raise black troops in South Carolina, he wrote, “An essential part of the plan is to give them their freedom with their muskets.” When he thought he’d been slandered by a minister, he unnerved him with the reassurance that “the good sense of the present times has happily found out, that to prove your own innocence, or the malice of an accuser, the worst method, you can take, is to run him through the body, or shoot him through the head.”

Hamilton preferred to solicit or alarm, rather than take detours into French-style flights of theory. As he once explained to a British diplomat he needed to win over, “We think in English.” (Note the strategic “we.”) It was a matter of political philosophy as well as style. “The safest reliance of every government is on men’s interests,” he once wrote. At the Constitutional Convention in 1787, he deliberately outraged the pieties of the day by asserting that self-interest was good. So was force. To survive, the federal government would have to involve the people’s passions, including their avarice and ambition. And the surest way to command passions was to have the power to satisfy them. The new government should be devised in such a way that many people would form the habit of considering it advantageous to support and dangerous to oppose.

To see this government into being, Hamilton wrote most of “The Federalist.” The fastidious Hamilton and the ardent Hamilton forged a style whose power lay in the intimacy with which he knew what he was talking about. “I will not amuse you with an appearance of deliberation, when I have decided,” Hamilton wrote. Instead, the thoroughness of the detail and closeness of the reasoning amounted to force majeure. In 1789 Washington named Hamilton secretary of the treasury, and he rapidly drew up blueprints for financing the debt, erecting a national bank and setting tariffs, taxes and subsidies to encourage manufacturing. Anyone who doesn’t know what a bond is or how a bank works can learn painlessly from Hamilton’s reports. A New York merchant of the 1790s would have appreciated the well-organized number-crunching. But as a slaveholding agrarian landowner, Jefferson was baffled, or affected to be baffled, by the financial terms of art. “Hamilton’s financial system . . . had two objects,” Jefferson wrote later. “1st as a puzzle, to exclude popular understanding & inquiry. 2dly, as a machine for the corruption of the legislature.” Hamilton managed to give America the institutions of capitalism despite him.

Hamilton returned to private law practice in 1795. Two years later, he was accused of financial impropriety with a man named James Reynolds. To deny the charge, he was forced to explain, “My real crime is an amorous connection with his wife.” The confession had dignity, and even some humor, as in the account of an early meeting with Maria Reynolds: “Some conversation ensued from which it was quickly apparent that other than pecuniary consolation would be acceptable.” But the public learned that he had been taken in a badger game and had paid off his blackmailer.

Nonetheless he returned to government in 1798, through maneuvers that put him in command of a new peacetime Army. But in December 1799, Washington died. For two decades, Hamilton had fought his enemies by winning the phlegmatic general to his side of every argument. (In the tug of war over Washington’s Farewell Address, for example, Madison supplied the indigestibly marmoreal prose at the front and the tail, and Hamilton contributed the middle, where the ideas are.) Without Washington, Hamilton fought his enemies directly—and wildly. He was, he admitted to friends, “in a very belligerent humor,” initiating libel suits and hinting at duels, as pettish with testosterone poisoning as one taxi driver cut off by another at the end of a long shift. The late Hamilton threatened too much and flirted too little. In 1800 he assassinated the character of John Adams, his own party’s candidate, while Adams’s re-election was still pending. And in 1804 his disparagements of Aaron Burr provoked a challenge to a duel, in which Hamilton was killed. He enjoyed writing what he really thought more, perhaps, than a politician should.

The Undertaker’s Art, Exhumed

A review by Caleb Crain of The Go-Between by L. P. Hartley. Originally published in The Nation, 274.16 (29 April 2002): 29-31.

“It’s a great mistake not to feel pleased when you have the chance,” a rich, disfigured spinster advises a frail, well-mannered boy in The Shrimp and the Anemone, the first novel in L. P. Hartley’s Eustace and Hilda trilogy. The boy has won a hand of piquet, and the spinster has noticed that he has difficulty enjoying triumphs. Miss Fothergill (like many of Hartley’s characters, the spinster has an outlandishly characteristic name) foresees that her ten-year-old friend may not have ahead of him many occasions of pleasure to waste.

Rather than disobey Miss Fothergill, I will readily admit that I have felt pleased while reading Eustace and Hilda and very pleased while reading Hartley’s masterpiece, The Go-Between. It was a spice to my pleasure that even though the Eustace and Hilda trilogy was first published between 1944 and 1947, and The Go-Between in 1953, I had not even heard of L. P. Hartley before the novels were reissued recently as New York Review Books Classics.

I blame my ignorance on an academic education. Hartley is not the sort of author discussed in schools. He is in no way postmodern. He is modern only in his frugality with sentiment and his somewhat sheepish awareness that the ideas of Marx and Freud are abroad in the world, rendering it slightly more tricky than it used to be to write unselfconsciously about unathletic middle-class English boys who have been led by their fantasies and spontaneously refined tastes into the country homes of the aristocracy. If Hartley belongs to any academic canon, it would be to the gay novel, whose true history must remain unwritten until the theorists have been driven from the temple and pleasure-loving empiricists loosed upon the literary critical world. Hartley belongs with Denton Welch and J. R. Ackerley. The three have different strengths: Welch is sensuous, Ackerley is funny, and Hartley is a delicate observer of social machinery. But all are sly and precise writers, challenged by a subject inconvenient for novelizing, the emotional life of gay men.

They met the challenge with unassuming resourcefulness, writing what might be called fairy tales. Hans Christian Andersen was their pioneer, as the first modern writer to discover that emotions considered freakish and repellent in adults could win sympathy when expressed by animals and children. Andersen also discovered that a plain style was the best disguise for this kind of trickery and that the disgust of even the most intolerant readers could be charmed away by an invitation to learn how queer characters came to be the way they are. Thus in Ackerley, Welch, and Hartley one finds gentle transpositions—from human to animal, from adulthood to childhood, from health to illness—disarmingly exact language, and just-so stories about strange desires. Once upon a time, a man fell in love with another man’s dog. Once upon a time, a boy on a bicycle was hit by a car and could not find pleasure again except in broken things. Once upon a time, a boy was made to have tea with a crooked-faced, dying woman and to his surprise he liked her. The effect is a mood of tenderness; the stories are sweet and a bit mournful.

Hartley loved Hans Christian Andersen, but it was another writer who provided him with a defense of gentle transposition as a novelistic practice: Nathaniel Hawthorne, whose daguerreotype by Mathew Brady is the disconcertingly austere frontispiece of The Novelist’s Responsibility, Hartley’s 1967 collection of literary criticism. In the preface to The Blithedale Romance, Hawthorne had described the novelist’s need for a “Faery Land, so like the real world, that in a suitable remoteness one cannot well tell the difference, but with an atmosphere of strange enchantment, beheld through which the inhabitants have a propriety of their own.” Hartley quoted the passage with approval.

Lost time was Hartley’s fairy land. “The past is a foreign country: they do things differently there,” he wrote in the first, and most famous, sentence of The Go-Between. (He may have been echoing the first sentence of A Sentimental Journey, where Laurence Sterne had written that “They order . . . this matter better in France,” which was Sterne’s fairy land.) The remembered world could be as rich and vivid as the real one and yet would always stand at a remove. One could visit but not live there. As Hawthorne had explained in his introduction to The Scarlet Letter, in another passage quoted by Hartley, there is something romantic about “the attempt to connect a bygone time with the very present which is flitting away from us.”

The Go-Between opens with such an attempt. Leo Colston, a bachelor librarian in his sixties, has begun to sort his papers—apparently in preparation for his death, since he seems to have nothing else to look forward to. He starts by opening “a rather battered red cardboard collar-box.” It is full of childhood treasures: “two dry, empty sea-urchins; two rusty magnets, a large one and a small one, which had almost lost their magnetism; some negatives rolled up in a tight coil; some stumps of sealing-wax; a small combination lock with three rows of letters; a twist of very fine whipcord; and one or two ambiguous objects, pieces of things, of which the use was not at once apparent: I could not even tell what they had belonged to.” At the bottom of the box is a diary, and at first Colston cannot remember what the diary contains. Then he remembers why he does not want to remember it.

My secret—the explanation of me—lay there. I take myself much too seriously, of course. What does it matter to anyone what I was like, then or now? But every man is important to himself at one time or another; my problem had been to reduce the importance, and spread it out as thinly as I could over half a century. Thanks to my interment policy, I had come to terms with life . . .

A secret naturally arouses the reader’s curiosity, but Colston’s attitude toward his secret is a further provocation. The events in the diary, he implies, were both inconsequential and traumatic. He preferred a lifelong effort of forgetting over any attempt to come to terms; only by burying “the explanation of me” could he find a way to live. “Was it true . . . that my best energies had been given to the undertaker’s art? If it was, what did it matter?” An unacknowledged wound, a buried definition of the self . . . the penumbra around Colston’s secret is typical of a closeted homosexual, and yet what follows is neither a same-sex love story nor a coming-out narrative.

In the course of the novel, Colston does discover the facts of life and has at least an intuition of his oblique relation to them, but in The Go-Between Hartley was most intensely concerned with his hero’s first experiences of sin and grace. This second, more surprising parallel with Hawthorne is the crucial one. Hartley once wrote that “Hawthorne thought that human nature was good, but was convinced in his heart that it was evil.” Hartley was in a similar predicament.

Who would have guessed that the Edwardian sexual awakening of a delicate, precociously snobbish thirteen-year-old would have anything in common with the Puritan crimes and penitence that fascinated Hawthorne? Yet for Hartley, as for Hawthorne, the awareness of sin is a vital stage of education and a condition of maturity. At first young Leo Colston resists it. “It was like a cricket match played in a drizzle, where everyone had an excuse—and what a dull excuse!—for playing badly.” His moral code at the outset is the pagan one of schoolboys; he believes in curses and spells, and in triumphing over enemies by any means except adult intervention. But at the invitation of a classmate, Leo spends his summer vacation at Brandham Hall, a well-appointed Georgian mansion in Norfolk, and there his world is softened by love, in the person of the classmate’s older sister, Marian. She is beautiful, musical, and headstrong. Leo brings her messages from her fiancé, Hugh Winlove, Lord Trimingham, and billets from her lover, a local farmer named Ted Burgess. With her love comes sin—not because sexuality is evil, though it may be, but because after he has felt its touch, Leo can no longer think of the people he struggles with as enemies. The lovers make a terrible use of him, but he cares most about those who use him worst. In their triangle, he is incapable of taking a side; he is, after all, their go-between.

If you map Hartley onto Hawthorne too methodically, you arrive at the odd conclusion that Leo is part Chillingworth, part Pearl. This is not quite as silly as it sounds. Like them, Leo is jealous of the lovers he observes and trapped in their orbit; nothing is lost on him, and he is unable to make emotional sense of what he knows. (His apprehension without comprehension is a boon for the reader, who through him sees the social fabric in fine focus.) But unlike Hawthorne’s characters, Leo is a boy starting his adolescence, and that process, which he fears will defeat him, is at the heart of The Go-Between. Leo knows that the end of his childhood ought to be “like a death, but with a resurrection in prospect.” His resurrection, however, is in doubt.

Like most fairy tales, the tale of how Leo becomes a fairy will not be fully credible to worldly readers. The oedipal struggle will seem too bald, the catastrophe too absolute. Hartley was aware of this shortcoming. He knew that he found sexuality more awful than other people did, and in The Novelist’s Responsibility, he wrote about his attempt to compensate for it while writing the Eustace and Hilda trilogy: “I remember telling a woman novelist, a friend of mine, about a story I was writing, and I said, perhaps with too much awe in my voice, ‘Hilda is going to be seduced’, and I inferred that this would be a tragedy. I shall never forget how my friend laughed. She laughed and laughed and could not stop: and I decided that my heroine must be not only seduced, but paralysed into the bargain, if she was to expect any sympathy from the public.”

Hartley’s friend would probably have laughed at Hilda’s paralysis, too. In the trilogy, Hilda is the older, stronger-willed sister of the exquisitely polite Eustace, who grows up in her shadow, a little too fond of its darkness. Their symbiosis in the first volume is brilliant and chilling, but her paralysis in the third is unconvincing. It is implausible that the demise of a love affair would literally immobilize an adult woman. Fortunately, it happens off-stage, and a few of the book’s characters do wonder if she is malingering.

However, the lack of perspective may be inextricable from Hartley’s gifts. His writing is so mournful and sweet because he is willing to consider seriously terrors that only children ought to have, and perhaps only a man who never quite figured manhood out could still consider them that way. The second and third volumes of Eustace and Hilda are as elegant as the first, but not as satisfying, because Eustace’s life becomes too vicarious to hold the reader’s attention—and because the characters have grown up. Hartley’s understanding of children is sophisticated, but he seems to have imagined adults as emotionally limited versions of them—as children who have become skilled at not thinking unpleasant thoughts. As a writer, his best moments are in describing terror at age thirteen and the realization at sixty-odd that one need not have been so terrified after all. In The Go-Between, artfully, the intervening years are compressed into the act of recollection, and the novel’s structure fits the novelist’s talents like a glove.