Aldous Huxley and Neuropsychiatry

Below is a transcript of a paper I delivered in April, 2017, at the Sixth International Aldous Huxley Symposium in Almería, Spain. Held at the University of Almería, the symposium was organised by Bernfried Nugel and the Aldous Huxley Society. There I met and became friends with a number of the scholars I cite in the paper, including Sam Deese and Jerome Meckier. Running for three days, it was a wonderful event, and I am grateful to the Aldous Huxley Society for inviting me to attend.

Please do not use or reproduce this work without my express consent or permission. If you wish to cite this work, please specify that it was originally delivered as a conference paper and give details of this website. I also prepared a presentation, which is accessible here.

I have decided to speak a little about Huxley’s relationship to modern neuropsychiatry—not an easy subject, since Huxley was never directly engaged with brain science as it emerged in the ’50s and ’60s. Nevertheless, some modern scholarship has drawn the connection.

Most directly, Nicholas Langlitz has sensed the way in which Huxley’s interest in the psychedelic experience relates to neuroscience in an article subtitled “Fieldwork in Neuro- and Perennial Philosophy.” Langlitz compares Huxley’s perennial philosophy—a philosophy of religion he describes as a “major form of philosophical thought”—to the neurophilosophy of Thomas Metzinger, who offers a more “modern philosophy of mind.”1 Salient differences between the two thinkers, however, derive from the histories from which their ideas spring. For one, Metzinger’s neurophilosophy is a product of his “participation in the counterculture of 1970s Frankfurt,” as Langlitz notes. Huxley, by contrast, never witnessed the efflorescence of the counterculture after World War II. (Of course, some have suggested that the Californian counterculture was partly inaugurated by Huxley, so profound was his influence on the founders of the Esalen Institute; on this, see, for instance, Jeffrey Kripal’s book, Esalen.2) But Huxley would likely have had mixed, even disdainful, feelings about both the Californian and Frankfurt countercultures. Huxley preferred that drugs be used intelligently and safely, expressing misgivings about mescaline in, among other essays and writings, Heaven and Hell.3 And while Huxley provisionally approved of Timothy Leary’s early experiments (he participated in Leary’s Harvard psilocybin project as subject no. 11 in November 1960, and attended, with Leary, the 14th International Congress of Applied Psychology in Copenhagen in August 1961; see figure 1), this approval did not seem to last long—and probably would not have lasted any longer had he lived beyond November 1963.4

Figure 1. Timothy Leary (far left), with Aldous and Laura Huxley (née Archera) (centre), at the 14th International Congress of Applied Psychology, Copenhagen, August 1961. Original photograph from the Leary Archives, New York Public Library.

Huxley would surely have rejected the chaos of Leary’s 1963 Millbrook mansion, notwithstanding (or perhaps because of) its abstract resemblance to Crome, the country house at the centre of his eponymous novel Crome Yellow (1921). Huxley had modelled “Crome” on Garsington Manor, Lady Ottoline Morrell’s house near Oxford (see figure 2).

Figure 2. Aldous Huxley, Dorothy Eugénie Brett, Lady Ottoline Morrell, and Philip Edward Morrell at Garsington, sometime in the 1920s. (Photographer unknown. Photograph courtesy the National Portrait Gallery.)

But what is the relevance of the counterculture in distinguishing Huxley from Metzinger? Langlitz notes that since Metzinger’s philosophy of the brain emerges from the Frankfurt counterculture after the Second World War, it is outwardly antithetical to “biologism” and eugenics, both of which he associates—as do we all—“with Nazi ideology.”5 And here the lines are drawn, because Aldous’s neurophilosophy, if it may be called that, cannot be described as antithetical to biologism. His ideas are so profoundly concerned with improving the body—with cultivating the “non-verbal humanities,” with educating the amphibian’s senses and emotions—that they must come to grips with biology, which is to say with both human potential and human limitations. Sexual selection, a subset of natural selection, must enter Huxley’s equation. Hence Huxley’s abiding interest in biological determinism: bodily functions and their failures underlie all of Huxley’s thought.

But then, Huxley rarely describes sexual fitness. In his early satires, he is more concerned with vomiting and flatulence. These bodily responses determine his characters’ personalities—and even their religiosity—which makes it all the more ironic that Huxley rails against Swift’s satires, including the poem “The Lady’s Dressing Room,” for being so “unclean and unsavoury.”6 In After the Fireworks (1930), Miles Fanning considers the extent to which, after drinking excessively and vomiting, one becomes more pious. “Christians,” he thinks, paraphrasing Pascal, “ought to live like sick men; conversely, sick men can hardly escape being Christians” (444). In Crome Yellow, Denis Stone realises that all his life he has been using the word “carminative,” which refers to an agent that relieves flatulence, in place of a word that describes something that engenders profound insights (223–24). As with the vomiting example, Huxley’s comedic and ironic suggestion here is that something that relieves the body, that can cause one to break wind, can also be profound, and can enable one to penetrate mental impasses. Fanning’s and Stone’s mistakes express, for Huxley, a greater truth: the body’s movements and warblings are the sources of new insights. The brain, it seems, is thoroughly embodied—and absurdly so.

When it came to eugenics, Julian was more attentive than Aldous. Julian sought to reposition eugenics as “an applied form of human genetics in the 1950s.”7 But by the 1970s, the subject had become anathema to mainstream scientists. With eugenics denounced, the British Eugenics Society’s journal, the Eugenics Review, was rebranded as the Journal of Biosocial Science, and it is still known by that title today. (See figure 3, below.)8 But Julian’s interest continued, his ideas reaching an apex when he and Ernst Mayr, two venerable biologists, co-authored a paper on the evolutionary persistence of schizophrenia with two younger and then less renowned psychiatrists, Humphry Osmond and Abram Hoffer. The paper was titled “Schizophrenia as a Genetic Morphism.”9 Aldous, of course, had introduced his brother to Osmond.10

Figure 3. The Eugenics Review (1965) and the Journal of Biosocial Science (1969). The former was rebranded as the latter in 1968 to avoid negative associations.

The story of that paper is fascinating. It influenced the Nobel Prize-winning chemist Linus Pauling, who adopted part of the molecular component of the quartet’s research into his own treatment model, “orthomolecular psychiatry.”11 Although intuitively ingenious, this method has been comprehensively rejected by mainstream chemists and scientists.12

Julian had long been a proponent of “racial improvement,” writing in the 1920s of its importance for “education, for health, for self-development in adult life, and for the steadily increasing amount of leisure which will be available in a planned society.”13 Living in France during this part of the ’20s, however, Aldous had no such “formal connections with the Eugenics Society,” as David Bradshaw notes.14 But the connection between Osmond and Julian develops a different picture: Aldous’s experiments with mescaline, and with LSD through Osmond, reflected his keen interest in an evolutionary theory of mental illness, and his handing over of the scientific reins to Julian reflects a delegation of that interest.

And Julian and Aldous had always collaborated on what might advisedly be called eugenical projects. Exercised by the intellectual debates of the time, Aldous was curious about the topic in the ’20s, his interest “no less fervid than that of his fellow-writers,” as Bradshaw notes. Proper Studies includes a “Note on Eugenics” that cites Leonard Darwin’s The Need for Eugenic Reform (1926), and among Aldous’s friends at the time was the psychiatrist Carlos Blacker, later General Secretary of the Eugenics Society.15 In the ’40s, Aldous introduced Julian to the German chirologist (palm reader) Charlotte Wolff and arranged for her to compare the handprints of animals caged in the London Zoo, whose gardens Julian managed, with those of the people Wolff called “mental defectives.”16

Sam Deese masterfully summarises Julian and Aldous’s shared view about evolutionary biology in his 2014 book We Are Amphibians. Setting the pair’s common view against that of the less mystical D. H. Lawrence, Deese writes that

Even as they arrived at very different conclusions, Julian and Aldous Huxley each sought to articulate a worldview that was religious in its depth and power and yet fully compatible with both the long-standing methods of scientific research and the most current discoveries of evolutionary biology.17

But their methods differed as much as their conclusions. Trained in zoology, Julian exemplified the new empiricism, while Aldous’s scientific commentary was always political. He opens “Madness, Badness, Sadness” by recalling that “Goering and Hitler displayed an almost maudlin concern for the welfare of animals.”18 The tyrants’ ironic antipathy towards human animals revealed to him the “state of chronic and almost systematic inconsistency” of “the world,” a defect that, for him, even science—and even eugenics—could never mend.19 Nearly blinded at seventeen, unable to use microscopes, his plan for a medical career thwarted, Aldous set about writing cautionary tales and romans à clef—and, later, works in the same genre disguised as essays: broadsides against modern biopolitical life, and barbarous, even vindictive lamentations on the turpitudes to which humans have stooped.20

For whatever optimism we discover in his general vision of human potentialities, Huxley expresses as much pessimism in the details of his writings, everywhere pointing out our malice and wrongness. Jerome Meckier’s portrait of Huxley’s transition from “poet to mystic” is but one confirmation that Huxley never abandoned his youthful cynicism so much as came to refine it. If he experienced an “artistic and spiritual growth,” it was “from a parodic formative intelligence into a volatile blend of satirist and sage”—although even Huxley’s sagacious traits are underlain by his dark, satirical odium. And as he grew older, Huxley’s views ossified, his ludic playfulness transmogrifying, at times, into a mild cantankerousness, so that, in both the satirical and mystical stages, this negative impulse is there. Meckier describes it as a happy willingness to “furnish an explanatory hypothesis for the nature of things provided it was negative, i.e. meaninglessness as, paradoxically, the meaning of modern life.”21 And this same cold pessimism underscored his attitude to the mind and brain’s evolution, together with our potential to understand it.

Within Huxley’s own magnificent philosophia perennis thus lie the traces of its malevolent other, a philosophy of depravity. It is especially marked in his treatment of scientific progress. Again in “Madness, Badness, Sadness,” he mourns the fact that surgical interventions supplanted “Animal magnetism and hypnotism,” two of Huxley’s favourite alternative therapies.

It had all happened before, of course. Cutting holes in the skull was an immemorially ancient form of psychiatry. So was castration, as a cure for epilepsy. Continuing this grand old tradition, the Victorian doctors removed the ovaries of their hysterical patients and treated neurosis in young girls by the gruesome operation known to ethnologists as “female circumcision.”22

Huxley’s views strike those familiar with biopolitics, both before and after its founder, Michel Foucault, as very modern: even today, debate continues in biopolitical discourse about the ethics of female genital cutting.23 Huxley does not name Africa as the locus of his critique but fulminates against Victorian England; he resented much of the Victorian period, railing against the repressed sexualities of the Victorians in the essay “Battle of the Sexes,” and mocking the deceptive Romantic poets in his satires (poets whom, according to Meckier, he could not quite emulate), including by hailing the modernist zeitgeist in his essay “The New Romanticism.”24 The point, however, is that, for Huxley, circumcision—and castration—is a gruesome substitute for our own ignorance about the human psyche, and its elaboration as a medical procedure is no better than what he called, in a brilliant 1925 essay, Freud’s “hocus pocus.”25

Huxley resists these physicalist aetiologies, and these physicalist treatments, because they take too little account of the mind and overlook many aspects of the psyche. Where psychology has been advanced, similarly, it takes too little account of the body: “The basic Freudian hypothesis is an environmental determinism that ignores heredity, an almost naked psychology that comes very near to ignoring the physical correlates of mental activity” (LAS, 97). Thus, ovaries have as little to do with hysteria as men’s desires for their mothers have to do with their neuroses. Huxley advocates a middle approach—a science of the “mind-body”—and urges us to become more agnostic about the relations of mind and matter, to be neither physicalists nor psychologists exclusively. In Literature and Science, he claims that

Men and women are much more than the locus of conscious and unconscious responses to an environment. They are also unique, inherited patterns (within a unique, inherited anatomy) of biochemical events; and these patterns of bodily shape and cellular dynamics are in some way related to the patterns of an individual’s mental activity. Precisely how they are related we do not know, for we have as yet no satisfactory hypothesis to account for the influence of matter upon mind or mind upon matter (LAS, 82).

Seemingly technical, Huxley’s expression “cellular dynamics” is a curious reference to neuroscience. Published in 1963, shortly before he died, Literature and Science might reflect Aldous’s attention to the work of his half-brother, Andrew Fielding Huxley, who had described the electrical dynamics of nerve cells in a mathematical model, the “Hodgkin–Huxley model,” in 1952, and who won the Nobel Prize for it in 1963 with his co-author, Alan Hodgkin.26 I admit I haven’t reviewed Grover Smith’s edited Letters recently—Huxley might have congratulated Sir Andrew—but it seems possible Huxley used the term “cellular dynamics” with his half-brother’s work in mind. But then, Huxley must have thought too little of it to regard it as a “satisfactory hypothesis” in accounting for the influence of matter upon the mind.

It is typical for Huxley to use the best-sounding nomenclature to introduce a topic only to then characterise it as altogether inarticulable—as inexpressible in language, mathematics, or any other form. He had done so when he disavowed Euclidean geometry in Do What You Will, announcing ironically that “God is no longer bound… to obey [Euclid’s] decrees promulgated… in 300 BC.” And even while embracing Einstein and Riemann’s new geometric models of nature as “among the latest products of the human spirit,” Huxley would go on to question, and to mock, geometry’s fundamental truthfulness, its potential to reveal any real understanding.

An exceptional reification amid all this occurs in The Doors of Perception, where Huxley uses the expression “satisfactory hypothesis” to venerate Bergson’s model of “the brain… as a utilitarian device for limiting… the enormous possible world of consciousness… ”27 This is about as far as Huxley ever goes in the way of stamping his imprimatur on a neuroscientific theory of the mind. Enticing as it is, the assumption that we underuse our brains has long been discredited; Bergson, Broad, and William James likely adopted the notion, consciously or not, from the discourses of Franz Gall and Johann Spurzheim.28

Huxley likely would have appreciated my disavowal, even of Bergson, as, for him, ignorance was not so much blissful as truthful. “Total awareness,” he wrote in Knowledge and Understanding, “starts, in a word, with the realization of my ignorance and my impotence. How do electro-chemical events in my brain turn into the perception of a quartet by Haydn or a thought, let us say, of Joan of Arc? I haven’t the faintest idea — nor has anyone else.”

Huxley’s repeated denunciation of the Science of Man—which, like language, is too politicised and too technological—together with his advocacy of alternative therapies such as hypnotism and the Bates method, and his belief in animal magnetism, ESP, psi, and other parapsychological notions, constitutes his framework for developing the mind and brain, for furthering their evolution. His views presage the contemporary idea of neuroplasticity: the principle that novel processes that initiate new transmissions in the brain come to be entrenched as new capabilities. Modern neuropsychiatry accepts that just as the brain can be damaged or injured, so can its functions be improved, its fitness enhanced; the field has partially come to embrace what Julian Huxley called transhumanism in 1951.29 But if Julian was quick to embrace population control and eugenics, Aldous suspected more was to be discovered through personal improvement. Educating the amphibian to deal with technological pressures could improve the future of the species. And while he burlesques neo-Lamarckians in his early satires, including in Antic Hay, where he parodies one biologist who has “found a way of making acquired characteristics . . . heritable,”30 many of Huxley’s ideas are compatible with the soft Lamarckism that is today, in spite of Darwinism’s long and continuing reign, embraced by many geneticists.31

Diagnosis at a Distance

At the turn of the twentieth century, Sigmund Freud and his otolaryngologist friend Wilhelm Fliess parted ways in bitter disagreement. The men’s friendship had developed against a lethiferous background of cocaine use and abuse, and it ended by that same route, the pair discovering they had come to differ about cocaine’s medical usefulness. But before the break, Freud often expressed his deep admiration for Fliess (and not only because he had originally agreed with his colleague that cocaine was a wonderful medicine, particularly for nose and throat complaints). “My respect for your diagnostic acumen has only increased further,” wrote Freud in a letter to Fliess on June 28, 1892, after Fliess had accurately diagnosed an illness in one of Freud’s relatives. But what was so impressive to Freud was that Fliess had reached his diagnosis from afar, without even having met the patient. It was an exemplary case of what has come to be known as “diagnosis at a distance” or “long-distance diagnosis,” and a skill that Freud greatly admired in his friend, a kind of parascient force of perception that Freud himself wished to attain.32

By the mid-1890s, however, Freud had begun to sense that he had developed not just an admiring attachment to his colleague, but a professional dependency. It was a reliance that went further than the regular solicitation of cocaine prescriptions from his friend, who was authorised to prescribe the drug. With some insight, Freud identified that he had come to rely on his colleague’s ability to accurately diagnose his and his family members’ illnesses. His adulation would not last. In his “technique papers,” beginning in about 1912 with “The Dynamics of Transference,” Freud began to eschew the notion, at least implicitly, that a medical professional could adequately establish a “clinical picture” of their patient’s illness without significant analysis. As Freud would note, the patient should be seen in situ, and often for a minimum of some weeks, before any assessment could be made of their illness. And sometimes even more preparation would be needed. Difficult cases required a “trial analysis to determine the patient’s analyzability,” a preliminary meeting in which the patient’s potential for successful analysis could be adjudged.33 Perhaps one of the reasons that Freud came to regard the consultation as so important was that he felt it could allow the clinician to discriminate between what he recognised as the two fundamental kinds of neurosis. On the one hand, there were what Freud called the “actual neuroses,” those disorders that had a physiological aetiology. On the other hand, there were the “psychogenic neuroses,” those disorders that derived from early childhood trauma, such as the hysterical, obsessional, or anxiety-centred disorders.34

As Steven J. Ellman has argued, Freud’s distinction between actual and psychogenic neuroses probably stemmed from his inability to observe “transference” reactions in those whose neuroses were “actual” or biological. These latter neuroses did not require psychoanalysis at all—which is to say, they did not require the retrieval of pathogenic memories—but instead needed physical treatment. The patient might need to exercise more, for instance; or, as was Freud’s more characteristic prescription, they might need to engage in more sexual activity, to release or discharge their libidinal energy or cathexes more often. The “actual” neuroses, after all, were not generally responsive to the psychoanalytic method, located, as they were, in the body. By contrast, the psychogenic neuroses were distinguished—and indeed could be diagnosed—by the appearance of transferences in the patient. These transference reactions are, in Freud’s formulation, the acted-out simulacra of the same original impulses that gave rise to the neurosis ab initio. But these impulses would now find a new object, a new target, in the figure of the psychoanalyst. Instead of the figure who originally engendered them through trauma, it was the analyst who would now be treated as though they had authored the originary trauma. As such, these transferences could quite sharply bring into view the shape and severity of the pathogenic memory.

Freud’s description of the transferences as early as 1905 augurs the significant role they would later play in his psychoanalytic framework. “What are the transferences?” he asks in “Fragment of an Analysis of a Case of Hysteria.” His answer is instructive:

They are new editions or facsimiles of the impulses and phantasies which are aroused and made conscious during the progress of the analysis; but they have this peculiarity, which is characteristic for their species, that they replace some earlier person by the person of the physician. To put it another way: a whole series of psychological experiences are revived, not as belonging to the past, but as applying to the person of the physician at the present moment. Some of these transferences have a content which differs from that of their model in no respect whatever except for the substitution. These then—to keep to the same metaphor—are merely new impressions or reprints.35

Distinct from resistances, transference reactions are reactive behaviours in the patient that, though revealing the shape of the pathology, may yet mark an obstacle to be overcome. While the transferences point to the terrain of the pathogenic material, they may still be conceived of as impediments in analysis, blocks that stand in the way of recovering the true nature of the pathogenic memory. For this reason, Freud came to regard transference as a kind of resistance in “The Dynamics of Transference.” But by their nature, the transferences were more than simple resistances, more than defences against the treatment itself, and more than frustrations born of disinterest or boredom. For in the way that they revived the patient’s past psychological experiences, the transferences pointed rather ostensibly to the general source of the patient’s neurosis, and constituted a trace of the distressing psychogenic memory. It is because of their character, then, that the transferences can be seen (as Merton Gill has seen them) as “the only vehicle of the analytic situation.” For the point of psychoanalysis (depending on one’s theoretical orientation) is less to actually retrieve or recollect the patient’s childhood memories than to analyse the dynamics of the transference reaction itself, to master its mechanisms, and, in so doing, to control the power of those memories by proxy.36

Freud’s development of a theory of transference, as well as his earlier distinction between the actual and psychogenic neuroses, is likely to have been partly engendered by his feud with Fliess. Indeed, even if the dynamics and ramifications of the Fliess–Freud relationship are in many ways still uncharted, one of the things on which historians of psychoanalysis seem increasingly to agree is that Freud’s dissociation from Fliess was a catalyst for the neurologist’s transition from biology-centred medicine to psychology-based psychoanalysis. When Freud and his colleague went their separate ways, it was a symbolic break: Freud’s rejection of his friend represented a rejection of his physicalist methods, just as his rejection of Breuer before him had been a rejection of Breuer’s methods (that is, of hypnotherapy and hypnosis). Perhaps more interestingly, though, it would be on the day after his own father’s funeral, on the 26th of October, 1896, that Freud would pronounce in a letter to Fliess, in dramatic passive voice, that he had taken the decision to kick the cocaine habit. “Incidentally, the cocaine brush has been completely put aside,” he wrote to his colleague summarily. It was this break not just from the drug but from his colleague—from the man who had shown him how to diagnose from a distance—that allowed Freud to apprehend the imaginative vision of a “new science” of psychoanalysis, to “imagine,” as Justin Clemens has put it, “the possibility of the isolation of language itself as a force of transformation.” If, from this moment onward, Freud could evade “the problems of treating human psychology as if it were reducible to physiology,” it was because he had constituted, in himself, the perfect litmus test for his intuition that the mind could overpower matter.37 His ability to abstain from cocaine would be a proof of his theory: that many of the neuroses were not biological but psychogenic.38

Now, more than a century after the publication of Freud’s first psychoanalytic works—whether one marks his Studies on Hysteria (1895) or The Interpretation of Dreams (1900) as the inaugurating text—psychiatrists and psychoanalysts alike tend to caution against the notion of diagnosis at a distance. However, a number of psychiatrists remain tempted to throw these cautions to the wind—together with the professional stricture, known as the “Goldwater rule,” which aims to guard against what the APA president Herbert Sacks called “psychobabble”: that idle speculation about a public figure’s mental illness that, when “reported by the media, undermines psychiatry as science.”39 Known more formally as section 7, rule 3 of the American Psychiatric Association’s professional ethical code, the Goldwater rule was drafted after the 1964 Presidential election, in which Republican Senator and Presidential nominee Barry Goldwater was defeated by incumbent President Lyndon Baines Johnson. Goldwater had sued the now-defunct Fact magazine after it conducted a survey of the American psychiatric profession, calling on more than 12,000 psychiatrists to provide their professional assessment of the Republican candidate. Highly unscientific, the survey yielded almost 1,200 responses, a huge cache of long-distance diagnoses, all of them scathing in their assessment of the Arizona Senator. The result was published in Fact’s next issue, its cover declaring starkly that “1,189 Psychiatrists say Goldwater is Psychologically Unfit to be President!” As Sacks, the later president of the APA, commented in his presidential column in 1999, “The bulk of the political responses, couched in psychiatric terminology, were so unfair and so outrageous to Goldwater that he sued and won a substantial settlement.”40

Following this incident, the APA issued a number of press releases denouncing such instances of long-distance diagnosis. But no less than nine years would pass before the APA formally enshrined the Goldwater rule in its ethical code, a short rule book titled Principles of Medical Ethics with Annotations Especially Applicable to Psychiatry, published in 1973.41 Last updated in 2013, the current version of the code renders the Goldwater rule as follows:

On occasion psychiatrists are asked for an opinion about an individual who is in the light of public attention or who has disclosed information about himself/herself through public media. In such circumstances, a psychiatrist may share with the public his or her expertise about psychiatric issues in general. However, it is unethical for a psychiatrist to offer a professional opinion unless he or she has conducted an examination and has been granted proper authorization for such a statement.42

Practitioners and scholars have recently discussed reasons or justifications for dismissing the Goldwater rule in certain circumstances, such as when the diagnosis of a particular individual might be necessary for “national security” purposes, to avoid a situation in which the would-be patient might “rise to the level of a national threat” in the absence of such a diagnosis.43 If such arguments are accepted among the psychiatric fraternity, then it is indeed possible that the diagnosis of powerful figures or even state leaders, such as Donald J. Trump or Vladimir Putin—each of whom has recently been subject to diagnosis at a distance, variously by lay people and specialist practitioners—may be, in some circumstances, justifiable, which is to say not a breach of the ethical principles that underpin the Goldwater rule. What those circumstances may involve, however, remains unclear.

In any case, it is notable that a range of commentators have recently offered diagnostic evaluations of Trump, variously diagnosing him with narcissistic personality disorder, “psychopathic narcissism,” and ADHD.44 But among the many who have offered a diagnostic hypothesis, one of the most prominent diagnosticians, John D. Gartner, a practising psychotherapist who previously trained psychiatric residents at Johns Hopkins University Medical School, has informally diagnosed Trump with “malignant narcissism.”45 Also the author of a book on former president Bill Clinton titled In Search of Bill Clinton: A Psychological Biography, Gartner might be said to work, perhaps unwittingly, not as a psychiatrist as such, but in an interdisciplinary field that is increasingly notable in literary and psychology studies, a field in which the ethical codes of the APA need not apply: that is, in the field of psychobiography.

But what is psychobiography? William Todd Schultz, professor of psychology at Pacific University, editor of Oxford University Press’s psychobiographical series “Inner Lives,” and editor of the press’s Handbook of Psychobiography (2005), describes the subdiscipline as “the analysis of historically significant lives through the use of psychological theory and research.” The “aim” of psychobiography, he continues, is “to understand persons, and to uncover the private motives behind public acts, whether those acts involve the making of art or the creation of scientific theories, or the adoption of political decisions.”46 What psychobiography is not, Schultz emphasises, is “pathography.” As Schultz notes,

People are not diagnoses. A diagnosis is a name—a label—not a true explanation. What we want to know is how someone became who she is, not what her DSM-derived “disease” might be.47

Schultz’s remarks recall the “labelling theorists” who participated in the anti- or critical-psychiatry debates—those such as Frank Tannenbaum or Erving Goffman. However, Schultz’s definition of psychobiography also suggests the importance of developing a detailed, complex, and essentially non-diagnostic description of the person under study. Outside of the psychiatric profession’s ethics code, then, the Goldwater rule might be seen as little more than a disciplinary persuasion. Whereas the scholar who produces a full and rich account of a historical person’s personality might be said to produce a psychobiography, those whose focus remains on diagnosis might be said to engage in pathography.

Despite Schultz’s apparent misgivings, there is also arguably a place for pathography. Indeed, one of the most fascinating aspects of pathography is the potential for the diagnostic hypothesis it produces—its speculation about the subject’s mental disorder—to be revealed as true. In this way, the work of pathography might resemble the work of speculative fiction: it always offers a glimpse at a possible future, providing its reader or consumer with what Philip K. Dick called a “shock of dysrecognition” or what Darko Suvin called a sense of “cognitive estrangement.” As both Dick and Suvin knew, the role of science and speculative fiction is not simply to adumbrate a vision of a possible future; rather, it also serves to unveil an alternate present—to expose an underlying layer of our present-day reality—and in so doing to unsettle the prevailing understanding of what is currently possible, and of what is currently real.48 Indeed, it is exactly the uncertainty of the pathographer’s proposed diagnosis—the impossibility of confirming or disproving their pathographic speculation—that is fascinating. The question is not the formal one that involves a purely scientific process: “Can this diagnosis be verified as legitimate?” It is an informal, preternatural one, a question of faith in what I have called the diagnostician’s “parascient force of perception”: “Has the long-distance diagnostician uncovered the reality of the subject’s mental state?” In most cases, the latter question is necessarily the only one that may be posed: for instance, the diagnosis may be of a deceased person, or of a political actor, in which case it is impossible for a consultation, for a formal psychiatric interview, to be carried out.

But even in the absence of a scientific trial, and in the absence of any statistically significant evidence, it can be illuminating to work through the pathographic speculation. If we return to Gartner’s diagnosis of Trump, for example—to his pathographic speculation that Trump is a malignant narcissist—we immediately notice the specificity of the diagnosis itself: it is malignant narcissism, rather than the more formal, more stable category of narcissistic personality disorder or NPD. To understand Gartner’s thinking, we must next look for the texts from which this specific diagnosis has been drawn. Doing so, we note that the term “malignant narcissism” appears in the DSM-V only once, and, even then, it appears in scare quotes and in parentheses, as though the designation were not a criteria-based index so much as a shorthand for an as-yet-unknown quantity, a disorder that one “knows when they see it.” The specific text reads as follows:

Trait and personality functioning specifiers may be used to record additional personality features that may be present in narcissistic personality disorder but are not required for the diagnosis. For example, other traits of Antagonism (e.g., manipulativeness, deceitfulness, callousness) are not diagnostic criteria for narcissistic personality disorder (see Criterion B) but can be specified when more pervasive antagonistic features (e.g., “malignant narcissism”) are present.49

Malignance, then, is an “additional personality specifier” of those with NPD; it is not, at least according to the DSM-V, a discrete personality disorder in and of itself, one that is separate from NPD. Nevertheless, some scholars working in mental health have identified malignant narcissism as “a serious condition [that is] largely ignored in psychiatric literature and research,” and a disorder that is especially difficult to diagnose because there is “no structured interview or self-report measure that identifies Malignant Narcissism and proposes a foundation for treatment.”50 A long-distance diagnosis of Trump with malignant narcissism, then, may be no more unclear, no less verifiable, than a diagnosis born of a detailed clinical analysis, so ambiguous is the specifier.

Still, others have identified the particular relevance of “malignant narcissism” to the figure of the tyrant. In an examination of the psychological literature of politics (a field sometimes called “psychopolitics”), Betty Glad implicitly adverts to the insufficiency of the DSM-IV (then the latest edition) and its definition of malignant narcissism, appealing not to the manual for clarification but to the descriptive elaborations of Otto Kernberg, Vamik Volkan, and Jerrold Post—all academic psychiatrists—to supplement and reconfigure the “related subtypes” of narcissistic personality disorder that had thus far been proposed.51 Glad even suggests the inadequacy of the DSM’s theoretical foundation when she notes that the manual has been developed out of normative consultative experience, and published in order to provide guidelines for those situations in which the patient or subject could be expected to conform to the dysfunctions pervasive in the general population. By contrast, the dysfunctions observable in some figures—and, in the case of her analysis, the figure of the tyrannical political leader—actually lie outside of the normative parameters of dysfunction that arise in the general treatment of the population; accordingly, the DSM cannot be relied on to account for the dysfunctions of these individuals. As she writes,

Classification systems developed via clinical experience with persons who have been diagnosed as dysfunctional may need further elaboration for major political leaders. To understand the tyrant, we need to investigate the careers of individuals who have been successful in gaining absolute power in a broader political environment. Building on the work of Robins and Post (1997), we provide a basis for delineating, in a systematic manner, the advantages a malignant narcissist has in securing power in a chaotic or otherwise difficult situation. As discussed below, the attainment of nearly absolute power in the real world serves him while at the same time contributing to the psychological deterioration and behavioral overshooting that may lead to his eventual political undoing.52

Here, then, we see another way in which breaching the Goldwater rule may be justified—and not simply because one’s theoretical or disciplinary orientation, as I have outlined above, might enable one to claim they are simply speculating about the disorder as a pathographer, only musing as a curious dilettante, rather than offering an official diagnosis as a practising psychiatrist. Rather, the psychiatrist must deal with an inadequate manual, inadequate firstly because it does not define the diagnosis narrowly enough, and secondly because its parameters, which are drawn from a normativised history of “clinical experience,” do not account for the particular dysfunction of the unusual patient. Faced with this impasse, the psychiatrist may decide to conduct a long-distance diagnosis, if only to simulate the consultation with, to gain some hypothetical access to, the unusual case study, who—often largely because of their unusualness, their anormativity—cannot be brought under the observation of the psychiatrist in any event. Thus, the psychiatrist must at once imagine the consultative experience, as well as newly define the disorder that the patient possesses—a disorder that is outside of the parameters of the instructive text. In this way, the long-distance psychoanalyst conducts a kind of clinical research both into the new diagnostic category, the new nosology, and the degree to which it might be accessed in the general population.

But for all this, others still have warned against long-distance diagnosis, and for reasons quite apart from the Goldwater rule, which seemed largely a fearful and defensive reaction to the likelihood of psychiatrists’ liability rather than a pure expression of ethical principle. For example, in what might be seen as an ironic countertransference, many psychiatric scholars have mused on the mental health of the psychiatrist Emil Kraepelin, although just as many have warned against the practice. Kraepelin, of course, remains “one of the most prominent and potentially controversial in contemporary psychiatry,” partly because he exhibited so many objectionable intellectual interests, including a belief in the potential for “negative eugenics” to “prevent the occurrence of ‘degenerative phenomena.’”53 As in the case of the tyrant, however, it is here that we see the borderlines that usually separate the discourses of epistemology, politics, and psychiatry transform into a blurry, complex furrow. As the authors of one study have cautioned, the psychiatrist who is tempted to diagnose Kraepelin from a place and time far removed from the late nineteenth century, from Kraepelin’s world, will surely compromise their own professional and intellectual credibility.

But to engage in such retrospective, long-distance diagnosis and to subsume Kraepelin’s personality under the diagnostic categories of contemporary psychiatric systems is dubious in the extreme and may well reveal more about the convictions and interests of today’s psychiatrists than about the historical “patient” Emil Kraepelin. Moreover, attempts at such retrospective diagnoses put psychiatrists on the horns of professional dilemma. If they are to bring their professional expertise to bear on Kraepelin, then they must simultaneously compromise the very methods and standards on which their own expertise rests. They must forego that most important source of diagnostic information: the patient examination.54

After all, one need not diagnose an intellectual figure, need not appeal to psychiatry, to characterise a person’s ideas as wrongheaded. A racist or xenophobic argument or theory should be just as wrong when expressed by a person free of any mental illness as when expressed by someone who has received a positive psychiatric diagnosis.

For if Kraepelin is a controversial figure, it is at least partly due to his endorsement of the Lamarckian psychiatric concept of “pathogenic inheritance,” a concept perhaps first introduced in modern psychiatry by Bénédict Augustin Morel. Morel had also coined the neologism démence précoce in 1852 to identify a new nosological category, one that is today better known as schizophrenia. Against the backdrop of the political developments of the early twentieth century, Kraepelin’s support for research into racial hygiene, and his sympathies for the political goals of that work, have been the source of great controversy. And it is this, perhaps, that has led some to attempt to diagnose Kraepelin with a mental illness.

In 2013, Polity Press published a translation of a similar work, first published in 2009, by two German scholars, the one, Hans-Joachim Neumann, a professor of medicine, and the other, Henrik Eberle, an historian. The scholars had attempted to diagnose Adolf Hitler once and for all, to settle the matter in what they called a “final diagnosis.”55 Titled Was Hitler Ill? A Final Diagnosis, the book is far from the first attempt to diagnose Hitler, and, as a comprehensive Wikipedia entry titled “Psychopathography of Adolf Hitler” makes clear, no fewer than 28 formal attempts have been made to systematically diagnose the Nazi leader, who has been said to have had all manner of mental disorders, from hysteria to schizophrenia, psychopathy to posttraumatic stress disorder, and Asperger’s syndrome to abnormal brain lateralisation. Among the most famous diagnoses of Hitler was that produced by Walter Charles Langer, who was commissioned by the OSS (the precursor to the CIA) to develop psychological reports on Hitler before the end of the Second World War.56 Based on the testimony of psychiatrist Karl Kroner, Langer and his team proposed that Hitler had been treated in 1918 for hysteria, although the psychiatrist who treated him in a military hospital, Edmund Forster, had taken his own life in 1933, fearful of being interrogated.57

One of the most fascinating aspects of Langer’s hypothesis is that, because Hitler, in Mein Kampf, actually dates his decision to become a politician “to his hospitalization in the Pomeranian military hospital of Pasewalk in November 1918, where he experienced the end of the World War 1,” Hitler probably underwent some form of failed hypnosis there, or suffered an episode of hysteria in that hospital, which then triggered his will to rise to power and led to the efflorescence of his hysteria as a dictator.58 Interestingly, however, this period remains to this day, as Norman Ächtler notes, one of the “last opaque spots in Hitler’s biography.”59 So while the notion that Hitler underwent something of a conversion in the hospital—perhaps because he was being treated for mustard-gas exposure—is indeed a tantalising idea, the details are simply too scant to conclude with any certainty that any such conversion, any such “break with reality,” took place. It is a period that eludes even Neumann and Eberle, who rightly note that, while Hitler’s placement in the “neurology and psychiatry ward at Pasewalk field hospital would . . . appear to indicate a psychogenic condition,” Hitler’s subsequent actions are unlikely to have been a “product of an unsuccessful hypnosis” in Pasewalk, but instead would have “evolved slowly over a long period.”60 Indeed, as Neumann and Eberle finally note at the end of their volume, “the leader of the NSDAP, chancellor of the German Reich and commander-in-chief of the German Wehrmacht was healthy and accountable.”61

Perhaps one of the most responsible, albeit frightful, speculations we could make, then, is that a pathography similar to Hitler’s might now apply to those currently in similar positions of power, and to those who harbour impulses that are similarly tyrannical. To be sure, if made subject to psychiatric evaluation, dictators and tyrants may well be revealed to be malignant narcissists. However, it is also important to come to grips with the possibility that a seeming dictator may be just as likely to exhibit no relevant symptoms, to act out no telling transferences, and so to be given a “clean bill of mental health” by the diagnostician. Alas, even if we accept that the ethical parameters outlined by the Goldwater rule are on occasion transgressible—after all, they are parameters designed to constrain professional psychiatrists in their media and public relations—what Freud implicitly understood as the defect of long-distance diagnosis remains insuperable. Having no ability to perceive a subject’s reactions, and having no capacity to observe the dynamics of their personal, extemporaneous “account” of what their actions mean, what their decision-making processes involve, one can but speculate as to what, exactly, those reactions, those transferences, might be.

Capitalist Realism and Public Intellectualism

Although a few books, numerous essays, an art movement, a literary tradition, and an ideology have all been given the title “Capitalist Realism,” it is likely that what first springs to mind when a cultural studies scholar hears those two words is a small book published in 2009.62 More than six years after its publication, that book, with the full title Capitalist Realism: Is There No Alternative?, remains a lively, engaging, and valuable critique of neoliberal hegemony, a book that, as Dwight Macdonald wrote of some British journalists’ work, treats its “readers like equals,” and offers a persuasive indictment of managerialism in the “post-Fordist” era, which is to say in our global society since “October 6, 1979.”63

As this volume reminds us, it was on that day, some thirty-six years ago, that “the Federal Reserve increased interest rates by 20 points,” and so prepared “the way for the ‘supply-side economics’ that would constitute the ‘economic reality’ in which we are now enmeshed.”64 Indeed, this event was significant—“the most widely discussed and visible macroeconomic event of the last 50 years of U.S. history,” according to one economic analysis.65 But unlike the many economists who celebrate Federal Reserve chairman Paul Volcker’s decision to restrict monetary supply in an attempt to staunch inflation, and contra the multitude who continue to regard Volcker’s disinflation strategy as a salutary lesson in achieving macroeconomic stability—in avoiding a major recession—here, in this book, we receive a completely different impression of the event. Capitalist Realism makes the remarkable contention that Volcker’s decision was a grave mistake of economic governance—a fateful decision with far-reaching social consequences for workers, who thenceforward faced only the grim prospect of “a new ‘flexibility,’” one defined by the broad “deregulation of Capital and labor.”66

Before going any further, I will finally note that the book’s author was the writer and theorist Mark Fisher, who died last week, on the 13th of January, at the young age of 48.67 While many have remarked on Fisher’s book—and indeed, mourned the author’s passing—my own appreciation for Fisher’s writing, both in Capitalist Realism and elsewhere, stems from my admiration for his attempt to avow and to formalise a singular, powerful proposition: that, for a great many who are “enmeshed” in it, post-Fordist capitalism entails brutal psychic consequences. In this historical context, Fisher acknowledges, schizophrenia marks an ontological border—a “limit concept,” a phantomic outer-edge; but the psychotic disorder is also, he writes (after Lacan), a “‘suggestive aesthetic model’ for understanding the fragmenting of subjectivity in the face of the emerging entertainment-industrial complex.”68 But if schizophrenia “marks the outer edges of capitalism,” then “bi-polar disorder,” he adds, “is the mental illness proper to the ‘interior’ of capitalism,” so allegorically synchronised, so correlative, are the economy’s “boom and bust cycles” to the “moods of populations” within the system.69

Among the many thinkers who people the author index on the Zero Books website, Fisher certainly seems one of those most suited to the Hampshire publisher’s stated aims, one of the most in harmony with its philosophy. (This is perhaps no surprise, given that Fisher was one of the imprint’s commissioning editors.) An imprint of John Hunt Publishing based in Ropley, East Hampshire, Zero Books (sometimes written as “Zer0 Books” or “0-books”) stands for, as it notes on its website, the production of a more disruptive, a more incisive form of writing than that which currently passes for intellectual work:

Contemporary culture has eliminated the concept and public figure of the intellectual. A cretinous anti-intellectualism presides, cheerled by hacks in the pay of multinational corporations who reassure their bored readers that there is no need to rouse themselves from their stupor.

This disillusioned view, with its eulogy for the public intellectual’s disappearance, is, of course, nothing if not accurate. For better or for worse (as, depending on how one defines them, the ramifications of the public intellectual’s loss remain unclear), the dominant media outlets seem no longer interested in according kudos, in showing any earnest deference to—and perhaps can no longer really even comprehend—those figures who had once been called (and who, lamentably, are sometimes still called) “Men of Letters.”70 “In the age of late or neoliberal capitalism,” write Jeffrey R. Di Leo and Peter Hitchcock, “society at large no longer affords its iconic or star public intellectuals much respect.”71 They are instead “widely regarded as merely representatives of ‘one side of the argument,’” their views subject to sensationalism and reductionism, whichever “side” they may happen to find themselves on (“liberal or conservative, left wing or right wing”). Unlike what may have once been the case, the public intellectual today is no more permitted to hold a finessed, conditional, qualified, or partial view, no more authorised to occupy a “rational middle ground,” than any other vendor in the ideas marketplace.72

If in the first half of the twentieth century the exemplary public intellectual might have manifested in the form of someone like Aldous Huxley, a regular interviewee on BBC television before his death in 1963, then another exemplar may have been Bertrand Russell, the talented mathematician–philosopher who routinely appeared on BBC broadcasts, including The Brains Trust, and who just as often authored newspaper articles published in what we would today call the mainstream press. What soon becomes obvious, however, no matter to whom you point, is that public intellectualism in the early twentieth century was a man’s business—a boy’s club. But in addition to being men, and as well as possessing the requisite academic bona fides (although Huxley, holding only an undergraduate degree, drew his intellectual authority less from the institution than from his grandfather’s reputation), Huxley and Russell were also typical of the “writers and thinkers who,” as Russell Jacoby wrote in 1987 (in a book on American, not British, intellectuals), “address a general and educated audience” and “whose works are [not] too technical or difficult to engage a public.”73

In the later twentieth century, however, perhaps in the wake of Simone de Beauvoir’s “Femmes de lettres,” and (later) Hélène Cixous’s l’écriture féminine, among many other feminist schools and currents, public intellectualism, losing some but by no means very much of its phallocentric bias, tapered its ongoing resistance to the image of the woman academic—or, at least, disavowed it long enough to accord some recognition to the forceful erudition of such women intellectuals as Hannah Arendt, Susan Sontag, Germaine Greer, Camille Paglia, Martha Nussbaum, Judith Butler, and Naomi Wolf, among many others.74

But no sooner had the longstanding exclusion of women from the intellectual public forum begun, in part, to abate (though, as Jeffrey Di Leo points out, “only 17 percent of professional philosophers are women”75) than “immediation”—the “cultural logic of the new public intellectual”—began to reconfigure the nature of the public performance.76 Rather derisively, Richard A. Posner identifies what might be advantageous for a public intellectual in this immediated context, namely “name recognition” and a nimble enunciative style:

Many public intellectuals are academics of modest distinction fortuitously thrust into the limelight, acquiring by virtue of that accident sufficient name recognition to become sought-after commentators on current events. Some of them are what the French sociologist Pierre Bourdieu calls le Fast Talker.77

But it is perhaps the very speed of capital circulation, the pace of the “entertainment-industrial complex” and the depth of immediation, that has conjured into life this “fast-talking” incarnation of the public intellectual. Indeed, as Hitchcock notes, the “velocity” of capitalism “affects the logic of intellectual production and exchange.” And, of course, something of this velocity is also immanent in, and is in fact constitutively a part of, not only what Fisher’s Capitalist Realism mourns but what it represents.

Even if Fisher was not himself one of Bourdieu’s le Fast Talkers, it remains true that his writing, swift and piquant, exudes a combustible intellectual energy. As one reviewer of Capitalist Realism commented,

Fisher’s style of exposition has a fast-paced, free-wheeling quality to it reminiscent of Slavoj Žižek’s writing—and, indeed, there is a Žižekian audaciousness to many of the ideas that Fisher puts forward.78

On the back of its dust jacket, Steven Shaviro is quoted as describing Fisher as a “master cultural diagnostician,” a writer highly skilled in surveying “the symptoms of our current cultural malaise.” It is not by accident that Shaviro uses a grammar borrowed from psychiatry and medicine, Fisher being so clearly a disciple of the critical-psychiatry set, of the “radical theory and politics” of “Laing, Foucault, [and] Deleuze and Guattari.”79

However, what is perhaps most clearly apparent about Fisher’s writing, in ways sometimes quite distinct from those other authors’ works, is the extent to which it yearns so urgently for genuine political change; it is less interested in proposing a new theory of the political system, or in adumbrating a tantalising speculation about the future, than it is committed to proposing reform. After adverting to the work of the critical psychiatrists, Fisher’s tone becomes abruptly more pragmatic: “But what is needed now,” he writes, “is a politicization of much more common disorders. Indeed, it is their very commonness which is the issue: in Britain, depression is now the condition that is most treated by the NHS.”80 Fisher’s scholarly episteme might be described as a kind of “no-nonsense” political sociology, an unpretentious style of public intellectualism. His was an earnest yet unidealised approach to the work of political theory—one to which we should all aspire, now more than ever.

Emotions and Publishing

I assume that, on occasion, many early-career researchers are tempted, as I am, to reach for those kinds of books that offer a system or model for publishing success. After all, what better way to reassure oneself that an academic career is feasible than to learn about others’ achievements? Dean A. Shepherd’s The Aspiring Entrepreneurship Scholar offers ample straightforward advice to early-career scholars—indeed, the advice is sometimes so straightforward, and the book’s tone so informal, that one may even begin to suspect, as I did, that Shepherd had managed to somehow manipulate the review process to publish it (a sure sign of his scholarly aplomb). How else to explain the offhand equanimity, the conversational register, of some of the remarks? Chapter 2 opens with an exemplary line: “Okay, so maybe this is self-delusional, but I think of scholars, at least the good ones, as highly entrepreneurial.”81

Despite the profusion of such general statements in the book, statements with wide application for would-be scholars of all types, what I discovered within a few moments’ reading—but which I had not, I admit, initially realised when I downloaded the book via my institution’s subscription—was that Shepherd’s manual is not for all scholars. It does not explicitly propose that we, in whatever fields of study, take an “entrepreneurial” approach to a scholarly career—and why would it? The book is not addressed to all scholars but is, rather—and of course!—a guide written for—quite literally—“entrepreneurship scholars.” Of course, as you might have guessed, I had somehow, no doubt frivolously, imagined that Shepherd’s title was a strange nominalisation of the phrase “entrepreneurial scholar,” and that his book therefore sought to advance a particularly “entrepreneurial” strategy for a “successful academic career,” whatever the field. How wrong I was. But, despite this, my own quite remarkable misprision, the book proved to grant a few noteworthy insights for all of us in academia.

But first a little more on the book’s tone. It should probably be noted that the book is published by Palgrave Pivot, an imprint of Palgrave Macmillan designed for scholars who wish to publish a work longer than a journal article but shorter than a monograph. These compact books are published as e-books, but perfect-bound volumes can be ordered on demand. And it seems that this Palgrave imprint facilitates—or, better, encourages—precisely this informal, expert-but-accessible style of writing preferred by Shepherd; or, in fact, I should say that it facilitates this as well as many other kinds of idiosyncratic writing, since Palgrave Pivot is set up to allow authors to publish their works at their “natural lengths”—to save them the ordeal of cutting a long work down to the standard 8,000-word article, and the hardship of elaborating a short book up to the 80,000-word breadth of the monograph. The imprint is also, it seems, all for authors publishing works that bear neither the seriousness of the monograph nor the epistemological rigour of the article. As such, you might even say that these books represent, in some ways, the formalisation of the academic blog. After all, they’re also very quickly produced, with a turnaround time of only twelve weeks following peer review. Hence, I suppose, the name “pivot.” But just in case I am being unclear, I think these volumes represent a positive expansion of the academic publishing ecosystem.

And while I’m talking shop: one thing that I immediately noticed about this imprint’s productions was the books’ formatting—clearly a ramification of this new modality of academic publishing—such as the way the chapter title pages were organised. Like journal articles, each chapter title page had a selection of keywords, a copyright notice (stipulating that copyright in the work was held by the author), and a DOI. It’s certainly a smart-looking series, typeset in the classically sleek but sharp ITC Galliard (the same font the Library of America uses in its books), as opposed to ITC Stone Serif, the usual fare in Palgrave’s non-Pivot series books; although it is also a curious hybrid, what with this strange chapter metadata in a text that is released digitally, as an e-book, but is not yet fully digitised—a “photographic” PDF rather than a “live” hypertext.

None of this has much to do with Shepherd’s writing, however. And, lest my earlier meaning be misread, none of my earlier comments were supposed to suggest that Shepherd’s informality, this aporetic self-doubting (“Okay, so maybe this is self-delusional”), is threaded through the entirety of his book—far from it. Shepherd, I soon realised, writes not with the insouciance of the careless, and not with “informality” as such, but with the openness and directness of a guru. It is, of course, the appropriate tone for a guidebook, something of a Dale Carnegie or Carl Crow gone academese. And I imagine that, as an “entrepreneurship scholar,” Shepherd’s “down-to-business” style of writing is an asset—and not uncommon in journals in the field of business entrepreneurship, such as the one of which Shepherd is himself the editor, the Journal of Business Venturing. But what I thought would make an interesting and productive addition to my notes, here, were the various references and allusions Shepherd makes to the emotional world of the aspiring scholar, and particularly to the emotional conflicts they face in their attempts to publish their research.

In the opening chapter, Shepherd discusses the unusually buoyant attitude he took to the revise-and-resubmit (“R&R”) decision letter while still a doctoral student: “Believe it or not,” he writes, “I was excited when I received a rejection letter,” excited “that three scholars had read my work, found something interesting in it, and provided me feedback.”82 And while Shepherd implies that he “no longer feels” as positive about receiving R&Rs as he did then, he does continue to feel more positively about them than most of his colleagues, one of whom prompted incredulity on his part by asking him “how to deal with the negative emotions associated with such a decision letter.”83 But what Shepherd writes next about his process is worth quoting at length, if only because few scholars seem to have written as candidly as he does of their emotional reactions to R&R letters—that is, of the way in which they have learnt to manage their own emotions. But this long quotation will also allow me to quickly look at the tropological figures Shepherd produces to explain his emotional defences against the familiar melancholy of dejection:

Returning to the issue of excitement in receiving feedback on a rejected paper, I used that excitement to drop other work and immediately begin to interpret the letter and the spirit of the reviewers’ comment [sic] to improve the paper and resubmit it to another journal. I thought about a paper as having momentum and believed it was important to keep the momentum rolling. On the flipside, I felt that if a paper sat, it lost momentum and required more effort to “start the ball rolling” again. Fortunately, having peers as co-authors and choosing to work with people with similar values, I was able to quickly learn, enhance the quality of each paper, and put them “back in play” at journals. Speed came from energy and momentum, not from cutting corners.

Through speed and working with colleagues on papers out of our dissertations, my colleagues and I were able to generate a reasonable number of papers in the first couple of years post-dissertation. We believed that having a “reasonable” number of papers “in play” (i.e., at least three or four papers under review at top journals) was important for two primary reasons. First, we believed that the more papers we worked on, the more feedback we would receive. Feedback not only helped us improve the quality of the specific paper but also provided the basis for a deeper understanding of what reviewers were looking for and how to address these issues and follow their recommendations. Upon reflection, we were engaging in deliberate practice and hopefully building expertise. We certainly had many opportunities to learn from our failures, but fortunately, we also had some successes—some small wins—which allowed me to start to believe that maybe I could make a career as an entrepreneurship scholar.84

While Shepherd magnanimously—and doubtless truthfully—attributes a degree of his success in turning papers over quickly and effectively to his helpful peers and colleagues, what is perhaps more notable about the above summary is Shepherd’s emphasis on “excitement,” “momentum” and “speed.” None of these three words is one that we would first associate with the monastic labour of research. They better convey the personality of the explorer or adventurer—even the champion sportsperson.

Better still, these nouns remind me of Freud’s haughty, swaggering vision of himself as a “conquistador” in the letter of which I wrote in my last post. The year was 1900, and Freud, vainglorious, wrote of his personality thus: “I am by temperament nothing but a conquistador . . . with all the curiosity, daring, and tenacity of a man of this sort.”85 I suppose there are many ways to find success as a scholar; however, what strikes me as salient about Shepherd’s keywords and Freud’s self-portrait is the common emphasis they place on this constitutive tenacity, this positively “dopaminergic” sense of ferment, that seems required of the emotionally durable scholar. In different ways, both authors suggest the importance of maintaining, for want of a better idiom, something of a quickness of mind and spirit. Excitement and energy, then, may be the emotional antinomies of dormancy, the affective defences against torpor, no less in academia than elsewhere. But then, of course, we already knew that.

The Emotional Economy of Busywork

Busywork, or work that keeps one busy but has no value in and of itself, is variously familiar to us all. But busywork seems just as likely to affect the lives of researchers, of “knowledge workers,” as it is to bear upon those of other professionals—if not more likely. Indeed, if academics are not excoriating the increasing intensity of the administrative tasks and duties they must perform in their day-to-day lives, they are likely to be heard speaking of “administrative creep” and “administrative glut,” the suggestive expressions given, respectively, to the unstanchable rise of non-academic positions, and the disproportionate load of administrative staff members appointed to them, within modern universities.86 But these commiserations, of course, amount only to a synecdochic dismissal of busywork more generally, a eulogy for a time in which the predomination of administrative tasks in academia was allegedly less pronounced.

But even the academic blog risks becoming an exemplary instance of work that keeps its author busy, but serves no material purpose. Despite its potential popularity, the blog post, it seems, could never substitute for the gold standard of academic publishing, namely, the refereed (peer-reviewed) journal article, especially when it comes to the assessment of professional academic achievement, the inevitable instance in which one academic’s research profile and activities are adjudged against those of their peers, whether for a new job, a promotion, or an award. That the blog may still be regarded less as research proper than as a category of professional service must seem instinctively true to many researchers, even despite the fact that many authorities, both within and outside academic institutions, have urged academics to blog.87 Of course, as I write this post, the advantages that the blog imparts to one in improving their research toolkit—their set of writing and reading skills—seem obvious. Blogs can surely help researchers to formulate their thoughts, to express their ideas more clearly, and to improve their writing. What is more, an academic blog, as many have argued before, promises to grow one’s readership, to pique the interest of other researchers and would-be readers—even (although it is quite difficult to know for certain) to create a positive impression in the minds of those considering one’s potential productivity as a scholar, or one’s status as a “public intellectual,” if not a tenured or full-time one.88

But (to return to where I began) the academic blog may also be categorised as busywork, as a mode of pointless, repetitive, habitualised writing, just as readily as it may be understood as a skill-honing praxis or a “lead generator.” For even as it transpires to improve one’s writing, to enhance one’s effectiveness as a composer of text, or to bolster one’s reputation outside the cloisters, the academic blog, unfortunately, is the wrong kind of text for attaining scholarly cachet. For as many academics will admit, the academic blog has proved unlikely to offer anything of value to the job-seeking scholar in the cold, increasingly market-led and budget-aware academic economy, the specular (read: intercitational) institution in which one’s status as a knowledge producer, as a “productive,” output-driven academic professional, is paramount, and less affirmed by the volume of one’s voice before the world at large (much less the world wide web) than by the frequency of one’s citations among one’s institutional colleagues. The academic economy, of course, has myriad persuasive reasons for remaining in this way so hermetic and self-enclosed. After all, the university, at least when it comes to knowledge, cannot be expected to respond to populism or celebrity—other than when for specific disciplinary purposes, of course.

In a 2016 essay lamenting the death of the academic film blog, a form whose life, we learn, lasted a mere fifteen years (from 2000 to 2015), Amanda Klein observes the unfortunate truth about digital academic discourse. The problem is that the impact of the research, however obvious it may appear to the author, just cannot be measured, or at least cannot be quantified in the same way that the peer-reviewed journal article is purported to be quantifiable—namely, by citations. Mourning the death of the academic blog, specifically that of the film scholar, Klein argues that the form simply

did not yield the legitimacy so many of us hoped for. Despite my best efforts to demonstrate the value of my own blog—citing research opportunities it unearthed, connections made in the field, and the way the medium forced me to grapple with new and exciting ideas I never would have explored on paper—the glaring absence of traditional peer review makes it difficult to quantify how blogging has impacted my research, teaching and service (the holy trifecta of academic values), even though it has greatly contributed to all three.89

Klein’s experience makes obvious the fact that academic blogging occurs in an ecology or environment that is decidedly more “digital” than that of traditional, refereed academic research. But her essay also indicates how the academic world has yet to embrace or trust, much less to adapt to, the new parameters, the new rules, of this digital ecology. Not only is the academic blog post never expected to be published in paper form, neither by its author nor its reader; it is also understood as a different kind of research work, with a distinctive set of aims, from that which is published under the constraints of the review process. Understood less as the site of formal knowledge transmission than a means by which to report or disseminate the act of knowledge making, the academic blog can do little more than advise its reader that, somewhere else, in the heterotopia of real work, of bona fide erudition, knowledge has been, is being, or will soon be made. The busywork of the academic blog may consist in adumbrating its shape, in sketching this knowledge at some spatio-temporal distance from its origin, but rarely will it substitute for the work itself.

More formally, we know that the academic blog is a work in which an author’s references will usually go uncited (instead taking the form of hyperlinks—although an exception, of course, is this blog), and one in which the author’s arguments go unreviewed by their peers. In fact, the entire system in which blogging occurs—from the initial administrative processes (finding and commissioning a webhost, developing a content management system, paying for hosting, and so on), to the production processes (perhaps writing within a CMS rather than a word processor, uploading or publishing work unilaterally or independently, rather than by means of a collaborative, bilateral email process)—takes place outside of the academic publishing system, and in reality at many removes from the standard peer-review process. What is more, the blog, at least in theory, is sometimes understood as a provisional, temporary, reviewable, and redactable form. Unlike the apparent permanence of the printed page, the material life of the blog post is at once unreliable and indeterminable. The effects of the blog’s perceived impermanence are both epistemological and technological, as if leaving the impression of text on paper marked a grand point of termination, an authoritative condition of finality, and a truly “material” imprimatur of authority, that the blog post can never receive, can never engender on its own.

But the material and conceptual distinction between the digital page and the printed one—between, say, the online website and the analogue book—is by now an old and overstated one. If anything, it is a distinction or dichotomy with many comparable antecedents, such as that between page and screen, or between stage and screen, distinctions that, in the broadest of practical terms, cannot be said to affect the communication or reception of knowledge. How else, for instance, to explain the unquestioned acceptance of airplane safety videos, whose contents, only decades ago, were distributed in paper form? Instinctively we know that, when it comes to essential meanings, the medium does not generally have an impact on comprehension. And, despite the obvious attractions of McLuhanism for detailed media analyses, varied linguistic studies have concluded that, at the level of comprehension and learning, it makes no difference if the knowledge-producing object is a hypertextual medium or a printed medium.90 Presented with the same information on a page or screen, and with all other things being equal, learners or readers will generally learn and read at the same rate, and with the same rates of accuracy. If it is a provocative conclusion, it is one that is at least generally supported by an attestation expressed by various textual theorists: that the move from analogue to digital texts is far from “an absolute paradigm shift.”91 And, in any case, doing away with this distinction allows us to address the emotional and affective economy of busywork, to gain fresh insight into the desires or drives (Triebe) that lead some of us to engage in busywork as a matter of priority, and of the rewards or compromises that flow therefrom.

The discussion of paper texts reminds me of remarks once uttered by Gilles Deleuze and Félix Guattari, remarks that gesture at the erotic allure of busywork, and which are not at all—excuse the pun—immaterial in this note, which addresses busywork in the digital context, the newer ecology in which we busy ourselves today. For those authors, the seduction of busywork lay in the fetishism of “fondling records”—of, in other words, paper shuffling. As they write,

The truth is that sexuality is everywhere: the way a bureaucrat fondles his records, a judge administers justice, a businessman causes money to circulate; the way the bourgeoisie fucks the proletariat; and so on.92

Here we see an instance of the aftereffects of busywork. The bureaucrat who “fondles his records” does so while remembering the gratification he has received not only in compiling but in coming to possess and rely on them. For what is likely to be remembered, here, and what is likely to be continuously incanted as the documents are stroked, is the accomplishment of the task that engendered them. It is an act of self-assurance, a performative recitation of the very same pleasure that coincided with these documents’ original production. And yet, the word “production” here, as with all nouns related to engenderment, is misleading, as it is an ersatz efficacy, and in fact a false economy, that underlies the busywork of administration. (No wonder, then, that so many contemporary administrative processes have become automated. Institutions and governments have come to realise the essential paradox of modern-day administration, the circuitous logic of busywork in the age of digital reproduction: however useful the busywork may actually be, and no matter how large the return on investment, it is at once always too expensive (for bureaucrats, after all, are well paid) and never cheap enough—at least not when automation offers the tantalising hope, the utopian dream, of costless administration, of free busywork.93)

In all of its banality and repetitiousness, and in spite of its obvious irrelevance, busywork imbues the busyworker with the validating pleasure of realising their effectiveness and productivity, a requisite catharsis that is central to the emotional wellbeing of all humans, to our feelings of stability and security in the world, and perhaps just as central to the lives of most (other) animals. And it is no surprise that, like a child who grips a teddy or doll (or, perhaps more likely today, their iPhone), the bureaucrat is said to keep a caressing hand on their documents. For in each case, the object marks the symbolic vessel into which so much of the subject’s exertions, so much of their energies, have been channelled. It hardly seems relevant whether those exertions have been “productive” on any objective accounting of performance—whether those exertions have had greater benefit to society, to research, or to the economy, for instance. The triviality of the work, which is to say of the busywork, is of no importance, just so long as it is capable of producing the emotional discharge, the catharsis, for which it was undertaken (unconsciously or not).

Of course, many pursuits are trivial in precisely the same way as administrative busywork is trivial. Online gaming, for instance, is often acknowledged as an activity that provides the gamer with the short-term emotional rewards they may desire, and yet is also liable to problematic, if not plainly destructive and compulsive, overuse; it can become an excessive preoccupation that affects the gamer’s life in other, seemingly unjustifiable ways.94 And sports and other recreational activities, including running, might also be regarded as comparable to busywork, at least insofar as these activities are also liable to dependence behaviours.95 All of this, of course, assumes that busywork is itself the product of an addiction or a compulsion, an assumption that I am late to address. However, rather than turning to the cognitive science, psychological, or psychiatric literature, I would like to now supplement my discussion of busywork with a brief historical excursion into the life of Sigmund Freud, specifically relating to his close friendship with Wilhelm Fliess.

Fliess was the German Jewish nose and throat surgeon (otolaryngologist) whom Freud had befriended just after Freud and Josef Breuer had published their Studies on Hysteria in 1895, and after which Freud parted ways with Breuer, whom he had scolded (at least privately) for a dubious incident involving “Anna O.,” Breuer’s patient.96 Didier Anzieu has written about the relationship that subsequently developed between Freud and Fliess. As Anzieu notes, Freud and Fliess were “bound together by their noses,” sutured in a strong “bond made all the stronger by cocaine.” Whereas Freud had “revealed the substance to medicine, and only just failed to discover its anaesthetic properties,” Fliess had adopted the substance as his go-to medical tool for almost all nose complaints, urging “his patients, Freud’s patients, and Freud himself to undergo treatment of the affected parts of the nose with a local application of cocaine.”97 In many ways, Freud both depended on and idealised Fliess, whom he called on often for prescriptions of cocaine. As time wore on, however, and notably after Freud had faced two transformative incidents, his relationship with Fliess began to break down.98

What is of significance in this biographical history for the purposes of this note’s discussion of busywork? The answer lies in the distinction between busywork and “effective work or achievement” that Freud would soon come to focus on, a distinction that he would use to define the difference between himself and Fliess as he began to distance himself from his colleague at the turn of the century. Patrick Mahony has drawn attention to the destructive impact that performing only busywork had on Fliess during this period, both in terms of Fliess’s own health (increasingly poor under the strain of his cocaine addiction and associated bouts of paranoia), and in terms of his professional reputation as an academic scientist, most notably relating to his loss of standing before Freud himself. In fact, as Mahony suggests, it was Fliess’s increased tendency to prioritise busywork over serious scholarship that led to Freud’s dismissal or “deidealisation” of his once-beloved colleague, a man for whom Freud, as he would later suggest to a friend, had once harboured not just amorous but erotic feelings.99 As Mahony writes,

In his narcissistically influenced break with Fliess, the conquistadorial Freud was expecting more than prolific production per se. As time went on, the author of a publishing cure became less enchanted with Fliess’s orally delivered flights of imagination, and attributed mounting significance to the qualitative distinction between Arbeit (work in general) and Leistung (effective work or achievement)—a crucial distinction often obfuscated in Strachey’s translation in the Standard Edition. The record speaks for itself. In 1897, Fliess published a book completed the previous year; between 1897 and 1900, however—the period when the great masterpiece of psychoanalysis was being penned—Fliess published but one short article, a state of affairs that periodically stirred Freud, even while he still considered Fliess as his critical reader and unique “Other,” to vent his disillusionment about Fliess’s lack of effective work in a series of ironical remarks.

A major factor in Freud’s deidealization of his alter ego Fliess, therefore, was the recognition of the increasing gap between the latter’s self-glorifying busywork, and his capacity to externalize or transfer (übertragen) it into publication—that is, into a substantial amount of effective written performance that could be evaluated by the scientific community at large.100

Mahony’s account is interesting for a number of reasons, not least because of the implications it has for Freud’s own view about the emotional economy of busywork. Mahony suggests that Freud must have wished to end his friendship with Fliess, not because he resented him after his “bungled surgery on Emma Eckstein,” but because he viewed Fliess as timorous and weak-willed, as a scholar who was just not courageous enough to produce in writing—and to publish—what his oral presentations presaged. As Mahony writes, “The Interpretation of Dreams marks Freud’s distance from the publicly less adventurous Fliess, and it stands as an example of how Freud’s taking the risk of a partial writing out did not vitiate the overall status of his scientific and textual masterpiece.”101

To achieve a “partial writing out,” then, involves taking a “risk,” involves a willingness to fail, to be ridiculed, as well as a wish to be recognised. Indeed, when Mahony refers to Freud as conquistadorial, the adjective is no mere decoration. In a letter to Fliess of 1900, Freud identified himself as just such an explorer:

For I am actually not at all a man of science, not an observer, not an experimenter, not a thinker. I am by temperament nothing but a conquistador—an adventurer, if you want it translated—with all the curiosity, daring, and tenacity of a man of this sort.102

By contrast, those who compulsively engage in Arbeit—busywork, or “work in general”—may be expected to exhibit a distinctive pattern of risk-avoidance, to be otherwise commonly engaged in compulsive or addictive behaviours (such as in Fliess’s case), and, in short, to be lacking in the “curiosity, daring, and tenacity” characteristic of, in Freud’s appellation, the conquistador.

Freud would go on to write of “repetition compulsion” in 1920 as “the manifestation of the power of the repressed.” It is a conceptualisation of repeated behaviours (such as those often brought into action during busywork) that understands them as the reflections, as the symptoms, of the now-repressed disappointments and frustrated desires of the scorned infant.103 However, Freud’s varied examples of these behaviours do not strike the reader as examples of busywork; in all its banality, in its inanity and innocuousness, and owing to its apparent lack of, say, suitable new targets for deeply emotional feelings—“objects for their jealousy”—busywork does not, at first instance, appear to be a suitable way for a traumatised individual to repeat their “unpleasure,” not the kind of performance that is likely to sate the urges of the individual who, for one reason or other, is led to “repeat painful traumatic experiences and to recreate inner issues and relationships from the past.”104

And yet, if we are to interpret busywork more broadly, and at the same time offer a narrow example, it begins to appear as an excellent candidate for Freud’s conceptualisation of repetition compulsion. Take, say, the “unofficial” example of busywork given to us by the narrative of Freud’s relationship with Fliess. Here the otolaryngologist begins to busily postulate ever more improbable psychological theories in various unpublished documents (letters, for instance), as well as to communicate them orally; but he never publishes these ideas in scientific journals. In this case we can begin to see ways in which busywork—specifically writing and thinking but not publishing—may in fact function as a profoundly appropriate substitute for the trauma that comes from unsuccessfully performing the so-called “real work” (Leistung), which, in this case, would be constituted by Fliess’s publication and dissemination of his theories and ideas. So, it might be said that the author who is led to write continuously but does not change their writing habits or patterns—and who never concludes their writing, their work thus forever remaining in draft form—may be understood, if not as a candidate for hypergraphia or a related disorder, as having been given over to a repetition compulsion. And while the repeated behaviour may well take the form of a completely unrelated performance (so that a writing disorder, such as hypergraphia, need not be based on a traumatic writing experience), the compulsion might also be said, in Fliess’s case, to have been precipitated by the very trauma of not producing, say, publishable or rewardable writing in the past. Or, more to the point, we could say that it has been precipitated by the trauma of having produced only poor writing—work that has been severely criticised or impugned by precisely those whom one writes for.

There is much more that I could write on this subject—on repetition compulsion specifically, on busywork as an example of the same, and on writing as an incidental spur to emotional and psychological trauma. (The irony of saying so is not lost on me.) However, at the risk of giving myself only more busywork, I will save those impulses for future notes.