Freud’s Mirror: The Dreaming Brain, the Ancient Art of Memory, and Why the Science/Humanities Divide Might Be Bridgeable After All
by Eric Wargo
The strangely sensible bizarreness of dreams has led reasonable people from time immemorial to assume that these nightly visions and visitations have some kind of important role to play in our lives. They have been thought at various times to represent beguiling messages from gods or demons, to be portents of the future, and to symbolically represent our wishes. Across cultures and throughout history, theories of dreaming agree on one fundamental: that dreams are meaningful and thus can (and should) be pondered and interpreted.
Sigmund Freud, of course, was the first to bring together the folk wisdom that dreams are meaningful with the then-new scientific study of mental process. His 1900 masterpiece The Interpretation of Dreams was to turn-of-the-century psychiatry what Einstein’s paper on the special theory of relativity was to turn-of-the-century physics and what Darwin’s On the Origin of Species had been to biology four decades earlier. Freud’s basic idea, that dreams are symbolic tableaux staging the fulfillment of repressed wishes, was one of the keystones of a bigger idea that excited and invigorated intellectuals throughout Europe and America and provided the theoretical coordinates for the still young scientific study of human behavior and mental health: the idea that the unconscious mind contained secrets that could make us ill, or seriously screw up our lives, unless brought out into the light of day.
But unlike Einstein’s or Darwin’s, Freud’s reputation has not fared well in the fields he helped pioneer. Although his influence continues to be felt across the humanities—in philosophy and literary criticism, for example—his name is barely mentioned anymore in scientific psychology, psychiatry, or brain science. As these fields developed newer, quantitative methods and materials in the middle of the last century, they strove for the same rigor that was expected in the “harder” natural sciences—what has been called “physics envy” (on the model of the Freudian notion of penis envy). Karl Popper, who defined the merit of scientific theories by their testability, booted psychoanalysis, with its inherently untestable theory of dreaming (and a highly questionable therapeutic method having more in common with literary criticism than with medicine), from the scientific club. Along with Freud went the mystique of dreams and dreaming, at least for scientifically respectable types.
J. Allan Hobson, probably the foremost contemporary scientific dream researcher, set the stage for the modern Freud-trashers in neuroscience by arguing in 1977 that dreaming represents chaotic hyper-activation of the brainstem during sleep. Dreams contain no meaning; the conscious mind simply imposes meaningful order on that chaos, sort of the way a patient will see suggestive pictures in a Rorschach inkblot. Nobel Prize-winning biologist Francis Crick, co-discoverer of the structure of DNA, lent fuel, and the weight of authority, to the anti-Freudians in a 1983 paper arguing that dreams are the discharging of mental static, random and meaningless associations, a way we get rid of unwanted or unneeded information—essentially, the brain farting.
For both Hobson and Crick, dreams are interesting neurological phenomena, but they do not contain meaning that could or should be used to glean any insights about ourselves or our world. Although Hobson has nuanced these views over the years, for instance arguing that dreaming represents a kind of ‘warmup’ for waking cognitive processing, he has never let up in his antipathy to Freudian meaning-centered theories of dreaming (even in his most recent writings, you can still see the steam shoot out of his ears when he mentions Freud, which, strangely, he does a lot). Hobson’s and related neurobiological theories of dreaming are not exciting, unless you’re really into the minutiae of brainstem activation, but they do satisfy a certain weird cultural need—dare I call it a repressed wish—that scientific truth be drier and duller than the wild, colorful theories of old.
Today we find ourselves in an increasingly neuro-literate world. Educated people chat knowingly about the latest brain findings churned out by the science media machine and TED talks and radio shows like Radiolab. Brain structure and function is where it’s at, and smart people casually bandy about names of neurotransmitters and regions like the hippocampus and temporal lobe; they generally aren’t caught dead interpreting their dreams or discussing dream symbolism, lest they be branded fuzzy-headed New Agers. And if they are fuzzy-headed New Agers, they’re more likely to talk in terms of Carl Jung’s archetypes and journeys of individuation, not the byzantine dissembling and mazes of symbolic sexual innuendo Freudian interpretation typically elicited. Freudian psychoanalysis, to many people nowadays, seems very much like those old maps with “Here Be Monsters” scrawled in the blank portions.
That may be about to change, however. Quietly, with little fanfare that I can detect, a densely argued and utterly original paper was published in December 2013 in one of the highest-ranking biological psychology and neuroscience journals, Behavioral and Brain Sciences, providing a big new theory of dreams and dreaming—big in the sense of paradigm-shifting. The article didn’t report any new discovery or finding. Instead, like many paradigm-busting ideas, it simply drew together ideas from wildly disparate fields of study that had been “out there” all along but that no one had thought to juxtapose, and, in a single stroke, showed how neatly they linked up.
Reading this paper, and the 27 peer reviews accompanying it, you can hear the collective slaps of palms on foreheads, the “duh” response of suddenly seeing something that had been staring many of the top dream researchers in the face for decades and should have been obvious, yet wasn’t. You can also hear a lot of protesting: classic Freudian defensiveness. For one thing, the author, Sue Llewellyn, a professor of accountability and management control at Manchester Business School, is clearly an outsider to the field—neither a lifelong dream researcher nor an ambitious young post-doc, but a dreamer who happened to notice that these nightly tableaux closely resemble the surreal scenes contrived by pre-Gutenberg scholars and orators for memorization purposes, in what is known as the ancient Art of Memory. Delving into the neuroscience of the matter, she found that abundant evidence could support the idea that dreaming is basically the art of memory operating automatically while we sleep. It’s a simple, elegant, and persuasive answer to the perennial question, “why do we dream?”
Llewellyn’s article brilliantly managed to make the whole field of scientific dream research sit up and take notice. Defensive-sounding quibbles over the neural mechanisms involved bespoke a deeper, wider unease that, even if many of the particulars could be nuanced, the basic idea on offer was something all of the insiders would need to deal with from here on out. As neurologist Patrick McNamara put it in a Psychology Today column about her article, “it is a shame that no scientist (including myself) who specializes in the study of dreams has proposed such a theory themselves. It often takes ‘outsiders’ to prod a whole discipline forward and hopefully Professor Llewellyn’s efforts will do just that.”
The mnemonic theory of dreaming truly is a big idea partly because it gives dreams back their meaning—indeed, makes them, suddenly, very cool again—the way they were in Freud’s day. Although you have to read between the lines, Llewellyn’s big synthesis is even a pretty big vindication of Freud. If you hate how Freud has been unfairly bashed and neglected by a century of psychological science that keeps rediscovering his basic ideas while giving them new names, then that alone is reason to be excited. But its bigness lies mainly in the fact that it represents a bold stroke of truly interdisciplinary thinking, crossing the most unbridgeable-seeming of chasms: that between C.P. Snow’s “two cultures” of the sciences and the humanities.
The Art of Memory
Paradigm shifts aren’t all-of-a-sudden things. You might say this one really started a half century ago, with the publication in 1966 of a rather mind-bending book called The Art of Memory by an English historian of Medieval and Renaissance symbolism, Frances Yates. When I was an undergraduate in the film department at the University of Colorado in the 1980s, a visiting lecturer (unfortunately I no longer remember his name) recommended this book to me, saying it was literally the most interesting book he’d ever read. That sounded like a pretty good recommendation, so I headed to the university bookstore and picked up a copy. Reading it that evening at home, I felt like I was being initiated into something truly mysterious and wonderful—a whole new way of thinking not only about the mind and history, but also film, visual arts, literature, psychology. It was really one of those life-changing reading experiences.
We speak today of the “reading experience,” but for people in the ancient world and the Middle Ages—the main period of Yates’ study—books were precious, rare objects, and the aim of reading—if you could read—was not diversion. It wasn’t about having a fleeting experience with a book before moving on to the next one. If you were a traveling student or scholar studying a book in the library of some wealthy patron, you might never get another chance to look at that book again. If you were a monk or priest, your opportunities to read the Bible would be limited at best. The astonishing thing, though, is that learned men in ancient Greece or Rome, or in the Middle Ages, actually knew many books by heart; their minds were well-organized libraries of the texts that, in their studies and travels, they had had the good fortune to hold and read and study. In debates, they could assemble and arrange evidence, quoting long passages by heart, without a scrap of paper in front of them.
The oldest complete instruction manual for the techniques used in accomplishing these feats is a text once attributed to Cicero, called Rhetorica ad Herennium. Book III of this ancient text specifies that to remember a fact or an idea, an orator must break the idea down into its parts, create a vivid and interesting image to substitute for each part, and then assemble these images into a weird little tableau or scene. To remember a whole discourse or argument, he must “place” such scenes systematically throughout a familiar or well-memorized environment like a home or public plaza. Then, when the time comes to deliver his discourse, he simply takes another mental stroll through that same building, visiting each vivid image in sequence and retrieving the facts planted along that imagined route. No need to refer to any notes.
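For readers who happen to think in code, the procedure the Ad Herennium lays out can be sketched as a simple data structure: an ordered route of familiar loci, each holding one vivid image that encodes a fact, with recall amounting to a walk along the route. This is only an illustrative toy of my own devising; the route, images, and facts below are inventions, not from the text.

```python
# A toy sketch of the method of loci described in the Ad Herennium.
# The route, images, and facts are illustrative inventions.

route = ["front door", "hallway mirror", "kitchen table", "back garden"]

palace = {}  # locus -> (vivid image, fact it encodes)

def place(locus, image, fact):
    """'Place' a striking image at a familiar locus along the route."""
    palace[locus] = (image, fact)

place("front door",
      "blond Uncle Norman in a cheap Darth Vader mask",
      "The Normans invaded England in 1066")
place("hallway mirror",
      "a monk clutching a jewel-encrusted book",
      "Medieval books were rare, precious objects")

def recall():
    """Mentally walk the route in order, retrieving the fact
    planted at each locus that holds an image."""
    for locus in route:
        if locus in palace:
            image, fact = palace[locus]
            yield fact

for fact in recall():
    print(fact)
```

The essential design choice is the ordered route: the walk itself supplies the retrieval sequence, so the orator never has to memorize the order of his points separately.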
Here’s a simple example of how the method might be applied today: An American college freshman is faced with learning some major events in European history for an exam. Her professor’s study handout lists names and dates and facts that might be on the test—facts such as that the Normans invaded England in 1066. On their own, the bare facts, Normans, England, 1066, may mean little or nothing to the student—she doesn’t know what a “Norman invader” looks like; like most numbers, the date 1066 might as well be a random string of digits; and she has never been to England, although she has a lot of impressions from TV and movies and magazines constantly showing Prince William and his bride and children. If she is lucky, though, our hypothetical undergraduate may have been taught the basic tricks of the art of memory by a teacher or adviser at some point: The idea is to use all those random and stupid associations and assemble them into an image.
So, say our freshman is a maven of 60s rock trivia and knows the years of every Bob Dylan album like the back of her hand. The year 1066 thus might immediately call to her mind 1966, the year Blonde on Blonde was released. And while she may not know what an actual historical Norman looked like, she may have an uncle named Norman. Instead of repressing such “absurd” connections as they occur to her while she’s staring at her study handout, a clever, mnemonically-trained student will actually allow her naturally playful mind to make and even embellish those associations on the fly—allowing, perhaps, a mental image of a weirdly blond-haired version of her uncle (who in real life has brown hair), wearing a cheap Darth Vader cloak and mask (i.e., “in Vader”; the ‘cheap mask’ instead of a full helmet allows us to see his blond hair) slicing Prince William in half with a stroke of his light saber. For all the other major events and dates she will need to know for the test, she creates an image using the same principles, and then plants each bizarre, striking—and, yes, idiotic—image at periodic intervals in a mental walk through some familiar space, such as her dorm, or her daily route through campus. She will, I promise you, ace the test.
When Yates wrote her pathbreaking study, she took a somewhat skeptical attitude. Although there was clearly something wonderful and magical about this technique and the feats its practitioners boasted of, it didn’t make sense to her; it seemed that creating new images to remember facts would actually require more effort at memorization, not less. But the psychological study of memory over the last half century has revealed just what the ancient memory wizards were up to, and why their method really did work much better than rote memorization, or going over and over a fact until it somehow sticks (or, more often, doesn’t). Indeed, modern memory athletes still use this ancient art—it is precisely the method Joshua Foer described in his bestselling 2011 book Moonwalking with Einstein, the method that enabled him to compete in and actually win the 2006 USA Memory Championship. The ancient practitioners knew what they were doing, and they looked down with scorn on anyone who attempted to learn things by rote. They knew rote memorization didn’t work, and that it didn’t dignify our God-given mental gifts.
When that college freshman is creating an image of her uncle in a Darth Vader costume killing Prince William, she isn’t really creating anything new she needs to remember in addition to the historical fact of the Norman invasion; she is taking stuff already in her memory, the first things that pop to mind, and hooking them to each other by slightly distorting them, the way you might attach two lengths of wire by bending the ends, creating a little story so they minimally make sense together. It’s like a little playful mental art project, in other words. The resulting image is absurd and stupid, but it actually takes little effort to concoct such images, and it takes zero effort to remember them—because of their absurdity, they stick in mind automatically. It’s also fun, not work. Thus, without expending any effort or time “memorizing” a series of historical facts and dates by rote, the student simply capitalizes on existing mental associations to do the work for her, quickly and effortlessly.
The biographies of geniuses show that, even when they don’t know they are doing it, they are engaging in just this sort of mental play, treating information as toys or materials to be creatively transformed, not intimidating data to be assimilated through rote repetition.
Rewiring the Brain
Memory works by association—the linking of new information not via logic but through the way things look or sound alike (for instance puns and rhymes), as well as by linking new information to what we were doing and where we were when we learned it. When searching our mental museum for information, our search system locates what it needs by quickly following trains of very personal and idiosyncratic links, like what you were eating for breakfast when you read a particularly interesting piece of news, or remembering a fact in a book because of a coffee stain that happened to be on that page. It’s that associative illogic that, counterintuitively, makes our memories so strong, because it enables us to quickly access needed information by many alternative routes. Your cortex is a vast multidimensional net of illogical interconnections, spanning the length and breadth of your experience on earth.
Highly emotional events are especially rich in associations, such that we often can recall vividly what we were doing when we first heard planes had crashed into the World Trade Center, or where we were when we found out that David Bowie had died.
Psychologists distinguish memory for facts, or semantic memory—like “the Normans invaded England in 1066”—from autobiographical or episodic memory, the texture and tumult of our own existence. There is evidence that we are processing the former, learned semantic information all the time, below the level of conscious awareness, but that most of the events and upheavals that happen to us during the day, our life episodes, are metabolized at night, specifically during the roughly two hours each night we spend in REM sleep. New memories are forged and older ones are strengthened precisely when we are dreaming most vividly. It thus makes a great deal of sense that the vividly bizarre images in dreams, which also come during those periods of REM sleep, could reflect this nightly process of memory-making.
Lots of circumstantial evidence gathered over the past few decades points to this: In laboratory experiments, people who have learned new material remember it better after “sleeping on it” than if they don’t, and it is known that during sleep, complex material is simplified, reduced to its “gist.” Altricial animals (those born helpless and dependent, like humans, many birds, and mammals such as dogs and cats) show much more REM sleep than precocial animals (those born able to function); in other words, the more an animal needs to learn in order to survive and function, the more its brain is active at night, and the more it dreams. Rodent studies have shown that brain areas activated during daytime exploration and learning are reactivated during sleep (compared to control animals that haven’t engaged in learning). And the hippocampus, long known to play a key role in making memories, is extremely active during sleep.
One way to test the relationship between dream content and memory-making has been to expose experiment participants to very specific stimuli before sleep, such as video games, and then track whether these stimuli recurred in their dreams. Dream researcher Robert Stickgold has done these studies, and the results have been somewhat positive: Experiences during the day do recur in our dreams at night—something Freud himself knew, calling these obvious reflections of daily experience “day residues.” But obvious day residues account for only a small percentage of dream content, and even then, they are generally out of context: most dream material remains bizarre, with little obvious connection to our waking life.
It’s that “obvious” part that was always the sticking point in linking dream content to memory processes. Because they rejected Freud and everything that went with him, scientific dream researchers always took a literalistic approach to dream content, not attempting to find “hidden meanings.” Llewellyn’s hypothesis goes against mainstream dream research by proposing that those links between absurd dream content and daytime events might exist after all and that they might be non-obvious—indeed bizarre—for a very good reason. Specifically, the bizarre content in dreams bears the same relationship to a real-life event that a scene of Uncle Norman in a Darth Vader costume attacking Prince William bears to the fact of the Norman Invasion in 1066. Dreams may be showing us our private associations to new, to-be-remembered life events, not the events themselves.
In Llewellyn’s theory, dreams are little associative bundles linking daily events to each other and to longer-term material in our memory store, so that they can be indexed and found again later. Another analogy might be the way a librarian logs a new book, affixing a bar code to it and entering its information into a computer before allowing it to be placed on the shelves. What we are seeing in dreams are those bar codes, stuck on the daily events of our lives by the hippocampus (the brain’s librarian), enabling them to be quickly and easily found again later.
There is an axiom in the neuroscience of learning: neurons that fire together wire together. That is, new synaptic connections between neurons are reinforced by their joint firing. Dreams, in this new mnemonic view, are the experience of this firing: the experience of forming new mental associations to access new biographical information, the nightly rewiring of the brain.
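The axiom can be caricatured in a few lines of code: a Hebbian update strengthens a connection in proportion to coincident activity, so repeated co-firing (as in nightly replay) builds a strong link, while firing that never coincides builds none. The learning rate and activation values below are arbitrary illustrative numbers, nothing more.

```python
# "Neurons that fire together wire together": a caricature Hebbian
# update rule, with arbitrary illustrative numbers.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen weight w in proportion to coincident pre/post activity."""
    return w + lr * pre * post

# Repeated "replay" of a paired activation reinforces the link...
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)

# ...while units that never co-fire stay unconnected.
w_uncorrelated = 0.0
for _ in range(10):
    w_uncorrelated = hebbian_update(w_uncorrelated, pre=1.0, post=0.0)

print(round(w, 3), w_uncorrelated)  # the co-firing pair ends up strongly linked
```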
Making Yesterday
Bundling the day’s events together is necessary to create a coherent life narrative, or sense of self. The mnemonic hypothesis explains, in a way that previous theories can’t, why dreams are so full of recognizable themes or elements from the recent past, yet why they are almost never rendered literally—and consequently, why we seldom recognize what they refer to on waking. The theory also makes new and slightly different sense of several other commonplaces about dreams and dreaming, such as the fact that they so often involve sex or sexual symbols, and that they so often contain brilliant witticisms.
Because dreams have so much sex in them, as well as suggestive genital symbolism, Freud thought they must be basically about our repressed sexual desires, and so when he interpreted dreams he always tried to home in on his patients’ forbidden lusts. This overriding interest in sex, and the assumption that everything has some sexual subtext, is one of the easily-parodied characteristics of Freudian thought that has, especially in relatively puritanical America, contributed to its marginalization over the years. If dreams actually serve a mnemonic function, however, sexual motifs would pop up frequently because memory capitalizes on the emotionally charged nature of sexuality to form strong associative networks. It is simply because sex is so emotionally powerful—perhaps especially when we are hung up and puritanical about it—that our memory likes to use sexual associations as often as it can, because those linkages are strongest.
Sex is central to the art of memory. Even casual practice bears this out, and on his path to becoming America’s 2006 memory champion, Foer noted that “it helps to have a dirty mind” (p. 100). Although most of the ancient teachers—as well as Dame Yates—kept mum on this aspect, at least one Renaissance memory teacher, Peter of Ravenna, wrote frankly about applying his dirty mind to making vivid memory images. In fact, he regarded it as one of his trade secrets:
I usually fill my memory-places with the images of beautiful women, which excite my memory … You now have a most useful secret of artificial memory, a secret which I have (through modesty) long remained silent about: if you wish to remember quickly, dispose the images of the most beautiful virgins into memory places; the memory is marvellously excited by images of women … This precept is useless to those who dislike women and they will find it very difficult to gather the fruits of this art. I hope chaste and religious men will pardon me: I cannot pass over in silence a rule which has earned me much praise and honour on account of my abilities in the art, because I wish, with all my heart, to leave excellent successors behind me.
In our more enlightened day, you can of course substitute “whatever turns your particular crank” for Peter’s “beautiful virgins.” The take-away in other words is not that women are sex objects, but that sex, in all its forms, serves memory. Sex objects are memory objects.
Another common feature in dreams that Freud and his followers noted is the prevalence of brilliant wordplay and especially puns. A friend of mine once told me a disturbing dream in which she had attended a dinner party thrown by her older sister, where she was horrified to see her sister’s head resting on an appetizer tray. My friend had recently expressed annoyance about her sister’s accomplishments, such as her marriage and recent purchase of a new home (neither of which my friend was close to doing), so there was nothing really astonishing about the idea being expressed by this image: Her sister was ahead. My friend’s jaw dropped when I pointed this out to her.
The puns in dreams are brilliant and multifarious, linking multiple associations and utilizing multiple sensory modalities, not just the sounds of words. There are sight gags, as well as emotional and multisensory puns and situations that “rhyme” with those in real life. The brilliance of dream-thought is so excessive, so beyond our daily experience of our mundane intelligence, that people unused to recording or observing their dreams have difficulty accepting that their own measly minds could be responsible for creating these Shakespeare-worthy witticisms. It probably accounts for why, back in the day, many dreams were felt to have divine origins, and why today many people don’t remember their dreams at all—that kind of playfulness and genius simply doesn’t fit into who we think we are or who we think we should be. Witty wordplay is of course central to the consciously applied arts of memory, too: Using Uncle Norman to stand for the Norman invaders is a kind of pun, as is putting Norman in a Darth Vader costume to make him an invader.
Freud thought that puns enabled dreams to say a lot with a little—the same principle in jokes—and here the mnemonic theory is in complete agreement. Indeed there are undoubtedly many more layers of meaning even just to my friend’s sister’s head on a platter. Freud would have said the image brilliantly stated the case that the sister was “ahead” while simultaneously expressing a corresponding deeply buried death wish (standard for rival siblings); and if my friend were on his couch, he might have led her to explore other associations as well, for instance any religious associations with John the Baptist. He might have also suggested that the dream expressed the wish that the sister’s accomplishments were nothing but a trivial prelude (i.e., appetizer) to my friend’s own later achievements.
In the mnemonic view, there should be no right or wrong to such associations other than how quickly your free-associative mind can arrive at them; if free association immediately or quickly arrives at these ideas when interrogating a dream image, your memory can follow those same paths in reverse, homing in on the idea being represented via all those same avenues—and that’s the point of the dreaming process: to make an idea as accessible (that is, memorable) as possible. All the different possible trains of association radiating out from a single dream image reflect multiple paths in our brain’s associative network that keep a certain autobiographical fact like “I kind of hate my sister because she’s ahead of me” alive and available in our thoughts. Just as all roads lead to Rome, the extreme wordplay in dreams makes lots of neural roads lead to any given fact or idea, making our memories strong.
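The “all roads lead to Rome” point can be pictured as a toy associative graph, in which several independent cues each reach the same stored fact, so the memory survives the loss of any single route. The node labels below are my own inventions, loosely based on my friend’s dream, and the whole thing is an illustration rather than a model.

```python
# A toy associative network: several independent cues all lead to the
# same stored fact. Node labels are invented for illustration.
from collections import deque

graph = {
    "dinner party": ["sister's head on platter"],
    "John the Baptist": ["sister's head on platter"],
    "appetizer as prelude": ["sister's head on platter"],
    "sister's head on platter": ["my sister is ahead of me"],
}

def routes_to(target, graph):
    """Count the starting cues from which the target fact is reachable."""
    count = 0
    for start in graph:
        # breadth-first search outward from each cue
        seen, queue = {start}, deque([start])
        found = False
        while queue:
            node = queue.popleft()
            if node == target:
                found = True
                break
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        count += found
    return count

print(routes_to("my sister is ahead of me", graph))
```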
Ironically enough, though, dreams tend to fade rapidly on waking. It’s a bit counterintuitive, if they are supposed to be mnemonics. But actually, this fast fading of dreams is the best evidence that, contrary to Freudian or Jungian views, they are not “shows” put on for our benefit, something actually meant to be pondered and interpreted, but something more like a bodily function, not meant to be seen directly. Nature may not mean us to see or remember our dreams any more than it means us to see the contents of our stomach; when we do, something may be wrong. (For instance, one of the reasons many people hate puns is that they feel weirdly private, like a bodily function; and the highly pathological condition of schizophrenia has been likened to a kind of inescapable waking dreaming.)
The same thing is true of mnemonic images after the to-be-learned material is firmly fastened in memory. Like the scaffolding and cranes at a construction site, mnemonic images quickly dismantle themselves—they just vanish—once the material is fixed in long-term memory. I can rattle off my debit card number over the phone without looking at my card, and I no longer even remember the weird sequence of images I used to remember it a few years ago—something about a sailboat and a big tongue, I believe. I think there were also some naked people.
Shaving With Freud
The mnemonic theory is basically a mirror reversal of Freud’s theory: Instead of being coded communiqués smuggled out past the guard posts and barbed wire that keep unconscious material imprisoned, dreams are bundles of lived experience inserted into the vast, latent, usually unconscious substrate of long-term memory, fixing it firmly there. You could say that Freud got it exactly backwards, but mirrors get things backwards too—inverting right and left—and we can still use them to gain important knowledge of ourselves. We can still use them to shave with. The tropes the brain uses to hook new material onto older material are the same in both theories. This is why, even if Freud was wrong about dreaming’s basic function, his method, free association—along with the basic assumption that dreams can be interpreted—was exactly right.
That “censor” that in Freud’s theory stood as the guardian protecting us from our real thoughts and desires was always one of the theory’s big problems, because it assumed a kind of vigilant conscious agency that was, nevertheless, not conscious. When you reverse the flow of information, the need for a censor goes away, and the whole point becomes not censoring. When the student free-associated on Normans, 1066, and invading England, she took the first things they called to mind for her. Free association is taking the first thing that comes to mind without censoring or rejecting it—an essentially uncritical attitude to our own thoughts, which is facilitated during sleep when the prefrontal cortex, the brain’s hyper-judgmental naysayer, is largely offline and other brain areas are allowed to play, like children making mischief while daddy snores on the couch.
This is why, even if dreams are not necessarily wish-fulfillments, there is no longer any reason to insist that dream analysis and interpretation would not be a useful, rich source of insight, for instance in therapy or other projects of self-analysis. It only makes sense that our long-term memories should be organized in terms of our deep and lasting priorities, hopes, fears, and insecurities—the things that matter to us. And indeed, many dreams would be expected to show our fears and insecurities as much as our wishes—a fact that Freud himself was forced to concede later in his life, for instance when pondering the obsessively fearful dreams of war veterans. Dreams reveal the deeper strata of our selves.
The n of 1
There’s a painting by Magritte that I love, called “The Voyager” (I love all his paintings): It shows a floating ball of disparate objects, a lion, a tuba, a chair, all packed together. This is something like what happens in dreams: Disparate ideas get compacted into strange, composite objects, wonderful, often funny, often whimsical … and usually difficult to share, let alone explain.
Dreams are hard to share because memory is a very personal, private place. The associations that “excite” our memory, even the unsexual ones, are uncomfortably personal, and it would make us cringe to have others spend too much time there, for the same reason we might not like other people seeing our Google search history. This is part of memory’s genius: Every person’s individual experience gives her a completely unique and vast set of associations—including preoccupations, interests, turn-ons, and things we are merely curious about—that are readily usable pegs for linking new experiences and newly learned facts. Most of those associations are so idiosyncratic, so dependent on our personal life experience, that they would never make sense to anybody else, at least not without more explanation than it would be worth—like explaining a private in-joke to someone not “in.” They wouldn’t make sense because, well, “you had to be there.”
This presents a problem for researchers who would try to study this process objectively. Even if an experimental protocol could be designed that somehow confined a sufficiently large sample of people in the same environment for a few days, subjecting them to the same limited set of experiences so they would theoretically all dream about some of the same things during their nights of confinement, their vastly different lives prior to the experiment would guarantee their dreaming brains would form vastly different associations to those same events. This is what Stickgold’s studies using video games hinted at: The bulk of our dreams isn’t obviously relatable to daily life. To prove that dream bizarreness actually reflects direct or indirect associations to the day’s events, you need to go a step further and have subjects do exactly what Freud had his patients do: free associate on their dreams.
Yet a scientific dream researcher may have no experience in psychoanalytic methods of dream interpretation—it is as much an art as a science. And the very concept is anathema to most neuroscientists, who will have no truck with intrinsically unreliable and subjective Freudian methods. Even though Llewellyn’s theory offers a radically different functional explanation for dreaming than the one Freud gave well over a century ago, similar evidential problems put it in the same company, well beyond the pale of the brain sciences. The latter rest entirely on replicability—across large numbers of subjects and across time. But meaning is intrinsically personal. As numbers increase, meaning dwindles, and vice versa. The solitary dreaming subject, though, is an n of 1—the ideal subject for a humanistic investigation, since the humanities are the study of meaning, but totally inconsiderable as scientific data. How, for instance, does the researcher know which of the patient’s “free associations” are valid?
In his numerous books on dream science, Hobson wastes no opportunity to ridicule and deride Freudian ideas. He admits to harboring an extreme antipathy to the rude habit of old-school Freudians of telling you your repressed wishes about your mother based on any random dream or slip of the tongue. In his response to Llewellyn’s article, Hobson praises the elegance and originality of her theory but challenges her to design an experiment that could actually be used to test it, adding that he himself cannot imagine one. He specifically forbids any form of “anecdotal self-analysis”—providing examples of one’s own dreams and demonstrating how they support the theory (the standard move in psychoanalytic writing)—and declares that “we must not tolerate neo-Freudianism, no matter how brilliant.”
It sounds highly prejudiced, but Hobson is within his rights as a scientist to impose such restrictions: Once you start down the path of hermeneutics—interpretation, the kind of thing you do to “texts” like poems and films—you have left the realm of quantifiable results whose significance can be measured statistically. But is such a prohibition fair, or even reasonable, when the object of study appears, through quite a mass of circumstantial evidence, to be a phenomenon concerned intrinsically and deeply with meaning? Meaning is inherently specific—specific to cultures, specific to families, and especially, specific to individuals. Dreams seem to be about that last, very specific, very subjective kind of meaning. What if there are natural phenomena that are intrinsically unreproducible and unquantifiable, precisely because they involve subjective meaning? What if meaning—the business of the brain—is precisely the place where the methods of science break down and require an appeal to something beyond science?
Hobson’s objection to Llewellyn’s theory—that it could never be proven experimentally—sounds on one level like a defense of the strategy of searching for one’s lost keys only under the streetlight, because that’s where the light is best. Wishing—wishing, say, that dreams were not ingenious symbolic tapestries woven from the stuff of our personal lives and susceptible to lit-crit-style interpretation—does not make it so. Even if the mnemonic hypothesis is intrinsically intractable to science, it seems very reasonable that it is a good approximation of what dreams are really doing. Is science supposed to keep searching for the keys in the pool of the streetlight just because it can’t, by itself, venture into the dark where the answer reasonably seems to lie?
Being Asses Together
The gulf between the humanities, which study meaning, and the sciences, which study physical, measurable processes, is so wide as to sometimes seem unbridgeable. The languages spoken in these realms are so divergent, and their assumptions so different, that no “sewing up” of the gap is really conceivable once you zoom in on the problem. The philosopher Slavoj Žižek has called this “parallax”—a basic dualism that cannot be reconciled—and he cites as an example the increasingly vociferous debates between neuroscientists and philosophers over the problem of consciousness. Neuroscientists churn out books on consciousness almost monthly; some of these books are brilliant and persuasive. But from the subjective, “n of 1” point of view, no explanation involving neural circuitry or brain chemistry is ever going to make the “hard problem” of why and how we feel alive and aware—the problem at the heart of the meaning of life—go away.
No one, I think, would claim that the question of dreams is quite as fundamental as that of consciousness. Yet ever since scientific psychology jettisoned Freud and all he stood for, the study of dreams has in a sense been like that guy circling around the base of the streetlight looking for his keys. The scientific method is powerful, but certain questions—questions involving meaning—cannot be fully answered with it. When science then denies that those questions even exist and persists in some tangential line of inquiry that produces endless new results but never seems to go anywhere, it looks slightly foolish—or indeed, neurotic.
Žižek suggests that we can’t do better on such questions than simply switch back and forth between alternate viewpoints, treating them as parts of a larger gestalt—like the famous duck-rabbit image, which admits two separate, non-coherent, non-collapsible readings. On the subject of dreams, that would mean essentially institutionalizing the split that has persisted for a century in our culture. On the one hand, you have unashamed nonscientists happily interpreting dreams according to whatever psychoanalytic, Jungian, astrological, or shamanic hermeneutic they please, unpoliced by any kind of empirical or theoretical rigor; on the other, you have neuroscientists circling in that pool of light where evidence can be obtained and is robust, endlessly churning out new, unsatisfying theories and never answering the question, what are dreams? Neither side seems aware of the existence of the other, yet both have something to offer. Is there a way the sciences and humanities can set aside their differences and reach some kind of compromise?
As a lover of the unreason exemplified in dreams and mnemonic artistry, I am sort of shocked to hear myself say this—but there is that quaint notion of reason. Nobody talks about it anymore, at least not in universities. Scientists make no room for it, because their method is empirical. But again and again, on some of the most fundamental questions, science runs aground on limits where its methods break down, and these are the limits of human subjectivity and meaning. Is there any reason why reason, that old diplomat, couldn’t bridge this divide? Could the neuroscientists and the humanists somehow reach a handshake agreement on questions like dreaming, or consciousness itself, and work in tandem, each side compromising or ceding partway on its fundamental methodological premises when reason shows its own methods are of limited value in answering a particular question?
The mass of circumstantial evidence surrounding the mnemonic theory seems enough to say, yes, this sounds like a reasonable answer—the most reasonable so far—to the question of what dreams are, not to mention possibly the most interesting. When the sciences said good riddance to Freud, they replaced the many-colored dreamcoat of the Biblical dreamer Joseph with a drab white laboratory smock. The new theory puts the color back in; it tells us we can go back to delving deeply into our dreams, extracting fascinating meaning from them, as well as a better sense of our latent creative aptitudes. It suggests how we can “dream while awake” by practicing the arts of memory in our daily lives. It’s the first dream theory to come along in decades that seems to satisfy both the scientist and the humanist in us. Yes, you can never quite prove it, but that lack of certainty seems to be the difference between the many-colored theories that inspire us and the monochrome ones that don’t.
The reasonable thing to do would be to treat the intrinsically unprovable mnemonic theory as an assumption. We all know what happens when you assume—you make an ass out of you and me—but maybe it is sometimes preferable to be asses together than to stubbornly be the solitary streetlight-hunting kind of ass who constantly goes over old ground and never gets anywhere new. The age-old problem of dreams, and the promise of the new technicolor theory, is inviting us to be reasonable asses together, out in the perilous shadows.