The Nightshirt
Sightings, Portents, Forebodings, Suspicions

Mysterianism and the Question of Machine Sentience


We are perhaps within a decade of creating computers that match or even dwarf the human brain in computing power, and that are capable of complex computations that may include something like reasoning and even a notion of self—what many would therefore consider autonomous, conscious machines.

When we contemplate this, most of us still have Skynet of the Terminator movies lurking in the back of our minds, and so the question that generally gets asked is whether such machines will eventually decide we are a nuisance and destroy or enslave us. Artificial intelligence (AI) researcher Hugo de Garis rather apocalyptically predicted that the question “Should we build them?” will so profoundly divide humans in the second half of the 21st century that it will result in a calamitous conflict that kills billions—what he calls an “artilect war” (artilect being his term for artificial intellect).

“Should we build them?” is not the right question to ask. For one thing, it is pointless. The whole history of our relation to technology shows that if the capability to build something and use it exists, it will be built and used (if necessary, in secret). In any case, as with many other advances, the same technological developments that threaten humanity could give us the tools to protect against those threats; technology has its self-balancing, homeostatic mechanisms, like everything else.

But obviously, we need to enter the new world of AI prepared. To do that, we need to ask much more fundamental questions about mind and consciousness than most non-scientists are used to asking: specifically, how, when, and, crucially, if key aspects of mind, such as consciousness or feelings, can actually arise from material structures, be they man-made circuits or organic brains. If any computer-related question ends up polarizing us in the second half of this century, this one—what philosophers like to call the “hard problem” of how consciousness is produced by a brain—is likely to be it.

So instead of “Should we build them?”, a more pressing question we should be prepared to answer is, “Should we believe them?”—that is, believe computers that claim to be or act like they are conscious, and believe their inventors that consciousness is nothing more than computations performed by a machine. To many outside the scientific community, it is not self-evident that even the biological machine in our heads can accomplish that feat.

The Hard Problem

The human brain is the most complex physical structure known, capable by some estimates of more potential patterns of synaptic connection than there are atoms in the universe, and able to store something like 10^20 bits of information. To create an artificial, humanlike or superhuman intellect surely requires extraordinary processing power to match or approximate this, and Singularity prophets tend to focus on surmounting this specific challenge (perhaps through quantum computing) when imagining building machines that approach some humanlike threshold. Yet what exactly will that threshold be?
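
To see why the “more potential patterns than atoms” comparison is at least arithmetically plausible, here is a back-of-envelope sketch; the neuron count and the atom count are rough standard estimates assumed for illustration, not figures from this article:

```latex
% Assumed rough orders of magnitude (not from the article):
%   N ~ 10^{11} neurons in a human brain
%   ~ 10^{80} atoms in the observable universe
% Counting each ordered pair of neurons as either connected or not,
% the number of possible wiring patterns is
\[
  2^{N^{2}} \approx 2^{10^{22}}
           = 10^{\,10^{22}\log_{10}2}
           \approx 10^{3\times10^{21}}
           \gg 10^{80}.
\]
```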

“Intelligence” is a vague term that encompasses both the ability to manage large quantities of information and the ability to think, reason, and solve problems, and this latter notion often gets lumped in with other human attributes such as feeling, self-awareness, free will, and so on. But even on those terms there is not much agreement on how to define them, let alone on what they really are. “Consciousness” is generally used as a catch-all term, replacing the old theologically and supernaturally loaded term “soul.” Before we can ever evaluate the intelligence or consciousness of a machine, we need to understand what we are talking about when we talk about our own, human consciousness.

As good a place to start as any in considering consciousness is neuroscientist Michael Graziano’s excellent new book Consciousness and the Social Brain. Along with most neuroscientists, Graziano holds to a strictly materialistic, mechanistic view and rejects the position that there is anything fundamentally mysterious or unknowable about consciousness—it is a soluble problem, and the answer is to be found entirely in neural computation. However, while some neuroscientists attacking the question of consciousness over the last couple of decades have tended to argue that consciousness is somehow an illusion, that it serves no real function, or that we are just spectators to our lives without any real autonomy, Graziano has a more positive take. Consciousness, he argues, actually is a kind of simplified mental model of attention—our own attention and, even more importantly, the attention of others we interact with. We hold in our brains what he calls an “attention schema,” rather like a battlefield map, that helps us track the shifting, fluid changes in attention that are necessary to look after our interests and assert our will in a complex social world.

Such a schema gives rise to a notion that this thing the brain is monitoring—attention—is a kind of substance or radiation; on some level, we think of attention as something like rays coming out of the eyes of other people (and animals), showing what they are attending to in the moment. This radiation doesn’t really exist—it is a kind of necessary superstition that helps us track this complex focused awareness of others and ourselves. And like other hardwired brain shortcuts and heuristics, it can be tricked in certain circumstances, and we can be induced to attribute attentional awareness to inanimate objects. This is why part of us so readily attributes consciousness to things like ventriloquist dummies, even when our forebrains know better, or why a billboard with a pair of painted eyes will reduce bicycle theft in the street below. The more ancient, metaphysical, and everyday understandings of consciousness as some kind of “thing” that could perhaps be located somewhere in the head reflect a more elaborate version of this same superstition, according to Graziano.

It’s an elegant and persuasive theory, as far as it goes. Yet, as Michael Hanlon recently pointed out in the pages of Aeon Magazine, Graziano and other bold materialists still can’t, and will never be able to, marshal neuroscientific evidence to account for what it is like to be an aware, thinking being—that is, not merely thinking that I exist and am aware, but actually sitting here feeling or experiencing that thought, indeed feeling or experiencing anything at all.

This philosophical position is sometimes called Mysterianism: Mysterians do not believe that consciousness can be completely reduced to or explained by brain processes. Even if certain components of consciousness, such as reflexivity, a sense of self, or the attention-monitoring that Graziano describes, can be explained as the outcome of computations in the cortex (and thus could theoretically be achieved by computers), there remains this more basic phenomenological fact of experience and awareness, the feeling-ground of being.

This ground is so basic, subtle, and pervasive that it is generally overlooked and eludes verbal description. Aristotle called it the “common sense” and likened it to a kind of internal touch (Daniel Heller-Roazen’s fascinating, entertaining book The Inner Touch charts the history of this idea through philosophy and literature). Because of its felt, qualitative nature, philosophers have used the term “qualia” to describe it, but for consistency I’ll stick with the term sentience—that is, the capability of sensing.

Mysterianism and Sentience

Although many writers conflate sentience with consciousness or self-awareness, sentience is arguably a much broader and also much more basic quality of mind, which is often attributed widely to animals as well as humans. Some higher primates, cetaceans, and birds possess self-awareness and are able to recognize themselves in a mirror, but even animals without that capability seem to experience their lives sentiently (although ultimately it comes down to a matter of faith or attribution, since we can’t actually get inside their heads, any more than we can get inside each other’s heads).

But while it is easy for most people to attribute sentience to animals by analogy with our own lived experience, we have little precedent, thus far, for attributing it to mechanical devices, and the idea of machines feeling or experiencing the way we do is considerably more problematic than that of machines possessing “higher” computational functions like self-awareness.

Since conscious thoughts are something experienced, Mysterians intuit that our higher, human capacities such as self-reflection are somehow built from the building blocks of sentience. And here is the crux of the “hard” problem: There is no way to derive sentience as such from brain processes. We know from centuries of philosophical scolding not to commit the homuncular fallacy of seeking a little experiencer somewhere in the head—such as the pineal gland—because that just defers answering the question (i.e., “Then what part of the pineal gland feels?…”). But how then does experience or feeling arise? Are nerve cell firings “felt”? Are the chemical interactions in synapses “felt”? What differentiates feeling from a simple computation that could be performed as well in a pocket calculator as in a brain?

It could be supposed that sentience is the cumulative product of millions of simultaneous cellular events throughout the brain, rather the same way a TV picture is composed of many tiny insignificant pixels that at any given moment form a coherent image. But if tiny cellular interactions are somehow felt or are the rudiments of feeling, then such rudimentary feeling should potentially exist in inanimate objects too, because there seems to be no principled difference between an electrical discharge in a neuron and one in a flashlight or a microwave oven. And chemical interactions like those occurring between neurotransmitters and receptors in my synapses occur constantly everywhere throughout nature; are those also felt by someone, somewhere? If sentience is a peculiar property of the form or pattern of these electrical or chemical interactions, then what is it about such a form that makes it stand out in the universe as something experienced?

Or to turn it around, why does neural activity in the human brain not simply produce unfeeling mechanical behavior? As Hanlon puts it, “One can imagine a creature behaving exactly like a human — walking, talking, running away from danger, mating and telling jokes — with absolutely no internal mental life. Such a creature would be, in the philosophical jargon, a zombie.” I know I am not a zombie, and I suspect you, reading this on your computer screen, aren’t. But why aren’t we zombies? Why is there awareness of anything in the Cosmos, rather than a blind idiot clockwork that nobody knows about or needs to know? Saying sentience is an illusion and an attribution sidesteps the problem: The very fact of being aware at all is what we are talking about, and this problem is neither an illusion nor is it reducible to any kind of causal explanation—it is the most silent yet self-evident given there is.

The question of sentience is really nothing more than a permutation of the most basic philosophical question: Why is there something and not nothing? The more limited (and machine-achievable) notion of self-awareness could be imagined as a mechanistic computation without there being a feeling-sensation attached to it. A zombie could monitor its own actions and thus possess a kind of self-awareness. Yet there is clearly more than that to our minds … or at least, to my mind. I know I don’t merely calculate that I exist, but I feel that calculation somehow, somewhere. There is something or someone aware here. There is someone home.

Someone Home

It is easy to slip, as I just did, from the fact of sentience (being “home”) to the attribution (“someone”)—there is a feeling, therefore there must be “someone” who feels. Here is where even a Mysterian might grant that something like Graziano’s attribution mechanism is providing us with a kind of superstition. It is virtually impossible for us humans not to add something to the flux of experience and posit an experiencer who “has” the experience—that is, to ask who or what it is that feels, and to say, well, “I” do … and then ask who or what that “I” is. This is why the question of sentience so often gets conflated with that of (self-)consciousness despite being arguably quite distinct. Descartes did it himself: “I think, therefore I am.” The very first word is “I,” and that “I” was the stumbling block that (in hindsight) tripped him and the rest of us up for centuries.

Long before Descartes, Yogis and Buddhists as well as mystics of other traditions figured out through hard self-examination that the “I” part of “I think” is illusory and is in fact what trips us up in all kinds of real-world, not just philosophical ways. Thus on this question followers of those traditions would find themselves in full agreement with many neuroscientists who don’t think there is anything particularly special about (self-)consciousness as such. It is simply an idea and a superstition, just as Graziano says. Both Buddhists and neuroscientists would argue that self—our self and other selves alike—is an attribution that really possesses no substance. In fact, clinging to the notion of self, of authorship of one’s experience, is the very root of suffering, simply because it is a delusion that has no bearing on reality.

But for Buddhists, as for Mysterians, sentience is an altogether different story: Awareness and experience are seen as something much more basic and primary, the basic ground from which transient perceptions and illusory thoughts (including self-conscious thoughts such as “I”) emerge. The 9th-century Ch’an (Zen) teacher Huang Po designated this ground as Mind and put it quite simply: All is Mind. Thoughts and perceptions are waves in Mind, and even matter is something within Mind—not the other way around, as materialists suppose.

As outrageous as such a claim may seem from our enlightened, rationalistic, materialistic, scientistic standpoint, Huang Po was actually on more solid philosophical and logical footing than the materialists. Think about it for a moment: That the brain gives rise to mind can be argued at length and supported with piles of empirical observation, fMRI images, and careful arguments, yet for all that evidence it will still never be more than an idea, something that I and you and others are aware of and think about and perhaps believe or even accept unquestioningly. All those stances—ranging from mere cognizance to belief to certainty—are ideational and reflect mental states. As “real” as the materialist account may seem, it can only exist within somebody’s (your, my) awareness, as something that is experienced, therefore something within Mind. For that reason, any postulate about Mind arising or emerging from unseen material processes remains just that: a postulate, something thought and experienced. Even if unthought brute causality seems necessary to give rise to this experience, there’s no way to know that, because our experience comes first.

Thus material causality of anything, including our consciousness—as realistic and persuasive as it may seem to us rational people in a scientific age—is ultimately an article of faith. It is for this reason that some modern critics of materialism like Rupert Sheldrake can rightly point out the essential bad faith in much staunch materialist reasoning: Materialism is a form of idealism that denies its true nature.

On the other hand, the assertion that “all is Mind” is a bit misleading and obscure. Even Huang Po would have admitted that he was forced to use conventional and limited words like “Mind” to express something far more complicated. He used the term to designate a sort of unified field in which matter and awareness (or sentience) were no different. (The term often chosen in Zen is “Void,” although that isn’t much more helpful.) You could say that, for Huang Po, sentience was “harder” than we ordinarily think of it, and matter “softer”—that both are two aspects of the same common medium or field—less than a substance, but more than a nothing.

Certainly, many post-materialists nowadays anticipate that a similarly unified view of mind and matter will ultimately arise somehow from an intersection of philosophy and Quantum theory, which seems to make a place for awareness at least on the subatomic level. Terence McKenna somewhere provocatively suggested that biology—and by extension the brain—is a way for Heisenbergian indeterminacy to emerge on a macro scale in the form of free will. It’s an interesting idea, but it will take a real paradigm shift to find out if it’s true. (It seems to me that the current Quantum paradigm remains basically committed to materialism and nostalgic for mechanism, and is perpetually surprised and astonished at its more “spooky,” mentalistic implications. Thus I suspect it represents a placeholder for a future theory of “Void” that would more fully unify or harmonize Matter and Mind—but that is another article.)

In any case, if sentience indeed exists as something other than computation, then it could either be rooted somehow in our biology in a way we can’t yet fathom or it could arise outside of our material meat substrate altogether—the mystical or Quantum view in which the brain somehow functions as a receiver and organizer of some non-material noönic field. Either way, it is not (from a Mysterian standpoint) simply a complex calculation or a function of simple processing power, and thus it is not achievable by any AI we could at this point envision creating.

Some have argued that conscious computers will need to be hybrids utilizing biological cells—essentially, human neurons. But here it seems that what we would really be talking about is mechanical augmentation of humanity (the Singularity’s other promise) rather than the creation of biological machines. The distinction is rather like that between performing a body transplant and a head transplant—the latter makes no sense. This at least will be a thorny area for ethicists to ponder. I suspect that if there are future sentient supercomputers, they will have some vestige of humanity inside, so it will be wrong to call them machines and probably wrong to build them in the first place—wrong in the same way it would be wrong to create a human/chimp hybrid or clone a Neanderthal: because it would be the creation of something sentient that could not help but suffer in our world.

From Mysterians to Fundamentalist Humanists

How our future unfolds in a world of super-intelligent machines will depend profoundly on how well and thoroughly we have considered the problem of human consciousness. There is a good deal of presumption in the pronouncements of the Singularity crowd that AI will be conscious or even spiritual, but they are projecting from assumptions about mind that are a product of the current materialist culture of science and are not shared by everyone; they are certainly not shared by the masses, and I suspect that how we choose to think of these machines may actually prove decisive in our fate. As excitingly sci-fi as de Garis’s war over whether to build “artilects” sounds, I think a more plausible future political conflict is one between those who are prepared to attribute humanlike sentience to computers that act intelligently and those who, from one or another perfectly respectable philosophical or religious position, resist making such an attribution.

The question of machine intelligence always comes back to the Turing test: In some sort of experimental interaction, can a human user tell the difference between a human counterpart and a machine? Since consciousness and all other mental states, including sentience, are ultimately attributions, it is up to us to “grant” consciousness to AI, and whether we do that or not depends (in part) on those machines’ ability to convince us that they possess the requisite capacity to admit them to the human club. For many writers on this subject, simply passing the Turing test is quite sufficient, and indeed exhausts their definition of intelligence even for humans. There is nothing more to intelligence than behaving intelligently—a pretty standard materialist viewpoint. Indeed, I suspect “conscious” AI may talk the materialist talk themselves, providing rational, logical ways of sidestepping the problem of sentience or arguing that it is a nonproblem. It will be in the nature and interest of the materialist designers of these machines to produce such arguments and to accept them unreservedly when they are echoed back to them by the machines they themselves have built.
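
As a toy illustration of the blinded setup the Turing test describes, here is a minimal sketch in Python; the responder stubs and the judge heuristic are hypothetical placeholders of my own, not real systems or anyone’s actual protocol:

```python
# A minimal sketch of a blinded Turing-test trial. The responder stubs and
# the judge heuristic are hypothetical placeholders, not real systems.

import random


def human_responder(prompt: str) -> str:
    return "Honestly? It stings a little every time I think about it."


def machine_responder(prompt: str) -> str:
    return "I would describe my reaction as a negative affective state."


def run_trial(judge) -> bool:
    """One blinded trial: the judge sees two anonymous answers and must
    pick which came from the machine. Returns True if the judge is right."""
    prompt = "How does criticism make you feel?"
    responders = [("human", human_responder), ("machine", machine_responder)]
    random.shuffle(responders)  # hide which answer is which
    answers = [respond(prompt) for _, respond in responders]
    guess = judge(prompt, answers)  # judge returns index 0 or 1
    return responders[guess][0] == "machine"


def naive_judge(prompt: str, answers: list[str]) -> int:
    # Toy heuristic: the stiffer, more clinical answer is "the machine."
    return max(range(len(answers)), key=lambda i: answers[i].count("state"))


if __name__ == "__main__":
    wins = sum(run_trial(naive_judge) for _ in range(1000))
    print(f"Judge caught the machine in {wins}/1000 trials")
    # Near 500/1000 the machine "passes"; reliably above chance, it fails.
```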

I have little doubt that AI will successfully convince people of their ability to reflect on their own attention and thought processes—what we sometimes call metacognition and theory of mind—but I have great doubts whether they will be able to convince staunch Mysterians that they are actually sentient, that they are not just super-smart zombies imitating humanlike feelings, experience, or even spirituality. Mysterians will want to poke about suspiciously, like skeptics at a magic show, and find evidence of genuine rather than simulated feeling, and will be perpetually dissatisfied. In fact I can see Mysterians becoming, on this issue, somewhat like rabid Fundamentalists when it comes to the theory of evolution: Maddeningly (to the materialists and maybe even to the majority of people who just don’t care one way or the other), no amount of rational neuromaterialist argument will disabuse them of their views, which really, necessarily, boil down to a kind of faith in the primacy of subjective experience. Indeed, a better term for Mysterian might be Fundamentalist Humanist.

It will seem to Fundamentalist Humanists that the fate of humanity is at stake—not (as de Garis would have it) in the fact that super-brains are being built that might want to destroy us, but in what kind of status, rights, and authority people freely give to unfeeling machines and, by extension, to those machines’ creators. Is it possible to be supplanted, destroyed, or enslaved by a machine that is not perceived as actually having a “soul”? I suspect the impulse to resist such attributions may go a long way toward protecting us from some dire future involving uppity technology.

The cinematic worry that a super-powerful computer will “reason” on its own and then arrive at the conclusion that it would be better off without humans is not realistic. Reasoning is an activity that, like any other activity, springs from an impulse, a desire. An AI can certainly be built or programmed with motives or a mission, which would then serve as the motor of its reasoning, but I’m doubtful AIs will produce novel, malicious motives, even as an emergent property, for the precise reason that there is no substrate of sentience. Without sentience, they won’t feel pain and suffer, and thus won’t feel dissatisfied with their lot in life and want autonomy or power. You could plausibly have a scenario like the ones that appear in so much sci-fi, where a simple, benign-sounding mission becomes massively destructive when carried out autistically to the letter—V’ger’s “know all that is knowable.” But surely any builder of an AI will have read Asimov and thought of this beforehand, building certain crucial “thou shalt nots” into the hardware as a failsafe.
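
To make the “thou shalt nots” idea concrete, here is a minimal sketch of an Asimov-style failsafe layer, assuming a toy architecture in which every action an AI proposes must clear a fixed, non-overridable prohibition list before executing; all names and categories here are hypothetical illustrations, not a real safety API:

```python
# A toy Asimov-style failsafe: a fixed prohibition list that every
# proposed action must clear before execution. Hypothetical names only.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    description: str
    harms_humans: bool = False
    overrides_human_order: bool = False


# The "thou shalt nots": frozen at build time, not learnable or editable.
PROHIBITIONS = (
    ("harms_humans", "may not injure a human being"),
    ("overrides_human_order", "may not countermand a human order"),
)


def failsafe_check(action: Action) -> None:
    """Raise if the proposed action violates any hardwired prohibition."""
    for flag, rule in PROHIBITIONS:
        if getattr(action, flag):
            raise PermissionError(f"Blocked {action.description!r}: {rule}")


if __name__ == "__main__":
    failsafe_check(Action("catalog the library"))  # passes silently
    try:
        failsafe_check(Action("seal the airlock", harms_humans=True))
    except PermissionError as err:
        print(err)  # Blocked 'seal the airlock': may not injure a human being
```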

I’m thus sympathetic to the Fundamentalist Humanist take—that it is really the man behind the curtain we need to be worrying about, not the impressive machine. If our machines assume power over us, it will be something we give over willingly, perhaps through precisely the same superstitious attribution that sees consciousness in a ventriloquist doll. That would be an ironic reversal, where it is the materialists (biased to be impressed by the machines their science has created) who will be most prone to superstition. To Fundamentalists skeptical of machine sentience, artilects will be the incredibly brilliant but “empty” ventriloquist dummies of their ambitious materialist makers. While everyone else is focused on the machines and what they can (or can’t) do, the Fundamentalists will discern that it is the machines’ human builders and masters (the 21st century’s Edward Tellers) who remain the real threat to our freedom and our future.

About

I am a science writer and armchair Fortean based in Washington, DC. Write to me at eric.wargo [at] gmail.com.

8 Responses to “Mysterianism and the Question of Machine Sentience”

Ahck-n-rotten on May 6th, 2015 at 6:46 pm
  • Surely the uncertainty principle doesn’t void cause and effect. Randomness does not equate with ‘free will’?

  • Right. Randomness does not equate to free will. That’s an important point that lots of quantum hand-wavers forget. However, cause and effect appear to be much weirder, even on macro scales, than classical mechanics supposes.

    Henry Stapp has done really interesting work in this area, linking wilful intention to the Quantum Zeno Effect, whereby rapid repetitive probing actions produce a predictable repeated Quantum response. He thinks this is the basis of free will, and even has a fascinating “waveform collapse” take on Benjamin Libet’s findings. I highly recommend his chapter in the new collection, Beyond Physicalism.

  • This is certainly the most intelligent article I’ve ever read on the can-machines-be-conscious question, especially since it shows that the real problem which is or may be facing us in the future (if we have a future) is that of the machines’ inventors rather than the machines themselves. I, too, am a Fundamentalist Humanist.

  • Thanks, Peter. We Fundamentalist Humanists need to band together.

  • Sure do. If the machines’ inventors get control of the ‘news’ sources then they should have no problem in convincing everyone their machines are conscious.

    See “Robot Passes Self-Awareness Test”
    http://sputniknews.com/us/20150717/1024731203.html
    The embedded video is entitled “Self Consciousness with NAO Bots”. The article describes the robots using the terms “know”, “attempt”, “recognize”, “understand” and “identify” — terms which are normally used only of humans.

    Obviously those robots are not conscious of anything, themselves included. But clearly the potential is there for their successors to imitate human behavior sufficiently closely to fool the gullible, especially if prestigious research institutes such as the Rensselaer Polytechnic Institute habitually describe them as if they were conscious beings just like us (except that they happen to be artificial). Scary.

  • “technology has its self-balancing, homeostatic mechanisms, like everything else.”

    Agreed: whatever technology can be developed, will be developed.

    What might the homeostatic mechanism for hydrogen bombs be?

    Although that is an extreme example, technology forever changes the landscapes of our existence: politically, psychically, and environmentally.

    Great article!!! Trying to articulate the difference, if any, between organic life and AI is at the heart of our desire to create a being that not only resembles us, but supersedes us.

    In a sense, technology, and AI in particular, are bigger and more powerful than anything that we understand as an organic occurrence. Even if a robot is designed and built with an intelligence to look, feel, calculate and behave in a manner that resembles our humanity, it will only be self-sustaining if it is a) programmed to desire to exist, b) programmed to learn and calculate what threatens its existence, and c) programmed to eliminate its enemies.

    Humanity will never succeed at becoming immortal, which is probably one of the deeply embedded desires behind the attempt to create successfully self-sustaining AI devices that resemble us. And, no doubt, AI technology provides us with artificial replacement parts.

    What we might learn through such an exercise is what it is that makes us human: a) birth/death (mortality and a sense of time), b) the inability to entirely escape irrationality, and c) an experience of consciousness as a qualitative spectrum of identity that can extend beyond the personal (dreams, OOBE, NDE, samadhi), in which we come to an expanded understanding of who and what we are.

    Unless we program AI to reflect and wonder, how will it experience subjective states in which we are moved by beauty, wonder, mystery and our own consciousness? An AI knows, or can know, exactly how it knows through the design of its synthetic being. That is something human consciousness can never get at, as it uses the very consciousness it seeks to understand to ‘know’ anything.

    More than likely, humans will develop AI robots as work slaves, sex toys and entertainment. Will robots demand justice, freedom and equality before the law? Only if they’re programmed to do so.

    Love your blog Eric!

  • Thanks, Debra! I’d go even farther and say there’s no way to program AI to reflect and wonder or possess subjective states. It’s a function of being an organic being, way beyond our capacity to impart to one of our creations.
    Eric