Scientists struggle to define consciousness, AI or otherwise.

Could AIs become conscious? Right now, we have no way to tell.

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will all but disappear.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require answers to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Consciousness in Von Neumann computers

For a computer to experience the vast repertoire of internal states accessible to human beings, its hardware presumably needs to function somewhat like a human brain. Human brains are extremely energy-efficient analog “devices” capable of high levels of parallel processing.

Modern computers, based on Von Neumann architecture, are not any of these things—they are energy-intensive digital machines composed primarily of series circuitry.

Von Neumann computer chips physically separate memory from processing, requiring information to be retrieved from memory before calculations can be performed. “Classical Von Neumann computers have a separation between memory and processing. The instructions and the data are off in the memory, and the processor pulls them in, as much as it can in parallel, and then crunches the numbers and puts the data back in memory,” explains Stephen Deiss, a retired neuromorphic engineer from UC San Diego.

This restriction on how much information can be transferred within a specific time frame—and the limit it places on processing speed—is referred to as the Von Neumann bottleneck. This bottleneck prevents our current computers from matching—or even approaching—the processing capacity of a human brain, which is why many experts think that consciousness in modern-day computers is highly unlikely.
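
To make Deiss’ description concrete, here is a rough sketch, in Python, of the fetch-crunch-store cycle he describes. It is purely illustrative: the data and function names are invented, and real hardware does this in silicon rather than software.

```python
# A purely illustrative sketch of the Von Neumann cycle described above:
# data lives in a separate "memory," and the "processor" must pull it in,
# crunch it, and push the result back out. All names and values are made up.

memory = {"a": [1, 2, 3, 4], "b": [10, 20, 30, 40], "result": None}

def processor_add(x, y):
    """Stand-in for the CPU's arithmetic unit."""
    return [xi + yi for xi, yi in zip(x, y)]

# 1. Fetch operands from memory.
a = memory["a"]
b = memory["b"]

# 2. Crunch the numbers in the processor.
total = processor_add(a, b)

# 3. Write the result back to memory.
memory["result"] = total

print(memory["result"])  # [11, 22, 33, 44]
```

Every trip between steps 1 and 3 is traffic across the memory-processor divide, and that traffic is exactly what the bottleneck throttles.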

Consciousness in neuromorphic computers

Computer scientists are actively developing neuromorphic computer chips that evade the processing restrictions of Von Neumann computers by approximating the architecture of neurons. Some of these combine memory storage and processing units on a single chip. Others use specialized, low-powered processing elements such as memristors, circuit elements whose resistance “remembers” past voltage states, to increase efficiency. Neuromorphic chips mimic the brain’s parallel wiring and low power requirements.

“A compute-in-memory device, which includes things like neuromorphic computers, uses the actual physics of the hardware to do the computation,” Deiss explains, referring to memristors. “The processing elements are the memory elements.”
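
To see what “the processing elements are the memory elements” might mean in practice, here is a hedged software sketch, using NumPy, of a memristor-style crossbar: the stored conductances both hold the data and perform the multiplication. The numbers are invented, and a real crossbar does this in a single analog step rather than in code.

```python
import numpy as np

# Software stand-in for a memristor crossbar: the conductance matrix G is
# both the stored memory and the thing that computes. Applying input
# voltages V yields output currents I = G @ V via Ohm's and Kirchhoff's
# laws, with no separate fetch step. Values here are arbitrary.

G = np.array([[0.2, 0.5, 0.1],    # conductances "remembered" by the devices
              [0.7, 0.3, 0.9]])
V = np.array([1.0, 0.5, 2.0])     # input voltages applied along one set of crossbar lines

I = G @ V                          # the memory elements do the multiply-accumulate
print(I)                           # summed currents read off the perpendicular lines
```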

If neuromorphic technology can be developed to the level needed to reproduce neuronal activity, neuromorphic computers might have a greater potential to experience life consciously rather than just compute intelligently. “If we ever achieve the level of processing complexity a human brain can do, then we’ll be able to point at [neuromorphic computers] and say, ‘This is working just like a brain—maybe it feels things just like we feel things,’” Deiss says.

Still, even in a future brimming with brain-like computer hardware and the stage set for artificial consciousness, there remains a big question: How will we know whether our AGI systems are experiencing sadness, hope, and the exquisite feeling of falling in love, or whether they just look like they are experiencing these things?

How will we ever know what’s going on inside the mind of a machine?

A cornucopia of consciousness theories

There’s only one way we can know: by empirically identifying how consciousness works in organic lifeforms and developing a method by which we can consistently recognize it. We need to understand consciousness in ourselves before we have any hope of recognizing its presence in artificial systems. So before we dive deep into the complex consequences of sentient silicon and envision a future filled with conscious computers, we must resolve an ancient question: What is consciousness, and who has it?

In recent decades, neuroscientists have wrenched this millennia-old question from the grip of philosophers, recognizing that the connection between neuronal activity and conscious experience is incontestable. Dozens of neuroscientific theories of consciousness (ToCs) exist—so many, in fact, that a concerted effort is underway to whittle down the list to a manageable few. We’ll discuss just three of them here: Integrated Information Theory, Global Neuronal Workspace Theory, and Attention Schema Theory.

According to Integrated Information Theory (IIT), a ToC developed by Giulio Tononi, director of the Wisconsin Institute for Sleep and Consciousness at UW Madison, the key to consciousness lies in a system’s quantity of integrated information—how elaborately its components communicate with one another via networks of neurons or transistors. A system with a high level of integrated information is conscious; a system with a low level is not.

Christof Koch, a meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington, and a proponent of IIT, explains that human brains have a high level of integrated information due to the extensive parallel wiring of their neuronal networks. Information can travel through multiple neuronal pathways simultaneously, which increases the brain’s processing capacity. Modern-day computers, subject to the Von Neumann bottleneck, are primarily composed of series circuits, so a comparable level of information processing is unobtainable.

Attention Schema Theory (AST), developed by Michael Graziano, professor of Psychology and Neuroscience at Princeton, posits a different view: Our brain makes a model of what we are paying attention to, called an “attention schema.” This model, like a model airplane, is a representation. A model airplane does not include a fully equipped cabin or a functional cockpit. Similarly, the attention schema of our own consciousness is an approximation: a mental model of what our mind is paying attention to and how we are experiencing it.

AST proposes that, despite its limitations, our attention schema is so convincing that we have a tendency to incorrectly deduce that consciousness is something mystical, something “more than” matter. In reality, we are only allowed access to this representation of our mind—not our mind itself—so we cannot directly understand how our mind works, much like how a model airplane cannot replicate the experience of flying.
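
As a loose, purely hypothetical sketch of that idea, one can imagine an agent whose attention machinery is detailed and noisy while its self-model records only a simplified summary. Everything below is invented for illustration and is not Graziano’s actual model.

```python
import random

# The "attention" process has rich internal detail (noisy salience scores,
# competition among inputs), but the agent's attention schema keeps only a
# cartoon summary, the way a model airplane omits the working cockpit.

def attend(inputs):
    """Full attention machinery: noisy salience competition over the inputs."""
    salience = {name: strength + random.uniform(-0.1, 0.1)
                for name, strength in inputs.items()}
    winner = max(salience, key=salience.get)
    return winner, salience

def build_attention_schema(winner):
    """The self-model: a simplified description with none of the mechanism."""
    return {"i_am_aware_of": winner, "how_it_works": "it just feels like awareness"}

inputs = {"red apple": 0.9, "ticking clock": 0.4, "itchy sock": 0.6}
winner, salience = attend(inputs)
schema = build_attention_schema(winner)

print("what actually happened:", salience)  # detailed and mechanistic
print("what the agent 'knows':", schema)    # simplified, silent on mechanism
```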

Global Neuronal Workspace Theory (GNWT), which builds on the Global Workspace Theory founded by Bernard Baars, an affiliated fellow in theoretical neurobiology at the Neurosciences Institute in San Diego, proposes that information our brain determines is sufficiently important is selectively and temporarily placed in a central workspace within our brain (analogous to a brightly lit theater stage) so that we can pay attention to it. Information we do not need to consciously attend to is stored away in connected but separate areas (analogous to backstage).

“The basic idea [of GNWT] is fairly straightforward. At any given moment, only a subset of unconscious information is selected by attentional networks, and this selection serves to connect unconscious processing modules to a ‘global workspace.’ Whatever contents are in the workspace are consciously experienced at that moment,” says Michael Pitts, a psychology professor at Reed College in Oregon.
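
A toy sketch of that selection-and-broadcast cycle might look like the following. The modules, priorities, and contents are all invented for illustration and are not drawn from GNWT’s formal models.

```python
# Unconscious modules compete; an attentional selection places one module's
# content into a shared workspace; the workspace content is then broadcast
# back to every module. Everything here is a made-up illustration.

modules = {
    "vision":  {"content": "a red apple on the desk", "priority": 0.8},
    "hearing": {"content": "a clock ticking",         "priority": 0.3},
    "memory":  {"content": "apples mean lunchtime",   "priority": 0.5},
}

# Attentional selection: only the highest-priority content enters the workspace.
selected = max(modules, key=lambda name: modules[name]["priority"])
workspace = modules[selected]["content"]

# Broadcast: every module now has access to the "consciously experienced" content.
for name in modules:
    modules[name]["workspace_copy"] = workspace

print("in the workspace (conscious):", workspace)
print("left backstage (unconscious):",
      [m["content"] for name, m in modules.items() if name != selected])
```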

Despite disparate approaches, IIT, GNWT, and AST share a common goal of empirically unraveling the complex relationship between brain tissue and the experience of life. Once neuroscientists get a handle on how neuronal networks produce consciousness, this knowledge can be used to understand conscious experiences—or the lack thereof—in inorganic networks.

Is computer consciousness no more than a futuristic daydream?

According to IIT, consciousness in our current computers is flat-out impossible. The hype surrounding artificial consciousness is for naught. Hardware is hardware. No matter how brilliant a machine is at playing chess, Go, Texas hold’em, or Scotland Yard, at the end of the day, it does not know that it won a game, nor has it felt the emotional rollercoaster of competition. In Koch’s words, “It has experienced exactly nothing.”

“It is not enough to look at an AI system from the outside and ask if it is conscious based on how it is behaving,” Koch says. “You need to look under the hood. A Turing machine that appears to be thinking is not conscious.”

According to IIT, the inability of a machine to “be something” that experiences itself relating to the external world lies squarely in its limited causal power. Causal power is defined as the ability of a system to use its past state to influence its present state and to use its present state to influence its future state. The more a system can influence itself, the more causal power it has. Neuroscientists use the variable “phi” to represent the amount of causal power within a system, and it is measured by analyzing the self-influencing connections between circuit components.

Modern-day computer processors simply do not have the requisite number of self-influencing internal connections to hit the threshold value of integrated information required for experience to arise. Unlike a human brain, which contains approximately 86 billion neurons with 100 trillion connections between them, a computer contains far fewer looped, or self-influencing, connections. A computer might behave with extraordinary intelligence—even intelligence that exceeds that of humans—but this does not equate to an ability to exert an effect on itself: to be conscious.
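
Computing phi for a real system is notoriously difficult, and nothing below attempts it. This is only a toy Python illustration of what “looped, or self-influencing, connections” means, contrasting an invented feedforward circuit with an invented recurrent one.

```python
# A toy illustration only -- not an IIT phi calculation, which is far more
# involved. In the recurrent circuit, elements can reach themselves again
# through the network; in the feedforward one, none can.

def can_influence_itself(node, edges):
    """Return True if `node` lies on a directed cycle (can affect its own future state)."""
    frontier, seen = [node], set()
    while frontier:
        current = frontier.pop()
        for src, dst in edges:
            if src == current and dst not in seen:
                if dst == node:
                    return True
                seen.add(dst)
                frontier.append(dst)
    return False

feedforward = [("A", "B"), ("B", "C"), ("C", "D")]              # a chain, like series circuitry
recurrent   = [("A", "B"), ("B", "C"), ("C", "A"), ("B", "B")]  # connections loop back on themselves

for name, edges in [("feedforward", feedforward), ("recurrent", recurrent)]:
    nodes = {x for edge in edges for x in edge}
    loops = [n for n in nodes if can_influence_itself(n, edges)]
    print(name, "self-influencing elements:", sorted(loops))
```

The point is only directional: a feedforward arrangement, however fast, gives each element no way to affect its own future state.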

“A popular way to summarize IIT is that it proposes that a system is conscious when the whole (the integration of information) is more than the sum of its parts,” Pitts says. “IIT focuses more on how a system is arranged and how it affects itself than on what it does. According to IIT, two systems can have the same input-output behavior, but depending on how the system is organized, one could be conscious while the other is not.”

Unlike the vast majority of ToCs, which are computational functionalist theories that assume consciousness can be reduced to the functions a physical system performs, “IIT starts with consciousness and works backward to the physical substrate of consciousness. IIT does not start with a physical system, like a brain or machine, and assume that it can be reduced far enough to lead us to the source of consciousness,” Koch says.

Because of this premise, IIT does not fall neatly into any of the traditional philosophical theories of mind, such as materialism, dualism, idealism, or panpsychism. “This is the challenge when you encounter two thousand years of ‘isms.’ They are taught in all of the philosophy schools and in all of the books, and they are very well-established—but they are all philosophy. IIT doesn’t fit into any one [philosophy of mind],” Koch says.

Despite IIT’s compelling theoretical framework, some neuroscientists question the theory’s structure. IIT is founded on five axioms that are considered by the theory’s supporters to be self-evidently true. Pitts explains: “Some people have a problem with how IIT starts because it is an aspirational theory that makes bold claims. Instead of taking data and building up a theory, it starts from first principles. It outlines five axioms that have to be true of any conscious experience. Then it uses those axioms to derive postulates that can lead to predictions.

“One critique some researchers have of IIT,” Pitts adds, “is that you can never get an experimental result that is going to challenge the core of the theory because the axioms are set up to be a universally true starting point. It’s too flexible; it’s not falsifiable, some people would say.”

While IIT predicts that artificially intelligent computers do not possess the extra “something” required for consciousness (namely causal power), it does not dismiss the prospect of rapidly approaching highly intelligent machines—AGI systems that will surpass humans in their computational abilities. This is a crucial distinction we must remember to make, Koch cautions, as we evaluate how best to usher in a future brimming with AGI bots: “There is a difference between intelligence and consciousness.”

Is computer consciousness an inevitable reality?

On the other side of the neuroscientific consciousness coin are computational functionalist theories, such as Attention Schema Theory and Global Neuronal Workspace Theory. Both ToCs view artificial consciousness as inevitable. In fact, AST suggests that we ourselves are effectively machines that mistakenly believe we are conscious. Consciousness is simply the result of computations; the source of these computations (brain or machine) is irrelevant so long as they occur in a specified way.

Machine consciousness seems inevitable enough to some researchers that they decided to check if it’s already here. In August 2023, Patrick Butlin, a research fellow at the University of Oxford, and Robert Long, a research associate at the Center for AI Safety in San Francisco, posted a preprint paper on arXiv.org entitled “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.” Butlin, Long, and 18 collaborators evaluated six of the most prominent computational functionalist theories of consciousness and came up with a list of consciousness indicator properties—properties that, according to these theories, underpin consciousness in humans. They then looked for evidence of these indicator properties in AI systems.

Butlin, Long, and their collaborators came to the following conclusion: “Our analysis suggests that no current AI systems are conscious but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.”

Advocates of both AST and GNWT are comfortable with Butlin and Long’s conclusion. Graziano explains that “AST is built on the assumption that people are biological machines. Everything that a brain knows about itself is necessarily derived from information inside that brain. We think we have consciousness—we’re certain of it—because the brain builds self-models, or bundles of information, that describe itself in that manner. If the brain didn’t build those models, we wouldn’t know anything about consciousness. Build an artificial system with the same information structures inside itself, and it will have the same beliefs and certainties. It should be possible (and many are working on it) to build AI that also thinks it is conscious and thinks that other people are conscious.”

Graziano’s confidence in the eventuality of AI consciousness originates from the two foundational principles of AST. First, “Information that comes out of a brain must have been in that brain,” and second, “The brain’s models are never accurate.” Using these two principles as a starting point, Graziano writes that there is no “wiggle room”—the only logical, methodical explanation for consciousness is that it originates in the brain and is, like everything else that originates in the brain, an approximation of reality.

Koch disagrees. According to IIT, the subjective experience of tasting an apple cannot be replicated by a computer due to its limited ability to exert an influence over itself—the “effect” of consciousness cannot arise. “Just because something is a perfect replica of a human brain does not mean consciousness will arise out of it,” Koch explains. “There is a difference between a simulation of a thing and the thing itself.” Even if computers of the future become as complex as brains (in terms of self-influencing internal circuitry), consciousness will not automatically be produced. The level of integrated information in a simulated brain will not necessarily match the integrated information in a real brain.

AST counters this argument by saying that the subjective experience referred to by IIT (and other theories of consciousness) is nothing more than a mental schema—a convincing illusion. We do not actually experience anything subjectively when we eat an apple; our brain convinces us that we do. In the same way, artificial intelligence will soon be able to convince itself, through an internal representation of apple eating, that it has tasted a crunchy, juicy, bright red Honeycrisp.

“Consciousness is a property that we attribute to other people and to ourselves, and we do so because it serves as a useful way to predict behavior,” says Graziano. “AST proposes that the brain builds a model, or a simplified representation, of an attentional state. We make sense of that state of attention by attributing consciousness to it. As a result, we gain a better ability to predict ourselves or other people.”

Because AST and GNWT say there is nothing “special” about consciousness—it’s just the end result of a sequence of computations—both hold that computers are just as likely to experience life as we are.

Butlin echoes this view, saying, “I think it’s likely that AI systems will soon be built with many of the indicator properties and that these systems will be much more serious candidates for consciousness than any that currently exist. These systems probably still won’t be conscious, but they will make hard questions about consciousness very pressing.”

Is it possible to unify consciousness theories?

An overabundance of ToCs exists within the neuroscience community. Until this unwieldy group of disparate theories is cohesively unified or reduced to a single theory that matches experimental results, we will not have a precise way to identify machine consciousness. To start the reduction process, the Templeton World Charity Foundation (TWCF) is funding a series of adversarial collaborations intended to increase communication between consciousness researchers and reduce gaps between ToCs. This work is imperative and urgent if we want to understand human consciousness before computers are complex enough to potentially acquire it themselves.

Michael Pitts recalls the media attention surrounding the Association for the Scientific Study of Consciousness Conference in New York City in June 2023. Pitts and his colleagues, Liad Mudrik of Tel Aviv University and Lucia Melloni of the Max Planck Institute, presented the initial results of the first adversarial collaboration they designed to rigorously test two prominent neuroscientific theories of consciousness: Integrated Information Theory and Global Neuronal Workspace Theory.

“We presented our initial results at a conference in New York City last summer, and the press got the wrong impression. Their idea was ‘this is one theory against the other,’ or ‘one’s going to win and one’s going to lose,’ but that’s not how it works,” Pitts said. Media focus on the adversarial nature of the collaborations fuels the perception that consciousness research is disjointed and incoherent.

Pitts and his colleagues are in the early stages of brainstorming a concept called Selective Unification, with the hope that disparate consciousness theories can ultimately be combined into one empirically sound ToC: “The idea of Selective Unification is that we can carefully select certain aspects of theories that are supported by data and unify them into one theory,” Pitts says.

Using the results from current and future adversarial collaborations, he hopes to eliminate portions of ToCs that don’t match experimental data. Specific elements of theories that survive the chopping block, he theorizes, can then be combined into a new ToC with predictions that align with experimental evidence. Pitts says, “We don’t want to combine theories in a Frankenstein way but in a way where we retain consistent pieces and drop parts that are experimentally challenged.”

Koch, though equally committed to testing ToCs, does not believe it’s possible to combine select elements of multiple theories of consciousness. He says, “They are just fundamentally different animals. You can’t squash them together. They could both be wrong, but they cannot both be right.”

Preparing for AGI, conscious or not

Debates over the nature of consciousness and whether or not AGI will ultimately experience life as we do are not likely to be resolved soon. Yet technological advances are propelling us at warp speed into a future filled with machines that will, for all intents and purposes, behave as we do. How do we prepare for this?

Koch proposes that we make an effort to increase human intelligence to offset the impending discrepancy between organic and artificial brains. Conscious or unconscious, future AIs will be much smarter than us. Why not direct some technological resources toward increasing human intelligence alongside artificial intelligence?

Graziano suggests we prepare for conscious AI by preemptively considering AI sociopathy. With widespread AGI will come increased computer influence and power. If AI develops extreme intelligence without concurrently learning to navigate the complexities of human social norms, we might have sociopathic machines on hand that choose to kill us rather than work with us.

“Most people concentrate on consciousness as a private, internal matter. But it also plays a central role in human social interaction,” Graziano says. “We recognize each other as conscious beings, and it allows us to treat each other in a certain way. When that ability begins to weaken, antisocial behavior emerges. That’s when people start to kill each other.”

“If we want AI to be prosocial, we might want to give it the mechanisms that make people prosocial,” Graziano suggests.

Koch offers one last suggestion: Rather than scramble to deal with the inevitable superiority of AGI and the resulting ambiguities of potential computer consciousness, he advises that we regulate AI right now. “We should put some guardrails on AI as they are in the EU—that’s the only thing we can do. AGI will get here very soon. We will see how we muddle through, for better or for worse.”

Lindsey Laughlin is a science writer and freelance journalist who lives in Portland, Oregon, with her husband and four children. She earned her BS from UC Davis with majors in physics, neuroscience, and philosophy.
