The question of whether artificial intelligence could ever achieve consciousness is a common theme in science fiction. Could robots ever truly feel anything—like love, hate, or fear—or would they be “all dark inside”, experiencing nothing at all?
It is more important than ever to answer this question correctly. Artificial intelligence (AI) is no longer merely a matter of science fiction. AI systems are increasingly capable of producing art and mastering the use of language, raising serious questions about whether they are already capable of consciousness, or, if not yet, soon will be.
The idea that “digital consciousness” is possible is based on a philosophy known as functionalism. According to functionalism, our minds are just functions.
For example, just as you can add 2+2=4 in your mind, a digital calculator can do the same thing. If we could program a digital computer to function exactly as your brain does, then, according to functionalism, that computer would have a conscious mind just like yours. A ‘digital you’ would think and feel everything that you do. It would feel happiness, sadness, fear, and love.
But is functionalism correct?
An increasing number of philosophers think it is not. In a recent peer-reviewed article, my co-author Corey Maley (University of Kansas) and I argue that if functionalism is false, there is good reason to think that digital people would experience nothing at all. This, we argue, is because consciousness is analog, not digital.
To see why we think this, I want you to reflect on your own conscious experience right now. Look at your visual field. Right now, you are experiencing a variety of colors: maybe some red, blue, green, purple, and so on. Now consider what a function is. Here is a simple example: 1+1=2, an instance of the function of addition.
Functions deal with quantities of things—that is, they can be quantified. So, for example, if we think about color wavelengths—that is, the wavelengths of light that we identify with different colors—they are all quantifiable. The wavelength of ‘violet’ light is roughly 400 nanometers (nm), that of blue roughly 470 nm, that of red roughly 665 nm, and so on.
Brain functions can also be quantified. When your brain ‘sees’ red, the neurons in your visual cortex fire in a certain configuration and at a particular rate. Like any other function, this one could, in principle, be expressed as a complex equation that we might write out in a book.
Here, though, is the problem. How can one get the experience of red out of a function? The experience of red is not simply a quantity, such as the number 1 or 665 nanometers. ‘Red’ wavelengths of light are around 665 nm. But your experience of red does not itself look at all like 665 nm. No, red is a quality.
Indeed, the curious thing about color experiences is that they appear to be utterly simple. One cannot describe what red looks like to anyone (even to yourself!). Sure, you can describe red as a ‘hot’ color, and blue as a ‘cool’ color. But you cannot do much better than that: you cannot write out in a book exactly what red looks like, as opposed to blue. Seriously, give it a try. The best you or anyone else can ever do is point to it. “That,” you might say, “is red. And that is blue.”
These simple facts about color experience are illustrated by color-blind people who put on glasses that enable them to see particular colors for the very first time. They are completely shocked by what they see—because no one, and no scientific theory, could ever convey to them what purple looks like before they see it.
Consciousness appears to be utterly unique in this regard. To see how, consider a famous thought experiment given by the philosopher Frank Jackson.
Jackson asks us to imagine an incredibly intelligent woman, Mary, who is raised from birth in a completely black and white room. Mary, we are told, becomes an extraordinarily accomplished scientist. She learns everything there is to know about light wavelengths and about how brains process them. That is, she understands everything about how light and brains function.
Nevertheless, it seems obvious that there is something Mary doesn’t know: what red looks like—its qualitative aspects, or ‘what it is like’ to experience it.
This, it seems, is because red isn’t a function. It can’t be. Functions are describable in terms of numbers. They deal with quantities, and hence, can be quantified. You can write out any function in a book, ranging from 1+1=2 to the functions of the brain. But what you can’t write out in a book is what red looks like. You can’t quantify it. You can only experience it.
But now, of course, all science is ultimately based upon experience. We base science on data, and the data of science just is experience. We developed theories of chemistry—of the chemical bases of water, air, and so on—using our experience of these things. The same goes for physics, biology, psychology, and so on. The data we have built these sciences on are our experiences of the world.
Yet if science is based on data, if our experiences are the data of science, and if, as we have just seen, the qualitative features of experience (such as what red looks like) cannot be expressed in terms of functions, then as a matter of science we should conclude that functionalism is false: our experiential data show that conscious experience is not merely a function. It is something more.
The world we live in is not merely a ‘physical’ world of fundamental particles and forces (electrons, gravity, the functions of cells) that can be quantified in physical or biological equations.
It is also (somehow!) a world of qualities, such as redness, blueness, greenness, purpleness, and so on.
Arguments like these have led an increasing number of philosophers to support non-functionalist theories of consciousness. One such theory is panpsychism, the view that experiential qualities such as redness and blueness are fundamental to the universe, much like gravity.
This view may sound obviously absurd. Are we really to believe that electrons have conscious minds, experiencing red for example? Not exactly. Panpsychists have subtle things to say here. But, in any case, another view closely related to panpsychism—panqualityism—might not sound so absurd to you.
According to panqualityism, qualities pervade nature. Green plants do not merely give off ‘green wavelengths’ of light. Plants are actually colored—that is, they are green. And we perceive them correctly when we perceive them as green, that is, when our minds experience that very quality.
This view doesn’t sound absurd, right? It’s the commonsense view we have when we come into the world. We live in a world of green plants, red roses, and blue skies—and we experience the world accurately when we experience these qualities in our minds.
The problem then is this: if you think that the world has qualities like these—qualities that cannot be expressed in mere functions such as 1+1=2 (as I hope I’ve persuaded you!)—then these qualities appear to be fundamentally analog, not digital.
What makes something analog rather than digital? Consider a thermometer filled with mercury. When the temperature outside rises, the mercury in the thermometer expands and rises. Now consider a mechanical watch. As time passes, the gears inside the watch turn, and the hour, minute, and second hands turn too.
These devices are analog because they represent one kind of change (rising temperature or passing time) by another, continuously varying change (mercury expanding or gears turning).
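To make the idea concrete, here is a toy sketch in Python (my own illustration, with made-up numbers, not anything from our paper): an analog device maps one continuous quantity onto another continuous quantity, so that arbitrarily small changes in the input produce correspondingly small changes in the output.

```python
# A toy illustration (my own, with arbitrary numbers): an analog device
# represents one continuous change (temperature) by another continuous
# change (the height of a mercury column).

def mercury_height_mm(temp_c: float, base_mm: float = 20.0, mm_per_degree: float = 1.5) -> float:
    """Height of the mercury column as a smooth, proportional function of temperature."""
    return base_mm + mm_per_degree * temp_c

for t in (20.0, 20.1, 20.2):        # tiny changes in temperature...
    print(t, mercury_height_mm(t))  # ...yield correspondingly tiny changes in height
```

Nothing here gets collapsed into discrete steps: the representation varies smoothly with the thing it represents.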
Human consciousness seems just like this. When the sun slowly gets brighter outside, you experience the light slowly getting brighter too—just like when it gets hotter outside, the mercury in a thermometer slowly expands. Similarly, when you look at a color wheel, you can see how red slowly shades into orange as it is mixed with yellow.
Digital computers do nothing like this at all.
A circuit in a digital computer simply carries a repeating series of “ones” and “zeros.” Digital programs merely process binary strings of code. When a circuit signals ‘one’, it holds a high voltage (classically around 5 volts); when it signals ‘zero’, it holds essentially no voltage at all.
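Here is a toy sketch of that all-or-nothing behavior (again my own illustration, with the classic textbook voltage levels assumed for convenience): whatever voltage a digital gate actually reads, it reports only a one or a zero, discarding everything in between.

```python
# A toy illustration (my own): a digital gate collapses a continuous voltage
# into a single binary digit. The 5-volt level and 2.5-volt threshold are
# assumptions borrowed from the classic textbook convention.

def read_bit(voltage: float, threshold: float = 2.5) -> int:
    """Report 1 for a 'high' voltage and 0 for a 'low' one -- nothing in between."""
    return 1 if voltage >= threshold else 0

print(read_bit(5.0))   # high voltage -> 1
print(read_bit(0.0))   # no voltage   -> 0
print(read_bit(3.7))   # in between   -> still just 1
```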
Scientists used to think that our brains function like this—that a neuron either fires (‘one’) or it doesn’t (‘zero’). But we now know that this is false. Brains are analog. Neurons do not simply fire or fail to fire: their spikes vary continuously in shape, and neurons also communicate outside of synapses via electromagnetic waves.
But now, as we have seen, consciousness appears to be analog too. What red, green, orange, and purple look like are not merely ‘on’ or ‘off’, like a ‘one’ or a ‘zero.’ Red and orange are qualities that come in all kinds of continuous degrees, like mercury expanding in a thermometer. Sadness, joy, fear, love. None of these features of consciousness is merely ‘on’ or ‘off’ like a one or a zero. They too are unique qualities that come in degrees, like the turning of the gears of a watch.
Here, then, is the lesson. If panpsychism or panqualityism is true, then colors such as red and green are fundamental features of nature. Our brains somehow put together (or ‘combine’) these qualities in a coherent way through analog processing, much as a painter combines colors on a canvas.
But this is simply not what digital computers do or can do. Digital computers abstract away from the analog features of nature, such as the forces of gravity and electromagnetism. All a digital computer can ever do is process long strings of ‘on’, ‘off’, ‘off’, ‘on’, and so on. A digital computer may use strings of digital information to ‘emulate’ analog functions, treating ‘red’ as a long string of code such as ‘10101000010.’
Yet, as we see here, none of this is ultimately analog at all: it is just a function that, as a string of ones and zeros, abstracts away from the analog parts of reality upon which our brains (and the colors we experience) are based.
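To see what that abstraction looks like in practice, here is a toy sketch in Python (my own illustration; the visible range and the 8-bit resolution are arbitrary assumptions, not anything from our paper). A continuous wavelength gets collapsed into one of a finite number of discrete codes, and everything in between the steps is simply lost.

```python
# A toy illustration (my own, with arbitrary choices): a digital encoding of a
# color quantizes a continuous wavelength into a fixed-width bit string.

def encode_wavelength(nm: float, lo: float = 400.0, hi: float = 700.0, bits: int = 8) -> str:
    """Map a wavelength in nanometers onto a discrete binary code."""
    clamped = max(lo, min(hi, nm))
    level = int((2**bits - 1) * (clamped - lo) / (hi - lo))  # continuous value -> discrete step
    return format(level, f"0{bits}b")

print(encode_wavelength(665.0))  # 'red'  -> '11100001'
print(encode_wavelength(470.0))  # 'blue' -> '00111011'
```

Whatever string comes out, it is still just a quantity, the same kind of thing as 1+1=2, not the quality we experience when we see red.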
So, it’s probably false that digital beings can have minds. To use a once-common but now somewhat outdated phrase, digital ‘minds’ can fake it, but they can never actually make it. Conscious experience is fundamentally analog—digital beings are not.
What is it like to be a ‘digital person’? Probably nothing at all—or, at best, something like static on a TV screen.
Indeed, as another co-author and I argue in a second peer-reviewed paper, the same point extends to ‘the simulation hypothesis.’ In his widely publicized recent book, Reality+, the philosopher David Chalmers argues that virtual realities are no less real than our universe.
If we are correct, then Chalmers is simply wrong about all of this: ‘virtual worlds’ do not actually contain conscious beings, let alone green grass, red roses, love, joy, sorrow, and so on. All they are is digital code, and digital code cannot realize these analog features of our world or our experiences of them.
Now, maybe my coauthors and I are incorrect. Maybe it’s somehow possible for digital beings to experience joy, sorrow, and love. Indeed, perhaps functionalism is true. For these reasons, if we ever do bring complex digital beings into existence that appear to experience things, then we should probably “play it safe” and treat them as though they are capable of experience—just in case they are.
This being the case, why care about our argument?
One reason to care is that some thinkers have argued that we could become digital people by “uploading our minds” to the cloud (to achieve immortality), or that we might even be digital people (or ‘sims’) living in a simulation right now.
Another reason to care is that other influential moral thinkers—‘longtermists’—have suggested that we have a duty to care about future generations of digital people, or even engineer our own extinction so as to populate the universe with “morally better” digital people.
A third reason is that, if we are right, then another theory of consciousness that is influential among scientists is incorrect. Giulio Tononi defends Integrated Information Theory, the view that consciousness is (very roughly) integrated information.
If my co-authors and I are right, then all of these arguments are overblown—and needlessly dangerous. Mind uploading won’t enable us to achieve immortality. We’re probably not sims. And engineering our own extinction to create digital people would be to exterminate living, breathing, feeling people in order to bring into existence unconscious robots and simulated beings.
Consciousness isn’t integrated information. It is, at most, integrated analog, qualitative information—and digital computers, by definition, are not capable of that.
Marcus Arvan is Associate Professor of Philosophy at The University of Tampa and the author of two books, Rightness as Fairness: A Moral and Political Theory and Neurofunctional Prudence and Morality: A Philosophical Theory.