Could computers, which can be completely described scientifically, have mental states? Clearly they act as if they are intelligent. They seem to understand you when you communicate with them. They can answer questions. They seem to remember things. But maybe they don’t have real intelligence. Maybe their intelligence is merely simulated. How can we ever know which it is, real intelligence or merely simulated intelligence?
Alan Turing, the English philosopher and mathematician credited with founding modern computing, believed that if a computer could ever fool people into thinking it was a person, this would be reason to believe that it has mental states. But how could a computer ever fool anyone? You can tell by looking at a computer that it’s not a person. Turing proposed a test, now referred to as the Turing Test, in which a person (an interrogator) can’t see who he or she is communicating with; it’s either another person or a computer. The interrogator would only be getting answers to his or her questions through a computer monitor. If the interrogator was communicating with a computer but couldn’t tell that it was a computer, then Turing believed that the computer passed the test and possessed the mental state of understanding.
Some philosophers believe not only that computers could have mental states, but that some computers already do have mental states. The philosopher Shelly Kagan uses the example of a chess-playing computer to illustrate such a view [watch this video at minute 35:07]. He says that when we play against a chess-playing computer we explain what it’s doing by ascribing mental states to it. We say things like it believes that we’re going to move our queen or it wants to win the game. We ascribe to it the ability to form goals and to reason about what to do. Why did the computer move its bishop? We might say it intends to put us in check or it believes that we’re going to pin it. Beliefs, desires, intentions, reasoning, planning – these are examples of mental states.
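To see concretely what a chess program is doing when we say it “wants to win,” here is a minimal sketch of minimax search, the standard idea behind chess engines. The toy game tree and the function are my own illustration, not taken from any particular program:

```python
# A toy illustration of the goal-directed search behind a chess engine's
# apparent "wanting to win." This is generic minimax over a hand-made
# game tree, not any real engine.

def minimax(node, maximizing):
    """Return the best achievable score from `node`.

    Leaves are numbers (scores from the maximizer's point of view);
    internal nodes are lists of child nodes. Players alternate turns.
    """
    if isinstance(node, (int, float)):  # leaf: game over, score known
        return node
    if maximizing:
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)

# A tiny two-ply game: the program moves first, then the opponent.
game = [
    [3, 5],  # move A: the opponent will then pick min(3, 5) = 3
    [2, 9],  # move B: the opponent will then pick min(2, 9) = 2
]

print(minimax(game, True))  # prints 3: the program "prefers" move A
```

When we say the program “believes” we will respond a certain way, what is literally there is this kind of lookahead over possible replies. Whether that amounts to a real belief is exactly the question at issue.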
The fact that we ascribe mental states to something to explain its behavior does not necessarily mean that it really does have mental states. It might be the case that we are merely personifying it – that is, treating it as if it were a person without believing that it has mental states. For example, when we say the grass is thirsty we are obviously personifying it. We don’t believe the grass has mental states. But grass does need water to survive and in this sense is similar to a person; so we speak about it metaphorically, as if it were thirsty. Perhaps then we’re speaking metaphorically when we say a computer is intelligent or has such-and-such belief or desire.
It’s true that we also treat other things, like grass, as if they have mental states, when we know that they don’t have them. But there seems to be an important difference between these other things and computers. Computers seem to behave much more like people than grass does. Perhaps this is why Turing believed that if computers could fool people into thinking they were people, then this would indicate they had mental states. Grass isn’t going to fool anyone.
The idea that computers could have mental states is a common theme in science fiction movies – think of HAL in 2001: A Space Odyssey or C-3PO in Star Wars. But when I ask students whether computers could have mental states, many balk at the idea. Here are three reasons I often hear from students for why computers couldn’t have them.
- Computers are made of metals and plastics and other materials that are not the carbon-based materials that people are composed of.
- Computers are not alive.
- Computers are programmed.
Let me explain why I do not find these reasons compelling.
Leaving aside prosthetic devices, it’s true that most people are composed of flesh and bone and blood, not of metal or plastic. The question is why this fact should be relevant when it comes to having mental states. Is the type of matter one is composed of any more relevant to whether one has mental states than the color of one’s skin? It’s a prejudice to say a person can’t reason because he has a different skin color. By analogy, we shouldn’t judge a computer’s ability to think by the fact that it’s made of plastic and metal. It’s important to remember that the basic constituents of the matter that makes up a brain are the same as the basic constituents of the matter that makes up a computer; these basic constituents are just arranged differently. So why should the arrangement of matter matter? Without a reason, it seems like a prejudice to say that it does.
Are computers alive? I’m not sure, because I’m not sure what it means to be alive. If you think you can define what it means to be alive, don’t forget that there are things that are living that do not have brains, e.g. plants. Some philosophers, such as Fred Feldman, argue that the concept of life is a mystery; we do not at present have a definition of it. You might think that being alive has to do with being able to use energy to function. If that’s the case, then computers qualify, since they need energy to function. Perhaps being alive requires being able to respire. If that’s the case, then computers aren’t alive. But it’s not at all clear why respiration would be required for having mental states. If you believe that non-physical souls can have mental states, or that a spiritual being such as God could have mental states, then it seems that something that is not breathing (I assume souls and God don’t breathe) or isn’t alive in the way a human is alive could still have mental states.
There is no doubt that computers are programmed. This means that whatever they do has been determined and at any instant there is only one physically possible thing that they can do. Some philosophers believe that this means that they aren’t freely acting. But not all philosophers believe this. As we discussed in this post, some philosophers believe that even if your actions are determined by things that you don’t have any control over you can still act freely. If these philosophers are correct, then in a sense your free actions are programmed by your genetics (nature) and your environment (nurture). Even if it’s true that you cannot freely act if you’ve been programmed, why does this mean that you cannot have mental states? Keep in mind there are philosophers who believe that no one acts freely. They do not deny that people have mental states. So it’s true that computers are programmed. But an argument is required to show why something that is programmed cannot have mental states. I haven’t seen any good argument for this.
Can a skeptic who denies that computers could have mental states give any other reasons?
A skeptic might appeal to either an empirical argument or a philosophical argument for soul dualism. If soul dualism is true (a big “if”), then you need a soul for mental states. And if you need a soul to have mental states, and computers can be described scientifically, then computers could not have mental states.
A skeptic who does not believe in souls might argue that since mental states cannot be described scientifically, computers could not have mental states. We’ve seen some arguments that attempt to show that mental states cannot be described scientifically. If you think that the zombie argument is sound, then you have a reason to believe that computers could not have mental states. If you think the knowledge argument is sound, then you have reason to believe that computers could not have certain mental states, namely those with a certain type of consciousness. Remember that insofar as the knowledge argument works, it does so only for conscious mental states, such as the look of red. The mental states that Kagan says a chess-playing computer has, such as beliefs, do not have this type of consciousness. Unlike the look of red or the sensation of an itch, there doesn’t seem to be anything it is like, for the subject, to have a belief. So it does not seem that we can use the knowledge argument to show that beliefs and other non-conscious mental states cannot be scientifically described.
So is there any reason to believe that a computer, something that can be scientifically described, could not have mental states that do not have consciousness?
The philosopher John Searle has given such a reason. He believes that any computer, even one that could fool you into thinking it was a person, could not have any mental states. The reason is that a computer works by manipulating symbols based on their shape and position, and these symbols do not have any intrinsic meaning, whereas mental states have intrinsic meaning. To say that a mental state has intrinsic meaning is to say that it refers to things beyond itself. For example, I believe that Donald Trump is POTUS. My belief has intrinsic meaning; it refers to Donald Trump, which is something beyond the belief itself. If I desire a pizza, my desire has intrinsic meaning; it refers to a pizza, which is something beyond the desire itself. According to Searle, the symbols that a computer manipulates only have meaning when we interpret them and give them meaning. So their meaning is derivative, not intrinsic. It derives from our interpretation of them, something we can do because we have mental states that have intrinsic meaning. Searle supports his view by using a thought experiment that is known as the Chinese room argument.
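Searle’s picture of a computer as a pure symbol-shuffler can be caricatured in a few lines of code. This toy program is my own illustration, not Searle’s argument itself: it matches the shape of an input string against a stored table and emits the paired output, and any “meaning” in the exchange is supplied entirely by us, in the comments.

```python
# A caricature of Searle's point: a program that "converses" purely by
# matching the shape of its input against stored symbol strings. The
# rule table is invented for illustration; nothing here understands
# anything.

RULES = {
    "你好": "你好！",        # we gloss this as greeting -> greeting
    "你会下棋吗？": "会。",  # we gloss this as "Can you play chess?" -> "Yes."
}

def reply(symbols: str) -> str:
    """Look up the input string and emit the paired output string.

    The program operates only on the form of the symbols; the glosses
    in the comments are meanings *we* supply, not anything the program
    has.
    """
    return RULES.get(symbols, "……")  # default: a noncommittal reply

print(reply("你好"))  # prints 你好！
```

To a Chinese speaker the exchange may look like conversation, but on Searle’s view the symbols’ meaning is derivative: it lives in our interpretation, not in the lookup.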
I believe that a plausible response to Searle is that he is correct that mental states have intrinsic meaning and that you cannot get intrinsic meaning from symbols by merely manipulating their formal properties. However, if there were a causal connection between the symbols and the objects in the world to which the symbols refer, then these symbols would gain intrinsic meaning. Such a causal connection could be realized if the computer were placed inside a robot that had sensors allowing it to interact with and perceive objects in the world.