  • Mental States and Computers

    Could computers, which can be completely described scientifically, have mental states? Clearly they act as if they are intelligent. They seem to understand you when you communicate with them. They can answer questions. They seem to remember things. But maybe they don’t have real intelligence. Maybe their intelligence is merely simulated. How can we ever know which it is, real intelligence or merely simulated intelligence?

    Alan Turing, the English philosopher, mathematician, and person credited with creating modern computing, believed that if a computer could ever fool people into thinking it was a person, this would be reason to believe that it has mental states. But how could a computer ever fool anyone? You can tell by looking at a computer that it’s not a person. Turing proposed a test, now referred to as the Turing Test, in which a person (an interrogator) can’t see who he or she is communicating with; it’s either another person or a computer. The interrogator receives answers to his or her questions only as text on a computer monitor. If the interrogator was in fact communicating with a computer but couldn’t tell that this was so, then Turing believed that the computer passed the test and possessed the mental state of understanding.
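    To make the setup concrete, here is a minimal sketch of the test’s structure in Python. Everything in it is illustrative rather than Turing’s own specification: the class and function names are invented, and the machine’s canned reply stands in for whatever conversational program is being tested. The one essential point the code captures is that the subject sits behind a text-only channel, so the interrogator’s verdict can rest only on the conversation.

    ```python
    import random

    # Illustrative sketch of the imitation game, assuming only what the post
    # describes: the interrogator exchanges text with a hidden interlocutor
    # and must judge whether it is a person or a machine.

    class Interlocutor:
        """Anything that can answer typed questions with typed answers."""
        def reply(self, question: str) -> str:
            raise NotImplementedError

    class Human(Interlocutor):
        def reply(self, question: str) -> str:
            # A hidden person types the answer.
            return input(f"[hidden human] {question}\n> ")

    class Machine(Interlocutor):
        def reply(self, question: str) -> str:
            # Stand-in for a conversational program under test.
            return "That's an interesting question. Could you say more?"

    def interrogate(subject: Interlocutor, questions: list[str]) -> str:
        """The interrogator sees only text, never the subject itself."""
        for q in questions:
            print("Q:", q)
            print("A:", subject.reply(q))
        return input("Your verdict, 'person' or 'machine': ")

    if __name__ == "__main__":
        # The subject is chosen at random, so nothing but the conversation
        # can inform the verdict.
        subject = random.choice([Human(), Machine()])
        verdict = interrogate(subject, ["What did you do last summer?"])
        if isinstance(subject, Machine) and verdict == "person":
            print("The machine fooled the interrogator this round.")
        else:
            print("No pass this round.")
    ```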

    Some philosophers believe not only that computers could have mental states, but that some computers already do have mental states. The philosopher Shelly Kagan uses the example of a chess-playing computer to illustrate such a view [watch this video at minute 35:07]. He says that when we play against a chess-playing computer we explain what it’s doing by ascribing mental states to it. We say things like “it believes we’re going to move our queen” or “it wants to win the game.” We ascribe to it the ability to form goals and to reason about what to do. Why did the computer move its bishop? We might say it intends to put us in check or it believes that we’re going to pin it. Beliefs, desires, intentions, reasoning, and planning are all examples of mental states.
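    It may help to see how little machinery such talk can rest on. The toy Python sketch below is not how any real chess program works; the move names and scores are invented. It shows only that a program which scores each legal move and plays the highest-scoring one already invites descriptions like “it wants to win” or “it intends to put us in check.”

    ```python
    # Toy sketch, not a real chess engine: the moves and scores are made up.
    # The point is that "it intends to put us in check" can describe, at the
    # level of code, nothing more than a maximization over evaluated moves.

    def evaluate(move: str) -> float:
        """Toy evaluation: how good each move looks for the computer."""
        scores = {"Bc4 (threatens check)": 2.5,
                  "Qd2 (defends)": 0.8,
                  "a3 (waiting move)": 0.1}
        return scores[move]

    def choose_move(legal_moves: list[str]) -> str:
        # The "goal-directed" behavior is just picking the best-scoring move.
        return max(legal_moves, key=evaluate)

    print(choose_move(["Bc4 (threatens check)", "Qd2 (defends)",
                       "a3 (waiting move)"]))
    # -> Bc4 (threatens check)
    ```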

    The fact that we ascribe mental states to something to explain its behavior does not necessarily mean that it really has mental states. It might be that we are merely personifying it, that is, treating it as if it were a person without believing that it has mental states. For example, when we say the grass is thirsty we are obviously personifying it. We don’t believe the grass has mental states. But grass does need water to survive and in this sense is similar to a person; so we speak about it metaphorically, as if it were thirsty. Perhaps, then, we’re speaking metaphorically when we say a computer is intelligent or has such-and-such a belief or desire.

    It’s true that we treat other things, like grass, as if they have mental states even when we know that they don’t. But there seems to be an important difference between those things and computers. Computers behave far more like people than grass does. Perhaps this is why Turing believed that if computers could fool people into thinking they were people, this would indicate that they had mental states. Grass isn’t going to fool anyone.
