Monthly Archives: July 2017

  • The Problem of Knowing Others’ Mental States

    I think that others, including both humans and non-human animals, have mental states. I think certain things about their mental states, for example, that some, but not all, people perceive colors. But do I have any reason to believe any of this? Others’ mental states seem to be private. That is, I cannot have any direct access to others’ mental states; I can only have direct access to my own. So how can I justify my beliefs about others’ mental states, including my belief that others have mental states at all? If I cannot justify these beliefs, then it’s not clear how I can know anything about others’ mental states. This is what I refer to as the problem of knowing others’ mental states.

    If I cannot know anything about others’ mental states, then I cannot know whether I’m giving pleasure or causing pain to others. If that’s the case, it’s not clear that I would be doing something immoral if I treated you like an object, something I can manipulate for my own pleasure. Nor is it clear why I should spend any time worrying about the mental well-being of others, since I cannot know whether anyone else suffers. In a later post, we will examine more closely the relationship between mental states and value.

    Underlying this problem of knowing about others’ mental states is the idea that I can conceive of others behaving as I do in similar situations while having mental states different from mine, or no mental states at all. Consider the following case. I am outside in a park with my friend and we are both looking at some patch of grass. My friend and I, who both use color terms the same way, agree that the grass is green. Isn’t it conceivable that when my friend looks at the grass and says it looks green, the way the color of the grass looks to him is the way red things look to me? Isn’t it conceivable that the color of the grass doesn’t look any way at all to my friend? Couldn’t he be like a machine that can identify colors but doesn’t have any conscious perception of them? As you think about these questions (assuming you can think), keep in mind that the scenario stipulates that my behavior and my friend’s behavior are the same. So it’s not that my friend is colorblind and says that he is seeing no colors or a different color. Rather, he claims to be perceiving the colors the same way that I’m perceiving them. Since I seem to be able to conceive of these things, the skeptic argues that I need to be able to show that they are not actually the case in order to know anything about my friend’s perception. But how can I show this, given that I cannot directly access his mental states?

    According to the skeptic, it’s not only the perception of colors in others that I cannot know anything about. The skeptic argues that I can’t know anything about the mental states of others. Here is how the skeptic argues for this radical claim:

    1. It’s conceivable that others behave as I do in similar circumstances but have different kinds of mental states or have no mental states at all.
    2. The only way I can know anything about the mental states of others is by making an inference – i.e. drawing a conclusion – based on how others look – their physical makeup – and how they behave in certain circumstances.
    3. But there is no way for me to know whether such an inference is any good.
    4. Therefore I cannot know anything about the mental states of others, even if they have any.
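
    One way to regiment the skeptic’s reasoning (the formalization is mine, not the skeptic’s own wording): let $K$ stand for “I can know something about the mental states of others” and $J$ for “I can know that the inference from looks and behavior to mental states is a good one.” Premise 2 says, in effect, that any knowledge of other minds would rest on an inference I can vouch for, i.e. $K \rightarrow J$; premise 3 denies $J$; and the conclusion follows by modus tollens:

        \[
        \begin{array}{l}
        K \rightarrow J \\
        \neg J \\
        \hline
        \therefore\ \neg K
        \end{array}
        \]

    Seen this way, the argument is valid, and premise 1 is doing the real work: the conceivability of behavioral duplicates with different or absent mental states is the skeptic’s reason for premise 3. To resist the conclusion, I must reject premise 2 or premise 3.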

    Before we examine some responses to this skeptical argument, I want to emphasize that the skeptic is not claiming that others do not have mental states. Nor is the skeptic claiming anything about whether others’ mental states are similar to or different from mine. What the skeptic is claiming is that I cannot know anything about the mental states of others. So if others do have mental states, or their mental states are similar to or different from mine, then, according to the skeptic, I cannot know this. Since I think the skeptic is mistaken, I need to respond to his argument.


  • Mental States and Animals

    In the last post, we discussed whether computers could have mental states. This is a highly controversial issue, especially among philosophers. What’s relatively uncontroversial, even among philosophers, is that animals have mental states. By “animals” I mean non-human animals. Anyone who has had a cat or a dog as a pet takes it for granted that animals have mental states. Many pet owners describe their pets as being highly intelligent. [Watch this video and you will see some amazing intelligent crow behavior.] Besides intelligence, many pet owners say that their animals have the following mental states: sensations (e.g. they are in pain), perceptions (e.g. they can see and smell and hear and taste), emotions (e.g. they are scared or excited or happy), desires (e.g. they are hungry or tired) and memories (e.g. they remember who you are or where a bone is buried – check out this video). Some pet owners even insist that their dogs dream. If these pet owners are correct, then at least some animals, such as dogs, and maybe even cats, have many of the types of mental states that humans have.

    But how similar are the mental states of non-human animals to the mental states of humans? Humans have thoughts about things. But can dogs or cats have thoughts about things? When you have a thought about something, you think about it in a certain way, from a certain ‘conceptual point of view’. For instance, when I think that there are electric cars in the United States, my thought is directed at something (cars) under a certain description (as being electric and existing in the United States). If we say that an animal has a thought, do we know what the thought is about? For instance, if you tell a dog to jump in the car, and the dog does it, do we know what the dog is thinking about? Is it thinking anything about the car? Does it think of it as something that has wheels or an engine? Probably not. If it is thinking of it as a car, how does it conceive of a car? If we cannot answer this, can we say that it has a thought about the car, or, more generally, any thoughts at all?

    Even if we can know that animals have thoughts and what their thoughts are about, there isn’t much evidence that most animals have higher-order thoughts. A higher-order thought is a thought that is directed at another mental state, such as being in pain. We have thoughts about things in the world, e.g. cars and trees. But we can also think about our thoughts about these things, and we can think about the thoughts of others. Thus, we have higher-order thoughts.

    According to some philosophers, the fact that we have higher-order thoughts explains why our actions can be morally evaluated. When we do something for a reason, we can think about the reason and evaluate whether it’s a moral reason. Did she kill her husband because she thought she was threatened? If her reason for acting was that she felt threatened, then we may decide that her act of killing was not wrong. Animals seem to act for reasons. The cat bit my foot because he was hungry. But is there any evidence that an animal can think about its reasons for acting? Can the cat think about its hunger? There is little to no evidence that a dog or a cat can think about its own thoughts or the thoughts of others. Furthermore, if an animal cannot think about its own thoughts, then, according to some philosophers, the animal cannot have conscious mental states.

    Before we discuss these challenges to the claim that animals have mental states, I want to explain why most philosophers agree with pet owners that animals have at least some mental states, even though we cannot experience or directly observe them.


  • Mental States and Computers

    Could computers, which can be completely described scientifically, have mental states? Clearly they act as if they are intelligent. They seem to understand you when you communicate with them. They can answer questions. They seem to remember things. But maybe they don’t have real intelligence. Maybe their intelligence is merely simulated. How can we ever know which it is, real intelligence or merely simulated intelligence?

    Alan Turing, the English philosopher, mathematician, and person credited with creating modern computing, believed that if a computer could ever fool people into thinking it was a person, this would be reason to believe that it has mental states. But how could a computer ever fool anyone? You can tell by looking at a computer that it’s not a person. Turing proposed a test, now referred to as the Turing Test, in which a person (the interrogator) can’t see who he or she is communicating with: it’s either another person or a computer. The interrogator receives answers to his or her questions only through a computer monitor. If the interrogator was in fact communicating with a computer but couldn’t tell that it was a computer, then, Turing believed, the computer passed the test and possessed the mental state of understanding.
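
    The structure of the test is simple enough to sketch in a few lines of code. The sketch below is mine, not Turing’s, and the two respond functions are hypothetical stand-ins for a real person and a real program; the point is only that the interrogator’s evidence is text alone, with nothing to indicate which source produced it. If judges guess no better than chance, the machine passes.

        import random

        # Hypothetical stand-ins for the hidden participants (in a real test,
        # one would be a person typing and the other a program).
        def human_respond(question):
            return "Let me think about that for a moment."

        def machine_respond(question):
            return "Let me think about that for a moment."

        def imitation_game(questions, judge):
            """One round: the judge sees only text answers and must guess
            whether the hidden respondent is a machine. Returns True if
            the judge guessed correctly."""
            respondent_is_machine = random.choice([True, False])
            respond = machine_respond if respondent_is_machine else human_respond
            answers = [respond(q) for q in questions]  # the judge's only evidence
            return judge(questions, answers) == respondent_is_machine

        # A judge who can do no better than chance: the machine passes.
        def guessing_judge(questions, answers):
            return random.choice([True, False])

        trials = 1000
        correct = sum(imitation_game(["Can you write a sonnet?"], guessing_judge)
                      for _ in range(trials))
        print(f"Judge accuracy: {correct / trials:.2f} (near 0.50 means the machine passes)")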

    Some philosophers believe not only that computers could have mental states, but that some computers already do. The philosopher Shelly Kagan uses the example of a chess-playing computer to illustrate such a view [watch this video at minute 35:07]. He says that when we play against a chess-playing computer, we explain what it’s doing by ascribing mental states to it. We say things like: it believes that we’re going to move our queen, or it wants to win the game. We ascribe to it the ability to form goals and to reason about what to do. Why did the computer move its bishop? We might say it intends to put us in check, or it believes that we’re going to pin it. Beliefs, desires, intentions, reasoning, planning – these are all examples of mental states.
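
    It is worth seeing how naturally this mental vocabulary attaches even to a very small program. The sketch below is my toy illustration, not Kagan’s example or a real chess engine: it plays a simple subtraction game by exhaustive search. Its code contains nothing but arithmetic and comparisons, yet describing its choices in terms of what it “wants” is almost irresistible.

        # A toy game-playing "engine" (my illustration, not Kagan's chess program).
        # Subtraction game: from a pile of stones, each player removes 1, 2, or 3;
        # whoever takes the last stone wins.
        def best_move(pile, maximizing=True):
            """Return (score, move) from the program's point of view: +1 if the
            program wins with best play, -1 if it loses. There are no beliefs or
            desires in this code, only search over outcomes plus max and min."""
            if pile == 0:
                # The previous mover took the last stone and won.
                return (-1 if maximizing else 1), 0
            moves = [m for m in (1, 2, 3) if m <= pile]
            outcomes = [(best_move(pile - m, not maximizing)[0], m) for m in moves]
            return max(outcomes) if maximizing else min(outcomes)

        score, move = best_move(10)
        print(f"From a pile of 10 the program takes {move} stone(s),"
              f" as if it wants to leave its opponent a multiple of 4.")

    When we say such a program “believes” we will move our queen or “intends” to put us in check, we are, on this view, tracking exactly this kind of goal-directed structure in its behavior.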

    The fact that we ascribe mental states to something to explain its behavior does not necessarily mean that it really has mental states. It might be that we are merely personifying it – that is, treating it as if it were a person without believing that it has mental states. For example, when we say the grass is thirsty, we are obviously personifying it. We don’t believe the grass has mental states. But grass does need water to survive, and in this sense it is similar to a person; so we speak about it metaphorically, as if it were thirsty. Perhaps, then, we’re speaking metaphorically when we say a computer is intelligent or has such-and-such a belief or desire.

    It’s true that we also treat other things, like grass, as if they have mental states when we know that they don’t. But there seems to be an important difference between these other things and computers. Computers seem to behave much more like people than grass does. Perhaps this is why Turing believed that if computers could fool people into thinking they were people, then this would indicate that they had mental states. Grass isn’t going to fool anyone.
