My previous post on consciousness, responding to Ted Slingerland’s views, attracted excellent responses that deserve a more detailed reply. Ryan Overbey addresses my claim that scientific experiment cannot disprove consciousness because such experiments depend on experience and perception:

You say “experience and perception” presume consciousness. In that case, consciousness for you seems to be defined as any system of taking in sensory data, storing information about that data, and processing that information. Am I reading that correctly? We have computer programs that can do these things. Would they fit your criteria for consciousness? If so, cool! If not, why not?

The answer to the question here is: absolutely not. Taking in empirical data and processing it, in the way that a computer program does, does not count as experience, perception or consciousness. Why not? Consider dreaming. When we dream, we are not taking in any data from the observable world at all; but we are still perceiving. A dream is an interior state. Of course physical changes occur in the brain when we dream; but a dream is necessarily more than those changes. To say a dream is nothing but physical changes in the brain is to say not merely that the things we dream about do not exist, but that the very experience of dreaming them does not exist either.

In the specific case of scientific knowledge: knowledge and understanding are themselves interior states, as dreams are. I think John Searle’s “Chinese room” argument demonstrates the point well. Suppose you had a computer program that could take Chinese characters as input and return other Chinese characters as output, along the lines of a more sophisticated version of the classic program ELIZA, to the point where a Chinese speaker couldn’t tell the difference between the program and a real person (i.e. the program would pass the Turing test). Not improbable. Here’s the trick: it’s also not improbable that you could train an English speaker to do the exact same thing without ever learning the referent of a single Chinese word, simply by learning which characters to return in which patterns based on which inputs, purely as a matter of pattern recognition. It would be very difficult to claim that such a person understands written Chinese; so why should we think that the computer understands written Chinese either? Understanding must be something more than behaviour; it must be an interior state.
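To make the premise concrete, here is a minimal sketch, in Python, of the kind of program at issue: an ELIZA-style responder that produces plausible replies by matching patterns in the input, with no representation of what any word refers to. The rules and phrases are invented for illustration; imagine the same table scaled up enormously, and written in Chinese.

```python
import re

# A toy ELIZA-style responder. The rule table below is invented for
# illustration: each entry pairs an input pattern with a canned reply
# template. Nothing here requires knowing what any word means.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\b(mother|father)\b", re.IGNORECASE), "Tell me more about your family."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    """Produce a reply by pattern matching alone, with no model of meaning."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I feel trapped"))       # Why do you feel trapped?
print(respond("My mother called me"))  # Tell me more about your family.
print(respond("The weather is fine"))  # Please go on.
```

The man in Searle’s room is, in effect, executing this lookup by hand; the procedure neither requires nor produces understanding.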

It is these interior states of knowledge and understanding which science requires. If you program a computer to manipulate variables and record the results in order to test hypotheses, and the computer keeps doing this after the human race has died off, the computer is not doing science, because what the computer has is only behaviour, not knowledge, and science requires knowledge.
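For the sake of concreteness, here is a deliberately crude sketch of such a machine; every detail (the run_trial stand-in, the linear hypotheses, the error tolerance) is invented for illustration. The loop manipulates a variable, records the outcomes, and discards hypotheses the data contradict, all entirely mechanically.

```python
import random

def run_trial(x: float) -> float:
    """Stand-in for an instrument reading; imagine a sensor behind this call."""
    return 2.0 * x + random.gauss(0.0, 0.1)  # the 'true' law, plus noise

def survives_testing(slope: float, trials: int = 100, tolerance: float = 0.5) -> bool:
    """Mechanically check the hypothesis 'reading = slope * input' against data."""
    total_error = 0.0
    for _ in range(trials):
        x = random.uniform(0.0, 10.0)                 # manipulate the variable
        total_error += abs(run_trial(x) - slope * x)  # record the result
    return total_error / trials < tolerance

# This loop could keep printing verdicts long after anyone is left to read
# them; on the argument above, that would be behaviour, not science.
for slope in (1.0, 2.0, 3.0):
    verdict = "retained" if survives_testing(slope) else "rejected"
    print(f"hypothesis reading = {slope} * input: {verdict}")
```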

To deny the existence of interior states entirely, one would have to claim that the man in the room understands Chinese despite not knowing the referents of any Chinese words, and that the subjective experience of dreams does not exist beyond the movements of neurons. Such claims fly so clearly in the face of any common-sense understanding that the burden of proof must fall on those who wish to deny interior states. On what grounds could we say that dreams don’t exist, or that the man in the Chinese room understands Chinese? It cannot be enough to say that claims about interior states cannot be empirically tested, since the requirement that knowledge be empirically tested cannot itself be empirically tested.