My previous post on consciousness, responding to Ted Slingerland’s views, attracted excellent responses that deserve a more detailed reply. Ryan Overbey addresses my claim that scientific experiment cannot disprove consciousness because such experiments depend on experience and perception:
You say “experience and perception” presume consciousness. In that case, consciousness for you seems to be defined as any system of taking in sensory data, storing information about that data, and processing that information. Am I reading that correctly? We have computer programs that can do these things. Would they fit your criteria for consciousness? If so, cool! If not, why not?
The answer to the question here is: absolutely not. Taking in empirical data and processing it, in the way that a computer program does, does not count as experience, perception or consciousness. Why? Consider dreaming. When we dream, we are not taking in any data from the observable world at all; but we are still perceiving. A dream, rather, is an interior state. Of course physical changes occur in the brain when we dream; but a dream is necessarily more than that. To say a dream is nothing but those physical changes is to say not merely that the things we dream about do not exist, but that even the fact of our dreaming about them does not exist.
In the specific case of scientific knowledge: knowledge and understanding are themselves interior states, as dreams are. I think John Searle’s “Chinese room” argument demonstrates the point well. Suppose you had a computer program that could take Chinese characters as input and return other Chinese characters as output, in a manner similar to a more sophisticated version of the classic program ELIZA – to the point where a Chinese speaker couldn’t tell the difference between the program and a real person (i.e. the program would pass the Turing test). Not improbable. Here’s the trick: it’s also not improbable that you could train an English speaker to do the exact same thing, without ever learning the referent of a single Chinese word – simply by learning which characters in which patterns to return in response to which inputs, on the basis of pattern recognition. It would be very difficult to claim that such a person understands written Chinese; so why should we think that the computer understands written Chinese either? Understanding must be something more than behaviour; it must be an interior state.
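To make the contrast concrete, here is a minimal sketch of what such a symbol-shuffler might look like. It is a toy, of course: the rule table is hypothetical, and a Turing-test-passing version would need something enormously larger, but the principle is the same.

```python
# A toy sketch of the Chinese-room idea: pure pattern matching over
# characters, with nothing anywhere that represents what they mean.
# The rule table below is hypothetical and vastly simplified.
RULES = {
    "你好吗": "我很好，谢谢。",       # "how are you?" -> canned reply
    "你叫什么名字": "我叫小明。",     # "what is your name?" -> canned reply
}

def respond(message: str) -> str:
    """Return a reply by lookup alone; no meaning is modelled."""
    for pattern, reply in RULES.items():
        if pattern in message:
            return reply
    return "请再说一遍。"  # default: "please say that again"

print(respond("你好吗？"))  # fluent-looking output, zero understanding
```

The man in the room is doing exactly what `respond` does: matching shapes to shapes.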
It is these interior states of knowledge and understanding which science requires. If you program a computer to manipulate variables and record the results in order to test hypotheses, and the computer keeps doing this after the human race has died off, the computer is not doing science, because what the computer has is only behaviour, not knowledge, and science requires knowledge.
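For illustration, a sketch of that scenario – everything here, from the `measure` stand-in to the toy hypothesis, is hypothetical:

```python
import random

# A sketch of the post-human "computer doing science" scenario: vary a
# parameter, record the outcomes, and mechanically flag a hypothesis as
# supported or not. 'measure' is a stand-in for some attached apparatus.
def measure(x: float) -> float:
    return 2.0 * x + random.gauss(0.0, 0.1)  # hypothetical noisy process

results = [(x, measure(x)) for x in range(10)]  # manipulate the variable, record results
increasing = all(y2 > y1 for (_, y1), (_, y2) in zip(results, results[1:]))
print("hypothesis 'output increases with input':",
      "supported" if increasing else "not supported")
```

The loop behaves exactly like an experimenter, and yet nothing in it knows anything – which is the point.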
To deny the existence of interior states entirely would require that one claim that the man in the room understands Chinese despite not knowing the referents of Chinese words, and that the subjective perception of dreams does not exist beyond the movements of neurons. Such claims fly so clearly in the face of any common-sense understanding that the burden of proof must necessarily be on those who wish to deny them. On what grounds could we say that dreams don’t exist, or that the man in the Chinese room understands Chinese? It cannot be enough to say that claims about interior states cannot be empirically tested, since any requirement that knowledge be empirically tested cannot itself be empirically tested.
Ben said:
Knowing Searle’s argument well, and not knowing the field of philosophy well, I find myself surprised that anyone still takes that argument seriously.
Going into the depths of Searle (i.e., beyond what you write about here), the biggest flaw in his logic is that he simply assumes that consciousness has non-simulable properties. A house fire on a computer is not a house fire, he says; hence, a computer representation of consciousness is not consciousness. But some things have non-simulable properties, and some don’t: a computer representation of an exam is, in fact, an exam. “Simulation is not reality” is a conclusion that needs proving in individual cases, but Searle treats it like a global assumption.
As for the plain and simple version of the Chinese Room argument, one of the direct counterarguments goes like this: “Just as none of my individual neurons are conscious, the blind translator in the Chinese room is not conscious, indeed. However, the entire system as a whole – from inputs to translation book to blind translator to outputs – has the emergent property of consciousness, just as the system comprising my neurons put all together (with their inputs and outputs) has the property of consciousness.” Not that we can prove the Chinese-room-system *is* conscious, but this shows the failure of Searle’s argument. His logic is circular; it rests on intuitive assumptions about the nature of consciousness.
This counterargument to Searle does, indeed, use some counter-intuitive views of consciousness, but those intuitions arise substantially from the mysteriousness of how our brains operate. Hopefully, the analogy of “dumb translator” to “dumb neurons” highlights those assumptions and shows where they break down. However, this “emergent phenomenon” view isn’t the same thing you seem to argue against so strongly: it does not claim that interiorness doesn’t exist, but instead makes a different claim about how this interiorness comes about.
Of course, there’s a whole other possible line of argument here, if I may mix my metaphors and be the devil’s advocate for a can of worms. It’s empirically clear that our conscious perceptions are frequently inaccurate. Beyond simple sensorimotor stuff, we often act and decide for reasons different from the ones we consciously recognize. If our perceptions about decisions are inaccurate, are we really deciding, or just tacking on explanations? Is this “consciousness” thing just a post-hoc addition that creates a communicable (if wildly inaccurate) narrative about what we do? If so, in what way is conscious “understanding” even a meaningful concept?
Amod said:
First, I’m not yet sure how much we’re disagreeing here. As I think you note, I’m not necessarily arguing against the idea that consciousness (or interiority) is an emergent phenomenon. What I am strongly arguing against, and using the Chinese room to argue against, is the idea (which Ryan’s previous post at least suggested) that it’s entirely reducible to behaviours (such as information processing) and the physical units and systems (such as neurons) associated with those behaviours. Interior states are real; but that leaves entirely open the question of where they come from. Something has to cause consciousness to come into being, and I’m happy to leave open the possibility that that’s entirely physical; if so, it probably is an emergent phenomenon. I’m certainly not trying to advocate an Augustinian conception where free will somehow pops into existence irrespective of human biology.
As for the devil’s advocate argument: yes, our perceptions can certainly be inaccurate, and we may turn out to be wildly deluded about the nature of our own consciousness. It’s a typically Buddhist view, for example, to say that the self is not real; consciousness is more detached from individuals than we’d like to believe. But what doesn’t happen, even there, is the absence of consciousness altogether. Descartes’s arguments fail, I think, in establishing “I think therefore I am.” What he succeeds in establishing is a weaker point: “there is thinking, therefore there is being.” One can’t doubt the existence of doubt; that doesn’t make sense – it’s like trying to refute the law of non-contradiction. We may need to drastically revise our ideas of what thinking is, but I don’t think we can or should revise the idea that thinking is.
On other potential disagreements, let me ask a couple things: do you think it’s fair to say that the whole system of the Chinese room understands Chinese? And perhaps more importantly, do the people designing the Chinese room – training the man to respond – count as part of this system?
Ben said:
We do seem to mostly be in agreement, though there is one little point: emergent systems theory may be compatible with the idea that consciousness is “entirely reducible to behaviours… and the physical units and systems”, which you do argue against. It’s quite possible that understanding the units well enough will provide the entire story about the consciousness. Put differently, consciousness might be fully entailed by a complete description of the physical phenomenon. Emergent phenomena may be compatible with reductionism.
I think that the whole system of the Chinese Room does understand Chinese. Or, rather, I’m convinced that it’s ~possible~ for the CR to understand Chinese. I think the claim “the Chinese Room cannot be understanding” entails the claim that “only neurons can give rise to consciousness.” If non-neural systems can ever have understanding and consciousness, then the Chinese Room sits somewhere on that scale of complexity (or of whatever elements are sufficient for consciousness); the CR could potentially fall below some threshold, but its understanding is not impossible.
(My gut instinct would definitely be to include the coders as part of the system! However, I’d need to think through the implications more; I actually had that thought when first posting, but decided to hold it back, since I wasn’t certain of it.)
Amod said:
I think the presence of the room’s designers makes a big difference to the claim that the Chinese room as a system understands Chinese. I’m happy to agree that the whole system understands Chinese if they’re in there – because people who understand Chinese (subjectively, in their minds) are already a part of the system. If they’re not included, I’m less sure as to whether I’d agree – I haven’t thought that through.
Emergent said:
At the end of the day, the CR has limited use as an example- it’s just a bad (possibly straw man) metaphor for the question of whether computers can be conscious. The underlying question is really “Can operations on data suffice to give rise to consciousness?”, to which Searle says, “Never.” As I tried to get at in the middle of my last comment, one can claim “yes” without affirming that the CR itself does so successfully.
Therefore, I don’t think it’s worth putting too much effort into worrying about the consciousness of the CR system itself. Regardless, my intuition tells me that including the coders in the CR-system may be as valid/necessary as including the world in the brain-system.
Ryan Overbey said:
Just to be clear, I was not trying to make any affirmative claim that whatever it is you’re calling “consciousness” is entirely reducible to behaviors and materials. I can’t do that, since I don’t even know what you mean by the word. If by “consciousness” you mean some sort of Deepak Chopra-esque spooky spirit inhabiting a physical vessel but ultimately able to soar majestically into the Great Jungian Monad after death, then I wouldn’t say your consciousness is a material phenomenon, I would just say it’s a nice fantasy, like the Flying Spaghetti Monster or Amitābha Buddha.
I was simply trying to get at what you meant by “consciousness”, since the only definition I could see you give was a rather odd one – consciousness for you is whatever is presumed when you have “perception and experience”. But then this raises the question of what on Earth you mean by perception and experience. I gave possible definitions of these which could be fulfilled by robots. But you rejected these definitions because “perception” and “experience” require an “interior state.” And in your last paragraph, you implied that whatever it is you call “interior states” are not entirely physical; they are somehow, vaguely, something more. (I am confused by how dreams could be anything MORE than “the movements of neurons” – I don’t even know what you mean when you say that common sense dictates it is something “beyond” that. I would love to hear an elaboration of how precisely a dream is not reducible to material phenomena. I can then show you the many scientific articles which persuasively do precisely that…)
Talking about these things is all rather pointless without clear definitions, of course. Better to have your definitions up front, or else people just run around in circles, trying to figure out what you mean.
For my part, I think there is nothing wrong with saying that interior states are reducible to the physical phenomena that give rise to them. And I don’t think that contradicts my commonsense understanding at all. If you rip out my brain from its stem, dump gasoline on it, and set it on fire, the resulting charcoal will not dream, will not have what you call “knowledge”, and will (hopefully) not have whatever it is that you call “consciousness.”
Amod said:
First question then should be: what would you accept as a satisfactory definition? Most attempts to define consciousness are going to involve some number of closely related terms (such as subjectivity or interior states). I can try to come up with a further working definition involving those terms if you like, but it’s probably not worth it if that’s not going to satisfy your criteria for an adequate definition – and I have a feeling that it isn’t, since you’ll probably then demand definitions of those terms and so on. The thing is, demanding something beyond those related terms is like demanding a definition of the number two that doesn’t refer to any other numbers (or to number-related words like “second”); you have already ruled the possible terms of a definition out of court. I suspect you’re demanding a definition purely in terms of physical states and processes, and it should be obvious why I’m not giving you that. If that’s not what you’re asking for, what kind of definition are you seeking?
The example of dreams should at least help illustrate what I’m talking about, probably more clearly than a definition of the form “consciousness is x” could. Dreams are the most obvious example of interior states, something which robots don’t have. The reason I say they are the most obvious example is that dreams have cognitive content; they are dreams about something, of something – and that something is not actually in the world external to the dreaming and perceiving subject. A similar thing is true of consciousness more generally, in most if not all cases: consciousness has objects. Your position seems to require you to deny that. If a dream is nothing more than the movements of neurons, then the content of the dream does not exist, for that content is not itself the movement of the neurons.
Let me explain the point a bit more: It’s not correct to say that when I thought I saw a car in a dream, what I saw “was actually” my neurons moving. If, while awake, I thought I saw a car and it turned out to be a fast-moving moose, then I misperceived the moose as a car; but my neurons were still moving there, in a very similar way to the way they move in the dream. In both cases there is a perception of something, and the object of that “of” is not merely the neurons. (Dreams do not – as far as I know – result from faulty viewing of the interior of the eyelid.) Your position seems to commit you to denying these objects of consciousness.
You can reduce the cause of dreams to material phenomena, but not the dream itself. I’ll happily take a look at the scientific articles you mention if you like; but I will bet you that they show absolutely nothing of what you claim they show. No doubt they show that we can tell exactly when someone is dreaming by the particular movements of their neurons. That says absolutely nothing whatsoever to indicate the dream is only the neurons. It can’t. In order to say that the dream was the neurons in the first place, you had to have the subjective experience of the dream to begin with; you had to have someone telling you “I’m dreaming,” or you have nothing to connect the neurons’ movements to, to say that it was a dream.
When I say common sense dictates that it is something beyond that: suppose you somehow managed to do a properly conducted, reliable scientific experiment demonstrating that the light coming from the sky on a normal bright summer day has a wavelength of 680 nm. Scientifically, this would mean that you have proved the sky is red. Common sense nevertheless says the sky is blue, and common sense says dreams have content.
To say that dreams are nothing more than the movements of neurons would be to say that you are not in fact seeing and hearing the things in your dreams; you are not actually perceiving them, they do not actually appear to you. If you don’t think that it contradicts common sense to say that the things we seem to see in our dreams do not actually appear in our sight, I’m not sure what I can say to you – in the same way that I’m not sure how I could respond to someone who (having done a reliable experiment) asserts that common sense says the sky is red.
I suppose I could try to do some sort of poll to indicate that the vast majority of people believe they’re seeing the contents of their dreams, but that seems like it should be unnecessary.
But beyond denying common sense, it’s logically contradictory, in the same way that doubting the existence of doubt is. What could it mean to say that the car that I thought appeared in last night’s dream didn’t actually appear in the dream? It would mean, surely, that the car only appeared to appear. But how is that different from the car merely appearing?
In short, it seems to me that you don’t really have a theory of error here. If we’re not really perceiving the objects of our dreams – if that “seeing” is nothing more than the movements of neurons – then what are we perceiving? If “we’re not perceiving anything, we just think we do” – well, what, on your understanding, would it mean that we think we perceive something?