
Monday, September 2, 2019

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

47 comments:

  1. "The code -- the RIGHT code (assuming it exists) -- has to be EXECUTED in the form of a dynamical system if it is to be a mental state."

    Building on this, I remember from studying foreign languages that all language-acquisition has a symbol-manipulating, rule-following component to it. It is as a result of consistent exposure to the language as it is used in real life (for example in conversations) that we can eventually start "understanding" it. Searle's argument is that since he cannot "understand" Chinese although he is able to answer questions in Chinese as a native speaker would, we cannot assume that a computer has any sort of understanding based on rule-following. Searle seems to be saying that since a specific computation as imagined in the CRA does not yield understanding, understanding cannot be computation. However, could it not just be that the computations that Searle was carrying out in this thought experiment were simply not the correct program for language-acquisition/an incomplete version of it?

    Searle is assuming a certain kind of software that would pass T2, but there may be different kinds of software that could pass it, and perhaps it is just that the one Searle is imagining is too simple to yield language understanding?

    Replies
    1. 1. The TT (T2) is not a Siri-like "question answering". It is anything you could discuss verbally, with anyone, for a lifetime. The "correct program" is the one that can pass T2. (The rest is about Weak vs. Strong Equivalence.)

      2. Yes, all language includes syntax (grammar), which is just rule-based symbol manipulation. (We will discuss that when we get to Chomsky in Weeks 8 & 9.) But this is about semantics, i.e., meaning, and understanding meaning.

    2. I think that you pose a really great question here about if maybe Searle's program is just too simple to understand language. I think that no matter what the code is, it is going to be a computation (rule-based manipulation of formal symbols) and I am convinced by Professor Harnad's response to Searle that computation alone cannot be cognition. There are definitely a variety of codes, or in Searle's case book of instructions, that are possible to create. For instance one could implement a binary search, or linear search, or another strategy to sift through all of the squiggles and squaggles, but no matter what the instruction book says, all Searle is doing is manipulating symbols. One code may be faster or more efficient than another, but I think that they are all just different types of computation, so Searle’s periscope suggests to me that none will lead to true understanding. I think your point about simplicity is really important to consider if we are trying to reverse-engineer cognition by partly using computation because the software component would need to be really complex to capture the wide functions of cognition like language-acquisition.
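      To make that point concrete, here is a minimal toy sketch (the rulebook entries below are invented for illustration, not drawn from the reading): however the rulebook is consulted -- by hashed lookup, by linear scan, or by binary search -- the system only ever matches input shapes to output shapes, with no understanding anywhere in the process.

      ```python
      # A toy "T2 rulebook": input strings paired with canned replies, treated
      # purely as shapes. (Entries are invented for illustration.)
      RULEBOOK = {
          "你好吗": "我很好",
          "你叫什么名字": "我叫约翰",
      }

      def reply_by_lookup(symbols):
          # One way of consulting the rulebook: hashed lookup.
          return RULEBOOK.get(symbols, "对不起")

      def reply_by_linear_scan(symbols):
          # A different, slower way of consulting the very same rulebook.
          for squiggle, squoggle in RULEBOOK.items():
              if squiggle == symbols:
                  return squoggle
          return "对不起"

      # Both strategies behave identically; neither involves any grasp of what
      # the shapes mean -- only shape matching.
      assert reply_by_lookup("你好吗") == reply_by_linear_scan("你好吗")
      ```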

    3. I'm not sure what you mean about simplicity, and what it has to do with the TT. The problem with computation is that it is just syntax (manipulation of meaningless squiggles and squoggles) and meaning (hence understanding) is not.

  2. “Note, though, that even for D3 not all the structural details would be irrelevant: To walk like a duck, something roughly like two waddly appendages are needed, and to swim like one, they'd better be something like webbed ones too. But even with these structure/function coupling constraints, aiming for functional equivalence alone still leaves a lot of structural degrees of freedom open.” (from text 3b)

    After my first reading I was skeptical that there really were some degrees of freedom to explore before moving to insist that a T4 system would be needed to pass the T3 test. My acceptance of this idea came after revisiting the topic of evolution we briefly discussed in the first seminar. A simplified version is that natural selection favours mutations that allow organisms to better adapt to their environments. The key point here is that the mutations themselves are random, they are not goal-directed, but if something does happen to function advantageously, it will likely become more frequent in the population. Consequently, evolution shows us the same function can be achieved in many ways by different structures. For instance, marine animals employ different structural strategies like swim-bladders or lipid-content in order to achieve the same macro-function of buoyancy, showing that degrees of freedom can exist when just considering a macro-view of performance. I am still unsure on the details of how to reverse-engineer cognition, but due to accepting these degrees of freedom I am open to the idea that the entire internal structure doesn’t have to be identical to our own (T4).

    Replies
    1. Yes, evolution gives examples of "weakly equivalent" dynamic capacities.

      But whether T4 is needed to pass T3 is an empirical question. We'll find out as we get closer. (It's similar to whether we need T3 to pass T2).

      But that's not quite the same question as whether you would kick Ting. Ting has already passed T3; since she could do it without T4, the question is about whether T3 is enough, not whether T4 is needed to pass T3.

    2. You raise an interesting point about random mutations. Building on what you have said, human behaviour/function constitute themselves a spectrum. Some people may react differently/in a way deemed "inappropriate" by social norms to certain statements, depending on how they have grown up/their ability to interact with others in a social setting. This could create issues with the idea of equivalence itself (in this case I am only considering weak equivalence). We have talked about the idea of a person in a vegetative state, and how reverse-engineering cognition to match their functions would not help us understand cognition for people in non-vegetative states. This argument convinces me that we should focus on reverse-engineering cognition in its most common form. However, I am curious as to where we draw the line for our expectations of a "normal" conversation with a robot. Must the robot use humour, sarcasm, have emotional intelligence in order to pass T2? Is it just a question of whether it can convince enough people for long enough?

    3. What cogsci wants to reverse-engineer is cognitive capacities, not particular personality traits, which differ from person to person, and depend also on experience (inputs and outputs). And the capacities are the normal, generic capacities that people have. When you puzzle about whether it would require "X" to pass the TT, ask yourself if you would be alright with kicking Ting if she lacked X (humor, sarcasm, empathy)... (The "Ting Test" is not perfect, it can be criticized too, but it's a good way to make sense of what you would expect of a real person and what would only be incidental.)

    4. @Stephanie Pye I think that you raise an important point. Yet, I'm not convinced evolution gives us as many structurally different but functionally equivalent exemplars of cognition as you seem to suggest (or hope). As far as I know, most organisms that exhibit intelligent behaviour have nervous systems that are at their core pretty similar to ours. You might point out things like octopuses which have semi-decentralized nervous systems, but nonetheless their nervous systems are constituted of neurons. Oddly enough, reading Searle and Harnad has somewhat decreased my confidence that T3 is the appropriate level. And by that I mean that I think it could arguably be the case that in order to pass T3, T4 requirements might have to be met.

    5. Solim, you are right that it could turn out that "in order to pass T3, T4 requirements might have to be met." But the important thing to note is that the test of which T4-requirements need to be met in order to pass T3 -- is T3, not T4! In other words, some (or all) of T4 may not be required to pass T3.

      Searle (incorrectly) concluded that his Chinese Room Argument showed that the only way to solve the "easy problem" of explaining the brain's "causal power" was to study the brain. Fair enough. But the brain is not an organ like the kidneys or the heart. The heart pumps blood, but the brain "pumps" everything we can do; everything Ting can do. And it is whether we have successfully reverse-engineered a mechanism that can do everything we can do that the TT is intended to test. To pass that test, our candidate has to be able to do everything we can do.

      That means T3 tests what parts of T4 are needed to pass T3.

  3. Since this article in conjunction with Searle has rejected T2 as being enough to explain consciousness or understanding, I am confused as to what the better test would be. Since it was stated we do not have any better empirical test than the Turing Test as far as functional equivalence, is this arguing for T3 or T4? Or proposing that the answer must lie somewhere else? Harnad did state that the answer couldn’t really be either T3 or T4 since they aren’t actually distinct and separate iterations of the same test, but rather are part of a continuum of functional equivalence where the precision of the behavioural equivalence determines where the robot is placed.

    Also, since pure computationalism is rejected in this article, does this imply that both functional and structural equivalence are needed to validate the consciousness of hybrid or noncomputational systems?

    Replies
    1. Hi Aylish,

      Some quick responses to your thoughts:

      "I'm confused as to what the better test would be."

      I don't think there is a better test proposed here. If anything, this article states that the best test we have for determining whether understanding exists in another is the same test that we use in our everyday lives to solve the Other Minds problem - mind reading, or a lifelong Turing Test. That is to say, our best test is insufficient to answer the question - but it's our best test nonetheless.

      "Is he arguing for T3 or T4"?

      In terms of which Turing Test is required for understanding, I would say both - even T2 would count as far as I get it. All that matters is that the right kind of programs be run on the right kind of hardware - and also, that there must be some hardware. From that, whether the intentionality is expressed in writing (T2), in behavior (T3), or in a body (T4) is irrelevant.

    2. There is the question of whether computation alone can pass T2 (Searle's -- correct -- answer is: no)

      Please read the other Replies about "intentionality."

      Then there is the question of whether T3 capacity is needed to pass T2, or whether T4 capacity is needed to pass T3. We don't know yet; all we know is that it can't be just computation.

  4. "Can we ever experience another entity's mental states directly? Not unless we have a way of actually becoming that other entity, and that appears to be impossible --" This thought is rather interesting to consider when thinking about what seems to be the most intuitive take on this impossibility for the average person. I'm pulling this intuition from my experience with fantasy/fiction works (not that there is really any nonfictitious literature involving taking on another being's form) such as Discworld and D&D, but it seems in this regard most people believe were it possible to take on the form of another being we would not even 'access' their mind but instead only place our own in their physical form (such as maintaining your initial mental stats but using altered physical ones were you to turn into a bear, or a mouse, or a bat). Which speaks to the appeal of the hardware/software distinction.

    Replies
    1. Assuming that "becoming that other entity" means every single cell in your body is identical to that of the target entity, I still would argue that you can't experience the other's mental states directly. I think you pointed out quite nicely with the examples of fantasy and role-playing works, that we're only pretending what it's like to be the entity.

      Maybe we humans haven't come to terms with the fact that it is impossible to even come close to experiencing what others think/feel? As Robert says in the comment below, the next step in figuring out what is actually "understanding" requires an exponential amount of research and work in the field...

    2. A BIT OF METAPHYSICAL MISCHIEF...

      Deirdre, the intuition that "if I were you I would do this rather than that" is incoherent, since if I were you I would be you, not me, and I'd do exactly what you would do! You can see what someone else sees; you can have feelings that are similar to someone else's feelings; but they still would not be their feelings but yours. That's just the other-minds problem -- a spin-off of the Cogito. The only feelings I can ever feel are my own. The rest is just guesswork.

      Let's not get into too much other-minds sci-fi, though, because that's not what Searle's argument is about. It's just about whether doing the computations for communicating with someone in Chinese would make you understand Chinese. It wouldn't. So the computer doing the same computations and passing T2 would not be understanding Chinese either. (The only sci-fi part may be imagining that a computer could do that! But that's "Strong AI" (computationalism) and Searle just shows that it would not understand even if it were possible.)

      Wendy, you are talking about magic! If there are two people, one of them cannot "become" the other, only grow more and more similar to it; nor can two inanimate objects. Nor can two things be "identical" (unless they are just one thing, trivially identical with itself -- whatever that means!)

      Cog-sci may sometimes sound like sci-fi, but it's not. It's really reverse-engineering. The capacities are really there, in organisms. Anyone can observe them. The challenge is to explain how the brain -- or anything that has the same capacities -- produces them.

      The puzzles -- about what makes an object the same object across time, what makes a person the same person across time, whether you can make the same thing into another thing by gradually swapping its parts, and whether or not "two" things that have exactly the same properties are really just one thing -- are parts of metaphysics and philosophy of mind. Philosophers, because they do it for a living, do metaphysics and philosophy of mind somewhat better than sci-fi writers and cogsci people -- but not that much better. If you really want to know what they fuss about, see the Identity of Indiscernibles, the Sorites Paradox, and the Problem of Personal Identity.

      But I don't recommend it. The metaphysical puzzles are never really resolved -- and cogsci's "easy problem" is already challenging enough!

  5. I think this reading raises a lot of questions about where can we move forward with the cognitive sciences. Now that we know that the CRA has poked a hole into the computational argument, one must wonder what is the next step for the computational argument? I don't necessarily want to posit the idea that perhaps the direction of Cog. Sci should be facing towards a form of "meta-cognition", but is it a viable option? Obviously it can't just be computation on top of computation, as that would still run into the error that Searle pointed out about understanding. But if we have a combination of computation and neural networking, I think we could come closer to figuring out what is actually "understanding".

    Though I also see this as a form of infinite regress where it would be necessary to always have a "system of computation" above the previous system in order to compute it.

    I also think that it is very fascinating how Searle's argument led to the discovery that cognition cannot be reverse-engineered without a body to house the cognitive state. I think it says a lot about the nature of cognition/mental states that a body is necessary in order to house a mental arena.

    Replies
    1. What do you mean by "metacognition"? Is it, too, (1) something that organisms can do?, or is it (2) a theory (i.e., a candidate reverse-engineered causal mechanism) of how they do it? or (3) just a report of what it feels like to be able to do it?

      There is no infinite regress on computation. Computation remains the ruleful (algorithmic) manipulation of squiggles and squoggles based on their (arbitrary and meaningless) shapes that Turing defined. That's true regardless of which "level of interpretation" you project on the squiggling and squoggling.

      Neurons are neurons: cells that interconnect and fire. "Neural nets" are computational algorithms that can learn; they are not computational simulations of neurons, and if they were, they would not be neurons, any more than computational simulations of furnaces are furnaces.

      That a body is needed in order to be able to do everything Ting can do was already evident before Turing or Searle.

      Turing focussed on (1) the power of computation, (2) the fact that the body's "doings" are the only thing we have by way of objective, observable data, and (3) that there might be something about language that is powerful enough to test all of our cognitive capacities (T3), not just the verbal ones (T2). He was right on (1) and (2), and might also be right about (3).

      Searle showed that computation is not enough to produce all of cognition (though he thought computation could not produce any cognition at all - that only the brain has the "causal power" to do that).

      But the fact that the brain's "doing-powers" were bodily powers was obvious all along (despite the homuncular fantasies we all had about what was going on inside the internal control room in our heads, or the "brain-in-a-vat"). Nor is the insight merely that you need a body to "house" the homunculus, whether computational or otherwise.

      The insight is that the brain's "causal powers" are sensorimotor powers to do things with the things in the world outside the body. And that our sensorimotor powers may be what is really behind our verbal powers too. They may be what "grounds" the "meanings" of our thoughts, and words.

      And sensorimotor organs and function are not just computation. They can be simulated by computation ("Weak AI" & Strong Church/Turing Thesis) but they cannot be substituted by computation. Sensorimotor function is not computational but dynamic. The brain is (largely) a sensorimotor organ. And the body is a dynamical system, not just the hardware for hardware-independent software.

  6. “Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental (Churchland 1990). It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of "complexity").”

    Though I do completely agree with Harnad's claim here that this does not constitute a legitimate argument against what Searle is saying for T2, I do think this presents an interesting empirical hurdle to overcome in Harnad's claim that at least T3 or higher is what would be needed to pass the Turing Test. In previous readings, Harnad speculated that the number of connections to reproduce cognition would be something to the order of 10^7 connections. But the complexity of these connections could affect what would be possible in this T3 machine, as the speculation suggests. For example, a neural net consisting of only input and output units can compute only simple, linearly separable functions such as basic logic gates, whereas a model that includes at least one intermediary layer can learn more complex mappings via iterative changes to connection weights. So, in considering a possible program that would allow for T3, one has to contemplate what the complexity of a system has to be to even accomplish connections between peripheral stimuli and internal representations, let alone whether this connection is sufficient to overcome the shortcomings of computationalism, as pointed out by Searle.
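    For concreteness, here is a toy sketch of the capacity difference that the complexity point gestures at (the weights are hand-set and invented for illustration; this is not a model from the readings): a single threshold unit can compute linearly separable functions such as AND, but no single unit can compute XOR, whereas one intermediary layer is already enough.

    ```python
    # Toy threshold ("perceptron") units with hand-set weights, for illustration only.

    def unit(inputs, weights, bias):
        """One threshold unit: weighted sum of the inputs, then a step function."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    def and_gate(x1, x2):
        # A single unit suffices for a linearly separable function like AND.
        return unit([x1, x2], weights=[1, 1], bias=-1.5)

    def xor_net(x1, x2):
        # XOR is not linearly separable: no single unit computes it,
        # but one hidden layer (OR and NAND, then AND) does.
        h1 = unit([x1, x2], weights=[1, 1], bias=-0.5)    # OR
        h2 = unit([x1, x2], weights=[-1, -1], bias=1.5)   # NAND
        return unit([h1, h2], weights=[1, 1], bias=-1.5)  # AND of the two

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", and_gate(a, b), "XOR:", xor_net(a, b))
    ```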

    Replies
    1. Complexity is no doubt needed to pass T2, or T3 (or T4, for that matter), although the "complexity" of a dynamical system is not as straightforward as the complexity of a computational algorithm. But what decides whether your candidate is complex enough is whatever it takes to succeed in passing TT (whether T2, T3 or T4), not "degree of complexity" itself. And the TT is "weak equivalence," so maybe a less complex system could turn out to be able to pass too. -- And none of this has anything to do with the "mental" (feeling) other than whatever it takes to produce the TT capacity. It isn't that feeling is somehow a matter of degree of complexity.

  7. Searle's thought experiment was to test whether the machine in question "understands" Chinese. In this case, the machine only uses computational states. Although the machine can successfully fool the interrogator (using his instruction manual with the rules of formal symbol manipulation), he has not generated a "feeling" for Chinese.

    In "Minds, Machines and Searle 2: What's Right and Wrong about the Chinese Room Argument" by Professor Harnad, he states that a life-size, life-long pen-pal passing T2 is insufficient to prove that it is thinking. In other words, cognition is not only computation. Although it can correctly manipulate "squiggles," and seems like it has a verbal performance capacity indistinguishable from humans, it does not have symbol grounding. As I previously mentioned in a skywriting: we need a T3 that possesses robotic (sensorimotor) performance capacity and a grounded symbol system. Only T3 robots are able to ground the meanings of words, to interact with (recognize, manipulate and describe) the things in their environment.

  8. I would like to ask a question just to clarify my understanding of this paper:

    Stevan says cognitive science is all about reverse-engineering cognition and reverse-engineering means that we can only use two forms of empirical data: structure and function. If I WERE a computationalist, I would not care about structure because of tenet (2) (implementation-independence). That leaves me with just function, in which case the TT is the best we can do. And if I WERE a computationalist, then the ability to pass T2 is the decisive test for computationalism.
    So then if I WERE NOT a computationalist, the TT is still just testing function, but the TT alone is not enough. Whatever robot/program is being tested would also need to pass some other test or satisfy some other thesis that we haven't discussed yet in class?

    Replies
    1. Hi! I think this is where the different levels of the Turing test come in. I think T3 would be enough for a lot of people that aren't computationalists but aren't structuralists either since, even though it still only tests function, it tests function at a much larger scale. The machine would have to be able to interact with the world and people in the way actual people can, which is what it takes for us to assume other people are cognizers as well. So T3, despite being only a test of function, would hold the robot to the same standards that we hold other people to. The T4 I think is structural as well as functional since we require the brain to work in the same way as the human brain, so it's not just a test of function anymore. I think even if you were insistent that cognition is a direct result of the structure of the brain, seeing a robot with the same brain structure that does everything we can do (so a robot that passes T4) would be enough :)

    2. "Function" is a bit of a weasel-word. What cogsci is trying to do is reverse-engineer the causal mechanism underlying what organisms can do: i.e., whatever internal "function" will successfully generate our doing-capacities, whether it's anatomical, physiological, biochemical, synthetic, analog, or computational.

      That "structure" is not altogether distinguishable from function is already evident from Gibson's notion of "affordances," which are the features of external objects that are defined by what interactions they allow (or "afford") to organisms with bodies of a given "structure" (including the structure and function of their sensorimotor organs). A chair has features that afford "sitting upon" to vertebrates of a certain shape and size, but not to, say, jellyfish.

  9. "This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather nonconscious) state -- nothing to do with the mind.”

    It might be my 2 am brain fog but I've re-read this a few times, taken a break and re-read it again and I still don't quite get it. I think it's saying either a mental state has to be not fully computational or not computational at all for us to not be able to experience others' mental states? I'm confused.

    “There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either. It is only one very narrow approach that has been discredited: pure computationalism.”

    This is something I was definitely surprised by from Searle. There's a big leap from "oh, it's not all just computation" to "we HAVE TO make a brain". I like the idea because I'm in neuroscience and I want to put my degree to good use but I think we skipped a few steps. In a sense, it feels like Searle jumped from T2 (possibly just software) to T4 (it's hardware time!!) but maybe the ~mystery of cognition~ lies somewhere in between, say maybe T3?

    Replies
    1. Hi Lyla!

      I think I have an answer to your first point. My understanding of Searle's periscope (a periscope being a tube structure with prisms that allows one to see around corners or above water from below) is that if you say that computational states are mental states (with implementation independence), that would mean that if we were in the same computational state as something or someone else, we would be able to use this 'periscope' and intuit whether this other entity has mental states. In the sentence that you quoted, my interpretation is that the only way a computationalist could defend against the periscope would be to 1) say that computational states do not equal mental states or 2) by saying that we actually aren't talking about mental states but about unconscious/nonmental states.

    2. Lyla, Allie's reply is right. But you are right that that sentence of mine was not very kid-sibly (though reading it early in the day -- and week -- might avoid the brain fog!) Instead of:

      "This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather nonconscious) state -- nothing to do with the mind”

      I should have written:

      "This is Searle's Periscope for peeking at the other side of the other-minds barrier (because computation is hardware-independent) to see what (if anything) is going on in the mind of a computer that is passing T2 (in Chinese) through computation alone: Once Searle points out that he would not be understanding Chinese if he himself were doing exactly the same thing the computer was doing (manipulating meaningless symbols), then a computationalist (who thinks cognition is just computation) would either have to say that computation alone cannot cause a felt state (the feeling of understanding Chinese) -- so computationalism is wrong -- or that explaining feeling is not part of explaining cognition computationally -- so computationalism is incomplete as a way of reverse-engineering cognition."

  10. In reading both Searle's paper and Prof. Harnad's response, it seems that the CRA was built in an attempt to refute the TT as producing valid results on whether some machine can be equivalent to an organic human entity. The confusion about what is /wrong/ with Searle's CRA as a counter-argument to the TT seems to come in part from the fact that the TT (as Turing proposed it) is limited in scope to testing ONLY T2 level (i.e. excluding sensorimotor capabilities of T3, and Nervous Systems of T4 and T5) equivalence of structure/function between synthetic machines and human machines.

    Prof. Harnad writes, "The synonymy of the "conscious" and the "mental" is at the heart of the CRA....the force of his CRA depends completely on understanding's being a CONSCIOUS mental state..."

    Searle's attempt to refute the validity of the TT backfires on him, as his equation of "mental" with "conscious" (via the weasel word "intentionality") comes up against "The Other Minds Problem" (Harnad), which, with current technologies, cannot be confirmed one way or another. EXCEPT - through the practices of "mind-reading", which more or less boil down to the same evidence we can glean through the TT, thus Searle's argument is not particularly useful in pushing our capabilities for reverse-engineering human cognition.

    Regarding Searle's focus on the "intentionality" (a weasel word, as we all know) of machines in their computational processes, does this concept translate more precisely as "what it feels like to compute X process" ? If yes, would the concept of "understanding" then expand to include what he would be "doing" in the CRA (if not understanding Chinese language)? Isn't there some kind of "understanding" he (as his machine) exhibits by manipulating symbols, knowing what to do with them based on some /reason/ (rules), even if it is not the same qualitative "understanding" of the semantic meaning behind Chinese formal symbols as it is to those who do speak, read and write Chinese? He knows /what it feels like/ to manipulate those symbols, does he not?

  11. Or rather, he is "conscious" of what it feels like to be "doing" what he "does" in his CRA, even if that is not learning the Chinese language as a native speaker would.

    Replies
    1. "intentionality," "mental" and "conscious" are all just weasel-words for "feeling" or "felt" (and in this case, what it feels like to understand Chinese). Nothing "backfires" on Searle: He points out that he would not understand Chinese if he himself became the hardware executing the Chinese T2-passing program. The only two things he gets wrong are (1) not mentioning that he is using feeling to know that he is not understanding Chinese and (2) that the brain (T4) is the only possible alternative to computationalism.

  12. This is kind of a small point but I noticed that Harnad mentions the other-minds problem is 'solved' between humans and some animal species with Heyes' "mind-reading"? I'm sure Professor Harnad will elaborate on this later in the course but I am wondering what tools we use to solve the other-minds problem for animals. I am not disputing that animals have minds but our solving of this problem as it relates to humans is usually based on communication with language (for example verbally expressing empathy or internal thought processes) which is something that animals don't possess (or at the very least a language humans can't interpret). Is it just inference based on behaviour (which I guess is exactly what we are doing with fellow humans except it's mostly "verbal behaviour")?

    Replies
    1. Not a small point

      Yes, language is a powerful "mind-reading" and "mind-writing" capacity, unique to our species (discussed in weeks 8 & 9). But language is not the only way -- or even the primary way -- that we mind-read ("turing-test") other people every day. Babies can't speak, yet we are brilliant at mind-reading what they feel, want, need, think. In fact, it's exactly that same mind-reading capacity -- shared with at least all other mammals, birds, and social vertebrates, including social fish, and even some invertebrates -- that was so important evolutionarily for our altricial species (including all species -- not just birds -- whose young need parental nurturance to survive), and that extends our daily turing-testing not only to our nonhuman family members such as dogs, cats and birds, but to virtually all sentient animals, if we give it a chance.

      That said, it's also true that whereas living with family members from other species naturally engages our mind-reading capacity and empathy, instead keeping our victims out of sight and out of mind seems to allow us to keep eating their flesh and wearing their skin without any sense of remorse or shame. (More on this in the last week of the course.)

  13. Several times in this paper, Harnad refers to implementation-independence as the "soft underbelly" of computationalism. He says further that "it is precisely on the strength of implementation-independence that computationalism will stand or fall."

    I just wanted to clarify this point for my understanding. Implementation-independence (II) is the soft underbelly of computationalism because high II implies transitivity and is vulnerable to Searle's periscope? The transitive nature of implementation-independence means that anything with the correct 'program', regardless of the nature of the physical implementation, should be able to access the same states (whether they are mental or computational is neither here nor there). This would mean that anything that passes T2 should have shared states by virtue of this transitivity and thus in the CRA, the fact that Searle could not be said to understand Chinese would mean that any comparable machine would also lack this 'intentionality'? Implementation-independence is a double-edged sword because it was necessary to refute those who emphasized the importance of specific hardware but is also vulnerable to the periscope.

  14. You've just about got it, but the hardware-independence of computation is not a matter of degree. All computation is independent of implementation, by definition. If a property is purely computational, then any hardware executing the computation will have that property (e.g., with any desk calculator or computer or human-manipulated abacus that outputs a symbol that can be interpreted as the sum of two inputs -- or any T2-passer, whether computer or Searle, that executes the Chinese T2-passing algorithm on its input, which we interpret (wrongly) as understanding Chinese).
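    A toy illustration of that hardware-independence (the two "implementations" below are invented for the example): the same computation, addition, carried out by two procedurally quite different mechanisms, with the interpretation of the output as a "sum" supplied entirely by us, the interpreters.

    ```python
    # Two very different ways of implementing the same computation (addition).
    # Neither mechanism "means" anything by its output; the interpretation is ours.

    def add_builtin(a, b):
        # Implementation 1: whatever the machine's arithmetic circuitry does.
        return a + b

    def add_tallies(a, b):
        # Implementation 2: pure shape manipulation on strings of tally marks.
        tally = "|" * a + "|" * b   # concatenate two runs of meaningless marks
        return len(tally)           # we *interpret* the resulting count as a sum

    assert add_builtin(3, 4) == add_tallies(3, 4) == 7
    ```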

  15. In response to the Systems Reply, Searle argues that he can memorize the instructions for putting together Chinese symbols and internalize the entire computational system, but he will still not understand Chinese. Therefore, it is not that he doesn't understand because he is only one part of a system that understands; even when he is the whole system, he still does not understand.

    In his paper, Harnad notes that "Searle's right that an executing program cannot be all there is to being an understanding system, but wrong that an executing program cannot be part of an understanding system." I find this quote very interesting because it opens the door to the idea that while Searle can be the entire computational system which doesn't understand, he cannot be the entire system itself––where the system is partly computational and partly analog/dynamic––and the system as a whole understands. Even though Searle can be a computational device, he cannot be a device composed of both computational AND non-computational elements, yet it is precisely this kind of device which would do the understanding. Therefore, while Searle refutes Computationalism, showing that cognition is not only computation, he leaves open the possibility that cognition could be part computation and part other, and while the computational system alone does not understand, the entire system does. I wonder, though, how exactly the non-computational, dynamic element of a T3 robot could convey that understanding; in other words, how do the dynamic elements (sensors and effectors) of a robot connect the formal symbols that are undergoing computations in the computer/brain of the robot with their referents in the world?
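    One way to picture the kind of hybrid system being asked about here is sketched below. It is only a cartoon (the brightness threshold and labels are invented, and real grounding would require categories learned through sensorimotor interaction with their referents, not a hard-coded rule): the dynamic side transduces sensor input into a category label, and only then does the symbolic side manipulate that label.

    ```python
    # Cartoon of a hybrid (dynamic + symbolic) system, for illustration only.

    def detect_category(brightness: float) -> str:
        """Stand-in for the analog/sensorimotor side: transduce a hypothetical
        light-intensity reading into a category name."""
        return "LIGHT" if brightness > 0.5 else "DARK"

    # Purely symbolic side: rules defined over the category names themselves.
    RULES = {"LIGHT": "close_blinds", "DARK": "turn_on_lamp"}

    def act(brightness: float) -> str:
        symbol = detect_category(brightness)   # world -> category symbol
        return RULES[symbol]                   # symbol -> symbol manipulation

    print(act(0.8))  # -> close_blinds
    print(act(0.2))  # -> turn_on_lamp
    ```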

    Replies
    1. What makes you so certain that the sensors and effectors are all there is to the non-computational side of a whole system? Furthermore, maybe I am not understanding your question correctly, but it is conceivable for a sensor to pick up light, for example, and then translate the intensity of that light into binary (i.e. formal symbols). Would this be an answer for how "dynamic elements" connect formal symbols to real world referents? Regardless of what you actually meant in your response, I do appreciate the conversation about the non-computational side of the system, as I do not think it is made explicitly clear what that might be.

  16. I'm not sure I completely understand Searle's Periscope. My current understanding is basically that since computation is implementation-independent, if mental states can be proven to be computational states, then we can experience (or "see", if we're keeping with the periscope metaphor) other beings' minds.

    What I don't understand is that how this is the "soft underbelly" of computationalism?

    _________

    You say that, "Unconscious states in nonconscious entities (like toasters) are no kind of MENTAL state at all." and unconscious states are, "definitely not what we mean by 'understanding a language,' which surely means CONSCIOUS understanding."

    I agree that this seems intuitive, but similarly to what I mentioned in my reply to 3a, Searle and strong AI proponents could be disagreeing about, and misunderstanding, what the other means by understanding. I think strong AI proponents would argue that a nonconscious entity that can store information and act on that information in behaviorally similar ways to a conscious entity could be in a type of understanding state, even if it's a non-mental one.

    So strong AI believers' "understanding" could mean having stored information and the ability to take actions based on the contents of the information. While Searle's "understanding" could mean the mental state we feel when we consciously recall and "know" information.

    I'm not sure if this is a relevant distinction, because I could see Cognitive Scientists arguing that it's not in their domain to think about non-mental states (which I think would exist in the "understanding" definition I've projected onto strong AI believers). I do, however, think it's valuable to be concrete in what we mean is understanding, and I feel like I don't exactly understand what we mean when we refer to it.

    Replies
    1. I also had difficulty conceptualizing how "understanding" was being used, although Professor Harnad's paper addressed a lot of the questions that were bothering me. At first I thought it was referring only to connecting meaning to symbols, and had a similar impulse to say, robots do that too! Professor Harnad argues in his paper that it only makes sense to talk about unconscious mental states for something that has mental states (i.e. not a toaster), and that it is obvious that language understanding should refer to subjective understanding. I agree, but I also think the toaster example is unfair. Computers, unlike toasters, may be able to link words with their meaning, which is an important part of what "understanding" implies. Highlighting this alternative form of "machine understanding" therefore really isolates the subjective experience of understanding as the missing element. This can perhaps be extrapolated as subjectivity in general being the missing piece, with computation being able to carry out all the functional processes. I think this is interesting because it means we eliminate the T2 passing machine essentially on the basis of a lack of subjectivity. However, the question of lack of subjectivity, which I understand can't be judged by the TT, seems to remain at all levels of Turing Testing - even if it becomes more difficult to prove. I don't think I fully grasp the Professor's argument surrounding symbol grounding yet though, so this element might clarify my questions.

  17. One of the main takeaways I got from Harnad’s response to Searle’s Chinese Room Argument was that Searle went too far in saying that his argument completely disproved computationalism. As Harnad says in his paper, Searle did not really show that cognition was not computation at all, but rather that cognition cannot be all computation. From what I understand, this calls into question our intuition of the T3 test. If a purely computational robot passed T3, we would probably not be inclined to kick it, as we would assume that the entity is thinking. However, since Searle proved that cognition cannot be all computation, then it must mean that the T3 robot is not purely computational. A T3 passing robot must be a hybrid of computational and non-computational mechanisms. “The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system; Searle's Periscope would fail.” We can’t look into the minds of other people to gain information about their “felt state”. However, we can know the “what-it’s-like” of performing computation, and therefore cognition cannot be only computation.

  18. In seminar, we discussed the opposition that exists between solving the easy problem – how and why we do what we do – and solving the hard problem – why it feels like something to cognize/to do things. As Professor Harnad explained, if we solve the easy problem causally, there will be little causality left with which we can explain the hard problem.
    Searle’s Chinese Room brushes up against the hard problem, by relaying the ‘feeling of understanding’ something, which while difficult to describe is surely something that is accessible to conscious thought – when I hear Hungarian I know that I do not understand. While this argument was introduced by Searle almost accidentally, it surely points to an element of cognition which the Turing test/Computationalism does not address or acknowledge – perhaps on the basis of its residual operationalism. In my mind, this point of Searle’s in some way undermines some of his conclusions – particularly the suggestion that instead of turning to computationalism at all, cognitive scientists should instead confine themselves to insights derived by reverse engineering the physical material of the brain. What could such reverse engineering tell us about why we feel a sense of understanding when we manipulate a language we know?

  19. I’m hung up on Searle’s Periscope — it’s still not fully clear to me. Harnad mentions that we can’t directly experience another’s mental states unless we have a way of actually becoming that other entity. He then mentions that the soft underbelly of computationalism is the exception. “If there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it’s got the mental states imputed to it”. What is meant by “if there are indeed mental states that occur purely in virtue of being in the right computational state”? Can this be understood as mental states = computational states? Is the Chinese Room Argument an example of Searle’s Periscope? (The man in the room puts himself into a computational state to determine whether there is a mental state of understanding attached to it).

    I also just want to clarify the distinction between mental state and computational state. Is mental state computational, but accompanied by the feeling of understanding/the knowledge that one is in a mental state? Whereas a computational state would just be.. Computation with no self-awareness of it?

    Replies
    1. My understanding of Searle's Periscope is that the CRA provides a kind of exception to the other minds problem that hinges on computationalism: If cognition is computation ("If there are indeed mental states that occur purely in virtue of being in the right computational state"), then we'll be able to experience what other beings experience, since computation is implementation-independent. In the case of the CRA, Searle is doing the implementation-independent computation (taking Chinese questions, following English rules and writing Chinese answers) and it doesn't matter if it's Searle or a created T2 we're using to produce the answers. However, Searle in the CRA as the system also has a mind, so he can tell us first hand that he did not understand any of the Chinese characters he read or wrote. If computationalism is true, then Searle's periscope should work and we could know what others are thinking because we would just have to implement the same program.

  20. From "Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?"

    The gist of the text:

    What's wrong with Searle: the claim that cognition is not computation at all.

    What's right with Searle: the claim that cognition cannot be just computation -- i.e., that computationalism is wrong.

    A quote:

    “The synonymy of the “conscious” and the “mental” is at the heart of the CRA …”

    My question:

    To my understanding, Searle argues that mental states are not computational states. As Harnad explains, Searle went too far by stating this.

    Which of these claims are accurate, according to Harnad?

    a) Mental states are not just computational states (I am pretty sure this one is accurate)
    b) Conscious mental states are not computational (is this what Searle meant?)
    c) Conscious mental states are not just computational (is this what Harnad believes?)

    Replies
    1. A further question would be:
      Are "conscious states" all "felt states" ?

      And when we talk about mental state: do we distinguish between the conscious ones and the unconscious ones?

  21. I am trying to understand what Professor Harnad means by “the soft underbelly of computationalism.” I think this might be part of the argument, but I am struggling to understand it:

    Clearly, we cannot look into the minds of other people. We cannot know how they experience reality or how (or even if) they feel. This is the other minds problem. However, we can look at the software of computers. We also know that computation is implementation-independent. Searle's periscope only lets us peer into the minds of beings whose states are implementation-independent. Since we cannot look into the minds of living beings, we must conclude that they are not implementation-independent. Therefore, cognition is not computation.

    Are we sure that Searle’s periscope would not work on a robot that cognizes? If we programmed the machine, I don’t see why we wouldn’t be able to know if it is conscious or not. What is the connection between using Searle’s periscope and implementation independence? Why can we say that Searle’s periscope works on implementation independent machines at all?

  22. “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it” (Harnad, 2001).

    This passage is what Professor Harnad equates to Searle's Periscope, the method with which Searle describes the "soft underbelly of computationalism". Moreover, if we adhere to computationalism and the hardware-independence of computational states, would this imply that our mental state or "software" could be run by a computer? This being the case, could we potentially transfer our mental state to another physical being other than ourselves? I reflect on identical twins. Monozygotic twins share the same genes, sex and in many cases are brought up in shared environments. Yet their mental states and consciousness are not the same. I believe that this may exemplify the duality between the "software" and the "hardware" - the essence of Computationalism's second tenet. The same physical systems can run completely different computational programs, as in this twin example, and conversely, "radically different physical systems can all be implementing one and the same computational system".

