Blog Archive

Monday, September 2, 2019

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.





see also:

Click here --> SEARLE VIDEO
Note: Use Safari or Firefox to view; 
does not work on Chrome

57 comments:

  1. Where does the topic of inference play a role in this discussion?

    I understand that inference is not like computation, as computation is simply about inputs and outputs, whereas inference is about using reasoning and evidence in order to come to a conclusion. It is also safe to say that inference would fall under the T3 category in the Turing Test. Yet, when Searle described the Chinese Room, I couldn't help but think that maybe inference might have something to do with Searle's conundrum about whether something could think solely because it has the right programming (11).

    For humans, the topic of inference is very present when discussing ideas, learning new subjects and especially in language acquisition. We did not learn the overwhelming majority of our words by opening up a dictionary as young children to find out what a word like "difficult" meant; most likely we heard it in a sentence and either inferred its meaning from the context in which it was said, or we asked the person what it meant and, through that personalized definition, stored the word in our vocabulary without a formal learning method. And through these inferences, we have a general understanding of what was said.

    Obviously with a machine, we can compute the rules for language into it and it can "read" sentences and, as Searle mentioned, come to a conclusion about a story where details were missing. Now my question is, how could we know that a robot that came to the conclusion about a story without knowing all the details didn't do it through a form of inference? Even if that conclusion may have been computed in a manner where "if person X does Y because of Z, then he will leave", is it still not a form of inference? While it would be hard to say that the robot is using reasoning to come to this conclusion, it still has to properly compute the conclusion from the missing data. In the end it may be just computation without inference, but if a robot has more and more inferential conclusions set up in its programming, would it not be considered more intelligent and/or successfully understanding topics? I think if a robot was given the same tools as the man in the Chinese room and was able to infer the English rules of Chinese given to it, it could successfully teach itself Chinese via computation, and it would be very hard to tell whether or not it's just computation or it actually understands.
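    A minimal toy sketch of that idea, with made-up rules and story wording (in the spirit of the Schank-style script programs Searle discusses, not Schank's actual system): the program below "fills in" an unstated detail of a story purely by matching symbol patterns against canned script rules, with nothing in it connected to hamburgers or eating.

```python
# Toy sketch (hypothetical rules, not Schank's actual program): a script-based
# "story answerer" that supplies an unstated detail purely by matching symbol
# patterns against canned restaurant-script rules.

RULES = [
    # (words that must appear in the story, question, canned conclusion)
    (("burnt", "stormed"), "did he eat it?", "no, he did not eat the hamburger"),
    (("paid", "tip"),      "did he eat it?", "yes, he ate the hamburger"),
]

def answer(story_tokens, question):
    for triggers, q, conclusion in RULES:
        if q == question and all(t in story_tokens for t in triggers):
            return conclusion
    return "no answer"

story = "the man ordered a hamburger it arrived burnt and he stormed out".split()
print(answer(story, "did he eat it?"))   # -> no, he did not eat the hamburger
```

    Whether this sort of canned rule-matching counts as "inference" is exactly the question raised above.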

    Replies
    1. There's deductive inference (which is computation):

      If (1) p implies q and (2) p
      are true then we can infer:
      (3) q
      is true. And that's certainly computation.
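      A minimal sketch of that point (hypothetical code, just for illustration): applying the rule above is pure shape-based symbol manipulation; the program never consults what "p" or "q" mean.

```python
# Minimal sketch: the rule "from 'X implies Y' and 'X', derive 'Y'" applied as
# shape-based symbol manipulation. Only string forms are matched, never meanings.

def modus_ponens(premises):
    derived = set()
    for premise in premises:
        if " implies " in premise:
            antecedent, consequent = premise.split(" implies ", 1)
            if antecedent in premises:   # the antecedent also appears as a premise
                derived.add(consequent)  # so the consequent can be written down
    return derived

print(modus_ponens({"p implies q", "p"}))           # {'q'}
print(modus_ponens({"zorg implies blik", "zorg"}))  # {'blik'} -- meaningless symbols work just as well
```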

      But if you mean what's going on in our heads when we infer something from something else (and it's not just following a symbol manipulation rule, as above), then we're waiting for cogsci to reverse-engineer how we do it.

      The TT is reverse-engineering a mechanism that can do it all, not just logical inference (or chess).

      Now what about Searle's Argument?

      __________________________________

      Everyone: please keep each comment to less than 200 words

  2. “The robot would, for example have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states." (From the Robot Reply section of the text)

    From my understanding, this robot proposed in the robot reply section of the text is a T3 robot, one that would be indistinguishable from humans in terms of “doing”. Searle’s argument against the reply is that perhaps the inputs may enter the room through a camera, and exit the room as some motor function, but crucially what is going on in the middle is still just formal symbol manipulation. Even though the robot may appear from the outside to be able to do everything a human can, it still doesn’t understand. What is confusing to me is that from last week I was under the impression that a T3 robot does understand (because it is indistinguishable in performance), but here it does not. Does this just mean that the combination of sensorimotor capacity plus computation isn’t enough for understanding? Or, does this mean that our T3 robot may have a performance capacity that from the outside (e.g. how it replies to emails, how it walks, how it eats, etc.) is indistinguishable from a human (thus able to pass T2) but I would be wrong in saying that this robot understands?

    Replies
    1. I think an important distinction between understanding and intention in this case is the difference between understanding and identifying. A robot or AI could accurately identify a stimulus (for example a cat) and react in an appropriately human way (petting it). But the robot could not be assumed to “understand” what a cat is. The robot could just be checking off all the right physical attributes of what a cat is and outputting a behaviour, but this doesn’t mean that it has any opinions, emotions or beliefs about cats. Because of this, Searle is saying that behaviourism isn’t enough to assume understanding. Just because a robot acts like us doesn’t mean it “thinks” like us. So in this case, a robot that acts exactly like a human would pass T3 because the Turing tests are purely behavioural tests that assume understanding through behaviour (and therefore argue for computationalism on these grounds). Searle argues that we can’t make this assumption and is rejecting behaviourism as a whole.

    2. Stephanie, to understand why Searle's "robot reply" is not about T3, please read this week's other reading, 3b.

      Because computation is implementation-independent, Searle can "become" a T2-passing computer by memorizing and executing the T2-passing computer program, and reporting (truly) that he is not understanding Chinese. The computationalists' "system reply" to that is wrong: Searle becomes the whole system by memorizing and executing all the computations. It is not true that although Searle does not understand, "the system" does. Searle is "the system."

      But with a T3-passing robot, Searle cannot become the whole system, because sensorimotor function is not implementation-independent computation: it is dynamics (physical, and physiological). So too for any other dynamical functions inside the robot that are needed to pass T3. So in that case, a "system reply" -- that Searle does not understand but "the system" does -- would be right (but it would not help computationalism, because the T3 system is not just computation).

      Because of the other-minds problem it is impossible to know whether anyone (or anything) understands, except by being the thing itself. (That's part of the Cogito.) So there is no way to show that it doesn't understand either.

      But there is one exception to the other-minds problem: implementation-independent computation ("Searle's Periscope"). Because computation is implementation-independent, anything that executes the right computer program will have any of the properties that are supposedly purely computational ones. With a computer-simulated furnace, you can prove it really doesn't produce heat by measuring (real) temperature. So heat is not computational, it's dynamic.
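      A made-up toy model of that furnace point (not anything from Searle's text): the "simulated furnace" below only computes numbers that we interpret as temperatures; running it warms nothing a real thermometer could detect.

```python
# Hypothetical toy model of the simulated-furnace point: the program computes
# numbers that *we* interpret as degrees Celsius, but running it produces no
# heat that a real thermometer in the room could measure.

def simulated_furnace(room_temp_c, power_kw, minutes, heat_capacity_kj_per_c=200.0):
    temps = [room_temp_c]
    for _ in range(minutes):
        # energy added per minute (kJ) divided by heat capacity (kJ per degree C)
        room_temp_c += (power_kw * 60.0) / heat_capacity_kj_per_c
        temps.append(room_temp_c)
    return temps

print(simulated_furnace(20.0, 2.0, 3))   # roughly [20.0, 20.6, 21.2, 21.8] -- just symbols, no warmth
```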

      But to prove that T3 does or does not understand you would have to be T3 -- and Searle can't be T3; he can only be its computational part (whatever that is), because the noncomputational part is not implementation-independent.

      In other words, there is a way to prove that T2 (if just computational) cannot understand; but, as with everything else except computation, there is no way to prove that T3 cannot understand. Turing just points out that that just puts T3 in the same boat as other people: People (and T3 robots) are indistinguishable from one another in what they can do, and that's the best we can do in trying to reverse-engineer the mechanism that gives them their cognitive capacity.

      Aylish, I think what you mean by "identifying" something and "understanding" something is the difference between doing and feeling. Yes, there's no way to know whether a robot (or any other organism, human or nonhuman) feels. The TT (whether T2, T3 or T4) is only about what it does, and can do. But you don't know any more about what's going on inside a robot that gives it the capacity to do (and feel, if it feels) what it can do and feel (if it feels) than you know about what gives those capacities to people, or other organisms.

      That's why we're all waiting for cogsci to tell us, by reverse-engineering the capacity to do; and T2, T3, or T4 will be the eventual result. That's the easy problem. Feeling is the hard problem.

      But reverse-engineering, although it's "just" about doing, not feeling, is certainly a lot more than behaviorism, which never tried to explain the causal mechanisms of behavioral capacity at all!

    3. Thank you so much for taking the time to explain this, I definitely have a clearer understanding on T3 robots in relation to computation and the system reply.

  3. In his response to (IV) The Combination Reply, Searle says "But the attributions of intentionality that we make to the robot in this example have nothing to do with formal programs. They are simply based on the assumption that if the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior and it must have an inner mechanism capable of producing such mental states."

    In this case, we ascribe intentionality based on the way the robot behaves (although "intentionality" is a weasel-word). But is it not a formal program that underlies the robot's functioning that dictates its behavior? And hence are we not back to the Other Minds problem, where if the robot is functioning exactly like humans over a large range of behavior, how can we arbitrarily decide it does not understand as we do? Is the "inner mechanism" not a type of program as well?

    Replies
    1. You are right that we are no better off with a T3 robot than with other people, because of the other-minds problem.

      So the only way to know for sure whether any thing feels -- [yes, "intentionality" is a weasel-word] -- is to be that thing.

      But with T2, and computation, you have Searle's Periscope, which lets you know that computation alone is not enough to produce understanding: because it feels like something to understand Chinese, and Searle does not understand, even though he is the system doing all the computations.

    2. Hi Ishika,

      In response to "is it not a formal program that underlies the robot's functioning that dictates its behavior?"

      There is a difference worth noting here between whether we ascribe intentionality to something versus whether it actually possesses intentionality. Yes, there is a formal program that would underlie a magnificent robot's behavior; furthermore, yes, we could even presume that that robot has mental states after interacting with it. However, that does not necessarily mean that it does have those mental states. If Searle himself were pulling levers inside that robot, strictly off formal rules, we could be fooled into believing the robot had intentionality, the same way a magician may appear to be showing you an empty box - when in fact it is only a mirror with something hiding behind it.

      "Are we not back to the Other Minds problem?"

      The reason we don't go into the other-minds problem is because in Searle's Chinese Room, we have the privilege of going into that other mind. Once there, we can safely read their mind and know that they have no intentionality, the same way Searle knows no more Chinese than he did before by memorizing those scripts.

    3. JG, a robot cannot be just a computer, computing. See other Replies.

      "Intentionality" is a weasel-word. It feels like something to understand text. It also feels like something to mean something by what you text. Those feelings are felt by a feeler. This is not about wether someone else can think the feeler is feeling.

      Searle pulling levers would not be T3 but a trick.

  4. Overall I think I understand Searle's primary argument to be that a machine that exclusively acts "without intention," manipulating formal symbols, may be able to pass the TT by such actions, but this alone cannot constitute cognition that would explain human cognition, because human machines have "intention." In rejecting the assumption of dualism between the mind and brain, which he believes to be inherent to the theory of Strong AI, Searle backs up his argument that in human cognition we are not simply "instantiating a computer program": we /are/ the entire system, whereas in human-made machines, all that they "do" is instantiate programs allowing them to manipulate symbols. But because the synthetic machine uses purely formal symbol manipulation, it cannot produce intentionality outside the limited cause-effect instructions which constitute the programs. His conclusion: "The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program since for any program you like it is possible for something to instantiate that program and still not have any mental states." Does this logically mean that instantiating a program of instructions cannot be a part or a result of producing intentionality?

    Replies
    1. (TYPO) *But because the synthetic machine USES purely formal symbol manipulation,... (meaning devoid of explicit meaning interpretable by that machine)

    2. "Intentionality" is a weasel word. Think instead of what it feels like to understand Chinese (or anything else). Searle shows that computation alone cannot produce that. So computationalism ("Strong AI") that computation alone is all you need to reverse-engineer cognition (although computer simulation -- "Weak AI" may still be a useful tool in trying to figure it out and "test-pilot" hypotheses).

  5. “The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese”

    “If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive.”

    I’m having trouble grasping the leap of logic that Searle takes in his main conclusions and first refutation. I think the thought experiment of the Chinese Room adequately demonstrates that there are certain aspects of cognition, namely intentionality, that cannot be accounted for if we consider that cognition is just computation. But this isn’t to say that cognition cannot be computation whatsoever, as already mentioned in class; rather, imitation of verbal behaviour (T2) is simply insufficient to recreate all of cognition artificially.

    Apart from overreaching the conclusions from one instance of computation not working, I think that Searle meddles somewhat with the main tenets of computationalism. From what I understand, computationalism is the belief that computation constitutes cognition, which I paraphrase as: there is some instance of computation x that can simulate cognition. I believe that Searle misconstrues this as computation being equivalent to cognition, meaning that all instances of computation can simulate cognition. While I do agree that implementation-independence is an important tenet of computationalism, it still has to allow for a certain type of cognitive algorithm in the first place. So, his example claiming that believing in computationalism logically entails condoning the belief that an object such as a “light switch,” a mechanism of limited algorithmic capacity, can be cognitive does not follow logically from this requirement. His examples of other bodily organs (e.g. stomach, liver, etc.) follow a similar faulty logic because the programs that they can achieve, if you consider them computers, are most likely dissimilar from those of cognitive origin. Digital computers do not face such algorithmic restrictions, so if cognition is strictly computational, then the computer would be able to cognize but the other examples clearly would not.

    Replies
    1. A model capable of producing lifelong T2 (or T3) capacity is not "imitation."

      "Intentionality" is just a weasel-word for feeling (anything).

      Remember the difference between the real thing and a computational simulation of it: If a simulated plane cannot fly, why suppose that a simulated T3 can feel? It can't even do. You can build a real T3 (from the computational blueprint) that really can do: But would it feel? That's back to the other-minds problem.

      Whether it would or it wouldn't feel, a real T3 certainly can't be just computational, any more than a real plane can be. But we're pretty sure planes can only do, not feel. With T3, we don't know; but we don't know with real people either. Turing just puts them on the same footing -- and tells us that's the best cogsci can hope to do...

  6. ‘Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. But once we see that it is both conceptually and empirically possible for a system to have human capacities in some realm without having any intentionality at all, we should be able to overcome this impulse. My desk adding machine has calculating capacities, but no intentionality, and in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed’

    In light of the above discussion by Searle, I am wondering: to what extent do database schemas and schemas in psychology relate?

    From my understanding, database schemas manifest how the data is mainly organised, and the relationships and associations between it. Similarly, our schemas are taking in the knowledge (input) on different entities and organising that knowledge by making meaningful inferences and associations between them, while encompassing intentionality.

    Replies
    1. Here are some questions you need to ask yourself:

      1. What is a "database schema"?

      2. What does a database schema have to do with trying to reverse-engineer the causal mechanism that can generate cognition?

      3. What do you mean by "intentionality"?

  7. responding to: "No reason whatever has been offered to suppose
    that such principles are necessary or even contributory, since no reason has been given to suppose that when I
    understand English I am operating with any formal program at all."

    Searle uses two extreme ends of the linguistic spectrum in his thought experiment - i.e. one language which he is fluent/native in and one language he knows nothing of. But how does this apply to the adult language learning process or a language which you have some but not entire familiarity with? In my experience these are the grey areas where you are partially relying on conscious computation (remembering and implementing things such as tense, verb conjugation, etc) and also partially relying on more immediate and subconscious mental phenomena. This I would argue is potential grounds to deny Searle's previous claim that a computational approach has no place whatsoever in the process of cognition.

    Replies
    1. Degree of mastery of a language (whether a first or a second language) is not relevant to Searle's Chinese Room Argument. He understands whatever is said in English and he understands nothing in Chinese. Nor is the Chinese Room Argument about learning Chinese.

    2. As Prof. Harnad said, learning a language is not a part of Searle’s Chinese Room Argument. To manipulate Chinese symbols that you have no existing understanding of and effectively hold an email conversation with a native Chinese speaker shows that you do not need to understand the language to pass a T2 test. As Prof. Harnad points out in his accompanying response paper, Searle’s argument does not prove that cognition is not computation, but that it is not all computation. However, for the purposes of Searle’s experiment, a language that you are completely unaware of is just a good example of a program that can be taken in by a human (i.e. symbol manipulation of squiggles and squoggles). Even in Harnad’s response to Searle he does not use partial existing knowledge of a language to find an issue with the Chinese Room Argument. Instead, the greatest way to challenge Searle is through the Systems Reply. Overall, Searle argues that computation does not lead to “intentionality” or understanding, so if you only have partial knowledge of a language, can you even consider that a program that offers you understanding? When you use effortful computation to structure sentences, do you understand the language to the same degree you do with the elements that come to you immediately?

  8. I like the structure of Searle's paper as he clearly outlines his points and then spends time refuting each of the common arguments against his assertion. I find myself compelled by the articulation of some of his points, even though I agree with some other commenters that he sometimes seems to take the philosophical implications too far. However, despite this, I feel that the Chinese room thought experiment does prove the core of his argument: that computation based on formal symbol manipulation alone is not sufficient for intentionality (understanding). That does not mean that digital machines cannot reproduce intelligent responses or pass T2, but that the threshold of understanding is much higher and that human cognition is composed of something else in addition to formal computation. Is it the case that T3 or T4 is this threshold?

    Searle says if we can: "produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours" then we can say that a man-made machine can think. I do have to wonder what Searle would say if in the future we were able to make human-like machines with these organic components that were able to show some kind of intentionality.

    Going back to a previous reading, Turing mentions that one of the common reasons people rejected intelligent machines was based upon the instinct that humans have 'something' that other things do not and that on some level we fear the notion that the complexity of our experiences can be reduced to computation. After reading Searle's paper, I wonder about those who support Strong AI: could it be the case that our urge to understand and know the mind is leading us to be prematurely satisfied with an explanation that isn't the full truth? Does it just not 'sit right' with some people that we can have a machine with intelligent outputs but deny that it can think and understand?

    Sorry for the ramble and I hope you have a good rest of your Sunday!

    Replies
    1. Hi Allie,

      I think the Chinese Room Argument kind of blankets over all levels of the Turing Test: formal symbol manipulation is insufficient for knowing whether the machine "feels" anything when it does the task. Even if we come up with T5, T6 etc., this would still be the "other-minds" problem: how can we be sure that it can think and understand?

      I understand where you're coming from when you said it doesn't "sit right", and Professor Harnad has brought this up many times with "Would you kick Ting?" I also don't think that we're scared of being reduced to computation; I think the bigger picture here is whether the computation we create is sufficient to explain the unexplainable.

    2. Hi Allie,

      I think Searle tried to answer the question you raised in your second paragraph, namely whether human-like machines with organic components (T4/T5) are able to show some kind of intentionality. I believe this is the “brain simulator” reply, where we imagine creating a program that “simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them.” Searle replies that if we indeed knew how to wire a brain like ours, we wouldn’t need strong AI in the first place. I think this is key. I think if we could create a perfect T5 robot that is identical to us down to our very chemistry, of course we would say it could feel, since we can too (assuming we disregard the other-minds problem). But that does not help us with the question of whether a computer program could feel. It won't help the field of artificial intelligence progress until we have a thorough understanding of the human brain - and that will take ages.

      Anyway, to argue this response, Searle reimagines his CRA where the man in the room has a system rigged up that simulates the human brain which gives him his usual (TT passing) outputs. The man still doesn’t understand what is written in Chinese, and neither would a machine using this system.

    3. Allie, Searle shows that even if computation alone could pass T2, it would not be understanding. We already know that computation alone could not pass T3 or T4.

      Threshold for understanding? What does that mean? Is this like Deirdre's point about degree of understanding?

      The machine with the artificial nervous system would be T4. But Searle's Periscope (on the other-minds problem) does not work for T4, nor for T3. Just for T2, and only if T2 is passed by computation alone (so Searle can "become the system" because of the implementation-independence of computation). Dynamical systems are not implementation-independent.

      Whatever can pass TT (any level) had better "sit right" when the reverse-engineer tells us that that system he built (and he knows how it works) is what can pass the test. What may not "sit well" is the conclusion that the system understands (or feels anything at all). Turing says: can you do any better? How?

      Wendy, nothing can solve the other-minds problem (with certainty). TTesting is the closest we can get (and we do it with one another, and with other species, all the time -- but without the reverse-engineering...).

  9. After having read this article and going back many times to clarify confusion, I still can't wrap my head around how Searle contradicts himself (multiple times). I still don't know what he means by "Strong AI", which is an important concept in this article and this field.

    At the beginning, he states that "In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations." He then provides a counterexample to this using the Chinese Room Argument, stating that a machine could appear to be "conscious" but indeed is just formally manipulating symbols. In "the many mansions reply", he then says that "Only something that has the same causal powers as brains can have intentionality." Not only does he assume the importance of hardware in this sentence (didn't he try so hard to prove that the CRA is implementation-independent?), but he also dismisses his previous claim in "the other minds reply". How can he suddenly be so sure that anything with the causal powers of the brain can be conscious and feel whatever it's doing?

    Replies
    1. Strong AI is basically computationalism: cognition is computation. Searle disagrees with this and says that that can't be the case because when he showed us the example of the Chinese Room, it was all just computation, but he didn't really understand anything. The thing he endorses is that yes, anything that can do exactly what the brain does can be conscious, but he's not so sure that that's just exclusively computation; he thinks there's more to cognition than that because computation alone doesn't get us that feeling of understanding. I was also surprised by his switch to caring about the brain, though, because earlier in the paper he had the water pipes example to sort of dismiss the idea that copying synaptic firing will get us to understanding cognition. I feel like nowadays it's a pretty common belief to think that whatever is going on in the brain is what results in cognition, but back then it was seemingly an edgy statement: "many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains". The best thing I can come up with to explain his sudden shift is that he thinks the brain is important but that the brain is not ONLY doing computation (as would be the case in the water pipes example), so whatever else is going on in the brain (that isn't computation) must be the answer (or at least part of it) to why it feels like something to understand English in a way that it does not feel like something to manipulate Chinese symbols. I hope this helps but I could be wrong so hopefully @prof will chime in to correct me.

    2. Hey Wendy - I was also a little confused by his shift but for me, it was clarified a little bit on pg. 11/19 when he did his little question/answer bit. I'm basically echoing what Lyla replied but yeah, I think Searle thinks that the right program alone is not enough to be considered "understanding". This is contrary to Strong AI, which holds that the mind and all its understanding will one day be captured in one single perfect program.
      It also helped me when Searle brought in "dualism" (that the mind and body/brain are separate but parallel) and said that Strong AI only makes sense if you adopt dualism.
      I hope that helped! and if it just led to more confusion, then please just forget it haha

    3. Yes, Searle's Chinese Room Argument shows that cognition cannot be only computation: The (Chinese) TT-passing program would not be understanding Chinese because computation is implementation-independent, and when Searle does exactly the same computations, he does not understand Chinese. So neither does the computer.

      The "causal powers" of the brain are the cognitive capacities we have been talking about, the capacity to do all the things we can do (ignoring the "vegetative" ones, like temperature regulation, breathing, appetite, etc.) -- the capacities that T2 (and T3) are testing whether we have succeeded in reverse-engineering.

  10. “as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality.”

    I don’t know if I agree with this. Overall, I believe Searle in saying that cognition is more than simply computation; it feels to me like something to understand things in a way that I imagine symbol manipulating systems do not. But if I see someone in front of me behaving in a way that convinces me they are human (behavior is the best I can evaluate since I have no way of ever figuring out whether they “feel”) only to find out that they are actually running on some sort of formal program, I don’t think I’d change my mind about believing that they feel. So right now, Ting has only given me reasons to feel like she is human and understands just like me. Would I kick her if you told me she was actually a result of a program? No not really. I’d be impressed by whoever managed to write that program though. The fact that he mentioned that the physical substance would be irrelevant leads me to think maybe we’re talking about a T2 level type thing, but I don’t think I’d be fully convinced by Ting if she were software only (no sensorimotor perception) so I feel like in this case, for her behavior to be a result of a formal program (assuming that were possible) it would have some reliance on hardware as well. But yeah long story short, if Ting had convinced me that she was human (through her behavior) and then you told me actually all of this was a result of software, I still wouldn’t kick her.

    Replies
    1. Ting is a T3 robot, so she can't be just a computer, computing. At the least, she is a hybrid system, partly dynamic, but doing some computation too.

      The equivalent of "kicking Ting" for T2 would be whether, if you know her only as a pen-pal, and you found out (or she told you) that she was a computer, and you could pull her plug and scramble all her data, would you do it?

      (I think most people, if they had been pen-pals with Ting for years, would only want to keep texting to her about it: "But why didn't you tell me?" -- and even after they had read Searle! But I think (i.e., "Stevan Says") that's just because in reality computation alone cannot pass T2! So supposing it can is just sci-fi.)

  11. Searle’s arguments remind me a little bit of “functional architecture” vs. “virtual machine”. In class, we had mentioned that the problem with cognition being a “virtual machine” is that it depends on the user’s experience, not with what is actually going on. This is like Searle’s refutation to the Systems Reply. It doesn’t matter if the system (little man + programs + paper + water pipes + whatever else) can produce the correct input/output: the outside observer’s conclusion that the system is understanding is pointless because it still doesn’t explain how the little man himself is acting.

    However, even after reading the paper a couple times, I cannot pinpoint what Searle actually means about the “causal powers of brains”. Is this related to his opinion that dualism isn’t the answer to understanding cognition? As in, something about the very chemical/electrical nature of the brain is key to understanding the mind, and the mind cannot be separated from the physical brain?

    Replies
    1. The brain's causal power is our causal power: our capacity to do all the things we can do. We want a causal explanation of that capacity. It turns out that computation cannot be the whole causal explanation. But it's not clear you need all of T4 or T5 either!

      Dualism is irrelevant. Philosophers love to bring it in, even if just to point a finger. (Fodor does it next week too!) But it's not dualism. It's just that explaining the causal "power" producing our capacity to do all the stuff we can do is much "easier" than explaining the causal "power" producing our capacity to feel!

      "Dualist" is just name-calling, because that no more solves the hard problem than "Monist" or "Materialist" or "Functionalist" or "Epiphenomenalist" etc. etc. does. The hard problem is still there, unsolved, even though we all know our brain power is producing both, our doings and our feelings, somehow. How? The question is much harder for feeling than for doing.

      (In Week 10 we'll find out why it's so much harder: Do you have any hunches?)

  12. In "Minds, Brains and Programs" by John Searle, we learned that man-made machines think, or can instantiate the right program, without having to understand.

    He points out that "no purely formal model will ever be sufficient by itself. The formal properties do not have causal powers, except for the power to produce the next stage of formalism when the machine is running." If the same formal model is implemented in a different machine, it can carry out the same algorithm without understanding. Also, he brings up two important definitions. He states that intentionality is "a feature of certain mental states by which they are directed at or about objects and states of affairs in the world." This includes beliefs, desires and intentions. Whereas, understanding means, "the possession of mental states and the truth of these states." So, only when the machine possesses intentionality can it understand.

    In the previous article, we learned that T4 subsumes T3 and its candidate must have an identical internal structure (at the neuronal level) and function to that of a human... Sorry, I don't think I understand this portion very well. So, is Searle implying we can skip this TT because the formal properties (like neurobiological causal properties such as the sequence of neuron firings at the synapses) cannot constitute intentionality? This is a stretch — should we only consider a machine to be able to understand if it passes T5?

    Replies
    1. "Intentionality" is a weasel word. It's just the feeling you get when you understand something, or mean something, or refer to something, or believe something or want something: when you have something "in mind." But we already know that doing and feeling are not the same thing.

      Searle thinks that since computationalism is wrong, the only way to solve the easy problem is through T4 (or T5). He missed T3 (and misunderstood it in his "robot reply") and he is also wrong that his Chinese Room showed that cognition cannot be computation at all (so there is no option but to study the brain). But he really only showed that cognition cannot be all computation. So there's still plenty of room for hybrid dynamic/computational robots (T3).

  13. Searle defends that computationalism cannot be the correct hypothesis explaining what cognition is. He uses the word intentionality which as far as I know means something like "aboutness" or some obscure relationship between mental states, their contents and the external world. The impossibility of this so-called intentionality in purely computational systems is outlined by his thought experiment and I am fairly convinced by his argument. Yet, if implementation-independence does not work for cognition, then does this not pose a serious problem for the reverse-engineering of cognition? If cognition, like heat or wetness or what have you, is a property of physical processes, who is to say that there is any other way of engineering what cognition does without making something that is a copy of us (i.e. a T5 passing organism)?

    Replies
    1. Even if part or all of whatever it takes to produce our cognitive capacity turns out to be not just computation, there's still lots of room between computational T2 and reverse bio-engineering all of T5! There's everything analog (dynamic), and hybrid analog/computational, synthetic, and hybrid synthetic/biotic in between. And there's always computation (Weak AI, Strong C/TT) to test-pilot them all before trying to build them. Cloning T5 tells us nothing about how it works, but reverse-engineering, testing and building does.

  14. I understand where Searle is coming from with his refutation of the robot reply. It is possible to have a robot that mimics human behaviour without actually understanding what it's doing. But the same can be true for a robot that mimics the structural aspects of the brain along with the way we act. I feel like if he's going to challenge the robot reply and the brain simulator reply with this logic, he's being hypocritical in accepting the combination reply. I don't agree with what he's saying: I think if it's a robot that experiences the world and reacts as a person would, it's as reasonable to assume the robot cognizes as it is to assume that other people cognize. Furthermore, once we have a robot that's cognizing, it's already no longer purely computational or implementation-independent anyway.

    Replies
    1. Searle's Chinese Room Argument does not work against either a robot, whether T3 or T4. It only works against a purely computational T2: Why?

  15. Am I right in understanding that Searle argues that nothing short of T5 would produce genuine cognition because nothing short of T5 (i.e. human biochemical composition) allows us to symbol ground? Can we say with certainty that non-human minds can't symbol ground?

    Replies
    1. Searle says nothing about symbol grounding. (What is symbol grounding?) Searle just says that if T2 could be passed by computation alone, it would not be understanding -- and that the only alternative is to figure out how the brain works. How? And is that the only alternative?

  16. With his Chinese Room Argument, Searle shows that cognition cannot only be computation, because though he is performing computations, he does not understand Chinese. Therefore, Computationalism is false.

    We have discussed in class the notion that while Searle shows that cognition is not only computation, he does not take into account the possibility of a T3 robot--an analog/digital hybrid that performs computation but also has dynamic, analog elements and interacts in the world. Prof Harnad has suggested that--while Searle, in his response to the Systems Reply, shows that even if he were an entire computational system, he still would not understand--Searle could not be an entire T3 robot, and a T3 robot could potentially understand and give us insight into how cognition works (giving us insight into the easy problem--how we do what we do).

    It has also been pointed out in previous replies to this thread that one of the reasons Searle does not understand when he is in the Chinese Room is that he does not have the feeling of understanding Chinese--and this feeling of understanding is key to the cognitive state of understanding. If it is the case that the feeling of cognition is key for cognition, then it seems we would need to have some grasp of feeling to understand cognition. In other words, we would need to have a grasp on the hard problem to truly understand the easy problem.

    I am having trouble reconciling these two ideas: if we had a T3 robot, even if it was an analog/digital hybrid, I don't see how this would get us any closer to understanding the "feeling" element of cognition. This does not mean we have to "be" the robot (other minds problem); it just means we need to have some grasp on how and why cognition is accompanied by a feeling. If we need to understand the feeling element to understand cognition, then how can a T3 robot help us?

    Replies
    1. The peekaboo appearance of feeling (the hard problem) in Searle's argument against computationalism ("Strong AI"; "cognition = computation") is not meant as a step toward the solution of the hard problem -- just as evidence against computation's being the solution to the easy problem. We know already (from Descartes' Cogito and from introspection) that cognition (sensation, voluntary movement, recognition, thinking, understanding, meaning, wanting, willing) is felt, not just "done": We are not merely highly skilled biological zombies, otherwise there would be only the easy problem (and no Cogito).

      Searle shows that if computation alone could pass T2, that would not be cognition, because it would be unfelt. So computationalism is wrong. Cogsci requires at least a T3 robot. But that does not mean that successfully reverse-engineering a T3 robot (or even a T4 biorobot) would solve the hard problem! It would just be immune to Searle's argument.

      Hence, unlike a computational T2, a T3 would be a candidate solution to the easy problem. The hard problem would still be unsolved. And the other-minds problem would still be there -- but Turing's point that T3 (or T4) would be the best that reverse-engineering could hope to do, and that it was close enough -- would hold.

  17. "The whole point of the original example was to argue that such symbol manipulation by itself couldn't be
    sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after
    "squiggle squiggle" without understanding anything in Chinese."

    I think the refutation of the Systems Reply is one thing Searle got right in his paper. The Systems reply argues that while Searle himself is not understanding Chinese, there is some understanding being created in the whole system, even if it is a subsystem of Searle's mind. The English-speaking part of Searle may not have understanding, but there exists some separate Chinese-speaking part of him that does.

    I agree with Searle that this reply is not an adequate refutation to his main argument. What Searle is trying to prove is that computation and symbol manipulation alone are not sufficient to imbue understanding. While computation may be enough to give an appearance of understanding, I think Searle is right in saying we cannot automatically assume the program itself is understanding. It would seem to me that understanding lies in the person who created the algorithm and not the machine that implements it. Regardless of whether this is true, and regardless of whether we should or should not assume an entity is understanding based on its performance, computation alone is not enough. I think Searle is trying to point research in other directions - not abandon computation altogether, but be cautious in blindly accepting computationalism.

    Replies
    1. The Systems Reply is not just that "there is some understanding being created in the whole system, even if it is a subsystem of Searle's mind": it is also that Searle is just a subsystem of "the System" consisting of Searle, the walls and the room!

      Of course the designers of a successful T3 (or even T2, if it's possible) would understand what they have done, but that doesn't mean they understand everything the successful T3 (or T2) will go on to do, or become. The creator of a learning algorithm understands how the learning algorithm works but not where it will get after a lot of actual I/O experience and learning. The reverse-engineering of doing-capacity is the explanation of a capability (a "causal power" if you want to put it into Searle's terms), not an end-state, nor even of all the performance on the way. Just the mechanism that can get you there. (And of course to explain how and why understanding is felt would be to solve the hard problem, not just the easy problem of doing-capacity.)

      Searle does not abandon computation as a tool ("Weak AI") for theory-testing (in cogsci or any other field, such as physics or engineering), but he does think his argument proves that cognition itself is not computation at all, not even in part, and that hence the only place to turn is to try to figure out how the brain does it directly, by peeking and poking at the brain itself (T4 or T5). (That is the subject of Week 4.) He skips over hybrid analog/computational T3 altogether (and elsewhere also pooh-poohs neural net models, whether analog or computationally simulated, with a hand-waving "Chinese Gym Argument," which, if that had been all there was to the Chinese Room Argument, would have failed too!)

  18. I think what's causing a lot of the discrepancies in Searle's paper is a misunderstanding, about what it means to "understand", between Searle and the strong AI advocates he mentions. Searle says, "If we had to know how the brain worked to do AI, we wouldn't bother with AI." This quote highlights Searle's ultimate goal of figuring out the questions of Cognitive Science, "how and why do we cognize?", and I think this goal is not aligned with the goals of the strong AI advocates he's addressing. It's my impression that many strong AI advocates are people with Computer Science backgrounds, and I think from the Computer Science perspective, creating a model that can completely, behaviorally mimic a human brain would be enough evidence of equivalence.

    Replies
    1. The brain -- like an atom, an apple, an airplane, or an asteroid -- can be simulated computationally: that's just the Strong C/T thesis (= "Weak AI"). But "Strong AI" holds that the brain, when it is cognizing (e.g., understanding language) is really just a computer, doing computation: cognition = computation. I don't think Searle misunderstood that.

      Computationalists made the mistake of over-interpreting what computers are doing when they manipulate symbols, just because the symbols are interpretable by us.

  19. I agree with Searle very much that computation, defined as symbol manipulation in formal programming, cannot instantiate understanding or the "mental" by the program. I would say, though, that insofar as the causal properties of the brain/body could be simulated (and these would be mathematical dynamical systems), and the simulation behaves as if it were a human, the success of the AI in displaying properties of cognition and behaviour akin to ours would be a good measure of our understanding of cognition (in as far as that is the goal of cognitive science), since we would have known the causal factors/architecture well enough to simulate them. (As an aside, I'm not sure about this, but I feel the evidence of its behavior would have to be judged based upon its interaction within a simulated environment/world within which it is embedded -- i.e., we would have to translate our English into a form where it is presented as either speech sounds or words via some visual medium within the AI's universe, in a way it can interact with -- even if the classic TT, which uses only written conversation, were applied.)

    However, again, the fact that we would/may know substantially enough about causal properties of the brain, body, neurons, and chemical processes, physics, etc., enough to simulate it, does not mean that our simulation would have any intentionality i.e. understanding or "consciousness" or "feeling". I agree with Searle that Strong AI is the result of dualistic thinking, and I really dislike dualistic types of thinking in those who pride themselves on "logic" -- I dislike when the abstract or what could be is not tied to what simply IS and what is really POSSIBLE given the conditions of our material/phenomenal existence. So thank you to Searle. I'd be curious to know what the general consensus on Searle's main point is today among various groups. I feel the obvious failure of AI chatbots today to pass T2 is strongly in support of his thesis (clearly cognition is difficult without intentionality). And how alarming is it that the majority of the best AI chatbots today are customer service chatbots. (cont'd in comment)

    Replies
    1. **With the main contentions out of the way, and returning to my comment in P1 on the possible necessity of a simulated universe: if my intuition is true, I would say that we could not simulate the universe (or AI) with formal systems, due to what little I know of Godel's theorem and the fact that some branches of math and world phenomena cannot be explained by formal logic, or the kind of propositional logic of thought that programming is based upon (I believe the three laws of thought).

      So I expect we may come to an impasse in our simulation capacity due to the limits of formal systems (not the limit/sheer quantity of how much information there is to model). There could be a gradient of possibility in how this impasse translates in regard to the level of similitude that the AI in its artificial environment and the artificial environment itself has to humans and to the real world. The aim of cognitive science (understand cognition, not create a conscious entity) through computational AI could be doomed as aspects of the universe outside of formal logic potentially must be accounted for before a significantly sufficient model of an entity displaying behaviour to which human cognition may be attributed can appear; or, it could be that we could, despite the limitations of formal systems, simulate the phenomenon of cognition to such a degree that we can say that we more or less understand it. I guess this is just another perspective/avenue of support for the CRA.

      Maybe the inclusion of randomness in the system may help. Turing mentions randomness in 'Computing Machinery and Intelligence' a few times, but right at the end he devotes a full paragraph. On the relevant point, he attributes values to a random, non-rule-based function, although, as I understand it, he's unsure how it would fit in with the systematic approach. I think his inclusion of randomness in the discussion is a link toward connecting the AI to causal properties of the world (and escaping the Chinese room).

      A Google search showed "A True random number generator (TRNG) is a device that generates random numbers from a physical process, rather than by means of an algorithm." Interestingly, there's evidence that human minds can influence random number generators (also a reminder -- nothing is truly "random"). Now going into stranger realms: Perhaps there is an opportunity here not only for a way out of the Chinese room into the causal material world, but also for direct connection with the "mental" of human minds. We know that babies need the influence of other fully formed minds in order to grow full human cognitive capabilities. Perhaps similarly some sort of direct mind-to-not-yet-mind needs to occur?
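      For what it's worth, the PRNG/TRNG distinction quoted above can be made concrete: an algorithmic generator is implementation-independent symbol manipulation (the same seed gives the same sequence on any machine), whereas a hardware entropy source draws on physical dynamics outside the program's formal rules. A small illustration using standard Python modules:

```python
import os
import random

# Algorithmic (pseudo)randomness is pure computation, and implementation-
# independent: the same seed yields the same sequence on any machine.
rng_a = random.Random(42)
rng_b = random.Random(42)
print([rng_a.randint(0, 9) for _ in range(5)] ==
      [rng_b.randint(0, 9) for _ in range(5)])   # True

# Hardware-derived randomness: os.urandom returns bytes from the operating
# system's entropy pool (typically seeded by physical device noise) -- a
# dynamical process, not a formal rule inside the program.
print(os.urandom(4).hex())                       # unpredictable, differs run to run
```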

      Note: I see how T3 could be the causal linkage, but I think I would refute it based on Searle's reasoning in the paper. Could think about this more another time.

    2. According to the Strong Church/Turing Thesis, just about anything can be simulated computationally. But simulated water is not wet.

      Searle shows that a computer passing T2 would not understand, but Searle's invocation of dualism is silly and irrelevant.

      "Intentionality" is a weasel-word for feeling. It feels like something to understand. Searle shows he would not understand Chinese if he himself executed the T2-passing program. But the only way he can confirm that he does not understand is by noting that he would not feel like he understood Chinese under those conditions. (A peekaboo appeal to consciousness.)

      About the consensus today: I think the vast majority still believe in some form of computationalism, but the doubts are growing too...

      Please read the prior replies about simulation and the Strong Church/Turing Thesis. Goedel's theorem is completely irrelevant to both the Strong (and Weak) C/T Thesis and to computationalism.

      See Replies about randomness too.

      Please keep further replies shorter. And remember that the purpose of skywriting is to show as well as test your understanding of the material in the course.

  20. Although I may be accused here of “introspection”, after reading this text I really wanted to spend some time cognising on what it meant and felt to understand. If we are to assume that machines cannot understand the way humans understand and use this as a limit to computation, I want to be able to phenomenologically define what it means to understand as a human and what it doesn’t in order to discriminate. So what can I declare happens phenomenologically that is more than mere symbol manipulations when I say that I understand something? Taking the time to do this gave me a hunch I think at what it means for a symbol or form to be “grounded” - for a label to be pointed to its referent. There is some sensuous relation whether it be visual or visceral between a label and its referent, and I understand this as being the threshold between passive symbol manipulation and active understanding. So could I resume “understanding” to the acceleration of a heartbeat after reading poetry or laughing after cracking a pun?
    More seriously, suppose you built a machine with the computing power to pass T2, and that it had a sensory apparatus identical to a human being’s. And suppose it were wired such that it could associate an object with what that object “felt” like through its five senses, so that when we said “the cat is on the mat”, it could recognise a cat on a mat… Would our machine be symbolically grounded then? Could it learn to read poetry and feel shivers all over its body? Would this be a T3-type machine?
    Would a proper sensory-motor apparatus suffice to say that a computer cognizes?

    ReplyDelete
  21. Searle’s argument that the man in the Chinese Room does not understand what he is doing, as well as his emphasis on native-speaker comprehension, led me to wonder whether second-language comprehension at the level of translation can be considered ‘understanding’ in the sense of symbols (words) having grounded meaning, i.e., connecting to the external objects they represent. For instance, from Searle’s perspective, is there a significant difference, at the level of cognition, between a ‘mind’ (artificial or organic) that, when asked to give someone a “pomme”, presents an apple by knowing that “pomme” is the French word for apple and relying on a native speaker’s connection between the word apple and the physical object, and a native French (or fully bilingual) speaker who hears “pomme” and directly associates it with the object of an apple, without passing through the stage of translation? In Searle’s context, based on his overstatement that computation is no part of cognition, would only the direct link between word and actual fruit count as cognition? If so, how would he describe the intermediate step?

    ReplyDelete
  22. The other minds problem has come up multiple times in this thread and I thought about it a lot while reading the assigned article. “Whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything”. I keep wondering how exactly we could explain the experience of “understanding” and go about testing the hard problem. The Chinese Room Argument leads to the conclusion that cognition is not just computation. This conclusion is reached when one imagines oneself in the room computing symbols, but not understanding Chinese. Would a machine’s experience of understanding have to feel the same as a human’s experience? Can we tackle this problem by imagining ourselves in these theoretical situations or would “understanding” for a machine be completely different and inaccessible via these thought experiments?

    ReplyDelete
  23. Professor Harnad describes the set-up of Searle’s experiment as relying upon three “ifs”: if the Turing Test is decisive, if cognition really is computation, and if computation really is implementation-independent. It is not clear to me why, once the experiment shows that a machine without subjectively experienced understanding could pass T2, it is the assumption that cognition is computation that gets eliminated. It seems worthwhile to investigate the other two propositions as well.

    Searle shows that a device could pass T2 without intentionality, or without “what it feels like to understand Chinese”. Is it not disqualifying of the Turing Test that it can be passed by something that does not possess “all of” cognition? I do not see how T3 solves this problem. The fact that a robot can learn through sensorimotor experience rather than by information supplied in some other manner doesn’t seem like it would (necessarily) create subjectivity.

    The second proposition focuses on implementation independence. Searle identifies “intentionality” as what is missing in his experiment, but functionally he grants that the “machine” could pass T2. Could it be possible that cognition functions computationally, and can therefore be simulated by any machine, but that subjective experience is a specifically biological by-product? I am not saying that this is necessarily the case, but it is certainly a view held by some that consciousness just arises from the firing of neural networks. Is it possible to separate the easy/hard problems in this way?

    ReplyDelete
  24. From “Searle, John. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457”

    Searle’s point:

    The whole point of Searle’s Chinese Room Experiment is to argue that rule-based symbol manipulation, or computation, is not sufficient for understanding Chinese. Searle shows that passing the TT is not sufficient for saying that a machine understands anything. No computation can account for feeling or understanding. Symbol manipulation can give you the same output from the same input, so it is sufficient for passing the TT. But there is no reason to believe that blindly manipulating symbols feels like anything, or that what is outputted from those manipulations will have a certain feel.
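    (To illustrate what blind, rule-based symbol manipulation amounts to, here is a toy Python sketch of my own -- not Searle’s program, and vastly simpler than anything that could actually pass T2 -- in which a lookup-table “rulebook” of made-up symbol strings pairs inputs with outputs while attaching no meaning to any of them:)

    # Hypothetical toy "rulebook": placeholder symbol strings, not real Chinese.
    RULEBOOK = {
        "squiggle squoggle": "squoggle squiggle",
        "squoggle squoggle": "squiggle squiggle squiggle",
    }

    def chinese_room(input_symbols: str) -> str:
        # Return whatever the rulebook pairs with the input, without any
        # interpretation: same input, same output, every time.
        return RULEBOOK.get(input_symbols, "squiggle")

    print(chinese_room("squiggle squoggle"))
    print(chinese_room("squiggle squoggle"))  # identical output, no understanding involved

    Whether the outputs come from a table like this or from an arbitrarily more sophisticated program, the point of the argument is that nothing in the input-output mapping itself amounts to understanding.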

    A quote by Searle:

    “the set of systems that go to make up a person, could have the right combination of input, output, and program and still not understand anything in the relevant literal sense in which I understand English.”

    A question:

    My main problem with Searle is that I don’t understand what understanding in the “relevant” sense is. I struggle a lot with this because I myself can hardly describe what I feel when I understand. To be completely honest, I am not even sure if it feels like something to understand. I see things in this way:
    What happens when I understand is that I produce related ideas. I think of related things, and this makes me more familiar with what the speaker said. Those related things could be other words, an image, or a metaphor. They look like they could be computational outputs.

    Why can’t symbol manipulations that output related stuff in my awareness or consciousness account for understanding according to Searle?

    ReplyDelete
    Replies
    1. Just reread that and here are two corrections^: "Searle shows that passing T2 is not sufficient for saying that a machine understands" and "Symbol manipulation may be sufficient for passing T2 but not for passing T3 or T4"
      Plus, my question is a bit nonsensical; perhaps I am wondering whether it could be that "feeling simply doesn't exist or that feeling is just doing"

      Delete
  25. Searle argues against Strong AI (computationalism): that computation cannot be cognition. He attributes this to the feeling/understanding he would lack if he were the computer in the Chinese Room Argument.
    As the computer who has a mind and can say he has a mind, Searle still completed a computation as a cognizer. Speaking to his claim that cognition can't be computation at all, I was curious to know his response to the following: as the computer with a mind, can’t he also tell us that it felt like something to compute (scanning the “questions,” executing the rules and writing the “answers”)? I understand he stands in as the computer for T2 purposes - I'm just curious whether this is a relevant question or an oversight.

    ReplyDelete
  26. “As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding.”

    This section reminded me of the notion of behavioral equivalence. Regardless of whether one understands Chinese, if the inputs and outputs are indistinguishable from a native Chinese speaker's, is one's understanding, or lack thereof, of the language essential or relevant? Similarly, if a native Chinese speaker and someone using the program to spit out proper Chinese output reach the same conclusions, does the process they used to produce those results matter, as long as they reach the same conclusions? Following the definitions of the word “understanding” outlined later, if “understanding” exists on a spectrum, it seems that understanding the English translation of the Chinese input would be sufficient to qualify as a lower level/degree of “understanding” of the Chinese language.

    How Theory of Mind is integrated into this reading is also intriguing to me. Is Searle arguing that Theory of Mind is what distinguishes machines from humans? Infants under the age of 4, many individuals diagnosed with autism, and (it is claimed) sociopaths all struggle with Theory of Mind, in the sense that they are often unable to empathize with others and to understand mental states unlike their own. Therefore, if it is plausible to program computers/machines to have the capacity for "mindfulness", what are the implications for these subsets of people? Searle states, "A human will be able to follow the formal principles without understanding anything," so perhaps these people further reinforce the notion that the formal principles and programs put into computers are not sufficient for understanding.

    ReplyDelete
