Blog Archive

Monday, September 2, 2019

10c. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue.


The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

38 comments:

  1. It is worth noting that there are two kinds of understanding: the *doing* kind and the *feeling* kind.
     
    The *doing* kind of understanding refers to what we do when we ground symbols. It refers to the act of associating our sensorimotor data with the words in our lexicon. The *feeling* kind of understanding has nothing to do with grounding; rather, it is what it is *like* to have grounded words.
     
    When Searle sits in his Chinese room, he lacks both kinds of understanding: his symbols are neither grounded, nor does he feel like they are. As far as the Turing Test goes, only the lack of grounding is needed to demonstrate why computation alone wouldn't do all that we do. Computation alone wouldn’t ground its symbols in sensorimotor data like we *do*.
     
    The same distinction needs to be made for feeling: even though we say "I do feel" or "I have the capacity to feel”, we in fact do not do a feeling the same way we do an action. We feel a feeling instead; feelings are not actions. That's why understanding of the *feeling* kind isn’t relevant to the Turing Test.
     
    But if Cognitive Science stemmed from the Turing Test, why should it seek to explain things that we feel if feeling is not an action? Why did cognitive scientists branch off? To the latter, I propose that cognitive scientists made the category mistake of thinking that “feeling” was a kind of “doing”, so explaining feeling became relevant to explaining how we do all that we do. To the former, the hard question had no place in the Turing Test, so the hard question shouldn’t be taken up by *Turing* cognitive scientists in the first place. Those who do take up the question are cognitive scientists of another kind.

    ReplyDelete
    Replies
    1. I can't quite follow you:

      Yes, doing and feeling are different.

      Yes, you can't observe or measure (others') feeling, just its correlates.

      But I'm not sure what you mean by two kinds of understanding -- the "doing" kind and the "feeling" kind.

      Although understanding the meaning of words is not just the grounding of symbols, I wouldn't say that understanding words has nothing to do with sensorimotor grounding.

      What is true is that symbol grounding is only about doing: being able to do all the T3 things we can do with the things that words refer to.

      Understanding and using words requires both T3 grounding and feeling. But the hard problem is to explain the causal role of the feeling.

      Turing's method applies only to doing, and the causal explanation of organisms' capacities to do what they can do. So Turing recognized that there were limits to his method.

      But explaining how and why organisms feel is surely within the remit of cognitive science! So it is cognitive science (not Turing) that has the hard problem.

      Delete
  2. I felt like I’ve heard this entire reading in lectures, which is pretty cool. Prof Harnad, you definitely have a distinct way of saying things that is very you.

    “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition.”

    At first, I was surprised that this reading is so late in the course as opposed to being near the beginning, when we were still figuring out what the easy and hard problems were. At this point it feels almost trivial to talk about feeling when we talk about the Turing Test, because we’ve been over it so many times: reverse-engineering cognition is about getting something to do everything we can do; feeling has nothing to do with it. Even though we know there’s more to it than that (there’s the fact that it feels like something), we don’t have a way to test that – nor do we really even know where to start – so we’ve chosen to take it out of the equation. I chose that quote because the “best we could ever expect to do” part made me wonder: best we can ever expect to do? Or just best we can do for now? Because surely, just as generations before us could not imagine the scientific discoveries we’d figure out, it is plausible to think that the hard problem will one day get solved. Maybe I’m just being optimistic, but I feel like we have to figure it out eventually.

    ReplyDelete
    Replies
    1. Fair point! "best we can do for now..."

      The mind/body problem (aka the hard problem) has a long history, though, and has been thought about by a huge variety of thinkers. So it's a part of this course that is worth carrying along with you for the rest of your life (and your grandchildren).

      Delete
    2. @Lyla I agree that the "best we can do for now" is a good way of putting it. I would imagine that before computation even the easy problem seemed insurmountable. Even though there is still a long way to go in solving the easy problem, a solution does not seem outside the realm of possibility. I too wonder if there will be some sort of breakthrough in the distant future that will not immediately solve the hard problem, but allow thinkers to not see it as insoluble.

      Delete
    3. Lyla, this is a question that bothers me as well, and I tend to go back and forth in my opinion of it. It feels naïve to proclaim that a question will never be answered, especially given that this has been done so often throughout history for problems that science later solved (as you point out). However, there do seem to be unique barriers to solving the hard problem. I think it goes beyond the fact that it has been studied for a long time by many different people, as Professor Harnad observed. If that were the only reason to be skeptical, it would be easier to have faith in future discoveries. The fact that subjective experience seems completely inaccessible from a third-person perspective is what makes this problem inherently different from all others. Despite this, I can’t help but be an optimist like you and Stephanie. I think that we will at the least get a better understanding of what ingredients are needed for consciousness, even if we might always be limited in saying with certainty where it does and does not exist.

      Delete
  3. We have discussed that given the Turing Test is a test of indistinguishability for a lifetime, and given that semantics and syntax in language are not independent, it is unlikely that a T2 computer could really pass the T2 test because it has no way to ground symbols. Just based on the output from the Chinese room it appeared as though Searle’s behaviour matched the behaviour of someone who understood Chinese. This won’t be sustainable for a lifetime Turing Test though because Searle doesn’t understand. The way that I know I understand something is not because I give the right answer. It is because I know what it feels like to understand. It feels like something to cognize, and it is because of that feeling that we could reject cognition as only being computation. What the Chinese Room Argument and Turing Test do not show us though is how and why we feel. Our T3 robot may feel or may not feel but we really have no way of knowing because Searle’s periscope does not extend to T3 robots.

    In our reverse-engineered heart, if we removed a valve and then tested how the heart worked, it would be very clear what the function of that valve is – what its causal role is in the heart. Maybe once you remove it you notice that suddenly no pressure is maintained in a ventricle and not very much blood can be pumped. The function of that valve is clear, and we see what changes without it. All three papers in week 10 and the YouTube videos make it clear that feeling resists this approach: if we cannot somehow remove feeling and ascribe a function to it, how would we go about determining the causal role of that feeling?

    ReplyDelete
    Replies
    1. If you've grasped computation, categorization, symbol grounding, learning, evolution, language, propositionality and UG as well as you've grasped the above, I have no doubt that you will fulfill my fervent wish that everyone in this course gets an A!

      Delete
  4. I really enjoyed reading this paper, and the other papers for Week 10, because it helped solidify the concepts that we've been talking about for the whole semester. Professor, you weren't lying when you said that the first week would cover the whole class!

    Because of the other-minds problem, we can never be certain if anything that passes T3 is actually feeling. Descartes shows us that I can only ever be confident about the existence of my own feelings, since I am the feeler (and it isn't relevant to argue about the validity of the feeling, just that it is being felt!)
    In 10b (Harnad on Dennett on Chalmers on Consciousness), you write that maybe "feelings must piggy-back, somehow, on T3-power". Professor is that you making a very strong educated guess? Or maybe you meant it as a reply, kind of like: "if you think T3 can feel, stop bickering about what it feels about. Tell me how and why it is feeling at all"?

    ReplyDelete
    Replies
    1. a little overview...

      That a T3 would feel is just "Stevan Says."
      Ditto for “T2 could only be passed by a T3”
      Ditto for "the brain produces feeling"
      and for “only human beings have language”

      But some important things that are not just "Stevan Says" are the following (this is a quick overview of some of the main points of the course):

      the Cogito/Sentio (that besides formally provable theorems the only thing that is certain is that I feel) is true

      computation is the interpretable, implementation-independent manipulation of arbitrarily shaped symbols on the basis of rules (algorithms) operating on the symbols’ shapes (syntax), not their interpretations (semantics)
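      To make “shapes, not interpretations” concrete, here is a toy sketch (my own illustration, not from the reading): a rule table that rewrites squiggles into squoggles purely by their shapes, while the semantics (say, binary negation) exists only in the user's interpretation.

```python
# Toy illustration (not from the reading): rules operate on symbol shapes only.
# A user may interpret '@' as 0 and '#' as 1, but the mechanism never does.
RULES = {"@": "#", "#": "@"}  # syntax: shape in, shape out

def manipulate(symbols):
    """Apply the shape-based rewrite rules, symbol by symbol."""
    return "".join(RULES[s] for s in symbols)

print(manipulate("@##@"))  # -> "#@@#" (interpretable as negating 0110)
```

      The same manipulations would be carried out by any hardware implementing these rules, which is the implementation-independence part of the definition.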

      the weak C/T Thesis is that computation is what mathematicians do

      the strong C/T Thesis is that just about anything can be simulated [modelled] computationally

      computation is a powerful tool for testing causal hypotheses through computer modelling (“weak AI” = strong C/T)

      the easy problem of cognitive science is to reverse-engineer (i.e., discover and test causal mechanisms that can produce) the capacity to do all the things organisms can do

      cognition is the capacity to do all the things organisms can do — and the capacity to feel

      cognition is not just computation

      sensorimotor function is not computation: it can be computer-simulated but that is not sensorimotor function

      information is the reduction of uncertainty about which of a finite number of choices is the right one
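      A quick worked example of that definition (mine, not the course's): narrowing 8 equally likely alternatives down to 1 yields log2(8) = 3 bits; narrowing them down to 4 yields 1 bit, one yes/no question's worth.

```python
import math

def information_gained(choices_before, choices_after):
    """Uncertainty reduction, in bits, over equally likely alternatives."""
    return math.log2(choices_before) - math.log2(choices_after)

print(information_gained(8, 1))  # 3.0 bits: 8 alternatives narrowed to 1
print(information_gained(8, 4))  # 1.0 bit: one yes/no question's worth
```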

      categorization is doing the right thing with the right kind of thing (for survival, reproduction, success)

      evolution is lazy: for economy and for flexibility, rather than pre-coding all traits, it offloads as much as possible onto the environment; in cognition, this means the capacity for learning and communication — and in humans, language

      categorization is based on detecting the features that distinguish the members from the non-members of a category

      some feature-detectors are innate; most are not (evolution is lazy)

      three ways to learn to detect new categories are (1) unsupervised learning (passive exposure), (2) supervised/reinforcement learning (trial/error/feedback from the consequences of doing the right or wrong thing), and (3) instruction (through language)
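      A minimal sketch of (2), supervised learning through trial, error and corrective feedback (my own invented toy example; the feature names are hypothetical): the learner guesses which feature defines category membership and revises the guess whenever the consequences say it did the wrong thing.

```python
import random

# Toy supervised category-learner (invented example, not from the course).
# Items are (striped, four_legged) feature pairs; labels mark category members.
# The learner guesses which single feature defines membership and revises the
# guess whenever feedback says it did the wrong thing.
def learn_feature(examples, labels, trials=100):
    rng = random.Random(0)      # seeded so the sketch is reproducible
    feature = rng.randrange(2)  # initial guess: which feature matters?
    for _ in range(trials):
        i = rng.randrange(len(examples))
        guess = examples[i][feature]  # "do the thing" the current detector says
        if guess != labels[i]:        # corrective feedback: wrong thing done
            feature = 1 - feature     # revise the feature-detector
    return feature

examples = [(1, 1), (1, 0), (0, 1), (0, 0)]  # (striped, four_legged)
labels   = [1, 1, 0, 0]                      # member iff striped
print(learn_feature(examples, labels))       # settles on feature 0 (striped)
```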

      to learn new categories through language, the meanings of (enough) words have to be grounded by sensorimotor categorization through (1) and (2), so that the rest can be learned by combining already grounded category names into propositions that define or describe the features that distinguish members from nonmembers

      a proposition is a subject (category) and predicate (category) with a truth value (true or false): e.g., a zebra is a striped horse, the cat is on the mat.
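      A toy rendering of that definition (my illustration; the feature sets are invented): a proposition combines a subject category with a predicate category and takes a truth value, here by checking feature containment.

```python
# Toy illustration: truth value of "a <subject> is a <predicate>" via features.
CATEGORIES = {
    "zebra": {"striped", "horse-shaped", "four-legged"},
    "striped horse": {"striped", "horse-shaped"},
    "spotted horse": {"spotted", "horse-shaped"},
}

def proposition_is_true(subject, predicate):
    """True iff the subject category has all the predicate category's features."""
    return CATEGORIES[predicate] <= CATEGORIES[subject]

print(proposition_is_true("zebra", "striped horse"))  # True
print(proposition_is_true("zebra", "spotted horse"))  # False
```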

      sensorimotor (T3) grounding connects a cognizer’s words to the individuals (proper names) or categories (content words: nouns, adjectives, adverbs) to which they refer

      language is not just the names of categories; it requires syntactic rules (how to put words together) to distinguish subjects from predicates, negation from affirmation, etc.

      syntactic rules can be learned through (1), (2) and (3), but some of them are innate (UG)

      feelings are biological traits; the capacity to generate them is part of brain function

      there is no reverse-engineering solution in sight for the hard problem of feeling: how and why do organisms feel?

      evolution, besides being lazy, is morally blind (the “Blind Watchmaker”)

      yet the only thing that matters — and the only thing that grounds the predicate “matters” — is feeling

      Delete
    2. "Cognition is the capacity to do all the things organisms can do — and the capacity to feel"

      I am unsure if these two things, although co-occurring, should be thrown in the same basket. Does this imply that a TT-passing robot will necessarily feel if it is to cognize as we do (although we won't be able to show that that is so)? I was under the impression that reverse-engineering a TT-passing robot would potentially lead to cognitive scientists building a feeling robot, not that feeling is a necessary consequence of building said robot.

      Delete
    3. I'm not sure what you mean, Solim. Organisms can do things (behavior) and they can feel. So cogsci needs to give a causal explanation of both those capacities; they are both cognitive.

      The only place you'll find necessity is in formal maths. Science is just high probability. I'm not sure what brought it up. (Was it the point [made elsewhere] that if someone could prove that a T5 Zombie is impossible then they would have solved the hard problem? That's true. But no one has proved it, just as no one has solved the hard problem any other way.)

      Delete
    4. You say that "Stevan Says: the brain produces feeling"
      Is it because there's no causal mechanism that we can identify, and because there are ways to pass the TT without feelings, that this is only an assumption? I agree with Robert that our capacities for doing and feeling seem to be so closely linked that it might very well be that answering the easy problem would involve answering the hard problem as well. But I also find it difficult to think of any other theory of what causes feeling (as a general structure, not a detailed causal mechanism explaining all feeling). If not the brain, then what else? It would be a huge coincidence if all organisms that have a CNS/brain also have feeling but that this relationship is purely correlational.

      Delete
    5. Yes, it's easy to agree that the brain causes feeling, and even that it does it in the service of being able to cause doing.

      But it's hard to explain how, or why.

      Delete
  5. I hope I am not misunderstanding what we have been taught so far when I say this, but I get the sense that it's possible that if we solve the easy problem, we will potentially solve the hard problem as well.

    While it is true that the easy and hard problems are not the same thing and we can properly distinguish the two, I don't think one could occur without the other. I don't think we can properly do things if we don't have the capacity to feel -- and vice-versa -- I don't think we can feel without the capacity to do. Because even if we do not have the capacity to do something, we do have the feeling of what it is like to not do it (i.e., I don't know what it is like to speak Russian, therefore the only feeling I have towards speaking Russian is not speaking it). Furthermore, doing capacity is not necessarily separate from feeling capacity. If I have a toothache, the way I feel will most certainly influence what I do. One example of this would be me deciding which side I chew my food on. This would be a demonstration that feeling capacity affects doing capacity (though it is hard to say if it is causal or not).

    I am not trying to say that easy problem and hard problem are one and the same, because they are clearly not. What I am trying to figure out is how different are they from each other because it seems as if they are intimately linked with each other and at times, inseparable.

    ReplyDelete
    Replies
    1.  ”I don't think we can properly do things if we don't have the capacity to feel -- and vice-versa”

      I feel that too. But the problem is explaining how and why: What is the causal role of feeling? (Because that’s the hard problem: it’s a problem of causal explanation, just like the easy problem: reverse-engineering how and why.)

      ”I don't think we can feel without the capacity to do. Because even if we do not have the capacity to do something, we do have the feeling of what it is like to not do it “

      That’s getting a lot of mileage out of introspection. But the ones doing this introspection are biological bodies with brains; and, until further notice, our categories (including “doing” and “feeling”) had to be grounded (in one of the three available ways) through our sensorimotor interactions with their referents — unless you think they’re innate categories, in which case you only need to explain how and why they evolved (which is just another variant of the hard problem).

      You know you don’t understand Russian because when you hear Russian spoken you have no idea what it means, whereas when you hear English spoken you do. Members and non-members of the category “languages I understand.” (But do you know what it’s like to be sound asleep [not dreaming]? or what “Laylek” means?)

      Yes, pain keeps reminding me not to chew on the injured-tooth side, but my brain is producing that pain, and it’s also producing where I do and don’t chew (and the feeling that there is a me). But you need to ask yourself: why does my brain (or evolution) bother with all that intermediate homuncular loop, rather than just going from the cause to the effect (via any necessary unfelt causal processes in between), from the injury to the not-chewing-here (including any requisite supervised learning in between)?

      The hard problem is just as much a problem of reverse-engineering as the easy problem. It requires a causal explanation, not just an introspective one.

      ”This would be a demonstration that feeling capacity affects doing capacity (though it is hard to say if it is causal or not).”

      You can say that “hard” again!

      The hard problem is to say how and why it is causal! Of course we feel that what we do is linked to and caused by what we feel, and vice versa. But cogsci has to do better than that: it has to reverse-engineer it, with a causal mechanism. But when we try to do that, we only manage to explain the doing part; the feeling part is just “there” (if it’s there: remember that T-testing is blind to it, because of the other-minds problem). So far it seems to be causally superfluous. And that’s the hard problem.

      About the link between the EP and the HP: You have to distinguish real causality from felt causality. (And, yes, that’s also connected with the feeling of free will; if it weren’t for that felt “causal power,” the HP would be more a decorative matter than a functional one. Bring this up in class Tuesday…)

      Delete
    2. This idea that the hard and easy problems go together makes a lot of sense to me. How we feel is impacted by what we do, and vice-versa.

      To play devil's advocate, though, why is it necessary to have feeling and doing interconnected? Epiphenomenalists believe that feeling is a result of actions, but that actions are not affected whatsoever by our feelings. Physical processes are only affected by other physical processes. I don't take this view myself, but I am curious as to how we could refute it.
      In your tooth example, you feel pain in your tooth because you tried to eat a rock, so your brain is sending you neural signals of pain. However, this pain will not cause you to act any differently. It is instead the physical state of the brain that causes you to eat with the other side of your mouth. Sure, you might think it is the pain that is causing you to avoid using this tooth, but in reality, your mental state doesn’t affect a thing.

      If epiphenomenalism were correct, I think the hard problem would be even more difficult to solve. Why would we ever evolve like this? Feelings would be like the audio and visual stimuli you get from watching a movie. You can see and hear all the action as if you are somehow involved, but your reactions to these stimuli don’t affect how the characters in the film act.

      Delete
    3. "Epiphenomenalism" is not a causal theory (reverse engineering); it is just one of many (empty) metaphysical speculations: Empty because they don't explain anything. They only rename the (hard) problem, and wrap it in an untestable metaphysical assertion. Here are a few:

      1. Monism: Feelings are physical, just like bodies and their doings.

      2. Dualism: Feelings are not physical, unlike bodies and their doings.

      3. Epiphenomenalism: Bodies and their doings have causal effects on feelings but feelings have no causal effects on bodies and their doings.

      4. Interactionism: Bodies and their doings have causal effects on feelings and feelings have causal effects on bodies and their doings.

      Not one of them answers cogsci's substantive question, which is: "How and why do brains cause feelings?"

      By the way, we already "knew" before this course that bodies could cause feelings: I put my hand in the fire and it hurts. And that feelings could cause doings: I pull my hand out of the fire because it hurts.

      We also knew that brain activity and doings are correlated with feelings (the way mirror-neuron activity is correlated with my making a movement, and with my seeing you make the same movement). But correlation does not explain causation.

      We also know that brain activity causes both (D) doings and (F) feelings. Cogsci has the task of explaining how and why, for both. Cogsci has a good chance (and has already made a bit of progress) with reverse-engineering D (the easy problem). But reverse-engineering F is turning out to be much harder (and so far there is no clue even about how to start).

      Delete
  6. Turing formulated the test we can use to determine whether we have successfully reverse-engineered cognition. Reverse-engineering cognition (and in doing so explaining how and why we can do what we do) is the easy problem of cognitive science. Explaining how and why we feel is the hard problem of cognitive science (and the one Dennett believes does not exist).

    Computationalists believe that cognition is just computation. "Stevan says" Turing was not a computationalist. He did not suggest that the machine that would pass the Turing Test had to just be a computer.

    Searle demonstrated that cognition was not just computation via the CRA. Manipulating symbols correctly using rules to produce a given output did not result in understanding. So, there needs to be something more to cognition than just computation, which Harnad has described as the symbol grounding problem. This transcends formal symbol manipulation to enter the realm of sensorimotor interaction with the world. This is why it would require at least a robot (as opposed to just a computer, as argued by computationalists).

    I know we've been over this before, but just to clarify, is this correct?

    Searle uses the hard problem in passing as evidence that the easy problem cannot be solved using just computation. It feels like something to be cognizing, and since he does not feel like he is understanding when he manipulates the symbols, it cannot be the same as speaking the language (since that would have made him feel like he was understanding), and so computation alone cannot explain speaking the language.

    ReplyDelete
    Replies
    1. Yup, you've got it.

      So what about the hard problem then? If passing the TT can't solve it, what can? Or if nothing can, why not?

      Delete
    2. Ishika, this is a fantastic summary and really helped me put together some of the different pieces of the course. One thing I'd like to clarify (which also comes up in the reading) is this: Does Searle not understand Chinese because the Chinese symbols are not grounded or because he doesn't feel what it is like to understand Chinese, or both? I think the answer is both because (1) Searle is only doing computation, manipulating meaningless symbols that have no sensorimotor connection to the world and (2) Searle obviously does not have the feeling of understanding Chinese because he cannot speak Chinese. I recall that we established that meaning = grounding (which is part of the easy problem) + feeling (which is the hard problem), and Searle has neither of these elements. Hence, it seems to me that Searle showed in one shot that grounding requires symbols to be connected to their referents (his symbols are not so they have no meaning) AND that in addition to grounding, a feeling is required––as it feels like something to understand. Meaning, then, requires two essential ingredients––one from the easy problem and one from the hard problem.

      So, while a T3 robot could ground its symbols, it is not clear whether it could feel; so we don't know whether a T3 would have meaning or just grounding.

      Delete
  7. So, based on the readings 10a-c and previous skywriting conversations, it seems like Turing's methodology for reverse-engineering doing-capacities is limited at the moment by the apparent requirement of more than computational abilities for a machine to pass the TT, because it thus depends on reverse-engineering sensorimotor and "feeling" capacities too?

    Or is it more that Turing's design was only ever intended for use in reverse-engineering computational abilities (trying to answer the easiest form of the easy problem in the TT), and that endeavour is a separate path of inquiry from that of cognitive science, trying to solve the hard problem?

    In this piece, Harnad writes that the "physical version of the C/T Thesis" states that almost any physical thing can be approximated by computational simulation - this is about the boundaries of what computation can do. There is also "The mathematical version of C/T Thesis" which states that computation - epitomized in the Turing Machine - can do exactly what humans can do when we compute things – also about the boundaries of what computation can do, but in this case in relation to a type of thing that humans feel we can do.

    I still wonder why feelings are left out by Turing – does he believe that feelings cannot be results of any kind of computation that is going on / being “done” inside the human machine?

    ReplyDelete
    Replies
    1. Hi, Alex! I’ll try to take a stab at the first question that you bring up in your post. From what I understood from this reading and the course in general, the Turing Test is basically a criterion for testing the method for reverse-engineering the easy problem of cognitive science, i.e. how and why we can do what we can do. This cut-off is essentially some sort of equivalence to what cognisers do, and it is based on our *observation* of sameness in behaviour between the reverse-engineered A.I./robot in question and cognisers. Since feelings can’t be directly observed, I would say that it is more that “Turing's design was only ever intended for use in …trying to answer the easiest form of the easy problem in the TT),” as you so eloquently put it.

      I do have to disagree with you on how you phrased certain observations that you had while reading the text, however. Namely, contrasting the “separate path of inquiry” in cognitive science that is trying to solve the hard problem with the reverse-engineering of computational abilities (I would assume of the mind) appears to suggest that the easy problem does not pertain to cognitive science. This is contradicted right at the very beginning of Harnad’s text, in which he states that the very suggestion of reverse-engineering cognition that Turing puts forward is what “set the agenda for what later came to be called ‘cognitive science.’” The difference between the easy and hard problems lies not in what field they belong to but in the ease with which we can causally explain them.

      Delete
    2. AlexST, Turing left out the hard problem (feeling) from the TT methodology not because of any limitations of computation but because he knew that his method could only explain observable doing capacity (the “easy problem”).

      (“Stevan Says” that Turing also had no problem with sensorimotor function because he was not a computationalist.)

      Yes, like it or not, the hard problem belongs to cogsci (who else? quantum mechanics? exobiology?). But whether it can be solved is another matter.

      What have the two things you mention in your 3rd paragraph been called in this course?

      I don’t understand your fourth paragraph.

      William, your replies to AlexST were all correct!

      Delete
  8. This course has really helped me understand what cognitive science actually is and it all starts with Turing and the Turing Test. Turing had the insight that if we can build something that can correspond with a human over email in a way that’s indistinguishable from a human then we have the causal mechanism that explains how our verbal capacities work. This strategy is referred to as 'reverse-engineering' (coined by Dennett I believe) and as Harnad said, this approach laid the foundation for cognitive science, which is concerned with reverse engineering all human doing-capacities (T3 or above). The difference between Turing's original suggestion for test criteria (T2) and T3 is the understanding that humans can do a lot more than just communicate verbally. If we want to answer the question of how and why we can do everything we can do, we need to have a causal explanation for all doing capacities.

    What is that causal mechanism? We are still very far from answering this question, but we can discuss some of the contenders. One contender which held a lot of promise was computation. Computation is manipulating symbols based on their shapes and not their meanings, the output of which is semantically interpretable (the output has to mean something to someone). Computation is incredibly powerful because it can simulate almost anything (this is the Strong Church/Turing Thesis, also called Weak AI by Searle). A computationalist would say that cognition is computation, and so a candidate that passes T2 could be purely computational. We have reason to believe that this is incorrect based on Searle’s Chinese Room Argument, which showed that a computational system would lack ‘understanding’ (because the feeling of understanding would be absent). So we know cognition isn’t all computation: we can’t derive meaning from formally manipulating symbols alone. “Stevan Says” that to pass T2 for a lifetime you would need the symbols to be grounded (the symbols have to be connected to what they refer to in the real world), and this requires sensorimotor capacities (which are not computational). Harnad writes that Turing would have known that verbal capacities require connection to the outside world through sensorimotor systems (what would a purely computational T2 do if its human pen-pal sent an image through the chat?) and so Turing would not have been a computationalist.

    ReplyDelete
    Replies
    1. AlexTS, Turing's methodology (of reverse-engineering all of our doing-capacity: the easy problem) is not what is limited; it is computation as the causal mechanism of all of our doing capacity (i.e., computationalism) that is limited.

      (And Turing's methodology admitted from the outset that it could not test or explain feeling. He did not include what he could not explain.)

      The hard problem belongs squarely to cogsci (and feeling is certainly part of cognition), but it's not clear whether cogsci can solve the hard problem.

      The "physical version of the C/T Thesis" is the Strong C/TT (computation can simulate/model just about anything).

      The "mathematical version of the C/T Thesis" is the Weak C/TT (computation is what mathematicians do).

      Allie. You seem to be understanding it pretty well.

      I don't know if Turing was a computationalist ("cognition = computation"). I think not. Turing meant the reverse engineering of all doing-capacity, not just verbal capacity. So T3 capacity was part of TT all along. Verbal capacity has to be grounded in sensorimotor robotic capacity (T3).

      The purpose of the "all" constraint was to minimize the underdetermination -- the number of degrees of freedom for passing the test in different ways. (But I don't think Turing insisted on "strong equivalence," and I'm not sure what he thought about the need for T4.)

      Delete
  9. Like many other students, I enjoyed reading this article too! It helped me review concepts about the Turing test, the easy problem and the hard problem.

    As we already know, T2 is the penpal, verbal version of the TT used to address the easy problem: how and why can organisms do what they do? It involves sending messages back and forth between a candidate and the interrogator. The candidate is a computational system, so it manipulates symbols according to formal rules that operate on the shapes of the given squiggles and squoggles, not their meanings. Hence, computation is just syntax. To pass this TT, the candidate must be able to communicate verbally for a lifetime, indistinguishably from a human. In his Chinese Room Argument, Searle showed that even if the T2-passing candidate provides the right outputs for the inputs (just doing the doing), it does not necessarily understand the symbols. Therefore, (1) cognition is not only computation, (2) computation is hardware-independent, and (3) the TT is not decisive.

    Stevan says that symbol manipulation alone is insufficient to pass T2. The T2-passing robot should know what the symbols mean in order to use them in conversation. Therefore, it must be able "to recognize, categorize, manipulate, name and describe the things in the world that the words denote." This requires sensorimotor capacity. The robot can ground symbols through interactions with the things the squiggles and squoggles refer to in the real world. Unlike the T2 candidate, a T3-passing robot has internal and external physical parts allowing for this robotic performance capacity and a grounded symbol system, perhaps enabling it to “feel something.” Yet, because of the other-minds problem, we cannot be 100% sure it understands the symbols (i.e., is cognizing) or that it is feeling at all.

    ReplyDelete
  10. This reading was helpful in bringing together the concepts we’ve discussed throughout the term. I’d like to take this opportunity to summarize my understanding of reverse engineering related to the Turing Test. The goal of cognitive science is to reverse engineer the human capacity to cognize. There are two problems associated with this goal. The easy problem: determining how we do what we do. The hard problem: determining why we do what we do. The Turing Test sets out to reverse engineer the human capacity to think, to the point that the final product is indistinguishable from real, human capacity. Once this has been done, we will theoretically understand cognition. It has already been determined that cognition is not computation (illustrated by Searle’s Chinese room argument). This is because in his thought experiment, there was no understanding — computation alone lacked the feeling component of human cognition. As was established by Descartes, the only thing we can be certain of is that we feel. “I think therefore I am”. This emphasizes the persistent difficulty the hard problem raises.

    ReplyDelete
    Replies
    1. Hi Claire, I would like to tweak a couple of things in your summary:
      - While it is true that the goal of cognitive science is to reverse-engineer the human capacity to cognize, the Turing Test is not the process of reverse-engineering itself. It is just a test to see if a robot or penpal (in the case of T2) can have the same capacities as a human.
      - The easy problem is not determining just how we do what we do. It is also causally explaining why we do the things we do. Together, the easy problem is just explaining the doing capacity.
      - The hard problem is not determining why we do what we do: in fact, it has nothing to do with doing. It 100% relates to how and why we feel. We know when and whether we feel, because it's one of two things that we can always be certain about.
      - I don't see how the cogito "emphasizes the persistent difficulty the hard problem raises." Do you mean that you are frustrated at the fact that you know that you feel, but you can't causally explain how and why you feel? I share the same sentiment, and I'm growing more and more comfortable with this explanatory gap. After all, what's the fun in explaining anything and everything that has ever existed?

      Delete
    2. To briefly add to the first point:
      - My understanding is that if we create a machine (T2/T3/T4 depending on your stance) that passes the Turing test, then we have found a viable explanation for cognition. So this is how the Turing Test ties in with reverse-engineering.

      Delete
  11. The methodology of cognitive science for explaining cognition is attributable to Turing and his Turing Test. Turing's methodology is to build machines that have human capacities, indistinguishable from a real human's for a lifetime. Successful designs (designs that possess such capacities) offer a plausible causal explanation for cognitive capacities. The Turing Test was initially conceived as a test of verbal indistinguishability from a human (T2). Turing would agree that a machine which is only verbally indistinguishable from a human is not yet the full explanation of cognition. There is more to humans than verbal capacities, namely all our doing capacities and our capacity to feel. Harnad proposes that T2 can probably not be passed by computation alone. He argues that in order to pass T2, a machine would have to have grounded its words in their referents in the world through sensorimotor interaction. For Harnad, a T3-passing robot, one truly indistinguishable in all that we can do (not only verbally), would be the right level of determination for solving the easy problem. T2 is underdetermined and T4 is overdetermined, since T4 indistinguishability (molecular identity) doesn’t matter more than T3 indistinguishability for our ascribing of thinking to the robot. For the doing capacities, if a T3 robot is indistinguishable in what it can do from a real human, T4 wouldn’t bring anything more (T3 is already indistinguishable). Nevertheless, for our feeling capacities, one could prefer a strong equivalence (T4) over a weak equivalence (T3). While there would be no means of testing whether a T3 (or a T4) had feelings (because of the OMP), the molecular identity/strong equivalence of a T4 (even if we don’t know the causal mechanism of feeling) looks like it would also have feeling.

    ReplyDelete
    Replies
    1. 1. Neither T3 nor T4 can be purely computational.

      2. Strong/Weak Equivalence is originally a computational notion: Same I/O but same/different algorithm.

      3. T4 vs T3 is not a matter of same vs. different algorithms as humans.

      Delete
  12. The Turing Test was meant to provide a causal explanation for how and why we can do what we do, and Turing believed that this was the best we could hope for. I enjoyed this reading because I felt it brought together many central issues that the course had focussed on thus far, including the hard and easy problem, computationalism, and the Church/Turing thesis. What we have learned thus far is that there have been many attempts to solve the easy problem, i.e., to reverse-engineer our cognitive capacities, but the hard problem of how and why we feel remains firmly out of reach. I wonder whether answering it should even be a goal of cognitive science. All the readings have led me to feel that there is no hope of solving the hard problem, and I find myself agreeing with Turing that the answer to the easy problem is the best we can do. Is it really hopeless, or is it perhaps a question better left unanswered? Should cognitive science just give up on this altogether and instead focus on the easy problem?

    ReplyDelete
    Replies
    1. I agree with you in that I don't think we can currently answer the hard problem. I don't see how we could explain the how and why of feeling when we can't even explain the how and why of doing yet. But I don't think this means we have to give up on the hard problem entirely, especially because sentience is a big part of our cognitive capabilities and I don't think we can simply be content not even trying to find the answer. I'm also (probably wrongly) hopeful that if we can answer the easy problem, it'll at least give us the tools to advance on the hard problem.

      Delete
    2. Replying to what Ada said about answering the easy problem giving us some tools to answer the hard problem, I think this is partially true. If we take the argument mentioned by Professor Harnad earlier in the class -- that there is a chance that in answering the easy problem we will use up all the degrees of freedom and have no causal degrees left to work on the hard problem -- it is also possible that feeling could be an emergent property. In other words, if we can make something that passes the Turing test and does everything we can do (including symbol grounding, as Professor Harnad discussed, and some degree of mind-reading), it is very possible that that mind would feel. While in some ways I know this sounds like a lazy answer, in which we hope to just get lucky, I think there is a real possibility that feeling is not causal in a way that is separate from doing -- that it is in fact required for doing, such that making a robot that can pass the Turing test for a lifetime will in fact create a feeling mind, whether Turing intended it or not. Of course, even if this is the case, it does not necessarily mean that we will then know how or why that particular mind feels.

      Delete
  13. In one lecture I misspoke and said that there was debate over whether we can be sure we were feeling or thinking. Oh boy was that a bad thing to say. I think the best take-away I received from this reading was the connection between Searle’s CRA and Descartes' Cogito. I always felt that Searle’s argument that he didn’t ‘understand’ the Chinese was a little weak. However, connecting the indisputable fact that we can only be certain that we feel and cognize to Searle’s claim, I now see that his argument holds. If he claims there is no understanding in what he does during the CR experiment, then there is certainly something missing. Computation cannot be all there is to cognition. So, in sum, this lesson of mine also demonstrates why we should not strive to answer the hard problem with the Turing test. Searle is essentially a T5, indistinguishable inside and out, yet he lacks understanding in his CR, and thus lacks insight into the hard problem. We should, however, exhaust all our knowledge of the easy problem, the part of the CRA that involves computation. We must focus on the easy problem because that is all we can feasibly advance our understanding of.

    ReplyDelete
  14. I was somewhat surprised to see another Turing-related reading at this point in the course, but it did include a succinct refresher on the easy/hard problem, reverse engineering, etc. It's a very good way to ground everything we have covered by cycling back to it, especially noting the coverage in the first lecture of the semester.

    Reviewing the Turing test tiers was also useful;
    T2 - the email test
    T3 - the robot test
    T4 - neuro test

    ReplyDelete

Opening Overview Video of Categorization, Communication and Consciousness