Blog Archive

Monday, September 2, 2019

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer


This is Turing's classic paper with every passage quoted and commented, to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended or only the email/penpal test, whether all candidates are eligible or only computers, and whether the criterion for passing is really total, lifelong equivalence and indistinguishability or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

68 comments:

  1. Is the link not working for anyone else?

  2. This post is not directly relevant to the paper, which talked about how the Turing Test was meant to demonstrate whether computers could "think", "thinking" being defined as doing the same things we can do. The paper goes on to endorse a T3 version of the test, to see if the bot "can do" all the things that we can do with our sensorimotor capacities in the real world.

    (There used to be a lengthy rant about the limitations of the Turing Test, but I realized that was covered in class, so for the sake of preserving only new information, the relevant section was removed. Enjoy the modestly relevant section that follows.)
    ***

    The less relevant, tangential, loosely worded and existential section begins here:

    I'd like to contemplate one aspect of the quest of finding other minds that I think doesn't get enough attention: how our decisions about who others are are reflections of who we are. To do that, I will expand the question of other minds to the idea of consciousnesses - fully acknowledging that it is possible that another being is conscious but without a mind.

    To do that, I'd like to draw everyone's attention to this podcast by Radiolab called "More or Less Human", which features demos on how we might test a bot to see if it's human, a time we may have failed the Turing test, and what other tests might give us more insight into what is a mind and what asking that question tells us about ourselves: https://www.wnycstudios.org/podcasts/radiolab/articles/more-or-less-human.

    One section of the podcast concerns this video where anonymous people "kill" an animatronic dinosaur: https://www.youtube.com/watch?v=pQUCd4SbgM0.

    Watch the video. How does it make you feel? Did you cringe or feel guilty or like there was something wrong being done? Why is that? They discuss this idea briefly in the podcast, pointing out that torturing Pleo, even if it is just a "robot", is demonstrating sociopathic behavior.

    Maybe we feel uncomfortable because we think wanting to cause pain (intentionality) isn't right, even if there is no one receiving the pain; maybe it's because we believe that Pleo does experience some form of pain that warrants defending them; maybe it's because we're not sure ourselves where the line is drawn, and whether we would do the same.

    What we do to robots, or animals, or anything that isn't human, isn't so much a commentary on what minds are - it's more a commentary on where we draw the line. Our actions are the products of our beliefs, and one of those beliefs is whether something is conscious enough to warrant doing XYZ to it. How we come to that decision is based on what we believe to constitute consciousness, or to that end, what actions we seek in others to decide that they are conscious.

    What this video suggests is that there may be no one action that fits the bill, or even a given set of actions that would do the trick: even if they could do everything a baby could do (like protest or express pain), we seem OK with doing it anyway if the process by which they do it isn't enough like ours. Process seems to matter at the outset - but how important is it really in retrospect? The whole story? I don't think so. None of the story? Neither that, else this video would have been flagged for abuse. The answer seems to lie somewhere in between...


    Replies
    1. Hi JG, good post, but please make them shorter, still kid-sibly, but more concentrated. Otherwise I won't have time to reply!

      1. "Conscious without a mind"? What does that mean? "Having a mind" just means "being conscious" which in turn just means "being able to feel" ("sentience").

      2. Sorry I just don't have enough time to watch videos or podcasts or else I will never be able to review all the skywritings! Others in the class may have the time, though.

      3. There is sociopathic behavior (doing) and sociopathy (feeling: pleasure (or neutral feelings) in hurting others). They are not the same.

      4. About torturing robots, see Spielberg's AI: Another Cuddly No-Brainer which we may discuss at the end of the course.

      5. "What we do to robots, or animals, or anything that isn't human"? "whether something is conscious enough to warrant doing XYZ to it"? Doesn't it depend on whether it feels?

      The podcast does not sound very deep. Let's try to go deeper in this course.

  3. "There is no one else we can know has a mind but our own private selves, yet we are not worried about the minds of our fellow-human beings, because they behave just like us and we know how to mind-read their behavior"

    In order for a computer/robot to pass T3, it would need to have sensorimotor abilities. In terms of understanding the world around it and computing values based on its sensory input, it would do fine. However, given that it would still be a robot, would its (non-verbal) communication - which in part constitutes our ability to "mind-read" - then not be impaired, and isn't communication inherent to being human? For example, when we are having a conversation, isn't the conversation's tone and direction often determined by the body language of both people involved? Or does T3 still assume that the robot/computer would be hidden and we would not be able to see it? This is assuming non-verbal communication is still part of performance data.

    Replies
    1. I think this is an interesting point that I was also thinking about! My only hesitation would be to consider a common communication strategy we use: phone calls. I’m sure all of us have received at least one robo-call in our lives and quickly determined that it wasn’t a real person because the tone of the voice was monotonous or emphasized strange syllables. But when we hear a voice that has the tone and cadence of a real person, we likely assume it is another person. This shows we can glean a lot of extra information from listening to how someone says something. So to pass T3, a robot would need to be able to not only interact with the world, but also interpret tone of voice and subtle nuances that a normal person would pick up on, such as sarcasm (an endeavour some humans struggle with). Considering this, I think body language is important when it comes to communication, but I’m not sure if this can be argued as being an inherent aspect of communication, at least in modern times.

    2. Ishika, T2 is part of T3. Ting can talk.

      Aylish, T2 is our verbal texting capacity. And it lasts a lifetime: Texting with Ting about anything as long as you want. Tone of voice and body language would be part of T3.

    3. I've also been thinking about the significance of nonverbal communication skills. This led me to consider individuals with autism who struggle with social communication, yet whom we would still consider human, with the capacity to think. There are many individuals whose behaviour lies outside the "normal" range, yet we still believe them capable of thought.

      The section Ishika quoted: “we are not worried about the minds of our fellow-human beings, because they behave just like us and we know how to mind-read their behaviour”, leads me to think about the huge impact of physical appearance. Even when the people around us exhibit irregular behaviour, we don’t doubt they are human beings with thoughts and consciousness.

      I realize that our discussion is about understanding cognition, not fooling others into thinking a machine is a human. However, I find it interesting that when the above factors are considered, it may be more important for a machine to appear realistically human, than for it to behave in a realistically human way.

    4. What is important (for cognitive science -- rather than social or clinical psychology) is reverse-engineering cognitive capacity, rather than, say, personality or psychopathology.

      Suppose you were trying to reverse-engineer how it is that an airplane (that grew from a bud on a tree) is able to fly (when flown by human pilots) in a world where planes were discovered rather than built from scratch: Aeronautic (reverse-engineering) science would not be concerned with which passenger safety system it has installed for water landings (even though that too is important for other reasons) but with how it is possible for a plane to get off the ground and stay aloft.

      Another analogy would be reverse-engineering the heart's capacity for pumping blood: Explaining the cardio-pathological effects of alcohol-drinking or smoking (or meat-eating!) would not be part of the quest for the explanation of how hearts pump blood.

      On the other hand, the differences between average hearts and the hearts of elite athletes (and whether their cardiac efficiency is because of genes or training or both) might give a clue as to how the heart pumps blood.

      Ditto for how brains pump cognitive capacity. (An idea for those who might favor T4 over T3; to be discussed in Week 4.)

  4. “But even just for T2, the question is whether simulations alone can give the T2 candidate the capacity to verbalize and converse about the real world indistinguishably from a T3 candidate with autonomous sensorimotor experience in the real world.”

    “Wouldn't even its verbal performance break down if we questioned it too closely about the qualitative and practical details of sensorimotor experience?”
    (both quotes from assigned text)

    I would like to respond to these two remarks by Harnad, both in the theme of questioning whether a T2 computer could actually pass the T2 test, or whether only a T3 robot could pass it. I really like this point about sensorimotor experience because it underscores how complex our relationships with the physical world are. For instance, what if one questioned the T2 computer about optical illusions? Perhaps the T2 computer could simulate how the eye processes these images, but it is difficult to truly describe what it is like not to fully trust your own brain unless you have had that experience. That being said, the T2 computer could capitalize on the idea that there may not be a fixed qualitative experience associated with sensorimotor interactions with the world. I would imagine my qualitative experience of a colour is far different from the qualitative sensory experience of someone who has synaesthesia. Even if they didn’t have synaesthesia, how would I know that my sensorimotor experience is qualitatively the same as theirs? So if T2 emailed me back about its simulated sensorimotor experience, I don’t think I would be confident in dismissing it as simulated. The strong Church-Turing thesis tells us that this simulation can be accurate to the dynamic sensorimotor experience only to a degree, but I wonder if this degree could actually be acceptable and close enough. Given the optical illusion example above, do we even fully understand all of the practicalities of our own sensorimotor experience? If the simulation was really close to the real dynamic process, could we tell the difference?

    Replies
    1. Good points, but remember to keep doing and feeling distinct, because they are. The TT (from t0 all the way to T4) is a test of doing only. And the only doing that T2 tests is verbal. So it's about what would be needed for computation alone to be able to generate anything a person could say -- and respond to anything that is said, indistinguishably from what Ting or any of us could say. Since it's T2, we're not talking about Ting, who is a T3 robot. We are asking whether a T2 computer could describe -- not feel -- an optical illusion. No reason they cannot describe one, just as they can describe an apple. Probably a GPT-3 deep-learning chatbot could already describe an apple or an optical illusion. But that really is an illusion because either GPT-3 cannot pass T2 or -- if it can -- it means T2 is not a strong enough test.

  5. In his article annotating Turing’s paper, Professor Harnad points out a “fatal flaw” in the Turing Test at the T2 level. He argues that there are arbitrary restrictions on what Turing thinks a machine should do at this level, namely the hypothetical situation where the T2 penpal would fail to comment on anything outside of the actual conversation (images, real-world events, etc.). I would imagine that if Turing were to revise his original paper in 2008 (when Harnad published his paper), he would also be inclined to agree that there should be modifications to what is considered T2.
    I would propose a new level for the Turing Test, somewhere between T2 and T3. In this new level, I would put what Professor Harnad proposed in his paper (commenting on photos, bringing up new and relevant discussion topics). I would call this verbal communication-plus, since it requires acquaintance with whomever it is talking to. For example, a machine would need to know me and my interests in order to bring up a new discussion topic that would be pertinent to my sphere of knowledge. Failure to discuss anything relevant would be an indicator that it has failed this new level of the Turing Test.

    Replies
    1. "Stevan Says" that to be able to text coherently about anything a real person can text about (T2) requires direct sensorimotor interaction (T3) with the things that are being texted about. A trickbot like siri or GPT-3 can fake it for a while, but that's why the "lifelong" criterion in so important. The Loebner Prize is not T2.

    2. I think that it is really creative that you propose a T2.5 (what I am going to call this level between T2 and T3)! I understand how it is different from T2 in that it can comment on photos and current events in the world, but I was just wondering if you could clarify on how exactly that is different from T3. Is it different because we will still only evaluate it via email?

      I am also wondering about the idea of acquaintance. So for a robot to pass this T2.5 test, not only would it need to deliver indistinguishable emails compared to a human, but it would have to specifically be indistinguishable from a human that knows you to some degree. Do you mean that it would be able to get to know you during the course of the test or somehow be preprogrammed to know you personally? I would be skeptical that even another human could pass the Turing Test if they had to imitate/be indistinguishable from a friend/acquaintance of the interrogator. From my understanding given we are focused on the general cognition and not individual differences the interrogator would be a stranger to both the robot and human participating in the test. Your point about pertinence to your sphere of knowledge really makes me realize how complicated communication and interaction with others is; it is far more than just asking and answering questions but also seems to require theory of mind to posit what the other person believes or knows to be true.

    3. Stephanie I'll let Wendy try to reply about T2.5, since I don't think it is useful to define hybrid "levels." It would all just be arbitrary toy fragments of performance capacity (like chess-playing or designing acrostics) -- i.e., t0 -- plus T3 and T4, if it weren't for the special autonomous features of language that suggest human cognition could all be tested verbally, by T2 alone. (And "Stevan Says" T2 is not really a cognitive "level" because it could only be passed by a T3 (or a T4) robot; it's just that it could be tested with T2 alone, i.e., verbally rather than robotically.)

    4. Not sure how to tag users in a reply so everyone would get a notification...

      @Stephanie Pye @Stevan Harnad I wouldn't say that I'm proposing a hybrid level; it would have to be a completely distinguishable thing (maybe bump "T3" to now be named "T4"?). If you think about it, all our activities on the Internet are already being analyzed: Google (and other big tech companies) already know almost everything about us, based on our interactions online. I imagine this is just machine learning, proving that it is possible to be "acquaintances" with a real-life user.

    5. Wendy, your distinctions are arbitrary. Turing's levels are not:

      T2 everything a human can text, indistinguishably from any other human

      T3 everything a human can text and also do with all the things in the world they can text about, indistinguishably from any other human

      T4 everything a human can text and also do with all the things in the world they can text about and everything their brain can do, indistinguishably from any other human

      T2.5: everything a human can text, plus commenting when shown photos

      is not a level, it's just T2 + an arbitrary toy fragment of T3.

      The point of asking for it all, and not just for toy fragments, is that toy fragments can be done in many different, arbitrary ways. The TT tries to narrow it down to something with a better chance of being the real thing.

      Consider a t0 that can only play tic-tac-toe, a t1 that can play tic-tac-toe and also checkers, a t2 that can play tic-tac-toe and also checkers and also chess...

    6. Responding to Wendy’s point on web tracking - I was also thinking about this, and whether a machine equipped with this ability would even count as anything beyond t0, despite seeming to be able to do the image identification and context-specific discussion T3 is supposed to be capable of. An AI could scan various social media platforms (Twitter, Facebook, Instagram) and could answer questions on the content of photos with much more accuracy, identifying people based on tags and photo captions, and using computer vision algorithms to match individuals in photos across platforms (if text isn’t enough). But since web tracking is still an AI, just as GPT-3 is, I’m assuming it would be considered t0 and thus be subsumed by T2. Is a toy fragment of T3 not also a toy fragment of T2?

  6. Quote: "And even if we are handicapped (an anomalous case, and hardly the one on which to build one's attempts to generate positive performance capacity), we all have some sensorimotor capacity." (Harnad, 2008)

    As determined, performance capacity is what is sought in the Turing Test. I'm, however, a little unclear on whether the goal really is to find a machine that is an idealized version of a human (no disabilities, no neurodivergence, etc.). Would it truly represent human cognition? On the other hand, would a human who behaves differently, has different sensorimotor capacities (someone who's colorblind, for example, or disabled), and is neurodivergent not be considered to hold the "ideal" cognition we are looking for in machines?

    Replies
    1. The objective of the TT (and of cogsci) is not to "represent" cognition but to reverse-engineer it: discover a causal mechanism that will produce cognitive capacity (what cognizers can do) and test whether they have succeeded. It's not "idealization" to require that the cognitive capacity should be anything that an ordinary, normal human can do (think of Ting) and not what a human in a chronic vegetative state can do. Of course the chronic vegetative state (CVS) eventually has to be understood too, for clinical reasons, but would that not have a better chance top-down, from the full TT, rather than bottom-up, from a CVS-TT?

      That said, the path to the full TT will no doubt require scaling up from a lot of partial capacities (locomotion, learning, reasoning) -- though probably not from the modelling of deficits (paralysis, dyslexia, blindness). But who knows? There is no formula for where creative modellers will get their ideas. And of course for understanding organs and organisms, Claude Bernard recommended damaging them to help understand how they functioned normally. That has worked for clinical medicine (to the benefit of countless suffering human beings, though at the cost of incalculable suffering to countless nonhuman beings); in cognitive science, however, it only applies to T4.

      2. I feel the definition of an "ordinary, normal human" is not clear enough. A human in CVS can pretty easily be said not to be "ordinary". But what about much more common traits? Things that science doesn't perfectly understand yet, like neurodivergence? If we are trying to come up with all the "ingredients" to make the "soup" that is human cognition, what is the recipe that we are basing our reverse-engineering on? Again, what is an "ordinary, normal human"?

      If we consider that a wide variety of people cognize (a man with retrograde amnesia, a woman with severe schizophrenia, a disabled non-binary person, to name a few examples), then the "ingredients" for cognition must be something that all of them possess. What is this standard cognition (based on standard human capacities) that we are trying to reverse-engineer, when these capacities are obviously not a necessary condition for cognition?

    3. There are many ways to model a fragment of human cognitive capacity, fewer ways to model it all.

      Let's wait till we get closer to the target before worrying about whether we are too close or too far.

  7. If it is agreed that computation is implementation-independent, why would we consider T5, or even T4?

    Replies
    1. Recall that it is unclear whether or not Turing was a computationalist. Prof. Harnad takes it that he wasn't, as mentioned in "The Annotation Game". I think there are better reasons why we shouldn't need to consider T5 or even T4 to determine whether or not a machine can "do as thinking does". The Turing Test is meant to be a test of performance capacity. Any man-made machine that would pass it has to be indistinguishable (in its performance capacity) from other intelligent systems (i.e., humans). T4 and T5 both include criteria that go beyond behavioural performance: they include restrictions on what kinds of internal structures would be appropriate to pass the test.

    2. Eli, I hereby endorse Solim's reply to your question!

  8. T4 has fuzzy boundaries, as illustrated by the blushing case, and I was wondering how we would be able to properly discriminate between the two (T3 and T4).

    You mentioned how tone of voice can be considered T3, but I think it could be considered T4 as well -- what if the robot wants to be sarcastic, or to reply in a tone that invites more questioning? I think these would be more representative of internal states than of simple sensorimotor capacities.

    I'm not sure there really is a way to discriminate properly between what can be considered T3 and what can be considered T4. However, I do think that certain T3 actions are exclusively T3, and the same goes for T4. Examples of exclusive T3 actions could be reflexive movements; for T4, it would be more about emotional and intellectual states. Overlapping T3/T4 states would most certainly include blushing, or anything that cannot be deemed mutually exclusive between the two.

    With that said, I think it would be necessary either to include a T3/4 category or to add another category where this overlap is captured; for example, T4 could be the level where sensorimotor and internal states interdigitate, and T5 could be exclusively about internal states. Internal states that affect the physical (such as being nervous) cannot be separated from it, and leaving such a fuzzy boundary does a disservice to the hierarchy.

    Replies
    1. Both the T3/T4 boundary and the cognitive/vegetative boundary are fuzzy, but now, when we are still so far from being able to reverse-engineer basic cognitive capacity, it's a bit early to worry about the fine-tuning...

  9. "The fact that eyes and legs can be simulated by a computer does not mean a computer can see or walk (even when it is simulating seeing and walking). So much for T3. But even just for T2, the question is whether simulations alone can give the T2 candidate the capacity to verbalize and converse about the real world indistinguishably from a T3 candidate with autonomous sensorimotor experience in the real world”.

    Here is a fun question: If a machine can simulate a virtual reality which, if I am not mistaken, is within the capacity of a T2 machine, and the interrogator is not aware that they are in a virtual reality, does that mean the T2 machine is effectively passing as a T3 machine? The machine can simulate all the abilities of a T3 machine, like actually flying a plane, in this virtual reality. I understand this is essentially simulation theory of reality so I am probably distracting from the real argument here. Ultimately, I am making a case for the power of complex simulation.

    Later in the paper this issue seems to be addressed:

    "But it will still be true, because of the Church-Turing Thesis, that the successful hybrid computational/dynamic T3 robot still be computer-simulable in principle -- a virtual robot in a virtual world. So the rule-based system can describe what a T3 robot would do under all contingencies; that simulation would simply not be a T3 robot, any more than its virtual world would be the real world”.

    This touches on my question, but doesn’t address if the person knows that they are in a virtual world. If they can’t tell the difference between a virtual world and the real world, perhaps then we have successfully passed T3 with a computation-only machine.

    Replies
    1. It's fun, but it's better to resist the temptation to play with the sci-fi aspects of TTesting! (Cogsci is sci-fi-ish enough already!)

      Variants on the TT based on ways to fool people are not relevant to cogsci, whose mission is to really reverse-engineer real cognitive capacity, not to fake it. Designing virtual-reality devices is a remarkable spin-off of the power of Turing computation and the Strong C/T Thesis, but it does not make us any the wiser about how organisms cognize!

      Remember that in order to learn something about how cognition works via simulation, you have to be able to simulate the world as well as you simulate the cognizer, and this is a rather taller order than cogsci. It's also true whether you are aiming for T2 or T3. For T2 you have to "simulate" anything and everything that might ever be said by or to a cognizer; and for T3 you would have to simulate any and every robotic (sensorimotor) interaction a cognizer might have with the things in the world. Good luck!

  10. I do not blame Turing for his choice to draw the physical/intellectual line where he did. I acknowledge that his focus on T2 is an incomplete recreation of cognition, but I would also acknowledge the dissatisfyingly incomplete nature of attempting to apply T3 to something akin to the Imitation Game.

    As something that began as a thought exercise with very clear and easily imagined and understood parameters, how do you alter it in a way that can test all physical elements of a machine that it must replicate to display 'cognition' and still maintain the visual illusion that it may be human? The text-based game allows for this factor of appearance to be removed, and I understand the appealing simplicity of focusing solely on verbal skills rather than attempting to determine where the line would have to be drawn within the murky realm of physical ability in order to maximize the cognitive replication while still avoiding the issue of giving away its non-human status via its appearance.

    Replies
    1. "how do you alter it in a way that can test all physical elements of a machine that it must replicate to display 'cognition' and still maintain the visual illusion that it may be human?

      Build a robot that can do everything (cognitive) that Ting can do. That's the real (i.e., cognitive) part of the TT. The cosmetic part (appearance) is getting easier and easier.

  11. Firstly, in “The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence”, Professor Harnad defines “thinking” from Turing’s classical paper as “an internal state that is observable as neural activity and introspectively observable as our mental state when we are thinking.”

    Secondly, to recap: in order for a digital machine to pass Turing's Imitation Game, it would have to (1) successfully hold a conversation and (2) fool the interrogator into thinking it is human. Passing the Imitation Game would then allow the inventor to answer the easy question: how and why can organisms’ structures and functions do what they can do?

    Thirdly, Professor Harnad describes the hierarchy of Turing Tests.
    - t0: the candidate must complete a specific task to the degree that it is indistinguishable from a human (e.g., play chess)
    - T2: the candidate must email or verbally communicate to the degree that it is indistinguishable from a human (e.g., tell us whether the moon is visible at night)
    - T3: subsumes T2 and the candidate has identical robotic performance ability to that of a human (e.g., interact with the world)
    - T4: subsumes T3 and the candidate has identical internal structure (at the neuronal level) and function to that of a human
    - T5: subsumes T4 and the candidate is “indistinguishable from humans right down to the last molecule”

    Lastly, Professor Harnad states that "the level of test that Turing should have intended was T3" for robots. In class, we said that T3 cannot be passed by computation alone. A robot has to be able to interact with the world, and this requires internal and external physical parts. As well, a digital computer can only simulate sensory transduction or real movement, not actually perform them. If Turing had used T3, "no" would be the answer to his question (“Are there imaginable digital computers which would do well in the imitation game?”). Digital computers alone would not pass his Imitation Game, because they lack this robotic (sensorimotor) performance ability.

    Replies
    1. Good summary. A few correx:

      (1) successfully hold any conversation lifelong
      (2) not"fool the interrogator into thinking it is human": really produce human cognitive performance capacity, equal to, and indistinguishable from real human cognitive performance capacity.

      TT (as the methodology of cogsci) is not "imitation" or a "game." It is the attempt to test whether cogsci has successfully reverse-engineered cognition.

      It is the attempt to do anything (cognitive) that you can do, Ting!

    2. P.S. although it is sometimes associated with the symbol grounding problem and even with categorical perception, I have not yet found anything relevant to the TT in the work on the "uncanny valley."

      Does anyone see something relevant (to cognitive function and T3 -- rather than just vegetative function) in this kind of humanoid robotics?

      3. Hi Professor! I came across an interesting read written by Paul Teich.

      He lists some limitations of the TT and offers an alternative, modernized approach to assessing a machine's NLP. As well, he believes that "an AI trying to pass a general conversational effectiveness test (implicitly or explicitly) must first pass through the uncanny valley."

      You can find the article here: https://www.nextplatform.com/2019/03/18/modernizing-the-turing-test-for-21st-century-ai/

    4. T2 does not test "a machine's NLP." It tests whether it is able to text indistinguishably from us (for a lifetime). The "uncanny valley" (if it's part of the TT at all) would be part of T3, because it requires vision (optical input). Not being able to carry a tune would not mean failing the TT either...

  12. - “What thinkers can do is captured by the TT. A theory of how they do it is provided by how our man-made machine does it.”
    This paper really helped me clear up what the purpose of the TT really is. The TT is not concerned with whether a machine feels what I feel when I think, but whether a machine can think the way I think. And it doesn’t matter what the specific chemical and/or electrical process might be, because that concerns T4, and the TT that we are talking about here largely rests in T3.
    However, I cannot understand why feelings must be abandoned in order to focus solely on thinking. Why can’t they be considered together? People think and then act in accordance with how the people around them think and act… should a machine that passes T3 also be able to pass this? As in, also show that it can cooperate and follow the norms and values of the society around it? And could that be possible without including feelings?

    Replies
    1. It is not that feelings are abandoned by the TTest. It is that the only thing the TT can test is doing and doing-capacity. That's the "easy problem." Feeling is the "hard problem."

      The "easy" question is" "how and why can organisms do what they can do?"

      The "hard" question is: "how and why can (some) organisms (sometimes) feel what they feel?"

      Do you have any ideas about how the hard question could be answered?

      (The causal connection between feeling and doing -- "free will" -- is the crux of the hard problem.)

    2. Okay, just to make sure that I am on the right page:

      We are trying to understand the mechanisms behind cognition. The TT is the first step towards that by trying to reverse-engineer the causal mechanisms that allow humans to do what they do when they are thinking. This separates DOING and FEELING. In order to execute the TT, one must first assume that below the social and psychological differences that exist between individuals, there is an underlying universal mechanism for how thinking occurs. To tie this in with my post on 2a, the TT is passed when the candidate can do everything we can do (the I/O match up) and we can't ever tell that there is a difference in how it is doing it.

      Also thank you, I greatly appreciate the easy/hard distinction. Although the hard question is something that will have to rest in my head for a little while longer before I can even attempt to respond.

    3. Just one thing to bear in mind: The TT is about doing not just about moving. So it reflects thinking too, not just motor activity. The hard problem is about how and why organisms feel, rather than just do. Another way to put it is: why is thinking felt, rather than just "done" (internally)? The "other-minds problem" (how to determine whether, when and what an organism is thinking and feeling) could perhaps be solved by something like neuroimaging; but neither the hard problem nor the easy problem could be. It's the difference between predicting the weather and explaining it.

  13. “Disabilities and appearance are indeed irrelevant. But nonverbal performance capacities certainly are not. Indeed, our verbal abilities may well be grounded in our nonverbal abilities.”

    I think the point that Harnad makes is even more poignant if we consider the ramifications of sensorimotor capacity in a Learning Machine. When we are teaching children, actions are often demonstrated, which functions as a shortcut to explain certain procedures rather than describing them in words (it wouldn’t be very effective to describe how to perform an action in words to children anyway). But demonstrations can also be used to describe concepts, some of which are arguably indescribable in words in the first place. And if you’re a babbling baby who’s just learning language, sensorimotor demonstrations are arguably necessary to get language in the first place (i.e., the symbol grounding problem). This embodied idea of learning language also seems to be hardwired in our brain: in some experiments, there was brain activity in the language areas as well as the motor cortex when describing certain actions and objects.

    If sensorimotor experience is so essential in how we learn vocabulary, how can we expect a T2 machine to truly exhibit the same learning behaviour as a ‘natural intelligence’ would given that it is endemic to both behavioural and hardware-related aspects of learning?

    Replies
    1. You got it. More on this when we get to mirror neurons -- and later again with the evolution of language; the transition from nonverbal to verbal communication.

  14. “By the same token, we have no more or less reason to worry about the minds of anything else that behaves just like us -- so much so that we can't tell them apart from other human beings. Nor is it relevant what stuff they are made out of, since our successful mind-reading of other human beings has nothing to do with what stuff they are made out of either. It is based only on what they do.”

    This follows up nicely on my 2a skywriting. I was wondering what level of the TT we’d require to believe it has achieved its purpose of reverse-engineering cognition (and accordingly, teaching us how we can do what we do). I think most of us can agree that T2 isn’t getting us there quite yet due to the lack of sensorimotor aspects; there’s so much more to us than verbal communication. We’ve assumed that to pass T3 we need a robot that can do absolutely everything we can do. I think many would be satisfied with a T3-passing robot as a replica of cognition. As a neuroscience major in the cognitive stream, I’ve been taught to emphasize the role our brain circuitry plays in different aspects of cognition. Today, we had a guest lecturer talk to us about how long-term memories are maintained, according to him, in the increased synaptic strength between neurons due to a protein kinase blocking the endocytosis of certain receptors, which was deemed crucial for remembering.

    If I were to meet a T3 robot, I’d be itching to know whether these same processes are happening, and part of me would want to only label it as successful if they were. But the reality of the situation (at least to me right now) is that as long as the robot can do everything we do, we’ve managed to get it to cognize, and if that robot happens to be made out of wires and not brains, that shouldn’t change our minds (pun not intended). So maybe T4 is ideal for people like me who desperately want to see the brain replicated, more out of curiosity than anything, but it is not necessary for the sake of figuring out cognition.

    Replies
    1. T4 would be important for clinical understanding (and curiosity), but is T3 enough so that you wouldn't kick Ting?

  15. The Annotation Game really cleared up a lot of my questions, and I think that I better understand its mission of “reverse-engineering” cognitive capacities.

    Building on what I wrote in my last post on non-human intelligence and unintelligent human behaviour, I want to dig deeper into the role that various cognitive biases have in how we think what we think. Cognitive biases are, as I said previously, systematic errors in the thinking of normal people, intrinsic to the machinery of cognition, and therefore processes that our Turing machines will have to integrate in order to pass the test. They will, in other words, have to learn to shortcut their more rational procedures to let human biases come forward. My question is whether T2 and T3 machines would be equally equipped to reproduce the same cognitive biases.
    My concern is that some of these biases may be sensorimotor-dependent. Say, for example, you had T2 and T3 guess the name of a flower with six petals, and you primed both machines by putting them in a pink room that smelled like roses. Would T3 be more likely to mistakenly take an “intuitive” guess at a rose (which has over twenty petals) than T2? In other words, could we fool a T2-type machine by exposing it to questions that would usually stimulate cognitive biases, in a way that we could not fool a T3-type machine that has learned priming through its sensorimotor apparatus? This is obviously a very porous hypothesis and experiment, and it would need a more rigorous definition of what it means to be “fooled in a human way”. Nevertheless, there may be some interesting hints at how we think what we think in studying the co-evolution of cognitive biases and the sensorimotor apparatus.

  16. This paper was really helpful in understanding Turing's paper and clarified a few points I hadn't been sure about.

    I was, however, very surprised by the last paragraph, mainly because I hadn't taken Turing's refutation regarding extrasensory perception seriously (since I hadn't realized Turing was taking extrasensory perception seriously). I'd kind of assumed he was addressing those concerns in a more sarcastic manner, and I was very disappointed to find out he had been serious about doing the experiment in a telekinesis-proof room.

    I was confused by T5 in that I did not understand the purpose of that level of similarity. I can, to an extent, see how T4 could be useful, be it for purposes of sheer curiosity or maybe helping clarify the boundary between the cognitive and the vegetative, but I don't understand what can be revealed by essentially making a synthetic human clone if we've already achieved T4.

    Replies
    1. Sir Isaac Newton believed in Alchemy...

      T5 is irrelevant. It's more stuff to know (for clinical reasons or curiosity), but not crucial for reverse-engineering cognition.

  17. “This does not, of course, imply that we are not machines, but only that the Turing Test is about finding out what kind of machine we are, by designing a machine that can generate our performance capacity, but by causal/functional means that we understand, because we designed them.”

    In my last skywriting, I thought that the Turing Test would be strengthened by the condition that we did not understand how the machine worked. This came from a fundamental misunderstanding that the Turing Test was simply a test for artificial intelligence to see if it possesses the ability to act completely indistinguishable from a human. However, after reading Harnad’s paper, I understand that this would not be a desirable quality. The point of the Turing Test is not merely a measure of whether or not a machine is cognisant, as I had previously thought, but rather a way to gain more information on what type of machine we are.

    That being said, I still believe that if we were to create a machine that was indistinguishable from a human, we must regard that being as “conscious” even if we did not have a full causal understanding of its function. This may not give us any insight into our own consciousness, but perhaps it would help to define what is possible in cognitive science. We do not have full causal understanding of ourselves, yet “we have no more or less reason to worry about the minds of anything else that behaves just like us”. If this is true, there is no reason why we should worry about a machine that behaves exactly like us having a mind or not.

    Replies
    1. I agree that whatever passes TT (e.g., any of us, and any device we build) cognizes -- but how are you imagining we could "create a machine that was (TT-)indistinguishable from a human" without understanding how we did it, so how it works?

      A hole-in-one the first time you hit with a golf-club, or an abstract masterpiece by spilling paint on a canvas, maybe, but designing a T2 or a T3 while sleep-walking?

  18. In response to Turing specifying the witnesses in his test can only communicate via text, you argued:

    “It does have the unintended further effect of ruling out all direct testing of performance capacities other than verbal ones…we can all do a lot more than just email”

    Your argument here is relevant under the assumption that the other things we do are also part of thought, but I don’t think that is necessarily true.

    Later, when describing your T series, this discrepancy is what defines the difference between T2 and T3, and you argue that Turing “should have” intended the test to be at the level of T3, when he's only at T2. But I think you arrive at this conclusion because you include these “non-verbal” performance capacities as part of “thinking”, while Turing purposefully excluded them because they did not belong in his definition; to him, I don’t think digging for truffles and star-gazing, in themselves, constitute thought.

    Earlier in the paper you argued that he mistakenly draws the line between physical appearance & structure vs. verbal & non-verbal capacities, but I think a more accurate interpretation of his distinction is things that are thinking vs. things that are not thinking.

    Replies
    1. Turing cannot know any more about what thinking is than we do (until he builds something that can pass his TT).

      His method is to try to reverse-engineer all the things a thinker can do. Yes, they can talk and text. But what makes everything else they can do with the things they can talk and text about -- like recognize them, name them, and describe them -- not thinking? Or done without thinking?

      And while we're at it, what would make anyone think that it was possible to build something that can talk and text (for a life time) with any of us, about all the things we can talk and text about -- without being able to recognize them, name them, or describe them?

      The hunch that testing T2 (texting) capacity might be enough is a reasonable hunch; but saying that (the untested) T3 capacities are not necessary to pass T2 (despite the obvious connection) is going a bit further isn't it?

  19. Turing's question isn't about thinking or consciousness but about performance capacity, and I think this is a really important distinction. Many of my doubts about the Turing Test came from the disbelief that a machine that produces intelligent responses is necessarily intelligent. However, after having read both of the papers this week, many things have been clarified for me. It's not about whether machines can be intelligent but whether we can replicate the performance capacity of humans using a non-human machine. Does all of the confusion in the popular imagination about Turing's beliefs just come from miscommunication and a lack of understanding? What does Turing actually say on the intelligence and consciousness aspects? Where does he stand on what intelligence is? Is it a misnomer to call AI artificial intelligence?

    "This is fine, for thinking (cognition, intelligence) cannot be defined in advance of knowing how thinking systems do it, and we don't yet know how."
    My question is, are we at the point where we think we can define intelligence now? Do we think that we have reverse engineered enough to have a clearer idea of this concept? (I know that there is no machine humans have created that can pass T3, but surely the knowledge we've gained up until this point tells us something).


    How could we ever confirm that a machine passes T3? How can we devise a test where we know whether a machine is indistinguishable from a human in ALL performance capacities involving sensorimotor abilities? Just as the abilities of a T3 machine subsume those of a T2, are there certain sensorimotor abilities that we feel encapsulate others, so we don't have to test every single scenario?

    Replies
    1. 1. Intelligence is as intelligence does (or can do). ("Easy Problem": reverse-engineer how and why organisms can do all the things they can do)

      2. Turing renounced solving the "Hard Problem" of consciousness (reverse-engineer how and why organisms feel rather than just do).

      3. All the TTs, T2-T4, can only test doing-capacities. So Turing says forget about trying: there's no way (because of the Other-Minds Problem, even if we set aside the Hard Problem).

      At what point are we in explaining (i.e., reverse-engineering, not "defining") intelligence? Not very far!

      Ting is already passing T3. If we ask her lifelong friends whether they have any reason to believe she is a zombie, they would say of course not. (So it would be possible to pass T3, if Ting were a robot.)

  20. Throughout the text, Prof. Harnad distinguishes T2 and T3 Turing test levels -- T2 being the lifelong pen pal computer that responds only by text, and T3 being a robot who can interact in the world identically to the way a human can. He argues that to truly pass the Turing Test, T3 must be achieved. At first, this argument didn't make sense to me. I imagined that if a T2 computer had a massive database and could converse by drawing on that database and following an algorithm, then this would be sufficient to pass the Turing test. In other words, I figured that if a T2 machine contains, for example, the propositions "apples are spherical" and "apples taste sweet" (as well as many more), then the machine should be able to sufficiently converse about apples without grounding them -- that is, without having to use sensors to recognize apples and identify them in the world. Harnad notes that a T2 computer would likely not be able to converse convincingly for an extended period of time since it can't ground the objects that its symbols refer to, but I was not sure why this is.

    However, things became clearer when I realized that even if--contrary to what Prof. Harnad argues--a T2 computer could converse convincingly, the Turing Test is not about tricking people; it's about reverse engineering a machine such that it can do the things we can do, thereby understanding how we do what we do (a part of the easy problem). Given this, it makes sense that to truly pass the Turing Test T3 is required, because to wholly understand how we as humans cognize, we must understand how stimuli are registered and how symbols are grounded, and this is only possible with a T3 robot.

    Replies
    1. To ("truly") pass TT, whether T2 or T3, requires only the capacity to do anything a human thinker can do, indistinguishably, for a lifetime. Recognizing and interacting with the things in the world that their texts are about would seem to be a necessary first step. Plenty of computational games (from Siri to GPT-3) could fool us for a while, but not for long; ditto for today's robots, once we got to know them. A Ting is still far off.

    2. “At first, this argument didn't make sense to me. I imagined that if a T2 computer had a massive database and could converse by drawing on that database and following an algorithm, then this would be sufficient to pass the Turing test.”

      I definitely had a similar initial reaction to yours, Alex. Professor Harnad seems to give two types of justification for requiring T3, suggesting that T2 on its own is either incomplete or impossible.

      In terms of the incomplete nature of T2, it does make sense that verbal capacities are only a fraction of cognition, and that it is somewhat arbitrary to single out this ability.

      In terms of whether it is possible for a T2 computer to pass the test, I am somewhat ambivalent. On the one hand, it does seem like a genuine limitation that the computer will always be receiving a second-hand account of the world. It seems true that the “real world is better used as its own model”- although I am not sure that this necessarily makes passing the TT impossible. Even if you questioned the computer on the details of sensorimotor experience, a computer could easily have access to every, or at least “countless”, ways that that experience has been described before. It may not be able to generate anything new on this topic - but most people probably couldn’t either.

      Overall, it is difficult to buy into the descriptions of the limits of the T2 computer when the computers we interact with every day do not have these limitations. I am puzzled by the suggestion that engineers would have to “second-guess” every possible outcome - surely manual coding was never going to pass the TT. I think when most of us think of a T2 robot, we imagine that it would have access to something like the internet, and be capable of machine learning, which does not seem to require embodiment. I also imagine a computer able to process the types of sensory information that can be encoded in an email, so not taste, touch or smell, but certainly image and sound “files”. Perhaps this inherently violates the definition of T2, but it feels strange to say that, because the T2 device could not see a picture or somehow wouldn't be aware of current events (things that Siri achieves easily), we then have to move to requiring foraging for truffles.

  21. Some questions about the TT:

    Is passing the Turing test a necessary and sufficient condition for acknowledging cognition in a machine? Is that what the idea behind the TT is?

    When we're talking about cognition as a capacity to do what we do, is this capacity the potential to do these things? I'm still caught up on this idea that we might not "pass" a machine if it was unable to do all of the things T3 involves, but we (hopefully) wouldn't deny cognition to a disabled person in the same situation. Is this capacity therefore the potential for all the T3 stuff? This also brings me back to my first question. If not everything about T3 is necessary to pass T3, then how is it that it is necessary for cognition?

    Replies
    1. The capacity to do what any of us can do (lifelong, potentially) indistinguishably from any of us is the best we can ask of any TTest. Successfully reverse-engineering that capacity would produce at least one solution to the easy problem.

      Would that guarantee that cogsci has explained cognition? At least as credibly as any other branch of science, which (unlike mathematics) cannot provide necessary or sufficient conditions, just high probability on the basis of the available evidence.

      But the hard problem (of explaining how and why organisms feel) is also part of cogsci; and reverse-engineering and Turing-Testing the solution to the easy problem does not solve that.

      Yes, capacity is potential-to-do (including potential to learn to do). TTests are tests of whether we have successfully reverse-engineered cognitive capacity. The idea is to reverse engineer the capacities before you try to explain what happens when they are disabled. Not to diagnose or treat disablement.

      What cogsci is trying to reverse-engineer is normal, generic human cognitive capacity, not genius or pathology.

  22. A question about class material: I didn't quite understand the link between the discussion of functional architecture/the virtual machine and the software/hardware distinction.

    Replies
    1. The hardware/software distinction is that the input/output performance capacity of an algorithm (software) is the same no matter what hardware executes the algorithm. (If I/O speed is not counted as part of the capacity.)

      The rest is just about the interpretation of the output. If Mac software is simulating a PC, that's virtual "hardware". It's really Mac hardware, but to a user, it looks and feels just like a PC. Pylyshyn thought the distinction between what was and wasn't cognitive was somehow like the level of the virtual machine (PC) rather than the real machine (Mac). Anything below the level of the virtual machine was not cognitive (not just the real hardware).

      But that's homuncular, because we are the cognizers, not the "users" of our cognition. And the only real distinction is the hardware/software distinction. The rest is just interpretation of what the software does (including in virtual reality simulations).
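
      A toy way to picture the hardware/software (and virtual-machine) point, just a sketch of my own and not anything from Pylyshyn: a few lines of Python that implement an imaginary two-instruction machine entirely in software. To a program written for that imaginary machine, the interpreter is its "hardware"; really it is just software, and its input/output behavior is the same no matter what physical hardware runs the Python.

      def run(program, value=0):
          # a tiny imaginary "machine" with two instructions, implemented entirely in software
          for op, arg in program:
              if op == "ADD":
                  value += arg
              elif op == "MUL":
                  value *= arg
          return value

      prog = [("ADD", 3), ("MUL", 4)]
      print(run(prog))  # -> 12: same output whether the Python runs on a Mac, a PC, or anything else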

      Delete
  23. From “The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence.” by Stevan Harnad

    Hi professor,

    In your annotation of Turing’s text, you restate a more accurate response to Lady Lovelace’s objection, namely:
    “The correct reply is that (i) all causal systems are describable by formal rules (this is the equivalent of the Church/Turing Thesis), including ourselves; (ii) we know from complexity theory as well as statistical mechanics that the fact that a system's performance is governed by rules does not mean we can predict everything it does; (iii) it is not clear that anyone or anything has "originated" anything new since the Big Bang.”

    I get (i), but I have difficulty understanding how (ii) is true and what (iii) means. Would you mind elaborating a bit on those points? Thank you in advance.

    But if I understand the gist of it: the answer to Lady Lovelace is that actually, a machine could do unpredicted things (“new” things) even if it is a causal system (therefore passing TT in that respect).
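
    The closest I can come to picturing (ii) on my own is a toy example (so I may well be missing the point): a system governed by one fixed, deterministic rule whose long-run behavior is still practically unpredictable, because tiny differences in the starting state blow up.

    def logistic(x, r=4.0):
        # one fixed, deterministic rule governs every step
        return r * x * (1 - x)

    x, y = 0.300000, 0.300001  # two almost identical starting states
    for _ in range(60):
        x, y = logistic(x), logistic(y)

    print(x, y)  # after 60 steps the two trajectories bear no resemblance to each other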

    ReplyDelete
  24. I think that T2 (i.e., that written conversation is sufficient) is true, although I would modify it slightly. T2 does not need to be totally indistinguishable from a human being, in the sense of having to fool anyone and everyone for a lifetime. We are not trying to recreate a human being, who can star-gaze or forage for food. Other animals can also star-gaze, forage for food, hunt for truffles, and even look at the moon — what they can't do is look at it and then tell us about it. What we want is evidence of cognition, although cognition may be dependent on sensorimotor action. What we want is basically a convincing display of human cognitive capacity, and not a convincing display of actually being a human or "fooling" someone into thinking it is a human (so the TT can be slightly modified to remove the cheeky aspect and allow the question to be 'whether you think what you conversed with has the same cognitive capacity you recognize yourself or other humans to have', or something like that).

    Analyzing a computer's linguistic production can show cognition, through conversation, based on its ability to respond appropriately to the real human. I believe language shows evidence of capacity for reflective thought, wondering, questioning — in short, evidence of a mental life. Moreover, it focuses on an apt and relevant feature of human cognition. We don't need irrelevant aspects like whether it actually walks around like we do. To illustrate, a person could converse with another person who is confined to a room and has no contact with the outside world. Of course that human doesn't know about contemporaneous events; similarly, the lifetime aspect should be dropped, since such a human is obviously not living a normal life like the rest of us.

    On the point about not being able to respond to photos — I take T2 to be strictly language-to-language communication. Only language inputs from the real human are allowed. Turing never called for photos either. The TT stands as a test, although I am doubtful that a being who never took a walk in the real world and never interacted with other humans could pass it (although one who did so in a virtual world might, as I mention in the 3a skywriting).

    ReplyDelete
    Replies
    1. I have some contentions about T3/T4:

      1) It seems impossible to me that any robot would pass T3. I don't know how to argue this, other than... to just look at a human. For 10 minutes. I think any objection is just denying the reality of the situation.

      2) But why is there such emphasis again on "fooling" anyway, which I thought Prof Harnad disputed? Isn't what we want performance capacity, and specifically the performance capacity of that which is quintessentially human cognition? Also, if we're going to say the candidate needs to do all that a thinker does, such as walk or forage, why isn't it mentioned anywhere that the candidate should feel, as we so obviously feel?

      3) If the machine walks and has "sensory-motor capacity" not for fooling but as a necessary component of instantiating human cognitive capacity, I would still dispute that it has intentionality just because it has some analog components, e.g., a camera, for the reason outlined in Searle's CRA paper. Saying it has "sensory-motor capacities" may sound as though it really "senses" or "feels", but it doesn't, although it might act, to someone briefly watching, as if it does.

      4) "T4: Total indistinguishability in external performance capacity as well as in internal structure/function." a) I would argue, wouldn't total indistinguishability in 'external' performance capacity necessarily mean total indistinguishability in 'internal' structure/function? Maybe T3 couldn't happen without T4 or there is a direct correlation?; b) What is meant by total indistinguishability in internal structure/function? On what level of analysis and how? At the level of neurochemicals, in which all their movements are accounted for as it travels what are axons? The "wiring"? All the blood flow happening in the brain? The structure of the corti? The structure of the red blood cells white blood cells, and fats? Oxygen? Why stop at the brain? What about the central nervous system? The rest of the body? Isn't the gut-brain axis important? How are you going to translate information from an analog camera so it fits with the simulation? Can you expect the structure to hold shape in the real world and have the same internal structure and function? Again, I'm confused about the word 'internal' — if meaning everything inside the skin; can you really model that internal structure, down to protons and electrons if not more? Where to stop? And then I guess I'm confused how this model would interact with the analog components and how they would fit together, and if the addition of computational processes for how they fit together would destroy the indistiguishability of structure to the human system.

      5) Sorry, the above point is a little confused; it could just be that I can't see how it could be instantiated because it is so complex. *Maybe* point 3 is wrong, but even if so, I think the T2 test would suffice as a marker of cognition. And maybe T3/T4 doesn't need indistinguishability of function and structure. I think electrical/energetic processes must occur for feeling/consciousness (not just the 0-and-1 voltage gating of digital computers), but perhaps something like a formal computational system can partly mediate/coordinate how the electrical components fire.

      And T5 is not really a "test"; it's basically asking whether a complete duplicate of a human would have human capacities, which Harnad perhaps mentions in the text.

      Lastly — "Surely the Turing Test is not a license for saying that we are explaining thinking better and better as our candidates fool more and more people longer and longer." I don't see why it isn't — doesn't it mean we're approximating it better? I would also add, if it passes for increasingly more intelligent people.

      Delete
  25. "An interesting variant on the idea of a digital computer is a 'digital computer with a random element'... Sometimes such a machine is described as having free will (though I would not use this phrase myself)."

    I believe this specific Turing passage is still very relevant considering the advancing tech and groundbreaking AI that continue to emerge in our lives. From my understanding, it seems that the most "intelligent" computers and machines are those that have a designated function or are module-specific. Instances that come to mind range from IBM's Deep Blue (whose development began in 1985) to the self-driving capabilities Tesla's Autopilot was rolling out as of 2019. Both are limited to the mastery of chess and driving, respectively; experts in only one capacity. However, is what Turing is alluding to when he discusses "a random element" in digital computers something like T3? AI research capable of building computers with expertise in diverse and varying capacities must be the research of the 21st century (perhaps the next step after we achieve a firmer grasp of cognitive modelling), all in the name of total indistinguishability in sensorimotor performance capacity. I just wonder whether, if successful, AI sentience would materialize in the form of a kinder C-3PO or in the form of the evil Skynet (excuse the corny sci-fi references).

    ReplyDelete
    Replies
    1. Turing just means probability and probabilistic models, rather than only exact algorithms. T3 is our full sensorimotor (robotic) capacity, rather than just our verbal capacity (T2). The models so far (whether computational or not) are just toys. They can be done in so many different ways that the way they work may have nothing to do with the way organisms do what they can do. That's why Turing insists on complete indistinguishability (for a lifetime) for the human TT.
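
      (A minimal illustration of the "random element", nothing more: the toy program below still follows an explicit algorithm, but one of its steps is a probabilistic choice rather than an exact rule. The questions and canned replies are made up for the example.)

      import random

      def reply(question):
          # one exact, deterministic rule...
          if "chess" in question.lower():
              return "I would rather talk about poetry."
          # ...and one probabilistic step: the "random element"
          return random.choice(["Yes.", "No.", "Could you rephrase that?"])

      print(reply("Do you play chess?"))
      print(reply("Is it raining?"))  # varies from run to run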

      Delete
  26. In reading Stevan Harnad’s response to Alan Turing’s seminal paper on “Computing Machinery and Intelligence”, I started wondering what would happen if a T3 Turing machine which passed the test were to act not as A or B, but as interrogator. Assume a test with a T3 machine acting as A, a person acting as B, and another T3 machine acting as interrogator. Would our ideal T3 machine be able to distinguish another T3 machine from a person as readily as a human asking the questions would? Further, should the processes by which it made its distinctions be similar to those of a human (when I say “should”, I mean not only in order to pass the test, but to be said to have actually cognized)?
    In theory, running this variation on the Turing test would examine, on some level, the robot’s ability to meet not only the empirical criterion but also the intuitive criterion of thought – a level of ‘mind reading’ which is emphasized and specialized in human cognition, as well as in that of other highly collaborative species. Mind reading, while considered a species-wide trait, is also dependent to a degree on cultural norms – how certain symbols, pieces of language, etc., register is not only context-dependent but culture-dependent. Is it in the interest of modeling cognition to also explore developing subjective cultural competence, and is it even possible for us to ‘teach’ this to machines, given that so much of recognizing familiars and group members, and of mimicking others, occurs below the level of observable cognition?

    ReplyDelete
  27. In Harnad’s paper, he asks “whether simulations alone can give the T2 candidate the capacity to verbalize and converse about the real world indistinguishably from a T3 candidate with autonomous sensorimotor experience in the real world.” I am not sure whether what our current T0 AIs are doing is considered “sensorimotor.” The Google AI that is able to generate millions of unique pictures of cats, or InceptionV3, which recognizes items in a photo, does not employ any physical image-capturing system. InceptionV3, for example, uses software which scans through the pixels of a photo and looks for familiar patterns. If enough of these patterns shout out “cat” to the program, it will label the photo as such. If we built a T2 robot which also perfectly mastered this (feature-detecting and photo-generating) ability, would it be considered a T2- or a T3-passing robot? If it is T2, then I think we might be able to create an identical simulated replica of the world, give it to the T2 candidate, and not have any issues conversing with it about real-world events, photos, or anything else. If we must accept that this is a sensorimotor capacity, then it probably falls under T3, and I am uncertain whether T2 really is strong enough to answer our problems.
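
    (To be concrete about “scans through the pixels of a photo and looks for familiar patterns”, here is roughly the kind of call I have in mind, using the pretrained InceptionV3 weights that ship with Keras; “cat.jpg” is just a hypothetical local file, and this is only a sketch, not how Google’s own systems actually run.)

    import numpy as np
    from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
    from tensorflow.keras.preprocessing import image

    model = InceptionV3(weights="imagenet")                   # pretrained pattern/feature detectors
    img = image.load_img("cat.jpg", target_size=(299, 299))   # a stored file, no camera or body involved
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    print(decode_predictions(preds, top=3)[0])                 # top labels (e.g. 'tabby') with confidence scores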

    ReplyDelete
