
Monday, September 2, 2019

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59(236): 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"




1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer 
2. Two-part video about his life: The Strange Life of Alan Turing: BBC Horizon Documentary
3. Le modèle Turing (video, in French)

70 comments:

  1. Turing mentions the idea of a "Universal State Machine", wherein with enough information, we could produce the states of the Universe. One contention that comes up is the continuity one (objection 7), which holds that even non-continuous state machines could play the imitation game - they just have to answer fast enough. I make no comment here on whether the criterion of continuity is a good one for deciding if something is a someone, or thinking at the very least. However, I do ponder here whether Turing Machines would fit the bill if that were the case.

    I wonder though, could the argument be made that continuity is essential to thinking? We often conceptualize the way we think as a step-by-step process, but I am inclined to say that that is an oversimplification. If you were to draw a neural map of every neuron, map every firing and connection, even if every chemical change is causal, it would at the very least be multi-level and continuous. With so much change and chance, one would be inclined to believe that, just as a second can be divided without limit, our mental map can be said to have an infinite number of different possible states. The Turing Machine, however, only has a finite number of discrete states by definition.

    On the other hand, if we do assume continuity is quintessential, could it be said that the Turing Machine is indeed continuous? While it does have discrete states, those states are in flux, since the Machine is at all times doing one thing, or another, or in between somewhere.

    Replies
    1. The Turing Test is not a game, nor is it imitation.

      The TT is the attempt to answer a scientific (or reverse-bioengineering) question: How are organisms able to do all the things they can do? How can a machine (because organisms are all machines, i.e., causal mechanisms) produce that capacity?

      The Turing Test is the test of whether we have figured out how the machine works well enough to be able to build one and show that it can do it all, indistinguishably from (and to) the natural machines: us.

      Neither continuity nor time are relevant. The machine just has to be able to do it, somehow, for a lifetime, just as organisms do.

      What is not evident, though, is whether computation alone (discrete symbol manipulation, running on a computer) can pass TT. T2 is only verbal (symbols in, symbols out). Can computation alone pass that? And then what about T3: robotic (i.e., sensorimotor) interactions with the real things in the real world? That necessarily requires at least some dynamic (hence noncomputational) activity (e.g., chemical), whether or not it is continuous.

    2. Then is the TT only practically concerned with input/output? And once enough I/O "match up" with how humans think, then we go back, open up the machine, and see how it was done? This kind of reminds me of the concept of behavioural equivalence that we spoke about last week!
      I want to echo JG's question though, because it makes me wonder whether or not the TT then is... missing the point? Or maybe I am gravely missing the point.
      As in, Turing asks "can machines think?" but maybe a different way of phrasing it can be "can machines think like how we think we think?" (repetitions intended, cause we don't really know how we actually think). And is it important that a machine that gets an A+ on the TT thinks in the "right way"? I had understood it as yes, it matters that a machine also passes T4 in order to be seriously considered.

    3. wait, a much more succinct way of asking this would be whether it matters if we use strong vs. weak equivalence. I think it makes a big difference depending on which one we choose to use, but it's also a decision that has to be made.

    4. Esther, "I/O match up" means being able to do everything an organism is able to do! A physicist's "I/O match up" is everything any form of matter is able to do!

      "Opening up the machine" would be T4. Doing that with nonhuman victims has not succeeded in reverse-engineering their cognitive capacities. How would you "open up" a human? (Neural imaging is a bit better than introspection, but it does not tell you how or why the brain can do all it can do either, as we will discuss in Week 4.)

      How can trying to reverse-engineer how organisms can do what they can do, and then trying to test whether we have successfully figured it out, be "missing the point"? That there might be more than one way to pass T3, and we want to know the right one? How do we figure that out? Go on to T4? Would you be willing to kick Ting until then? (And is the possibility that there is more than one way to reverse-engineer cognition really a problem when we have not yet found even one that comes close?)

      The TT is pass/fail. If it can do everything we can do, and we can't tell it apart from one of us, for a lifetime, it passes. If it can't, it fails.

      "Strong Equivalence" for T3 would be T4.

  2. Randomly found this article published last week called "A robot wrote this entire article, are you scared yet human?" where the robot is trying to convince us not to fear AI and felt like it belonged here: https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

    Replies
    1. GPT-3 just goes to show you how far unsupervised correlational analysis of huge bodies of text data can get when it then generates text based on the patterns in its enormous data-base. There's no meaning in it; it's just clichés distilled out of that data-base. It's also extremely superficial, and not very coherent either.

      But GPT-3 is not a robot! It has no body, no sensors and effectors, no contact or interaction with the world other than swallowing an endless stream of meaningless symbols that had been structured by the cognizers who wrote them. Turing's Test is not an "imitation game" -- but GPT-3 really is!
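
      To make the point concrete, here is a toy sketch in Python of text generation driven purely by the statistical patterns in a corpus. It is nothing like GPT-3's actual architecture (GPT-3 is a huge neural network trained on billions of words); the tiny corpus and the function names are invented for illustration only.

      import random
      from collections import defaultdict

      def train_bigrams(corpus):
          """Record, for each word, the words that follow it in the corpus."""
          successors = defaultdict(list)
          words = corpus.split()
          for current, nxt in zip(words, words[1:]):
              successors[current].append(nxt)
          return successors

      def generate(successors, start, length=12):
          """Emit text by repeatedly sampling a word that has followed the previous word."""
          word, output = start, [start]
          for _ in range(length):
              if word not in successors:
                  break
              word = random.choice(successors[word])
              output.append(word)
          return " ".join(output)

      corpus = ("the machine can play the imitation game the interrogator can ask "
                "the machine a question the machine can answer the question")
      model = train_bigrams(corpus)
      print(generate(model, "the"))   # fluent-looking strings of words; no meaning anywhere

      Scaled up by many orders of magnitude, and with a far more powerful statistical model, that is the sense in which such a system only distills and recombines the patterns in its database.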

    2. Yeah I agree, it sounds a lot like "this is what a human would want to hear" based on all the info it has access to, but it's cool that this was published right when we're about to start talking about Turing and the imitation game.

  3. Unrelated but posting here in case others have this question: is the midterm date already set (or at least a rough estimate)?

    Replies
    1. We'll pick the time for the mid-term tomorrow, in class. Please remember to bring it up!

    2. We voted, and the vote was to distribute the take-home exam in week 6 (Oct 27) and return it in week 7 (Nov 3). (Unless you counted Oct 27 as week 5? This week, on Turing, is Week 2; the Intro overview was Week 0.) If there is any uncertainty, bring it up again next week.

    3. I thought we decided on October 6th and not week 6?

    4. Bring it up next week and we'll sort it out. Oct 6 is week 4, which is kind of early...

  4. “It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc.”
    If the Turing Test is designed to see if computation alone can explain all of cognition, then does this statement from Turing not contradict his own test? Sense organs like sight and smell wouldn’t be purely computational: they would have to interact at least chemically and visually with the world and require some amount of interpretation. Another consideration would be whether computers would be able to make subjective and qualitative choices that have no correct answer but require sensory input, like “which cake tastes the best and why?” (Although I admit this question probably isn’t relevant since the robot could just lie or choose an answer at random.)

    Replies
    1. I understand your confusion; however, I think it is imaginable that sensing the environment can be accomplished with pure computation (as done by driverless vehicles today). Furthermore, Turing does suggest a few computationally understood teaching strategies for a child machine, such as punishment-reward learning and other "unemotional" channels of communication. On the whole, I think Turing's idea of starting with a child machine brain and teaching it is a brilliant approach because it theoretically allows the machine to develop the programming complexity of an adult machine brain. At least that is how I see this approach working in some part. Turing even states that the teacher of the machine would be ignorant of what happens inside the machine while it is learning.

      After reading your comment again, I think you touch on something interesting. When it comes to tasting foods, does this really contribute to Turing's goal of understanding cognition? If you can create a machine that has opinions regarding non-sensory topics, would we need to confirm it can have opinions about its senses? Maybe if we want the machine to pass T3, but for the standard Turing Test I do not think this would be an issue.

    2. Aylish, good point. And that's why "Stevan Says" that Turing was not a computationalist. But passing the TT (whether T2 or T3) requires the capacity to learn; so a TT child would just have to learn more and longer! (Swallowing food -- as we discussed in class about Ting -- is T3, but tasting it is feeling, not doing, so no TT (not T2 nor T3 nor T4) can demonstrate it.) On the other hand, T2 can already demonstrate the capacity to talk about the taste of food.

      Matt, input other than verbal (symbolic) is no longer just T2. I'm not sure what you mean by "opinions": For T2, there's just what it can say and respond to saying. For T3 there's what it can say as well as what it can do. An "opinion" can be expressed in words and deeds. But it also feels like something to have an "opinion" -- in other words, to feel what it feels like to believe that something is true.

    3. “Since sense organs like sight and smell wouldn’t be purely computational. They would have to interact at least chemically and visually with the world and require some amount of interpretation.”

      I’m having trouble grasping the relevance of sensorimotor input. In the case of computers, my understanding is that they are “dynamic” systems, in the sense that they obey the laws of physics and so forth, but that they function computationally. This is essentially the hardware/software distinction.

      I understand the brain to be of a similar nature. There is no doubting that the brain itself is physical and subject to dynamic processes, such as chemical and electrical activity. But surely, in the same way as the computer, the presence of dynamic processes does not mean that it cannot function computationally. I’m confused about how sensory input complicates this. We already know a lot about how our sensory organs take input such as sight and smell and transform them into a format that can be processed by the brain (as chemical or electrical currents). Why is it impossible that these inputs are simply translated and processed computationally?

  5. "A variant of Lady Lovelace's objection states that a machine can 'never do anything really new.' This may be parried for a moment with the saw, 'There is nothing new under the sun.' Who can be certain that 'original work' that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles."


    I think Lovelace’s objection can be further argued against if we consider what Lovelace is implying: that humans (or organisms in general) can go beyond what their hardware says they can do, and robots cannot. This belief, to me, seems very arbitrary. Our brains are composed of an astronomically large number of neurons, but this quantity is not infinite. This suggests that there has to be some sort of limit to the brain and therefore to what it supports, the mind. However, since the full nature of the mind and brain is unknown, there’s no way of definitively knowing what these limits are, yet. Apart from the theoretical bounds of brain power, there are plenty of examples of feats that we as individuals can’t do and will never be able to do, e.g. visualising a colour outside the visible spectrum. Any changes to these known bounds in our descendants would not be because they defied their hardware, but because the hardware changed by virtue of evolution, just like the next ‘generation’ of machines would be able to do more than their predecessors because they were upgraded.

    Replies
    1. You're right, but there's no need to go into the stratosphere to demonstrate it, nor even to the full T2. Any computer program that can learn and recombine what it has learned can do things that are new -- including things that the one who designed the code did not know or expect. That's part of the power of computation, and recombination. In fact recombinatory DNA (which is not Turing computation) can do it too, as you point out.

  6. 2 questions that came up in reading Turing's paper were:

    1. Does the infinite amount of paper with which the machine is provided correspond to an infinite amount of paper on which rules are written and calculations can be performed by the human computer?
    2. Turing's thesis refers to "states" of the machine. I understand that he was not so much concerned with computation as an analogy for the brain as much as proving a machine could fool a human into believing it is also a human. However, can "states" be interpreted as an analogy for the physical state the mind is at any given point in time?

    In objection (8), The Argument from Informality of Behavior, Turing states "For suppose we could be sure of finding such laws if they existed. Then given a discrete-state machine it should certainly be possible to discover by observation sufficient about it to predict its future behavior, and this within a reasonable time, say a thousand years."

    We are indeed governed by laws of behavior. In his refutation, is he arguing that in the same way as it is impossible to enumerate every single law of behavior that governs humans, it is unrealistic to expect to know all the laws governing a digital machine, and that the machine is then also unpredictable? Would we not already be able to predict its responses given that we have programmed the machine?

    Moreover, are laws of conduct then just laws of behavior, or something we would teach to the machine?

    Replies
    1. 1. The infinite tape is the potential input (data). The hardware that executes the computations is a finite-state machine, but with the Turing machine's few simple capacities (read, write, advance, erase, halt) the data can reconfigure the finite states to simulate any finite-state machine, or just about anything (Strong C/T Thesis). (A toy sketch of such a machine appears at the end of this reply.)

      2. Turing was not interested in fooling anybody. He really wanted to find out how to produce the capacity to do anything that Ting can do, for a lifetime, indistinguishably from what anyone else can do. (The states Turing was talking about were the finite states of a Turing Machine. And "Stevan Says" that Turing was neither a computationalist, nor any more interested in T4 than he had to be -- in order to pass T3!)

      The issue of predictability is a complexity-theoretic one: There are things that are predictable in principle from the available data, but testing the predictions is "NP-complete", which means it would take forever to do the calculations.
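
      As promised in point 1 above, here is a minimal sketch in Python of a machine with just those few capacities (read, write, move, halt). The rule-table format and tape encoding are simplifications invented for illustration, not Turing's own formalism.

      def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
          """Repeatedly read the symbol under the head, then (per the rule table)
          write a symbol, move left or right, and change state, until 'halt'."""
          cells = dict(enumerate(tape))            # sparse tape: position -> symbol
          head = 0
          for _ in range(max_steps):
              if state == "halt":
                  break
              symbol = cells.get(head, blank)
              write, move, state = rules[(state, symbol)]
              cells[head] = write
              head += 1 if move == "R" else -1
          return "".join(cells[i] for i in sorted(cells))

      # Toy rule table: flip every bit on the tape, then halt at the first blank.
      rules = {
          ("start", "0"): ("1", "R", "start"),
          ("start", "1"): ("0", "R", "start"),
          ("start", "_"): ("_", "R", "halt"),
      }
      print(run_turing_machine(rules, "10110"))    # -> "01001_"

      Changing nothing but the rule table (the data) makes the same fixed mechanism behave like a different machine, which is the sense in which the input can reconfigure the finite states.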

  7. "If we substitute "laws of behaviour which regulate his life" for "laws of conduct by which he regulates his life" in the argument quoted the undistributed middle is no longer insuperable. For we believe that it is not only true that being regulated by laws of behaviour implies being some sort of machine (though not necessarily a discrete-state machine), but that conversely being such a machine implies being regulated by such laws. However, we cannot so easily convince ourselves of the absence of complete laws of behaviour as of complete rules of conduct” (Contrary Views on the Main Question, #8)

    I need help understanding the Argument from Informality of Behavior. I became confused once Turing discusses laws of conduct versus laws of behavior. I understand the difference between them but lose him when he brings up the undistributed middle fallacy. Is he arguing we cannot rule out a machine mimicking human behavior because we are yet to exhaust our search for a complex set of laws of conduct that we can program the machine with? Furthermore, why do we believe that being regulated by laws of behavior implies being some sort of machine? Ultimately, I struggle to see the real reason for distinguishing the laws of behavior from the laws of conduct, especially since it is imaginable to program laws of behavior with laws of conduct. Clearly, I am missing something here.

    Replies
    1. I have read this argument (8) a number of times and ended up returning to my notes from PHIL 210 to try to find the logical fault that he references. I am not entirely clear on the undistributed middle; however, I do agree with Turing that this is a flawed argument. Turing tells us the stance of argument (8) is that "if each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines." The fallacy in this argument is that there could be other ways to be a machine that do not rely on someone following these conduct rules. His later point is that people seem pretty sure that these conduct rules do not exist, but do not seem as sure that behavioural laws do not exist. I think that he argues we have not studied humans enough to say that we aren't governed by some sort of behavioural law system, in which case perhaps we are machine-like.

      Your question is really interesting about why we believe being governed by this law system gives us qualities of machines. Perhaps it is connected to our desire to feel like we have free will and are agents in the world; I would like to believe that the decisions and actions I make daily are not due to some if...then... statements in my brain. I think that this distinction between laws of behaviour and laws of conduct is essentially so Turing can underline there might be some predictability in human responses to a stimulus (I would even think something like the knee-tap reflex would fall under this as a law of behaviour), yet other human responses, those which fall under conduct, are not predictable (like the non-uniformity of response by people when they see a red and green traffic light).

    2. Matt, I agree that Turing is being a bit wishy-washy here. A "machine" is just any causal "system," whether a proton, a plant, a pangolin, a person, or a planet. It is governed by the physical laws of cause/effect.

      Whether the machine is a discrete finite-state (digital) one (like a Turing Machine or a computer) or a dynamical (analog) one (like a furnace) is another matter -- but according to the Strong C/T Thesis, the computer can simulate just about any dynamical system.

      (A digital computer is a dynamical system too, but the program it is executing -- its software -- is independent of its hardware: computation (symbol-manipulation) is independent of the dynamical system that is executing the symbol-manipulations, except that there has to be hardware to execute the software.)

      The behaviour/conduct distinction is pretty empty, but the competence/performance distinction (which Chomsky uses too) is not. Competence is the capability to do what the system can do; what the system actually does depends on what data (input) it encounters. In principle the outcome is predictable, but it's usually an NP-complete problem to predict it. (Probably nestled in there with the intuitions is the question of "free will," which is just a feeling, and probably an illusion. It is also the epicenter of the "hard problem," which is the problem of giving a causal explanation of feeling.)

      The "easy problem" just calls for coming up with the causal mechanism that produces the capability.

      Stephanie, you are mostly right. Causality and predictability are not the same thing, and the feeling of free will is yet another matter.

  8. "Our most detailed information of Babbage's Analytical Engine comes from a memoir by Lady Lovelace (1842). In it she states, "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform" (her italics)." (quote from the assigned text, (6) Lady Lovelace's Objection)

    I would like to add on to the rebuttal Turing makes and the additional points that my peers have made to this sixth objection by involving language, specifically Noam Chomsky's idea of a generative or universal grammar. One of the most fascinating things to me about language is how it is possible for me to generate a totally novel sentence simply because I have these basic syntactical or grammar tools and so many words at my disposal. Theoretically I could email someone, as if participating in the proposed Turing Test, a totally novel sentence that I had never heard before in my life and furthermore nobody in history had ever said or written, although this last condition may be hard to actually check in real life. As mentioned previously, this ability comes from simply knowing the basic grammar rules and having a wide range of possible meaningful words to use as building blocks. I do not see why a Turing Machine could not, as Lovelace says, “originate” a totally novel sentence as well. Would the Turing Machine know in the same way we know what that sentence means? No – but this is not a problem in actually generating the sentence, because the meanings of the words themselves are not necessary to apply the syntax and create this totally new sentence. For instance, the first time I read this quote from our assigned text: “It is in fact the solipsist point of view,” I had no idea what the word solipsist meant, yet I could confidently tell that the sentence was grammatically correct. In this way, I do not see the problem in computation generating something utterly novel, thus challenging the position of Lovelace. (A toy sketch of this kind of rule-based generation follows below.)

    Here is a source I consulted: "Tool Module: Chomsky’s Universal Grammar. (n.d.). Retrieved September 17, 2020, from https://thebrain.mcgill.ca/flash/capsules/outil_rouge06.html" which is useful for a brief background understanding on Chomsky's ideas of universal grammar.
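
    As a toy illustration of that point (only a toy: a hand-made context-free grammar over a few invented words, nothing like Chomsky's account of natural language), a handful of rewrite rules is enough to generate sentences that are syntactically well-formed and almost certainly never written before, with no meaning involved anywhere in the process:

    import random

    # A made-up miniature grammar: a symbol on the left can be rewritten as any of
    # the alternatives on the right; lowercase strings are actual words (terminals).
    grammar = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "Adj": [["solipsist"], ["digital"], ["novel"]],
        "N":   [["machine"], ["interrogator"], ["sonnet"]],
        "V":   [["imitates"], ["composes"], ["interrogates"]],
    }

    def expand(symbol):
        """Recursively rewrite a symbol until only words remain."""
        if symbol not in grammar:                  # a word: nothing left to rewrite
            return [symbol]
        production = random.choice(grammar[symbol])
        return [word for part in production for word in expand(part)]

    print(" ".join(expand("S")))   # e.g. "a digital sonnet interrogates the machine"

    Nothing in the procedure knows what the words mean; the novelty comes entirely from recombination under the rules.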

    Replies
    1. Spot-on on every point.

      Ada Lovelace (daughter of Lord Byron and co-inventor of one of the precursors of the computer, with Babbage) was brilliant, but simply wrong on this one. She either underestimated the power of computation or overestimated what it means to do something new.

      Turing slipped in his allusion to "solipsism" (which is the belief that I am the only thing that exists). All Turing meant was philosophical scepticism -- the same thing Descartes meant in his prelude to the Cogito, in which he described all the things that we could not be certain about, including the existence of the "outside" world. In this case, what Turing meant was "just" the "other-minds problem," which is that there is no way you can know for sure that anyone other than yourself feels.

      Of course the way out of this is to remember that science (and knowledge) does not need (and cannot have) certainty but only high probability on the available evidence. We have that with "mind-reading" other people and other mammals and birds; also with fish once you know them, and even with invertebrates such as octopus, bees, ants and ["Stevan Says"] eventually probably all animals with nervous systems, right down to the lowly oyster.

      But when it comes to robots, all we have to go by is the Turing Test: T2, T3 or T4...

  9. "An interesting variant on the idea of a digital computer is a "digital computer with a random element." These have instructions involving the throwing of a die or some equivalent electronic process; one such instruction might for instance be, "Throw the die and put the resulting number into store 1000." Sometimes such a machine is described as having free will (though I would not use this phrase myself). It is not normally possible to determine from observing a machine whether it has a random element, for a similar effect can be produced by such devices as making the choices depend on the digits of the decimal for π."

    There was a similar question raised on the skywriting forum last week about how cognition and randomness are related, and whether generating a random number was a cognitive process at all. While Turing argues that it is not possible to determine from observing a machine whether it has a random element, I argue that it depends on the intended purpose of the machine. If it’s a machine created to pass the Turing Test, then if it passes for its ability to generate randomness at T2, I would argue that it’s good enough. After all, the Turing Test was created to “trick” people into thinking that it was a human with full cognitive ability. If it were a machine created to pass at least the T4 level of the Turing Test, I would say that it does not matter whether its generation of randomness is superior to a human's ability. I can’t remember who said it, but a scientist claimed that there is no such thing as a general purpose machine. I still stand by that statement, that it’s impossible to create such a machine (that is, one that is good at everything, including generating randomness).
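
    Turing's remark about the random element can be made concrete with a toy sketch in Python (the function names and digit counts are invented for illustration): one machine throws a die at each step, the other just follows the digits of pi, and an interrogator who sees only the outputs has nothing to go on.

    import random

    PI_DIGITS = "1415926535897932384626433832795028841971"   # digits of pi after the decimal point

    def random_element_machine(n):
        """Each 'instruction' throws a die: pick a digit 0-9 genuinely at random."""
        return [random.randrange(10) for _ in range(n)]

    def deterministic_machine(n):
        """No random element at all: the 'choices' just follow the digits of pi."""
        return [int(d) for d in PI_DIGITS[:n]]

    # Two digit streams; nothing in the outputs themselves reveals which machine
    # contains the random element.
    print(random_element_machine(10))
    print(deterministic_machine(10))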

    Replies
    1. Wendy:
      1. The TT is not a "trick": It is a real attempt to really bioengineer cognitive capacity.

      2. In TTesting Ting, would you ask her to "generate a random number"? And kick her if she didn't?

      3. Why would it not matter for T4 but matter for T3 or T2?

      4. I don't know what "general purpose machine" means (unless it means a Universal Turing Machine (UTM), which is what a computer is.) People can compute. But the TT is not about designing a computer; it is about reverse-engineering human cognitive capacity: what people can do.

      5. "Free will" is not something people do. It's something it feels like when they do some of the things they can do: the things that it feels like they are doing deliberately -- the things they feel they are "causing" their bodies to do. But nobody has an idea what that really means. And it is the core of the "hard problem" of how and why organisms feel anything at all.

  10. Professor Harnad mentioned in class that the Turing test isn't actually a game but is a means through which we can gain insight into the fundamental question of cognitive science. If we can build a machine that is behaviourally/functionally equivalent to a human (meaning that a machine can produce any output given a set of inputs that a human can, I know that Pylyshyn would object to this because I haven't specified that the algorithm needs to be the same but I digress), then we know to some extent 'how it is we do what we do'. It is not a criterion of the TT that we're attempting to fully replicate a human in every way possible (I know I have been hung up on individual differences in some of my critiques of computationalism) but that we are trying to reverse engineer intelligent responses in order to gain insight into our fundamental capacity to cognize. It doesn't matter if humans differ from each other to a certain extent because we all have a core ability that allows us to have cognition.

    "May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."

    I'm curious as to why we need not be troubled by this objection. Is it as I said above that a long as the functionality between human and machine is indistinguishable then the thinking process itself doesn't need to be exactly the same? What would Pylyshyn's view on this be?

    In Turing's refutation of the mathematical objection, he talks about Gödel's theorem, which basically says that there will necessarily be some things that the machine cannot answer or solve. Turing fairly brings up the point that the human intellect also has limits. However, isn't it the case that the form of mistakes or errors is sometimes just as important as the proper solutions? In order to mimic the intelligence of man, and in doing so reverse engineer our own cognitive capacities, shouldn't a successful Turing candidate also make mistakes in the same way a human does? Is this just already part of the requirements for a machine that can pass the TT?

    Replies
    1. Hi Allie! I love your questions here. - (I'm Alex Taylor, sorry for my aggressive screen name, it predates this class lol)

      I had questions along similar lines. Why shouldn't we care about this question? I guess from the limited perspective of the "game" (rather than real life, which might consider further questioning) and its objectives, if we assume that there are or could be multiple, distinct modes of "thinking" as suggested by the objection, the goal isn't to determine which type of cognition takes place, but rather whether /any/ cognition takes place that can be recognized as cognition.

      In Prof. Harnad's piece for this week, he outlines the five levels of the TT. Responding to Turing's paper, Prof. Harnad states that Turing rejects T4 and T5, which include criteria of similarity in structure as well as function, and T3 based on the conditions of the "game" experiment. So it does seem like you have it right with the emphasis on indistinguishable functionality, at least in the apparent sense.

      As for Pylyshyn, I'm not sure I fully understand his theories yet, but based on your point that he would take issue with an equivalence that lacks specification for sameness of algorithms I have some thoughts. I wonder, would he require that the content or dynamical structure of the algorithms (in the black box?) be the same between humans and machines? If it had to do with content but still the general structure/framework of the algorithmic levels were the same, would that be sufficient? If not, and instead the content and structure need to be identical to satisfy Pylyshyn, and if we think of the content as the language with which we communicate, does that imply the exclusion of equivalent cognition between "machines" who speak/understand (a cognitive process run by algorithms?) in different languages?

      To your last point, for the TT, in "Imitation Game," Turing discusses, in objection #5, that there can be two types of errors. The first, "Errors of functioning...due to some mechanical or electrical fault," and the second, "Errors of conclusion...arise when some meaning is attached to the output signals from the machine." I think the second form of error certainly makes a difference in whether the interrogator will be able to distinguish human from machine. That being said, the argument from Gödel's perspective suggests that we could tell a machine apart from a human because "there will be some questions to which it will either give a wrong answer, or fail to give an answer at all." You covered the fact that humans give wrong answers too, and I want to add to that, to say that humans also may not give any answer at all sometimes, for various reasons. Sometimes people fall asleep and forget they're playing the imitation game and then they submit their comments at the end of Friday... you never know what could happen.

      Anyway, thanks for reading if you did! Have a great weekend :)

    2. Allie, good questions. Yes, I think Turing would be satisfied with a TT candidate that could do it at all, and would not insist on Pylyshyn's "same algorithm" (or, if T3, then same dynamical system). But don't forget that the "it" that the TT candidate must be able to do is everything a (normal, average) human being can do, not just play chess! That’s just a toy fragment of human cognitive capacity. There are lots of chess-playing models, all doing it differently, but as you scale up to a model that can do everything that a human can do, the degrees of freedom (the number of ways to do it all) narrow. Maybe not to zero, but even if we had 6 Tings, one built at MIT, one at Stanford, etc., all with very different internal mechanisms, would you kick any of them? Would you only not kick the T3 that was also T4?

      Yes, you would expect the same kind of fallibility in a T3 as anyone else (and Ting no doubt has it). But fallibility is easy to generate (and hard not to!). You probably mean trying to figure out the mechanism from studying people's mistakes; that's fine, but it's not yet model-building or TTesting. It's still trying to reverse-engineer what to build.

      (And it's pretty sure that Ting cannot prove anything that is unprovable!)

      Alex, yes, it's the (full) capacity (of an average person) that cogsci is trying to reverse-engineer and test with the TT. And since we are nowhere near being able to pass (the total) TT any way at all, it seems premature to worry about "what if there's more than one way?"...

      "Indistinguishable functionality" is vague. What the TT requires is (full) human cognitive performance capacity, indistinguishable from that of any other human, to any human: "Weak Equivalence" (whether computational or dynamic).

      If computation were just computation, then "Strong (i.e., algorithmic) Equivalence" would mean that two computers that produced the same output for the same input would be doing it using the same algorithm. That includes if one of the computers is one of us (which it would be, if computationalism were true). (This actually only even makes sense for T2, if passed by computation alone, since there's more to T3 than just algorithms.)

      I don't understand what you mean by "content or dynamical structure," but something like Weak Equivalence can probably be defined for dynamical systems too (as in AM and FM radio, maybe).

      "Stevan Says" Goedel's theorem has nothing to do with reverse-engineering cognition. The Weak C/T Thesis is that what mathematicians are really doing when they compute is Turing Computation (the Turing Machine). The Goedel Theorem proves that there are theorems that mathematicians can't prove. ("Prove" means compute.) Mathematicians are humans. Therefore (if the Weak C/T Thesis is true) it is not part of mathematicians’ cognitive capacity to compute proofs that are uncomputable . This is true whether or not computationalism is true, and therefore Goedel's Proof has nothing to do with cognitive science irrespective of what a giant like Penrose or a pygmy like Lucas might think about it... (But this is esoteric stuff: you don’t need it for this course!)

    3. @alex your aggressive screen name is great

  11. After reading the mathematical refutation extensively, I have come to the conclusion that not only did Turing not answer the refutation properly, but it was also a refutation that did not matter in this context.

    Turing did define what Godel's theorem is (quite eloquently, I might add) before starting his argument, but he did not actually refute the theorem in his rebuttal. Turing stated that "It [Godel's theorem applied to machines] states that there are certain things that such a machine cannot do". This is not what Godel is arguing about. Godel's theorem is about how you cannot prove that something is true. He is not disregarding that things can be true, but only that we cannot know that the proof of the truth is actually the only way to access said truth. Meaning that Turing's logical adjustment of the theory to machines is not well matched, because "certain things that such a machine cannot do" has nothing to do with proof but is actually more about function than anything else. Turing inevitably goes down his line of reasoning with this and actually never refutes how Godel's theorem is an objection to digital machines being able to perform computation.

    With that taken into consideration, I think that it would have been best for Turing to simply state that Godel's theorem is not relevant to this topic and does not apply in this context. Turing is not trying to demonstrate that computation is the "proof" that is necessary in order to simulate the mind through a machine. If he were trying to argue that computation through digital machines is the proof (like an equation) by which we can simulate the mind, then the theorem would apply; but this is definitely not the case.

    Replies
    1. Although, upon posting this, I have realized that Godel's theorem can actually be a refutation, though it's not a strong one, in my opinion.

      Because it can definitely be argued that Turing is trying to prove that a digital machine can simulate the mind. So then you could actually apply Godel's theorem here and say that "we know that what the machine is computing is like the mind [should it be that way], but we cannot know that your machine proves that". However, because the nature of the theory is of infinite regress and can be applied to anything, I don't think it applies here. I think that Turing is more in the field of experimenting with a potential theory, rather than trying to home in on a definite proof that digital machinery + computation = mind.

    2. Yes, Goedel's Theorem is irrelevant to cognitive science (reverse-engineering organisms' cognitive capacities), and it is therefore irrelevant to this course. But in his 1950 paper Turing was talking about lots of things, including what mathematicians can do, what computation is, what computation can do, the Weak/Strong C/T Thesis, etc. So it was natural to at least mention non-computability. If humans were able to do something that computation cannot do, then that would be a refutation of computationalism (the theory that cognition is just computation) -- though not a refutation of either the Weak C/T Thesis (that Turing computation is what mathematicians are doing when they compute) or the Strong C/T Thesis (that computation can simulate just about anything).

  12. - " If we substitute ‘laws of behaviour which regulate his life’ for ‘laws of conduct by which he regulates his life’ in the argument quoted the undistributed middle is no longer insuperable. For we believe that it is not only true that being regulated by laws of behaviour implies being some sort of machine (though not necessarily a discrete-state machine), but that conversely being such a machine implies being regulated by such laws.”
    This section of Turing’s essay, in which he responds to the syllogistic opposition argument that because there is no definite set of “rules of conduct” for humans, as there must be for machines, therefore humans cannot be machines, convinces me more than any other that in fact there is an important distinction between mechanical computation and human cognition, and that machines cannot be made to “think” in the same way that humans “cognize.” I have added italics to this quote, to emphasize the notable difference in my reading. Turing suggests we substitute “laws of behavior” – by which he means involuntary, physically anchored laws – for the oppositions’ “rules of conduct” – which he takes to mean social rules that one can choose to follow or not. The difference is evident in such a distinction between “laws” and “rules” – the difference is that element of choice. A human computer can decide not to compute, of its own volition, not due to any error in mechanics.
    It also seems that both the counter-argument to Turing’s thesis and his response include logical fallacies. The former provides the premises:
    (1) if man (H) had a definite set of rules of conduct (R), H would be a machine (M); (2) M has R; (3) H has no R; therefore, H cannot be M.
    This may be a logically valid syllogism, but the truth factor is up for debate. Turing should have said, who is to say whether man has or has not a definite set of rules of conduct?


    Instead, he suggests his beliefs that:
    (1) If X is regulated by laws of behaviour (R), X is machine (M); and,
    (2) If X is M, X is R.
    The only thing this tells us is that Turing believed machines are defined and constituted by their regulation according to laws of behavior. It does not push us toward any greater understanding of the relationship between humans and machines or humans as machines. Still, he acknowledges a distinction between describing the laws versus the rules, and, in the italicized portion of the quote above, between an agentic and an instrumental position with respect to regulation.
    It still seems like a strong objection that humans can choose not to perform at least some computations for no reason other than pure will or lack thereof, where mechanical computers can only make decisions which they are explicitly designed/instructed to make, nor can they refuse to decide when instructed to do so. (Lovelace)

    Replies
    1. All these solemn syllogisms! Kid-sib just sighs. Let me try to tell kid-sib the part that's worth telling:

      (1) All organisms are machines, meaning that they are subject to cause and effect, like everything else in the universe.

      (2) Cognitive science, part of biology, is trying to figure out how organisms work: what kind of cause/effect machines are they? Their brains are clearly more complicated than a pendulum, or even a pancreas, because what they can do is everything the organism can do.

      (3) Mammals have the capacity for "involuntary" actions as well as for "voluntary" actions that depend on their individual histories and genetic makeup.

      (4) Reverse-engineering their voluntary capacities (and TTesting them) would explain (causally) what their capacities are (how and why they can do what they can do), but it would not predict or explain what they will actually do, because to do that you would have to know their individual genetics, past history, their current state and their future experience.

      (5) Besides being able to do all the things they can do -- produce "involuntary" actions (which are more easily predicted) and "voluntary" actions -- organisms can also feel, and (some of) their voluntary actions feel to the organism as if the organisms themselves were "causing" them. We have no idea why organisms have these feelings (or any feelings at all) but they are no doubt also caused by their brains, which are mechanisms, and are part of their bodies, which are also mechanisms, all of them part of a semi-autonomous causal system -- semi-autonomous, because they are of course also under the causal influence of their inputs (the things in the world) and the causal consequences of their outputs (the things they do to the things in the world).

      It's all causality, but not all predictable, either by the ones trying to reverse-engineer organisms, or by the organisms themselves (in what they feel and do).

      There aren't two kinds of "doings": "behavior" and "conduct." There's just doing-capacity, some of it involuntary, some of it voluntary; some predictable, some not; some felt, some not; some felt to be voluntary, some not.

  13. - "As I have explained, the problem is mainly one of programming. Advances in engineering will have to be made too, but it seems unlikely that these will not be adequate for the requirements."
    This quote is reminiscent of the opinion that solutions to climate change are just dependent on time and advancement of technology. In terms of understanding cognition, will advances in technology (and essentially, machine storage capacity and speed) really give us the tools to understand cognition better?
    In an article about adult brain plasticity that I’m reading for another class, the author poses the question whether structural and functional changes in the brain are caused by changes in behaviour/learning OR if changes in behaviour are what spur changes in brain anatomy. This also brings me back to the first comment I made on this post, where I ask if we could change the question from “can machines think?” to “can machines think the way that we think we think?”. MAYBE the way that we think is also co-evolving with advancements in technology, such that: advances in technology -> change in behaviour -> (possible) changes in brain function and structure. If the adult brain truly is as plastic as studies suggest, what if cognition always stays “one step ahead of us”?
    One possibility might be that we currently don’t have the technology to observe the brain’s plasticity in real life; maybe this is the advancement in technology that we are waiting for?

    Replies
    1. those squares were supposed to be arrows ->

    2. Turing is saying that the task is more to come up with the right software rather than to come up with better hardware.

      There is a good reason why Turing highlighted T2: Language (as we'll see later in the course) is remarkably powerful, much the way computation is (the Strong C/T Thesis). Can machines think? Well, of course, because organisms are machines (causal systems). Maybe no two of us thinks "the same way." Maybe thinking and thinking capacity evolve with time (and co-evolve with technology, like the Web and Google). But human verbal communication (in the same language) leaves a huge scope for T2 and TTesting. The brain's cognitive capacity can be conveyed (hence tested) verbally (T2) -- even if it's dependent on T3 capacities that cannot be tested directly with T2.

  14. Although Turing was prescient in his section on learning machines, two of the claims made in that section seem problematic to me.

    « Our hope is that there is so little mechanism in the child brain that something like it can be programmed easily »

    « An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil's behaviour »

    Firstly, the case seems to be exactly the opposite of what Turing was hoping: the mechanisms in a child’s brain are extremely complex. Moreover, if we are to design a machine that can do what thinking does (i.e. it can pass T3), it being equipped with sensory apparatuses, as Harnad suggests, is necessary. The mechanisms for sensory perception are present at birth, so one can readily see how the problem is already much more complex than originally thought. Another interesting thing to note is that the historical progression of AI seems to be reversed. In other words, higher-order cognitive abilities such as playing chess were implemented in machines before anything near what we today call computer vision or other more « basic » cognitive processes were. Where is the AI model that is good at somatosensory perception, let alone auditory perception (we are barely there with natural language processing; think of Siri or Google Assistant, which have very restricted understanding of natural language)?

    Secondly, if a teacher is largely ignorant of « what is going on inside » (as we are today with deep learning algorithms), do we not end up back at square one?

    Replies
    1. The "teacher" does not mean just verbal instruction. You already have to have learned a lot some other way before you can learn verbally. There is also "supervised learning" through trial-and-error (aka "reinforcement" learning), through corrective feedback from the consequences of doing the right or wrong thing -- which does not require an "instructor." (Turing anticipated such neural-network learning algorithms too.) The environment can be the instructor.

      And before that there's even "unsupervised learning," which is, basically: learning passively from the correlational structure of the input (features and their frequencies and correlations).

      In fact "Stevan Says" learning capacity is the most crucial component of cognitive capacity. So Turing's option is really a necessity. (And verbal learning has to be grounded in nonverbal learning.)
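
      A toy sketch in Python of the two kinds of learning just described (the action names, reward probabilities and feature strings are all invented for illustration): trial-and-error learning uses corrective feedback from the consequences of doing the right or wrong thing; unsupervised learning gets no feedback at all and just tallies the correlational structure of the input.

      import random
      from collections import Counter

      def trial_and_error(actions, reward, trials=500, epsilon=0.1):
          """Supervised / reinforcement-style learning: try actions, tally the
          feedback each one earns, and come to prefer whichever works best."""
          totals = {a: 0.0 for a in actions}
          counts = {a: 0 for a in actions}
          for _ in range(trials):
              if random.random() < epsilon:        # occasionally explore
                  a = random.choice(actions)
              else:                                # otherwise exploit what has worked so far
                  a = max(actions, key=lambda x: totals[x] / counts[x] if counts[x] else 0.0)
              totals[a] += reward(a)
              counts[a] += 1
          return max(actions, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)

      def unsupervised(observations):
          """Unsupervised learning: no feedback, just the frequencies of co-occurring features."""
          return Counter(pair for obs in observations for pair in zip(obs, obs[1:]))

      # Toy environment: "right_thing" is rewarded 80% of the time, "wrong_thing" never.
      learned = trial_and_error(["right_thing", "wrong_thing"],
                                reward=lambda a: 1 if a == "right_thing" and random.random() < 0.8 else 0)
      print(learned)                               # usually "right_thing"
      print(unsupervised(["ababab", "abba", "baab"]).most_common(3))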

  15. In Turing's addressing of the critiques of his proposed Imitation game, I find his conclusion rather unique. The most valid critique that he addresses in the section is the last one, which is that a human would have a terrible time attempting to imitate a machine and thus who was 'A' and who was 'B' would be discovered quite quickly. My instinctual thought was that this is a perhaps valid flaw in the game, as in this light it seems to be testing the human more than the machine. But as Turing points out, if the machine is truly capable of accurately presenting itself as a human, then this should be a nonissue, as it will appear to be just as bad at being a 'machine' as the real person is.

    What interests me about this point is that it goes beyond the machine's "ability" (which is in a way focusing on it accomplishing everything a human can accomplish and reaching 'up' towards what is perceived as a higher capacity) and instead is focusing on the machine's ability to deceive and 'play dumb'. For one, this requires a different kind of understanding and 'intelligence' from the machine, one which focuses on the limitations of its human counterpart; it is also very dubious by nature. Applying this level of deceit and what feels almost like mischievousness gives (me personally) a sense that the machine is almost required to have a personality to pass as a human, which is not an element I had considered up to this point.

    Replies
    1. Please read what has been said in other commentaries and replies about "deceiving" and "fooling" and "imitating."

      The TT has nothing to do with that. It's about figuring out how to produce the real thing.

  16. The variant of Lady Lovelace’s objection that a machine can “never do anything really new” (16) brought to mind the near impossibility for humans to be wholly original. Fashion trends reoccur in cycles, artists source inspiration from past creators, ancient philosophers debated the same issues we continually bring up today. The objection also sparked a thought in my head regarding the predictability (and potential determinism?) of human behaviour. We are directly influenced by the education we receive, the experiences that harmed or nourished us and the values we’ve been conditioned to adopt. If we ourselves can never really “do anything new” or go beyond what we’ve been experientially programmed to do, is it really fair to hold a machine to the same standard when considering its ability to pass the Turing Test?

    “Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education, one would obtain the adult brain” (20). I thought this point was absolutely brilliant. Rather than attempting to create a machine that could, upon completion, pass the test, why not conceptualize it as a gradual process? A child-machine could be created, able to pass the Turing Test, and could potentially develop more abilities with experience and further programming.

    I also liked his point that we need not be too concerned about legs, eyes etc (all the physical aspects of a human being), as there are many humans who think without these appendages. I hadn’t initially thought about this and it raises the question of what exactly is needed to “think” and “learn”. What is the smallest number of criteria needed to give someone (or something) the capacity to “think”? Which I suppose is the root question of the Turing Test and this class.

    Replies
    1. Experience does not "program." It provides data. What we do with the data is often recombinatory (especially language). But most recombinations are "new" (even if most of them are trivially new, like this sentence).

      On modeling the child, and on learning, see the other comments and replies.

      In general, I urge everyone to read the other comments and replies too. That way we won't be saying the same things over and over.

      About what might be the minimal sensorimotor capacity needed to pass TT -- let's do it with the maximum and then see how we can cut back...

  17. In “Computing Machinery and Intelligence”, A. M. Turing replaces the original question, “Can machines think?” with his new question, “Are there imaginable digital computers which would do well in the Imitation Game?”

    Turing refutes nine objections. Here, I want to talk more about (4) The Argument from Consciousness. Turing stated that we can accept a passing machine’s convincing behaviour as being human-like. Despite being able to operate correctly and hold a conversation, this does not imply it possesses consciousness. During the test, the digital computer is manipulating symbols according to its instruction table, simulating human conversational behaviour (or simulating thinking) and does not understand what it is outputting.

    Hence, I feel that the two questions are completely different. The first question involves the process of thinking. As we discussed in class, thinking is a complex, internal process that requires consciousness. However, passing the Imitation Game (T2: a symbolic and verbal examination) is only concerned with the machine's external behaviour, and not what is occurring internally. If a digital computer passes T2, can we assume it was thinking? Although Turing says we can disregard Solipsism (the theory that posits only one’s own mind is sure to exist and nothing of external minds can be verified), I think it remains an important and valid point.

    Replies
    1. Since it is not clear yet whether computationalism is correct (nor whether Turing was a computationalist) it is not clear whether computation alone can pass T2 -- and it's sure that computation alone cannot pass T3 or T4.

      You yourself, by the way, are a T3 robot, Ting, so we already know you are not just a computer, computing.

      About consciousness and "solipsism" see other comments and replies. And Searle will be next week, on whether computationalism is true.

      Delete
  18. I think Turing made a good point in his refutation of the Argument from Consciousness, although I found his example confusing.

    I understand his point regarding the solipsist point of view: we will never know for certain that a machine is sentient, in the same way we will never know for certain that other people are sentient; we just assume they are. But I do think that any machine that passes the Turing test would have to feel. After all, our reactions to events aren't always rational, and this irrationality is due to our emotional state. I think that for us to consider any machine conscious, it would have to be possible to elicit irrational reactions from said machine. That is, we can't know if the machine is angry, for instance, but it should be capable of acting angry in response to events. A machine that doesn't express some emotional state could not pass the Turing test, since we expect people to have emotional responses. I feel this would have been more immediately obvious if he had given an emotional example rather than the sonnet-writing one.

    I also thought his specification in the critique of the new problem that stated

    "It might be urged that when playing the "imitation game" the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind."

    was interesting, because a lot of news articles on the topic claim we've passed the Turing test, either in situations where the person on the other side has no reason to suspect they're speaking to a machine, or in cases where the machine claims to be a child or not fluent in English, or adopts some other strategy that absolves it from having to hold a conversation to a high standard. I think this points to a general misunderstanding of what Turing was trying to achieve and a desire to pass the test simply to have passed it :/

    ReplyDelete
    Replies
    1. (1) When Turing spoke of solipsism he just meant the other-minds problem (and maybe Descartes' Cogito).

      (2) There is a lot more to feeling than "just" emotions. (It feels like something to be tired or thirsty, to touch water, hear an oboe, chew gum, want good weather tomorrow and to understand "the cat is on the mat" -- as well as this sentence!) If it were just about scrambled or overactive states, and their confused or even "irrational" consequences, it would be easy to generate and explain. The hard problem is: how and why are any states felt states, whether "emotional" or not.

      (3) The TT is the "game" of life, life-long -- nothing to do with tricking or imitating...

      Delete
  19. “The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.”

    Previously (either in a skywriting or in class, I can’t remember), I brought up a point that Turing now shows me is so easily solvable. I assumed that humans are irrational in a way that machines are not. I used the word “rational” to mean that we make the decision that has the highest probability of maximizing gains and minimizing losses. I knew and had seen countless examples of “irrational choices” where there was a clear “rational” choice that a computer that calculates probabilities would pick. I assumed our irrationality was justified through other factors such as emotions or motivation, things that would be very difficult for a computer to account for in the calculations. Now I realize it doesn’t have to; all it has to do is play dumb. The ease with which those couple of sentences shattered my illusion of irrationality is almost laughable. To pass the imitation game, the computer doesn’t need to have emotions or motivation; it just has to be programmed to give the “wrong” (irrational) answer every x responses. When it comes to passing T3, I’d like to assume that our robot colleague Ting (if you’re reading this, hi Ting!) chooses irrationally sometimes for the same reasons I do, but I can never actually be sure of that. For all I know it’s just a program that tells her to choose wrong to seem more human. I’d venture to say that we’d come closer to a concrete answer at T4. If both Ting and I show activity in our ventromedial prefrontal cortices while estimating values, I’d like to believe that we’re both feeling the same thing. But for all I know, it’s just her code telling her “when making the wrong choice, light up here because that’s what humans do”. If that’s the case, what can we actually ever be sure of?
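
    A minimal sketch of this "play dumb" tactic, in Python (my own illustration; the function name and numbers are hypothetical, not from the paper): a responder that usually answers arithmetic correctly but occasionally gives a plausible near-miss, with a human-like pause, and needs no emotion or motivation to do so.

    import random
    import time

    def answer_addition(a, b, error_rate=0.1):
        """Return a + b, but with probability error_rate give a plausible wrong answer."""
        time.sleep(random.uniform(1, 30))                            # humans don't answer instantly
        correct = a + b
        if random.random() < error_rate:
            return correct + random.choice([-100, -10, 10, 100])     # a believable slip
        return correct

    # In the paper's own specimen dialogue the machine pauses about 30 seconds
    # on "Add 34957 to 70764" and answers 105621 rather than the correct 105721.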

    ReplyDelete
    Replies
    1. First, let's sort out what "program" does and does not mean. An algorithm (a set of rules) for a Turing Machine may be "when you see a zero, erase it, write a 1, advance the tape and halt." For the Turing machine (a computer), that's a finite series of states it goes into, depending on the data it gets.
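
      A minimal sketch of that single rule as a table-driven Turing Machine, in Python (the function and rule-table names are my own, just for illustration):

      def run(tape, rules, state="start", head=0):
          """Apply rules of the form (state, symbol) -> (write, move, next_state) until halting."""
          while state != "halt":
              write, move, state = rules[(state, tape[head])]
              tape[head] = write
              head += move
          return tape

      # "When you see a zero, erase it, write a 1, advance the tape and halt."
      rules = {("start", 0): (1, +1, "halt")}
      print(run([0, 0, 0], rules))   # -> [1, 0, 0]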

      But for Ting -- who is a T3 robot, not just a Turing Machine -- it might be part of the learning program that allows her to see a cat, on a mat, so when she is later asked "where is the cat?" she can (correctly) answer "the cat is on the mat." The program does not make her say "the cat is on the mat." It makes her capable of saying it when she has seen the cat on the mat, and of not saying it when she has not seen the cat on the mat. It is because she has seen the cat on the mat that she says it; the learning algorithm just enables her to say the right thing under those (and countless other) circumstances -- none of which entered the mind of the programmer, nor is contained in the code. It is (input) data-based.

      But, yes, the whole thing is "just" a cause/effect sequence. But that's true whether or not cognition is computation. It's true even if learning to say "the cat is on the mat" under those conditions happens because of a chemical reaction.

      In fact, if the learning algorithm is encoded in our DNA, it's not Turing computation but analog "computation," because recombinant DNA is about proteins and protein-synthesis (so it's not hardware-independent).

      So if we are "programmed," it is not necessarily because we are Turing Machines executing an algorithm (although we can be simulated by Turing Machines executing an algorithm, and we also do execute algorithms sometimes, both consciously and unconsciously). We are programmed simply because the world is causal: We are not spiritual agents making the rules for physics or biology; it's the other way round!

      And, again, it's not imitation, and not a game; Turing modelling and testing is an attempt to reverse engineer the real internal causal mechanisms that are producing organisms' ability to do what they can do, whether or not they turn out to be computational -- which in Ting's case it cannot all be, because Ting is a robot, not just a computational T2 computer.

      But would you really kick Ting if she could not pass T4?

      Delete
    2. I think what I keep wondering about the most is: are we trying to see if we can get a machine to cognize SPECIFICALLY the same way we do, or could we assume Ting cognizes even if we look into her skull and see wires instead of a brain? I can't figure out at which TT point we decide that cognition has been achieved.

      Delete
    3. "We may call them "errors of functioning" and "errors of conclusion." Errors of functioning are due to some mechanical or electrical fault which causes the machine to behave otherwise than it was designed to do. In philosophical discussions one likes to ignore the possibility of such errors; one is therefore discussing "abstract machines." These abstract machines are mathematical fictions rather than physical objects. By definition they are incapable of errors of functioning. In this sense we can truly say that "machines can never make mistakes." Errors of conclusion can only arise when some meaning is attached to the output signals from the machine." Turing, Computing Machinery & Intelligence

      Like Lyla, I'm also grappling with the notion of mistakes, as well as where this fits within our discussion. Turing states that machines can only make errors of conclusion - errors that count as errors because the rules/path the input follows produces a logically or factually incorrect output. The execution itself, however, is perfect. It seems that we are capable of both forms of error. Intuitively, we assume our own computational errors to be errors of functioning: it doesn't feel as though we modify our thoughts to purposely mess up, unless we choose to (an error of conclusion), which occurs when we intend to give the wrong answer. Computationalists, I'm assuming, would hold that we only make errors of conclusion.

      Rather than emotion or motivation, I am curious about errors in memory and information retrieval. Would errors like mixing up your friends' names have to fall within language capability in order to pass T2 (thus making it necessary for T2 to have T3 capabilities)? And would information retrieval also have to count as language ability? This seems wrong to me.

      Delete
    4. Lyla, we all think somewhat differently, but TT is just about being able to do it all no more differently than we are from one another. What do you mean by "a different way of thinking"? Could Ting be thinking in a different way?

      Grace, an algorithm can be incorrect (give the wrong output); or the algorithm could be approximate or probabilistic, only giving outputs that are probably right, or mostly right; or the hardware could go wrong.

      In short, there is no "problem of error" -- or the feeling that there is one comes from not understanding what an algorithm (computation) is.

      And as always, our intuitions (feelings) of "free will" lead us astray.

      There are much stronger reasons for T2's needing T3 capabilities than just so it can forget or make mistakes!

      Delete
  20. I have been wondering why the Turing test is so useful. While I agree that creating machines which can fool us into thinking they are humans is an interesting idea, I’m not sure this helps us develop artificial intelligence. Do we really need a machine which can pass this test? Aren’t there more useful applications of AI? AI seems more concerned with specific goals like facial recognition and pattern finding. It does not need to clone human intelligence to do this. We have lots of intelligent humans we can talk to, why do our machines need to think like us too? I would love to have another intelligent creature’s unique views of our world, rather than another being which thinks and speaks like we do. We have enough ideas from humans, I want a robot which thinks in radically different ways from us.

    I suppose the Turing Test is an ok starting point to measure intelligence, given that we don’t have a very good intelligence test yet. Without this test we might not be able to tell when we have achieved artificial intelligence.

    ReplyDelete
    Replies
    1. (1) The TT is not about fooling.

      (2) It's to test whether we have succeeded in reverse-engineering cognition: how we (or any other machine) can do what we can do.

      (3) The TT has nothing to do with intelligence-testing; it's theory-testing: testing our theory of the mechanism that can produce our cognitive capacities. (That's why we can't build T3 in our sleep!)

      Delete
  21. In Turing’s definition of machine, he states “we also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental.” I think this alludes to an interesting possibility of making the Turing Test even stronger. The test is centered around being unable to distinguish a machine from a human based on factors that normal humans can be subjected to. I would propose that in order for a TT-passing machine to be regarded as truly intelligent, we must not only allow, but require that the constructors of that machine are not able to satisfactorily describe the manner of operation. As of yet, we have no way of satisfactorily explaining human cognition. For both humans and machines, we can say that a specific input X will produce output Y (“if you pinch him he will squeak”) but the distinction is that we programmed the machine to do so, and we may not be able to explain what exactly happens in the human mind. Until we are able to develop an all-encompassing theory of human cognition, or an explanation of what is happening inside the “black box”, I think that machines should be evaluated based on this same property. It is not such an unattainable goal - even today, creators of complex deep learning networks have trouble explaining why the program makes some decisions over others. Adopting this requirement in the Turing Test would, in my opinion, serve to strengthen the test by making machines that are even more indistinguishable from humans.

    ReplyDelete
    Replies
    1. Learning ability is part of TT. And deep-learning algorithms are algorithms. And knowing the algorithm does not tell you what it will be doing after it's been learning, unless you also know every input that it will ever have -- and especially if it is an algorithm that can modify itself based on its input data!
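
      A minimal sketch of that point, in Python (a toy perceptron-style learner, my own example, not any particular deep-learning system): the identical algorithm, trained on different input histories, ends up computing different things, so reading the code alone doesn't tell you what the trained system will do.

      def train(examples, lr=0.1, epochs=200):
          """Learn weights for (x1, x2) -> 0/1 from labelled examples."""
          w1 = w2 = b = 0.0
          for _ in range(epochs):
              for (x1, x2), y in examples:
                  pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                  err = y - pred                      # update only when the prediction is wrong
                  w1 += lr * err * x1
                  w2 += lr * err * x2
                  b += lr * err
          return w1, w2, b

      # Same code, different data, different learned behaviour:
      and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
      print(train(and_data))   # ends up computing AND
      print(train(or_data))    # ends up computing OR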

      (But what an idea, that what we want from reverse-engineering cognitive capacity -- how people can do all the (cognitive) things they can do -- is to not know how!)

      Delete
  22. What about “under-intelligent” behaviours humans have and “super-intelligent” behaviours humans don’t have? Human behaviours and intelligent behaviours are not necessarily one and the same, and the Turing test requires the machine to be able to execute all of them, even those considered “un-intelligent”. It is now documented that the systematic errors in the thinking of normal people are intrinsically related to the machinery of cognition rather than to the corruption of thought by emotions or other outside factors. While a computer may be able to simulate a “statistician’s mind”, it will also have to learn to simulate all the shortcuts of “intuitive thinking” and the biases that human beings inherently have. In other words, our computers will have to learn a certain number of heuristics that bypass “rational behaviour” in order to think in a more human manner. Similarly, it will also have to learn to silence some of its computing power to think within the computational limits of a human being. Certain complex calculations, such as finding the nth decimal of pi, are impossible for human beings.

    I am aware that this comment somewhat misses the mission of the TT which is to explain “how we think what we think” instead of how the computer could best imitate a human being, but perhaps these evolutionary “flaws” of human cognition are inherent to understanding our cognitive machinery and how we think what we think.

    ReplyDelete
    Replies
    1. If biases or errors in human performance give you a hunch about how to produce human performance, fine. We should take our inspiration from wherever we can get it. But biases and errors are not ends in themselves: positive capacity is; and they are the fine-tuning of performance that psychologists tend to focus on (rather than working on the easy problem).

      Delete
  23. “May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.”

    I thought this quote was very interesting in regards to the discussions we’ve been having in lecture about the more general questions, “What is Cognition?” and “What does it mean to think?”

    Our definitions of thinking and cognition have not convinced me that what a machine is doing and what we do when we think are entirely different procedures. Here, Turing seems to argue that if a machine can convince another person that it is human, then there is no point of having a definition of thought that makes a distinction between what humans do and what machines do, because the results are so similar.

    I don’t understand how we are so ready to say that Computationalism is false when we don’t completely understand what thinking is. Am I correct that the main arguments against it are the discrete-continuous differences? And what is the proof that thought can’t be reproduced in a discrete system?

    Similarly, you responded to me last week saying:

    “Some parts of brain function are no doubt replaceable by just computers, computing. But, for example, neither sensory transduction nor moving can be. Neither can heat”

    But we do have machines that transform visual input, movement, and temperature into electric signals. Is the discrepancy in that they aren’t done by “computing”?

    I’m not even comfortable categorizing a sensory input as a part of cognition, because the chemical reaction that causes a sensory cell to depolarize is different from the signal that cell is outputting. Cognizing a sensory input is different from the processing of that stimuli in the sensory neuron, and I think this is obviously shown in phenomena such as Phantom Limb Pain.

    I don’t understand how the processing of stimuli can be a discerning factor for cognition (thought).

    ReplyDelete
    Replies
    1. Turing is just saying he wants to produce human cognitive capacity and is not worried if TT can do more. (But funny to be worrying about that 70 years later, when we're still far from passing TT any which way.)

      We have not yet said why computationalism is wrong (or not enough) in week 2. That's Searle and the symbol grounding problem in week 3.

      Computing is manipulating squiggles and squoggles; sensorimotor transduction is not. It's physics. You can simulate it computationally, but you can't replace real sensors with squiggles and squoggles.

      Cognition is done by cognizers, not by cells. Sensorimotor capacity is part of cognitive capacity.

      Delete
  24. Turing seems to believe that it is justified to say that machines can think if a machine (computer) can sufficiently fool an interrogator in the Turing test––i.e., if an interrogator does "not have more than 70 per cent chance" of correctly identifying the computer as a computer rather than as a person. What this implies is that Turing has a behaviourist view of thinking or cognition. He believes that if a texting computer––given input messages––can simulate the output messages of a texting person such that the computer can fool someone into thinking that it is indeed a person, this computer can justifiably be said to be thinking. Hence, it seems that Turing believes an essential tenet of behaviorism: that one can give an exhaustive definition of "thinking" which only takes into account inputs and outputs––thinking is whatever we are doing when, after being told such and such, we respond in such and such a way.
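
    Turing's "70 per cent" criterion is just a rate over trials; here is a minimal sketch in Python (my own framing of the passage quoted above, with hypothetical trial numbers):

    def meets_turing_criterion(correct_ids, total_trials, threshold=0.70):
        """True if interrogators' correct-identification rate is no better than the threshold."""
        return (correct_ids / total_trials) <= threshold

    print(meets_turing_criterion(31, 50))   # 62% correct identifications -> True
    print(meets_turing_criterion(45, 50))   # 90% correct identifications -> False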

    In response to the objection Turing calls "The Argument from Consciousness," which holds that one cannot be sure a machine thinks unless one knows that a machine feels itself thinking, Turing argues that we make inferences that others are thinking even though we are not in the minds of those others and cannot be sure that they feel themselves thinking. As a kind of reductio ad absurdum, Turing writes, "according to this view the only way to know that a man thinks is to be that particular man." Turing is saying that if we infer that others are thinking based on their behaviour, then we should be able to make similar inferences about machines, given the right behaviour. I would argue, however, that we make inferences that others are thinking not only (i) based on their behaviour, but also (ii) based on the fact that they are designed similarly to us (both in hardware and software). If we behave a certain way and we know we are thinking, it follows that if others––who are designed similarly to us––behave in a similar way, then they are likely also thinking. However, when it comes to machines, we are missing this key second point. Machines may be designed completely differently from us even if they behave similarly, and their behaviour alone does not justify our inferring that they think as we do.

    ReplyDelete
    Replies
    1. On "fooling" -- see other Replies.

      T2 and T3 are behavioral ("doing" is behavior) but not "behaviorist": behaviorists did not try to reverse-engineer behavioral capacity. And modelling the "doings" of the brain (T4) is also behavioral -- just more micro-behavioral.

      The TT cannot test consciousness, because feeling is not doing; you can't tell if it's happening (the "other-minds" problem) -- so you have to settle for "behaving indistinguishably from someone who is feeling."

      (Alex, I've not replied in detail because many of your points were raised by others. Please always read what the others have said first.)

      Delete
  25. From “Computing Machinery and Intelligence” by A. M. Turing:

    Objection (5): The Argument from Various Disabilities.

    “The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter. Nevertheless, "the subject matter of a machine's operations" does seem to mean something, at least to the people who deal with it. If, for instance, the machine was trying to find a solution of the equation x^2 - 40x - 11 = 0 one would be tempted to describe this equation as part of the machine's subject matter at that moment. In this sort of sense a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure.”

    I understand Turing’s explanation that in some way a machine can be the subject of its own thought, say by taking its own equations to modify itself or to predict its failures. I think the harder problem is whether or not the machine could feel itself thinking about itself. I guess from here we are back to the Argument from Consciousness. Would I ever be convinced by brute behavioral data that a machine performing computations feels itself computing in any way similar to how I feel myself thinking? Does only behavioral data matter when I assert that my friends have consciousness? Is there something more to my friends than their behavior - something no machine could ever be or do? Maybe not… maybe I am a machine in the end…

    Interestingly, the only information I have to say that someone has consciousness is the things that he does, not the things that he feels (I don't have that information; do I even know what I feel before knowing what I do?). Does feeling even exist, or is feeling just doing...?

    ReplyDelete
  26. Upon reading Turing's work, I found intriguing parallels between the functionality of the human brain's working memory and the description of a digital computer. Turing defines a digital computer as consisting of three parts: 1) a store, 2) an executive unit, and 3) a control. Baddeley's model of working memory likewise consists of long-term memory storage, the central executive, and the subordinate systems in the form of the phonological loop and visuospatial sketchpad. One can equate the store of the digital computer to the long-term memory storage, functioning as a cumulative memory bank to be accessed later as needed. The executive unit then mirrors the subordinate systems, as these components store and perform calculations/manipulations of varying stimuli/inputs. The control can finally be equated to the central executive, as they both function as the "command centres", ensuring that all instructions are followed and the comprehensive system is operating smoothly.
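
    To make the three-part description concrete, here is a minimal toy sketch in Python (my own illustration; the class and instruction names are invented, not Turing's): the store holds program and data, the control fetches the next instruction, and the executive unit carries it out.

    class ToyComputer:
        def __init__(self, program, data):
            self.store = {"program": list(program), "data": dict(data)}   # the store
            self.pc = 0                                                   # the control's place-keeper

        def step(self):
            op, *args = self.store["program"][self.pc]    # control: fetch the next instruction
            if op == "ADD":                               # executive unit: carry out the operation
                dst, a, b = args
                self.store["data"][dst] = self.store["data"][a] + self.store["data"][b]
            elif op == "HALT":
                return False
            self.pc += 1
            return True

        def run(self):
            while self.step():
                pass
            return self.store["data"]

    prog = [("ADD", "z", "x", "y"), ("HALT",)]
    print(ToyComputer(prog, {"x": 2, "y": 3}).run())   # {'x': 2, 'y': 3, 'z': 5}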

    With this analogy in mind, I believe the Argument from Consciousness can be refuted. The basis of this argument lies in the limitations of a computer to feel emotions or possess uniquely subjective feelings of experience. However, these similarities create a grey area in distinguishing the human mind from the machine. Since the human mind and the core aspects of a digital computer show parallel systems, would it be a stretch to believe that thought/the ability to think could in fact originate from both? If our "wiring" is the same, wouldn't incorporating emotion (e.g. via an artificial amygdala) into machines replicate the human capacity to feel?

    ReplyDelete
    Replies
    1. 1. There are some similarities between brains and digital computers, but not that many.

      2. Computationalism (the theory that cognition is just computation) is probably wrong.

      3. Explaining consciousness (the fact that organisms feel rather than just do things) is the "hard problem." But computationalism is probably wrong for the "easy problem" of doing, too (Searle week).

      Delete
  27. I enjoyed this reading and appreciated the obvious great love/admiration Turing had for computers!

    ReplyDelete
  28. Turing’s description of ‘teaching’ the child machine, by constructing it such that “events which shortly precede the occurrence of a punishment symbol are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it”, raises two critical issues as to the ability of the original Turing test to replicate cognition (Turing 1950). First, there is the issue of what a “punishment symbol” even is. Perhaps I am merely misled by the use of the word punishment, which to me requires the machine to have things that it ‘wants’ and things that it ‘wants to avoid’, which requires a level of intentionality and maybe even ‘personality’ on the part of the machine that does not seem to be provided by Turing’s description. Second, if one were to accept the ability of the programmer/experimenter to punish and reward said machine, this system of learning reveals the problematic behaviourism which is folded into computationalist thought. Turing’s punishment of the robot may be comparable to Skinnerian conditioning, in which a desired response can be conditioned into a creature. While the robot may ‘learn’ in the sense that it is more likely to produce the desired results, could it be said that it understands why the desired results are desired? Simply becoming more capable of following a procedure does not necessarily imply comprehension of that procedure’s meaning, or of how and why it works. As Professor Harnad mentioned in class, the ability to follow a recipe and produce the desired results does not necessarily equal comprehension of what one is making, or how.
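
    A minimal sketch of the reward/punishment rule quoted above, in Python (my own toy, not Turing's child-machine): actions followed by punishment become less likely to be repeated, and actions followed by reward become more likely.

    import random

    class ChildMachine:
        def __init__(self, actions):
            self.weights = {a: 1.0 for a in actions}    # equal initial tendencies

        def act(self):
            r = random.uniform(0, sum(self.weights.values()))
            for action, w in self.weights.items():      # sample an action in proportion to its weight
                r -= w
                if r <= 0:
                    return action
            return action

        def feedback(self, action, signal):
            # signal > 0 is a reward (strengthen); signal < 0 is a punishment (weaken)
            self.weights[action] = max(0.1, self.weights[action] + signal)

    m = ChildMachine(["say_please", "grab"])
    for _ in range(100):
        a = m.act()
        m.feedback(a, +0.5 if a == "say_please" else -0.5)
    print(m.weights)   # "say_please" ends up far more probable than "grab"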

    ReplyDelete
