Blog Archive

Monday, September 2, 2019

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Ed.), Cognition, Computation, and Pylyshyn. MIT Press.


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

57 comments:

  1. When Harnad finishes his résumé of the philosophies surrounding Cognitive Science in Cohabitation: Computation at 70, Cognition at 20, he suggests, as the next step in answering Searle's response to the Turing Test, that we scale up the Turing Test by producing "the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters" (Harnad, 9). After making this suggestion, he finishes his paper by imagining a scenario in which we succeed in producing such a robot. In this world, we would (or rather, could) decide which components of its internal structures and processes we call "cognitive". He goes so far as to raise a question that I believe is more existential to Cognitive Science: whatever we decide is cognitive, "does it really matter?" (Harnad, 10).

    In this blog post, I would like to note two consequences of his position. The first is that Harnad ignores the essence of Searle's critique; the second is that Harnad abandons the search for the cogs of cognition in his closing query.

    Let us recall that the objective of Searle's thought experiment was to demonstrate that the Turing Test was insufficient to prove the existence of the mind, as was computation in explaining its nature. Harnad's response to this accusation, and justification for his thought experiment, is that there are viable alternative systems that are not computational. However, positing alternative systems misses the point. Not only can computational modes of thought be dismissed using the Chinese Room Experiment - any system involving symbol manipulation, be it computational, analogue, a neural net or something else, is insufficient by Searle's thought experiment. Contrary to Harnad, the root of the problem isn't symbol-grounding - it's the very use of symbols to begin with.

    To the second point: recall that the initial objective of cognitive science was in some ways a desire to discover the workings of the "black box" that Skinner had dismissed as unimportant, so as to explain not what we do, but what we can do. The philosophical road embarked on by Zenon Pylyshyn, Turing and Searle aimed to answer questions of optimality and actuality (Harnad, 5) of different models. In positing that we can "decide" which systems are cognitive in a fully fledged robot, Harnad implies that we are the arbiters of which components are cognitive, and that we are no longer searching for those components. We have gone from discovering to deciding. We no longer question what is actual - we know what is actual, and just pick from that batch. We no longer decide what is optimal - the value judgments of efficiency and logical coherency are beside the point. Raising Harnad's final question resigns the search for the actual to the practical - for who needs to search for how we work, if we can design something else that works and choose from that? The procedure fails to answer the initial question.

    As a closing thought, I leave a question for the group: if we were to decide what cognition is based on a robot we design, don't we decide what cognition is in them, but not necessarily what it is in us?

    ReplyDelete
    Replies
    1. Point 1: How is a waterfall, dripping, symbol manipulation? What if something like that were the mechanism that generates cognition?

      (It was ambitious of you to take this on, so early, but we will be giving both the question of what is and isn't computation (symbol manipulation) and Searle's Chinese Room Argument that cognition is not just computation (symbol manipulation) a lot of thought in the next couple of weeks!)

      Point 2: Yes, the distinction between what is "cognitive" and what is "vegetative" is mostly arbitrary -- except for one thing: it feels like something to think (cognize). If cognition were just doing, and it felt like nothing to be able to do what we can do, i.e., if we were insentient robots, then there really would be no difference between what is "cognitive" and what is not. But there is. And it is the biggest difference there is. And Searle makes use of that difference in his Chinese Room Argument (without realizing it).

      We'll cover all this in due time. But for now, I think this will give you enough to think about.

      Delete
    2. I want to reply to your last point:

      I would argue that it would depend on why the robot was created in the first place. If the robot was created with low-level processes (e.g. how to compute addition), then its cognition has a higher chance of being a reflection of us, because we have engineered it to be like us.

      On the other hand, if we create a general-purpose, high-level robot that has 100% of human cognitive capacity, I would argue that there is a chance it would take on its own way of thinking. I know this sounds doomsday/science-fictiony, but if every human has a more or less unique way of thinking, wouldn't the robot, which is modelled after that uniqueness, also take on the same characteristic?

      Delete
    3. Wendy, yes, that's getting a bit sci-fi.

      Surely nothing depends on why the robot was created, but on what it can do. That's why Turing insists on total capacity; otherwise it's just a toy, that could have been designed in many different ways.

      Of course every organism, and robot, is unique. Even for robots that start out identical, their experiences quickly change them. Computer programs too, once they've been running, especially if they are learning programs.

      Delete
  2. The proposition of a theory of cognition which combines computation with some other dynamic elements is interesting in how much it surprised me. In my experience, competing ideas in the scientific realm are almost always singular and do not employ combinations of ideas. In this way I have grown very used to concepts remaining in line with Occam's razor.

    I don't think that this is a bad thing, however, as there are many areas in which the simplest explanation may be correct - or at least what we currently believe to be correct - but if anything is complex enough to defy this trend, then I would certainly think cognition would be the thing to do it.

    ReplyDelete
    Replies
    1. I agree with you; I was also surprised at the thought of combining computation with some other dynamic process! In fact, when I was first reading through the paper, I was honestly a little bit confused because I was wondering how dynamic processes could fit under the umbrella of computation, instead of considering the two as separate concepts working in tandem.

      My follow-up question to that would be... how exactly do you define or limit what a dynamic process covers? I'm wondering how including dynamic processes in the explanation of cognition is different from simply using homuncular explanations.

      Delete
    2. Esther, some natural candidates for dynamic processes are sensory input and motor output. Also any internal analogues, like the ones that underlie imitation and mental rotation. Because of the strong Church/Turing Thesis, all the internal processes could be done computationally too, but that will not necessarily be the simplest or most efficient way (which is what lazy evolution favors).

      Delete
  3. “Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli, as in paired associate learning. To learn to name kinds you first need to learn to identify them, to categorize them (Harnad 1996; 2005). And kinds cannot be identified by just rote-associating names to stimuli. The stimuli need to be processed; the invariant features of the kind must be somehow extracted from the irrelevant variation, and they must be learned, so that future stimuli originating from things of the same kind can be recognized and identified as such, and not confused with stimuli originating from things of a different kind.” (This passage is from the text Cohabitation: Computation at 70, Cognition at 20)

    This passage really struck me because it made me reflect on daily life and those small tests one sometimes has to complete in order to perform some action on a website and prove that one is not a robot. One common test is to select items of a certain category from a grid of pictures. For example, there could be a grid of 9 boxes, and you as the human user must select all squares that contain buses, or something along those lines. Ultimately this is a test to weed out bots based on their abilities being different from those of human web users. It relies on the ability of human cognition to reliably and accurately identify items from a category, in this case based on visual processing. Presumably robots struggle with this task, and the text quoted above relates this struggle to one of categorization. What is it about the process of categorization that has made it such an exploitable process for website designers wanting to distinguish humans from computers? I would imagine that with advances in machine learning, robots and computers may be getting more accurate and better at fooling this test. If one were to train a computer to simply do the rote associating of behaviourism to learn categories, without finding a common thread linking the individual instances, it would be unable to pass the test unless all of the images had been previously seen. This daily-life example supports Harnad's conclusion above, which is that rote-association behaviourism is not going to provide a satisfactory answer to what cognitive gears underlie categorization.

    It is interesting to consider the scaled-up version of the Turing Test that Harnad proposes in his conclusion in the context of this categorization example. Would being able to ground symbols such as “bus” in sensorimotor capabilities be enough to acquire the skill of categorization? Giving this robot the ability to have grounded symbols is (to some extent) giving it the ability to categorize. Allowing the symbols to be grounded resolves the issue in that the robot now “knows” what the category bus means beyond just squiggles that correspond to certain pixels. To test this knowledge, one would ask the robot something like “point to a bus,” or “is this a bus?”, which relies on the ability to categorize – and the ability to recognize the category of bus in the real world. It is not fair to just gloss over the issue of actually giving the robot sensorimotor capabilities to solve the symbol grounding problem, though. How are these abilities given? And how do they integrate or meld with the robot’s computational algorithms? When Harnad concludes at the end of the text that the Turing robot can be thought of as employing both computational and dynamic processes, it is unclear what the interface between those processes is. I am curious whether there is a possibility of an intermediary or hybrid lying in between these ideas, or whether they exist as a binary.
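
    To make the contrast concrete, here is a minimal sketch (in Python; the "features" and names are hypothetical illustrations, not a real captcha system) of rote association versus feature-based categorization:

    ```python
    # Rote association: memorize exact stimulus -> label pairs.
    rote_memory = {
        "image_001.png": "bus",
        "image_002.png": "not bus",
    }

    def rote_categorize(image_id):
        # Fails on any stimulus it has never seen before.
        return rote_memory.get(image_id, "unknown")

    # Feature-based categorization: extract invariant features of the kind
    # "bus" and ignore the irrelevant variation (lighting, angle, etc.).
    def extract_features(image):
        # Hypothetical, crude feature detector over an image description.
        return {
            "is_large_vehicle": image.get("height_m", 0) > 2.5,
            "has_many_windows": image.get("window_count", 0) >= 6,
        }

    def feature_categorize(image):
        f = extract_features(image)
        # Generalizes to new buses because it relies on features of the
        # kind, not on having seen this exact stimulus before.
        return "bus" if f["is_large_vehicle"] and f["has_many_windows"] else "not bus"

    # A never-before-seen bus: rote lookup fails, feature detection succeeds.
    new_bus = {"height_m": 3.1, "window_count": 10}
    print(rote_categorize("image_999.png"))  # -> "unknown"
    print(feature_categorize(new_bus))       # -> "bus"
    ```

    Only the second approach has any hope with a never-before-seen bus, which is the point the quoted passage makes about extracting the invariant features of a kind.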

    ReplyDelete
    Replies
    1. Sorry I just saw your comment about keeping my responses shorter, I will do that next week!

      Delete
    2. A "catcha" test is not a robot test, but it is a test in optical pattern processing (hence categorization). It's already easy to beat them; the tests will need to be made harder and harder. But none of them come to the level of the Turing Test (T2: digital data in, digital data out) let alone the robotic Turing Test (T3, things-in-the-world in, actions-in-the-world out). Google's already amazingly successful in processing digital images using deep learning. But that's only a tiny toy fragment of T2 (which includes not just pattern categorization but also language understanding and production). Grounding symbols, though, requires T3. We'll get to that.

      Delete
  4. This reading raises an important question. So far, the most effective improvements in AI and robotics are often self-taught and self-improving: the AI is given baseline code and told to improve itself, rewriting its own code to do so. This could be analogous to “thinking,” as the process of arriving at different outputs is constantly changing and becoming more efficient. But under this model, we would have no way of knowing all of the improvements the robot/AI is making to itself. So if this is a valid pathway to artificial consciousness, then we would still not have any clearer idea of what consciousness is, other than knowing that we can create it. And if it is a learning robot (which, if conscious, it would necessarily have to be in order to act and feel like humans do), then its process of thinking would be constantly changing. How would we be able to interpret and understand processing that evolved on its own through the “will” of a robot?

    Building on this previous point: since behavioural equivalence is a characteristic of both cognition and computation, it would also be present and valid for AI robots. So if the processing (consciousness) of any two robots that pass the Turing Test were to be compared, it can be argued that it would take forever to completely compare the processing of the two robots. This would be all the more true if both robots were learning robots, as they would continue to evolve independently of each other. So the only way of fully evaluating their individual consciousnesses would be to consider their processing as a whole entity rather than as discrete computations, evaluating their behaviour and “feeling” rather than computations or sensorimotor interactions with the world. Wouldn’t this only prove that the robots are good at imitating people, rather than giving absolute certainty that this is consciousness? I feel like this would only lead us back to the starting point of uncertainty about what consciousness is as a whole.

    ReplyDelete
    Replies
    1. To pass the Turing Test (of being able to do what we can do) a robot would of course have to be able to learn, hence of course it would keep changing (just as we do).

      There is no way to test whether the Turing robot is conscious (i.e. feels) because the Turing Test can only test what the robot can do. But with the Turing robot we would at least know one way any causal mechanism at all could do what we can do.

      The problem of knowing whether the robot feels is not the easy problem, nor the hard problem. It is the other-minds problem. There's no way to know for sure whether anyone but oneself feels (Descartes), but you don't have to know for sure. We believe what people say, and how they act. With mammals and birds the probability is almost as high; fish and reptiles too. And it's beginning to look the same with invertebrates.

      Turing just adds that there's no more reason to doubt that a Turing robot feels than that anyone else does.

      Delete
    2. Professor, then I have a question about the Turing Test: are there multiple versions of it or are they all dependent on textual responses between a mediator and the machine/person via email or whatnot?
      I feel like so many experiences and interactions that we have as humans are spontaneous or can be unexpected, and we react a certain way (with all the cognitive biases and heuristics that we use to think fast on our feet). Would situations in the Turing Test reflect that accurately?
      Also, is the Turing test something that is actually used in practice, and not just talked about theoretically? Sorry if that sounds kind of naive; it's honestly my first time really diving into this topic. I'm guessing no machine has ever successfully tricked someone into thinking it was human, otherwise we wouldn't be having discussions on what cognition/computation still are...?

      Delete
    3. Esther, there are multiple TTs: T2 (verbal only), T3 (verbal + robotic), T4 (verbal + robotic + brain-like)...

      Trying to pass TT is the goal of cogsci, the "easy problem" of explaining, causally, how and why organisms can do all the things they can do. (We're very far from solving it, and partial solutions are just toys.)

      All this will be explained during the next couple of weeks.

      Delete
  5. "How do I identify her picture? Those are the real functional questions we are missing; and it is no doubt because of the anosognosia – the “picture completion” effect that comes with all conscious cognition -- that we don’t notice what we are missing: We are unaware of our cognitive blind spots – and we are mostly cognitively blind. "

    The distinction between conscious cognition and cognitive blindness reflects a fundamental question that comes with the theory of computationalism. A Turing machine's most basic computation relies on a system of 0s and 1s. The machine can then theoretically compute all that is computable. To the extent that our conscious cognition involves decision making, the system of 0s and 1s is an excellent analogy. However, perhaps 1s and 0s are not entirely representative of the most basic cognitive process. Our unpredictability and irrational behavior are perhaps the result of an event in the mind (possibly linked to our sensorimotor system and/or internal symbol system) which we cannot yet understand/simulate due to a cognitive blindspot.

    This is similar to a bug in software that we are unaware of until it causes a problem when run with a particular input.

    "As we have seen, there are other candidate autonomous, non-homuncular functions in addition to computation, namely, dynamical functions such as internal analogs of spatial or other sensorimotor dynamics: not propositions describing them nor computations simulating them, but the dynamic processes themselves, as in internal analog rotation; perhaps also real parallel distributed neural nets rather than just symbolic simulations of them."

    This extract also evokes the same idea, with dynamical functions and processes perhaps being the bridge between computation and our cognitive blindspots.

    ReplyDelete
    Replies
    1. What do you mean by "cognitive blindspots"? Things we don't know, or don't know how to do? Things we can't know or learn to do? Yes, cogsci's mission is to explain all that (the easy problem).

      Or do you mean the difference between (Exp) things you can do and (think you) know how you do them ("explicit knowledge") and (Imp) things you can do but don't know how you do them? In cogsci, you have to explain how and why we can do both Exp and Imp. And almost all of it is Imp.

      Consciousness (feeling) on the other hand, is another, harder, matter...

      Delete
  6. Where are the limits of the computational argument? Specifically, the notion that “the physical details of the hardware are irrelevant [computational states are independent of the dynamical physical level]”. I think this argument is rather convincing as a general rule for how cognition works: there are psychological states and states of consciousness that seem universal (i.e. cogito ergo sum). But I think that the argument isn’t without its caveats, especially when it comes down to the topic of intelligence.

    Before getting into the substance of my argument, I’d like to explain the position I am coming from. Intelligence is a very broad spectrum. In terms of IQ (despite its caveats), it ranges from extremely low to extremely high. This variation is not simply due to improper testing or cognition, but also to physical features that come with extreme variation. According to Plomin and Deary, intelligence is ~60% heritable in adulthood, and the heritability stems from intelligence being derived from thousands of DNA variants with extremely small effect sizes. (The effect sizes are small when each DNA variant is isolated, but working in concordance with each other, they form a person’s intelligence.) [1] This would mean that the physical details of the computational device (a human being) do matter. Now, I am not trying to slap on a molecular argument here and complicate things, because that is not what the field of cognitive science is trying to figure out, but I do think it matters that we establish an origin source for the variation in intelligence.

    The reason I think this matters is that if the dynamics of the physical level were irrelevant to the function of the software, then this variation would be non-existent. This phenomenon manifests itself every time someone’s IQ is tested, because there are many ways to get the same answer on an IQ test (it depends on what is being tested, but this is a staple in intelligence psychometrics). As Donald Hebb stated, the true object of study is the “intervening internal process [referring to the 7-2 situation]”. If that is the case, then we would have to acknowledge that different dynamics will yield different processes and different mental trajectories for coming to the conclusion (an answer on an IQ test, or 7-2).

    I find it rather difficult to imagine another explanation for how one could come to the same conclusions with different mental tactics (i.e. IQ testing or 7-2) and the variation for the conclusions cannot be explained without differing physical dynamics. Though, as I write this I can surely imagine that my reductionism definitely has holes in it as well.

    [1] Plomin, R., & Deary, I. J. (2015). Genetics and intelligence differences: five special findings. Molecular psychiatry, 20(1), 98-108.

    ReplyDelete
    Replies
    1. Intelligence is the human capacity to do the things humans can do (that we consider "cognitive," rather than "vegetative" like balance or temperature control). It comes in degrees, but cogsci has to explain it all, and the Turing Test has to generate the generic capacity to do it all: not the capacities of a giant like Einstein (that can come later), but those of ordinary pygmies like most of us.

      Implementation-independence is not an argument; it is a property of computation.

      Genetics would be part of Turing Test T4 (see above). Dynamics is already part of T3.

      At this point in the course, we are first trying to understand what computation is, and what powers it has (Weak and Strong C/T), what it can do, and how. Then we will see what computation can't do (computationalism), and why.


      Delete
  7. Small note: I don’t know much about cognitive science, so I’m out of my depth. I apologize in advance if I make any incorrect statements or irrelevant claims!

    This article brought to mind the subjective and non-static nature of cognition. Professor Harnad asks: “can introspection tell me […] how I learn from experience? How I reason? How I use and understand words and sentences?” (3). Every individual is shaped by their genes as well as by the accumulation of their experiences. This impacts how we interpret situations, how we learn from them, how we react, and how we then in turn reason and make future judgements. People develop cognitive biases, which lead to the same situation impacting individuals in a variety of different ways. I think this creates an added level of complexity when it comes to understanding cognition and attempting to create machines that would pass the Turing Test.

    Harnad’s section on introspection also made me wonder whether we can ever study cognition without a certain level of introspection. We are driven by our brains - the very organ we are attempting to study and understand. Won’t there always be a degree of introspection involved in the study of cognition?

    ReplyDelete
    Replies
    1. I agree with you that individual genes and experiences will inevitably influence cognition; however, no matter the individual differences, everyone still cognizes. In other words, there must be some basic level of computation (or dynamical function), beneath the complexities, that can explain cognition for a given physical system. Thus, a computer in a Turing Test might not have the random variations that a human has genetically, but it might still be able to fool a person. For example (some consider it cheating), a team of researchers in the past created a computer interface to try to pass the Turing Test, but programmed it to present itself as a young boy who didn't speak English very well. This is in some way an assigned "experience" that would influence the computer's answers, and it also makes it more difficult for the human to tell mistakes made by the computer from mistakes made by a young second-language learner. Here the computer is designed to be unique, which addresses the added levels of complexity you discuss in your comment.

      To address your second comment: yes, there will always be introspection involved in how we think about and explain elements of our minds. However, as I understand it, the important take-away is that introspection will never explain HOW cognition functionally occurs, and thus can never be used seriously in cognitive science.

      Delete
    2. I think that your question about the degree of introspection involved in the study of cognition is an interesting one. If we go back to Prof Harnad's example in the lecture about remembering the name of your grade three teacher, introspection doesn't seem very productive in answering how we access memories - it seems only to answer how it personally feels to remember something. For example, one might have the experience that the teacher's name just "popped" into their head, or maybe an image of that teacher popped into their head, prompting the name. Some people employ the "method of loci" strategy in order to remember things such as a grocery list. Briefly, this technique has you imagine placing the items that you want to remember along a familiar route, perhaps through your apartment; then, at the grocery store, to remember what was on your list, you imagine yourself walking along that familiar route and noting all of the items along the way. Introspection in this case would just involve someone explaining how they walk this familiar route in their apartment, but this doesn't really address the how of cognition. How do the virtual apartment and its contents appear in one's mind? I think introspection will always be involved in investigating how cognition feels, but, as Matt stated, it is limited in actually explaining how the process happens.

      Delete
    3. Claire, when we get to the Turing Test, you will see that the successful candidate has to be able to do (hence its internal mechanism will be the explanation of) everything we can do.

      Matt, the TT is not about "fooling" anyone; it's about reverse-engineering how organisms can do what they can do, by designing something that can do it (which will also be an explanation of how it can be done). Variability is trivially easy to produce; it's ability that is more difficult to produce and hence to explain.

      (If someone thinks they can figure out how to pass the TT by introspecting about how they themselves do all the things they can do, lots of luck to them!)

      Stephanie, actually the 3rd-grade schoolteacher example was not mine, but D.O. Hebb's. But your point is right.

      Delete
  8. I would like to put some more time and thought into the ability of the Imitation Game to be of any help in differentiating cognitive systems from non-cognitive ones, as it seems to be at the center of some of the problems of this course. I have an issue with the Imitation Game being an appropriate test to determine whether a computer is “intelligent”. It seems to me that the test measures how similar the behaviour of the test-entity is to that of its standard, healthy and normed human opponent, more than it differentiates a cognitive system from a non-cognitive one. Leaving aside what Turing actually meant by “intelligent”, I think that I need some more clarification as to how the Imitation Game can help us differentiate a cognitive system from a computational one. Suppose we were to play the Imitation Game with someone who had some sort of mental impairment and who thereupon continually lost to their normed and healthy human opponent - would we consider them void of cognitive abilities, or not human-like altogether? Beyond the obvious ethical issues this entails, perhaps what I am getting at is that the Turing test makes for a somewhat anthropo-centred (human-centred, for kid-sib) account of cognition, arguably also one centred on our own cultural criteria of what characterises cognition. The test answers the question “does this entity resemble me”, more than it answers the question “is this entity a cognitive system?”.

    Perhaps we need other tests, or new reference points for the Turing test, that would expand our definition of cognition beyond that of having human-like behaviour. The question then becomes: how far beyond human-like characteristics do we want to push this definition? Some biologists have gone as far as expanding the definition of cognition to encompass any “structural coupling” of a system to its environment (e.g. bilateral perturbations between the system and its environment, whether they be biological interactions, chemical interactions…). Of course the problem with such a definition is that it would also include any basic computational system, as its own hardware becomes coupled with its environment (i.e. its user) every time it is used.

    ReplyDelete
    Replies
    1. | Quote: "The test answers the question “does this entity resemble me”, more than it answers the question “is this entity a cognitive system?”."

      This is an interesting point. However, in our search to describe what a cognitive system is (or would be like in a robot), we rely on what we know. We know that /we/ are cognitive systems, and therefore try to find the same thing in other entities. Naturally, this involves a “does it or does it not resemble me” kind of game, since we would otherwise have no basis for what we are looking for/looking to define.

      I do agree with you that this is not a perfect plan. What if we find out that humans are not, after all, truly sentient? Would that not change the whole game, so to speak? What we believe to be a cognitive system within us might then be only a poor imitation (no pun intended) of "the real thing".

      Delete
    2. Matthew, You'll find out more about what the Turing Test is about. It is not a game; and it is not imitation. It's about generating all of our (generic) cognitive performance capacity (i.e., what we can do). But the capacity needs to be present. There is no TT for what a person in a chronic vegetative state can and cannot do. And the cognitive capacity of every species will need to be explained by cogsci, but we already know what a generic human can and cannot do; for nonhuman species, even if it's less in some respects, it's harder, because we don't know yet what other species can and cannot do (let alone whether and what they feel). We are also better at mind-reading our own species than other species (though we're better at that than we might think).

      But there's more than one TT (see T2, T3, T4 in other threads above).

      Eli, what do you mean by "What if we find out that humans are not, after all, truly sentient?" That we are all zombies? How would you find that out, since you can't even know for sure whether anyone other than you is sentient? (Descartes' Cogito and the other-minds problem.)

      Delete
  9. "The decorative phenomenology that accompanies the real work that is being done implicitly is simply misleading us, lulling us in our anosognosic delusion into thinking that we know what we are doing and how." (Harnad, 2009)

    I love this line. Succinctly said.

    ReplyDelete
  10. “…the fully robotic version of the TT in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself can mediate the connection directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about”

    So, I agree with the idea that there’s more to cognition than computation, and that making a TT that can evaluate all those different aspects of cognition is a step in the right direction, but how would we do that? And maybe, even more importantly, how will we know that the robot is actually experiencing the things that we deem to be cognitive? I’m going to pull a mini-Descartes here and say that the only thing we can be sure of is ourselves, so how can this robot convince us that it is experiencing what we’re experiencing?

    I think many of you have heard of this argument: how are you sure that the color you see as red, I also see as red? What if we have just learned the same name (“red”) for whatever color each of us is seeing at that moment? The science behind color vision has shown that we’re both receiving the same wavelength (say roughly 700 nm), which is detected by the same cones in your eyes as in mine, and then traced down through the visual perception pathways to the brain. Assuming both of our brains are running the same software (and since we’re both “conscious” I’d say that’s an okay assumption to make) and coming to the same conclusion, I’d expect us to be seeing the same color; and since we’re both sentient beings, I’m choosing to believe that we both experience and feel red the same way.

    Carrying on with that logic, however: if we create a robot with the right software that also detects the wavelength of 700 nm and says “I see red”, are we going to believe it is experiencing it? Or is it simply detecting a wavelength and outputting a name for it (through a complex neural network)? I’m more inclined to believe that another human is experiencing red than a robot with the same software and circuits, because I believe humans have sentience. In this (idealized) case all that differs between me and the robot is our hardware, so why am I still less inclined to believe that it will experience the “feeling” of red? Sorry this got a little long; I’m just trying to put my existential crisis into words.

    ReplyDelete
    Replies
    1. You have to scale up to the TT gradually, trying to capture more and more of our abilities. But until it's all of them, indistinguishable from the rest of us, it's not a TT but t0 (a toy, that you can build in countless different ways that may have nothing to do with the way our brains do it).

      You (and Descartes) are right that we can only be sure about ourselves, that we feel. That's the other-minds problem. But the TT, too, cannot test feeling, just doing. But that's the same for us, in real life, with anyone other than ourselves. So Turing just points out that once you can no longer tell them apart, don't try! Indistinguishable performance capacity (i.e., doing-capacity) is the best that cogsci (or any sci except sci-fi) can hope to do!

      Delete
  11. I liked the detailed refutation of introspection that Harnad included in this article. It has been demonstrated many times that our ability to have insight into our own cognitive processes is very limited. Not only do we struggle to explain how we arrived at a behavioural conclusion (such as remembering your third grade teacher), but there are also many unconscious processes that are inaccessible to introspection. As the article (and Zenon) said, many theories of the mind, such as imagery theory, only defer the explanatory debt, and so the essence of cognitive science is to break down these problems into questions that don't rely on some kind of homuncular thinking.

    Pylyshyn's computational states and the independence between software and hardware is reminiscent of behavioural equivalency and also of Turing's claim that anything can be a Turing machine as long as the functional states are the same. I am also not convinced that this solves the mind/body problem and I also take issue with how he equates computation based on arbitrary symbols (like binary 0's and 1's) with computation based on meaningful ones. For example, in humans, although computation might be based on electrical impulses (firing of neurons), with the final product being, say, thinking of a word, the intermediate steps in this computation (let's say stringing together meaningful letters) have meaning, whereas an incomplete sequence of 0's and 1's wouldn't mean anything. I just thought this distinction might be useful in comparing the computations of machines and of people.

    ReplyDelete
    Replies
    1. It is not that "anything can be a Turing Machine" but that (1) anything can be used as a symbol; (2) computation (software) is hardware-independent (it's just symbol manipulation rules, that can be executed by lots of different hardwares; (3) just about anything can be simulated by computation (the Strong C/T Thesis) -- but the thing and its computer simulation are not the same kind of thing: one is (usually) a dynamical system, the other is just a symbolic simulation of that dynamical system (executed by another kind of dynamical system: the computer's hardware).

      The simulation can be based on Weak Equivalence (producing the same output for the same input) or Strong Equivalence (producing the same output for the same input using the same algorithm [program, software, rules]). But Weak vs. Strong Equivalence only makes sense when both systems are just computers doing computation, because if one of them is a waterfall and the other is a computer simulation of a waterfall, the waterfall is not executing an algorithm at all: It's just water, falling.
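
      To make the distinction concrete, here is a minimal sketch (hypothetical Python, not from the reading): two programs that are weakly equivalent (same output for the same input) but not strongly equivalent (different algorithms).

      ```python
      # Weak equivalence: same output for the same input.
      # Strong equivalence: same output AND the same algorithm.
      # These two functions are weakly, but not strongly, equivalent.

      def sort_by_insertion(xs):
          # Algorithm 1: insertion sort.
          out = []
          for x in xs:
              i = 0
              while i < len(out) and out[i] < x:
                  i += 1
              out.insert(i, x)
          return out

      def sort_by_merging(xs):
          # Algorithm 2: merge sort.
          if len(xs) <= 1:
              return list(xs)
          mid = len(xs) // 2
          left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
          merged = []
          while left and right:
              merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
          return merged + left + right

      data = [3, 1, 2]
      assert sort_by_insertion(data) == sort_by_merging(data) == [1, 2, 3]
      ```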

      This is why S/W "equivalence" is so important to computationalists (i.e., those who believe that cognition is computation): If cognition is just computation, then your brain is just doing computation (symbol-manipulation) when you are thinking, and the computer is not just simulating it, it is doing the same computation, where "weakly the same means" it gives the same output for the same input and "strongly the same" means it gives the same output for the same input using the same algorithm.

      So is thinking like water, falling, or like a computer, computing? (Turns out it's some of both.)

      "Functional states" means "computational states" -- that series of states that a Turing Machine passes through when it is executing a computation (reading 0's and 1's on a tape, writing, erasing, advancing and halting).

      But computation is, by definition, manipulating symbols based on their shapes, not their meanings. The question of meaning is not so simple. We know the meanings of words, but what does meaning mean computationally? How do you get semantics (meaning) out of syntax (symbol-shape-manipulation)?

      That's the symbol grounding problem (Week 5).

      Delete
  12. “The answer in the case of syntax had been that we don’t really “learn” it at all; we are born with the rules of Universal Grammar already in our heads. In contrast, the answer in the case of vocabulary and categories is that we do learn the rules, but the problem is still to explain how we learn them: What has to be going on inside our heads that enables us to successfully learn, based on the experience or training we get…”

    Though I would agree that most categorisations stem from learning rules over time, I can’t help but wonder whether that is the case for all of them, as is suggested in this section. That is to say - are there any forms of categorical perception that we are intrinsically able to accomplish (or that, at the very least, come more easily to us than a sufficient category learning history could explain)? One example in particular comes to mind. In another Cognitive Science course I took, we discussed the theory that we evolved the ability to perceive in three colours because it more easily allowed us to distinguish a fruit (more often tinged with red) from its leaves (more tinged with green). In this capacity, human individuals could be considered naturally predisposed to distinguish the categories of food from non-food (or more specifically, the nutritious part of a plant from the less-nutritious part).

    In order to flesh out the theories of categorical perception, I think that in addition to investigating how we learn rules, we should also consider these more implicit rules that we already know or are more predisposed to learn in order to avoid bias in experimentation.

    ReplyDelete
    Replies
    1. In sensorimotor categorization, what our brains learn is how to detect the features (not rules) that distinguish the members of the category from the non-members. Learning syntax can be the learning of rules, as in the case of ordinary grammar (OG). But when we get to Chomsky, we'll learn there's another grammar (Universal Grammar: UG) that is innate. Some sensorimotor categories are innate too, or almost innate, for example, color categories.

      Learned/Innate is not the same dichotomy as Implicit/Explicit.

      Delete
  13. In "Cohabitation: Computation at 70, Cognition at 20", Professor Harnad concludes that we are still unsure of the definition of cognition.

    In the past, I naively believed that the human brain is an information-processing system and that cognition and computation go hand-in-hand with one another. I thought this because a computer can manipulate, store, retrieve and process information—all of which the human mind can do as well!

    Additionally, I guess for the simplest of problems (e.g., addition), a Turing machine can operate similarly to the human brain. And, figuratively speaking, the human brain is continuously computing, being fed a tape with symbols on which it can perform read and write operations and perform functions, too.
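
    To make the tape-and-rules picture concrete, here is a minimal sketch (in Python; the rule table is a toy of my own, not anything from the reading) of a Turing-style machine that adds one to a binary number written on its tape:

    ```python
    # A toy Turing-style machine: a tape of symbols, a read/write head,
    # and a small rule table. This one increments a binary number
    # (the head starts at the rightmost digit). It illustrates
    # "symbols on a tape plus rules," nothing more.

    rules = {
        # (state, symbol read): (symbol to write, head move, next state)
        ("inc", "1"): ("0", -1, "inc"),   # carry: 1 -> 0, keep moving left
        ("inc", "0"): ("1",  0, "halt"),  # no carry: 0 -> 1, done
        ("inc", " "): ("1",  0, "halt"),  # ran off the left edge: new digit
    }

    def run(tape_string):
        tape = [" "] + list(tape_string)  # pad the left end with a blank
        head, state = len(tape) - 1, "inc"
        while state != "halt":
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape).strip()

    print(run("1011"))  # -> "1100"  (11 + 1 = 12, in binary)
    ```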

    However, ground-breaking thought experiments like John Searle's Chinese Room Argument refuted the theory that cognition is just computation. Searle argued that although a Turing machine can correctly apply rules to manipulate symbols and imitate the abilities of the human mind, it does not have any understanding of meaning. Thus, he refuted the "strong AI" hypothesis (computationalism), not the strong Church-Turing thesis (which only claims that just about anything can be simulated computationally).

    Moreover, what is unique and special in the human mind is its emotional intelligence. Our emotions regularly impact our computation of stimuli in the external environment and decision-making. So, if "the only time we can have a concrete understanding of cognition is when a system can behave in a way that is indistinguishable to a human", we might not have the definition of cognition for a while... Will a Turing machine ever be able to experience, process, understand and react to emotions or feelings like us? Even with emotions or feelings, will there ever be a Turing machine that allows them to affect its responses like us? Finally, will there ever be a Turing machine able to use adaptive learning and past experiences to determine its next moves?

    ReplyDelete
    Replies
    1. I'm not sure what "emotional intelligence" is -- but what Searle proved was that doing the computations that pass the Turing Test (T2) for understanding language do not generate the feeling of understanding in the system that is doing the computations (symbol-manipulations), whether the symbol-manipulator is a digital computer or a human being.

      Delete
  14. "There is still scope for a full functional explanation of cognition, just not a purely computational one."

    This quote stood out to me in Harnad's "Cohabitation: Computation at 70, Cognition at 20", because it challenged my notion of theories of cognition. I always thought that functionalism was a less sophisticated explanation of cognition, and computationalism was the new and improved version of functionalism. After reading the paper I realized that we do not need to abandon functional explanations of cognition when a new theory comes along. Harnad argues that computation alone cannot explain cognition. I found it interesting that what I had perceived as a more basic theory can support computationalism and give us a deeper understanding of cognition.

    ReplyDelete
    Replies
    1. Computationalism is a specific form of functionalism. It has the power and specificity of computation behind it (and also its liabilities). It is a method, rather than a specific model. But it makes one hypothesis: that cognition is some form of (Turing) computation.

      Functionalism is much vaguer, just a philosophical notion -- that some sort of a physical, causal mechanism can explain our behavioral and cognitive capacities.

      Delete
  15. With his Chinese Room argument, Searle showed that a machine being able to hold a conversation through email didn't make it intelligent, because it could just be using previous instances of the use of symbols to replicate an understanding of these symbols without actually understanding them. So the system is missing the feeling of understanding, right? But doesn't the other-minds problem mean that we also don't know whether anyone other than ourselves has that feeling? I think I understand at an instinctive level what he's getting at, because I instinctively assume that other human beings have the feeling of understanding as well, whereas this would need to be something that is proven in a machine. In theory, however, isn't other people's feeling of understanding symbols just as uncertain as a machine's?

    ReplyDelete
    Replies
    1. We'll cover Searle's Argument in Week 3, but the problem is not "using previous instances of the use of symbols" but the fact that symbols are just shapes, not meanings, and if a computer program can text with you, it's not because it's understanding what you are texting.

      Yes, understanding language feels like something, and you feel it whereas the computer feels nothing. But organisms are machines too (i.e., cause/effect mechanisms).

      We assume other people understand because they behave just like us, especially when it comes to language. The Turing Test is a test of that.

      Delete
    2. Now that we have reached week 3 and Searle's CRA, it is interesting to see how you brought together the earlier topics and the CRA before we had a whole discussion. It turns out that yes, what is missing from the CRA is Searle's explanation of /how/ he knows that he doesn't understand Chinese, even though he can manipulate the shapes to give an output such that someone who does understand Chinese would assume it came from a human brain, cognizing the same way and understanding Chinese. That is also why his CRA does not really refute the TT, but rather computationalism. And he fails to successfully do that, because Searle's imagination allows him only two possibilities for what cognition actually is. Denying computationalism, he instead assumes a T4 machine-brain is the only alternative. He also seems to suggest that instead of reverse-engineering the brain, we ought to be trying to recreate its structure and function through studying neuroscience and biology. Maybe we should be doing both and trying to synthesize the information to reduce our uncertainty?

      Delete
    3. What I also find really interesting is how you bring in the questions of the Other-Minds problem and "mind-reading." I think it was part of Prof. Harnad's critique of Searle's CRA that if the CRA's claims against computationalism also render the TT moot as a way to determine/reverse-engineer whether and how machines "think," then Searle is suggesting that we must determine in some other way how and why other minds can or cannot feel what it's like to think - which just throws us back into the Other-Minds Problem. To date, the best test of whether some other mind/machine is thinking like us, such that we believe it to be the same kind of mechanism (cause-effect system), is the TT, which I believe is not dissimilar to the concept mentioned called "mind-reading." We use our intuition and the appearance of certain stimuli to assume what kind of machine (feeling or non-feeling) is texting/emailing with us.

      Delete
  16. « The answer to the question must be cognitive; it has to look into the black box and explain how it works. But not necessarily in the physiological sense. Only in the functional, cause-effect sense»
    A fundamental argument of computationalism concerns the independence of wetware and software, or in the case of human minds, brains and cognitions. As Harnad reminds us in his paper, physical implementation is not where the answers to questions about cognition will be found. Although that may be true for purely computational systems, it seems to me that this argument might fail when we accept, as a consequence of Searle’s rebuttal of computationalism, that intelligent systems might not, after all, be purely computational. Moreover, reading through this paper reminded me, especially towards the end, of « What might cognition be, if not computation » by van Gelder. This interesting paper challenged computationalism and posited that complex systems such as cognitive systems might be dynamical in nature. This implies for example that they are non-representational, or at least that it isn’t more informative to describe them as such. If we are to hold on to a « full functional explanation of the brain », how would such an explanation account for the dynamical components of cognitive systems? Moreover, wouldn’t it be necessary to refer to physical aspects of cognitive systems in order to explain dynamical processes?

    ReplyDelete
    Replies
    1. Van Gelder's 1998 BBS target article (which was published in BBS when I happened to still be the editor of BBS!) reminds us of the obvious fact -- already implicit in Turing's definition of computation -- that computation is implementation-independent: the software is independent of the hardware (although it needs to be executed by some hardware or other).

      It follows that not "everything" in the universe is (or can be) just computation (contrary to Nick Bostrum's hypothesis, which, more important than its being wacky, is actually logically incoherent, failing to distinguish between computer simulation and hardware, just like the movie The Matrix).

      Van Gelder simply pointed out that every category has to have a complement (i.e., members and nonmembers): If there are X's, there must also be non-X's. So if there are things that are computational, there must also be things that are noncomputational.

      The computation’s hardware is the first candidate for not being just computation, but in fact all physical systems, not just computers executing computations, are noncomputational. Physical systems are called "dynamical systems," which means they change states in time according to the laws of physics (not symbol manipulation rules — except if they happen to be executing a computation).

      So if you write a computer program to simulate the moon — or the mind — the software is independent from the hardware. But the hardware cannot be simulated hardware, any more than a real robot’s sensors can be simulated sensors! (If it is one of Pylyshyn’s “virtual machines” — say a Mac simulating a PC, it’s all just software running on hardware, whatever the real hardware happens to be. Everything “above the level of the hardware” is just what Searle calls “squiggles and squoggles.”) Ditto for simulating a robot: a simulated robot passing a simulated T3 is really all just squiggles and squoggles, neither moving nor thinking, just as a simulated furnace is not heating, or a simulated plane is not flying.
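
      As a minimal illustration (a hypothetical sketch, not anything from the text): a program that "simulates" a furnace only updates numbers that we interpret as temperature; nothing in any room gets warmer unless real dynamical hardware is attached.

      ```python
      # A simulated furnace: just squiggles (numbers) being updated.
      # The "temperature" is only the interpretation we project onto
      # the symbols; no actual heating takes place.

      class SimulatedFurnace:
          def __init__(self, room_temp_c=18.0):
              self.room_temp_c = room_temp_c  # a number, not a room

          def step(self, minutes=1):
              # Symbol manipulation standing in for heating dynamics.
              self.room_temp_c += 0.5 * minutes
              return self.room_temp_c

      furnace = SimulatedFurnace()
      print(furnace.step(10))  # prints 23.0, but nothing has been heated
      ```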

      But Searle had already shown in 1980 (also in BBS) that cognition could not be just computation. He thought this meant that it had to be something that only the brain can do, so we should only study that. But how to squeeze out the solution of the easy problem from studying the brain? That will be Fodor’s question in Week 4. Van Gelder just points out that any dynamical system that is not executing a program is not computation at all. (And that even in a computer executing a program, the program is “independent” of the hardware.) So there are plenty of dynamical systems in the world that might be eligible to be (all or part) of the system that can pass T3. The brain is one; there may be others.

      Delete
  17. “The gist of the Turing Test is that on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition.”

    This quote interested me because, while I think it’s implied, I don’t think it’s necessarily true.

    Deep learning crosses the line of meta-programming that defines whether we fully understand what we are programming or not. So I think it’s reasonable to theorize that in the future we could create a program that is able to improve itself to the point of passing the Turing Test, without its core functionality being immediately interpretable (to us).

    This calls into question the assumption of whether we need to be able to explain ~how~ something is intelligent, in order to determine that it ~is~ intelligent.

    We don’t understand our own cognition, so how can we use our own ability to understand ~how~ another entity cognizes as the determiner of whether it is intelligent?

    ReplyDelete
    Replies
    1. If computationalism were true -- i.e., if cognition were just computation -- I'm sure we'd be happy to accept a learning algorithm (like deep learning) as the explanation of how we learn (if it could really learn everything we can learn), regardless of whether we can trace all the changes in the software that result from actually learning something.

      But computationalism is not true; so passing TT takes more than a learning algorithm: There's also a symbol grounding problem to solve.

      Delete
  18. From: “Cohabitation: Computation at 70, Cognition at 20” by Steven Harnad

    “Computation already rears its head, but here too, beware of the easy answers: I may do long-division in my head the same way I do long-division on paper, by repeatedly applying a memorized set of symbol-manipulation rules -- and that is already a big step past behaviorism -- but what about the things I can do for which I do not know the computational rule? Don't know it consciously, that is. For introspection can only reveal how I do things when I know, explicitly, how I do them, as in mental long-division. But can introspection tell me how I recognize a bird or a chair as a bird or a chair? How I play chess (not what the rules of chess are, but how, knowing them, I am able to play, and win, as I do)? How I learn from experience? How I reason? How I use and understand words and sentences?”

    The above citation explains in part why behaviorism is inadequate for understanding our minds. It points to the fact that behaviorism overlooked the unconscious functional processes that enable certain behaviours. Furthermore, this segment of the text seems to suggest that our mind performs unconscious computations as well as being able to represent some computations consciously. For example, our mind can consciously represent (by introspection) some basic arithmetic computations, such as the symbol-manipulation rules of multiplication, and it can also perform unconscious computations (computations whose mechanisms or rules we do not consciously know), such as remembering the name of our third grade teacher.

    I have a general question out of pure curiosity. It is interesting to recognize that some computations I can represent consciously while others I just can’t. Why have humans evolved to be able to represent only some computations consciously, and not others? A provisional answer might be that there are evolutionary reasons for being able to represent some computations explicitly, and no such reasons for the others.
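
    The explicit/implicit contrast in the quoted passage can be made concrete with a short Python sketch (purely illustrative; the function and the example numbers are not from the text): for column addition, as for mental long division, the memorized symbol-manipulation rule can be written out step by step, whereas no comparable rule can be read off by introspection for recognizing a bird or retrieving a teacher's name.

    # Explicit cognition: a rule we can state, and therefore program -- here,
    # grade-school column addition done purely by manipulating digit symbols,
    # much as one does long division "in the head" by memorized rules.
    def column_add(a: str, b: str) -> str:
        digits = "0123456789"
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        result, carry = [], 0
        for da, db in zip(reversed(a), reversed(b)):
            total = digits.index(da) + digits.index(db) + carry  # apply the memorized rule
            carry, digit = divmod(total, 10)
            result.append(digits[digit])                         # write down the result symbol
        if carry:
            result.append(digits[carry])
        return "".join(reversed(result))

    print(column_add("478", "365"))   # "843": every step of the "how" is explicit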

    Replies
    1. Yes, there is both explicit cognition (when we know what algorithm we are executing) and implicit cognition (where we don't know how we are doing what we're doing, whether it's computational or not).

      One advantage of explicit cognition is that you can verbalize it and communicate it to others.

      A harder question than "why is some cognition explicit and some not?" is: "why is some cognition felt and some not?"

  19. In "Cohabitation: Computation at 70, Cognition at 20," Prof Harnad argues that the approach to take to solve the symbol grounding problem should be to scale up the Turing Test so as to build a robot that can interact with the world as a human could (T3), given sensory-motor inputs and outputs. However, it seems to me that Searle's objection would remain, even if we could build a robot and scale up the Turing Test to T3. The robot might, for example, receive visual input from some kind of camera, but this input would then be translated into bits (as photons are transduced into action potentials in the brain), and these bits would just be more uninterpreted formal symbols that would undergo computations in the CPU or brain of the robot. In other words, we would effectively have Searle inside the head of a robot getting Chinese words which correspond to visual input, but Searle would still not understand the meaning of those words. It seems to me that no matter how the formal symbols are obtained (i.e., what input is used), the symbol-grounding problem remains, and we must explain how the symbols--once they are in the mind/brain--hold meaning or have semantic properties.

    Suppose we program a robot such that when it receives the sensory input of an apple, it takes 1100001100 as input (so apple = 1100001100). The robot might then perform computations using 1100001100 as input and then output the verbal expression, "This is an apple." However, I do not believe the robot would "understand"--at least in the way we understand--that the apple is an apple; it would simply be performing formal operations over a pre-defined input that the robot builder has associated with the external stimulus "apple."
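
    A minimal Python sketch of this scenario (the lookup table, its name, and the second bit pattern are hypothetical, added only for illustration; the apple code is the one from the example above): the robot's "knowledge" is nothing but a mapping from one uninterpreted string to another, fixed by the robot builder, so nothing in the program connects the symbol "apple" to apples in the world.

    # Hypothetical sensor codes, as in the example above (apple = 1100001100).
    SENSOR_CODES = {"1100001100": "apple", "0101110010": "chair"}

    def respond(sensor_bits: str) -> str:
        # Pure symbol manipulation: one uninterpreted string is exchanged for
        # another, pre-associated by the robot builder, not by the robot.
        label = SENSOR_CODES.get(sensor_bits, "object")
        article = "an" if label[0] in "aeiou" else "a"
        return f"This is {article} {label}."

    print(respond("1100001100"))   # "This is an apple." -- with no understanding anywhere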

  20. (1) T3 is not just a robot, texting; the T3 robot has to be able to do everything else we can do.

    (2) For that, a T3 robot needs (at the very least) sensors and effectors: Sensors and effectors are not computers. They are dynamical systems. And they cannot be replaced (in a real T3 robot) by simulated sensors and effectors.

    We'll discuss this next week, on Searle.
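
    One way to picture why a simulated sensor is not a sensor (a minimal Python sketch; the leaky-integrator model and its parameters are assumptions chosen for illustration, not a claim about real transducers): however faithful the simulation of a transducer's dynamics, it only manipulates numerals standing in for light, whereas a real photocell physically transduces light.

    # A toy "light sensor" simulated as a leaky integrator, dv/dt = (I - v)/tau,
    # stepped with Euler's method. The equation and parameters are illustrative.
    def simulate_sensor(light, tau=0.05, dt=0.001):
        v, trace = 0.0, []
        for intensity in light:              # "intensity" is just a number here
            v += dt * (intensity - v) / tau  # update the simulated sensor state
            trace.append(v)
        return trace

    readings = simulate_sensor([1.0] * 100)  # constant simulated "light" for 100 ms
    print(round(readings[-1], 3))            # ~0.87, rising toward 1.0 -- numbers, not light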

    Replies
    1. If sensors and effectors are dynamical systems, what does that change about how the computation works (if it is computation that is going on)? In other words, what are the implications of the dynamical/discrete distinction for the TT and solving the symbol-grounding problem?

    2. Computationalism is the idea that the solution to the easy problem (reverse-engineering cognitive capacity) will be computation: cognition = computation. If it's not -- e.g., if it's dynamic, or hybrid computational/dynamic -- then computationalism is wrong.

  21. “What was the name of your 3rd grade school-teacher?”

    So introspection is discredited because it is inherently uninformative. Judging by the "name your 3rd-grade school teacher" exercise reviewed during lecture and addressed in the reading, introspection is limited in what it can deliver. The "how" of memory access that cognitive science is searching for is left unanswered, since introspection only reports how it feels, subjectively, to remember something. As we've learned, the name could be said to have "popped" into one's head, or perhaps an image of the teacher surfaced first and prompted the name, etc. Either way, the functional "how" remains inexplicable via introspection.

    I find myself reflecting on Hebb's neuroscientific theory with this in mind. Although introspection proves inadequate for explaining how we retrieve this information, when we consider the increase in synaptic efficacy that Hebb posits, can we not default to this cellular-level learning process as the account of how we cue these memories? Synaptic plasticity and the adaptation of neurons seem at least to attempt an answer to the functional "how" question. Or does this fall victim to the "easy answers: rote memorization and association"? If the latter is the case, are we not arriving at those conclusions partly by introspective means?
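
    For reference, a minimal Python sketch of the Hebbian idea appealed to here (the cue pattern, the learning rate, and the assumption that the "memory" unit fires during learning are arbitrary choices for illustration): the weight between two units grows when they are active together, which describes a cellular mechanism but does not, by itself, explain how retrieval as a whole works.

    import numpy as np

    # Hebb's rule in its simplest form: delta_w = eta * pre * post.
    eta = 0.1
    pre = np.array([1.0, 0.0, 1.0])   # a toy cue pattern (features of the teacher, say)
    w = np.zeros(3)                   # synaptic efficacies, initially zero

    for _ in range(10):               # repeated co-activation during learning
        post = 1.0                    # assume the "memory" unit fires while the cue is present
        w += eta * pre * post         # co-active connections are strengthened

    print(w)                          # [1. 0. 1.]: only the co-active inputs were strengthened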

    Replies
    1. The goal of cognitive science is to reverse-engineer organisms' cognitive capacity -- to explain how and why the brain (or anything else that can do what the brain can do) is able to produce organisms' capacity to do all the things they can do. That's what the Turing Test tests. Hebb made useful contributions, but no one has come close to passing the Turing Test yet.

  22. "Searle’s very simple point is that he could do this all without understanding a single word of Chinese. And since Searle himself is the entire computational system, there is no place else the understanding could be. So it’s not there."

    My difficulty with Searle’s thought experiment is that I feel as though it relies on a conception of “understanding” that is bound up in our subjective experience of what it feels like to “understand”. If one can consistently use a symbol appropriately, given its intended meaning, is this not proof of “understanding” said symbol? In human-to-human interaction, this seems to be how we gauge understanding. If a child identifies a cat by the symbolic word “cat”, we judge that this child understands “cat”. If they point to a dog and name it cat, then we judge, given their use of the symbol in a way that violates our collective consensus on what it represents, that the child does not understand “cat”. In the same way, if a computational system is endowed with the knowledge and processing abilities to consistently make appropriate associations with and use of “Chinese” words, then why are we not willing to describe this as understanding? What part of the “meaning” of these symbols does the system not possess? It seems to me that the only difference between the understanding of a system that speaks “Chinese” and an individual that speaks “Chinese” is the subjective experience. Searle’s experiment therefore shows that computationalism does not help solve the “hard” problem, but I do not believe it does anything to discredit computation as answering the “easy” question. It relies on us imagining our own subjective bewilderment if we were spoken to in Chinese, unable to link the words to their meaning, while overlooking the fact that linking words to their meaning is exactly what the system itself is doing.

    PS: I understand that this response does not acknowledge the symbol-grounding problem, but I felt that there was a dismissal of computation as a solution to the “easy” problem that could be addressed independently of this issue (maybe this is wrong).

  23. When Harnad states that scaling up the Turing Test requires “the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities” and “the connection between the internal symbols and the external things its symbols are interpretable as being about”, does it matter which sensorimotor capacities the robot has (Harnad 2008)? Or is it simply the case that whatever sensorimotor capacities the robot has are used to ground its symbolic capacities, such that symbols such as those for colours, objects, etc., have semantic value (meaning) to it, which it internally produces? I ask this question because, in my opinion at least, not all symbols and sensorimotor capacities are equally easy to ground. For instance, how would one proceed to ground pain – an aspect of cognition which can undeniably be considered a sensation, yet is often also regarded in psychological/neurological discourse as an emotion? In addition to this dual status, the dimensions along which pain is described pose categorical problems, even in human cases: it is not only a sensation, but a sensation that imitates – e.g., the pain of a heart attack may register as ‘crushing’, rejection might feel like being hit by a car. How would one reverse-engineer such capacities? Would the robot be able to undergo placebo effects, and should it? Perhaps these questions are less about cognition and more about sensation, more analog than digital, yet still I cannot imagine considering a machine, organic or otherwise, that does not inhabit dynamic mental states of pain to be one that can be said to cognize.

  24. “…to put it another way, the task of cognitive science is to explain what equipment and processes we need in our heads in order to be capable of being shaped by our reward histories into doing what we do.” - “Cohabitation: Computation at 70, Cognition at 20”

    Something I keep returning to (and having to remind myself of) is the idea of the “capability” or “capacity” of cognition, especially when discussing a TT-passing machine. I’m curious whether we should attempt to define what cognitive capacities humans have before, or in place of, reverse-engineering them. Is this necessary? We know what we do, but we have no way of enumerating all the things that we can do.
    In scaling up the TT to some T3 version with sensorimotor capacities, will we have narrowed the scope of the “equipment and processes” necessary for cognizing to just anything sensorimotor? And how do we understand individuals who lack specific sensorimotor functions yet are still able to ground symbols, on this scaled-up view?

  25. Chomsky’s poverty of the stimulus is explained by Harnad as the condition in which “the core grammatical rules are not learnable by trial and error and corrective feedback.” I think this is fascinating. If Chomsky is correct about this, it seems like further evidence that computationalism is wrong. If we are born with some Language Acquisition Device, and if Universal Grammar is correct, then there must be some innate component to categorization. We are not like robots that learn things only by supervised or unsupervised learning. There is some physical structure in the brain that allows us to pick up languages. This also goes against Pylyshyn’s idea that cognition is implementation-independent: if language learning relies on one distinct part of the brain, then the hardware does matter in cognition. Maybe this isn’t entirely relevant, but philosophers have postulated that other categories, such as the distinction between good and bad, are innate as well. This seems like a fundamental difference between computation and cognition.

  26. This comment has been removed by the author.


Opening Overview Video of Categorization, Communication and Consciousness
