
Monday, September 2, 2019

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group/Macmillan.

or: Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or: https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.



If you can't think of anything to skywrite, this might give you some ideas: 
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445. 
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

65 comments:

  1. So is this the end of the road? We've established that symbol grounding is necessary for meaning, but insufficient lest we should consider zombies to be cognizant. That missing piece is consciousness, but from whence that comes is insoluble, and its existence in us is even jeopardized by determinism.

    Is it just me, or does it take an element of faith to hold that we are conscious? For logic dictates that it cannot be so, but our intuitions say otherwise. To believe that we think seems to abandon logic for our intuitions. However, on the other side, couldn't it be said that to deny that we think takes as much faith, since we then have to reject our fundamental intuitions for our logic? It seems that whichever doctrine you endorse, atheism has no place.

    ReplyDelete
    Replies
    1. It does seem rather anticlimactic... If we have no way to determine through external means whether something is cognizant, are we left with intuition or faith, as Julian proposes? That doesn't seem like enough of an answer, either.

      The next reading (6a), which I've only skimmed at this point, makes me think that we're settling for T3 for lack of an alternative. If all we can say is "cognition seems to be about what we do and how we react to things", then don't we leave ourselves open to Searle's objection, again?

      Delete
    2. Luckily we don't have to rely on faith to hold that we are "conscious", which in this case is just a weasel word for "having the capacity to feel". In fact, I think I can be certain that I feel (sentio ergo sentitur), but that certainty only applies to oneself.

      Having no way to determine through external means whether something is "cognizant" is the Other-Minds Problem: you can't be certain about others' capacity to feel, only your own.

      What does it mean to be cognizant? To have the capacity to produce thoughts? I don't think cognition is a marker of sentience - of feeling - as we already have neural nets (and even simpler algorithms) far exceeding human performance in a myriad of cognitive tasks, even if they do not have the capacity to feel.

      I think we assume cognition has something to do with establishing whether other beings feel because every member of the category "has the capacity to feel" also is in the category "has the capacity to cognize (to categorize)".

      Also in practice, we can't even settle for T3 as we still need a couple of giants like Newton to reverse-engineer cognition to the point of producing T3-passing robots. Until then, what should we settle for?

      Because of the symbol grounding problem, could there really be such a thing as a T2 algorithm? Wouldn't it need to ground sensorimotor invariants with the help of T3 capacities?

      Delete
    3. (1) JG:

      “So is this the end of the road? We've established that symbol grounding is necessary for meaning, but insufficient lest we should consider zombies to be cognizant. That missing piece is consciousness, but from whence that comes is insoluble, and its existence in us is even jeopardized by determinism.”

      The end of what? It’s the beginning of cogsci’s “easy” (Turing) road of reverse-engineering what organisms can do (including language).

      Symbol grounding is necessary for T2, T3 or T4.

      The other-minds problem is not solvable, but Turing says the TT is close enough, and we can’t get any closer.

      But how does “determinism” get into this? (Determinism is not even a cogsci question but a question for physics.)

      ”Is it just me, or does it take an element of faith to hold that we are conscious? For logic dictates that it cannot be so, but our intuitions say otherwise.”

      What are you asking? No faith is involved in knowing that you yourself feel, just your own experience (and logic). We already know that from the Cogito (Sentio). Whether others feel is the other-minds problem. And Turing points out the closest thing to a solution (TT), and it’s close enough.

      ”To believe that we think seems to abandon logic for our intuitions.”

      You lost me there!

      I don’t get a sense, from what you posted, what you have or have not understood about the symbol grounding problem.

      (2) Eli, same reply to you Eli. An answer to what? What is the question?

      Delete
    4. I wonder who is posting as "Charles-Valentin Alkan"? It’s either (i) a student (current or past) who has fully understood, or (ii) a very clever GPT-3 chatbot that has processed my text! But it saved me the need to reply to the above two postings!

      (If you are a student, you need to identify yourself, at least by email, to get the credit!)

      Delete
    5. “I think we assume cognition has something to do with establishing whether other beings feel because every member of the category "has the capacity to feel" also is in the category "has the capacity to cognize (to categorize)".”

      According to Searle, understanding is not inherent to symbol systems and requires some “feeling” of understanding related to cognition. Since the meaning of a symbol requires grounding it (interpreting the symbol and connecting it to its referent), understanding the meaning of a symbol would potentially require the feeling of understanding. Doing everything we can do includes expressing and acting on feelings. Since symbol grounding is necessary to cognition, doesn’t this mean that to answer the easy question we need to at least partially answer the hard question of how we feel? Or is there a way to separate the category of cognition from that of feeling?

      Delete
    6. Aylish, there's a simple way to separate them: grounding is not meaning (or understanding); it's just a necessary condition for meaning (and understanding): meaning = grounding + feeling.

      This is part of Turing's methodological separation of the easy problem of doing from the hard problem of feeling. The (methodological) separator is the other-minds problem.

      The only exception is T2 if it can be passed by computation alone (Searle's Periscope).

      Delete
    7. Just a clarification on my original intention with the first post (it was, admittedly, half free-think, but still half related):

      This article points out that our symbols, whether they be mental representations, language or otherwise, are grounded in the real world via our senses. This I understood. However, the focus of my point was on the idea that if zombies have their symbols grounded too, how do I tell myself apart from a zombie?

      The standard reply to how we're different from zombies is "of course, it's because I feel". However, whether I feel is worth questioning when we think of the idea that all of cognition happens in the brain and can be explained by happenings in the brain. The feeling I am thinking of is that of free will - that feeling, and what it is, clashes with the idea that all we do is only chemicals going back and forth. This last I labeled, without explanation, determinism, but you can think of it also as a strong neuroscience hypothesis - that neuroscience can explain everything. And as such, since there didn't seem to be a satisfying answer to what makes us different from a zombie, it seemed like we would need the answer to the hard problem to find a way to answer the easy problem. This is why we may have come to the "end of the road".

      This is how I got from symbol grounding problem to the question of whether we ourselves feel. The rest about faith is just rambling and philosophy. Sorry about that.

      Delete
    8. You know you are not a zombie because you feel -- not because you feel you have free will: because you feel anything at all. That's just the Cogito (Sentio).

      Of course your brain produces your feeling, but to know how would require solving the hard problem.

      None of this has anything to do with the symbol grounding problem.

      Delete
  2. The concept of grounding a set of elementary symbols which can then be used to build and ground other symbols is very reminiscent of the idea of the 'impoverished stimulus' / the poverty of the stimulus, wherein you begin with a limited set of knowledge (language in the original discussion presented by Chomsky) and are able to build on it. Language is a bit different in that you are learning a grammar structure coupled with the content (I would argue that to some extent these symbols being discussed don't have much of a structure limiting their combinations like grammar does), but it still has some notable similarities.

    ReplyDelete
    Replies
    1. The concept of grounding is very remote from the “Poverty of the Stimulus" (which we'll discuss in weeks 8 and 9). Grounding concerns semantics, whereas the poverty of the stimulus concerns only syntax (although it is unlikely that linguistic syntax (grammar) is completely autonomous from semantics).

      The right connection is not with the Poverty of the Stimulus (which is about the fact that the child neither hears nor produces violations of the rules of Universal Grammar (UG), therefore the rules must be inborn).

      The right connection with the symbol grounding problem is productivity: the fact that a language-speaker can produce an infinite number (and variety!) of sentences by combining words according to the rules of grammar. But that would be true even if there were no Universal Grammar, nor poverty of the stimulus, and grammar consisted only of Ordinary (i.e., learnable and learned) grammar (OG). A finite set of rules can generate an infinity of potential sentences. In fact just the axioms and rules of deduction of arithmetic can already generate an infinity of potential arithmetic statements.

      What symbol grounding has in common with the question of productivity is that, there too, a finite set of grounded symbols (plus syntax) can generate all other sentences: You don’t need to ground everything directly.
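
      To make the productivity point concrete, here is a toy sketch (my own illustration, not from the reading): a hypothetical four-noun, three-verb "grounded" vocabulary plus one recursive rule already generates an unbounded set of sentences.

```python
# Illustrative sketch (not from the reading): a toy grammar over a small,
# "grounded" vocabulary. The recursive conjunction rule shows how a finite
# set of symbols plus finite rules yields an unbounded set of sentences.
import itertools

NOUNS = ["the boy", "the ball", "the cat", "the mat"]
VERBS = ["threw", "saw", "chased"]

def sentences(max_conjuncts):
    """Yield every sentence with up to `max_conjuncts` clauses joined by 'and'."""
    clauses = [f"{s} {v} {o}" for s in NOUNS for v in VERBS for o in NOUNS]
    for n in range(1, max_conjuncts + 1):
        for combo in itertools.product(clauses, repeat=n):
            yield " and ".join(combo)

# The count grows without bound as max_conjuncts grows: 48, 48 + 48**2, ...
print(sum(1 for _ in sentences(1)))   # 48
print(sum(1 for _ in sentences(2)))   # 2352
```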

      Delete
    2. @Prof: Does this mean that with a finite set of grounded symbols, you can ground an infinite set of sentences? Is grounding a sentence different from grounding its individual components? Could you ground a sentence without grounding its individual components?

      When you use the 3 examples (1) "Tony Blair," (2) "the UK's former prime minister," and (3) "Cheri Blair's husband" and they all referred to the same person, were each of these sentences grounded in different ways, but shared the same referent?

      Delete
    3. Ishika: Good questions!

      "Does this mean that with a finite set of grounded symbols, you can ground an infinite set of sentences?"

      Yes.

      "Is grounding a sentence different from grounding its individual components?"

      Yes, content words (boy, ball, throw) have referents. Sentences (strings of symbols) are subject/predicate propositions (“boy threw ball”); they have truth values: T and F.

      "Could you ground a sentence without grounding its individual components?"

      No, sentences are subject/predicate propositions (“boy threw ball”; “cat is on mat”) that are either true or false. The subject and predicate are mostly content words (“the,” “not,” “if” are function words, with a syntactic function but no referent). Words can be grounded directly (through sensorimotor feature detectors that detect their referents in the world) or words can be grounded indirectly (by propositions that describe or define their referents).

      But a proposition is grounded if the content words of which it is composed are grounded.

      Otherwise it’s just meaningless squiggles and squoggles…

      "When you use the 3 examples (1) "Tony Blair," (2) "the UK's former prime minister," and (3) "Cheri Blair's husband" and they all referred to the same person, were each of these sentences grounded in different ways, but shared the same referent?"

      The meaning of a word is not the same as its referent. Some have suggested that the meaning of a word is the means by which you detect its referent. (And there can be many means.) If so, that means of detecting the referent would be sensorimotor category detection if the word is directly grounded, or, in indirect grounding, the means can be a proposition, like a description or definition of its referent: “the UK’s former prime minister” or “Cheri Blair’s husband.” (Definitions and descriptions are approximate; you can make them closer and closer, but never exhaustive, except in formal mathematics. This should remind you of computer simulation…)

      But to ground a word indirectly with a proposition, the (content) words in the proposition have to be grounded -- either directly or indirectly... and so it goes, to a potentially infinite grounded vocabulary of content words, and an infinite number of sentences you can compose with the grounded vocabulary.
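
      Here is a minimal sketch of that propagation, using a made-up mini-lexicon (the word lists are hypothetical, just for illustration): words grounded directly by sensorimotor category learning seed the indirect grounding of words defined in terms of them.

```python
# Illustrative sketch (hypothetical mini-lexicon): words grounded directly
# (by sensorimotor category learning) can ground further words indirectly,
# via definitions whose own content words are already grounded.

definitions = {            # content word -> content words used to define it
    "zebra":   ["horse", "stripes"],
    "stripes": ["band", "colour"],
    "mammal":  ["animal", "milk"],
}
directly_grounded = {"horse", "band", "colour", "animal", "milk"}

def grounded_words(definitions, directly_grounded):
    grounded = set(directly_grounded)
    changed = True
    while changed:                      # keep propagating until nothing new
        changed = False
        for word, defining_words in definitions.items():
            if word not in grounded and all(w in grounded for w in defining_words):
                grounded.add(word)      # all defining words grounded, so word is too
                changed = True
    return grounded

print(grounded_words(definitions, directly_grounded))
# 'stripes' becomes grounded via 'band'+'colour', then 'zebra' via 'horse'+'stripes'.
```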

      But grounding is not meaning either! It's the means of identifying the referent; but there's something more to meaning. What do you think it is?

      Delete
    4. So, the referent of a word is not fully constitutive of meaning, and if we add on groundedness (direct or indirect) that also does not yet fully define meanings we derive. I want to say that what is missing still is to include what it feels like to be implementing the whole process that is going on inside our heads when we invent and interpret meanings of words/sentences and their referential components. Is that what you are looking for as the something more to meaning?

      In your scholarpedia page on this you write, “ultimately grounding has to be sensorimotor, to avoid infinite regress” which is exemplified in the model of the unilingual dictionary. Then you mention the property of “consciousness” – is consciousness the thing that we refer to when we say that it feels like something to be thinking? It feels like something to X (x=anything grounded) = source of meaning. The feeling of recognizing referents and of picking them out, of picking out their interrelationships to each other, is the only solid ground for any and all percepts.

      Is it accurate, or partially so, to say that “apple” is a grounded term because we can feel/perceive the smell, texture, taste, reflected light etc. that are the features our brains can pick out? The features describe categories, and doesn’t all of our meaning come down to “is” and “is not” relativities? We need at least two things for any one of them to mean something right? … am I on topic or digressing here?

      Delete
    5. Yes, a word's meaning = its grounding + feeling: what it feels like to "have something in mind" when you refer to or describe the referent; to understand and know what the referent is. Grounding can be direct (sensorimotor) or indirect (based on a verbal description whose words are themselves grounded, either directly or indirectly by a verbal description or definition). We are not grounded, walking, talking sensorimotor zombies.

      Yes, consciousness is just a weasel-word for feeling (or "sentience"). A conscious state is a state it feels like something to be in. A zombie does not have states that it feels like something to be in: nobody's home in a rock, a rocket, or an insentient robot. (Also, I hope, not in a plant either!)

      It feels like something to be thinking -- thinking anything; and not all thinking is verbal. In fact many species think, nonverbally. Only humans have language; and what we are talking about here is the meaning of words and sentences. I think a lot of our thinking is nonverbal too, but our brains are saturated with language, so we almost automatically verbalize our thoughts, like captioning a silent film with a narrative of what it's about.

      Yes, "apple" means what it means because we can do the right thing with apples, including naming and describing them (i.e. "apple" is grounded) and it feels like something to say or write or think an "apple" (or "the cat is on the mat") and mean "apple" (or that "the cat is on the mat"). And cogsci is trying to reverse-engineer how our brains (or any causal mechanism) can do that.

      Propositions ("the cat is on the mat") have a subject and a predicate. They describe a set/subset category relation between the subject and the predicate. The subject is a subset of the predicate: The category (or individual) "cat" is inside the category "things that are on the mat." The "is" (whether it's explicitly said or not, is crucial for the distinction between whether the proposition is true or false. [Some languages have no word for "is" and just say "cat on mat" but what they mean is the "is" of predication, while "cat not on mat" means the "is not" of denial.]

      Yes, is/isn't is very important. Without it you would not have true vs. false. And it's closely related to positive and negative feedback in categorization, and to doing the right vs. the wrong thing. (We'll also be discussing this when we get to Chomsky and the "poverty of the stimulus," in yet another context, this time purely syntactic.)

      Ask me in class Tuesday to tell you about the membership of the category "laylek." (If you google it, don't google "laylek" alone or you'll get the wrong laylek! You have to google "laylek harnad"!)

      Delete
  3. "First we have to define "symbol": A symbol is any object that is part of a symbol system. (The notion of single symbol in isolation is not a useful one.) Symbols are arbitrary in their shape. A symbol system is a set of symbols and syntactic rules for manipulating them on the basis of their shapes (not their meanings)."

    With this definition in mind, what isn't exactly a symbol? I know that in this context, a symbol here is referring to a word and the letters that make up a word; but what about the physical world? If we take a table for example, its physical properties could be interpreted as symbolic -- a table generally has four legs attached to each corner of a rectangular or square surface in order to support it. I assume that the symbol system in this case would be manipulating the four legs and the surface in order to make them come together, and form a table. If my assumption is correct, would then the "grounding" of "physical symbols" operate in the same way that the grounding of "formal" symbols would? (I'm not sure if there is a word for the non-physical symbols, so I just used the wording from the paper in which symbols were further elaborated on.)

    ReplyDelete
    Replies
    1. Anything can be used as a symbol -- as a component of a symbol system, with rules for manipulating the symbols based on their shapes. A "0" can be used as a symbol and so can a chair or a name. The shapes of symbols are arbitrary (which is a property related to implementation-independence). Computation is just formal. The rules for manipulating the formal system of symbols (of which the chair might be one) are algorithms, computer programs, software. "If chair, move aside and replace by table" is just as good as "if 0, erase and write 1."

      Now remember that computation, although it is the manipulation of symbols based on their shapes, not their meanings, is only of interest if the symbols and manipulations can be given an interpretation, and can bear the weight of that interpretation. If the symbol system is interpretable as doing addition, it better give you "4" as an output when you give it "2 + 2" as input. In other words, not every object is being used as a symbol in a computation, but any object can be.
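
      As a toy sketch of that point (my own example, not anyone's proposed model of cognition): a one-rule symbol system that manipulates strings purely by shape, yet is systematically interpretable by us as unary addition.

```python
# Sketch (my own toy example): symbol manipulation based only on shape.
# The rule "erase '+'" is purely formal, yet the system is interpretable
# as unary addition -- the interpretation is ours, not the system's.

def unary_add(tape: str) -> str:
    """Rewrite e.g. '11+111' as '11111' by a shape-based rule alone."""
    return tape.replace("+", "")   # the system never 'knows' this is addition

print(unary_add("11+11"))   # '1111' -- interpretable as 2 + 2 = 4
```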

      But when it comes to the (attached) parts of a chair, you can't use them as symbols unless you detach them. Symbols are discrete objects. And you need a symbol system, other symbols, and rules for manipulating them; and it all has to be systematically interpretable.

      You can simulate a chair and its parts computationally, but then that's not a chair: It's symbols that are interpretable as a chair.

      And Searle showed that language, in T2, has a symbol grounding problem. The symbols (words) mean nothing to the system itself. But that problem is cogsci's problem, and language's. Computation itself, as used in mathematics or computer science, does not have a symbol grounding problem. If the algorithm is right, and the output is systematically interpretable, it does not matter that the interpretation itself is being made by the symbol system's external users, not by the symbol system itself.

      Delete
  4. I am responding to “Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?” and “Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence” because just the titles intrigued me and suggested to me that there is not necessarily a consensus about the symbol grounding problem in cognitive science right now. The proposed solution that Bringsjord refutes is not the exact same one that Steels describes, but the Steels paper was published first and Bringsjord still argues six years later that it is unsolved.

    Steels argues that they have solved the symbol grounding problem by designing a robot that can ground the meaning of colours just based on sensorimotor perceptions. This to me seems a bit like the toy-t0 Turing Test though, in that perhaps they have a robot that can ground symbols (although we have no access to this robot through any periscope so I cannot say that these robots “feel” the meaning as I do), but only in a super narrow application of colours. Maybe this approach is really easy to scale up to far more categories and my skepticism of it as a toy-solution is unfounded, but it doesn’t seem to me to be an easy jump from being able to ground colours to interpreting Allison's letter (an example Bringsjord proposes, essentially where you have to “read between the lines” and infer meaning from the tone of the text). Are all feature detectors as simple as the ones the robots can use for colour? My other question is what exactly would Bringsjord say in direct response to Steels’ work?

    ReplyDelete
    Replies
    1. You are right. Steels's "solution" is just a toy, and remains so until/unless it scales up to T3.

      Delete
  5. - Reading this paper’s proposed candidate solution of a “hybrid non-symbolic/symbolic system” to the symbol grounding problem solidifies for me that T3 should be the “right” level of the Turing Test. Any machine that tries to cognize the way humans cognize (which is what the TT is testing for) will need sensorimotor parts in order to get access to the non-symbolic iconic and categorical representations.
    At one point, Harnad writes “first and foremost, a cognitive theory must stand on its own merits, which depend on how well it explains our observable behavioral capacity.” So from that, would a cognitive theory not be concerned with how representations are stored in the brain as well? A hybrid system with symbols and non-symbols is admittedly very appealing to me, but I struggle to see how stored iconic and categorical representations in the brain can be used to ground the elementary symbols, without running into the homuncular problem?

    ReplyDelete
    Replies
    1. "observable behavioral capacity" means all of T3.

      There is no homunculus problem for Ting because she is autonomous. She is a solution to the "easy problem." Her designer reverse-engineered all of her human "observable behavioral capacity."

      The homunculus problem is a problem for introspecters, not for reverse-engineers.

      Delete
  6. (1) "Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present too, perhaps not. But in either case, there is no way we can hope to be any the wiser -- and that is Turing's methodological point (Harnad 2001b, 2003, 2006)."

    In this extract, is Turing's methodological point the fact that he focused on weak equivalence?

    (2) This extract convinced me of the necessity of a machine to pass T3 in order to pass T2:

    "The symbols in an autonomous hybrid symbolic+sensorimotor system -- a Turing-scale robot consisting of both a symbol system and a sensorimotor system that reliably connects its internal symbols to the external objects they refer to, so it can interact with them Turing-indistinguishably from the way a person does -- would be grounded."

    My understanding of this is that symbol grounding is the quality that ensures that the symbols are correctly manipulated, or in other words that the correct rules are applied to the correct symbols. T3 is the lowest level of the TT that would allow for symbol grounding.

    Relating this back to Searle's CRA, Searle's system relied only on computation at a T2 level. If the system had sensorimotor abilities and was thus able to ground symbols, would Searle's argument fall through? We would still not be sure of conscious meaning, but this is the Other-Minds problem. Instead, the thought experiment would become more similar to a child interacting with the world and learning the meaning of words by interacting with the world.

    ReplyDelete
    Replies
    1. Edit: In Searle's CRA the Other Minds problem is addressed by Searle's Periscope. Am I correct in assuming that this could not be replicated if the system became one with sensorimotor abilities, because it would then be akin to a child experiencing the world for the first time, and we could not put ourselves in that position as Searle does with Searle's Periscope?

      Delete
    2. (1) The part of Turing's methodology that I thought about here was his emphasis on indistinguishability. One could propose an alternative test by which we could measure our reverse-engineered cognition candidates, and say "the candidate will only pass if they feel exactly like we do when carrying out all of our abilities". The problem with a test like this is that we don't have access to the feelings of our candidates due to the Other Minds problem. So in this way Turing makes sure his test only focuses on the indistinguishability of (verbal) function of the candidates, since this is something that we actually can test. The weak equivalence fits into this because if all that is required is functional indistinguishability, the details of the algorithm or system do not have to be identical among candidates.

      (2) Only a computational system would be accessible to Searle through Searle's periscope because this is the part that is implementation-independent. I agree with you that if the system had sensorimotor abilities, Searle cannot "become" this part of the machine because these abilities depend on their physical hardware. His argument would still stand for when he was simulating just the T2 computer, though; the issue is more that this argument cannot simply be extended to judge the mental state of a T3 robot.

      Delete
    3. Ishika

      No, Turing’s point here is just that reverse-engineering can only solve the easy problem, because of the other-minds-problem as well as the hard-problem. So there’s no point in asking for more…

      No, symbol grounding is needed to pass T3 (and T2). The syntactic rules (if any) can take care of themselves, but the symbols (content words) have to be connected with their referents to pass T3.

      Searle’s Periscope works for T2 if T2 can be passed by computation alone. (“Stevan Says” it can’t be.) It can’t penetrate T3.

      Learning ability is crucial to T3 (and T2), but whether the learner is a “child” is irrelevant.

      Stephanie

      You’re more or less right, but weak equivalence is irrelevant here.

      Delete
  7. “So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. That is meaning in the narrow sense. If we use "meaning" in a wider sense, then we may want to say that meanings include both the referents themselves and the means of picking them out.”

    I think this model succinctly explains the concept of words that pertain exclusively (or close to) the semantics rather than the syntax of a sentence. I’m curious as to how this system can be extended to words that are more syntactic in character, however, e.g. articles, complementizers, etc.

    For a concrete example, let’s imagine the English word ‘the.’ In the article, symbol grounding involves the mind picking out the referent from the outside world. ‘The’ does not have a physical manifestation in the world to refer to, however. So, I’m curious as to whether the mechanics of the generation of meaning for this word resemble those of content words. Would there be some sort of categorisation learning function that could explain this (e.g. a language learner gleans the proper usage from correct and incorrect instances of people using ‘the’ vs. the indefinite article ‘a/an’ for instance), or is there some sort of innate structure for people to understand these functional words, or is it a mix of both?

    ReplyDelete
    Replies
    1. Over 99% of the words in a language are content words (nouns, verbs, adjectives, adverbs). Content words have referents. The referents are usually not individuals (as with proper names) but categories, kinds (cats, mats, swimming, running, blue, green). The rest of the words (like “the,” “and,” “if,” and “not”) are function words. They don't have a referent; they have a use, a syntactic use. They are like mathematics and computation, which have to be used in a certain way, according to the symbol-manipulation rules, but they don't refer to anything; so they have no symbol grounding problem (just as computation and maths themselves have none).

      Delete
  8. In Professor Harnad’s article “The symbol grounding problem,” we revisited the Chinese Room Thought Experiment. If a machine has a rule book and can reply to the question symbol strings (e.g., Q. 你会说中文吗?) with the correct answer symbol strings (e.g., A. 我会说中文。), it will pass T2 with ONLY symbol manipulation, and no symbol grounding is necessary.

    Next, we learned about the Chinese/Chinese Dictionary-Go-Round: when a machine that knows no Chinese flips through its Chinese/Chinese dictionary to find the meaning of an ungrounded symbol, it only arrives at other ungrounded symbols, cycling endlessly, because the dictionary itself is ungrounded.
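
    A toy sketch of that dictionary-go-round (the entries are nonsense words, invented for illustration): every lookup in an ungrounded dictionary just lands on another ungrounded symbol, forever.

```python
# Sketch (hypothetical entries): in an ungrounded dictionary every lookup
# just leads to more ungrounded words -- the "dictionary-go-round".

dictionary = {
    "gavagai": ["blitter"],
    "blitter": ["sproing"],
    "sproing": ["gavagai"],   # ...and around we go
}

def lookup_forever(word, steps=10):
    trail = [word]
    for _ in range(steps):
        word = dictionary[word][0]    # every definition is itself ungrounded
        trail.append(word)
    return trail

print(lookup_forever("gavagai"))
# ['gavagai', 'blitter', 'sproing', 'gavagai', 'blitter', ...] -- no exit
# without grounding at least some words outside the dictionary.
```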

    Then, we learned that humans discriminate and identify things using categories (grounded descriptors) and symbolic representations.

    For example, "zebra = horse + stripes."
    - Zebra, horse and stripes are category names.
    - "Zebra = horse + stripes" is a proposition because there is a subject (zebra), a predicate (is a horse with stripes).
    - "Zebra = horse + stripes" has a truth value.

    If we can use categories and symbolic representations to learn from descriptions and propositions, how do we ground abstract concepts, such as love or success? Do abstract concepts have referents to things in the physical world? I assume these questions have something to do with the hard problem of consciousness (and feelings), perhaps?

    ReplyDelete
    Replies
    1. T2 is not just question-answering. It's about anything anyone can talk to you about -- and anything you can talk to them about.

      A T2-passing program is not a dictionary; it is an algorithm that can (somehow) pass T2. The dictionary is just to illustrate the symbol grounding problem.

      To categorize is to do the right thing with the right kind of thing. There are three ways to learn categories: unsupervised induction, supervised induction, and verbal instruction -- but the words (category names) in the instruction have to be grounded, directly (by induction) or indirectly (by instruction).

      We can learn new categories from (grounded) descriptions. All categories are abstract: categorization depends on the abstraction of features that distinguish members from nonmembers. Love and success have referents; you can point to them. And you can also describe them. Fictitious entities have referents too (e.g., unicorns: you know what that means, you know what they look like, and you would recognize one if ever you saw one; so they are grounded, like everything else. So are galaxies too distant to see, and subatomic particles too small to see.)

      No, this is not connected with the hard problem.

      (Soon we will meet the "peekaboo unicorn.")

      Delete
  9. I'm not sure I understand what Steels means by nonsymbolic c-representations which would "use continuous values and are therefore much closer to the sensory and motor streams". It doesn't really matter that the representations are closer to analogue processes. They are still simulations and are therefore only approximations i.e. they still are inhenrently discrete symbol manipulations. Moreover, they are implementation-independent since they are computational which means we still haven't solved the problem outlined by Searle's CRA. I also think his interpretation of Searle's argument is incorrect. He takes it to be an argument that says that no artificial system can ever achieve symbol grounding and writes that the only thing that was right about Searle's argument was the fact that meaning is given to the system by humans (and that that really is problematic according to Steels). Steels' robots seem to have autonomy, we'll give him that. But so do unicellular organisms to some extent and I think we can agree that such organisms do not cognize, however complex they may be. Having autonomy and complexity does not give you a free pass to symbol grounding. We need more than that. As Harnad discusses in his symbol grounding paper, autonomy is crucial but so is nonsymbolic sensorimotor capacity to allow for grounding. And Harnad takes that to be an analogue and dynamical process which must therefore be implementation-dependent. Steels' robots are t0 robots. They are useful for reverse-engineering cognition (maybe in the simulation/tool sense) but not groundbreaking in those regards. Steels' paper (and robot experiment) seems like a camouflaged attempt to salvage computationalism. So, in short, no, the symbol grounding problem has not been solved.

    ReplyDelete
    Replies
    1. I was thinking about autonomy and I'm having second thoughts about what really counts as autonomous and what doesn't. How would one go about defining it?

      Delete
    2. Good points. And, yes, autonomy (like the cognitive/vegetative distinction) is a fuzzy notion. There is probably no complete causal isolation in physics (but I don't know, because I'm neither a physicist nor a metaphysician).

      On the other hand, feeling (consciousness) is all-or-none; it's not a matter of degree. You can either be feeling more or feeling less, or feeling more intensely or feeling less intensely; but at any point in time you are either feeling something, no matter how faintly or intermittently, or there is no feeling going on at all: the "zombie" state of a rock or a rocket (or an insentient robot -- or a dead (or brain-dead) human rather than a human in delta sleep or in an acute or chronic "vegetative" state...). (This is worth reflecting on, but it's definitely not part of this course!)

      Delete
  10. I am struggling to expand what has already been said by other students, so I can only offer my understanding of what I have read. The Symbol Grounding Problem (SGP) reveals that although a word may be grounded (has a connection to its referent), it does not necessarily follow that the word has meaning. Meaning, in the narrow sense, according to Harnad, is the means of picking out a word’s referent. Here enters consciousness, which is assumed to be how we combine a word and its referent in our head. Accordingly, words written on paper are not grounded because there is nothing like consciousness to connect the word to its referent.

    Moving on to the Turing Test: a T2-passing machine is not capable of grounding words to their referents since it does not have sensorimotor capabilities, but a T3 robot can ground words. However, as Harnad states, a T3 robot cannot be said to know the meaning of a word just because it can ground it. As a result, would we at least need a T4 machine, which is indistinguishable in sensorimotor capabilities and internal structure/function, to ensure that a machine can grasp the meaning of a word? This would make sense given that a T4 machine goes beyond what is necessary for computationalism, and I am now firmly in the camp that believes that cognition is part computation and part other things.

    ReplyDelete
    Replies
    1. Hi Matt! To speak to your last point about which level of the TT can be said to "know the meaning of a word" I would argue that we don't necessarily need a T4 machine to do this, but at the very least we need a T3.

      Groundedness is necessary but not sufficient for meaning. This means, as you said, that a T3 robot will not be guaranteed to have meaning simply based on its ability to ground symbols. However, I don't think we can preclude the possibility that T3 could both ground the symbols and have meaning attached to them. What we know for sure, based on Searle and others, is that computation alone is not sufficient for cognition and so anything T2 and below is out of the question for meaning. At the same time, I don't know if we've proven definitively that we need the brain and only the brain in order to have meaning. Computation alone is certainly not enough but Harnad seems to be agnostic about whether a T3 with the right software and the right sensorimotor components could produce meaning. At the end of the article, he also says that the problem of explaining the role of consciousness is possible in principle but is probably "insoluble", and so it may be the case that symbol grounding can ensure consciousness, but in the end with the TT, we can't know for sure.

      (These are just my thoughts, if anything isn't right please feel free to correct me someone!)

      Delete
    2. Matt, bear in mind that “connection” to the referent does not mean a “line” (or “association”) linking the word to the thing, but rather, as in all of the “easy” problem of explaining doing-capacity, it is the causal mechanism that produces the T3 capacity to do all the things we can do with the referents of our words: We can detect and identify apples (via several sensorimotor modalities: optic, acoustic, tactile, osmotic, biochemical), hold them, manipulate them, eat them, etc. To (be able to) categorize is to (be able to) do the right thing with the right kind of thing.

      But, besides having all that “easy” doing-capacity, we are also not zombies: it also feels like something to do, and be able to do, all those things. Explaining that is the “hard” problem. Yet it is part of the meaning of a word (or sentence): It feels like something to say and mean “apple” (if you know what “apple” means!). And if you don’t understand Chinese, then to say 蘋果 just feels like what it feels like to say píngguǒ — but it means nothing.

      What you have left, beyond T3, is the other-minds problem, even with T4. So the question to ask yourself is whether and why T4 would make you less inclined to kick Ting than T3…

      Allie, good points. (But it’s not T2 that has been shown to be an insufficient test of whether you have successfully reverse-engineered cognition: what’s insufficient is T2 if passed by computation alone [i.e., computationalism]. T2 might still be a strong enough test — e.g., if it is passed by a robot that could also pass T3; but T2 may be a strong enough (indirect) test of that — just as T3 is the “filter” for how much of T4 is needed… to pass T3!)

      I’d say I’m more than agnostic about T3. I would never, ever kick Ting, as surely as I would never kick any of you, or any other sentient organism. With his TT methodology Turing has shrunk the “agnosticism” (aka the other-minds problem) to as small as it seems possible to shrink it; further shrinking would just be superstition.

      Delete
    3. So, symbol grounding is only one half of the equation. The other half is feeling, or the hard problem. There is no way to solve the hard problem or the other minds problem so I will focus on the easy problem here. Categorization is a mechanism for grounding words. With a MinSet acquired through categorization, you can define all words in a dictionary. Accordingly, it appears we have reduced the easy problem to its most concise understanding. What then can we do? It seems as though a T3 robot with a MinSet and the ability to categorize would be totally indistinguishable from a human in terms of its capacity to have a conversation. How does understanding a MinSet help us reverse-engineer the brain? For me it seems useful because we now have a focused understanding of symbol grounding.

      Delete
  11. I get the general gist of symbol grounding and why we need it as opposed to simply having symbols that are just squiggles and squoggles but I have a lot of questions. I think I just need some kid sib-ly clarifications.

    1. Is a symbolic system just what something that does computation would be doing?
    2. What would the hybrid system entail?
    3. Why would we want neuronal connections to be a symbolic system? Why are we assuming they aren’t?

    Usually I’d have more comments during my skywriting but I think I have a few gaps in understanding that I need to fill in first.

    ReplyDelete
    Replies
    1. 1. Is a symbolic system just what something that does computation would be doing?

      A symbol system is a set of symbols and rules for manipulating them (algorithms). A symbol system is just formal. A computer (hardware) physically executes the symbol manipulations. (“Symbolic” here means “computational.”)

      2. What would the hybrid system entail?

      “Hybrid” just means both dynamic (analog) and symbolic (computational). So in a T3 robot, the sensorimotor part (at least: there could be a lot more inside) is dynamic and the symbolic part (whether big or small, like doing long division in your head) is computational.

      3. Why would we want neuronal connections to be a symbolic system? Why are we assuming they aren’t?

      We don’t “want” anything — only a T3 system that works. The sensorimotor part cannot be computational. It has to be dynamic. Some, maybe most, of what goes on inside T3 could also be dynamic rather than computational.

      But, in principle it is also possible (Strong C/T Thesis) that some of what goes on inside T3 could be simulated dynamics, hence computational.

      For example, in the Shepard “mental rotation” experiment — in which the time it takes to verify that the two shapes are really the same shape, but one of them is rotated, is proportional to the degree of rotation, i.e., the greater the rotation, the longer it takes — the natural interpretation is that some sort of analog rotation in real time is going on inside the head. However, it is also possible that a computational simulation of rotation is being done instead. The sticky part is that it is not clear then why a bigger rotation should take longer (except to deliberately fit Shepard’s reaction-time data!).
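
      A toy sketch of the second possibility (my own illustration, not Shepard's model): a computational simulation of rotation that proceeds in fixed angular steps, so that the number of steps (a stand-in for reaction time) grows with the angle, but only because it was written that way.

```python
# Toy sketch: a *simulated* rotation done in fixed angular steps. Step count
# (a stand-in for time) grows linearly with the angle, as in Shepard's data --
# but only because the simulation was written to rotate incrementally.
import math

def rotate_stepwise(points, angle_deg, step_deg=1.0):
    steps = int(angle_deg / step_deg)
    for _ in range(steps):                       # "time" ~ number of steps
        theta = math.radians(step_deg)
        points = [(x * math.cos(theta) - y * math.sin(theta),
                   x * math.sin(theta) + y * math.cos(theta)) for x, y in points]
    return points, steps

_, t1 = rotate_stepwise([(1.0, 0.0)], 30)
_, t2 = rotate_stepwise([(1.0, 0.0)], 120)
print(t1, t2)   # 30 120 -- four times the rotation, four times the "time"
```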

      Delete
    2. I remember reading this reply but I guess I forgot to respond! But yeah, this all makes sense; I guess it was just a wording confusion on my end. So, in order to ground symbols we need repeated sensorimotor interactions with their referents (or, indirectly, using other grounded symbols through language, like zebra = horse + stripes if horse and stripes are both grounded). This would require some analog equipment and cannot be purely symbolic.

      Delete
  12. “Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to.”

    I am imagining another sort of Chinese Room where a man is transported to another planet and is now in a completely unfamiliar environment. He is shown a symbol (say 8uj) and then is shown a cluster of orbs with sticks pointing out of them in a green fog. He is allowed to touch the orbs, and to smell and examine them. He understands that this thing corresponds to the symbols on the page in front of him. They repeat this task with many new symbols and objects, the likes of which the man has never seen. He learns actions too, this way. He then has to do what the man in the normal CR does, with these alien symbols. Does he now understand what he is outputting? I don't think he entirely would.

    My main issue is with categorizing. How do we categorize? How does the man know that the 8uj he was shown is one instance of an 8uj, and that all 8ujs differ slightly? You can show him thousands of different 8ujs and say they all belong to the category 8uj, but the man won’t be able to tell that a new instance belongs to the category if he hasn’t been taught this, since he can only memorize pre-existing members of the category.

    I don’t think teaching rules makes us categorize any better, either. For example, we say a zebra is a horse-like being with stripes, but what are horse-like beings? What are stripes? If all you can do is memorize members of a category, how can you tell what any instance of a stripe is? Stripes are bands of alternating colours. How do you define a "band" such that it still applies to what appears on a zebra’s back? How can you tell that two colours are sufficiently different to be a stripe? I don’t think making rules simplifies anything. I don’t see how there is an end to categorizing.

    ReplyDelete
    Replies
    1. As Professor Harnad discusses below, categorization is different from the paired-associate learning that you describe in your alien example. All of the questions that you ask are important aspects of the easy problem, many of which have been at least partially answered by psychologists. However, we cannot doubt that categorization is possible, because every time that a baby is born, they are essentially living out the example you described. They are transported to a completely unfamiliar environment, and must slowly learn to categorize the world from scratch.

      I would also like to add to Professor Harnad’s explanation of the process of categorization that we do not begin from a perceptual “blank slate”. Evolution has shaped our perception and cognition in ways that make it easier for us to interact with the world. A classic example is of humans having pattern detection mechanisms that are hypersensitive to features of things that were threatening throughout evolutionary history. For instance, the shape of spiders grabs our attention, even as babies. We are also highly attuned to processing human faces from infancy. Even your example of stripes and bands has an interesting perceptual dimension, as our visual system is “wired” to better detect such contrasts in color through the process of lateral inhibition.

      All of this to say that evolution has provided us with a perceptual infrastructure to help us make sense of all the different stimuli we receive, by bringing into focus what is most relevant. This framework does the groundwork for categorization, which helps explain how we succeed at this seemingly impossible task.

      Delete
  13. Categorization is not memorization. It is the abstraction of the features that distinguish the members of the category from the members of other categories. You have not learned which mushrooms are edible and which ones are inedible if you have to be told every time. You learn by trial and error, with corrective feedback on whether you did the right thing or the wrong thing. You have learned it when you can do it without the feedback.

    Nor is categorization paired-associate learning: associating the category name with each category member -- especially when it is nonverbal category learning, and the right thing to do is not to say its name, but to eat (or not eat) it.

    We already have neural nets (unsupervised and supervised) that can learn (some) categories this way. So it's not magic. But it's certainly not memorization.

    The nets learn to abstract the features that distinguish the members from the nonmembers of the category. But the features are potential categories too. So once you have grounded the names of enough categories (and the right ones), including categories that are features of other categories, you can tell someone what the category's distinguishing features are, instead of leaving them to be learned the hard way, by trial/error/feedback/abstraction.

    So if you don't know what the horse-shape category is and what the stripe-shape category is, you either have to ground them directly, by trial/error/feedback/abstraction, or you have to be given a verbal description made up of other grounded categories, telling you its features (head, tail, mane, etc., black, white lines, etc.). None of it is magic, but it takes the capacity to learn to abstract features, and also the capacity to string grounded feature-names into subject/predicate propositions such as: "a zebra is a striped horse."

    And that requires language, whose origins we will soon be discussing.
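
    A minimal sketch of that kind of trial/error/feedback learning (the mushroom features and data are made up, purely for illustration): a perceptron-style learner that abstracts which features distinguish the edible from the inedible category.

```python
# Minimal sketch (made-up features): supervised category learning by trial,
# error and corrective feedback -- a perceptron-style learner that abstracts
# which features distinguish "edible" from "inedible" mushrooms.

samples = [  # (features: [has_ring, spotted_cap, white_gills], edible?)
    ([1, 0, 0], 1), ([1, 0, 1], 1), ([0, 1, 0], 0), ([0, 1, 1], 0),
]
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # repeated trials with feedback
    for features, label in samples:
        guess = 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else 0
        error = label - guess             # corrective feedback: right or wrong
        weights = [w + lr * error * f for w, f in zip(weights, features)]
        bias += lr * error

print(weights)  # [0.1, -0.1, 0.0]: 'has_ring' and 'spotted_cap' have been
                # abstracted as the distinguishing features; 'white_gills'
                # ends up carrying no weight.
```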

    ReplyDelete
    Replies
    1. After posting this I realized I was wrong to doubt categorization. Even auto-generated subtitles on youtube videos are using this technology now, and it is pretty accurate. I know there are three ways to learn these categories (instruction, supervised and unsupervised learning); I just struggle to see how this can be implemented in code. If the auto-generated subtitles were trained using supervised learning, how does the program store this information? Is it creating a dictionary to store different sounds which map to certain words? I feel like we fall back into the problem of memorizing if that is what the program does. So what is it doing?

      Delete
    2. Spoken-to-written transcription is getting better and better and is based partly on phonetics (like MNs!) and partly on word distribution statistics (what word tends to co-occur with what word); there is also some supervised voice-specific training, probably now with the help of the accompanying video images, including databases of crowd-source-tagged images, static and dynamic. How it's stored I don't know, but there's plenty of storage capacity these days. Language translation has gotten incredibly good too.

      (For English/French google translate hardly needs any tweaking any more; I would never do a translation from my head any more, in either direction. I just GT it and then do some fine-tuning. Style's not good, but the original rarely has any style either! We're not translating Milton or Molière!)

      But none of that is language-understanding, or TT-scale performance capacity. It's cognitive technology, not reverse-engineering.
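
      As a rough illustration of the word-distribution-statistics part mentioned above (a toy sketch with a hypothetical corpus; real systems are vastly more sophisticated): co-occurrence counts can pick between acoustically similar candidate transcriptions.

```python
# Toy sketch (hypothetical corpus counts): how word-distribution statistics can
# pick between acoustically similar transcriptions ("recognize speech" vs.
# "wreck a nice beach"). Real systems use far richer models; this is just the idea.
from collections import Counter

corpus = "we recognize speech we recognize speech we wreck a car".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def score(sentence):
    words = sentence.split()
    # more frequently co-occurring word pairs -> higher score
    return sum(bigrams[pair] for pair in zip(words, words[1:]))

candidates = ["recognize speech", "wreck a nice beach"]
print(max(candidates, key=score))   # 'recognize speech'
```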

      Delete
  14. I here have a question about iconicity.

    From what I understand from the text, in order to categorise we have to 1) discriminate features that make up an object from other features and then 2) identify this set of features as relating to a category. This is the basis of categorical perception which then leads to the manipulation of symbols that can allow various forms of grounded symbol manipulations to take place (computation) through mere propositions. On the one hand, the discrimination of features will lead to analog sensory projections (i.e. icons) and, on the other hand, identification consists in detecting specific features of “categorical representations”. My question is: how do these two processes meet and relate if, according to Harnad (1982), they are different/separate processes?

    Supposing my above definitions are correct, here is how I understand it. 1) After exposure(s) variant features are discriminated from invariant features and we are left with a sensory icon, or first abstraction; this is discrimination. 2) There is a second process of abstraction where categorical perception selectively renders certain cues more salient (depending on previous exposure to the object), associating the set of cues with past experiences of being exposed to a category of object; this is identification. Am I getting this correct?

    Once the category is created and grounded in this way it can be syntactically combined with other categories to create other grounded combinations such as:
    Zebra = Horse(white) + stripes(black)
    All without having ever seen the object (the zebra). In this way we can now compute and be grounded (and pass T3!).

    ReplyDelete
    Replies
    1. Think of the iconic representation (an analog copy) of input as the product of unsupervised learning, and the categorical representation of input as the product of supervised learning (trial, error and feedback in trying to "do the right thing with the right kind of thing"), through which your neural networks learn to detect the features that distinguish the members from the non-members of the category, thereby grounding the category's name.

      Symbolic (verbal) representations can then combine the grounded category names to define and describe further categories.
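
      A schematic sketch of that contrast (the numbers are invented for illustration): the iconic side just keeps and compares analog copies of the input, while the categorical side uses labeled feedback to find a feature boundary that sorts members from non-members.

```python
# Schematic sketch (my own illustration): an "iconic" step that just keeps and
# compares analog copies of the input, and a "categorical" step that uses
# corrective feedback to find the feature boundary separating the kinds.

projections = [0.9, 1.1, 1.0, 3.9, 4.2, 4.0]        # analog sensory values
labels      = [0,   0,   0,   1,   1,   1]          # feedback: right/wrong kind

# Iconic: retain the raw projections; similarity (difference) is all you have.
def similarity(a, b):
    return -abs(a - b)

# Categorical: supervised search for a threshold feature that separates the kinds.
def learn_boundary(xs, ys):
    best = min(((sum((x > t) != y for x, y in zip(xs, ys)), t)
                for t in xs))                        # threshold with fewest errors
    return best[1]

boundary = learn_boundary(projections, labels)
print(similarity(0.9, 1.1), boundary)   # compares two icons (~ -0.2); boundary 1.1
```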

      Delete
  15. “So we need horse icons to discriminate horses. But what about identifying them? Discrimination is independent of identification. I could be discriminating things without knowing what they were. Will the icon allow me to identify horses? Although there are theorists who believe it would (Paivio 1986), I have tried to show why it could not (Harnad 1982, 1987b). “

    My second question here relates to my preceding comment on discrimination and identification (presupposing it is correct). No doubt, categorical perception must involve 1) discrimination of invariant features from variant features and 2) associating the abstracted instance (icon) obtained with its class (i.e. whatever word/proposition is associated with the icon) - identification. I have no doubt that these are two attributes of categorical perception - I however question whether this is a unidirectional process where identification happens separately and subsequent to discrimination.

    Here is an example to illustrate what I mean. If you’ve played Where’s Waldo, you know that there are several ways of finding the red-and-white bobble-hatted figure. When I play I know that I traverse the scene looking for red and stop on any instance of red to focus - the closer the figure resembles Waldo the longer I will stay on the instance to identify the figure. This is of course my experience of the game and I don’t claim everyone plays the game this way, however my point is that in order to find Waldo, I start by identifying an attribute of its class (in this case red) and then discriminate other features accordingly by scanning the page for an instance of red; from there I move in a dialectic where once I’ve found a feature of red I can identify and discriminate more features until I’ve found enough invariants to categorise an instance as Waldo.

    What I am proposing is that discrimination and identification relate to one another by either dialectically (if not simultaneously!) discriminating and identifying an instance until a threshold is passed by which the instance can be associated with a particular category.

    ReplyDelete
    Replies
    1. Scanning a scene to find Waldo is not a categorization problem but a visual-search problem, and you are describing a visual search strategy. There might be some analogies between visual search and feature-learning and detection in categorization, but neural nets do it through unsupervised and supervised learning.

      Discrimination means telling inputs apart, and comparing them: Are these two inputs the same or different, and if different, how different?

      Categorization is not about comparing inputs, but about what is the right thing to do with a single input: Do I eat this? Is this an apple?
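      A toy illustration of the contrast, with made-up features (purely illustrative, not a model of the mechanism):

def discriminate(a, b):
    # Discrimination: compare TWO inputs -- same or different, and how different?
    return sum(1 for x, y in zip(a, b) if x != y)   # crude mismatch count over made-up features

def categorize(x, detector):
    # Categorization: decide what to DO with ONE input ("Is this an apple? Do I eat it?").
    return "eat it" if detector(x) else "leave it"

is_apple = lambda x: x == ["round", "red"]                 # hypothetical grounded detector
print(discriminate(["round", "red"], ["round", "green"]))  # 1: the two inputs differ in one feature
print(categorize(["round", "red"], is_apple))              # eat it
print(categorize(["round", "green"], is_apple))            # leave it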

      Delete
  16. In Symbol Grounding Problem on Scholarpedia, Harnad explains that for a formal symbol––a symbol which is manipulated based on its shape according to a set of rules––to be grounded, the symbol must be connected to its referent––i.e., the thing in the world the symbol picks out (for example, the referent of the symbol "apple" is an apple in the world). For a symbol to be grounded, the system that contains the symbol must interact, via sensorimotor capacities, with the symbol's referent. Alternatively, a symbol or word may be grounded by learning the definition of the word, where at least some of the other words in the definition are grounded through sensorimotor interaction.

    When exploring these ideas, I wondered how more abstract objects or notions might be grounded––for example, the Supreme Court of the United States. The Supreme Court is an institution which is abstract, and while there might be buildings associated with the court with which one could interact via sensorimotor input, you cannot interact directly with the abstract concept of the court. I would suggest that the answer to this, which is implied in Harnad's work, is that we learn the definition of the court, and even if many of the words in that definition are also abstract concepts, we must keep defining abstract terms until every word of the definition is grounded. In other words, when defining abstract terms, we have a regress, but not an infinite regress, as the regress will end when we get to definitions that contain only terms which have been grounded through sensorimotor interaction.

    But what about concepts which we seem to understand but cannot obviously be defined by terms that are grounded––for example, the concept of a number? It does not seem to me we could ground the concept of the number three directly––through sensorimotor interaction (with a number?)––or indirectly, by defining the number using other terms that are grounded directly through interaction. Maybe there are some concepts which are innate; and maybe it is not the job of cognitive science to explain these concepts, but simply to acknowledge that we have them and work from there?

    ReplyDelete
    Replies
    1. Hey Alex! I've been thinking about your question - how do we ground more abstract notions such as the Supreme Court? You mentioned that perhaps we learn the definition of the court and define each term within it until they're all grounded. I completely agree. I would also propose that we are able to ground "Supreme Court" through more detached sensorimotor interactions, such as watching movies that feature the Supreme Court. Perhaps our childhood experiences of right and wrong, which shape our concept of "justice," also play into this process. I would assume that as we have experiences with and ground terms such as "judge," "lawyer," "criminal," etc., our conception of "Supreme Court" becomes more and more grounded.

      The way I unintentionally worded my last sentence ("becomes more and more grounded") brought a question to mind. Does grounding happen all at once or in degrees?

      In response to your question about numbers, I wonder whether we ground numbers through early experiences of counting. This then leads to learning math in school and applying these techniques in our daily lives. Perhaps all of these experiences are what allow us to ground numbers, rather than their being innate?

      Are there any concepts that are innate and not grounded through sensorimotor experience? I honestly don't know, but I will continue thinking about it.

      Delete
    2. Alex, good summary.

      About grounding "number":

      (1) Inasmuch as "number" is used in formal mathematics (i.e., computation) it does not have to be grounded. You just need to follow the symbol-manipulation rules of formal arithmetic.

      But we do know what "number" refers to, and that is based on our capacity to perceive "numerosity," which allows us to categorize when there is no thing, one thing, two, three, and then four (double pairs), five (pair and triplet), six (two triplets or three pairs), etc.

      But as the number gets bigger, we can't do it directly through perception, so there is something we do: we count, and we understand that we can keep counting, adding one more, over and over.

      That's probably the sensorimotor grounding of numbers.

      Claire, you're right about grounding SC and number.

      Some categories are probably innate, and numerosity is probably one. Perhaps also animate/inanimate, figure/ground, and perceptual primitives like curves/straight, points, lines, colors, phonemes, etc.

      Delete
    3. On the grounding of numbers, I'm reminded of other coursework on songbirds and neural integration - where the activity of individual neurons increases upon hearing a certain number of song-syllable repeats, and different neurons respond specifically to different numbers of repeated syllables. For instance, neuron A fires after 2 repeats, neuron B fires after 3, and so on. So it is suggested that the brain of the songbird is essentially able to count via connections between these neurons (which themselves become more active in a sequential counting order). I'm curious to know whether Fodor would again write this off as meaningless correlation. The activity of the neurons resembles the process of counting! So, if numerosity is innate, could this type of activity be an avenue for neuroimaging to be relevant (when what's happening in the brain actually resembles its grounded counterpart)? In relation to the Turing Test, it could be that the activity, rather than the neurons themselves, would be necessary, so a T3 robot would suffice.

      Delete
    4. Grace, good point. Neural counting that selectively "mirrors" input numerosity would be more than correlation, it would be an analog mechanism (like "mental" rotation).

      But do nonhuman animals really count? They (like us) have innate numerosity feature-detectors for 0, 1, 2, 3, and maybe 4 "things" (whether temporal or spatial). But that's it. Beyond that, it requires counting, not just numerosity categorization.

      That means an understanding of a special case of recursion: knowing that whatever quantity N is, you can add one to it and it becomes N+1. Like the understanding of propositions, this is not just giving different names to different numerosities. It is a systematic way of generating and counting numerosities.
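      If a sketch helps, here is the difference in toy Python form (the subitizing limit and the way items are represented are just illustrative assumptions):

SUBITIZING_LIMIT = 4   # assumed limit on how many items perception can grasp at a glance

def numerosity(items):
    # Innate numerosity detection: works only for very small collections
    # (len() is a cheat here, purely to illustrate the limit).
    n = len(items)
    return n if n <= SUBITIZING_LIMIT else None

def count(items):
    # Counting: apply the successor rule once per item -- no upper bound.
    total = 0
    for _ in items:
        total = total + 1   # whatever N is, it becomes N + 1
    return total

flock = ["bird"] * 23
print(numerosity(flock))   # None: too many to "see" the number directly
print(count(flock))        # 23: counting scales indefinitely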

      Delete
  17. Claire -- do we know each other from somewhere? -- I think you make great points! An interesting point you seem to be getting at in your comment is the idea that a more general concept -- like justice -- could be grounded progressively: as we have different experiences with the concept, this concept becomes more and more grounded. I think this is a great point because it is in line with our (or at least my) intuition about how our grasping of concepts works. It often happens, I think, that when we first encounter a word which refers to a concept, we have only a vague idea of the concept or no idea at all; then as we encounter more situations and have more experiences which contribute to our understanding of this concept, the concept becomes clearer and clearer. We know that meaning = grounding + feeling and the intuition I am appealing to might apply to meaning rather than just grounding, but I think it makes sense for grounding as well. We have a symbol -- like the string "justice" -- and as we have experiences in the world where this string is invoked, it becomes progressively more and more grounded. Instead of someone pointing to an apple, and saying "that's an apple," allowing us to ground the apple, someone might point to something and say, "that's justice," allowing us to ground the concept of justice.

    The interesting difference between grounding "apple" and grounding "justice" is that to ground "apple" you basically only need to interact with an apple and connect it to the symbol "apple." For "justice," on the other hand, there could be many different things or concepts which are considered to be just or unjust, different interpretations of justice, etc. For that reason, it seems very plausible to me that the concept of justice would have to be grounded in degrees -- as you say -- perhaps from multiple experiences with justice. With each degree of grounding (every time you have an experience) you get a more complete sense of what the concept of justice is... maybe...?!

    ReplyDelete
    Replies
    1. Most words don't refer to "concepts" (what is a "concept"?) but to categories (kinds of things you call by that name, as distinguished from other things you call by other names, and based on the features that distinguish them).

      All categories (except in formal maths) are approximate rather than exact. We can always keep refining the approximation from a bigger sample, revising or enriching the features, etc.

      A picture (or object) conveys more than you can describe with 1000 words, because it has an infinity of features. How? This is what we learn from Watanabe's "Ugly Duckling Theorem" and from "Funes the Memorious" in Week 6.

      Yes, categories like "justice" are more complex and composite (describable by more grounded features, themselves grounded categories too) than "apple" -- but "apple" is not that far down from "fruit" and "food" and "thing"... which all need a large combination of features to describe them and cannot be fully grounded directly. (Ask yourself what categories can and cannot be learned by direct supervised learning by an organism without language, and you'll get an idea of the nuclear power of language.)

      Delete
  18. 9) From Harnad, S. (2003) The Symbol Grounding Problem

    The symbol grounding problem addresses the question, raised by Searle's Chinese Room Argument, of how and when mere symbols could be about anything in the world. To be about anything in the world, symbols first need to be grounded. Grounding refers to the process by which the brain picks out the referents of its symbols. Harnad states that the brain picks out referents directly through T3-passing robotic capacities (by induction). The brain is also capable of picking out referents indirectly through verbal descriptions (propositions that describe or define their referent), but the words in those propositions must themselves be grounded directly. Furthermore, I have learned that indirect grounding could be computational, while direct grounding is not, because it requires sensorimotor capacities that are analog. Grounding is a means of detecting referents. On its own, it does not explain meaning; the latter also requires an explanation of feeling. Grounding may pass TT, but without ever feeling. Turing's methodological point is that feeling is inaccessible; therefore, the fact that a grounded robot might pass the TT without feeling does not threaten the relevance of the TT.

    One question would be:
    Would it be possible to ground symbols indirectly without ever having grounded anything directly?
    Another question would be:
    Does the System Reply to Searle's CRA account for the grounding element? That is, would the whole system be grounded, but without feeling, like a zombie?

    ReplyDelete
    Replies
    1. Indirect grounding can be re-combinatory (if that's what you mean by "computational"). But what is recombined (into truth-valued subject-predicate propositions) is grounded category names. No groundless boot-strapping or sky-hooks...

      If you think computations alone (T2) can be grounded, then you have not understood the Chinese Room Argument or the Symbol Grounding Problem. (When Searle memorizes and executes the T2 algorithm on his verbal inputs, he is the system; there is nothing else. Hence no connection to referents.)

      Delete
    2. Hi Max, just going to make a quick correction to what you said. You said that "Grounding may pass TT, but without ever feeling." I'm not sure what you mean here: do you mean that grounding allows a T2 robot to pass for T3? If that's the case, then it's true that it might not ever feel, because meaning = grounding + feeling. However, I think this is not quite pertinent to the symbol grounding problem itself, and more in the realm of the hard problem.

      Also, you ask "would it be possible to ground symbols indirectly without ever having grounded anything directly?" and I would also say no. This is similar to both categories and language (learning content words), because you have to have a basic foundation of grounded categories/words in order to make sense of new things that are not grounded. A new category made from these lower-level grounded categories would then be indirectly grounded. However, it is important that its constituents are grounded.

      Delete
  19. It is my understanding that symbol grounding is the process of obtaining meaning from arbitrary squiggles and shapes. How do meaningful thought and language play into our derivation of meaning from symbols? Taking Wernicke's area and the angular gyrus into account, both serve functions with respect to reading and the comprehension of words. Wernicke's area is considered an expansive mental dictionary storing the meanings of the words we know, and the left angular gyrus facilitates our ability to extract meaning from the symbols we read. This mental-dictionary analogy is intriguing to me because one could imagine the endless vacuum of meaningless squiggles that we could surround ourselves with, without ever "truly" finding meaning.

    From a psychological perspective, how do we go from nothing to meaning? Is it due to an innate mechanism in our brains, as Chomsky believes (i.e., the existence of a LAD: Language Acquisition Device)? Or are our environments and interactions the reason for this ability? It seems that there is a clear dichotomy here, and I wonder if it must be black and white. Are there, in fact, only two options: meaning or symbols devoid of meaning? Perhaps there is some intermediate place of knowledge between the two. For instance, if someone were to know the meaning of a word (i.e., semantics) but be unable to use it grammatically (i.e., syntax), is it still grounded?

    ReplyDelete
    Replies
    1. A dictionary alone is an ungrounded set of symbols, whether it's on paper or in the brain. Grounding requires a sensorimotor connection between words and their referents.

      Chomsky is not talking about meaning but about Universal Grammar (UG) -- weeks 8 and 9.

      I cannot tell from your comment whether you have understood the symbol grounding problem.

      Delete
    2. Hi! I think your comment really helped me realize which parts of the reading I'm not really comfortable with, so thank you!

      From what I understand, meaningful thought and language aren't a precursor to deriving meaning from symbols. Since language depends on our ability to use words to form propositions, and words need to be grounded (and so can be considered the symbols we have to derive meaning for), I think we can only use language to derive meaning from symbols after we've already established the meaning of some symbols (as we saw in class with the minimal grounding set of dictionaries). I am having trouble separating language from meaningful thought, though. My understanding of meaningful thoughts was that they are a direct result of our taking a word and understanding it because the word is grounded for us - so, again, a direct result of our symbol-grounding mechanisms. I can't imagine a thought that doesn't refer to a category we understand being meaningful, but I feel like I might be missing something here?

      Delete
  20. The symbol grounding problem seems to put a finger on why we need at least T3. In order to obtain the meaning of a word, we need to ground the symbol using our sensorimotor capacities, but it also "feels like" something to understand the meaning of a word. A purely computational T2 system is not enough for symbol grounding, as we saw from Searle's Chinese Room Argument. The grounding of symbols can be achieved with the sensorimotor capacity of T3, but we cannot hope to determine whether or not the robot is feeling, because of the hard problem. All we can do is assume that the robot is feeling. I think that this is Harnad's point in this paper and in his previous paper about the "right level" of the Turing Test - T3 is the right level because there is no way we can ever determine whether a T3 robot is feeling something. If they are able to connect words to their referents, then they would not be any different from any other human, and we should treat them as such.

    ReplyDelete
  21. How can (or can) symbol grounding explain the kind of knowing expressed in the saying "I know what [x word] means in context"? Is this kind of 'knowing' understanding?

    In my mind, a statement like this is an expression of a category which is not really grounded - take a word like "conniption", which is not well known but is not so abstract (in my opinion at least) that it is unavailable to direct experience. If you only know what it means in context, you wouldn't know it when you saw it (ex. observing a roommate fall into a fit of rage at the sight of Jared's week-old unwashed dishes) or had one yourself (ex. swearing at an inanimate object, say a printer, that inexplicably refuses to cooperate).

    However, if you were presented with a sentence like the following: "Jake's having another conniption, he's been yelling angrily at the printer again", then through the context of the act of yelling (observable), and maybe even having seen Jake get angry (a grounded concept) and yell at things, you may be able to glean a meaning for "conniption" by comparison - saying it is similar to, or belongs in the same category as, a "fit" or a "tantrum", since it shares features (ex. irrational actions, expressions of anger). Thus, the word gets grounded either 1) through identifying it as belonging in a category with its synonyms (from which you can assume some features it may share), or 2) through identifying it with some of its features - "yelling" and "anger" - provided that those features are themselves grounded, directly or through a similar process.

    ReplyDelete
  22. In The Symbol Grounding Problem (1990), Harnad writes, "What is the representation of a zebra? It is just the symbol string "horse & stripes." But because "horse" and "stripes" are grounded in their respective iconic and categorical representations, "zebra" inherits the grounding, through its grounded symbolic representation... Once one has the grounded set of elementary symbols provided by a taxonomy of names... the rest of the symbol strings of a natural language can be generated by symbol composition alone,[18] and they will all inherit the intrinsic grounding of the elementary set."

    What I think Harnad is arguing here is that one can ground a small set of symbols and then, using language (combining symbol strings), construct many other categories which inherit the grounding of the initial set and are therefore grounded, even if these new categories are not grounded directly through sensory experience. This can be seen in the example of the zebra, where zebra (indirectly grounded using language) = horse (directly grounded) + stripes (directly grounded). And this direct grounding happens using iconic and categorical representations. In an earlier response, Prof Harnad referred to the "nuclear power of language." This idea is much clearer to me now -- we need only a handful of categories to be directly grounded, and then, using language, we can generate infinitely many other categories which inherit the grounding of those few initial categories. This helps show why language is such an incredibly powerful tool and why humans have made such dramatic advances relative to non-human animals. With language, we have the capacity to ground an enormous number of categories that non-human animals have no access to, and these additional categories help us develop tools and engage in scientific and logical thinking.
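    To make this inheritance of grounding concrete for myself, here is a rough Python sketch (the mini-dictionary and the directly grounded seed set are invented purely for illustration): starting from a few directly grounded words, a word becomes indirectly grounded once every word in its definition is grounded.

definitions = {
    "zebra":   ["horse", "stripes"],
    "fair":    ["equal", "treatment"],
    "justice": ["fair", "treatment", "person"],
}
directly_grounded = {"horse", "stripes", "equal", "treatment", "person"}

grounded = set(directly_grounded)
changed = True
while changed:                                   # keep sweeping until nothing new becomes grounded
    changed = False
    for word, definition in definitions.items():
        if word not in grounded and all(w in grounded for w in definition):
            grounded.add(word)                   # the word inherits grounding from its definition
            changed = True

print(sorted(grounded - directly_grounded))      # ['fair', 'justice', 'zebra']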

    ReplyDelete

Opening Overview Video of Categorization, Communication and Consciousness
