Blog Archive

Monday, September 2, 2019

6a. Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization



Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier.  

We organisms are sensorimotor systems. The things in the world come in contact with our sensory surfaces, and we interact with them based on what that sensorimotor contact “affords”. All of our categories consist in ways we behave differently toward different kinds of things -- things we do or don’t eat, mate-with, or flee-from, or the things that we describe, through our language, as prime numbers, affordances, absolute discriminables, or truths. That is all that cognition is for, and about.


56 comments:

  1. We've defined categorization as "any systematic differential interaction between an autonomous, adaptive sensorimotor system and its world". This focus on "interaction", and the acknowledged nod to behaviorism, makes me wonder whether this definition might suffer from the same black-box qualms that gave birth to cognitive science.

    For example, say I have categories A and B. If all things that are in category A are in category B, and vice versa, A and B would have identical sets, making them identical categories just with different labels (or have the same reference with different sense if you want to be Frege-ian). However, does it not matter if I arrive at category A using a different rule (or perhaps, for the paper, pointing to different features) than the one I use to arrive at category B? Put otherwise, does it not matter if we have a weak equivalence, but not a strong one?

    The behaviorist would say no, as does Harnad here. But could it matter?

    ReplyDelete
    Replies
    1. To categorize is to do the right thing with the right kind (i.e., category) of thing.

      The "differential interaction” is that you do something different with different kinds of things. That's what makes them categories. Cogsci is not metaphysics. We are not trying to explain what kinds of things exist in the world: just about how and why organisms can do what they can do with them.

      Categories are really potential categories, for organisms, depending on the structure and function of the organism. (An organism’s categories depend on "affordances" — sensorimotor facts about both the organism and its environment.)

      There is nothing behaviorist about this, except of course that categorizing is a behavior, something you do. Cognition is behavioral capacity: the capacity to do certain things. And it's that capacity that cogsci (and Turing) are trying to reverse-engineer (i.e., explain causally).

      Behaviorists don't explain how organisms can do what they can do. They just describe how what they do is influenced by association or by reinforcement.

      To reverse-engineer capacities is to try to infer how the black box that produces them does it — by designing a mechanism that can do the same thing.

      Naming and classifying things is just one (important, but late) kind of categorization. A lot of categories and category-learning precedes verbal categorization: In preverbal human children, and in other species, all categorization is non-verbal doing-capacity.

      If there are two sets of things X and Y, and the right thing to do with X’s is x and the right thing to do with Y’s is y, then you have two different categories. If their members have lots of features, and one organism has learned to use one feature to categorize them and another organism has learned to use another feature, then as long as they are both doing the same (right) thing with the X’s and Y’s, fine.

      But if in the future the feature fails for one or the other of them, then they have to do some more learning.

      Weak equivalence is still enough — as long as they generate the same categorization.
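
      A minimal sketch of that weak-equivalence point (the things and feature names are invented for illustration): two learners rely on different features, yet every input gets the same categorization, so their behaviour is equivalent even though their internal rules are not.

      # Two learners, two different internal feature-rules, same categorization.
      things = [
          {"name": "x1", "stem_thick": True,  "cap_spotted": True},
          {"name": "x2", "stem_thick": True,  "cap_spotted": True},
          {"name": "y1", "stem_thick": False, "cap_spotted": False},
          {"name": "y2", "stem_thick": False, "cap_spotted": False},
      ]

      def learner_A(thing):            # abstracted the stem-thickness feature
          return "X" if thing["stem_thick"] else "Y"

      def learner_B(thing):            # abstracted the cap-spotting feature instead
          return "X" if thing["cap_spotted"] else "Y"

      # Weak equivalence: same input/output categorization, different internal rule.
      assert all(learner_A(t) == learner_B(t) for t in things)

      # If the two features ever stop co-varying, one learner's rule will start
      # to fail, and more (supervised) learning will be needed.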

      Delete
  2. I am interested in how categorization exactly fits into last week’s reading on symbol grounding. If we want to solve the symbol grounding problem and give our robot this autonomous ability to ground symbols, would we do this by giving it the ability to categorize? I realize that here I have answered one problem (the symbol grounding problem) with a different problem, namely the categorization problem that this week’s text describes as:

    “But the categorization problem is not determining what kinds of things there are, but how it is that sensorimotor systems like ourselves manage to detect those kinds that they can and do detect: how they manage to respond differentially to them.” (section 4)

    As I understand it from the reading, we can learn to categorize through supervised learning, unsupervised learning, and, later, instructional (i.e., via language) learning. There may be some so-called "innate" categories that we have, but as section 10 of the paper remarks, evolution could lead to these innate categories because natural selection could increase the frequency of these categorical abilities in a species.

    One thing that I am interested in is the idea of expertise: in class we talked a lot about identifying poisonous mushrooms as a sort of expert learned category. I understand the idea behind this, and Figure 2 in the text about learning a new texture categorization, as being mostly a matter of updating our weighting of features. My eyes have not somehow grown new cells that can detect a new feature of these textures or mushrooms, so it can't be because of actually seeing a new feature; expertise must instead come from a pre-existing feature that had such a low weighting before that it was "invisible" to the non-expert.

    ReplyDelete
    Replies
    1. You seem to have understood all of this very clearly, Stephanie!

      (I think of feature-weighting as a kind of dynamic perceptual filter. It makes some features more intense and others weaker, with the result that the categories "pop out" more. There is some argument about whether the filter is "perceptual" or "attentional." But it seems to me that's just words. Filtering is filtering, and if the categories "pop out" after the learning, and they did not pop out before the learning, then your perception has changed, even if the mechanism is an attentional feature-filter rather than a "feature-creator.")
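
      A minimal numerical sketch of that filtering idea (all feature values invented): only the first feature actually distinguishes the two categories, and up-weighting it makes them "pop out" without creating any new feature.

      # Hypothetical sketch: feature-weighting as a perceptual/attentional filter.
      cat_A = [(0.9, 0.4), (0.8, 0.6), (1.0, 0.5)]   # (relevant, irrelevant) features
      cat_B = [(0.1, 0.5), (0.2, 0.4), (0.0, 0.6)]

      def filtered(item, weights):
          # Apply the filter: intensify some features, attenuate others.
          return sum(f * w for f, w in zip(item, weights))

      def separation(weights):
          # Gap between the least A-like member of A and the most A-like member of B.
          return (min(filtered(a, weights) for a in cat_A)
                  - max(filtered(b, weights) for b in cat_B))

      print(round(separation((0.5, 0.5)), 2))   # equal weights, before learning: 0.35
      print(round(separation((1.0, 0.0)), 2))   # re-weighted, after learning: 0.6 -- "pop-out"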

      Delete
    2. @Stephanie The way you related the 2 readings together was very helpful in situating this reading in the course material as a whole. Am I correct in my understanding that Searle's CRA points out the flaw of the computationalist argument, and symbol grounding problem addresses the issue of "understanding" (in that we need a sensorimotor system that interacts with the world and which we use to ground symbols in order to understand)? Explaining categorization is then the answer to how we ground symbols, and would explain cognition.

      Delete
    3. Searle points out that computation alone cannot understand and the symbol grounding problem points out why it cannot.

      Delete
  3. Section 25: Categorization is abstraction

    Harnad talks about how we could organize abstract concepts based on instances of the concept, and the non-concept (the example he used is round-thing and non-round-thing, where round is the abstract concept). In this case, "round" has a sensorimotor basis, and I argue that we could learn it without ever receiving explicit verbal instructions.

    Could this be interpreted as an argument for T3? As in, the minimum level required in order to reverse-engineer cognition is the level that can learn categories, since, according to Harnad, "to cognize is to categorize". I think that, based only on this paper, T2 is insufficient and T4 is excessive, since making the robot's biochemical makeup indistinguishable from ours would make no difference to the process of categorization.

    ReplyDelete
    Replies
    1. Good points. All categorization is abstraction: we "do the right thing" guided by having abstracted the right features, the ones that distinguish the category-members from the non-members. And all features are potential categories. And grounded categories can be named. So that's how "instruction" (a verbal definition or description) can "give" you a new category without the need of (direct) sensorimotor induction (unsupervised or supervised). It's just the recombination of categories that are already grounded, whether directly (through prior sensorimotor induction) or indirectly, through verbal instruction (itself grounded through prior sensorimotor induction or... etc.).

      That's the nuclear power of (T3-grounded) language!

      Delete
  4. Is categorization really all that cognition is? My immediate thought while reading this was: what about people with damage to their ventral visual pathway? They are no longer able to differentiate between objects visually, and could not tell you what should be done with objects of different categories. Of course, this is only taking away one of their sensory modalities; they’re still fine when it comes to discriminating things through touch. So, if we were to imagine a more extreme version of this whereby the senses themselves are intact but the brain areas for object recognition and discrimination are completely shut down, would that functionally “turn off” someone’s cognition? Alternatively, if we put someone with ventral visual pathway damage in a simulated world, the only real sense they’d have for object recognition would be visual. In this specific situation, they’d be unable to form categories. Would that mean that they have no cognition in this simulated world? Visual object recognition can be localized in the brain so, assuming it can be localized for other senses too, would we then have localized cognition to one brain area? I’d like to think (and have a hunch) that my logic is flawed, because it definitely can’t be this simple.

    ReplyDelete
    Replies
    1. First, not all of cognition is categorization. (A typical exam question is: what is non-categorical cognition? Examples are all continuous, analog sensorimotor skills [and affordances] such as in sports, dance and imitation [MNs!] -- and, alas, predatory hunting too, including continuous pursuit eye-movements! This is not to say that these continuous skills don't also have some categorical decision points too, as competitive swimmers remind me every year...)

      As to brain damage: it is a sign of how far we are from T4 that we can "imagine" any kind of disconnection syndrome we like (including ones that are neurologically impossible), since our idea about what it is that the various "centers" we are mentally disconnecting are actually doing (and how) is non-existent: we have not reverse-engineered any of those "functions." All we have is correlations between their "localized" activity and the exercise of certain doing-capacities we have. We haven't a clue yet of how the T3 capacity of "visual object recognition" is produced.

      Delete
    2. Oh okay, so would we say that not all cognition is categorization but all categorization is cognition?

      Follow up to the brain damage: I guess using imagination as support isn't the strongest way to argue. Let's just take the example of people who cannot discriminate between objects. Regardless of where this is localized in the brain, let's assume this is a functional lesion (so less about where exactly in the brain it's happening and more just about the lack of ability). Would a person with an inability to identify objects be said to have less cognition than me, someone that can tell you that a pencil is a pencil and a cup is a cup? For this example let's assume this is solely based on visual categorization, so let's say this identification task is through VR. A video game asks me to select the object that is called a cup in order to advance to the next level. I can do this but the person with the damage cannot. Do I cognize* more than this person because of this?

      *or anything else along these lines like cognitive potential, cognitive competence, anything that's saying I can cognitively do more than them --I don't know what the technical term for it is

      Delete
    3. Hey Lyla!
      In reading your responses, it kind of sounded like if someone completely lacked all of their sensorimotor systems, they would be acting almost like a T2 robot. And, at least from the symbol grounding problem, and also a bit from this paper, a T2 Turing Test just wouldn't be sufficient for testing the reverse-engineering of cognition. This is because, from this paper, we're told that categorization is an act of abstraction: singling out some of the sensory input and ignoring the rest.
      I guess what I'm trying to say is that I don't think, in this case, cognition can exist in some sort of "spectrum". If someone lost the ability to use their ventral visual pathway, they wouldn't be able to see, but they would have other senses to go by (like being able to smell, touch, manipulate, name, describe etc.) so I don't see why they would be cognizing any "less". But if they were to live with completely no sensory systems, like you alluded to in your first comment, then maybe I'm being too extreme but I would say that they... couldn't be cognizing? At least, they couldn't be making categories because they would essentially be a T2 robot and run into the symbol grounding problem.

      Delete
    4. sorry, I mean if they lost their ventral visual pathway they couldn't *name the different things (I think that's what the ventral pathway is responsible for haha)

      Delete
    5. Lyla, I don't know what you mean by "more cognitive"! (There isn't really an IQ standard for the TT; just average human capacity. And we can't just imagine functional dissociations: some may be possible, others not.)

      Esther, T2 is not a robot, just a computational "chatbot."

      Delete
  5. The following response relates to how the paper impacts my understanding of a T3 robot:

    T3 is supposed to be a robot with sensorimotor capacity, therefore making it a sensorimotor system. According to Harnad, as a sensorimotor system, the T3 robot can engage in categorization. This then implies that a T3 robot can engage in the type of category learning that proceeds through 'hearsay'. Therefore, a T3 robot has the tools to properly use language. However, does categorization lead to Searle's idea of understanding? Is explaining cognition as categorization a win for computationalism or not? Is categorization a type of computation?

    A further question pertains to a fundamental property of a Turing Machine, that of an infinite memory. As seen in the "Funes the Memorious" short story, an infinite memory means that you cannot engage in categorization, because selective forgetting is required in order to recognize and name things. How would we give a T3 robot the ability to forget the right things the way we humans do implicitly? Since a lot of feature selection and weighting is done by our sensorimotor systems, would we need to create artificial sensorimotor systems that are exactly like our own to be sure they sense in the same way we do?

    ReplyDelete
    Replies
    1. Hi Matt, I'd like to try to answer the second point you brought up:

      Assuming that Funes is an actual person (not a work of fiction), he could not be a Turing machine, since as you said, a TM has an infinite memory. However, in the case of a T3 robot, I don't think it matters if the robot has the ability to "forget the right things". A robot is considered T3 passing if it *behaves* like a human. What if we had a robot that has an infinite memory (like a Turing Machine) but it only reveals the relevant parts of its memory? It would then behave like a human, and therefore, "have cognition" the way a human would.

      Also, I don't know whether feature detection and weighting is done by our sensorimotor systems. It sounds like something a bit upstream (more like consciousness), so having a T3 robot with the same sensorimotor systems as us doesn't guarantee that it could select details like we do.

      Delete
    2. Matt, to the extent that categorization is sensorimotor, it cannot be computational. Sensors and effectors are necessarily analog. But, for example, neural nets — which could be implemented purely computationally, rather than as real parallel and spatially distributed systems of interconnected nodes modifying their interconnection strengths in response to incoming input — could “mediate” between input and output, doing the feature abstraction and filtering. This is all hybrid through and through, and does not resurrect computationalism, according to which cognition is just computation.

      “Deep learning” neural nets are one current model for the causal mechanism that does the feature abstraction, selection and weighting in category learning. (No one knows whether it is strong enough to scale up to T3-scale. Probably not.)
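
      A toy sketch of the simplest kind of such a mechanism (far below T3-scale, with invented trial data, and not the deep nets just mentioned): a single-layer, perceptron-style learner whose feature weights are nudged by nothing but corrective feedback until it sorts members from non-members correctly.

      # Toy supervised category learner (perceptron-style weight update).
      # Corrective feedback on each trial nudges the feature weights until the
      # learner "does the right thing" (1 = member, 0 = non-member).
      trials = [  # (invented feature values, correct category)
          ((1.0, 0.2, 0.7), 1),
          ((0.9, 0.8, 0.6), 1),
          ((0.1, 0.9, 0.8), 0),
          ((0.2, 0.1, 0.9), 0),
      ]

      weights = [0.0, 0.0, 0.0]
      bias = 0.0
      rate = 0.5

      def guess(features):
          s = sum(w * f for w, f in zip(weights, features)) + bias
          return 1 if s > 0 else 0

      for epoch in range(20):                      # repeated trial and error
          for features, correct in trials:
              error = correct - guess(features)    # corrective feedback
              weights = [w + rate * error * f for w, f in zip(weights, features)]
              bias += rate * error

      print(weights)                               # the first feature ends up weighted most
      print([guess(f) for f, _ in trials])         # -> [1, 1, 0, 0]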

      Wendy, don’t take the “infinite tape” of the Turing Machine too literally. It just means that — since computation is hardware-independent — capacity limitations are not computational. A machine can always be bigger, faster etc.

      Yes, the TT requires reverse-engineering a causal mechanism that can produce our capacities; it says nothing about resource requirements in particular: they just have to be enough to actually do the job.

      And be careful about suggesting that “consciousness” does the job, because it is homuncular and non-explanatory as far as the easy problem (of reverse-engineering our doing-capacities) is concerned. We don't want to punt the easy problem to the hard problem!

      And Ting is a robot that has successfully passed T3. Whatever is going on inside, it must be doing the "right" feature-selection and weighting, since she is passing T3!

      Delete
  6. When talking about categorization as doing the right thing with the right kind of thing, this would necessarily depend on the environment the object is in (what it is compared to). However, it would also depend on the individual perception of the sensorimotor system: what they decide is the right thing to do with this category. For example, a vegan and a non-vegan would categorize ham into the same category (meat) but would not do the same thing with it: the non-vegan would eat the ham whereas the vegan would certainly not. In this case, the decision wouldn't be based on the category, but would be a deeper decision that involves more than just the perceived features of the objects. Or is there something that I'm confusing between behaviour towards an object within a category and categorization?

    ReplyDelete
    Replies
    1. What makes something the "right thing to do" is mostly dictated by the environment (e.g., "edible" vs. "inedible") and its effects on us. There are unavoidable consequences if you eat something poisonous, no matter what you may think, believe or feel!

      The carnivore and the herbivore are both objectively right (and agree) when it comes to identifying the members and nonmembers of the category "ham." They differ on what is morally right to do with it.

      So, having decided (or, as in my case, discovered) what is the morally right thing to do as far as eating is concerned, I would disagree with a carnivore on the definition of the category "what it is morally right or wrong to eat" -- but we would both understand what meets one another's definitions of the category. No confusion there.

      If it weren't for the other-minds problem there would be a way to wire up the world so that omnivores like us (who can survive and be healthy without hurting other animals) felt the pain of the animals we kill and eat. That would be the feedback that kept us from eating them. But in the real world, because of the other-minds problem, we do not feel their pain, so there is nothing to stop us from making them suffer -- except our own mammalian empathy.

      Delete
  7. I’m not convinced that cognition outside of continuous, analog sensorimotor skills is just categorization.

    Categorization as a concept simplifies problems by defining criteria and differentiating instances accordingly. In this way, it's similar to a simulation in that it cherry-picks the relevant attributes without incorporating the complete system that is responsible for the selected attributes. Categorization seems like a valuable model to simplify cognition for easier understanding and communication.

    If the root of feelings stems from distinct categories that are learned through evolution, wouldn’t it make sense that we see more consistency in how humans describe them?

    There’s more diversity, generally and when comparing cultures, in the language humans use to describe feelings than to describe colors. This, along with the fact that trying to talk about and label feelings is more difficult than colors makes me think that categorization facilitates the logical reasoning about and communication of our thoughts, rather than explaining how it is that we think.

    ReplyDelete
    Replies
    1. A mechanism that can learn to "Do the right thing with the right kind of thing" is simple? TT-scale?

      It is language (description/definition) that is a bit like simulation and the Strong C/T Thesis, not categorization. (But categorization, too, is always just approximate, because categories are underdetermined: there may be many features that can do the (same) trick, "weak equivalence," more than one way to peel a grape.)

      I don't know what you mean by "incorporating the complete system that is responsible for the selected attributes" or by "the root of feelings stems from distinct categories".

      See the other replies about objective vs. subjective categories.

      Delete
    2. I just wanted to reply to the idea of categorization as a valuable model to simplify cognition. From reading the short story "Funes the Memorious" and our discussion in class, I would argue that cognition itself simplifies (and by this I really mean abstracts) the world itself. Human cognition has blind spots and a far from perfect memory. So in this way I don't see categorization really as a simplified model of cognition. While yes, categorization heavily weights certain attributes as important, so does our cognition. I am not sure if this was exactly what you meant though.

      That being said, I agree that categorization is not going to fully explain cognition on its own. More my point was for the scope of cognition that I do think categorization captures, I don't see it as a simplification of cognition (of the outside world though most definitely).

      Delete
    3. Before we can say "categorization explains cognition," we have to explain (reverse-engineer) categorization!

      Delete
  8. “Where does this leave goodness, truth and beauty, and their sensorimotor invariants? Like prime numbers, these categories are acquired largely by hearsay. The ethicists, jurists and theologians (not to mention our parents) tell us explicitly what kinds of acts and people are good and what kind are not, and why.”

    The above passage pairs well with an earlier reading that contemplated how one could create an A.I. that harbours consciousness. In that text, it was noted that you need to recreate not only the capacity for cognition but also the ability to learn and have experience. The passage demonstrates that simply being a passive observer would not account for all aspects of categorisation in cognising beings – you have to interact with others to learn certain categories. This is especially pertinent for categories that are defined not by perceptual features but rather by a common pattern of behaviours, such as goodness, kindness, etc.

    I think it also provides an argument for T3 in the sense that T3 should exhibit a motor component, not just a sensory one. This is because the 'abstract' categories in the passage (I know abstract is a murky concept and can be considered a weasel word, but I'm having trouble finding a more Kid Sib-friendly substitute) are defined more by actions than by perceptual features. As such, if the T3 robot could manifest these actions itself, there would be more opportunity for positive and negative feedback, thus facilitating the formation of categories.

    ReplyDelete
    Replies
    1. Ting can do anything we can do; to do, you have to move.

      In category learning (supervised), corrective feedback need not be social or human. You'd get it even if it was just you in a mushroom world. (But you probably wouldn't talk.)

      Delete
  9. "The neural net in the learner's brain does all the hard work, and the learner is merely the beneficiary of the outcome. The evidence for this is that people who are perfectly capable of sorting and naming things correctly usually cannot tell you how they do it." (section 27)

    This extract was very reminiscent of a concept we saw in week 1, which was "cognitive impenetrability". This was the idea that cognitive science cannot be done by introspection.

    This paper convinced me of the validity of Fodor's argument that neuroimaging in its present state misses the point of explaining cognition. The next step in cognitive science is to investigate the causality behind the categorization process: how do we identify different kinds of things, and how do we modulate our responses based on what we categorize them as? The question now is how to proceed in this quest: behaviorism does not give us the answers we want. Operant conditioning only tackled primitive cases of categorization. Empirical observation using neuroimaging techniques does not answer the question of "how".

    In the paper, thought experiments are often used to illustrate the points being made. Is the step ahead for cognitive science then a mix of experimental learning algorithms and thought experiments? In the case of the chicken-sexing example, questions were used to ask the grandmasters how they categorized the chicks. However, given the concept of cognitive impenetrability (especially when it comes to implicit abstraction/knowledge), how can we proceed in our attempt to explain categorization?

    ReplyDelete
    Replies
    1. You're right that this is an example of the inability of introspection to reverse-engineer cognition, but Pylyshyn's "cognitive penetrability" is not quite the same thing: it's his attempt to sort out what is and is not cognitive. His candidate criterion was that what is cognitive is "penetrable" to cognition: once you understand the "Monty Hall" problem, you no longer believe that it makes no difference whether you stick or switch, whereas the Mueller-Lyer illusion is not "penetrated" by knowing that the two lines are really equal in length.

      Cognitive penetrability/impenetrability turned out to be inadequate for distinguishing what was cognitive from what was "vegetative" -- probably because the only distinguishing feature is feeling, and, because the other-minds problem means there is no way to use that feature except in yourself, by introspection!

      [It is not about "what we categorize them as" but about how we are able to learn to categorize them ("do the right thing with the right kind of thing").]

      A current hunch about how we do it is unsupervised and supervised neural nets -- but whether they are strong enough to scale the heights of Turing's Test... who knows?

      The way to do reverse-engineering, like the way to do all science and art, is by trying to think of solutions (like neural nets) and testing them. (Stravinsky thought music was, like science, "problem solving." I'm not so sure, but, being a pygmy, I'm not really qualified to judge!)

      I have stopped using the horrible and heartless example of chicken-sexing and now use only mushroom-sorting. I am deeply ashamed for ever having used the other example.

      Delete
  10. In Professor Harnad’s article “To Cognize is to Categorize,” we learned about affordances and the three types of learning.

    An affordance is an invariant feature of an object that indicates how an agent can interact with it. For example, a teddy bear affords hugging to a young child.

    The three types of learning:
    - Instrumental/operant/reinforcement learning is a subcategory of supervised learning. It is when an agent learns as they go, with a reward as the feedback for the right output and no punishment for the wrong output. This type of learning is only effective for acquiring all-or-none categories (e.g., pigeon’s black/white sorting).
    - Supervised learning is when an agent learns through trial-and-error and corrective feedback (e.g., chicken-sexing). This type of learning is time-consuming.
    - Unsupervised learning (“mere exposure”) is when an agent learns on their own from finding patterns (or physical similarities and differences between things due to reflection and shadow detection by our retina) in the environment. This type of learning is time-consuming and risky. With repeated exposure alone, the agent cannot abstract the winning features of a thing.

    Therefore, supervised learning is the optimal type of learning for categorization. Since categorization is an important aspect of cognition, building a T3 robot with the sensorimotor performance capacity to partake in supervised learning would enable it to abstract the important sensorimotor features of unfamiliar objects and dismiss the others. This way, we can get closer to understanding cognition.
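
    A minimal sketch of the contrast just summarized, with invented one-dimensional data: unsupervised "mere exposure" can group inputs by their intrinsic similarity alone, but it takes supervised corrective feedback to confirm that the grouping is the one that matters for doing the right thing.

    # Invented illustration of the unsupervised/supervised contrast.
    samples = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]     # some 1-D sensory measure

    # Unsupervised "mere exposure": split the samples at the biggest gap.
    gaps = [(samples[i + 1] - samples[i], i) for i in range(len(samples) - 1)]
    _, split = max(gaps)
    clusters = samples[:split + 1], samples[split + 1:]
    print(clusters)        # two clusters emerge from similarity structure alone

    # Supervised learning: corrective feedback after each attempted sorting is
    # what tells the learner whether those clusters are the categories that
    # matter (e.g., edible vs. inedible), or whether other features are needed.
    feedback = {0.1: "edible", 0.15: "edible", 0.2: "edible",
                0.8: "inedible", 0.85: "inedible", 0.9: "inedible"}
    boundary = 0.5         # a feature threshold the learner could converge on
    assert all((s < boundary) == (feedback[s] == "edible") for s in samples)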

    ReplyDelete
    Replies
    1. All categories are all-or-none (i.e., categorical). If membership is not all-or-none, but instead a matter of degree, it's not categorization but a relative judgment.

      Both kinds of learning by induction -- unsupervised and supervised -- can be time-consuming and risky. It is learning by instruction (verbally) that is fastest and safest (as long as your instructor tells you the truth, actually reducing uncertainty about what is the right thing to do -- by telling you the features that distinguish the members from the nonmembers of the category, using subject/predicate propositions composed of directly or indirectly grounded words).

      Delete
    2. @Prof: do we then use this relative judgement to determine whether something which is a matter of degree can be placed into a continuous category?

      Delete
    3. Ishika, I'm not sure what you mean by a "continuous" category. I guess you mean categories that fall along a continuum.

      Colors fall along a continuum of frequencies; so do semi-tones (in music: A, Bb, C, C#... in people with perfect or "absolute" pitch); and vowels fall along a 3D continuum of mouth positions (ah, ay, ee, eh, uh, ooh...) but the categories themselves are discrete. The members within each category are discriminable from one another when presented simultaneously, but they are not individually identifiable (categorizable) when presented alone. Each category has a lot of variants, but the distinguishing or invariant features that distinguish its members from non-members are still abstracted features shared by all the members.

      (But remember that the invariant feature can be a boolean combination of features -- f1 OR f2 AND NOT f3... -- and that even people with "absolute" pitch are only approximately accurate on the well-tempered scale: they can identify the 12 semitone pitches in the octave approximately -- to within a semitone that feels a little high or a little low, along a logarithmic frequency continuum of about +/- 20 Hz at A440, but no closer than that, though they can discriminate simultaneous or successive pitches much more accurately. And people with "relative" pitch can do just as well as people with absolute pitch if you first give them a reference tone...).

      Sound loudness, frequency, or wave-shape otherwise behaves like "big" and "small": it's purely relative. We can discriminate loudness, and we can break up the audible range from "can barely hear it" to "ouch" as: "faint, soft, moderately soft, middlish, loudish, loud, ouch." But there are no CP boundaries along the continuum, just as there are no CP boundaries along the shades of gray from white to black.

      Color, semitone and phoneme perception are different, and special. And most categories (e.g., mushrooms) do not range along a continuum but vary in a multi-dimensional feature-space, some of those features being discrete, some being continua (of either the rainbow or the shades-of-gray sort).

      Those "implicit" features/dimensions can themselves, in turn, become learned and named categories -- which can then be used by someone who knows which features distinguish the members from the non-members of a higher-order category (like "zebra") by explicitly telling them in words ("striped" "horse").

      Delete
  11. Having established that categories are infinite and can always be adjusted (weighted), when is it appropriate to create a hierarchy of categories? As I read this, I was under the impression that while categories are weighted and we can discriminate upon what is "wrong" and "right", it is still an idea that lacks depth (categories are either inclusive/exclusive or right/wrong, as I understand, each of these criteria yield the same "categorical weight" as the next one, which makes the categorical landscape flat). This means that the only possible hierarchy one could make is one of where being "right" is gradually increasing to the point of being absolute (the top of the pyramid). This would also mean that this type of hierarchy would have to exist and be allocated to each category that one would categorize for a specific thing. Therefore, hierarchies exist for each individual category.

    One way I can think of organizing a hierarchy would be with a form of "macro" categories. The best example of a macro category that comes to mind is sports. The sport of Brazilian Jiu-Jitsu (BJJ) is itself a category of "martial art" and "grappling sport". Within this category, there is a finite set of techniques one can do (because of the rules of the sport), but a near-infinite number of ways one could use and combine the techniques one knows. I think we could label these techniques as the "micro" categories that make up BJJ. Just as with any sport, there is a set of proper and improper techniques one can do in BJJ, and we can say that the improper techniques are not acceptable within BJJ because they do not make the category of BJJ function, whereas the proper ones do. Thus, a possible way to interpret a hierarchy for a category without falling into a pool of infinity is to evaluate how the micro-categories function within the macro-category in order to make sure it functions properly.

    ReplyDelete
    Replies
    1. There are lots of abstraction hierarchies, e.g. apple < fruit < food < thing but categories are not all just one big hierarchy; they can be parallel, intransitive, orthogonal...

      Starting with "flat landscape" to "absolute right" to "infinity pools" I am afraid I lost you completely...

      Delete
    2. Sorry for the confusion with my wording/terminology.

      What I meant with "flat landscape" is that (at the time that I did the reading), it seemed as if categories lacked "depth" and to me it felt as if they were "flat" because of that lack of depth.

      "Absolute right" would be referred to the "judgments of degree" we spoke about in class today.

      "Infinity pools" Was merely me trying to use the micro and macro categories as a means of distinguishing proper categories (Features as we discussed in class) that make up the category of BJJ so my argument doesn't get bogged down by the reality that there is potentially an infinite amount of categories.

      I hope that's clearer.

      Delete
    3. Now I know what your words meant, but I'm afraid I still don't get your point!

      Delete
  12. Very kid-sibby question, but is this correct?

    When I say the word "bus", the reason why I am able to "understand" what that word means is because the word (symbol) is grounded in a referent. Being able to ground the symbol does not mean that it has meaning, as meaning is the hard problem. However, grounding is what allows us to connect objects in the world with ideas in our head. This means that "bus" refers to an idea in my head. Categorization is the way in which I created this idea in my head.

    ReplyDelete
    Replies
    1. The question is very kid-sibby, but it's the right one to ask! So I will give you a very detailed answer that explains why grounding is not the same as meaning, but neither T3 nor T4 can distinguish them.

      You, a human being, know what a bus is; you understand the word "bus." You know what it refers to. You understand what it means, and you know how to use "bus" to say and mean a bus. It feels like something to understand and mean "bus." So we know that if "bus" is grounded for you then part of that is the fact that you understand and mean "bus" when you use it.

      Because of the other-minds problem, we can't be absolutely sure you feel at all. But in this cogsci course -- which is not a philosophy course on the epistemology or metaphysics of certainty or scepticism but a course on reverse-engineering cognitive capacities (the "easy" problem) -- the other-minds problem concerning other healthy, adult human beings is irrelevant. It arises only concerning human early-stage embryos, patients in chronic vegetative states, and living species without nervous systems, like microbes and plants.

      Does the other-minds problem also arise for T3 robots, like Ting?

      Ting is grounded. But is she sentient? Does she feel? If not, her words are grounded, but they don't mean anything -- to her, because she is a zombie. Nobody home.

      But Turing points out that we have no more nor less reason to believe -- or to doubt -- that Ting (T3) feels than we do with any other person because she can do anything we can do, we know how (because we have reverse-engineered how) and we cannot tell her apart from any other person (except if we do neural-imaging on her: T4). But would you kick Ting if she passed T3 but failed T4?

      So, Turing says (and "Stevan says" Turing is right) don't try to ask for more: T3 (or T4, if you insist) is the best that cogsci can do, given the other-minds problem. Thinking is as thinking does (or can do). Meaning is as meaning does.

      So we can be (just about) as confident (or skeptical) about whether or not Ting feels -- hence whether or not she understands and means what she says -- as we can be with one another.

      But that still means that "meaning" is not just grounding. It's grounding plus whatever produces feeling, and hence the feeling of understanding and meaning what words and sentences mean. But it would require solving the other-minds problem to know whether Ting feels, and it would require solving the hard problem to know how and why (or how and why not).

      If you have a category ("do the right thing…” etc.), and you have language, you can name that category, and that name is grounded, because you already have the category. Because you have language, your subject/predicate sentences are also grounded. Because you are a sentient organism (though we don't know how or why), the names of your grounded categories, and grounded sentences about them, also have meaning for you. When you have the referent “in mind” you don’t just have it “in head.” If Ting is a zombie, then there can be T3 grounding without meaning. If Ting cannot be a zombie because T3 capacity is not possible without meaning, then there cannot be T3 grounding without meaning. Trouble is, we don't know how or why, nor how or why not. (The same would be true for T4.)

      The premise with kid-sibs is that they don't know anything, but they are super-smart and passionately interested in knowing. They are also impatient with long answers -- but absolutely intolerant of answers that don't make sense.

      This one was too long, but maybe it still makes sense...

      Delete

  13. "Although categorization is an absolute judgment, in that it is based on identifying an object in isolation, it is relative in another sense”

    I really like the paradox this passage sheds light on, in contrasting how, while being an absolute judgement, categorisation remains relative and completely dependent on the alternative items in the environment/set to be sorted - as the text said: “invariance is relative to the variance”. It explains why unsupervised learning and absolute judgement are not enough some of the time. More precisely, supervised learning is necessary when the invariant features that need to be selected depend upon the context, and it is impossible to determine the correct features and corresponding weights without “supervision”, i.e. feedback/external cuing.

    If I understand this correctly, the way supervised learning works is: as certain cues are selectively abstracted from our environment they gain saliency; that is, by repeatedly “compressing” features belonging to the same category and “expanding” the separation between features of different categories, they become more “perceptible”. Recursively, the set of features to be evaluated is cued/weighted before being input to the next step of the recursion. The chicken-sexer, through repetition, comes to recognise certain features of the chickens as more salient than any untrained eye would find them; additionally, he cues (or receives cues from a master chicken-sexer about) which features to keep track of, and can thereafter do the right thing with the right kind of thing (sort males from females). In other words, as cues are selectively abstracted for objects x by trial and error, correcting the discrepancy between what x's are supposed to do and what their instances are actually doing makes them more and more categorisable, i.e., their invariant features become more salient.

    ReplyDelete
    Replies
    1. Because we are talking about affordances, which describe the possibilities of how an individual can relate to an object, I want to extend the observation from my previous comment about supervised learning. In my previous comment I described how supervised learning involved the perceiver (or an outside agent) recursively modifying/cuing sets of objects to be sorted by adding weights to certain features before re-inputting them for a new trial of categorization; this allows for objects to be more categorisable by better compressing features belonging to a category and expanding the space between features of different categories. If I relate this to affordances, an affordance depends on cues of an object “inviting” an agent to use it in a particular way (e.g. a handle affords holding), the cues making up an affordable aspect of an object are selected by a similar process of recursive categorization. In other words, an affordance is created by selectively abstracting particular cues of an object and verifying that we can do the right thing with them by trial and error.

      But, interestingly, the perceiver is not the only thing being modified in supervised learning. In supervised learning, a virtuous loop starts: the more certain features are used in a set, the more they gain in saliency as they are modified by their users. Imagine having to choose the safest path in a forest; the more the paths in the forest are used, the more likely they are to become a trail or even a road, inviting more people to journey onto them. Perceivers not only learn individually to adapt their perception to detect the more important features of a set (or, in this case, their niche) through unsupervised learning, but can also make the cues of their set more salient by modifying them. So the individual's perception is not the only thing being modified in supervised learning (as opposed to unsupervised learning); its sample is also "being adapted" to the perceiver, to provide it with feedback.

      Delete
  14. Harnad defines categorization in simple terms: categorization is "doing the right thing with the right kind of thing". Although Fodor tried to argue that categories are not learned but rather an entirely innate ability, the case for categorization by learning is very strong. I know that there are more established refutations of Fodor's argument (including the idea that, unlike with Universal Grammar, humans require 'negative cases' in order to correctly categorize, negative cases meaning things that we are shown are *not* members of the category in question), but I wonder if it could be as simple as this: some categorization must be learned because there are things we have now that we have never had to categorize before. For example, we must have learned to categorize an iPhone as a technology that we can use to communicate with other humans, since it bears little resemblance to the communication devices that came before it and was only invented some 15 years ago (has it really been that long?). Does this make sense? Or would Fodor argue that it doesn't matter that certain objects or concepts hadn't previously existed, because we always had the innate capacity to categorize them?

    ReplyDelete
  15. "Machine learning algorithms from artificial intelligence research, genetic algorithms from artificial life research and connectionist algorithms from neural network research have all been providing candidate mechanisms for performing the “how” of categorization."

    In this section of the paper, Harnad discusses "learning algorithms". This section confused me somewhat in hindsight. As I understand it, the discussion we are having about categorization and its inherent sensorimotor basis is an attempt to define the mechanisms of symbol grounding. Namely, if we can understand the mechanisms that underlie categorization, we may be closer to filling the gap between the symbols in the head and their referents, or more precisely, their categories. The above citation mentions artificial learning algorithms as potential mechanistic explanations for categorization. If that is the case, how is the current discussion helpful in solving the symbol grounding problem? Are we not still defining the mechanism as algorithmic, and therefore computational?

    ReplyDelete
    Replies
    1. I think the link stems from the fact that "categorization is intimately tied to learning". Since categorization is doing the right thing with the right kind of thing, to categorize something we need to figure out what kind of thing it is based on our previous experiences. So we have to look at the categories we have and the objects we've encountered so far and infer the category of the new object. This is basically what machine learning algorithms do. They first take a set of inputs (labelled or unlabelled, depending on the type) and the algorithm learns what kind of thing each input is. When you give the algorithm a new input, it compares the new input to everything else it's encountered so far and decides the new object is the same kind of thing as the things that are most similar to it. So a super simple example would be if you gave the algorithm trees and flowers as input so it learned those categories, and then tested it on a tall plant with a trunk and more than 10 leaves. Since those properties fit a tree better than a flower, the algorithm will call it a tree.

      I think this could potentially be very similar to how we learn categories, since we also take input and look at our previous experiences to decide what it's most similar to. However, this doesn't make symbol grounding computational, since both our previous experiences and the current thing we're trying to classify depend on sensorimotor input, which is implementation-dependent and so not computational. I think, since the entire system can't be computational, the algorithmic nature of learning the categories doesn't necessarily make machine learning algorithms unviable as an explanation for how we categorize.
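
      A minimal nearest-neighbour sketch of the similarity idea described above (the feature encoding and numbers are invented): the new input is assigned the category of the most similar thing encountered so far.

      # Nearest-neighbour categorization by similarity to past experience.
      # Invented feature encoding: (height in m, has_trunk, leaf count in hundreds)
      learned = [
          ((5.0, 1, 2.0), "tree"),
          ((8.0, 1, 5.0), "tree"),
          ((0.3, 0, 0.06), "flower"),
          ((0.5, 0, 0.09), "flower"),
      ]

      def distance(a, b):
          return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

      def categorize(new_item):
          # Assign the label of the most similar previously encountered thing.
          _, label = min(learned, key=lambda pair: distance(pair[0], new_item))
          return label

      print(categorize((4.0, 1, 0.3)))   # tall, trunked, ~30 leaves -> "tree"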

      Delete
  16. “What the stories of Funes and S show is that living in the world requires the capacity to detect recurrences, and that that in turn requires the capacity to forget or at least ignore what makes every instant infinitely unique, and hence incapable of exactly recurring.”

    I hadn’t ever thought about this before. We seem to assume that having an infinite memory would be super helpful – you’d ace all your exams, be better at your job, etc. Not having read the Funes story, I wonder how this condition affected inborn category perception. For example, was colour perception changed? We know that the strong Sapir-Whorf hypothesis is incorrect, which is the idea that our language is what makes us categorize things like colours. We are instead born with the ability to tell colours apart, categorically (we all agree on the boundaries of the colour spectrum, without these boundaries really existing; we know this is because of the configuration of our cones). So, I wonder whether having the condition described in the Funes story affects inborn categories as well. I don't think it would change colour boundaries, since this would require changing one’s very photoreceptors.

    ReplyDelete
    Replies
    1. It is an interesting question. Funes and S. show that without the ability to abstract features, every instance of a given sensory input is defined equally by all of its unique features, making it impossible for any other stimulus to be grouped with it to form a “category”. With the example of color perception, this would perhaps lead to the perception of as many “colors” as the human eye can detect shades!

      The finer details of how this would play out depends on how you conceptualize color perception. If we accept the universalist argument that color perception is an innate form of categorical perception, then Funes and S. should still be able to differentiate between universal “focal colors”, such as red, green, blue etc. As you said, such categorization is directly related to the configuration of our sensory organs, and therefore occurs at a very basic perceptual level. A relativist perspective might argue that Funes and S. would be completely unable to categorize colour, given that the boundaries between colors are not inherently perceptually obvious, but learned.

      It has been shown that categorical perception is affected by culture, especially surrounding the blue/green range. This demonstrates that learned categorization does play a role, and supports a weak version of the Sapir-Whorf hypothesis. Researchers such as Debi Roberson suggest that there might be two distinct levels at which color is processed - one that is linguistic and categorical and one that is non-linguistic. Perhaps Funes and S. would be able to do things such as match identical colors using the non-linguistic system, but not categorically distinguish them.

      Delete
  17. The ability to categorize, or do the right thing with the right kinds of things, is something that humans can do from a very young age. Categories can change over time as we learn new information, and so are rooted in our experience. They help us communicate with others since it involves an understanding of physical traits, traits that we can use to describe objects to others who also have the ability to categorize. I understand that categorization may point to a solution to the symbol grounding problem - if a person is able to categorize and describe objects with language, we can assume they have an understanding of the object. The word symbolizing the object has meaning to them. To have a robot that can use sensorimotor sensors to categorize objects is to ground those symbols for the robot.
    I understand all this, but my only question arises in the last part of the paper - Harnad speaks of “abstract concepts” like love, or the number 2. What kind of sensorimotor sensors would a robot need to understand abstract concepts? We understand the concept of love through hearsay, but it also “feels like” something to love, in a different way that it “feels like” something to recognize a table. Since these abstract concepts cannot be hearsay all the way down, then what is the sensorimotor invariant that allows us to categorize love and truth and beauty?

    ReplyDelete
    Replies
    1. “Where does this leave prime numbers then, relative to primroses? Pretty much on a par, really.”
      I’m not sure about the sensorimotor sensors needed to identify invariants of abstract concepts, but at a base level, we know that we already have to have categories that we can name in order to learn new ones, regardless of whether they are physical objects or abstract concepts. So in this respect, if we’re to describe all the features of experiencing love, there could be different constructions from categories we already have that help us recognize love, similar to learning the properties of prime numbers. Again, we run into the problem of hearsay all the way down, but Harnad also adds that we “perhaps rely on our own sensory tastes in the case of beauty,” and are influenced by hearsay from aestheticians or critics. I think this can apply to some other feelings in addition to beauty, like love. We are told verbally what love is and feels like (hearsay helps us cognize love), and we also experience it in our lives with others, watch movies, etc., so there is also some sensory experience attached too.

      Delete
  18. The supervised learning section certainly sheds light on the concept of categorization. It seems sensible to view categorization as a learned ability as opposed to an innate skill, because it seems as though the error corrections and miscategorizations contribute to our learning and capability to categorize in the first place. Unsupervised learning or "mere exposure" cannot be sufficient for the pigeon's black/white sorting or for chicken-sexing; feedback-guided trial-and-error training would be essential.

    "An unsupervised learning mechanism could easily sort out their retinal shadows on the basis of their intrinsic structure alone". But this is not the case, some inputs require more than the shape of the shadow to allow us to categorize item, it is the external supervised input and life experience that allows us to be able to categorize in these instances, supporting the supervised learning hypothesis."

    I'm inclined, then, to wonder about the mechanisms at play in patients with traumatic brain injuries, tumours, lesions, etc., who lose their ability to categorize. Does this suggest that the injury has destroyed the "categorization" area of the brain to the extent that they can no longer perform this ability (the innate hypothesis)? Or perhaps it is more complicated than direct brain regions being affected; instead, is it a dysfunction of the underlying mechanisms that facilitate supervised learning that causes this impaired ability?

    ReplyDelete
  19. 4. Learning:

    In the case of differential responses, there is one element that is not quite clear to me. It makes sense that a different input must yield a different output; however, a section of the explanation implies that the same input does not always require the same output. Is this in the sense that, given a Fodor shape or a baby chick as input, the output categorization may differ based on our categorization parameters (sorting by sex, species, etc.)? If this is the case, does that not nullify the requirement that different inputs must have different outputs? (If we sort by kingdom, chicks and Fodors will both go into Animalia.) Or is this issue alleviated by focusing on the same /type/ of input (based on our criteria), so that Animalia(fodor) and Animalia(chicks) both give the same output because they are the same kind of input?

    ReplyDelete
  20. Harnad writes: "categorization is any systematic differential interaction between an autonomous, adaptive sensorimotor system and its world." To break it down more for kid-sib, I think, he explains what "systematic" means when it describes these "interactions" by opposing them to "arbitrary" ones, like the wind and the sand blowing around together. I wonder, couldn't we describe the sand and wind, governed as they are by physics, as systematic too? In answer to this query, Harnad answers yes, it is, "like everything else in nature a dynamical system," and it is also constituted of "differential" interactions between sand and wind.

    So what is the major distinction between (1) the way sand and wind systematically interact with each other and the world and (2) the way systems that can categorize and learn categories (by abstracting categorical features) interact systematically with each other and the world? How do we mean that sand and wind interacting is NOT categorization in the same way human cognition allows us to categorize?

    Harnad says it is the "adaptive changes in autonomous systems... in which internal states within the autonomous system systematically change with time, so...the exact same input will not produce the exact same output across time, every time, the way it does in the interaction between wind and sand." The keywords are *adaptive* - describing differential interactions/responses that change over time - and *autonomous* - which, presumably, requires the system to be adaptive?
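
    One way to picture the contrast, as a minimal sketch (Python, hypothetical names and numbers): the wind-and-sand "system" is a fixed mapping from input to output, while an adaptive system carries an internal state that feedback keeps changing, so the very same input need not produce the same output forever.

        def sand(wind):                       # fixed mapping: same input, same output, always
            return "pile_left" if wind == "east" else "pile_right"

        class Learner:                        # adaptive: internal state changes with feedback
            def __init__(self):
                self.threshold = 0.5
            def respond(self, stimulus):
                return "eat" if stimulus > self.threshold else "avoid"
            def feedback(self, stimulus, was_right):
                if not was_right:             # corrective feedback shifts the boundary
                    self.threshold += 0.1 if self.respond(stimulus) == "eat" else -0.1

        l = Learner()
        print(l.respond(0.55))                # "eat"
        l.feedback(0.55, was_right=False)     # it turned out to be the wrong thing to do
        print(l.respond(0.55))                # now "avoid": same input, different output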

    I’m trying to understand, as kid sib, what “autonomous” means. Is a system autonomous if it adapts and changes its differential responses to things over time? Or is it autonomous by virtue of having “internal states”? Or both? And is this where we run up against the hard problem?

    ReplyDelete
    Replies
    1. To my understanding, a system is autonomous in that it changes and adapts through its changing internal states. Additionally, the hard problem can be avoided here if we treat these internal states only as categorization. The way autonomous, adaptive sensorimotor systems (like us) categorize changes as they are exposed to more things, and so the way we respond to old and new things changes as our internal states change to reflect past information. Autonomy doesn't *require* adaptivity; the two merely go hand in hand for categorization purposes.

      Delete
  21. From Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization

    We are faced with the problem of explaining meaning. Meaning is a combination of symbol grounding and feeling. So far we have learned that grounding is the process by which the brain picks out its referents, and that it can be analog (direct grounding) or computational (indirect grounding).

    This reading explains how the process of grounding symbols is categorization. A T3 robot grounds symbols through its ability to categorize, since categorizing (having categories) involves a perceptual or attentional filter that extracts the relevant features and allows the system to do the right things with them. We learned in class that to categorize is to do the right things with the right kinds of things; so a T3 robot is doing the right things with the right kinds of things when it grounds symbols. The problem of categorization (for cognitive science) is to explain how it is capable of doing so: how do sensorimotor systems like ourselves manage to detect those kinds of things and respond differentially to them (section 4)? What mechanism could account for differentially sorting a potentially infinite number of different kinds of things from sensory inputs?

    While some abilities to categorize may be innate, the main idea presented in this paper is that sensorimotor systems learn to categorize: through supervised learning, unsupervised learning, or verbal instruction. These are the candidate mechanisms for learning categories.

    How can we relate the idea that grounding can be direct to the notion that categorization does the symbol grounding? Does a direct form of categorization (purely analog) exist or would that be non-categorical cognition?

    ReplyDelete
  22. Harnad (2005) states:
    "Autonomous, adaptive sensorimotor systems categorize when they respond differentially to different kinds of input, but the way to show that they are indeed adaptive systems -- rather than just akin to very peculiar and complex configurations of sand that merely respond (and have always responded) differentially to different kinds of input in the way ordinary sand responds (and has always responded) to wind from different directions -- is to show that at one time it was not so: that it did not always respond differentially as it does now. In other words (although it is easy to see it as exactly the opposite): categorization is intimately tied to learning."

    This proposition suggests that, if we accept that categorization is cognition - even if one does not take the strong position that categorization is all of the easy problem, but simply holds that categorization is a meaningful and necessary component of cognition - then cognition relies on the capacity to be wrong, on categories of and responses to given objects being able to change: on minds learning.

    Under this model, would an organism that has only innate categories - as Fodor proposes we do - actually be eligible for consideration as a cognizing machine? I think not.

    Re-read, this proposal is also an argument against pure computationalism, under which the correct program would invariably produce the right output from the right digitized input, like a perfect model of some precise sandstorm or the world's best chick sorter; yet by Harnad's definition, its computational perfection would disqualify it from cognizing.

    As Harnad suggests, there has been far too much concern about the ontic capacity of engineering cognition - a disposition I would speculate comes in part from the prominence of computationalism and its desire to turn everything into symbols, values, and data points. Yet, in some ways, it is the 'rawness', unfinished quality, and variability of our sensory inputs, delivered by analog sensors, that allow us to learn to create flexible, mutable categories - to 'change' our minds (or simply our categories). If the sensory input is pre-reduced (to the level of language, as in a T2 Turing test, or to binary, as in a computer), I do not think it is possible to accurately model learning, and therefore to accurately create cognition.

    ReplyDelete
  23. If "categorization is discrete and all or nothing: do this or do that", is it a digital process?

    I understand that sensory inputs are received as analog and get 'read'/reduced into a binary: all things in the world are either cats or not-cats (everything else). That distinction sounds like a digital process to me, a computation even. If this is the case (and if we assume cognition is categorization), cognition is potentially breakable into a three-part sequence: 1) sensory uptake (analog); 2) extraction of meaning from the stimulus, i.e. categorizing; 3) whatever the organism does with the information that these categories provide.
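
    A minimal sketch of that three-part sequence (hypothetical; the sensor reading and the cat/not-cat boundary are stand-ins for whatever the real mechanism extracts):

        def sense():                  # 1) analog uptake (stand-in for a retina or camera)
            return 0.73               # a continuous "cat-likeness" reading

        def categorize(reading):      # 2) reduce the analog value to a discrete category
            return "cat" if reading > 0.5 else "not-cat"

        def act(category):            # 3) do the right thing with that kind of thing
            return "approach" if category == "cat" else "ignore"

        print(act(categorize(sense())))   # "approach"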

    The first and third 'steps' seem easy enough to reverse-engineer: we can make analog sensors (cameras, pressure pads, prosthetics), and the discrete data of the derived categories can readily be read as inputs for behaviour by a sufficiently complex computational model (linked up to sensorimotor components, of course). Yet two issues complicate this. First, we have to figure out step two: how does the 'magic' happen such that light becomes objects and categories, and how do those categories get learned and changed - how do we transition from analog to discrete while retaining complexity? Second, and perhaps most challenging, studies have shown that this path is not unidirectional: our categorizations, memories, and associations feed back onto our perception of the 'objective', analog stimulus, both above and below the level of consciousness. While the reduction from the analog and continuous to the discrete is something we have already modeled in machinery (cameras turn images into pixels, facial recognition 'recognizes' us), to my knowledge the looping, top-down effects present in cognition - the ability of the discrete reductions to be applied back so that the analog itself appears to change - have not yet been demonstrated artificially, let alone at the level of interrelation at which they occur in a single moment of human cognition.
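
    The looping, top-down part is the harder one, but a crude sketch of the shape it would have to take (hypothetical, continuing the toy categorizer above): the current expectation feeds back and shifts how an ambiguous analog reading gets resolved.

        def categorize(reading, expectation_bias=0.0):
            # Top-down loop: what the system already expects shifts where the
            # category boundary falls, so the "same" analog input can be
            # resolved differently on different occasions.
            return "cat" if reading + expectation_bias > 0.5 else "not-cat"

        ambiguous = 0.48
        print(categorize(ambiguous))                          # "not-cat"
        print(categorize(ambiguous, expectation_bias=0.05))   # "cat": expectation reshapes the percept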

    ReplyDelete
  24. The section on feature selection and weighting reminded me of the cross-race effect: people are better at discriminating between people of their own race than people from other races. We’ve all heard or personally experienced stories of white bosses constantly confusing their two Chinese employees. [substitute virtually any situation as well as any two races].

    In the section on invariance and recurrence, Harnad mentions that “living in the world requires the capacity to […] ignore what makes every instant infinitely unique and hence incapable of exactly recurring”. I wonder if this would apply to the cross-race effect. Would having enhanced memory help, given that it requires the ability to recognize that which is infinitely unique and incapable of exactly recurring (faces)?

    ReplyDelete
    Replies
    1. Hi Claire, I just wanted to say that I also thought of the cross-race effect when reading this paper. But I wanted to add something. You asked, "Would having enhanced memory help [with discrimination], given that it requires the ability to recognize that which is infinitely unique and incapable of exactly recurring (faces)?" I would say that enhanced memory not only would not help with discrimination, but could actually hinder it. What helps one learn categories, or discriminate between two objects, is the ability to abstract the features that are relevant to whatever one is trying to categorize. For example, when comparing two instances of "dogs", it would be quite irrelevant to compare "colours of paws", right? That is something we filter out when discriminating or categorizing, because it does not help. Having an enhanced memory would, as you said, mean that you can recognize infinitely unique faces. But without the ability to abstract pertinent features, an enhanced memory just means that you cannot learn categories: every instance is unique, and you can't find enough common ground to group them together.
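
      A small illustration of that last point (a sketch with made-up feature dictionaries): if every remembered feature counts, no two instances ever match; weight only the relevant features and instances fall into the same category.

        # Two encounters with "dogs": most remembered features differ every time.
        a = {"ear_shape": "floppy", "paw_colour": "brown", "lighting": "dim"}
        b = {"ear_shape": "floppy", "paw_colour": "white", "lighting": "bright"}

        def same_category(x, y, relevant):
            return all(x[f] == y[f] for f in relevant)

        print(same_category(a, b, relevant=a.keys()))         # False: every instance is unique
        print(same_category(a, b, relevant=["ear_shape"]))    # True: abstraction over relevant features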

      Delete
  25. In "To Cognize is to Categorize: Cognition is Categorization," Harnad distinguished between unsupervised and supervised learning. Unsupervised learning requires no corrective feedback mechanism and is likely used to distinguish categories that are very different from one another, where there is only one way to separate the categories. For example, there is a clear distinction between the figure and the ground--and probably only one way to separate them--so this can be done by an unsupervised mechanism. Supervised learning, on the other hand, comes into play when there are many different ways to categorize different things and the categorization is context-dependent. Supervised learning provides a corrective feedback mechanism to tell the system whether of not it is doing the right thing with the right kind of thing... in the right context. In order for a system to know if it is categorizing correctly, it must know what kind of categorization the context requires, and the supervised learning mechanism provides this context. For example, if we want to sort chicks, "sometimes we may want to sort baby chicks by gender, sometimes by species, sometimes by something else" (Harnad 1987), and error-corrective feedback tells us whether the sorting is right in a given context.

    In order to know the context, however, it seems to me that a supervised learning mechanism must have some kind of innate--or, in the case of supervised learning algorithms, already-programmed--information to guide the system while it is learning. In other words, if we are supervised learners, we must have some innate mechanism that knows the context in which we are categorizing and provides us with error-corrective feedback for our categorization. Could it be that innate categories help provide the context for our supervised-learning mechanisms? Could Chomsky's UG be an example of such an innate category?

    ReplyDelete

Opening Overview Video of Categorization, Communication and Consciousness
