Blog Archive

Monday, September 2, 2019

6b. Harnad, S. (2003b) Categorical Perception.

Harnad, S. (2003b) Categorical Perception. In: Encyclopedia of Cognitive Science. Nature Publishing Group / Macmillan.
Differences can be perceived as gradual and quantitative, as with different shades of gray, or they can be perceived as more abrupt and qualitative, as with different colors. The first is called continuous perception and the second categorical perception. Categorical perception (CP) can be inborn or can be induced by learning. Formerly thought to be peculiar to speech and color perception, CP turns out to be far more general, and may be related to how the neural networks in our brains detect the features that allow us to sort the things in the world into their proper categories, "warping" perceived similarities and differences so as to compress some things into the same category and separate others into different categories.
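The "warping" described above can be illustrated with a toy sketch (not from the article): a steep sigmoid feature detector, standing in for an innate or learned category detector, makes equal physical steps near the category boundary look perceptually large, while equal steps deep inside a category are compressed. The boundary and gain values are arbitrary illustrative choices.

```python
# Toy sketch of CP "warping" (illustrative values, not from the article):
# a steep sigmoid feature detector maps a physical continuum onto percepts.
import math

def detector(x, boundary=0.5, gain=12.0):
    """Perceived value of stimulus x under a sigmoid category detector."""
    return 1.0 / (1.0 + math.exp(-gain * (x - boundary)))

stimuli = [i / 10 for i in range(11)]      # equally spaced continuum, 0.0 .. 1.0
percepts = [detector(x) for x in stimuli]

# Perceived difference between each pair of adjacent, equally spaced stimuli:
steps = [percepts[i + 1] - percepts[i] for i in range(len(percepts) - 1)]
# The steps straddling the boundary are far larger than the steps deep inside
# a category: between-category separation plus within-category compression.
```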

Perez-Gay, F., Thériault, C., Gregory, M., Sabri, H., Harnad, S., & Rivas, D. (2017). How and why does category learning cause categorical perception? International Journal of Comparative Psychology, 30.

Pullum, Geoffrey K. (1991). The Great Eskimo Vocabulary Hoax and other Irreverent Essays on the Study of Language. University of Chicago Press.

45 comments:

  1. As I read about "innate categories", my mind wanders to the conversation about minimal grounding sets (MGS).

    How are MGS grounded? It makes sense to me to say that those innate categories would form the basis for grounding such a set, since these innate categories are undoubtedly the ones we acquire first and use the most. However, could we take it so far as to say that *all* of the content words in our minimal grounding set are derived from our innate categories?

    To answer this question, it's worth asking then how we gain our categories at all. We acquire them either by instruction or induction. In the first case, if other categories are learned by language, then they build off the MGS, so cannot be included in it. If they are learned by induction, then we could in theory keep acquiring new categories to include in our MGS ad infinitum. But we haven't yet - we're stuck at 1500 words. What's keeping us from doing so?

    Replies
    1. As you note, there's no need for categories to be innate; they can also be learned by induction or instruction. But directly grounded (i.e., sensorimotor) categories (including the ones in the MGS) need to be either innate or learned by induction. (Instruction is indirect grounding.)

      Humans have grounding sets, but there's no need for them to be minimal, and no doubt we keep learning sensorimotor categories lifelong, even after language has gotten "off the ground."

      1500 is an approximate figure for the number of words in the MGS of a dictionary. People undoubtedly ground more than the minimum (and they don't all ground the same set). As I mentioned, we don't learn an MGS and then go into a dark room and learn all other categories through verbal instruction.

  2. “There are even recent demonstrations that although the primary color and speech categories are probably inborn, their boundaries can be modified or even lost as a result of learning, and weaker secondary boundaries can be generated by learning alone (Roberson et al. 2000).” (section “Learned CP”)

    This part of the reading specifically reminded me of the ideas of phonemes and allophones in different languages, which is essentially the idea that some sounds (phonemes) distinguish between words in one language but other sounds (allophones) don't. In this case, the boundaries of certain speech categories are pruned away because the distinction isn’t meaningful. Two separate categories aren’t necessarily needed because the two sounds won’t differentiate between words.

    What I find interesting about this is the idea that these boundaries can be degraded far more easily than they can be rebuilt, as learners of a second language later in life find it difficult to rebuild these categories and more clearly “hear” the difference between what are now two separate phonemes but were originally just allophones for them. There seems to be an idea here that the fine-tuning of our speech sounds, at least early in life, is a process of boundary removal rather than creation. As the text discusses, though, we should still be able to perceive the differences between two sounds whether they are allophones or phonemes; there is just a bias whereby two phonemes may sound more distinct to us than two allophones due to this categorical compression effect.

    Finally, given the paper was published in 2003, I am curious if more work has been done to empirically support the idea that there can also be instructional (learned by language) categorical perceptions in human cognition. I would expect that there are but I am interested if experimental support has been generated for this idea.

    Replies
    1. Phoneme categories are special because they are biologically prepared, almost as much as color categories are.

      Distinguish between discrimination and categorization. One is a relative comparison task (same or different? how different?) (easier) and the other is an absolute identification task (much harder) ("what is that?" or "what to do with that?") (Miller 1956).

      There is evidence for CP effects of learning by induction, where you learn by trial and error and corrective feedback. Nothing solid yet on whether verbal instruction can cause CP (but I suspect it can). You're told the distinguishing feature(s) and then practice makes you better at detecting it, which makes the category pop out more. (But that's just "Stevan says" for now...)

    2. Looking at the idea of phonemes again considering the difference between discrimination and categorization I think I gain a better understanding. Evolution has prepared us biologically to be able to discriminate between certain sounds (phonemes) and colours for instance, but as far as learning "the right things to do with them" we would need some kind of supervised, unsupervised, or instructional learning. For example, evolution has provided cognition with an innate ability (at least at infancy) to distinguish an 's' sound from other similar sounds like 'z' or 'sh', but to know the right thing to do with 's', (which in English could be something like adding it to the end of words to make a plural), that requires more than discrimination. It requires a form of learned categorization.

    3. Stephanie, not quite. Some things we can categorize (do the right thing with) from birth, without learning; for some of them it is because there is a physical gap between them (e.g., zebras vs. giraffes), others because we are born with innate feature-detectors for them (colors, phonemes). Others we learn; our brains abstract features and create feature-detecting filters.

      Not all learning produces CP (a between-category increase in discriminability and a within-category decrease in discriminability). Easy learning doesn't; harder learning does.

      CP's function is to reduce the confusability between categories (uncertainty about what to do with what).

      (But don't mix up categorization and discrimination. One is absolute, the other is relative.)

  3. “The frog's brain is born already able to detect "flies"; it needs only normal exposure rather than any special learning in order to recognize and catch them. Humans have such innate category-detectors too: The human face itself is probably an example”

    Visual object detection and analysis in the brain is something that (I’d argue) has been studied a fair bit. We know that visual perception involves the deconstruction and reconstruction of the world by breaking the image down to simple components such as points and lines and then building it back up again. The example of human faces is intriguing because there are some arguments in the field about whether faces themselves are intrinsically special, or whether we are trained to differentiate them. The fusiform gyrus is an area in the brain that selectively responds to faces. One group of scientists believes this area is hard-wired to be specifically responsive to faces (as opposed to cars or houses). However, another group argues that this area is responsible for identification of objects within a category for which the person displays visual expertise. This group showed evidence that car experts showed increased activity in the fusiform gyrus when viewing different cars (in addition to faces). These two camps mirror the question of how much of our categorization is hardwired vs learned, and as most things tend to be, I’d guess that it is probably some combination of both.

    Replies
    1. But it would be nice to know how the fusiform gyrus does it! "Breaking the image down to simple components such as points and lines and then building it back up again" does not quite answer that question...

  4. Separate from my above skywriting: I don't really understand how we learn abstract concepts like truth and love from instruction. Is it simply because we get some description using words that we've already grounded and then we do the zebra = horse + stripes thing? I've been trying to think what kind of words I'd use to describe love... feeling + good + etc. Is that how it works?

    Replies
    1. I think the conclusion of the paper is that we're not quite sure. Since we don't really have a way to look into abstract categories like love (unless we look at its effects on the body, I guess), we can't investigate in the same way we do sensorimotor kinds of categories.

    2. There is no way to convey what a sensation feels like that you do not have the sensory apparatus to feel.

      But sensory features you can feel can be categorized too, and then combined and recombined to describe further (more "abstract") categories.

      Look at the dictionary definitions of categories like "love" and "truth". They are approximate and incomplete, but if someone does not know what they refer to, the definitions give enough of an approximation to be able to recognize and talk about the referent category -- and distinguish members from non-members.

      All verbal definitions and descriptions are approximate rather than "exact." A picture (or object) is always worth more than the 1000 words that describe it. But you can keep getting closer with more words. That's the nuclear power of language (analogous to and related to the Strong C/T Thesis).

      But, like a computer simulation, the verbal description is never exhaustive -- and never the thing itself. It's just a way of teaching someone else's sensorimotor system to be able to recognize the category.

    3. Hi Lyla, Eli and Prof Harnad,

      I love Categorical Perception because I do think it could accurately describe so many other complex ideas we perceive and name, like feelings we feel. I don't know if we learn the "meaning" of love or any other feeling from instruction, but I think it's safe to say we learn to name a particular sensation "love" and a different one "anger" etc. Learning to categorize or recognize and distinguish between and within different Named Feelings must be at least in part a process of learning from instruction, but not necessarily only that.

      I'm thinking of how polarized the United States are, politically, socially, culturally, and how that apparent polarization could be a result of “enhanced between-category differences relative to within-category differences” (Harnad) along the theoretical continuum of sociopolitical ideology. People who disagree with each other on some things start to emphasize their differences over their commonality with outgroup opponents and aim to present a “unified” (read: homogenous) identity base within the group – amplify differences without, minimize or compress differences within.
      This may be simply too philosophical as an analogy, but it is informing my understanding of CP.

  5. The last two paragraphs of Harnad (2003b) seem to suggest that investigating the effects of language on CP of abstract categories can be done through the exploration of different kinds of perceptions of these. But is that not a given? We all assume that everyone has the same ideas for what "love" is, but they quite obviously don't. For some, love is acts of service for the person they care about and for others it might be devoting time to them. I think it's not a leap to conclude that the categories we have for such abstract things vary, sometimes a lot. So investigation into this would be more about whether language is the 'culprit' for this diversity, or something else. (Or am I missing the point and this was obvious all along?)

    Replies
    1. Yes, categories vary; and they are all approximate (not just the subjective ones, like "beauty," but the objective ones too, like "cancer cells"). We don't all use the same features to pick them out. And if we look more closely, different features may pick out slightly different categories. What's remarkable is that with language we can match them well enough to get by (and a mismatch can always be resolved -- with more words).

      Language is a nuclear weapon; it's not omnipotent, but close enough!

    2. @Eli, I've been thinking along the same train of thought as you! Many in this thread have mentioned defining "love”, which brought to mind the grief that categories cause us. One might be asexual for example and not associate sexual intimacy with the category of love. This can be a source of great distress, when the majority of the population sees things differently.

      I think categorization can be a great sense of comfort for us. In line with the love topic, one's struggle to determine their sexuality is an example of this. We crave the ease of slotting ourselves into categories that match up with what society lays out for us.

      You asked whether language is the 'culprit' for this diversity (of criteria for a single category), or something else. I can definitely see how language might be the culprit, but I’m wondering whether categorization is the culprit itself — the attempt to categorize that which simply can’t be categorized. Maybe this is a limitation of language (of categorization?) when it comes to abstract topics?

      Harnad mentioned: “what’s remarkable is that with language we can match them [features used to pick out categories] well enough to get by (and a mismatch can always be resolved - with more words)”. After reading this, I began to think that perhaps language isn’t the culprit? It can be used as a tool to help clarify categories when things get muddled.

  6. “But when we look at our repertoire of categories in a dictionary, it is highly unlikely that many of them had a direct sensorimotor history during our lifetimes, and even less likely in our ancestors' lifetimes.”

    I imagine that for pre-language humans, there must have been some amount of categorization depending on how an individual felt in response to sensorimotor interaction with a thing or person, rather than only categorizing based on sensorimotor features. Something would make me “feel” happy or excited and therefore it would be categorized as resulting in that emotion. The advent of language allowed us to transfer this personal feeling towards an input to someone outside of ourselves by telling them why it was good, truthful, etc. This could suggest that we moved from more feelings-based abstract categories to more language-based ones once we developed language. Certainly, this doesn’t explain the majority of categories in a dictionary, since we had to actively learn through others what these were, but there must have been some abstract internal basis underlying these words that was present in our ancestors.

    Replies
    1. Why do you begin with subjective categories, like "happy"? Humans, before language, and many other species, can do plenty of nonverbal categorizing ("doing the right thing...") with objective categories like food, and kin, and prey, and predators. Now re-think the origin of language from there, and take on the subjective categories after you've done that.

  7. Categorical perception is defined as making within-category differences appear smaller than the between-category differences.
    Would it be correct to then think that CP is, in and of itself, an evolutionary tool? But we use this “tool” to then learn all the different categories? This would be somewhat analogous to the language acquisition device that Chomsky proposes. Maybe I misunderstood, but in the section about “Learned CP”, this could make sense because then, the reason that boundaries can be modified or lost is that we have the inherent mental capacity to change the boundaries. And then we would be back at the easy problem of “well then, what are the actual causal mechanisms of this ‘tool’?”
    Or was that already made obvious in the paper somewhere…?

    Replies
    1. CP is enhanced between-category differences relative to within-category differences, making the categories "pop out" more. The hunch is that CP is a side-effect of a feature-abstracting mechanism (a neural net) that enhances the salience of features that distinguish the categories and filters out the features that are irrelevant to the categorization. (There is a working model, but it is far from Turing-scale.)

      The evolved tool is not CP, exactly, but category learning capacity itself. This is not really like Chomsky's evolved Universal Grammar (UG). It is more like the opposite, since UG is thought to be innate precisely because it is unlearned (and unlearnable because of the "poverty of the stimulus”: the absence of negative evidence [Weeks 8 & 9]; learning requires both positive and negative evidence; supervised learning has to sample both members and nonmembers; this is about “Laylek” again… ).

      Losing innate CP boundaries because of disuse, as in the case of innate phoneme boundaries not used by a language, is not quite the same as learning new CP boundaries.

      But there is a candidate hunch about what the learning mechanism might be, in the form of supervised (“deep learning”) neural nets. The hunch may turn out to be wrong, but there will be more hunches.

  8. “Computational modeling (Tijsseling & Harnad 1997; Damper & Harnad 2000) has shown that many types of category-learning mechanisms (e.g. both back-propagation and competitive networks) display CP-like effects. In back-propagation nets, the hidden-unit activation patterns that "represent" an input build up within-category compression and between-category separation as they learn; other kinds of nets display similar effects.”
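    The quoted compression/separation effect can be sketched in miniature (a deliberately tiny, deterministic stand-in, not one of the cited models): a single logistic unit trained by gradient descent on a labeled continuum develops a steep transfer function, so its internal representations of equally spaced stimuli are compressed within each category and separated across the boundary.

```python
# Miniature, deterministic stand-in (not one of the cited models) for the
# quoted effect: one logistic unit, trained by full-batch gradient descent
# on a labeled continuum, learns a steep transfer function whose outputs
# show within-category compression and between-category separation.
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

xs = [0.05 + 0.1 * i for i in range(10)]       # stimuli 0.05 .. 0.95
ys = [1.0 if x > 0.5 else 0.0 for x in xs]     # category boundary at 0.5

w = 0.0                                        # zero init: fully deterministic
for _ in range(5000):
    # gradient of mean cross-entropy loss with respect to the single weight w
    grad = sum((sigmoid(w * (x - 0.5)) - y) * (x - 0.5)
               for x, y in zip(xs, ys)) / len(xs)
    w -= 0.5 * grad

reps = [sigmoid(w * (x - 0.5)) for x in xs]    # learned internal representations
steps = [reps[i + 1] - reps[i] for i in range(len(reps) - 1)]
# The largest adjacent-pair difference is the one crossing the boundary
# (0.45 -> 0.55): equal input spacing, warped internal spacing.
```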

    This passage reminded me of recent instances of ‘creative’ A.I., the algorithms of which were designed to be able to imitate the styles of certain artists via repeated exposure to their material. For example, there was a deep-learning algorithm that received as input scores from Bach and compared and contrasted the output with more of the scores to improve the accuracy and quality of the composition over time. Significantly, even after providing a great deal of Bach’s works and a large amount of time to analyse, the scores outputted never really resemble the real thing; there’s always something ‘off’ about it even if some minute aspects do seem similar.

    I’m curious as to how this dissection of the creative process relates to what we know about categorical perception. These creativity algorithms adhere mostly to unsupervised learning, where there is no one explicitly telling the AI that a score is bad; rather, the algorithm just self-refers to the data it has access to. The real Bach received feedback on his own scores and on the scores of others, from his peers and from the population he interacted with at large, which means he had supervised learning as well. Would granting the AI access to this type of learning be one of the missing links in determining how the creative process categorises qualitatively good and bad outcomes?

    Replies
    1. Good points. (But I think it might be more realistic to try to reverse-engineer ordinary “uncreative” cognition — such as learning which mushrooms are edible and inedible — before taking on creativity or genius. Not that they are completely unconnected, but, creative cognition is surely “grounded” somehow in ordinary Turing cognition.)

      You did put your finger on it, though, with unsupervised algorithms trying to learn to produce more (or “better”) Bach by sampling a lot of Bach scores (the way GPT-3 samples lots of texts). What comes out when the net tries to “mirror” that landscape by producing new Bach of its own sounds promising at first, but is soon perceived (by us) to be humdrum and mechanical (algorithmic) — especially if you’re familiar with the real Bach from which it has been concocted. (The same is no doubt true of the literary quality of GPT-3 productions of text.)

      It could probably be improved somewhat by adding supervised learning — feeding it also scores by lesser composers, and weighting them with corrective feedback from expert human ratings on their relative quality. But even that would just make a net capable of rating quality — and of generating a humdrum landscape of it, if switched to “creative” (MN) mode, producing output rather than just sorting input quality. Dan Dennett discusses algorithmic Bach (perhaps a little too admiringly) in From Bacteria to Bach and Back (with a "nup" (pun) or tail-rhyme that only a non-Tedescan would appreciate…)

      I doubt that great artistic creativity is shaped by feedback from popularity. Apart from the MN influence of hearing (or even being taught by) composers they admire (the way Bach admired Buxtehude and Vivaldi), I think the geniuses hew to their own inner feedback -- and it takes public "feedback" much longer, if ever, to catch up. (I wouldn't know, though: we pygmies can only guess at such things...)

      Trump's "output," in contrast, is partly shaped by populist feedback. We usually don't categorize that as creativity, though, but as demagoguery. His inner demons are mere pygmy psychopathy: he and his fans are mirroring one another, and fanning the flames...

  9. In Professor Harnad’s article “Categorical Perception,” we learned about two types of perception, the Sapir-Whorf Hypothesis and the motor theory of speech production.

    *Colours*: The ROYGBIV bands of a rainbow look discrete, when in fact, there is no physics behind the bands. It’s a colour continuum. The Sapir-Whorf Hypothesis posits that the arbitrary learned names of the subdivisions of this continuum cause the bands. At the same time, the innate or learned feature detectors help distinguish between the colours too. When we look at the different shades of red, we use continuous perception. When we look at different colours, we use categorical perception (CP). CP occurs when different shades of a specific colour look similar (within-category compression) and when different colours look distinctive (between-category separation).

    *Speech-sounds*: When you synthesize an acoustic continuum from ba to ga to da, you can achieve the same rainbow effect (ba-ba-ba-ga-ga-ga-da-da-da). However, the difference between colours and sounds is that we can see but not generate colours, whereas we can hear and produce sounds. The reason we can hear a sound continuum is that there is a connection between the sounds (ba) and the associated motor action (the stop consonant). This is because the innate or learned feature detectors help distinguish between the sounds. So, the motor theory of speech perception posits that we perceive sounds using CP because "sensory perception is mediated by motor production."

  10. "In this case the category is "continuous" (or rather, degree of membership corresponds to some point along a continuum). There are range or context effects as well: elephants are relatively big in the context of animals, relatively small in the context of bodies in general, if we include planets."

    I am confused about how this is possible given the existence of CP. If CP "in which a perceived quality jumps abruptly from one category to another at a certain point along a continuum, instead of changing gradually" exists, then don't all categories become a matter of all or none, but with the definition of all or none varying from individual to individual and depending on context? For example, an elephant is big compared to an ant, but small compared to the continent of Asia. Where the CP would occur in this instance depends on the details given, but in both cases it is either big or not. Is this language and context-dependence then the continuum aspect of it?

    Replies
    1. What allows for continuous perception is the context. The example of the elephant is not a question of whether it is big or not, but how it fits in a continuum of things that are in the category “big”. The all-or-none, categorical perception is a matter of fitting in a category or not. However, the category “big” can never really be a category that something fits into in an all-or-none fashion.

    2. Ting, according to the Strong W/S Hypothesis, it is naming colors that causes the discrete ROYGBIV bands of the rainbow. This is incorrect. They are caused by inborn feature-detectors (RGB peaks and R-G, B-Y opponent processes, which cause perceived between-color separation and within-color compression). The Weak W/S Hypothesis is that learning to categorize can sometimes create learned feature-detectors that induce between-category separation and within-category compression (CP: Categorical Perception).

      According to the motor theory of speech perception (which is partly correct) we perceive phonemes partly by how we produce them. The discrete separation between the sound of pa and ta is because you can only produce pa with your lips and you can only produce ta with your tongue and palate. (That's why ventriloquists have trouble doing the ba and pa sound.) It's a kind of inborn feature-detector based partly on "mirror-neuron" (MN) capacity and its sensorimotor affordances (the kinds of sounds that can be made by a human mouth).

      But we also have (weaker) CP separation/compression boundaries for vowels, both in perceiving them and producing them. Yet both the vowel-sound perception space and the vowel-sound movement space are continuous, with no discrete motor discontinuities, like lips vs tongue/palate, yet there are perceived CP boundaries between ah and ooh and eee. So these perceived boundaries also partly involve some "mirror-neuron" capacity, whether inborn or learned.

      Ishika, there is no CP for big and small: it's not an absolute task (categorization, identification) but a context-dependent relative (comparison) task.

      Matt, you are right, but there is context-dependence even with "absolute" categories:

      On a table covered with fruit, if your reward depends on naming the fruit correctly, they're apples, oranges, bananas and tomatoes. But if your reward depends on naming the colors correctly, they're reds, oranges, yellows and greens.

      (And even in category-learning, the context matters, because "features" are not absolute: they are what successfully distinguishes members of different categories in a sample, resolving any between-category confusability. But if the sample on which you learn to categorize (by feature abstraction through supervised learning: trial, error, corrective feedback) is not representative enough, the features may prove to be insufficient once the sample -- hence the context of confusability -- is widened.)
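      A minimal sketch of that supervised loop (trial, error, corrective feedback), with made-up "mushroom" features rather than anything from the text: a perceptron nudges its feature weights only when its guess gets corrected.

```python
# Minimal sketch (hypothetical features, not a model from the text) of
# supervised category learning: trial, error, corrective feedback.
# A perceptron learns to sort "mushrooms" into edible (1) vs inedible (0)
# from two made-up sensorimotor features: cap_width, stem_darkness.

# (cap_width, stem_darkness) -> label; a linearly separable toy sample
sample = [
    ((0.2, 0.9), 0), ((0.3, 0.8), 0), ((0.1, 0.7), 0), ((0.4, 0.9), 0),
    ((0.8, 0.2), 1), ((0.7, 0.1), 1), ((0.9, 0.3), 1), ((0.6, 0.2), 1),
]

w = [0.0, 0.0]
b = 0.0
for _ in range(100):                         # repeated trials over the sample
    mistakes = 0
    for (f1, f2), label in sample:
        guess = 1 if w[0] * f1 + w[1] * f2 + b > 0 else 0
        error = label - guess                # corrective feedback: -1, 0, or +1
        if error:
            mistakes += 1
            w[0] += error * f1               # nudge the feature weights
            w[1] += error * f2
            b += error
    if mistakes == 0:                        # a whole pass with no errors: learned
        break
```

      If the sample omits confusable lookalikes, the learned weights can still fail once the sample (hence the context of confusability) is widened, which is the unrepresentative-sample point above.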

      (Ask me about "fool's gold" in class.)

  11. How does categorical perception play a role in verbal effects (I'm not sure what to call them honestly) such as alliteration or sibilance? We can learn about them in many ways (e.g., hearsay), but even if we don't know what they are, they inherently stand out and create an emergent effect (e.g., a smooth sound). If this is the case, then it is not very clear how much of this effect is learned or innate, because the former will just aid the person in detecting the alliteration/sibilance, but the innate aspect of this effect is that we can hear it regardless of our prior knowledge. I'm really wondering where the "sensory affect" that an alliteration/sibilance produces fits in terms of CP?

    Replies
    1. "Treacherous Traitorous Trump" presumably stands out more than "Cheating, Untrustworthy Trump" partly because a repeated series is more noticeable and memorable; and "Sisyphus" is probably more ear-catching than "King of Corinth" because it has more high-frequency continuants (resembling a snake's hiss).

      But these nuances of literary style are more than what reverse-engineering basic, generic T3 capacity calls for. (That "easy" task is already Sisyphean enough!)

      You would not kick Ting because she does not notice alliteration...


  12. If I am understanding this correctly, this paper is dealing with the dilemma of determining whether categorical perception is an innate process or a learned one.

    Contrasting these two origins of CP, we find that there are in fact two forms of categorical perception: one that is inborn and biologically evolved, and another that is learned (and possibly a third, induced by language, which has not yet been conclusively demonstrated). This paper argues that both are necessary to ground an individual before they will be able to compute with any grounded symbols whatsoever.

    It is obvious that innate categorical perception cannot stand on its own. In my previous comments (in 6.a) I argued that forms of supervised learning are indispensable for an agent learning to categorise, specifically when there are many ways of arriving at the same (or a similar) category. Conversely, purely learned CP not only has been shown to exist (such as learned thresholds between colors), but also cannot stand on its own. We learned that, on the one hand, unsupervised learning consists in counting the occurrences of a feature in a sample set to determine whether it is an invariant or a variant feature of that set. On the other hand, supervised learning consists in features being cued after each trial, through corrective feedback, before the next trial of categorising, thus refining the features that define a category. But in order for supervised learning to work, one must be able to initiate the process from some initial category.
    Analogously, an infant with the sole faculty of learned CP could never categorize (i.e., understand) the initial pointing finger that designates a new category.
    In other words, we are missing some sort of base case for the recursion involved in learned CP to occur at all, and innate CP provides just that.

  13. Categorical perception is the capacity to recognize differences in kind between sensorimotor stimuli, while continuous perception is the ability to recognize differences in degree along a specific dimension.

    I wonder if the concept of just-noticeable difference from psychology, which is the minimum change in a stimulus that our senses can notice, could resolve the dichotomy of continuous vs categorical perception.

    Often when looking at two similar grey colors, I perceive them as the same, even though the single dimension on which they differ, lightness, is continuous. This seems like it could be my brain mis-categorizing the second grey as the same “kind” of color as the first.

    ReplyDelete
    Replies
    1. I disagree that categorizing the two greys as the same would be miscategorization. I think, as we've seen with the "Funes the Memorious" example, categorization requires a certain level of abstraction. That is, we don't create separate categories for every perceptible difference, which is why objects in the same category appear more similar to each other than objects in different categories, even when they are equally different: there are differences within categories that we pay less attention to precisely because the items share a category (I think this is also the gist of the ugly duckling theorem).

      Furthermore, although the Whorf hypothesis stating that categorization is a direct result of naming is false, I do think the opposite holds. That is, we do not (to my knowledge) commonly use the same word to denote multiple similar categories. So the fact that you're calling them "two similar grey colours" instead of "grey and white" for instance, is indicative of the fact that they are in the same category.

      Delete
  14. I found this article to be really interesting and clear. I just wanted to clarify language-induced CP for my understanding. We've seen so far that categorization requires interaction between our sensory surfaces (such as retinas, mechanoreceptors etc.) and the outside world and that things that exist in the world afford us certain sensory experiences depending on our specific sensory organs. Categorical perception is the compression and rarefaction that we can observe between instances in the same category and instances of different categories. However, there is the question of whether we're able to learn categories and the CP through language alone. Harnad writes that neural net experiments suggest that once category names are grounded, they can be combined using boolean operators in increasingly complex ways. Does this imply that all the component categories in the operation need to be grounded in order to learn the new category and the CP with language alone (like grounding man and unmarried and bachelor)? If so, then this implies that there is never an instance of categorization that is not at some stage related to sensorimotor processing— even though language itself is just symbols, those symbols are somehow grounded with their referents in the real world.

    Also this is just out of curiosity but in a neural net, how do you operationalize "groundedness" so that you know the categories have been grounded?

    ReplyDelete
  15. Humans are capable of many kinds of categorical perception. Some are innate, like distinguishing colours, and may arise due to our biology (the particular cones used to perceive colour dictate what we see). What I think is more interesting is the categories that are learned through language, as those can vary from individual to individual. I think it is remarkable how language can subtly shape our understanding of categories - in the attached video, Lera Boroditsky says that native speakers of different languages select different adjectives to describe objects depending on whether the object is masculine or feminine in their language. Humans can also perceive time differently based on the language they speak. I think all this shows that in order to reverse-engineer cognition, we aren’t looking for absolute ways to categorize the world and everything in it, but rather for the ability to learn and forget categories. Of course, we would need to distinguish which categories are learned and which are innate, but it is interesting how varied our categories can become through learning, and how, even though differences exist, we can usually overcome them using language.

    ReplyDelete
    Replies
    1. I have a follow-up question to your post: how is the ability to learn and forget new categories different from categorizing itself? Understanding how we categorize is nonetheless abstracting important features (which are themselves other categories) and forgetting unimportant ones (what Funes the Memorious lacked). Separately, I have so many questions about Boroditsky's example of the languages that do not have counting numbers, and consequently do not have algebra. Does this mean their speakers would only be able to learn counting through another language? Does this imply that numbers are not an innate category, even though counting is a capacity we have that simply was not utilized?

      Delete
  16. In the texture dichotomy experiments outlined by Prof. Harnad, participants must learn to distinguish between two easy categories of textures (where the distinction becomes obvious after just a few trials) and two hard ones (where only half of the subjects manage to learn the dichotomy). Based on the subjects' performance, we see that categorical perception occurs for the hard textures. More specifically, the two hard categories become compressed within each respective category and expanded between them. Thus, learning these hard categories actually made the members of the two categories look different. By contrast, this categorical perception did not occur for the easy stimuli.

    Then I'm inclined to wonder why categorical perception (which is not simply categorization but rather a phenomenon that alters the way in which we view the categories through means of expansion/compression) only occurred for the more difficult stimuli and simple categorization was adequate for the easy stimuli. Is there a threshold or range of categorization difficulty that corresponds with our ability to employ categorical perception? If such a threshold or range exists, what are the implications for continuous perception?

    ReplyDelete
  17. “Can categories, and their accompanying CP, be acquired through language alone?”

    I think so! What if I told you about my sister Clara, who is studying civil engineering at Queen’s University? She is 23 and looks just like me. Has Clara been grounded? Pretend I continue describing her until she is grounded. (I think Frege would argue that Clara’s reference has been fixed.) But Clara doesn’t really exist. If I can ground my fake sister Clara with language alone, surely I can also ground categories.

    You could repeat this experiment with a category easily. Imagine I told you that I discovered a new beetle while trekking through the jungle. I could tell you all about this beetle – what it looks like, what it eats, etc. I could easily convince you of this new beetle’s existence. I could even test your knowledge of it and ask you to pick it out in a written multiple-choice test. But in reality there is no beetle. Even so, you have learned this new category through language alone, right?

    ReplyDelete
    Replies
    1. It’s an interesting point that we can create fictional categories that can be acquired through language alone. It's easy to describe the traits and differences between the various mythical creatures from a fantasy series in great detail; however, I’m not sure in these cases how we can test CP for these language-only categories. This seems much more difficult. How would expansion/compression work between your example of a fictional beetle and a fictional beetle that I make up?
      I also feel that the infinite, approximate nature of language and categories particularly complicates things for fictional, language-only categories. Without a related sensory experience, we have much less to work with in determining the invariants of the category. We just make them up as we go, arbitrarily, never knowing when to stop, even more so than when describing apples.

      Delete
  18. ‘"Red" and "yellow" may be inborn, but "scarlet" and "crimson"?’

    I would argue that the colours “scarlet” and “crimson” are just as inborn as red and yellow. We do not know the words “red” and “yellow” when we are born; we only know how to tell the colours apart. The naming comes later. I think the same can be said for scarlet and crimson: we can intuitively tell these colours apart at birth, even if we cannot name them.

    ReplyDelete
  19. Having studied some linguistics during my time at McGill, I found this paper especially salient.
    This concerns the categorical perception of consonants versus vowels (a more accurate categorization for this discussion, I would argue, is obstruents versus sonorants).
    I would also note that even within linguistic studies, sonorants are much harder to categorize than obstruents. There is never a question of which obstruent is which, regardless of the angle you take in categorizing these speech sounds. However, there can be a lot of ambiguity in vowels (as well as the other sonorants), and it can be difficult to determine which category a vowel falls into. Even just looking at the charts used to show vowels, you can see how continuous and non-discrete the distinctions are.

    ReplyDelete
  20. From Harnad, S. (2003b) Categorical Perception

    Sorry that this is a bit on the long side (first part is a summary, second paragraph is a question/comment)

    In this reading I learned that categories used to be designated as either categorical, meaning that membership in the category is “all-or-none” (ex: bird or not bird), or continuous, meaning that membership is a matter of degree (ex: things that are big or less big). Categories determine how we see and act upon the world; some are innate (ex: a frog able to detect “flies” at birth, human facial recognition, perceived color and speech sounds; but innate categories might be changed by learning) and some are learned (most of them; ex: words in the dictionary are more often learned categories). In speech perception, sounds like “ba” and “pa” are more categorical than other sounds because we perceive an abrupt change between them when we hear the two (there is a distinctiveness between both). Lawrence’s explanation for this is that “stimuli to which you learn to make a different response become more distinctive and stimuli to which you learn to make the same response become more similar.” The distinctiveness is not the intrinsic physical difference between stimuli but a perceptual difference that is acquired. The recent understanding of CP is that it occurs when the perceptual differences among within-category members are compressed and/or when between-category differences are separated, relative to a baseline of comparison (the “accordion effect”). Compression (within-category) and separation (between-category) are what operationally define Categorical Perception. Computationally, these operations can be performed by neural nets, where the network selectively detects the invariant features shared by the members of a category and distinguishes them from the features of members of other categories. The computational models remain causal hypotheses (they correlate). Categories in language, once they are grounded in direct sensorimotor experience, may generate their own CP effects.
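    A toy computation can illustrate the compression/separation ("accordion") definition summarized above; the `warp` function and its constants are invented for illustration, not taken from the neural-net models in the article:

```python
import statistics

def mean_dist(xs, ys):
    """Mean absolute pairwise distance between two sets of 1-D stimuli."""
    return statistics.mean(abs(x - y) for x in xs for y in ys)

def warp(x, boundary=0.5, k=0.5):
    """Toy 'accordion effect': pull each stimulus toward its category's
    pole, compressing within-category distances and pushing the two
    categories further apart."""
    pole = 0.15 if x < boundary else 0.85
    return pole + k * (x - pole)

# Baseline stimuli on a continuum (e.g. a grey-level series)
cat_a = [0.1, 0.2, 0.3, 0.4]
cat_b = [0.6, 0.7, 0.8, 0.9]
warped_a = [warp(x) for x in cat_a]
warped_b = [warp(x) for x in cat_b]

print(mean_dist(warped_a, warped_a) < mean_dist(cat_a, cat_a))  # True: compression
print(mean_dist(warped_a, warped_b) > mean_dist(cat_a, cat_b))  # True: separation
```

    Relative to the baseline distances, the warped distances are compressed within each category and separated between them, which is exactly the operational definition of CP.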

    I learned in another class about the interesting notion of “priming” and “priming effects” (Kahneman), which is the idea that your environment (things that you perceive, words that you read or hear, etc.) primes certain behaviour (ex: reading the word banana causes an ease by which words related to banana come to mind, and you are more likely to come up with related words such as “yellow, fruit, monkey,” etc.). In a sense, it is as if priming were a process by which you categorize, or do the right things with the right kinds of things in your environment. You take the word “banana” and do the right things with it; namely, compression occurs (within-category members pop up: “yellow, fruit, monkey”) and maybe separation too (things unrelated to banana recede).
    Could priming effects be a type of categorical perception effect?

    ReplyDelete
  21. The recurring example of neural nets’ internal mechanisms reflecting categorical perception has been intriguing to me ever since we started discussing them. In this paper, it is mentioned that the internal “representations” in neural nets have within-category compression and between-category separation. Now, neural nets are not exactly the same kinds of “black boxes” as our minds have proved to be. In other words, although we cannot explain what weights each and every node in a neural net has been assigned during training on inputs and why (aka in such a way that we could explain precisely the process by which instance A is categorized as a B and not a C), we can explain the algorithms by which they achieve whatever categorization they are doing: we know exactly how they learn. We also know, by Searle’s periscope, that what they are doing is not cognition. So, supposing we can define a causal mechanism for symbol grounding, how do we determine if what our brains are doing to categorize is the same as what neural nets do? Is it a plausible mechanistic explanation for categorical perception and given a dynamical system that has sensorimotor perception, how do we connect the two systems as is supposedly the case in our brains?

    This has also been an intriguing aspect of the computational/analogical dichotomy for me. Where is the boundary (is there such a boundary?) in the mind for what is computational and what is analogical? How do these systems interact?

    ReplyDelete
  22. When I read about categorical perception - and how it makes things within a category appear more similar to each other, and more different from objects in another category (even if they share features with those other objects) - one of the first things I thought about was racial discrimination and prejudice, as it manifests in phrases like "all black people look the same". If you learn race as a category, and further still if that categorization (delineating people by race) is important/privileged in your cultural context/group, there may in fact be some truth to the statement: if you recognize two individuals as part of a racialized category, they may in fact look more alike to you. While it is not something we have discussed in this class, the presumption that vision/perception is objective just because it is based on stimulus in the world is a popular falsehood that CP speaks against.

    This consideration also connects back to the idea of "dimensionality reduction" - while all features of a given thing/person get 'seen' (on the level of our analog sensors), not all features are perceived (selected for) or remembered with equal weight, or at all. Skin colour, hair texture, and other features associated with a given race or ethnicity are hyper-marked compared to other visual features, like jaw shape or cheekbone tilt, which may get reduced. Consider a brief anecdote: a director I worked with a couple of years ago was telling an actor on set that he "looked a lot like her brother". The actor was incredulous - the director was white, he was black; he didn't think that any brother of hers could possibly look like him. But she insisted, and we all looked at a picture of her brother - and it was there. At first we didn't see it, but when we looked for it, there were similarities - our perceptions adjusted.

    I understand that this topic is a bit of a tangent for our class, but still I think the example is useful. I'm not trying to make big fluffy claims that "we are all the same" or to condemn us all as cognitively racist; instead I see this example as strong support for categorization's role in consciousness - as it underlies not only 'objective' and 'universal' judgments like colour, shape, and temperature, but also has the capacity to explain the groupish, arbitrary, and cultural judgements and perceptions that uniquely characterize human cognition and experience.

    ReplyDelete
  23. When reading "Categorical Perception," I found it particularly interesting that categorical perception seems to be done through "within-category compression and/or between-category separation." So even if, on the visual colour spectrum, a light blue and a dark blue are further from each other than a green and a dark blue, we still consider the blues to be in the same category and the blue & green to be in different categories because the blues are compressed and the blue/greens are separated in our brains, relative to the actual positions of the wavelengths on the spectrum. It seems that the goal of this compression/separation is to make categories that might otherwise be continuous (like colours) categorical or discrete, as separating things into discrete packages helps us better navigate the world.

    What came to mind while thinking about this text was the question of how this intra-category compression and inter-category separation is done. How does a system draw the boundaries of the categories which dictate over what domain compression is done and over what domain separation is done? These boundaries clearly are not based on the empirical properties of the categories, because the whole point of the compression/separation is that categories that are continuous in nature are compressed or separated in our minds. Harnad suggests that inputs which generate the same output in a system are compressed while inputs which generate different outputs are separated. So while categorization is done by compression/separation, what dictates this compression/separation is what we do with the stimuli that we must categorize.

    This connects back to the idea that categorization is doing the right thing with the right kind of thing, and knowing whether we are categorizing correctly depends on some kind of supervised-learning, corrective-feedback mechanism. If some categorical perception is innate, like colour CP, it would be interesting to know why seeing any blue wavelengths, even when they differ, generates in us a similar output. In other words, as we were evolving, what did we do when we saw blue that we did not do when we saw green?

    ReplyDelete
  24. “We all see blues as more alike and greens as more alike, with a fuzzy boundary in between, whether or not we have named the difference.”

    “There are even recent demonstrations that although the primary color and speech categories are probably inborn, their boundaries can be modified or even lost as a result of learning, and weaker secondary boundaries can be generated by learning alone (Roberson et al. 2000).”

    I’m still trying to work through the different sides of the argument surrounding the universality of color categories. The description given of the Roberson et al. (2000) article does not seem to represent what is argued in the piece, from my understanding of it. Furthermore, it seems like that article’s findings show that we in fact do not all see blues as more alike and greens as more alike. Indeed, Roberson et al.’s research found that speakers of Berinmo and Himba do not show categorical perception between blue and green, as their color categories divide the spectrum in a different way. Instead, categorical perception aligns with their culturally based categories, which have, for example, boundaries that fall within the category we call green. The description given here seems to interpret these findings as meaning that categories can be added and subtracted to the “innate” categories, but this is not the interpretation of the authors. Instead, Roberson et al. claim that the findings indicate no evidence of innate categories at all. I know that previous research, such as Rosch Heider’s studies, argued that categorical perception was present for all of the innate categories, despite a language having less color words. However, how could we explain, instead, that an innate category has been “lost”? And how can we know that the categorical boundaries of another culture are a learned “secondary” boundary or a modification of innate categories, rather than simply their own categories?

    ReplyDelete
  25. This comment relates to the video above:

    In the video, Lera Boroditsky suggests that language shapes our cognitive abilities. The Strong Whorf-Sapir Hypothesis states that how we perceive our world is determined by our language. Could this be an example or evidence of this hypothesis which is mostly believed to be closer to the weak version? The speaker says, “…these studies suggest that each language comprises its own cognitive toolkit, a set of instructions that speakers of your language and generations past have created for you…” (00:07:42). Examples include how language shapes our memory and how languages without exact numbers would leave their speakers incapable of building mathematically complex architecture. Would these ‘cognitive toolkits’ be a part of OG, rules that can be learned?

    The effects of language in this video appear to be loftier than those discussed in the reading on categorical perception. The video connects heavily to the idea of approximation in language: it appears that the more specific words a language has (i.e., numbers, descriptors, genders), the more vibrant, elaborate, and constructive the consequences of that language.

    One question I have is this: do we create words like numbers in order to better describe the features of an object, or are numbers an innate feature of our languages that we are all able to learn?

    ReplyDelete
    Replies
    1. Great post, and I also share a lot of your thoughts relating to Whorf-Sapir and how categorization can influence how we perceive the world. To answer your question on whether numbers are an innate feature: I think words like numbers are similar to colours. We all have innate categories of colours, and whether or not we have a word for "green" or "blue" in our language doesn't affect whether we perceive it. However, colours and numbers differ in that colours exist on a spectrum, which requires us to group similar shades together under one name ("green"). Innate or not, numbers are discrete, and I would argue they don't need to be categorized. However, I think you're right that numbers might help describe the features of an object, whether they are innate or not. This is only relevant in cases where number is a salient feature for categorizing something, though (bicycles vs. tricycles, not wine glass vs. ceramic mug).

      Delete

Opening Overview Video of Categorization, Communication and Consciousness
