Blog Archive

Monday, September 2, 2019

10a. Dennett, D. (unpublished) The fantasy of first-person science


Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, June 13, 2014.


Dennett, D. (unpublished) The fantasy of first-person science
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."
Click here --> Dan Dennett's Video
(Note: use Safari or Firefox to view; does not work on Chrome)

Week 10 overview:

and also this (from week 10 of the very first year this course was given, 2011): 

Reminder: The Turing Test Hierarchy of Reverse Engineering Candidates

t1: a candidate that can do something a human can do

T2: a reverse-engineered candidate that can do anything a human can do verbally, indistinguishably from a human, to a human, for a lifetime

T3: a reverse-engineered candidate that can do anything a human can do verbally as well as robotically, in the external world, indistinguishably from a human, to a human, for a lifetime

T4: a reverse-engineered candidate that can do anything a human can do verbally as well as robotically, in the external world, and also internally (i.e., neurologically), indistinguishably from a human, to a human, for a lifetime

T5: a real human

(The distinction between T4 and T5 is fuzzy because the boundary between synthetic and biological neural function is fuzzy.)  

65 comments:

  1. Dennett argues for heterophenomenology, a third-person approach to describing first-person experiences. Essentially, he aims to gather the data necessary to build a machine that does the same mind-reading that we do in our day-to-day lives - if this were a Turing Test, the data he suggests acquiring would enable a machine to read minds like we do (a subset of being able to do what we do), rather than use language like we do. The way he would acquire said data would be to first collect data on subjects' physical states, then catalogue their reported mental states, and finally correlate the two.

    He then addresses the tenets of the Zombic Hunch, claiming that we have neither a philosophical nor a psychological reason to maintain that Zombies couldn't experience anything conscious beings can experience. More to the point, however, we don't need to establish whether Zombies do or don't have consciousness - we can (and should) remain agnostic, and still pursue heterophenomenology for all it's meant to be.

    Regardless of whether Zombies feel or not, I get the impression that Dennett is right in supposing that heterophenomenology is the direction scientific research is and should go in. Science has never taken anything for granted, and is founded on testing everything it can find empirically. Historically, subjective experiences have been sheltered from scientific scrutiny, for a lack of motivation and means. But we now have the motivation - artificial intelligence is making us rethink what machines can do, and qualms about Cartesian dualism are being pushed off the scientific scene and sequestered in personal lives. We also have the means - we can look into the brain and the body in ways we couldn't 50, 20, even 5 years ago.

    Science is a relentless pursuit of data and knowledge - and that answers "why" we should pursue heterophenomenology. As for "why nots", for maybe the first time in human history, there is nothing philosophical or methodological that is stopping us from going down this road.

    1. PREAMBLE

      Dan Dennett, a brilliant philosopher who has been the most influential in inspiring students to study cognitive science, is a kind and wonderful person (and has saved my career more than once!). So I don’t just love and admire him, but I owe him a lot. Nothing would make me happier than to be able to agree with him about consciousness, but alas I can’t!

      Dan is a student of the late British philosopher Gilbert Ryle, who was a kind of philosophical behaviorist. In a nutshell, Ryle thought the solution to the “mind/body problem” — “what kind of special stuff is the mind?” “what is the difference between physical states and mental states?” — (now called the “hard problem”) — was that mental states are just behavioral states: states that give an organism the capacity to do the things it can do.

      Dan updated philosophical behaviorism to contemporary cognitive science. He was also the one who coined the term “reverse engineering”: finding the causal mechanisms underlying our capacity to do everything we can do. He was of course also a proponent of computation and the Turing Test (T2 and T3), but also of cognitive neuroscience (T4) and any dynamical system that can do the causal job. And he is an ardent evolutionist.

      A philosophical Zombie is a T5 that is insentient (does not feel at all). But anything that does not feel at all is a Zombie too: a train, a tulip (I hope!), or a teapot. So Zombies definitely exist — but the other-minds problem ensures that you can never be sure that a Zombie is really a Zombie, nor that any other sentient organism really feels (i.e., is not a Zombie).

      Dan’s weasel-word is “belief.” One of the insights that set the direction of his thinking was when he was pondering what was going on in a chess-playing computer program: “Aha! It thinks [believes] that it should get its queen out early!” Maybe that’s all there is to thinking (cognition)! We attribute it to others because it helps us predict and influence their behavior.

      That kind of “mind-reading” (“heterophenomenology”) is adaptive; so evolution could select for it (and lazy learning could learn it). But the “Blind Watchmaker” (Darwinian evolution) is no more of a mind-reader than we are. So it’s really all just based on behavior: “Doing the right thing with the right kind of thing,” with “right” being determined by the consequences (reinforcement): survival, success, reproduction.

      Just one little big problem: feeling. What’s the causal explanation of that?

      Dan’s reply is that feeling is simply the behavioral states (“functional states”) that correspond to the ones we call felt states. Believing you should get your queen out early in chess is just an internal state (of the reverse-engineered causal mechanism).

      But belief is a weasel-word, because it feels like something to believe something. Dan says “Once you’ve solved [what you keep calling] the ‘easy’ problem, there’s nothing left to explain!”

      Is that true?

    2. Although it was very insightful in nuancing the hard problem, this paper felt to me like behaviourism begging the question all over again. Professor Dennett spends this paper trying to deprecate the hard problem, discrediting the need to explain experience by arguing that feeling is equivalent to (or at least necessarily implies) believing. While I agree that our beliefs can stand anywhere between true and false, feelings cannot - they either occur, or not at all.
      For example, if I sense raspberry aromas while drinking coffee, but there are no traces of raspberry-associated molecules in my coffee cup, that cannot mean for sure that I have not tasted raspberries in my coffee. Similarly, if I feel like I've lost my leg when it is simply anaesthetized and covered by a blanket, then according to Dennett my conscious experience of having lost my leg is false. However, while my belief that I've lost my leg may be false, you cannot deny my experience of losing a leg, just as you can’t deny my tasting raspberries in my coffee.

      To add a layer, as Professor Harnad said above, it "feels like something to believe something". Thus, believing is just a derivative/subset of one of the many ways of feeling, and in this way all that doesn't feel merely behaves - Dennett is just begging the question of how and why we feel.

      If, as Dennett supposed, feeling could be reduced to belief, or belief were somehow a necessary consequence of feeling, then indeed the hard problem would merge into the easy problem and feeling could be causally explained with a functioning T3 or T4. But believing does not necessarily follow from feeling - the easy problem and the hard problem remain two different ball games, and solving the first might not get us any closer to the second.

    3. Julian, heterophenomenology (which includes neural imaging) is really just T4. I think Dennett would agree with you that T4 (or T3 -- maybe even T2) is a mind-reading machine, just like other people. But the (hard) question is whether that explains how and why it feels like something to do all that? Dennett’s answer is: What more could you possibly want than what T4 gives you? What else is there?

      You slightly misunderstand the “Zombic Hunch” which is about whether there could be an insentient T5. Explaining how and why there could not be a T5 (or T4 or T3) Zombie would solve the Hard Problem. But Dennett only says there’s nothing more there to explain. (Note that that’s not what Turing says: What does Turing say about the hard problem?)

      Matthew, a belief is a feeling that something is true (whether or not it is true). There is no such thing as an "unfelt belief": there is just unfelt data, or even a felt state, in my brain, that could turn into a belief, another felt state -- like my belief (now) that it's not Thursday. Until the moment that I decided to think of something I think is true, but was not thinking it a minute ago, I was not believing that today is not Thursday, any more than I was believing an infinity of other things I could potentially think if they came to mind.

      Belief is a weasel-word. Believing is feeling (rather the way "seeing is believing")...

    4. @Prof:
      (1) I am confused about how explaining how and why there could not be a T5 Zombie would solve the Hard Problem. Would it be because in doing so we would have created a robot that is sentient, and since it is our creation, we would have figured out how to create consciousness and so how our brains produce feeling?

      (2) Also, regarding DD's argument that there is no hard problem, is it because reverse-engineering behavior would necessitate a corresponding internal state, which is what we call consciousness? So according to him, if we successfully reverse-engineered a human brain's function in a T3-passing way, there would be a need for internal states identical to the ones we have which would correspond to what we know as feeling? (Identical internal states + behavior would be T4?)

    5. (1) A causal explanation of how and why there could not be a T5 Zombie would be a causal explanation of how and why feelings are needed to be a T5. We could then use that how and why to explain the causal role of feelings (i.e., to solve the hard problem). (But there is no such causal explanation of how and why there could not be a T5 Zombie [yet?] because it would be equivalent to a causal explanation of how and why we feel, i.e., the hard problem.)

      Logically, whatever formally proves that an X is impossible without Y is a formal proof that Y is necessary for X. But science and reverse-engineering are not formal mathematics, so they cannot prove either impossibility or necessity.

      (2) Solving the easy problem (of producing doing-capacity) requires internal states, but how and why those internal states are felt states is the hard problem.

      What DD says is that whatever internal states turn out to be able to do all the doing are also all there is to "feeling." There's only the easy problem. DD is still a behaviorist.

    6. “Regardless of whether Zombies feel or not, I get the impression that Dennett is right in supposing that heterophenomenology is the direction scientific research is and should go in. Science has never taken anything for granted, and is founded on testing everything it can find empirically. Historically, subjective experiences have been sheltered from scientific scrutiny, for a lack of motivation and means.”

      Julian, you seem to be interpreting the critique of heterophenomenology as being an attempt at limiting science. I don’t think there’s any doubt there are many things that can be learned from this methodology (not in my mind at least). However, it seems to be suited to explaining specific instances of subjective experience rather than the broader “hard question(s)” of consciousness.

      For instance, correlating first-person accounts with biological occurrences could lead to conclusions such as “this person is feeling positive feelings because of an influx of oxytocin in their brain” (putting aside the fact that determining the direction of causality, or causality altogether, is not truly possible through correlation alone). This can be very informative, but it does not answer the question of how and why it is that this biochemical reaction produces subjective experience.

      Moreover, the main issue with DD’s argument for heterophenomenology is that he presents a reductive view of consciousness to justify its use. DD puts forth that solving the easy problem solves the hard problem, essentially arguing that the combination of first-person accounts and third-person assessments captures all that consciousness is. This just seems blatantly untrue - both are second-hand descriptions of consciousness, and they do not bring us any closer to measuring conscious experience directly.

      Lastly, you say that “subjective experiences have been sheltered from scientific scrutiny”. Once again, I think that skepticism of first-person accounts is an important general scientific principle. However, it does not make sense when subjective experience is itself the object of the investigation. The idea that we would learn, through objective means, something about how someone feels that they don’t know themselves seems ridiculous to me. People can be wrong about the true cause of their internal states, but they cannot be “wrong” about what it is that they feel. Professor Harnad’s migraine example is a helpful way of understanding this.

  2. “False negative: Some psychological things that happen in people (to put it crudely but neutrally) are unsuspected by those people. People not only volunteer no information on these topics; when provoked to search, they find no information on these topics. But a forced choice guess, for instance, reveals that nevertheless, there is something psychological going on”


    Someone with blindsight can guess above chance what was presented to them, even though they cannot report seeing the thing. It is thought that this happens via a visual pathway that does not go through V1 and instead goes straight from the eyes to the ventral pathway. When we study these cases in class, it’s shocking that our brain can know something and allow us to communicate it without consciousness (without feeling “ah yes, I saw X” - in fact saying “I didn’t see anything” instead).

    Alternatively, let’s take split-brain patients. These patients have had the corpus callosum, which connects the two hemispheres, severed, so communication between the two is not possible.
    Things presented in their left visual field are processed by the right hemisphere. In most people (specifically right-handed people), language is localized to the left hemisphere, so in this case our patient would say that they had not seen anything. However, patients are able to reach out with their left arm and grab the item they “didn’t see”. This was thought to be an example of not having conscious access to information, but it turned out instead to be simply an issue of not being able to communicate it.

    For blindsight, it’s a bit trickier since they cannot report any vision and yet behave in ways that seem like they’re getting visual input, raising the question of how something as crucial as vision can be going on unconsciously.

    This is all really cool to talk about and think about, but I don’t know how much it applies to reverse-engineering cognition because right now we have no way to prove that anyone but ourselves actually feels anything (the other-minds problem). So, bringing in the hard problem (why we feel / are conscious) without having a way to measure or test it just feels like more work for no reward. If we found a clear way to test whether something or someone feels, then I’d be inclined to reconsider including that requirement, but right now I don’t think it’s reasonable to expect our T3 robot to feel when we have no way of testing that. For now, we’ll have to settle for what we’ve been settling for all semester, which is a robot that can do everything we can do, because that’s the best we can get right now. It’s easy to test doing capacities, but feeling capacities have been off the table so far, and I think they’ll remain off the table until we figure out a way to test feeling to begin with.

    1. I'm sorry this is long, but it's something that I have a lot of thoughts on and something I had trouble coming to terms with. I WANT a T3 robot to be cognizing the way I FEEL LIKE I cognize, but right now we're nowhere near figuring out how to test that on humans, let alone on our nonexistent T3 robot, so I just had to let that go.

    2. All good points. The (unsolved, untouched) hard problem is: How and why isn't all "sight" just blindsight?

    3. Hey Lyla! :) I completely agree with you on how consciousness is interesting to discuss but not applicable to reverse engineering human cognition at this time.

      We learned from Dennett’s article that, through heterophenomenology, researchers can only interpret a subject's behavioural, physiological and verbal responses from a third-person point of view. Most importantly, this method does not address the hard problem (how and why we feel). If we successfully built a T4 robot with the same biochemical properties and performance capacities as a human, we would still have no test to check whether that robot feels like us. Hence, I kept thinking about the problem of other minds too.

      From Professor Harnad's transparencies, we know that there are many weasel-words for consciousness, and they all simply refer to "feeling", which can only be validated from the first-person perspective. We previously learned that the ultimate tool for reading someone's or something's mind is language. As of right now, we can only give the benefit of the doubt and assume that a non-human animal or T3 robot is feeling if it indicates so.

      Heterophenomenology also reminds me of what we learned from weeks 4 and 7 about neuroimaging and evolutionary ecology. These first-person accounts of subjective beliefs are essentially the correlates of feeling. But we know that a correlation does not mean causation. Many students have already pointed out that, to answer the hard problem, we have to determine what the causal mechanisms of feeling are in the mind.

  3. Dennett proposes heterophenomenology as a way to make the study of consciousness empirical – using concrete and measurable evidence that will make studying consciousness a more rigorous and objective science. Science, however, is not certain. As we have discussed previously in class, certainty comes in two varieties: mathematical certainty from proofs, and Cartesian certainty in the sense that we can be certain we are having a feeling. My confusion arises from the fact that Dennett has taken this 1st-person certain knowledge of my personal feelings, recorded various 3rd-person data like verbal explanations of my feelings, brain scans, whether I blush, etc., and is now trying to repackage it and sell it back to me as objective results. What I don’t understand is why this is such a remarkable process – we have gone from something (my feelings), about which I am fully certain, to scientific data which is pretty certain but not perfectly so. As we discussed in the last lecture, linguists were able to use first-person evidence to test potential UG rules, and could overcome controversy about this method by showing how widespread those feelings (in this case, feelings about making sense/not making sense) were shared.

    One way that I am thinking about the 3rd person heterophenomenological data we could collect about consciousness is as the sort of data one might collect for a T4 Turing Test where the capacity we are testing is if the candidate could feel or not. This is kind of complicated because feeling isn’t really doing. In Harnad’s paper Minds, Brains, and Turing we are asked to consider if we would dismiss someone as mindless if they turned out to be a T3 robot that didn’t pass T4. If we assume that heterophenomenological data is consciousness data, and the brain scan data collected as part of our heterophenomenological data set didn’t match what should have appeared with consciousness, then would we arrive at the conclusion that this person doesn’t feel? I agree with the tone of Harnad’s paper that this seems like an unbelievable (and potentially hazardous) conclusion to reach.

    I am definitely not certain about what I have just written because I found the Dennett paper a bit difficult to follow, so I look forward to the other readings and skywritings this week hopefully clarifying my understanding.

  4. After reading Dennett’s paper, I would have to agree with Stephanie that heterophenomenology does not sound like a particularly radical new method to study first-person data. In his paper, Dennett writes that heterophenomenology moves “raw data to interpreted data” and that doing so somehow makes it neutral. I’m sort of confused by Dennett’s position on this.
    Later in his paper, he also writes that the ability to predict some phenomenon from this 3rd-person point of view (via heterophenomenology) demonstrates the ability to explain said phenomenon. In my attempt to link this to the topic of consciousness: is Dennett then proposing, with his Zombie hunch, that the “hard problem” is not relevant to reverse-engineering cognition? I disagree with Dennett’s claim that the zombie hunch is just some mental block to “get over”; throughout this semester, I thought we’d come to the conclusion as a class that being able simply to predict cognition (like predicting the weather) isn’t enough to explain cognition, because it doesn’t propose a causal mechanism. I fail to see how heterophenomenology, with its “neutral data”, can provide any more insight into providing a causal mechanism for even the easy problem of cognition.
    Turing says that “cognition is as cognition does”, but I was under the impression that he said this because it’s simply the best that we could possibly hope to do with his Turing Test, not because he thought that cognition is genuinely just doing, and lacks feeling.

    1. When we interpret what someone else is doing, saying, or neuro-emitting, we are interpreting (mind-reading) the "raw" data they are emitting. This "3rd person view" is analogous to (and partly overlaps with) interpreting a formal symbol system in maths (Searle's "squiggles and squoggles"). But it's just as illusory as in Searle's Chinese room, because that interpretation is in our heads, and not in the heads of the one whose behavior, words, and neural correlates we are interpreting. And it feels like something to interpret their "data" that way.

      (The 1st-person 3rd-person dichotomy is just more Mustelidan Mist: the "1st person" is the feeler of the feelings whose correlates [data] the "3rd person" interprets (mind-reads); but the 3rd person is a feeler too, and interpreting feels like something!)

      People really do feel. So we are in fact right when we interpret them as feeling. Trouble is that the fact that we can (correctly) interpret other people's doing, saying, and neuro-emitting, as (among other things) feeling (knowing, feeling, wanting) [this is DD's chess/queen insight that I mention in the PREAMBLE above] does not explain how or why they are feeling -- rather than just doing, saying, and neuro-emitting. Which is just back to the "raw data."

      For DD, that's all there is; there's nothing left to explain. It's not that the hard problem is irrelevant to the easy problem: It's that there is no hard problem.

      "Cognition" is a word that can be used as yet another weasel-word. It refers to both doing-capacity and feeling-capacity. Solving the easy problem does provide a causal mechanism. But only for the doing-capacity. Turing is right that that's as much as you can do with his method. And it may be true (as "Stevan Says" it is) that any system that can pass T3 can not only do anything and everything that we can do, but it also feels, somehow -- but we have not explained how or why. All we've explained is how and why it can do what it can do. DD says there's nothing left to explain: "Get over it."

    2. Ah okay. So according to DD, the hard problem doesn't exist because "feeling" is something that the interpreter ascribes (as the 3rd person) to someone else in order to understand them and avoid the other minds problem. Is this what you meant by this from the preamble:
      "Dan’s reply is that feeling is simply the behavioral states (“functional states”) that correspond to the ones we call felt states. Believing you should get your queen out early in chess is just an internal state (of the reverse-engineered causal mechanism)."

      But isn't "internal state" also just a huge weasel word?

      And then in reply to DD, we(/you) are saying that the hard problem DOES in fact exist, since to interpret something also feels like something.

    3. Esther, can you explain what you mean by "'internal state' is a weasel word"? Is it a weasel word when we refer to the internal state of an oven or a washing machine, or a computer?

      A weasel word is either to use a synonym or something that is ambiguous, meaningless or irrelevant, in order to appear to have solved a problem, or added something new to it.

      "Internal state" seems to me to be as innocent as "on" in "the cat is on the mat". Things have external states you can see, and internal ones you can't. But if you open them up, you can. But you can't see feelings: you can't see that an internal state is a felt state, rather than an unfelt internal state of a Zombie.

      A felt state is a state that it feels like something to be in. Think of the Cogito. Esther can know that Esther is in a felt state. No one else can.

  5. Dan Dennett describes heterophenomenology as:

    Maximally extended, it is a neutral portrayal of exactly what it is like to be that subject–in the subject’s own terms, given the best interpretation we can muster.

    Which basically says that someone’s verbal report of their “beliefs” (a weasel-word) about what it’s like to be them does not differ in any meaningful way from what it’s like to actually experience being them (I also acknowledge that heterophenomenology is not limited to verbal reports, but I don’t think discussing that at length is relevant). I have to disagree.

    When I was reading Dennett's paper I couldn't help but think about the famous paper by Thomas Nagel, "What Is It Like to Be a Bat?" In that paper, Nagel argues that no matter how much objective, third-person information we have about the physical systems of a bat (how they echolocate, how poor their visual capacities are, etc.), we will never experience what it is like to be a bat. This is because, according to Nagel (and many others), subjectivity is a crucial component of consciousness: the only way to experience what it is like to be something or someone else is to be that someone or something else; there is no replacement for feeling.

    We can also apply this logic to human consciousness (animal consciousness is much trickier to think about since animals cannot use language and so we feel a lot less certain about whether a dog thinks as opposed to whether our fellow humans do). I can ask Dan Dennett what it’s like to be Dan Dennett. I can scan his brain while he talks to me or take measures of his physiological responses like blushing or perspiration. But even then I won’t actually know how it feels to be Dan Dennett and I won’t know why there is feeling at all. In this way I think Harnad was right to separate doing-capacity from feeling-capacity. If we can reverse engineer our doing capacities, I’m not confident that that would necessarily explain our feeling capacities, and I think that those really matter.

    (Also sorry that this is long and a bit of a tangent, I'm not entirely sure if this makes sense so I welcome any feedback)

    1. The hard problem is not to get me to feel what another feeler feels. The hard problem is to explain causally how and why the feeler's brain produces feeling.

      The other-minds problem isn't to get me to feel what another feeler feels either. The easy problem is to determine whether, when, and what a feeler feels.

      Heterophenomenology is equivalent to T4, which solves neither the other-minds problem nor the hard problem (do you see why?), although Turing (rightly) points out that the TT does as well as we can do on the other-minds problem.

      (To avoid weaselling, Nagel's paper should have been entitled "What does it feel like to be a bat?")

    2. T4 means creating a robot that can do anything a human can do verbally and robotically, as well as what the brain does internally.

      It does not solve the other-minds problem because even if we created such a robot, we could not feel what the robot feels (if it even feels). There would be no way of knowing whether it is a zombie or not. The OMP is about whether or not something feels.

      (could I say this is because we could not use Searle's Periscope?)

      It does not solve the hard problem because it does not explain how and why the brain produces feeling. Because of the other-minds problem, we could not be sure of feeling; and because we focused on physical similarity, it could at best recreate correlations that we already see in neuroimaging studies, without explaining anything.

    3. First, a correction to my reply to Allie, above:

      The second sentence should not have been:

      "The other-minds problem isn't to get me to feel what another feeler feels either. The easy problem is to determine whether, when, and what a feeler feels."

      It should have been:

      The other-minds problem isn't to get me to feel what another feeler feels either. The other-minds problem is to determine whether, when, and what a feeler feels.

      Ishika, you're mostly right. "The OMP is about whether or not something feels." But we would not have to feel whether something feels. It would be enough, for example, if there were evidence for the existence of a fifth physical force, and the ability to measure its presence whenever a sentient organism is in a telepathic or a telekinetic state (i.e., thinking or willing something) (as dualists like parapsychologists believe).

      Does this correlate have to be a 5th physical force rather than just a neural correlate? With the 5th force we would have an independent causal explanation of feeling; with the neural "power," it is not clear that we can disentangle it as a generator of doing-capacity and a generator of feeling. So we're left with just the non-explanatory easy explanation of doing-capacity.

      And of course dualism is wrong; and there is no evidence at all for parapsychologists' dream of a mind-over-matter 5th force.

      Yes, Searle's Periscope across the OMP barrier would only have worked if computationalism ("cognition is just computation") had been right. But computationalism is wrong too.

  6. DD's description of heterophenomenology reminds me of Harnad's analogy that "a [thing] is worth more than 1000 words." DD suggests that we collect raw data, catalogue them, and make sure they are "bracketed for neutrality" (whatever that means). Wouldn't we run into the problem of collecting more and more data (analogous to Harnad's many verbal descriptions of a given object) but never getting to be in the state of consciousness itself? The missing element is what Chalmers suggested to be 'direct experience' (to which DD vehemently objected). It's like having all the possible data on our T4 robot; the moment the data don't match what "consciousness" is, we can claim that it doesn't have consciousness. I'm not convinced by DD's claim that there is nothing left to solve after the easy problem: collecting information does not get us anywhere in terms of consciousness. It helps, but there is certainly more work left.

    Also, DD's dismissive attitude toward Chalmers' advocacy for a first-person account is kind of ironic: DD wants to make heterophenomenology "neutral," but the thing he is trying to work out is not at all neutral. Consciousness is something we've been grappling with forever.

    Replies
    1. Yes, trying to feel exactly what another feeler feels on the basis of what the other feeler says, does, and "neuralizes" would be like the object/1000-words point about approximation.

      But see the reply above about what is really at issue in the hard problem or the other-minds problem.

      Heterophenomenology (T4) does not solve the hard problem. We already knew that even less than T4 would be enough to solve the easy problem. And Turing is right that that's the best we can do on the other-minds problem, and that that's good enough.

      And heterophenomenology (T4) certainly does not prove that there is no hard problem, let alone that there is no such thing as feeling, or that T4 capacity is all that's meant by "feeling."

  7. CHALLENGE

    Can you state exactly why the hard problem is hard?

    Replies
    1. I'll take a shot at this!

      So we've said that the hard problem is to causally explain how we feel what we feel and why we feel what we feel (put another way, how does our brain produce the sensation of feeling and why do we have this capacity). The easy problem is causally explaining how and why we can do everything we can do.

      The Turing Test seems to be the best bet for solving the easy problem. If we can build something that can do everything we can do for a lifetime (T3), then we surely know how our doing capacities work. The 'why' part can potentially be explained by evolutionary psychology: our capacities evolved to address some kind of problem that our ancestors encountered. There is definitely more work to be done here; it's very easy to claim that a "thing evolved to address an evolutionary problem" without really explaining anything. But as I said in a previous Sky, no other approach we've looked at so far has tried to tackle this part of the easy problem.

      Now onto the hard problem. At first my intuition was that the hard problem is hard because feelings are really nebulous and mysterious and can only be felt by the feeler. But I don’t think that’s quite right. I think the hard problem is hard because there’s some kind of difference between doing capacities and feeling capacities that we have not been able to pin down and explain. If we manage to solve the easy problem, we’ve explained how we do everything we can do, and yet feeling remains unexplained. What could the causal explanation for feeling be when we’ve already causally explained everything we can do? And so we become stuck, because we know that feeling exists (because we feel it), but we’re out of causal fuel. So to put it succinctly, I think the hard problem is hard because we’ll always be faced with an “explanatory gap” when it comes to feeling: even if we reverse-engineer all of our doing capacities, we still haven’t answered the hard problem.

    2. Even if we could create a perfect T4 (or above) robot who is physically and behaviourally identical to humans, we have no way of ever knowing if they are conscious because we don’t have Searle’s periscope.

    3. The hard problem is concerned with explaining the causal mechanisms of how and why we feel. Explaining how our brains produce feeling from sensation and perception and why we do this in the first place would be the focus of the hard problem. Solving the hard problem would involve reverse-engineering feeling capacities. We cannot sufficiently explain how and why we feel through introspection, raw data or prediction.

      The hard problem is hard for several reasons. The first is that it is not clear what feelings are, so we can't properly measure them, let alone reverse-engineer them. We know that we can get sensations from electrical signals that are interpreted by our brain, but this electrical signaling does not tell us what feeling it will produce. Furthermore, this electrical signaling does not tell us how it produces feeling; it is simply an outcome that we as cognizers experience. Thus, there is a large gap between what is signaled in the brain and the feeling that is produced. This gap could potentially help explain what feelings are, but this isn't evident either.

      The other difficulty surrounding the hard problem is that we have to causally explain how we feel and why we feel. When attempting to reverse-engineer the hard problem, the best that we can do is explain the doing part. The feeling aspect is just hanging there like a residue that can be neither explained nor understood (understood in the sense that we don't know how it is there; we know what feelings are, because we experience them as felt states). Moreover, doing and feeling capacities are intimately linked; it is not evident that one causes the other, or that we would do things without a feeling behind them. While this in itself complicates things, what makes it trickier is that to fully reverse-engineer feeling capacities, we have to figure out why we feel things instead of just doing things, without feelings influencing our decisions in the first place (i.e., a causal response: I have a toothache, therefore I won't chew on the side that has the toothache. In that situation, no pain sensation influences the cognizer's decision; that would be making a decision without feelings influencing it. But in reality, we choose not to chew on the toothache side because our brain is telling us: it hurts (feeling), and chewing on that side will provoke it (another feeling), therefore we must chew on the other side to avoid pain and prolonging our condition (feeling)).

      Thus, explaining causation becomes virtually impossible. There are too many variables to consider, and, as the professor puts it, feeling seems "causally superfluous".

    4. Allie: The HP is to explain how and why we feel (anything). Determining whether, when, and what we feel is the OMP. (The rest of your comment is correct: the (eventual) solution to the easy problem will be the main reason the hard problem is hard, because it leaves no causal room to go on to explain feeling.)

      Katherine, it’s not that the OMP and the HP are unrelated, but what you describe is the OMP, not the HP.

      Robert, you’re mixing up the HP and OMP a bit too (but basically I think you get it).

      The HP is not about whether an organism is feeling this or feeling that but about how and why it is feeling at all.

      And the problem is not with defining feeling: We all know what the referent is, from the Cogito/Sentio introspection (and that’s also called an “ostensive” definition: showing rather than telling).

      The reason feeling seems causally superfluous is the fact that it seems that doing is explainable without it. That’s not very satisfying, because feeling is obviously a biological trait, produced (like doing) by the brain. (And of course feeling feels causal — it feels like a causal force that I am exerting!) So cogsci seems incomplete without an explanation of it. (And add to that that feeling is the only thing that matters!)

    5. The hard problem is to provide a causal explanation of how and why we feel. It’s a tough nut to crack because the only feeling states we’re sure of are our own (given the OMP), which makes “feeling” a phenomenon that is not objectively measurable. Even if we were to solve the easy problem and determine how and why we can do everything we can do, this does not guarantee an answer to the hard problem. Not only do we lack a way of accurately testing feeling, but feeling doesn’t seem necessary for doing.

  8. Although I disagreed with the goal set by this paper and the presuppositions its arguments were based on, I did find it very relevant in (a) setting the background and dichotomising the prevalent streams of CogSci concerning the hard problem and (b) illustrating how reverse-engineering cognition on the basis of the Turing test is difficult (if not in vain) for explaining how and why we feel.

    The “zombie hunch” as set out by Chalmers is key. Supposing, per his definition, we create a T5 robot identical “molecule for molecule and in all the low-level properties postulated by a completed physics”: would this T5 be able to feel?

    On the one hand, as behaviourists such as Dennett would argue, feelings somehow follow from physical mechanisms specific to the human body, such that a T5 robot that is functionally and thus behaviourally identical to me would somehow have identical feelings to mine. I see an image, I identify it as a deceased relative, my gut contracts and my heart accelerates, and I categorise this as sadness.

    Or, on the other hand, as Chalmers contends, this T5 would be identical to me functionally, processing the same information, outputting appropriate behaviours and following indistinguishable internal mechanisms; HOWEVER, it would lack conscious experience entirely and would therefore be “a zombie”. (As a zombie) I see an image, I identify it as a deceased relative, my gut contracts and my heart accelerates, and I categorise this as sadness, but I am never “conscious” of it in the way a fellow human being is conscious of it.

    However, it seems to me that neither of these positions really matters; the way the hard problem is stated, our method of solving it can only ever be speculative. Why? Because from a third-person point of view, there is simply no way of distinguishing T5 from a normal human being; reverse-engineering the hard problem the way we are reverse-engineering the easy problem (unless they are the same thing, as Dennett supposes) can only be inconclusive. The other-minds problem, unless it is someday solved, will forever keep us from answering the hard problem.

    And as explained at the beginning of this class, cognition is as cognition does, so if we cannot demonstrate feelings being replicated by “doing them” (i.e., replicating them artificially), we cannot demonstrate the how and why of feeling. So if I’m getting this right, the hard problem is at a stalemate: we either have to review our definitions of cognition and recalibrate our tests, or forfeit and accept feeling as unexplainable altogether.

    Replies
    1. The notion of a T5 Zombie is nonsense. It is just another way of highlighting that we cannot explain how or why T5 is not a Zombie. The hard problem is a problem of explanation. It can be neither solved nor elucidated by fantasies.

      And there is no other-minds problem (OMP) worth even thinking about in normal adult humans (except maybe under general anesthesia or in delta sleep). The OMP really only arises concerning people in chronic vegetative states or early foetal stages. There is also no sincere basis for doubt with nonhuman mammals, birds, most vertebrates (including of course reptiles, amphibians and fish) and most invertebrates. (Peter Singer's "bivalve exception" is almost as arbitrary as Descartes' human-only claim.) The only terrain where uncertainty about sentience is nontrivial is plants, single-celled microbes, corals and possibly sponges -- plus, of course, sub-T3 robots.

      But the hard problem is not a consequence of the other-minds problem: knowing that an entity feels does not explain how or why it feels.

  9. Dennett’s heterophenomenology did not appear to me to be very different from a sort of neuroscience tailored to “understanding” subjective experience. Using the scientific method, he claims that once cognitive science characterizes the patterns and relations between “beliefs” and brain activity (or some other physical states), then at that point both cognition and feeling will be explained. In other words, he seems to think that Turing’s engineering approach can encompass both the easy problem and the hard problem.

    The hard problem will not be solved by reverse-engineering cognition because causal mechanistic explanations that would allow cognitive scientists to design a T3 robot are not the same kinds of explanations that would resolve the hard problem. I think that it may be the case that figuring out how to build a T3 will give us the necessary tools to build something that is a feeling thing, but we won’t foreseeably have any way of knowing whether they are actually feeling, in the same way that this problem arises within our own species. We make the assumption that everyone else is not a zombie, and so will we with a T3 robot. But the grounds for believing that will not be because we understand the causal mechanisms of the robot. Like us, that robot will be made of inanimate matter. The hard problem is about finding out why feeling occurs in certain configurations of matter and not others and why it occurs at all. None of that follows from a science like heterophenomenology or more generally in the science(s) that will yield T3 robots.

    Although I don’t agree with Dennett, I also don’t agree with Chalmers and his philosophical zombie. There is no way to prove that a T5 robot is feeling, but I don’t see how thinking about a “logically possible” T5 zombie is useful. He admits that it is physically impossible. I think that a molecule-for-molecule copy of me would have exactly the same properties as me (minus the experiences that "shaped" me, but that’s just tuning knobs). Is he a dualist of sorts? I can’t see what more there is to feeling than some unexplained physical/causal property to which access is made difficult by the other-minds problem…

    Replies
    1. I am commenting here on this post because I like debating with Solim, but I think you have done a great job at presenting Dennett's heterophenomenology and the hard problem, so I have nothing to add to that.

      I have to say that I am very tempted by Dennett's idea. His reasoning makes me think of the "élan vital" that was proposed as an explanation of life. I think we all agree that the idea of an "élan vital," or of a certain force emanating from the sun that would have the virtue of conferring "life" on all organisms, was surely throwing in the towel on materialist explanations way too quickly. Today, we have good materialist explanations that account for the origin of life (for example, the birth of replicators near hot geysers in the depths of the oceans). Consciousness or feeling looks like a mountain to tackle, just as explaining the origin of life did to the philosopher Henri Bergson when he thought of the "élan vital".

      Now, if I take Solim's comment as a start point:

      "The hard problem will not be solved by reverse-engineering cognition because causal mechanistic explanations that would allow cognitive scientists to design a T3 robot are not the same kinds of explanations that would resolve the hard problem."

      Aren't we throwing in the towel too early here, saying that the causal mechanistic explanations that would allow us to design T3 robots would just not be the kind of explanation that could resolve the hard problem?

      What is the basis for rejecting mechanistic kinds of explanation? Because "feeling" looks different from a configuration of matter? Didn't we used to think that "life" looked different from "matter" and that it couldn't be explained materialistically? That seems to me to be selling the issue short.

      I have heard Dennett dissect some of our feelings, such as "pain" (which might not be so convincing) but also the "feeling of seeing a blue sky" (which was much more convincing). This gave me a better idea of how materialistic explanations could do the trick. To be sure, I also think it seems almost impossible, just as explaining life might once have looked impossible. But there are billions of neurons in the brain, and the brain itself is probably one of the most complicated machines in the universe. And we don't know much about it yet, so I believe we might be too early in saying that explanations of the easy problem wouldn't be the sort of explanations needed for the hard problem.

      See this link for a great interview with Dennett on exactly what we are talking about and on the dissection that I mentioned above:
      https://www.youtube.com/watch?v=eSaEjLZIDqc

    2. Maximilien, I too at first thought that mechanistic explanations that would allow us to build a T3 might be a "two birds, one stone" kind of explanation. But that cannot be the case for one simple reason.

      When we discuss the hard problem in this class, we often use the catchphrase "explaining the how and why of feeling". I think that figuring out the how, although part of the hard problem, is probably not as "hard" as solving the why. The how might potentially be covered by the mechanistic explanations. However, even if we have high confidence that the T3 robot is feeling (as we do with each other), because of the other-minds problem there is never any guarantee that our intuitions or reasoning about whether such a robot is (or is not) feeling are correct. Given how intertwined feeling and thinking are, it seems unlikely that something that can do what we can do would be a total zombie... but it isn't impossible that that would be the case either.

      Now, suppose we can circumvent this problem; we are still left with an unanswered question: why does the T3 robot feel at all? At this point, we have outlined a full mechanistic explanation of cognition that includes the mechanisms of feeling. But the feeling part seems superfluous; it just "hovers" over the mechanisms causing the thinking. In other words, we have a case of unidirectional causality: the interaction of the physical parts that constitute the T3 robot causes both the feeling and the cognition, but the cognition can be explained without reference to the feelings, using the causal mechanisms outlined. Here you can see why one might go as far as to say that the problem is insoluble, as Harnad has suggested.

    3. Maximilien, see other Replies in 10a-c about why the analogy between life (élan vital) and cognition (feeling) does not hold up. In short, when you have reverse-engineered all the observable, measurable properties of living organisms, there are not only no causal degrees of freedom left, but you don't need them, because there is no élan vital left either, so nothing left to explain.

      In contrast, when you have reverse-engineered all the observable, measurable properties of sentient organisms (all the way to T4 or even T5), there are again no causal degrees of freedom left, but you do need them, because one vital property, feeling, is left over and still has not been explained.

      Solim you basically get it, but there is still some conflation of HP and OMP in your comment.

  10. To build off Solim’s post (which is super useful for synthesizing some of the big concepts, thank you Solim), I wanted to discuss how Dennett really connects Turing to his heterophenomenology method. The bridge between Turing and his method has to do with being able to investigate 1st person observations using 3rd person methodological principles of science. However, this seems to be as far as Dennett goes in terms of resembling any of Turing’s thoughts.

    The notion of taking raw data (verbal reports, behavioral reactions, etc.) and turning it into interpretable data (beliefs, attitudes, emotional reactions, etc.) feels very loosely tied to computation. To me, it seems that Dennett supports taking already interpretable outputs and morphing them into another set of meaningfully interpretable outputs. There is little emphasis on inputs or on the formal rules and manipulations that Turing supposedly believes are involved in cognition. That being said, how does Dennett’s heterophenomenology really help reverse-engineer cognition if he appears to be merely reformulating what we already know? I wish he had made more connections to Turing’s thoughts on cognition, since he uses him as the launch pad for his argument.

    Replies
    1. DD's "heterophenomenology" is already part of T4.

      T3 already grounds the meanings of words in their referents, through the speaker's sensorimotor interactions with them: especially through category-learning and categorization (doing the right thing with the right kind of thing) and then producing and understanding (grounded) subject-predicate propositions to describe further categories (“a zebra is a horse with stripes”, "the cat is on the mat").

      DD just reminds us that those propositions can include “I have a splitting headache” or “I believe computationalism is true.”

      DD is not particularly a computationalist (and probably Turing isn’t either). Both T3 and T4 are hybrid analog/computational.

      But Turing merely says that the TT methodology can only explain doing, not feeling, whereas DD says that’s all there is to feeling.

  11. "that conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer." (Levine, 1994)

    This quote sums up the hard problem, which is the problem of how and why we feel. Dennett argues that this claim is wrong. He outlines what he calls heterophenomenology, which is meant to be the study of felt (or conscious) states based on objectively observable data (data observable from the third person), like verbal judgments about conscious states. The reason, I think, this is appealing to Dennett is that it seems to be in line with the scientific method: science uses 3rd-person-observable data to test hypotheses and come up with theories. As is evidenced by Dennett's title, "The Fantasy of First-Person Science," Dennett does not believe that proper science can be done if the data we are using or the phenomenon we are trying to explain are not accessible from the 3rd person. However, if we use this technique, as Dennett himself points out, we are not really studying feeling but rather the behaviour that is associated with felt states. Dennett writes, "what has to be explained by theory is not the conscious experience, but your belief in it (or your sincere verbal judgment, etc)."

    This seems to miss the point. The hard problem is not the problem of why we have certain beliefs about consciousness, but why conscious (i.e. felt) states exist at all. What Dennett is describing seems to be something like what Chalmers has called the "meta-problem of consciousness"––that is, explaining why we have certain beliefs and intuitions about consciousness. However, the meta-problem is not the same as the hard problem, and even if we could explain some of our beliefs about consciousness, it would not solve the hard problem. The "explanatory gap"––that is, the mystery of how we jump from inert non-feeling neurons undergoing action potentials to felt states––would remain. Hence, it seems that Levine's initial objection was correct––the data we must explain are not beliefs about feeling or behaviour associated with feeling, but the existence of feeling itself.

    Replies
    1. In class this week I talked a little about my understanding of epistemology from studying gender and social justice, which has led me to question any scientific inquiry that claims to rely exclusively on tangible, quantifiable evidence in the pursuit of singular objective truth - a.k.a. positivist first-person science. I don’t dispute that there are truths we can’t change by any choice or behaviour, such as physical and mathematical principles, but our social situation inevitably influences our observation and interpretation of everything we want to measure, and how we ask our questions - fundamentally what things "mean."

      That being said, I think I agree with you, because the hard problem isn't the other-minds problem or explaining what feelings are. This heterophenomenology concept might be useful for certain theoretical purposes, but it doesn't seem to actually tackle the question of what the causal explanation is for the human feeling of being conscious or sentient.

    2. AlexSG, 1st/3rd person-ing is just weaselling.

      We’re looking for a causal explanation of doing-capacity and of feeling capacity. Both are biological traits of organisms. As in all other fields of causal explanation, there are observable data, and causal explanations of them, whether it’s about how and why oranges fall or how and why organisms feel.

      Feelings are special, because you can only “observe” them in yourself; in others you can only observe their behavioral, verbal and neural correlates (all data).

      A second weaselling concerns “belief”: we don’t just “believe” that we feel: we know it (cogito/sentio). (Moreover, it feels like something to believe something; it also feels like something to know something.)

      DD does not explain how and why organisms feel; he just says the explanation is the same as the explanation of how and why organisms do what they can do, and there’s nothing else.

      AlexST, 1st/3rd person-ing is weaselling, but it’s best to get the weasels straight:

      To the extent that “positivism” means anything at all any more, it is that scientific explanation is based on publicly observable data (not “1st-person” data).

      “Truth” refers to statements (propositions), which can be true or false, or they can be just statements of opinion, preference or taste (“green is an ugly color”). If true or false, statements can be either provably true or false (in maths) or probably true or false (based on publicly observable data). If they are just matters of taste, preference, interpretation or opinion, all you can have is disagreement or agreement.

      Usually when people speak about “truths” in the plural, they just mean different statements that (some) people hold to be true, whether on the basis of mathematical proof, publicly observable data, or shared opinion, interpretation, preference or taste.

      Truth does not change, but available proof or data, or prevailing opinion might change or differ. (You can probably anticipate what I’m going to say next: used in plural, multiple “truths” or “realities” are weasel-words.)

      So opinions and interpretations might differ about what is true, because they may depend on different data or different preferences. But there is a fundamental difference between opinions based on (1) proofs or data, on the one hand, and opinions based on (2) preferences or taste:

      Opinions based on (1) can be supported or shown to be false by proofs or data, whereas opinions based on (2) cannot be shown to be false; indeed, they are neither true nor false, they are just opinions. If you want an example, just consider the opinion of half the US that Trump won. Do you think the data from the counts and recounts can change that?

      A lot of social opinions are like that, including religious beliefs, political views, and cultural practices.

    3. Hi Prof.,

      Okay - I misunderstood this "first person data" concept, I think it is the "third-person data" that aligns more with "positivist" approaches and suits the "objectivity" goal. I'm retreating on this point though, because I don't think it's all that relevant to the questions of how and why we feel some types of ways sometimes.

      In the context of Dennett's essay, what is the purpose of discussing first- versus third-person data? It seems like this paper is meant as a defense of his heterophenomenology, which is simply a methodology for performing research to investigate the "human experience" - i.e. what it feels like to do what we do. This is a descriptive endeavor and therefore necessarily inclusive of first and third person accounts of felt things related to behaviours/events, but none of it has any bearing on the fundamental questions of /why/ or /how/ some of us feel things sometimes.

      Did I just come full circle back to AlexSG's posted point?

    4. AlexST, I think you've got it now.

  12. Levine argues that "conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer." (Levine, 1994) Dennett's response is that "heterophenomenology gives you much more data than just a subject’s verbal judgments; every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected, are included in our primary data."

    This argument is not convincing to me as it reminds me of the issue with neuroimaging: observing blushes and frowns could at best be grounds for correlations, not causation.

    As mentioned in previous skywritings, "interpreting" these observations does not give us any meaningful insight into how and why we feel (the hard problem). I feel like the result of this kind of enquiry would be a 1st-person account of how I am feeling, together with some correlational data such as already exists in neuroscience. I understand that Dennett does not think there is a hard problem, and that once the easy problem is solved, there will be nothing left to solve. However, the methodology he proposes here does not seem to be moving toward solving either one of the problems. Even with a T4 robot, we would not be sure whether or not it feels, due to the OMP. Dismissing the hard problem does not solve it.

    Replies
    1. After reading the 10b article, I had a follow-up question:
      "Dear Dan, you keep giving examples of successful prediction of functions from functions, and then an overall causal/functional explanation of the correlation."

      I am confused about whether this means that heterophenomenology can answer the easy problem. I am still unsure how the methodology described in the first paper can yield answers different from the ones cognitive neuroscience has been giving for years, together with some introspection. Would a successful causal/functional explanation of the correlation answer the easy problem?

      Delete
    2. Just substitute "doings" for "functions" and you will recognize the point. HeteroPh is just T4, and it has no causal explanation of feeling, just doing.

      Delete
  13. In this paper, Daniel Dennett defends his method of heterophenomenology, based on the principles of the scientific method, by which the investigation into consciousness, like any scientific investigation conducted so far, can be pursued from a third-person point of view. Dennett argues that first-person subjective reports of experience can be dealt with objectively, in accordance with the scientific method of data collection from observations, experiments and careful reasoning. Studying our "beliefs" in this manner and correlating them with all available data about the brain and our behaviour (T4) (also acquired through the same methodology) would reveal all that there is to be explained in cog sci, in his opinion. Dealing with the easy problem was already something cognitive scientists knew pretty much how to do. Nevertheless, I think Dennett adds an important element: that we can use and integrate subjective data in this inquiry without compromising objectivity (by treating the 1st-person data as 3rd-person data). He attempts to put an end to the conflation of those two types of data (the 1st person and the 3rd person).

    Now, once all our doing capacities are reverse-engineered, Dennett argues that nothing left can be tackled in a fruitful manner. I feel like this is indeed true. There is an explanatory gap that remains when all doing capacities are explained. The hard problem remains, but it exists in this explanatory gap... Why bother? That is Dennett's take-home message, I believe.


    ReplyDelete
    Replies
    1. T4 (and T3) already covers everything that the organism can do, including everything it can say (e.g., "I have a headache" or "I believe there is no explanatory gap."). (Please read about 1st/3rd weaselling in other Replies on this thread.)

      DD does not say "once all our doing capacities are reverse-engineered... nothing left can be 'tackled in a fruitful manner'." He says "once all our doing capacities are reverse-engineered... nothing is left to explain."

      The hard problem is the explanatory gap. DD denies that there is an explanatory gap.

      Delete
  14. To make sure I’m understanding correctly: DD is suggesting that there is no hard problem, and that interpreting and understanding know-how (doing) capacity encompasses both doing and feeling and causally explains consciousness (beliefs).

    Harnad is suggesting that it seems like, by solving the easy problem, there are no more degrees of freedom left to solve the hard problem. Doing capacity is explainable without feeling, and there will always be uncertainty about whether we have truly explained all of cognition or just doing. Yet feeling is involved in everything that we do, so it must be (or at least should be) explained causally. However, during the last class there seemed to be a suggestion that since the causal forces in our world must be able to answer the easy problem, there must also be a way to answer the hard problem. This mix of permanent uncertainty and the existence of an answer to the hard problem seems like a contradiction to me, but I know it’s because I’m missing something somewhere…

    ReplyDelete
  15. DD is suggesting that there is no hard problem because "feeling" is just "beliefs" about feelings. (That sounds more as if it made sense if you use weasel-words like "consciousness." With "feelings" it sounds incoherent.)

    Feelings are not magic, nor mysterious; they are familiar to us all: a biological trait of (some) living organisms; and it's obvious that they must be caused by biophysical function, via the usual causal forces, somehow, just as doings are. What is not obvious is how or why. That's why it's called the hard problem. (The source of the problem might be in the nature of causal explanation, or the nature of feeling, or both.)

    ReplyDelete
    Replies
    1. Professor Harnad,

      Don't we assume that it feels like something to be in a certain state, without questioning the meaning of feeling, or what feeling refers to? It might be that "feeling" refers to doing capacities, i.e., that the meaning of "feeling" is grounded in our doing capacities (the meaning of "feeling" being grounded internally on functional states and not just externally through sensorimotor interactions).

      Could you explain the difference between DD's notion of "feeling" and your notion of "feeling"? It seems as if DD does not have the same notion as yours of what "feeling" is and of how it is distinct and different from "experiences about which you have beliefs". My trouble is that I am not quite sure what the difference is between saying that it feels like something to believe (to have a belief) and saying that you are having a belief (such that "having a belief" is just a doing capacity).

      Dennett, D. (unpublished) The fantasy of first-person science:

      "if some of your conscious experiences occur unbeknownst to you (if they are experiences about which you have no beliefs, and hence can make no "verbal judgments"), then they are just as inaccessible to your first-person point of view as they are to heterophenomenology. Ex hypothesi, you don't even suspect you have them--if you did, you could verbally express those suspicions. So heterophenomenology's list of primary data doesn't leave out any conscious experiences you know of, or even have any first-person inklings about."

      Dennett is saying that experiences about which you have no beliefs are inaccessible to your first-person point of view. If I try to extract Dennett's notion of feeling, it seems plausible to me that he is saying that experiences about which you have no beliefs are unfelt, and therefore it would suggest that experiences about which you have beliefs are what feelings are.

      Really not sure about all of that,
      Sorry for the length of this

      Delete
    2. Okay, because you have given it some thought, Maximilien, I’ll give you a detailed reply. (But this is not exam material):

      I don’t have to have a language to know the referent of “what it feels like to feel”. (It’s already there in Descartes’ Cogito/Sentio.)

      Nor does it have anything to do with what my theory of what causes feeling happens to be, nor with whether I believe there is a hard problem.

      When I feel a migraine, I could be wrong: It could be a tension headache, or a psychosomatic headache. But I can’t be wrong that it feels like what it feels like; and that it hurts.

      That it is a migraine rather than a tension headache is a belief.

      And believing anything also feels like something. It is not just “I am feeling a migraine” that it feels like something to believe, but also “the cat is on the mat” and “2+2=4”. They feel different from what it feels like to believe that “I have a tension headache,” “the mat is on the cat” and “2+2=5.”

      But a feeling, and a belief about a feeling are not the same feeling. (You can have multiple feelings at once — or almost at once, if it’s rapid serial time-sharing: The apple is round; the apple is red.)

      DD’s thinking will be more transparent to you if you swap “feeling” for the weasel words that conflate feeling with access to information (data), whether felt or unfelt; or that conflate feeling with belief about feeling:

      “EXPERIENCES about which you have beliefs”. --> “feelings about which you have beliefs.”

      DD: "if some of your CONSCIOUS EXPERIENCES feelings occur UNBEKNOWNST TO unfelt by you (if they are EXPERIENCES feelings about which you HAVE feel no beliefs, and hence can make no "verbal judgments"), then they are just as INACCESSIBLE TO YOUR FIRST-PERSON POINT OF VIEW unfelt as they are to heterophenomenology. Ex hypothesi, you don't even SUSPECT YOU HAVE feel them--if you did, you could verbally express those SUSPICIONS feelings. So heterophenomenology's list of primary data doesn't leave out any CONSCIOUS EXPERIENCES feelings you KNOW OF feel, or even have ANY FIRST-PERSON INKLINGS feelings about."

      MG: "Dennett is saying that EXPERIENCES feelings ABOUT WHICH YOU HAVE NO BELIEFS that you do not feel are INACCESSIBLE TO YOUR FIRST-POINT OF VIEW unfelt. If I try to extract Dennett's notion of feeling, it seems plausible to me that he is saying that EXPERIENCES feelings ABOUT WHICH YOU HAVE NO BELIEFS that you do not feel are unfelt and therefore, it would suggest that EXPERIENCES feelings ABOUT WHICH YOU HAVE BELIEFS that you do feel are what feelings are."

      SH: All true, but uninformative, once de-weaselled.

      Delete
    3. thank you very much for the in-depth response, this is much appreciated :)

      Delete
    4. Hi Professor!

      Reading your response to Maximilien, I started to wonder whether some of the problems with Dennett's model - which neglects to acknowledge any feeling about which you do not feel any beliefs as part of your feelings because they are 'inaccessible' - can be read as the failure of a computational model to truly capture all that goes on in a mind. This point is a bit strange, so you'll have to bear with me a bit. Basically, my thinking is that if you consider 'beliefs' as feelings which can be expressed through language (arranged through the computational process of syntax), you are in some ways performing a digitization, based on several points:

      First, you are limiting what is considered to what is representable through language, which requires the categorization or translation of everything that can go into a feeling (including physical sensations; think of skin conductance, or the hair on the back of your neck rising) into a somewhat binary form that relies on abstract categories (the feelings we give names to).

      Further, there is no way for a measure of feeling based on beliefs, which relies heavily on verbal reports, to be continuous: between reports there are certain to be missing chunks of 'experience'/feeling. Yet to say that you didn't feel those milliseconds, just because you cannot describe them or because there is a lag in your description, seems incredibly reductive.

      If feelings are only what we have beliefs about, we are in some ways abstracting feeling from a part of continuous cognitive function into a more special and voluntary experience. Given that we are theoretically feeling all the time, and can make ourselves conscious of feelings at any point in time, this hardly seems like a productive or informative way to study thought.

      Delete
  16. “How does [heterophenomenology] work? We start with recorded raw data. Among these are the vocal sounds people make (what they say, in other words), but to these verbal reports must be added all the other manifestations of belief, conviction, expectation, fear, loathing, disgust, etc., including any and all internal conditions (e.g. brain activities, hormonal diffusion, heart rate changes, etc.) detectable by objective means.”

    Dennett’s work is interesting in the context of this course because most of the theory we’ve looked at so far that tries to describe cognition has tried to fit cognition into a specific framework, be it computation, evolutionary theory, or universal grammar. What Dennett proposes is taking how people perceive their own thoughts, framing it objectively (Dennett refers to this as neutral bracketing), and correlating it with physiological measures. While this may be an ambitious attempt to make cognitive science more ecological in some capacity, Dennett is subject to the same problems that neuroimaging is subject to, as reported by Fodor. No matter how robust the relations he establishes between these objective measures and self-reports are, they are mere correlations; there is no physical link or psychic force that underlies the connection between these observations and the physiological measures, no explanation of why people feel in the first place.

    “Notice that Chalmers allows that zombies have internal states with contents, which the zombie can report (sincerely, one presumes, believing them to be the truth); these internal states have contents, but not conscious contents, only pseudo-conscious contents. The Zombic Hunch, then, is Chalmers’ conviction that he has just described a real problem. It seems to him that there is a problem of how to explain the difference between him and his zombie twin.”

    Not only does he offer no explanation, but the passage above also demonstrates that he doesn’t believe the problem exists. From what I understand, Dennett simply says that any system that can exhibit internal states will simply gain the ability to believe (i.e., a weasel word for feeling) just by virtue of having those attributes.

    ReplyDelete
    Replies
    1. Have a look at other Replies: HPhen is just a part of T4 (all doing, including saying, and brain doings).

      There is definitely causation and causal explanation there, not just correlation -- but it is only the causation, and the causal explanation, of the doing, not the feeling (i.e., "easy" reverse-engineering of doings).

      But there's a lot of weaselling here too: "contents, sincerity, beliefs" -- and of course "conscious." Felt or unfelt?

      Drop all that and distinguish felt from unfelt internal states. I can report a "content": I have a toothache. T4 can tell you "Yup, you're reporting that a nociceptive (damage-detecting) state is going on, and your tooth really does have a cavity, and the pain intensity level of 7 you report really corresponds to a reliable level of activity measurable in your insula." All those T4 doings can be causally explained -- except for the fact that they are felt, and not merely occurring, detected and reported.

      I'm not sure what Chalmers says or means, but what he should say and mean is that "Zombies" would have internal states (toasters are Zombies); and if they are T3 or T4 Zombies, they could report and describe those states without feeling a thing (although, being T4, they can talk and act (and neurally activate) indistinguishably from the way a real feeling person does).

      There are no T3 or T4 Zombies. Probably there can't be. But the hard problem is explaining how or why there can't be. DD has not explained it. He just says that the doings, and sayings, and their underlying neural doings, are all there is. Nothing left to explain...

      Delete
  17. I wanted to post another comment about this topic just to make sure I understand it and its relevance before the final exam.

    Heterophenomenology is taking 'first-person' subjective data ("raw data") and correlating it to 'third-person' empirical data such as neural activity. DD asserts that our doing-capacities and feeling-capacities are not separate and that there is no hard problem because everything is accounted for if you can reverse engineer at the T4 level (if you can build a device that can do everything a human can do for a lifetime and is also indistinguishable at the neural level). According to DD, an internal state (what is going on in our bodies and our brains when we do anything or feel anything) is no different from a felt state, and we can infer/explain this felt state if we have the internal state correlates (internal states are by definition felt but DD assumes that simply observing this connection is enough of an explanation for feeling). Of course, this completely disregards the OMP. It feels natural to infer that another person feels because we are ourselves feelers and lazy evolution has imbued us with extraordinary mind reading abilities, but the truth is that a felt/unfelt internal state (if that’s even possible) would look exactly the same to us. To be honest, heterophenomenology doesn’t seem to add much to cognitive science since it is not providing causal explanations. Is this why we would consider it to be homuncular? Because although it claims to render the hard problem obsolete (everything is explained by the easy problem) there are so many questions about feelings left unanswered (the explanatory debt permanently deferred)?

    How would Dennett react to Key’s fish pain paper? My assumption is that he would agree with Key, since no one denies that fish have different neurocytoarchitecture than humans.

    If you could Professor, could you also respond with some additional connections to other parts of the course I may have missed?

    ReplyDelete
    Replies
    1. Allie, Heterophenomenology is just a part of T4. T4 includes the capacity to do and say anything a human being can do and say, indistinguishably from a human, to a human. T4 is also internally indistinguishable, so, for example, when T4 says “I have a headache right now” it would also have the same brain activity pattern that humans have when they have, and say they have, a headache. That’s correlation, not causal explanation (as you note). It would be good for doing mind-reading, but the hard problem is not about mind-reading (OMP). It’s not about inferring whether or what feeling is being felt but about explaining how and why anything is being felt at all.

      So you are spot on with your diagnosis. Heterophenomenology is homuncular in that it fails to discharge the explanatory debt. DD dismisses this by declaring that there is nothing left to explain. He’s right that there’s nothing left to explain for the easy problem, (doing). But there’s everything left to explain for feeling. DD does not defer the debt, he denies it.

      DD is an extremely kind and humane person. I’m sure he would not deny that fish feel pain, just as people do. But he would say that the T4 explanation of fish pain would explain all there was to explain. It would be interesting to ask Dan why causing or not causing pain matters, if there is no more to feeling than doing-capacity. My guess is that he would give an adaptationist reply (about survival, success, reproduction), including social adaptation.

      What you missed was explaining how mental imagery is — and isn’t — homuncular, and especially how computationalism is homuncular. And whether T3 differs from T4 on the question of deferring the explanatory debt.

      Delete
    2. Thanks for the response!

      I understand that the hard problem isn't about mind-reading (OMP) but it is connected since they both deal with feeling. Because there are feeling creatures in the world (us and other creatures) we have the hard problem: how and why anything is being felt, and the other-minds problem: when and what a feeler feels. If one were to solve the hard problem (assuming the solution was universal across feeling things), I imagine that the OMP would also be solved since we could causally associate physical and felt states (so we could actually look at someone's brain and be certain that this is what they're feeling). Of course we're a long way off from this (if it can even be solved at all).

      As for the homuncularity of mental imagery and computationalism I'll give a short summary of my thoughts. Mental imagery is homuncular because it doesn't explain cognition at all. It claims that cognition is mental images (you can name your third grade teacher by picturing her likeness) but there are so many questions that remain unanswered: where do the images come from? How are the images associated with the name of the teacher? The list goes on. The functions of cognition are not explained, mental imagery just defers the explanatory debt. I suppose you could say that the sensory input of you seeing your teacher initially isn't homuncular (because it's analog) but that doesn't really help mental imagery.

      Computationalism is shown to be homuncular by the CRA. If Searle is the system and he does not understand Chinese, that means that there is a doing-capacity that cannot be explained by computation alone: namely, understanding. Any approach to the easy problem is homuncular if it fails to causally explain any of our doing capacities and defers that explanation to something else.

      As for T3 vs T4, the main difference between the two is that T4 is neurologically identical to humans whereas T3 is not necessarily so (both have behavioural capacities identical to humans for a lifetime). I don't think there is really a difference in the explanatory debt, because if we're able to build a T3 we've clearly found at least one causal explanation of cognition for the easy problem, and T4 doesn't add anything (or maybe it does; I'm not sure). For the hard problem (which isn't the main question of cog sci) we haven't been able to causally explain feeling with the brain, so I think T3 and T4 are equally unhelpful.

      Delete
  18. I've been reading the replies, but I'm still unsure of whether we're certain the hard problem is harder than the easy problem.

    A main point that's been discussed is that with Heterophenomenology and a third-person perspective, we can never understand the causal system that makes us feel. But, are we sure that a third-person perspective will yield the causal system that’s responsible for all tasks of "doing", such as other sub-conscious tasks like memory-recall?

    If it won’t yield this subset of “doing” tasks, it means that engineering a robot to pass T3 is impossible because it would be missing these performance capacities. It also means that the hard problem is no harder than the easy problem, because we are equally unable to reverse-engineer the causal mechanisms of “doing” as we are the mechanisms of “feeling”.

    ReplyDelete
    Replies
    1. I found Prof H's response to Allie's comment above helpful in understanding this. Heterophenomenology does include subconscious tasks like memory recall because they are (presumably) "brain things" that could be accessible to our 3rd person perspective through means like brain studies, for example. Of course, the main problem with DD's heterophenomenology is that it denies the existence of the hard problem.

      Delete
    2. I think the main point there is that when we reverse-engineer a causal system that is capable of "doing" everything we can do, we'll know that it can do everything we can do since everything we can do is observable in some way, be it through reports, actions, or brain-imagery. So heterophenomenology does encompass everything we would include in a T3 (or even T4) robot, because everything we do (and everything we use to solve the other-minds problem) is observable. The problem arises from the fact that for us, it feels like something to do things and feel feelings. In constructing the T3 or T4 robot that does everything we can do, we don't answer the how and why of feeling. So what heterophenomenology does from what I understand is simply claim that the how and why of feeling just come with the territory of doing and doesn't attempt to explain it at all. It seems to be enough that the robot reports feeling and has the correct neural activity and action patterns to explain everything according to heterophenomenology, which doesn't explain anything related to the hard problem.

      Delete
  19. "Of course it still seems to many people that heterophenomenology must be leaving something out. That’s the ubiquitous Zombic Hunch.”
    Besides being riddled with weasel-words for “feeling”, Dennett claims in this article that all there is to solving the hard problem is asking people about their beliefs. In doing this, he seems to gloss over the “why” of the hard problem: why does it feel like something to think? Dennett argues that the “Zombic Hunch” is not a strong enough counter to heterophenomenology because it is not scientific to reject something just because it feels wrong. This is what the other comments seem to be referring to when they say heterophenomenology is T4: a T4 robot would solve the easy problem, and Dennett claims that there is nothing else to solve. But as has been pointed out, a T4 robot would not give us any insight into “why” we feel, and furthermore it may well be that we will never know the answer to that question.

    ReplyDelete
  20. “So the question posed by the heterophenomenologist is:
    Why do people think their visual fields are detailed all the way out?
    not this question:
    How come, since people’s visual fields are detailed all the way out, they can’t identify things parafoveally?”

    Dennett is of the camp that conflates feeling and doing, rendering the hard problem as something the easy problem accounts for. In the quote above, he explains the case of the false positive, when our sensory experience doesn’t match reality. I think that the question the heterophenomenologist rephrases (the second one) isn’t the right question to begin with, so the false positive/negative building blocks of his neutral data argument seem to miss the mark. The question that Dennett should be rephrasing is, “How and why do people feel like they’re seeing their full visual field?”, rather than how come they can’t identify things in their periphery. This corrected question ends up resembling the heterophenomenologist’s own rephrasing.

    The hard problem is figuring out how and why we feel. The initial question of “How come, since people’s visual fields are detailed all the way out, they can’t identify things parafoveally?” wouldn’t align with how Descartes’s cogito relates to the hard problem. I am certain that I think (replace with: feel) and this is irrefutable, and the hard problem is explaining the causal mechanism for this feeling. It does not matter whether, objectively, I am recognizing things in my periphery; I feel like I’m seeing whatever it is that I am seeing, regardless of reality. That is a felt state.

    ReplyDelete
  21. Heterophenomenology has the same issue as every other explanation/method that we have looked at, namely the problem of ‘feeling’.
    Dennett does at least offer up an attempt at explaining the problem of ‘feeling’, but does not succeed in this attempt as it is too much based in the ideas of behaviourism, which he connects together using the weasel word ‘belief’.

    It also feels as though Dennett’s explanation loses itself between the article’s opening regarding his reinterpretation of Turing’s ideas into the question he works from and the actual conclusion he draws.

    ReplyDelete

Opening Overview Video of Categorization, Communication and Consciousness
