Blog Archive

Monday, September 2, 2019

10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem


The mind/body problem is the feeling/function problem (Harnad 2001). The only way to "solve" it is to provide a causal/functional explanation of how and why we feel...



Click here to view --> HARNAD VIDEO
Note: Use Safari or Firefox to view; does not work on Chrome





67 comments:

  1. I cannot imagine an answer that would satisfy Harnad's "why/how do we feel" question. As he mentions, when we give feelings causal roles of their own (as in how one feeling causes another), feelings seem needless; but without them in the equation, they remain irrelevant, or unreachable as a product. It seems impossible to cross the gap from the merely correlational nature of "functional stories" to a causal one. The mountain is too high.

    Or maybe I've got Harnad wrong. Maybe Harnad never wanted us to answer, or even try to answer, the question in the first place. Maybe Harnad's whole point, in this paper and in his career, is this: "stop claiming to answer questions you're not answering/you cannot answer". While some have sought to "leap" over this barrier, Harnad's mission has been to show them that they are only sidestepping a Sisyphean mountain, without ever climbing it.

    Replies
    1. Yep, the second one was what I assumed (which is sort of what we've been hearing all throughout the course): since we're not even close to having a clue about how to figure out the hard problem, we shouldn't be including it in our mission of reverse-engineering cognition.

    2. Yes. When you don't have an explanation, it's best not to claim to have one. But here's the part to ponder: this biological trait -- feeling -- which we cannot seem to explain the way we can explain all other biological traits, because it seems to be causally superfluous -- does that mean that feeling is not important? And if it is important, how? And why?

    3. Haha, professor your question feels like a trap!
      I feel like feeling is important, because I'm currently feeling it. But attempting to explain how and why feeling-capacity is important will just lead to the question of how and why we feel at all...

    4. It's not a trap. Try it this way: Why is anything "important"?

    5. I think feelings are really, really important for how we treat other objects and beings in the world. I am basing my answer on Professor Harnad's response to my skywriting (see below) as well as on scanning ahead to the readings for next week. Over the summer I picked a lot of daisies and some other wildflowers near our house to make arrangements. My behaviour would have been completely changed if I had thought that the plants had feelings and would feel pain when I picked the flowers off of them. Can I explain how or why I have feelings? No! But that doesn't mean that they aren't important, or that they don't play a huge role in my everyday decisions. I think that feeling is important because it is through my own feelings that I have empathy and infer those feelings in others.

    6. Wow, I wish humans were that empathic. The fact that only 2% of Canadians are vegan tells me most humans actually don't care if other beings suffer. Very few people are actually against animal cruelty, or cruelty in general.

      "Would you kick Ting even if she was a T3 robot?" "Do you still eat meat even though it causes the suffering of trillions of animals and there is no need for you to eat it?"

      You do, so you would.

    7. Harnad, you confuse me: how are we supposed to answer why anything is important if this isn’t a philosophy class?

      TL;DR: Anything is important because we feel like it is so.

      ***
      For thousands of years, things were important because God said so. When God died, the creator of meaning’s seat became vacant. Science rushed to the throne (of the one they had killed).

      It was a modern Midas. Greedy, Science sought to explain the "how" and "why" of everything it touched. Causal explanation, or knowledge, was the modern gold.

      However, like Midas, this pursuit didn't suffice: for, after having tasted the fruit of knowledge, the Garden of Eden became barren. After all was turned to gold and explained, there was no right thing left to do. Like gold itself, knowledge is the kind of thing that is useless unless it is used. If we seek meaning in causal explanations, then we are left in an unending goose chase: for if things were important thanks to what caused them, then because everything has a cause, nothing would be left that could be important.

      One thing remained untouched by Science because it was left unexplained: feeling. Dualism taught us this one thing: in the face of absolute skepticism, doubtlessly, I feel. This is how feeling gave meaning to science, and in some ways, our lives. Because feelings told us what the right thing to do was, with every kind of thing.

    8. Stephanie, good observations, but a bit too quick! Of course what I do is affected by what I feel: the reason I don’t pick up a burning coal is because it hurts me, not because it hurts the coal!

      So when I ask “why is anything ‘important’?” the answer can’t be that it’s because my pain hurts me (although that’s certainly an example of something that is important!).

      There is something, though, in the Dennett-like point you make: that it might also have something to do with my feelings about what others feel.

      DD thought that the purpose of consciousness (= feeling) was to help us read others’ minds (for our own sakes): “It (the chess-playing computer) thinks it should get its queen out early! That helps me plan my strategy for beating it.”

      (Problem is that the computer doesn’t think; and that some of our most intense feelings are not social attributions at all, for example, a tooth-ache or a migraine. And they have nothing to do with mind-reading.)

      Your flower example is similar: “If I felt that flowers feel pain, I would not pick them.”

      But there is also more to your example than DD’s…

      Katherine, powerful point, but, I hope, too pessimistic.

      "Very few people are actually against animal cruelty, or cruelty in general."

      You could say that the percentage of people who do what they should be doing against climate change — or even cigarette-smoking — for their own sakes is not very big either. So it can’t be just that they don’t care…

      Guidote: “feelings told us what the right thing to do was, with every kind of thing”

      Really? Like what the feelings of 72M Americans told them was the right thing to vote for?

      “how are we supposed to answer why anything is important if this isn’t a philosophy class?”

      Try again (but try not to philosophize!)

    9. @Professor, I think what Julian meant when he said "feelings told us what the right thing to do was, with every kind of thing" is that people will do things based on what their feelings tell them is the right thing to do. Because even if something is deemed morally incorrect, isn't it still the case that the cognizer's feelings prior to taking that decision were what drove him/her to do what they did?

      One example that comes to mind is the "Trolley Problem", where there is no clear decision on where to steer the train (variables such as intention, cultural background and religion lead to vastly different answers to this question). In this situation, no matter what you do, someone is going to die and you're in part responsible for it. Therefore, when you take the decision to either pull the lever or not do anything, you're doing it based on what you are feeling, and there is no clear "good" outcome or decision for you to make. Regardless of whether you pull the lever or not, the decision you took was not necessarily based on what was "right", but on what your feelings deemed was the right thing to do.

    10. I want to give a shot at the professor’s question about why things are important and how this might be relevant in explaining feeling as a highly adaptive trait.

      As was explained by Stephanie, concern (or dis-concern) for an object or person is strongly correlated with feeling. As trivial as it is, if I don’t “feel” for an object, I am indifferent to its use and misuse - it is unimportant. Similarly, a fellow being is important to me if I have a feeling towards him, and equivalently, if I feel for him, he will be important to me.
      In other words, something or someone becomes important, if and only if I have a feeling for that something or someone.

      This feels like a pretty platonic statement, but if you take a second to think about it, what happens when on one hand you have language to learn through instruction, while on the other hand you can create concern or dis-concern for anything or anyone?

      Evolutionarily speaking, the combination of shared symbol manipulation and feeling is a nuclear bomb, allowing us to designate important and unimportant things across members of the same species at an unprecedented pace in evolution. Where our closest relatives seem to show concern towards another being only when it’s their kin (and, debatably, members of their in-group), humans can spread concern or dis-concern about something or someone to anyone who understands our language, and theoretically across our entire species.

      In this day and age it allows us to go from a globalised, highly fluid society one week to one that confines itself across the globe to stop the spread of a virus the next - this definitely feels like a hack in the slow process that is evolution. Could a population of T3-and-above machines do this? Would they have the initial impetus we call feeling that would catalyse the chain reaction of concern and cooperation that has made us such an adaptable species so far?

      Does this resonate with your question professor or did I stray?

    11. Robert, yes, almost by definition: Everything we do deliberately, voluntarily, we do because we feel like it. In other words, no one actually pushed us, and we weren’t asleep. It feels like something, to do something deliberately.

      But what is the point? We no more know how and why it feels like something to do something deliberately than we know how or why it feels like something to see green. Both are just examples of the hard problem.

      The trolley problem does not seem relevant either. Whatever we decide to do in the trolley problem (even if it’s to toss a coin), we do voluntarily, because that’s what we feel like doing.

      Both the question of free will and the problems of ethics are interesting, but they have no bearing one way or the other on the hard problem.

      Matthew, at one point you came close, but then you swerved away. (Actually, I’ve already hinted at the answer aloud several times in the course and in the skywriting Replies, but let me try to elicit from others a bit longer.)

      What we want to avoid is saying anything like “the adaptive advantage of pain is that it makes you want to take your hand out of the fire.”

      That’s not helpful, because it’s not clear why evolution would design us so that (1) tissue damage makes us (2) feel pain, which makes us (3) feel like pulling our hands out of the fire, so we (4) pull our hands out of the fire — instead of (1) tissue damage makes us (2) pull our hands out of the fire.

      Evolution is lazy, and that sounds like a lot of needless extra steps.
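
      To put the "needless extra steps" point in concrete form, here is a minimal toy sketch (purely illustrative; the function and variable names are invented): both causal chains below produce exactly the same observable doings, so the intermediate "pain" step does no detectable causal work.

          def zombie_withdraw(tissue_damage):
              # (1) tissue damage -> (2) withdraw
              return "withdraw hand" if tissue_damage else "stay put"

          def felt_withdraw(tissue_damage):
              # (1) tissue damage -> (2) pain -> (3) urge to withdraw -> (4) withdraw
              pain = tissue_damage               # the felt state
              urge_to_withdraw = pain            # the felt motivation to act
              return "withdraw hand" if urge_to_withdraw else "stay put"

          # Every observable doing is identical with or without the "pain" step.
          assert all(zombie_withdraw(d) == felt_withdraw(d) for d in (True, False))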

      Of course cognition is not just reflexes, but the same thing applies to learning, and even to language. Zombie robots can certainly learn; and there’s no reason they can’t learn to talk too. (We are not talking about T3 here: just about learning, talking, toy robots.)

      Stephanie’s conjecture didn’t work, because it was not clear what would make not wanting to pick flowers (thinking they might feel) any different from not picking up a hot coal (thinking it might hurt). But there is something different about the case of the flowers, and it is reminiscent of DD’s points about thinking that the chess program thinks it should get its queen out early; and about heterophenomenology.

      But what is it? And how and why would lazy evolution get into the act?

    12. I agree with Matthew that the combination of language and feeling is a nuclear bomb. However, I think restricting this to designating things worthy of concern or not is too limiting. Similarly to why language evolved, it could be the case that feelings evolved to allow us to categorize things based on how they make us feel and to transmit these categories. “Ouch” encompasses a lot of different things, and not all of them are tissue damage, but they are all in the category that tells us to “not do” and/or “stop doing” what we are experiencing. This would also be the case for what makes us happy, sad, or angry. An infinite number of experiences can fit into these emotional categories, and we can create new categories of feelings through language, which is significantly simpler than having to learn everything from trial and error.

      Feelings are involved in everything we think and do, and they are integral to our decisions about what we do, so clearly they are important (and we feel that it is so). Feelings allow us to interact with others on an emotional level. By seeing or hearing someone describe how an experience made them feel, we can also learn and infer how it would make us feel and categorize the experience appropriately (and this also seems to connect with our capacity for empathy).

    13. Aylish, yes, feelings are important to us, and they certainly feel causal. But the hard problem is explaining how and why they are causal...

    14. Hi professor,

      I’d like to take a stab at your question about the progression from tissue damage to pain. You posed the question of why evolution would design a four step process (tissue damage > pain > feeling that we should pull our hands out of the fire > pulling our hands out) rather than a simpler two step process (tissue damage > pull hands out of the fire). Evolution is lazy, why would it go with the lengthy option number one?

      What if feelings simplify, rather than complicate, by creating one general template: [insert any situation that would cause harm] > pain > [insert the most logical behaviour to escape the aforementioned situation]? Perhaps feelings make the process of determining how to respond more efficient and streamlined. The tissue damage > pull hands out of fire equation would only apply to one single situation, potentially making things even more complicated — we’d need a different equation for each situation. With feelings, the initial situation and the resulting response can be anything.

      Maybe I’ve taken the example you gave too literally and homed in on a degree of specificity that you hadn’t intended. Either way, the overall thought I’ve had is that maybe feelings make things simpler in the sense that they’re a trigger which leads to a certain type of response. Pain leads to any behaviour that terminates or creates distance from the initial situation that caused the pain. Rather than knowing what to do in each exact situation (that would be a lot to handle!), all we need to know is what to do in response to a limited set of emotions. Happiness > any behaviour that allows for the continuation of the situation that caused the happiness.

      Perhaps feelings are a mechanism that allow us to respond to a multitude of specific situations, with a limited and general set of responses that can be tailored to each situation. Maybe this is why feelings feel causal. They could be causal in the sense that they narrow down our range of possible responses.
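
      As one way to picture this (a toy sketch of my own, not anything from the readings): a handful of feeling-categories can stand in for an open-ended list of situation-specific rules.

          # Situation-specific rules: one entry needed per situation, without end.
          specific_rules = {
              "hand in fire": "pull hand out",
              "bee sting": "move away",
              "sunny picnic": "stay where you are",
          }

          # Feeling-mediated rules: many situations map onto a few felt categories,
          # and each category maps onto one general type of response.
          situation_to_feeling = {
              "hand in fire": "pain",
              "bee sting": "pain",
              "sunny picnic": "happiness",
          }
          feeling_to_response = {
              "pain": "stop or escape whatever is causing this",
              "happiness": "keep doing whatever is causing this",
          }

          def respond(situation):
              return feeling_to_response[situation_to_feeling[situation]]

          print(respond("bee sting"))     # stop or escape whatever is causing this
          print(respond("sunny picnic"))  # keep doing whatever is causing this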

  2. “I don't care! I don't care if every nook and cranny, every last JND of my feeling life is correlated with and hence detectable and predictable from something you can pick up on your polygraph screen or can infer from my behavior. That's not the question! The question is: How/why does anyone/anything feel at all?”

    I could practically hear prof Harnad say this. I feel like if I can break it down to all the tiniest neural details of what’s going on, then I have in turn explained how and why we can do what we can do (the easy problem). The reason we can remember is due to changes in plasticity and hippocampal cells and excitability and so on and so forth. I realized, actually about a month ago as I was studying for one of my neuroscience midterms, that this is how I’ve been getting taught cognition for the past three years. When I saw on the slides that “cognition arises from the brain”, my first thought was that prof Harnad would not be satisfied with this answer! I guess because I’ve been taught to think “well, it’s because of the combination of all the chemistry going on in our brain; that’s how we do what we do”, I always struggle to understand what explanation your rebuttal is demanding. BUT this has all been talking about the easy problem.
    When it comes to neuroscience and the hard problem, my intuition would be to say that, in the same way that it’s the combination of our neurochemistry that gives rise to cognitive functions and allows us to do what we can do, the combination of these different cognitive functions is what’s making us feel, but I wouldn’t have a way to test that. In a neuroethics class last year we discussed how to decide when someone is no longer conscious/feeling, and how a decline in cognitive functioning could eventually be interpreted as a lack of consciousness. Although now some studies are suggesting that some people in vegetative states are still conscious, because they can communicate by thinking of certain objects to answer yes/no questions (think of a house for yes, a face for no). All this to say, I don’t think we should bother with the hard problem just yet. I don’t think we’re ready for it until we have a way to properly test it.

    Replies
    1. I'd agree. But what about the other-minds problem? Should we bother with that?

    2. In response to Lyla's claim that "we shouldn't bother with the hard problem yet", and your question about bothering with the other-minds problem, I think that the other-minds problem is far more pressing. As you've said many times, just because I don't know for sure whether any other person feels, I don't assume that they don't feel. Further still, knowing how and why other people feel never comes into the question of considering their feelings; I assume other humans feel, and I couldn't care less about how they do it. While the hard problem is interesting, it certainly doesn't influence my behaviour, except perhaps in specific ethical cases - such as those Lyla mentioned above.

      On the other hand, ignoring the problem of other minds has a long history of leading to unethical and cruel behaviour, including the mistreatment of BIPOC individuals and communities by white settlers and institutions, and the mistreatment of babies based on the belief that they can't feel pain. These problems continue today in our treatment of animals, which we often treat without concern for their feeling or consciousness. Work on the problem of other minds, and finding ways to indicate that animals do feel is in some ways more worth bothering with - from a pragmatic, ethical perspective - than working on the hard problem and examining how and why we (and other feeling beings) feel.

  3. TURING: "I don't think we can make any functional inroads on feelings, so let's forget about them and focus on performance capacity, trusting that, if they do have any functional role, feelings will kick in at some point in the performance capacity hierarchy, and if they don't they won't, but we can't hope to be any the wiser either way" (from reading 10b).

    The most important point that I took away from this reading was that the question we are really interested in answering – the hard problem of why and how do we feel – was not answered nor really dealt with in the Dennett paper. I think that heterophenomenology answers different questions, sort of like when we were discussing Fodor. Maybe it can just answer the what and where type questions, like where does the brain light up when you feel a certain way? But as we also discussed previously, these correlations aren’t causal explanations! So heterophenomenology is not going to lead to progress in answering the hard problem.

    I chose this quote because I think it really clarifies why we can’t approach the hard problem the same way that we have been approaching the easy problem in the rest of this class. We can reverse engineer the heart because it does something very clear – it pumps blood. The brain was more difficult because it has a larger scope of “doing”, nevertheless it still “does” so those are abilities that we can reverse engineer and answer how, and to some extent look to evolution and answer why. The quote explains that if we can’t assign a function, a “doing” that is only possible with feelings, then how could we hope to reverse engineer them? Introspection doesn’t offer an answer to this question and neither does the 3rd person science of Dennett.

    Replies
    1. Dan Dennett's heterophenomenology is human heterophenomenology. In principle, it might even give a close enough approximation to the question of whether this patient in a chronic vegetative state can still feel; or whether this human foetus already does. In other words, an approximate solution to the (human) other-minds problem. That's why we set aside the other-minds problem for other human minds at the beginning of the course, rather than just assuming that everything other than maths and Descartes' Cogito (Sentio) is not only uncertain but unknowable.

      What about nonhuman minds? The feelings of nonhuman organisms? Are we right to invoke the uncertainty there that we (rightly) set aside in the case of other humans? We would not kick Ting, who is a robot. What about Ting's pet pig?

    2. I recently wrote a paper on the other-minds problem as it relates to other organisms. Although I am no expert in philosophy and may be wrong in some of the following statements, I wish to share what I learned. In particular, I was compelled by an argument put forth by the philosopher Norman Malcolm. In his work “Thoughtless Brutes” he clarifies that Descartes believed that higher-level animals have sensations only in a mechanical, physiological sense, but not in the human sense. He believed this because, according to him, animals do not think with propositional content. However, Descartes also conceded that he could never know for sure if this is true. Malcolm eventually goes on to describe how our insistence on believing that propositional content is required for consciousness blocks a philosophical understanding of a continuity of consciousness between humans and other animals. All of this is to say that we have so far discussed consciousness and feeling as an all-or-nothing principle. Clearly, from the way we use language to ascribe thinking and feeling to our pets (“My dog Spots thinks the cat ran up that tree”), we don’t believe other animals are thoughtless brutes. However, we do stop short of saying or thinking “The thought ‘the cat is up the tree’ occurred to my dog Spots”. So, thinking and having thoughts can be conceptualized as different processes.

      Of course, this does not provide a way to solve the other minds problem in humans or animals, but it does suggest, at least for some higher level animals, that we can set aside uncertainty in their ability to have feelings or to think.

    3. Both the hard problem and the other-minds problem are about feeling.

      One feeling can stand in for them all: Can organisms other than humans feel "ouch"?

      Answer: Yes, all mammals and birds, and probably all vertebrates can, and probably most invertebrates too. Uncertainty about feeling only becomes nontrivial with organisms lacking any nociceptive neural tissue, like plants and microbes (e.g., rhododendrons and rhizobia). The uncertainty about non-feeling is trivial with rocks and rockets. But with robots, it becomes less and less trivial as they approach T3.

  4. IMPORTANT Are the videos working for you? For example, mine, above, and Dan Dennett's, in 10a? Ting wrote me that it didn't work, and I tried it, and at first it didn't work, but then when I tried it repeatedly, in different browsers, it did work, but I'm not sure why. Please let me know. The above one is a pretty full summary of Week 10 -- probably better than the one I will do on Wednesday on zoom (because I've done it too often).

    Please post here to let me know whether it works for you.

    Replies
    1. Hi, neither of them works for me (I'm using the most updated version of Google Chrome).

    2. Esther, try copying the URL into another browser rather than doing it within the blog. Let me know if it works. (It failed for me, within Chrome, in Blogspot, but worked fine when I cut-pasted it into Safari and Firefox.)

    3. yes, it works in safari! but not in chrome

    4. I have the same findings as Esther!

    5. They both work for me on firefox! :D

  5. From what I understood, the A team (spearheaded by DD) does not think a hard problem exists at all: what we call "feeling" are just internal states as a result of doing-capacity.

    The B Team thinks that the hard problem is relevant, but uses the word "consciousness" instead of "feeling" (even though they're practically the same thing). The B Team also proposes the possibility of a zombie - an insentient T5 that can do everything we can do but completely lacks feeling. (I'm not entirely sure what the purpose of arguing about the existence of a zombie is though)

    The professor proposes that he is on a newly made C Team: while it recognizes that the hard problem exists, the C team also recognizes that no one is anywhere near answering it, and that the best methods we have for answering it are just those that will make progress on the easy problem. Like Julian and Lyla mention above, the C team is distinct from the B team, because the B team insists that there is an alternative "science of consciousness" to solve the hard problem, whereas the C team recognizes that maybe the hard problem will remain unsolvable forever.
    Based on the quote from Turing that Stephanie highlighted above, would it be a reasonable guess to say that maybe Turing then is also part of the C team?

    Replies
    1. All I can reply is that Turing thought that his method could solve (or at least test the solution to) the easy problem, but not the hard problem (because of the other-minds problem).

      The notion of a T5 Zombie is a philosopher's way of pointing to the hard problem. It's easy (and empty) to say "There cannot be Zombies." It's hard to explain how and why there cannot be Zombies. In fact, that is exactly the same thing as the Hard Problem.

      Think about it. In cogsci -- unlike in the philosophy of mind -- we are not concerned with metaphysics, with whether there is one kind of "stuff" in the world, or two: material and "mental."

      Cogsci is just concerned with explaining causally (i.e., reverse-engineering) how and why (some) organisms (sometimes) feel.

  6. I agree with the professor that the zombic hunch does not get us anywhere, since any one of us could be (or not be) a zombie. That lies in the other-minds problem. However, while the professor vehemently defends the importance of finding out how and why we have feelings, I cannot even begin to suggest where to start. My pygmy brain would suggest that we ask people to describe their feelings as best as they could: this is similar to the methodology used to understand UG, and this is also not far from Dennett's heterophenomenology. Describing, however, does not get you to feel what the person feels (and is also limited to linguistic capabilities, which animals lack).

    I'm at a loss. I don't know how to proceed: this seems like a question with no answer. A part of me is also angry that we have to be okay with the fact that this is unsolvable.

    Replies
    1. Wendy, you are not alone in that frustration. A lot of people want to be able to find an explanation for those hard problems. Some, and maybe most, people want to be able to find an explanation for everything.

      Who's to say that feelings should have causal explanations in the first place? Who's to say that everything should have a causal explanation too? Maybe we are looking for the wrong kind of explanations for the kinds of things we are examining. The wrong kind of explanation is a causal one. The kind of thing is feelings (and now meaning).

      It seems like if we could explain everything, we could control everything (effectively making us God). That limitlessness is the promise of Science (a Snake that tempts us with the poisonous fruit of knowledge). But it seems like feelings are limiting our causal-explanation capacities (is feeling another God?).

      But then, to switch gears, why is limitlessness desirable in the first place? Could it be because it... feels good? That's an explanation. Maybe not a causal one. But nonetheless, it seems it's the right kind of explanation.

      It may be time to stop treating feelings as the predicate (why do I feel), and maybe as the subject (feelings tell me why I do).

    2. For better or worse, cogsci is the science (sic) that has the task of explaining how and why organisms can do what they can do, and also how and why organisms feel.

      Wendy, your frustration at not hearing a solution to the hard problem is certainly justified. I'm sorry I can't help. I feel it too!

      Introspection about what's going on in our heads in order to find an explanation of how we think was already set aside in week 1 (except insofar as finding out how anything works requires some reflection).

      The rules of UG were not discovered by introspection -- but it's true that the feedback about whether a sentence was starred or unstarred came from introspection. That’s pretty unique. I can only think of a few similar examples:

      (1) When we are injured, the feeling of pain is (usually) a good indicator of the right and wrong thing to do

      (2) If we did not feel, intuitively, that a logical proof is true, we would not be able to reason at all. See Lewis Carroll’s "What the Tortoise Said to Achilles"

      (3) The mathematician and mathematical physicist Roger Penrose thought something similar was going on in the case of mathematicians’ intuitions, even with mathematical hunches they could not yet prove, but that they “felt” were true. (This example is a much more dubious one than Lewis Carroll’s. Bertrand Russell showed how wrong intuitions could be with his famous example of William James and laughing gas — though the story is probably older.)

      (4) And although moral judgments are notoriously fallible, I think most people do have a sense of what is right and wrong, even if they don’t heed it.

      Julian, I think you might be wandering into hermeneutics. Please come back to causal explanation, at least till the end of the course!

  7. I have to commend the Professor's persistence here. He did his very best to stick to his point and to illustrate what the key problems in cog sci are: how/why we feel things, what their causal mechanisms are, and why we feel them in the first place.

    The bedrock of good research is asking good questions, and the more we stray away from the fundamental questions about how/why, and their causal mechanisms, the more the potential to not properly answer those questions increases. Obviously, I do not want to make a slippery slope argument here, nor do I want to invalidate research attempting to discover the "what" functions of cognition; that data is still important to us. I think the issue with that data, as Dr. Harnad has pointed out, is that it can be misinterpreted, or that researchers will simply overstate the significance of their research. Staying in line with asking proper questions and reasoning is the best trajectory Cog. Sci. can take in order to give satisfactory answers to the Easy and Hard problems.

    Also, I wholeheartedly agree with Lyla that concerns surrounding the Hard Problem should be discarded for the moment, or kept in the drawer for future analysis. Even if it turns out that the Hard and Easy problem can't exist without each other, focusing on the Easy problem first would probably be a better idea because it is readily available to study -- even if it is a tall order to study.

    Replies
    1. @Robert Following up on: "Staying in line with asking proper questions and reasoning is the best trajectory Cog. Sci. can take in order to give satisfactory answers to Easy and Hard problems." I think it is worth mentioning that the reverse-engineering research trajectory is likely to give us an idea of how to build something that feels, although we won't be able to demonstrate that it feels, and won't understand, by virtue of having built it, why it feels like something to be a T3 (say)-passing robot. The line of questions and reasoning that will lead us to solve the hard problem is also unlikely to be the same line of questions and reasoning that will allow us to solve the hard problem.

    2. EDIT: The line of questions and reasoning that will lead us to solve the easy problem is also unlikely to be the same line of questions and reasoning that will allow us to solve the hard problem.

    3. Good grasp, both of you. (I hate it!)

  8. The assigned reading was pretty much a follow-up on Dennett's heterophenomenology failing to be the methodology that will eventually show what it sets out to show: the how and why of feeling. Although the question of the hard problem is interesting to ponder, I think it is clear that, at least within the current scientific paradigms, it is impossible to resolve the hard problem. And I honestly do not see the point in breaking our heads trying to solve it (at least for now) when it is clear that we cannot. We have a so-called "easy" problem that (1) isn't particularly easy and (2) we know is theoretically possible to solve because of (3) a clear research program elaborated by the TT. That's already enough of a challenge, isn't it?

    Speculative talk
    I think it is unclear whether the inexplicability of feeling will remain as such "forever". It seems to me that given the current body of knowledge that is contemporary science, more specifically, given the knowledge that we have about matter, the physical laws of nature, and our conceptions of what counts as animate and what doesn't, we cannot foreseeably solve the hard problem. Yet, there might be hope that after several iterations of paradigm shifts within the sciences (more particularly within physics), we come to connect the dots between fundamental properties of particles and emergent properties such as feeling (and by emergence I mean the physics concept of it whereby complex organizations of matter yield new properties that don't exist at simpler levels of organization, not the steam/train engine metaphor we commonly see in other psych courses). The other-minds problem would still be an obvious limitation. Even if we infer that at some level of organization, matter behaves in a way that suggests sentience, we have no way of proving that it feels like something to be that arrangement of atoms… But again, this is speculation: the only thing we know as of now is that there is a currently insurmountable barrier to even starting to think about the hard problem itself, and until we can figure out how to "leap over it", there is little hope for a truly causal explanation of how and why we feel.

    Replies
    1. Why do you think the future insights will come from physics rather than biology? Physics is all over the universe (most of it neither alive nor sentient). Biology only happens on earth (unless there are exobiologies elsewhere).

      And there has been one prominent false start -- the one that everyone cites as a reason to believe that the hard problem will also be solved: It used to be thought that there was a "hard problem" of life: That there is something special about living organisms, an élan vital that could not be explained by ordinary physics. But then it turned out there was no such thing (and never was, and never needed to be). The properties of life are all just biochemical and biophysical properties; so nothing "extra" needed to be explained.

      Could that turn out to be true for sentience (feeling) too?

    2. Unfortunately, the more I think about it, the less I am convinced that what you mentioned could turn out to be true for sentience as well. I don't think there is much doubt that the properties of sentience are also all just biochemical and biophysical properties. And perhaps biology would be more appropriate to explain that. But it actually doesn't matter which science would come up with those explanations. Unlike for life, there definitely is an "extra" that needs to be explained. Even if physics or biology comes up with a how, it would still be lacking a why. There would still be no clear reason why sentience exists at all. In other words, we don't know why it is there: the more we look, the more it seems that it is "causally superfluous" in the sense that most of what sentient organisms do (maybe everything?) could be done just as well without sentience.

      I guess I was thinking that a "how" explanation from physics would come with a "why", but I understand that it isn't so...

    3. I agree (and that's why I said that the analogy with life was a false start: élan vital is fiction, but feeling is real).

      (Next week will be about how even if the hard problem of explaining feeling is unsolvable, nothing in the universe matters but feeling.)

      This week, though, let's ponder what the four real forces of physics have to say about it...

    4. Solim, while I agree with the fundamental barriers that you mention with regards to solving the hard problem, I have a bit of an issue with this particular “take” on why we shouldn’t focus on the question. I feel like dismissal of the hard problem on the basis of already having enough to do with regards to the easy problem has been a recurring view expressed in this class, but I don’t find it very persuasive.
      For one, as far as we know, we are lightyears away from “solving” the easy problem, through reverse engineering or otherwise. It seems strange to me to limit what we are aiming to explain or replicate from the onset, when both goals are so distant that they serve more as guiding principles than tangible projects at this point. Perhaps I am underestimating the feasibility of truly creating TT-passing consciousness, but I feel skeptical that this goal is any more reachable than discovering insights into conscious experience through other methodologies.
      I also feel that marrying ourselves to TT as the only possible solution to these questions, as well as the justification for setting aside the hard problem, feels shortsighted. For instance, I read an article the other day on a theory that relates consciousness to the electromagnetic fields of our brains. I have no knowledge to be able to gauge the credibility of this theory, but it made me consider how we often don’t have a sense of how limited our imagination is until new discoveries come along. The way that findings about neural nets helped computer scientists overcome roadblocks in processing also comes to mind.
      If electromagnetic fields, or some biochemical element, or any other factor, turns out to be the key ingredient for consciousness, then we could work as hard as we want on creating a robot that simulates consciousness, but we will never achieve it. Would it not be more useful to try to get a sense of what the essential ingredients might be before we embark on this project?

  9. Throughout the paper, Harnad stresses that to solve the hard problem, we must find a causal/functional explanation of how and why we feel. It may be the case that Chalmers's zombie does not exist––because any zombie that had identical physical properties to me would also have feeling as I do––but again, the question arises, why and how does feeling emerge from these physical properties? While it has been noted in this thread that we are very far from solving the hard problem, and we don't really even have a lead on any causal explanation of how/why we feel, I think there are two broad possibilities:
    (1) Feeling arises from the organization of matter––the way in which different bits of matter are put together.
    (2) Feeling arises from the type of matter that is used.
    If (1) turns out to be true and not (2), then it seems that if we could build a T3 Turing-indistinguishable robot, no matter what we built it out of, it is possible that this robot would feel, assuming that the organization of matter that gives rise to our behaviour is the same as that which gives rise to the T3's behaviour.
    If (2) turns out to be true, then a T3 would not feel, unless we knew precisely the kind of stuff to make the robot out of to give rise to feeling. Of course, both (1) and (2) could turn out to be true. Or perhaps neither, in the unlikely event that the dualists were right! (There is also a third option, which is that the physical stuff is not important at all but rather the algorithm that is used, since computation is implementation-independent. But this has all but been refuted by Searle's periscope which shows that cognition =/= only computation.)

    In a response, Harnad asked whether the hard problem––in the absence of leads or potential explanations––is important. I think it is. To state the obvious, our felt states are everything to us. There wouldn't really be any point to anything without them. If the single most important and central element of our existence seems to escape scientific explanation––our best method of understanding the world––that shows, at the very least, that our worldview and our understanding of the world have some major blind spots. Recognizing the importance of the hard problem both humbles us, giving us a sense of our own (lack of) perspective, and motivates us to try to bridge the apparent disjunction between the phenomenon of feeling and all the rest of the phenomena in the universe.

    Replies
    1. Yes, feelings (hence the organisms that feel them) are the only things that matter.

      But (1) and (2) are not only non-explanatory ("How and why does the 'organization of matter' or the 'type of matter' cause feeling?") -- there's not even a way of testing (1) against (2) -- except for doings (which is just back to the "easy problem").

  10. Harnad's main issue with Dennett is the one mentioned a lot in the skywritings of 10a: Dennett focuses on correlations and on recording "raw data" relating the subjective experience of the subjects. Dennett's approach does not explain how/why we feel. He aims to predict functionality/causality without explaining how/why, instead dismissing the hard problem.

    kid-sib question for distinguishing feeling and belief:

    When you say "I believe that X", it feels like something to believe X. Having a belief is a feeling. The belief itself, however, is just a sentence to which we attribute a truth value. It is not different from any other proposition.

    Replies
    1. And nonhuman animals can also believe in the cat's being on the mat without formulating -- or being able to formulate -- a proposition. We're the compulsive sub-titlers of everything we are seeing, doing, believing and feeling as verbal narratives.

  11. At the start of yesterday’s class, we talked about the difference between the easy and hard problems of cognitive science. The easy problem (how and why we can do what we can do) is easy because the solution is to make observations of a subject’s behaviour. Turing hypothesized that once we have created a robot whose behaviour is indistinguishable from that of a human, we have reverse-engineered human cognition and solved the easy problem.

    On the other hand, the hard problem (how and why we feel at all) is hard because it is unsolvable! Time and time again, Professor Harnad emphasizes that "the name of the game is not just inferring and describing feelings [through heterophenomenology], but explaining them" (Harnad, 2000). As well, we can even rephrase the hard problem by asking how and why we are not zombies. Sure, we know that humans and other species with a nervous system feel, but we cannot be 100% sure until confuting the problem of other minds (which is undeniable, of course). Hence, Professor Harnad argues that if we can compare a human to a T5 robot that is identical to us "right down to the last molecule," it will be impossible for one of them to feel and the other to not. Even though we cannot prove it, they will both feel the same way.

    In Professor Harnad's conference recording, he called the hard problem "an explanatory gap." He gives a simple example. We can answer the easy problem when we take certain parts out of a machine and observe what it can no longer do. For example, a neurosurgeon performed a lobectomy to remove the majority of patient H.M.'s hippocampus and amygdala in an attempt to cure his epilepsy. After the surgery, he became unable to form new memories. All this makes intuitive sense until we ask: what would happen if we take out feelings? What can we not do without feelings?
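
    To make that lesion logic concrete, here is a small illustrative sketch (the module names are invented placeholders): removing a component makes some doing visibly disappear, but no listed component corresponds to feeling alone, which is exactly the puzzle.

        components = {
            "hippocampus-like module": "form new memories",
            "motor module": "move the hand",
            "speech module": "produce sentences",
        }

        def remaining_doings(removed_part):
            # Lesion logic: take a part out and see which doing disappears.
            return {doing for part, doing in components.items() if part != removed_part}

        print(remaining_doings("hippocampus-like module"))
        # New-memory formation drops out of the set of remaining doings.
        # But which entry could be removed to take out feeling, and only feeling?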

    In conclusion, we can all agree that feelings are a fundamental property of our behaviours and our motivations. Could it be that we can never solve the hard problem because it is impossible to remove/suppress these feelings? Even a T3 robot that has sensorimotor robotic capability indistinguishable from that of a human for a lifetime needs to ground words, which requires feeling. So, does this mean to pass T3, we need to solve the hard problem first? What is something that we can do that does not require feeling?

    Replies
    1. The easy problem is "easy" because doings are observable and there is no reason to doubt that we can reverse-engineer the capacities that cause them.

      Feelings are not observable (except by the feeler) but (as DD's heterophenomenology -- which is really just T4 -- points out) there is a strong correlation between what and when we feel (F) and what we say and do and our brain does (D).

      Let's say the F/D correlation is perfect. That moots the other-minds problem: but it still does not show how to solve the hard problem. We still don't know how or why the brain generates these correlates: an explanation would have to be causal, not just correlational.
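
      As a toy illustration of that gap (an invented example, with made-up stimuli and reports): even if F and D co-occur perfectly in every observation collected, the table of correlations still says nothing about how or why the brain generates F at all.

          import random

          def brain(stimulus):
              # Some unknown internal process produces both the doing D and the
              # reported feeling F; only the outputs are ever observed.
              d = "withdraws hand" if stimulus == "pinch" else "keeps reading"
              f = "reports ouch" if stimulus == "pinch" else "reports nothing"
              return d, f

          observations = [brain(random.choice(["pinch", "no pinch"])) for _ in range(1000)]

          # F and D are perfectly correlated in the data...
          assert all((d == "withdraws hand") == (f == "reports ouch") for d, f in observations)
          # ...but perfect prediction of F from D is still not a causal explanation of F.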

      Interesting to think about: why isn't consciousness (feeling) like quarks? We can't observe either of them (though in the case of feelings, as feelers, we all know they really happen).

      Unobservable "unbound" quarks are needed to explain observable atomic physical behavior. Without them, we could not explain the "doings" of protons. Could feelings turn out to be the same sort of thing: unobservable, but without them we could not explain the doings of organisms?

      The necessity of quarks to the (current) causal explanation of protons is complicated.

      Could feelings ("qualia") prove to be necessary to explain doings in some such way?

      (I think not, but that's just "Stevan Says...")

  12. Dennett's method, heterophenomenology, is a way to supposedly get at the core of our beliefs (feelings) by correlating them with the accompanying functional states. According to him, there is nothing more to answer on the question of feelings. There is no hard problem, he says, because we've already accounted for feelings if we solve T4. However, as many have pointed out (most tenaciously by Harnad), T4 will never explain why we have feelings and what the causal relationship is between those functional states and our felt mental states. To be sure, these things are correlated with one another, we're pretty certain where the functional correlates of certain feelings are in the brain, but the key is that it's not causal; it doesn't reduce our uncertainty at all about how we feel what we feel and how we would build human feelings into a machine.

    As Robert said, I think one of the most important things about this article, and maybe the whole course, is that we as (aspiring) cognitive scientists need to be able to recognize what questions we're answering with our research and not overreach with our conclusions. We need to think critically about what we can claim and recognize when our arguments are equivocations.

    Replies
    1. On the topic of feelings, I agree with a lot of what’s been said about the frustration with the hard problem. I have absolutely no idea how to go about solving it and I’m not even sure it’s possible (if there is a solution, perhaps it’s so far away that with our current level of scientific advancement we can’t even imagine it). Part of my frustration has to do with the connection between the easy and hard problems (which I brought up in class). If we solve the easy problem, if we build something that is completely and utterly indistinguishable from us in everything that it can do in the world, how is it possible that how and why brains are capable of producing feeling remains unexplained? I understand the notion of the explanatory gap— that we’ve used all of our causal forces on our doing capacities and so we don’t have any remaining degrees of freedom to explain feeling but… why is it this way? Doesn’t our segmentation of the easy and hard problems set us up so that only one can be solved (easy) while the other (hard) remains unexplainable? Where would more causal fuel come from to explain how and why the feeler’s brain produces feeling? Is there a 5th force akin to ‘will’ or ‘spirit’? Because of the set-up of the easy problem, feelings can be seen as ‘superfluous’, since it seems that we should be able to explain everything we do without them, but that doesn’t feel right because we have a sense that feelings play a causal role in our actions and our doings. It seems impossible that there could be a T5 that was a Zombie, but I can’t give a causal explanation as to why (and that particular thought experiment doesn’t get us any closer to an answer).

      (sorry this is so long but I had a lot of thoughts, and feelings!)

    2. I don't feel very optimistic about the HP either, but here are a couple of possibilities to think about, one out of T4, and one out of T3 (and language itself).

      (1) What if it did require some parts of T4 in order to successfully pass T3, and the T4 parts that were needed were precisely the T4 correlates of feeling? Would that be a hint?

      (2) And what about the OMP: Some mind-reading is obviously behavior-reading: When the lion roars and lumbers toward me, I know he's about to attack me. I'm not reading his mind, I'm reading his behavior. A zombie could read that behavior too.

      But when someone tells me "I'm feeling depressed," how could a T3 Zombie (if there could be a T3 Zombie) ground the category "depressed"? Dan Dennett is no doubt right about the usefulness of being able to mind-read a chess-player to figure out that they think they should get their queen out early. Supervised learning could distinguish a chess-player that does or does not think that. But how would it be adaptive for a Zombie to say "I'm feeling depressed." And based on what grounding? (See Wittgenstein on "private language".)

      (I think there are answers to this kind of question, but they do feel a bit strained...)
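
      Here is a toy sketch of the behaviour-reading half of that contrast (an invented example, only loosely modelled on DD's chess case): attributing "thinks it should get its queen out early" can be computed from observable moves alone, with no feeling anywhere in the loop; it is much less obvious what comparable observable statistic could ground "I'm feeling depressed".

          games = [
              {"player": "program A", "queen_first_moved_on_move": 4},
              {"player": "program B", "queen_first_moved_on_move": 18},
          ]

          def seems_to_want_queen_out_early(game, threshold=8):
              # The "mental" attribution is just a statistic over observable behaviour.
              return game["queen_first_moved_on_move"] <= threshold

          for game in games:
              print(game["player"], seems_to_want_queen_out_early(game))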

    3. In answer to (1), do you mean that we could look at exactly what was lacking or holding the T3 robot back from passing T3, and then infer that if we add certain T4 elements and our candidate can now pass T3, that those T4 elements are what can explain whatever parts of the "doing" that just T3 on its own couldn't manage? And if those T4 elements are brain correlates of feeling (like one might measure in heterophenomenology) then this would be a hint that maybe these degrees of freedom could be explained by feeling? I see what you mean that this is only a hint though, because it really just gives a starting point but not a fully laid out how or why. It doesn't seem like a very satisfying answer to "why" to just say well without feelings you can't do X. I would want to know why feelings are necessary to do X, not just that they are.

    4. That was my interpretation of (1), that there's a chance we need something more than just sensorimotor robotic capacities in order to do everything we can do for a lifetime. We can look to T4 to try and find what the 'missing pieces' are and what if it turns out that these pieces are the correlates of feeling? I think that this could be a hint, but I don't find it very satisfying at all. We're still stuck with only the functional correlates of feeling and it doesn't seem to get us closer to causality. The problem still remains about how are those physical systems giving rise to feeling at all?

    5. Stephanie, Allie, we agree. (Boring!)

  13. “Just pick any feeling at all: pinch/ouch. That's all you need. The full-blown problem is there, even with an organism that has that feeling and that feeling only in its repertoire. Explain the how/why of that. The rest is just a ritual dance skirting around the question.”

    I am motivated to answer that the causal mechanism when we are pricked by a rose bush or pinched by a younger sibling is the nervous system. Why do we say ouch, or any exclamation, to express what we felt? I guess because we imitate parents and others growing up, and our ancestors did it because calling out draws attention to yourself so you can get help if you are injured – therefore those humans who called out survived better. I am confused by our word “feeling”, though. Does feeling refer to what the nervous system does, the currents of electrical signals traveling around our corporeal tissues through the interorgan freeway of nerves? Or does feeling refer exclusively to the first-person account of the felt thing, undetectable except through verbal approximation?

    Harnad writes in response to Dennett, “If they are unfelt, they are not feelings, and hence not relevant to any of this! (Plenty of unfelt internal functions, from temperature regulation to perhaps semantic priming and blindsight: So what? They are not the problem! Feelings are!)”

    Does this mean that (1) if I am pricked by a rose bush or pinched by my brother and my nerves do all the things they do, conducting signals of the impact to my skin etc., but (2) I don’t notice it because of adrenaline (which can mute pain sensation) or because there is some nerve damage or I experience frequent pinches of this sort (and therefore I am habituated to it and do not register the sensation as significant anymore), then (3) there was nothing to be felt (because I didn't feel like I felt anything)? Or are we simply separating the categories of “felt” versus involuntary internal states?

    Replies
    1. No one sensible doubts that feelings are caused by the brain. The problem is explaining how and why. (The accent here was not on my response to what I do [say "ouch"] when pinched, but on what I feel.)

      Do you think, by the way, that nonsocial species -- who do not communicate with or seek help from others -- do not feel pain when injured?

      If an injury is not felt, it is still an injury, but it is not a feeling. (The hard problem is how and why injury -- or any state -- is sometimes felt.)

    2. So the problem is to explain how and why it feels like something to feel something? Not to explain how and why the nervous system does what it does, but how and why we can feel things that are happening. Are we trying to explain how and why a child feels like something when they have a stomach ache, even though they might not be able to say they feel an ache in the stomach area, or feel that it is a result of having consumed too much candy etc.? It's not a question of how and why someone's stomach aches - which can have many possible physiological explanations - but a question of how and why that person can feel something at all?

      Delete
    3. AlexST, the hard problem is to explain how and why it is that (some) organisms feel (i.e., they are sentient) rather than just able to do what needs to be done (move, process sensory input, learn, communicate -- all of which is beginning to be shown by robotic modelling to be doable without feeling).

      It's not about "why it feels like something to feel something" but about why it feels like something to do something.

      Explaining "how and why the nervous system does what it does" is the "easy" problem.

      Delete
  14. A recurring theme in Dennett’s paper that Harnad picks up on aptly is that there is a conflation between correlation and causation. In particular, heterophenomenology is the practice in which 1st-person verbal accounts of feeling are correlated with physiological measures that co-occur with them. As pointed out in the reading and in previous discussion points, it’s clear that the brain has a role in generating feeling. But the nature of that connection is never explored and gathering more correlations will never fill this gap. It will only make predictions of what the mind will do based on physiological measures more accurate and vice versa.
    Another theme that manifests in Dennett’s paper and that Harnad analyses is the conflation of explanation and prediction. I would argue that the two are definitely not mutually exclusive. If you understand why A causes B, it follows that you would be able to predict how a change in A will cause a consequent change in B. There is no such entailment relationship the other way around: predicting that a change in A produces a change in B will not explain how A is connected to B, no matter the degree of accuracy. One illustration that Harnad brings up that can be applied to this situation is his comparison between the feeling/doing problem and the Moon. We know it is impossible to have a clone of the Moon with no gravity, because the physical properties of the Moon necessarily bestow gravity upon it. Likewise, we can predict that a T3/T4/T5 that can pass the easy problem can also pass the hard problem, but we cannot confirm a mechanism that would allow such a thing to occur.

    ReplyDelete
  15. THE MIND/BODY PROBLEM IS THE FEELING/FUNCTION PROBLEM by Stevan Harnad

    In this paper, I realized that Professor Harnad is not exactly part of the B team. As he mentions, he considers himself to be neither part of the A team nor of the B team. Harnad believes that the “hard problem” exists, has not been solved, and is insoluble. Dennett defends the idea that “his” heterophenomenology gets rid of the hard problem by making the 1st-person data about our subjective experience a kind of data that does not escape good old 3rd-person objective science. Harnad is saying: wait a minute, Dan. While it is indeed true that we can work with 1st-person data using heterophenomenology, and 1st-person data is not inaccessible, there remains a hard problem, namely: how in the hell do you go from the biological stuff called the brain, firing action potentials, communicating through electrochemical synapses, and implementing complex computations, to feeling? Not the weaseled and entangled notion of “consciousness”, but simply feeling.

    I think Professor Harnad dislikes the word consciousness because the referent of the word is superfluous (or should I say the “referents”, as the notion seems to encapsulate numerous referents): an abstract notion of the simultaneous presence of various feelings and degrees of intensity of feeling. Why not just use feeling and end the confusion? I agree. The hard problem can now be stated appropriately: how and why do we feel?

    Harnad insists that Dennett should acknowledge the question. It has nothing to do with heterophenomenology. Dennett! Please answer me, begs Harnad. I need a functional explanation of feeling, and I will recognize one when I see it, says he. I want to know how squiggles this and that, plus grounding this and that, plus categorization, all of this and more complexity and more synapses and more electricity and more connections and the world, would yield feeling. I don’t want a correlation, says Harnad, I want an explanation, just as I would recognize an explanation if given a mechanism for our doing capacities. Since Harnad recognizes that there is a hard problem and Dennett does not, he does not consider himself part of the A team (Dennett’s team). Nor is he a member of the B team, as he believes in the Turing Test, especially T4 (molecular/functional identity), and rejects zombies.

    Here is the gist of it:

    “But the name of the game is not just inferring and describing feelings, but explaining them. And explaining them is not merely predicting under what conditions they will occur, nor even predicting what they will feel like. That's all easy stuff (by which I just mean normal science). The hard part is (and always has been) this: Suppose you have a successful causal mechanism (be it molecular, synthetic, or computational -- let's not quibble, it doesn't matter) for predicting feelings, including all the functional conditions and states in which they occur, right down to the last reportable JND, in every conceivable situation.

    You will still not be able to give even a hint of a hint as to how it is that that mechanism feels at all (you'll just have the molecular or computational mechanism that correlates with the feelings), nor of why (in, say a Darwinian, or some other functional sense) it feels.”

    But wait a minute, why not even a hint? That is quite pessimistic, isn’t it? Is it that it feels like it won’t give us a hint? This is where Dennett comes in and says “I feel it, but I don’t credit it”, and that, I think, is important: what appears impossible to explain, or appears to be insoluble, may just be appearances (like “life”, which once felt impossible to explain). Let’s forget about the “élan vital” and remember the feeling that it was impossible to explain life, and that it truly felt like there was a gap. I don’t yet see the explanatory gap for feeling; I only feel it, so I don’t credit it.

    ReplyDelete
    Replies
    1. The absence of a causal explanation of feeling is not just a feeling of having or not having an explanation: we really don't have a causal explanation.

      Delete
  16. “But I can tell you that until you explain why and how a pinch hurts, the game's not won. (That it does hurt, and that that hurting correlates perfectly with some functional story, is not the how/why explanation we were seeking...)”

    For me, this sums up the main issue with Dennett’s paper. Harnad points out throughout his response to Dennett that we are not actually explaining the how/why question of feeling, because Dennett disregards this problem altogether. Just because the question is abstract doesn’t mean it isn’t worth studying. We know that we have feelings, and the correlation-causation analysis that Dennett proposes seems to ignore this fact. Having a detailed description of all the mechanisms corresponding to people’s reports about their own feelings does nothing to answer the question of why they are feeling in the first place. Harnad rightly points out that Dennett’s paper has not actually done what he has claimed: he has not answered the hard problem, because he is answering the wrong question.

    ReplyDelete
  17. The hard problem is about “why it feels like something to do something”. Maybe it feels like something to do something, because feeling is a reinforcer and it’s what allows us to learn and evolve at a more rapid pace?

    I was thinking about the Chinese room argument and what it means to have the feeling of understanding: how and why it developed, and where it might have begun. What immediately comes to mind is the experience of learning to speak as a child. I began wondering what it would feel like for someone who has never understood a language to finally understand one. At first, it might be driven by reinforcement from those around them (observing others’ reactions to the words that I, as a young child, speak, and using that as a gauge to test the efficacy of my speaking). Perhaps once I gained the affirmation of others that I had learned to communicate effectively, I would then be able to observe my own internal state and know what it feels like when I understand a language. From that point on, I wouldn’t rely so heavily on the responses of others, because I would know what it feels like to understand.

    That feeling of understanding might motivate more complex language development as well as the desire to learn a new language, both of which could aid evolution.

    ReplyDelete
  18. Question about the explanatory gap: is the gap (1) the idea that our causal explanations cannot explain feelings because they appear to abide by different mechanisms (i.e. non-physical ones), or (2) the idea that because we cannot explain feelings yet, we cannot know whether our causal explanations will one day solve the hard problem?

    The first would make the stronger claim that we could never solve the hard problem; the second makes the weaker claim that we don’t know yet whether we can solve it (and should not claim to have solved it just yet).

    Or, is there something I’m missing or misunderstanding?

    ReplyDelete
    Replies
    1. I think right now both are equally valid formulations, and which one you choose is up to you. Currently, we don't know that there isn't a causal explanation for our feelings, so I feel like the second formulation is valid. It accounts for the fact that we might one day discover the causal explanations for feelings and simply states that we haven't gotten there yet. I think it's a more hopeful way of looking at it. On the other hand, I feel like the first formulation is more likely to be the case, especially with everything we've seen so far in class.

      Delete
    2. Hi Julian, it's closer to (2). And the problem with (1) is not that feelings "abide by different mechanisms (i.e. non-physical ones)" but that we have no idea what the mechanism might be: feeling looks causally superfluous.

      Delete
  19. 'Feeling' is the focal point here: the hard question that no one has managed to explain, no matter how much they have claimed to have done so.
    This has been expanded into the idea of 'feeling what others feel', which holds a lot of common ground with the frequent arguments for and against true altruism, and raises the question of whether there might be any helpful clues in that discussion to be considered here.

    ReplyDelete
