Blog Archive

Monday, September 2, 2019

1a. What is Computation?



Optional Reading:
Pylyshyn, Z. W. (1989). Computation in cognitive science. In M. I. Posner (Ed.), Foundations of Cognitive Science. MIT Press.
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is -- or at least what its essential character is -- as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions. 


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar; you just have to master the ideas, which are clear and simple.)


Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(1), 111-132.



Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

57 comments:

  1. Maybe a basic question, but something that's bothering me nonetheless from "What is a Physical Symbol System?":

    The author writes that the "physical symbol system hypothesis of Newell and Simon (1976) [is]:
    A physical symbol system has the necessary and sufficient means for general intelligent action". They then proceed to write that "[This hypothesis] means that any intelligent agent is necessarily a physical symbol system".

    Distilled, the hypothesis states that A can do B. From that, they then say that all things that can do B are necessarily thing A. Wouldn't this be a logical fallacy? A French chef can make a ratatouille. But not all things that can make a ratatouille are French chefs.

    Replies
    1. Good point, and you are right. There is confusion here between computation and computationalism. Computation is the manipulation of symbols (e.g., words, numbers, 0's and 1's) by executing rules (algorithms) that operate on the symbols' (arbitrary) shapes (not their meanings). Computationalism is the theory that cognition is computation. These are two different things. (Turing defined computation; Newell & Simon were computationalists.)
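
      (Here is a minimal sketch, in Python and purely for illustration -- the symbols and the rewrite rule are invented -- of what "executing a rule on the symbols' arbitrary shapes" looks like. The rule applies whether or not anyone interprets the squiggles as meaning anything.)

```python
# A toy rewrite rule that operates only on symbol shapes.
# The tokens are arbitrary squiggles; the rule never consults any meaning.
RULE = {("@", "#"): "&"}  # "whenever '@' is followed by '#', replace the pair with '&'"

def rewrite(tokens):
    """Apply the shape-based rule left to right in a single pass."""
    out, i = [], 0
    while i < len(tokens):
        pair = (tokens[i], tokens[i + 1]) if i + 1 < len(tokens) else None
        if pair in RULE:
            out.append(RULE[pair])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(rewrite(list("@#x@#")))  # ['&', 'x', '&'] -- no interpretation required
```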

      I think there is also a confusion between (1) the strong Church/Turing Thesis, that just about anything can be simulated by computation, and (2) computationalism.

      This will become much clearer in the next 2 weeks (especially the difference between things and their computational simulations). It's important.

    2. JG, what I am confused about (and perhaps this is what you were really objecting to, and what Professor Harnad is referring to above) is whether there is a difference between a thing that can produce intelligent action and an intelligent agent. What is an intelligent agent in the first place?

      If intelligent action is only produced by intelligent agents, then I do not think what you have pointed out is a logical fallacy. This is confusing sufficient conditions with necessary and sufficient conditions, a much stronger statement. The hypothesis not only states that A can do B, but additionally that it is necessary to have A to do thing B. This means you cannot have B without A, as A is a necessary condition for B (this would be like saying a thing can make a ratatouille if and only if it is a french chef - therefore, if we see a thing that can make a ratatouille, it must be a french chef, and if we see a french chef, they must be able to make a ratatouille).

      However, if producing intelligent action is not a “necessary and sufficient” condition to being an intelligent agent, then there is indeed a problem with the quote, namely, conflating intelligent action with intelligent agents. I have clear definitions for "ratatouille" and "french chefs", but I am not so clear on the definition of "intelligence". The author themselves writes "what levels above the neuron level are needed to account for intelligence is still an open question". How is it possible to say that general intelligent action is something only intelligent agents can do? Is this the difference between computation and computationalism?

    3. Alessia, Turing is just saying that "intelligence" is as intelligence does (or is able to do). The rest is just our arbitrary use of words of praise!

      "Intelligent" (or "smart" or "clever") is just an adjective. We can apply it to an organism or to an action or an ability, but it's arbitrary. What is not arbitrary is whether a state that an organism is in is a felt state: Does it feel like something to be in that state? It feels like something to be a partridge. It does not feel like something to be a pebble, or a planet. We notice that the kinds of things that do what we like to call "intelligent" tend to be sentient organisms: organisms that have states that it feels like something to be in.

      So we conflate the two: our arbitrary term of praise for when an organism does something clever -- and the fact that the organism is sentient. And we (who are also sentient organisms), when we do something clever, tend to be in a state that it feels like something to be in. We know what it feels like to do something smart; and we like to think that we do it deliberately (although introspection about how we do it ends up drawing a blank).

      But what it feels like to do something smart -- and to do it in a way that feels like it's deliberate -- is what we mean, intuitively, by "intelligence," "cognition," or "thinking."

      Maybe insentient organisms (plants? microbes?) and objects (computers, robots) can do things that look clever to us, and that we are only used to seeing done by sentient organisms. But nothing is really at issue about whether it's really "clever," since what really matters (and is not arbitrary) is whether it is sentient.

      Turing points out that in trying to explain how clever things can be done by anything -- whether an organism or an artifact -- all we can explain is how the mechanism produces the ability (the "easy" problem), but not how (the "hard" problem) or even whether it produces felt states (the "other-minds" problem). Turing suggests we reconcile ourselves to trying to solve the easy problem (of explaining doing capacity) as the only one we can hope to solve. And he suggests that either the Turing Test (T2, or T3, or T4) will also generate felt states -- or it won't. We can never hope to be the wiser.

      (So it's not about whether "intelligence" is a property of actions or of actors, nor about whether it is necessary or sufficient, nor about whether it is a property of "agents" or "agency"! It's really only about whether it's sentient: and if it is, it's sentient even when it's not intelligent, as in just feeling warm, or tired...)

  2. “What we’ve learned so far is that computation involves the manipulation of representations by following some specified procedure, but that the same computation can be achieved by many different procedures, representations, and mechanisms. As long as the procedures are behaviorally equivalent, they’re in some sense interchangeable. The difference between what we’ve called the functional and imperative models above is largely a matter of what aspects of the procedure’s behavior are considered relevant to the task at hand” (This is from the article “What is computation?” by Ian Horswill)

    For a computer to pass the Turing Test, it would need to be true that cognition is the same as computation, or in other words, that our cognition is just rule-guided symbol manipulation. As stated in the passage above, there are many different procedures and mechanisms that can be used for the same computation. It is not the point that a Turing Machine and human brain are following identical mechanisms; there is no paper tape full of zeroes and ones feeding into my ear, through my brain, and out the other ear.

    The main issue I will discuss in this response is the idea that cognition/computation can be achieved by procedure. In the article on Turing Machines we can see that there is a so-called “instruction manual” that essentially serves as the algorithm for the Turing Machine. One can think of this as a series of “if…then…” statements that allow the Turing Machine to make its computations. Let’s say in the Turing Test, the experimenter emailed both the human and the computer asking them for a random number. Based on the instruction manual guiding the Turing Machine, presumably there must be some procedure or algorithm to determine a random number. I am not suggesting that it would be so simple as to say “if asked for a random number, then return 5” as this is hardly random, but it could be something with significantly more steps perhaps relying on previous actions, outputs, or inputs. By my understanding of the fundamentals of an algorithm, at some deep level the computer generation of a random number would have to be predictable. If asked enough times, a pattern could emerge, or one would be able to predict which so-called “random” number the computer is going to offer. Could the same be said of human cognition?

    I am not going to pretend that asking anyone to introspect the procedure they implement to come up with a random number can serve as evidence in my argument. Whether or not it feels like some small person in your brain simply generates a random number for you to say is not very productive to actually understanding how the cognition happens. That being said, I do not feel confident saying that there is a true procedure for cognition to offer up a random number. I am not denying that there are computational procedures in our brain; I just wonder if the generation of randomness is potentially a divergence from this.

    Replies
    1. Even if the programmer truly strove to make the computer able to decide upon a random number, for instance “if asked for a random number, then measure some sort of environmental input and apply a complicated formula to that value,” this still seems predictable (dependent on some environmental data), and still with a formula, so if one had access to the environmental data and had perhaps reverse-engineered the algorithm from previous answers, they could predict the “random” number. One could argue and say that “compute” is the wrong verb to use, and one cannot “compute” a random number by the very definition of random as chance or without method. I do not think that this is an unfair argument to make, and it actually just suggests that cognition and computation are not two perfectly overlaid circles on a Venn Diagram and that there are certain aspects of cognition, in this case the human ability to use cognition to generate randomness, that are not contained in the realm of computation. I realize that a large part of this issue also rests on the ability of human cognition to be able to produce random numbers when prompted. Do humans truly have this ability? A number of studies have shown that humans have some biased understandings of randomness, and the discussion of true randomness is one that certainly goes far beyond our focus here on computationalism. Nevertheless, there seems to be an underlying issue of reconciling computation, which is inherently procedural and methodical, with the production of randomness, which is without method.
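
      (As a small sketch of that predictability point -- Python, with an arbitrary seed standing in for the "environmental measurement" -- the same seed always yields the same "random" answer, so anyone who can recover the seed can predict the output.)

```python
import random

def machine_answer(seed):
    """The machine's 'pick a random number' procedure: seed a generator, return a digit."""
    rng = random.Random(seed)   # the entire future sequence is fixed by this seed
    return rng.randint(0, 9)

# If the questioner can recover the seed (say, by reverse-engineering the
# environmental measurement it was derived from), every answer is predictable:
print([machine_answer(42) for _ in range(5)])        # the same 'random' digit five times
print([machine_answer(seed) for seed in range(5)])   # know the seed, know the answer
```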

    2. As you said in your reply, I would also argue that humans are almost always prone to biases. Some examples that I could think of right now would be the priming effect and mere exposure effect.

      Just going to quote your main text here:
      "By my understanding of the fundamentals of an algorithm, at some deep level the computer generation of a random number would have to be predictable."

      I think that at that point, it's us humans who think that the machine is being predictable, since we tend to look for patterns in randomized things. I don't know if we could argue that the machine would think that of itself... (and who else is there to evaluate the true randomness of a machine? Another machine?)

      I think you brought up a really good analogy that "cognition and computation are not two perfectly overlaid circles on a Venn Diagram" and it reflects Prof Harnad's paper as well. I would like to also add that maybe your example of our cognition generating "random" numbers is not an example of computation, rather, one of mental imagery?

    3. Stephanie, you can forget about "representations" in this course. It's OK in algebra but it's a weasel-word in cogsci (representations of what, to whom?). What's being manipulated in computation is symbols: arbitrary shapes (Searle calls them "squiggles" and "squoggles") that can be interpreted as meaning something by us, but not by the computer or the computation.

      You are right that, by definition, if a number is generated by a rule, it is not random.

      Computation is "implementation-independent," which means it can be physically done in many different ways; the computation is the rules, not the machine that executes them. Think of it as something like a difference in formal notation: A calculating machine can add with the usual notation (A + B) or reverse-Polish notation (A B +). The sum is the same. What matters is the rule, not the way you physically follow it.
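
      (A toy illustration in Python -- the two little evaluators are invented for this example -- of the same sum being computed under two different notations and procedures:)

```python
def add_infix(expr):
    """Evaluate 'A + B' written in the usual infix notation."""
    a, _, b = expr.split()
    return int(a) + int(b)

def add_rpn(expr):
    """Evaluate 'A B +' written in reverse-Polish notation, using a stack."""
    stack = []
    for token in expr.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(token))
    return stack.pop()

print(add_infix("3 + 4"), add_rpn("3 4 +"))  # 7 7 -- same sum, different notation and procedure
```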

      Even if a computer alone could pass the Turing Test, as we will see, that would not mean that cognition is just computation.

      A series of numbers is random if the shortest rule for generating that string is as long as the string itself. That is relevant for understanding what "information" is, and what reducing uncertainty is. But it is not directly relevant to what cognition is. (It's Kolmogorov/Fomin/Chaitin complexity theory.)
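
      (A rough illustration in Python: compression is only an informal stand-in for Kolmogorov complexity, not its definition, but it shows the contrast between a string generated by a short rule and one with no shorter description than itself.)

```python
import os, zlib

patterned = b"01" * 5000        # generated by a very short rule: "repeat '01' 5000 times"
arbitrary = os.urandom(10000)   # 10,000 bytes with (presumably) no short generating rule

print(len(zlib.compress(patterned)))   # tiny: the rule is far shorter than the string
print(len(zlib.compress(arbitrary)))   # close to 10,000: no rule shorter than the string itself
```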

      (Good comments, Stefanie, but please keep them a little shorter, because there are 50 students doing 2 comments a week, and only one me [as far as I know!]).

      Wendy, generating a random series makes sense, but not generating a "random number": is "427" random? (random compared to what?) A coin is biased if you can do better than chance in predicting whether it comes up heads or tails. If it has a slight bias toward tails, you win more if you always bet tails rather than trying to switch "randomly" for each prediction. If, as in the vegan sandwich machine, you always bet on heads, you'd get lunch more often than if you kept switching. The bigger the bias, the more you get to eat...
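
      (A quick simulation in Python -- the 60% bias is just an example value -- of why always betting on the biased side beats trying to switch "randomly":)

```python
import random

def lunch_rate(n=100_000, p_heads=0.6, rng=random.Random(0)):
    """Compare 'always bet heads' with 'switch randomly' against a biased coin."""
    flips = ["H" if rng.random() < p_heads else "T" for _ in range(n)]
    always_heads = sum(f == "H" for f in flips) / n
    # probability matching: guess H with probability p_heads, T otherwise
    matching = sum((rng.random() < p_heads) == (f == "H") for f in flips) / n
    return always_heads, matching

print(lunch_rate())  # roughly (0.60, 0.52): always betting the bias wins lunch more often
```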

      “By my understanding of the fundamentals of an algorithm, at some deep level the computer generation of a random number would have to be predictable. If asked enough times, a pattern could emerge, or one would be able to predict which so-called “random” number the computer is going to offer. Could the same be said of human cognition?”

      I found this to be a very interesting angle on the possible limits of computation! Professor Harnad brought in the question of algorithmic randomness, but from my very basic understanding of this concept, it seems to me that it is distinct from the common understanding of “stochastic” randomness. From what I could gather, Kolmogorov randomness is random in the sense that it cannot be predicted by any algorithm, but this is due to the fact that the sequence is “incompressible”, not that it defies the deterministic nature of computation. I therefore think that the question of whether computation can produce an equivalent to the (perceived) ability of the human mind to generate a random number still stands.

      It seems clear to me that computation cannot generate a truly random number, given that randomness is essentially defined in opposition to computation. You mentioned the possibility of using environmental input, and this seems to be the only way for a computer to generate randomness (my favourite number generator uses atmospheric noise). However, in this case computation is simply “piggy-backing” on the unpredictability of some “analog” process. It also requires for the computer to have some sort of sensory ability to measure the "outside" world.
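
      (A small sketch in Python of the two kinds of "random" described above; os.urandom draws on the operating system's entropy pool, which is exactly the sort of "outside" source a program piggy-backs on.)

```python
import os, random

# 1) Pseudorandom: fully determined by the seed -- rerun this line and you get the same list.
print(random.Random(1234).sample(range(100), 5))

# 2) "True(r)" randomness: piggy-backing on an analog/environmental source,
#    here the operating system's entropy pool (fed by hardware and timing events).
print(list(os.urandom(5)))
```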

      I think the more difficult question is whether the human mind is capable of randomness. You brought up biases, and indeed, when asked to generate a number between 1 and 10, nearly half of people say 7! It appears our conceptions of randomness are more than purely mathematical definitions - they also carry various connotations, such as ideas of “uniqueness”. A prime number like 7 just feels so random!

      Despite this, the fact that the different answers are not equiprobable does not mean that a process is not random (and ultimately unpredictable). So are we producing real randomness?

      It feels impossible to say for sure, but I would be inclined to say no. There still must be some form of computation occurring to choose a number, however complex or obscure that process is to us. I don’t think our thinking can defy the causal chains that govern the world; therefore, given unlimited information, we would theoretically be able to chart the algorithm that someone used to “randomly” pick 7. Whatever the answer is, it’s a very interesting way to think about the limits of computation!

  3. "The level of abstraction provides a partial ordering of abstraction. A lower-level abstraction includes more details than a higher-level abstraction. An agent can have multiple, even contradictory, models of the world. The models are judged not by whether they are correct, but by whether they are useful." - "What is a physical symbol system?"

    "But if each individual neuron can be simulated computationally, then it should be possible in principle to simulate the whole brain by simulating the individual neurons and connecting the simulations together." - "What is computation?"

    Both these extracts hinge upon the idea of emergence. In the first extract, emergence would occur at the point where the brain/AI determines which of the models would be most useful. In the second extract, emergence would be at the point where we go from simulating individual neurons to successfully simulating the brain (as opposed to its individual components). AIs have successfully learned to model their behavior so it replicates that of humans. They do so by carrying out computations identical to that of humans. However, when we think about the hard problem of consciousness, does it not transcend information processing? An AI may be able to process a stream of input and produce an output (for example a decision). Is consciousness/feeling then the ability for humans to make mistakes? Of course, this could be likened to the concept of a bug in programming. However, bugs are the result of mistakes in code. Can we assume that the imperfection that is inherent to being human is also a "mistake in code"? Or could it be the inevitable result of the emergent entity from human computation that we call consciousness?

    Replies
    1. Building on this, we need to then consider if performing the right processing (or computation) but arriving at the “wrong” output is a valid form of computation. Say I decide not to bring an umbrella with me because the forecast only predicts a 45% chance of rain. Probability dictates that I probably won’t need an umbrella, so I conclude that I probably don’t need to bring one. However, if it rains anyway, does this make my decision wrong even if the probability was on my side? Would this “wrong” decision extend to a computer making the same calculations?

      This is a fairly simple situation, but when this extends to very complex and uncertain interpretations and considerations, like modifying your response to a question based on the other person’s mood (which could be considered an incomplete or uncertain input), would errors in an AI’s output still be considered errors if they don’t lead to the desired output? And how would a computer be able to clearly know the desired output if that output has no basis in objective measurement? For example, how would a computer compute the desired output: “say something to make John happier”? How would a computer be able to correctly compute not only John’s current state of happiness but also know how to increase this happiness without error? For humans, all thinking involves a risk of error, and I would argue that this is a core component of consciousness.

    2. Ishiko, forget "emergence"! It's a weasel-word masquerading as an explanation. "Something happened that was unexpected, and I have no idea how or why: So the explanation is that it was an emergent!" Does that reduce uncertainty in any way, if what you want is an explanation?

      Individual neurons (like just about anything) can be simulated by computation, but they are not neurons, they are just squiggles and squoggles that can be interpreted -- by us -- as neurons. Same is true for any real cognition the real neurons are doing. The simulated neurons would not be cognizing.

      "Levels of abstraction" is clear when we think of apples, fruit, food, things.

      An acrostic, however
      Be it ever so clever,
      Seems spooky only because
      Telling it verbally
      Rather than in writing
      And aligning and
      Capitalization, on the other hand,
      Tricks you into believing a secret (mental) conspiracy theory!

      More about this when we get to Pylyshyn's notion of the "virtual machine."

      Aylish, in cogsci (and life), "right" and "wrong" (and information and uncertainty) depend on the outcome. If you ate a poison mushroom, that was wrong. If you ate an edible mushroom, that was right. If you picked the window that had a vegan sandwich behind it, that was right. If not, not.

      Mind-reading is hard (lots of uncertainty) for both people and computers.

    3. Ishika (not Ishiko), apologies for the typo!

  4. The discussion on levels of abstraction in 'What is a Physical Symbol System?' reminded me of a story I encountered in a collection of works by Neil Gaiman. The story follows the downfall of a king who becomes obsessed with creating the perfect replica of his kingdom. It starts small, depicting mountain ranges, rivers, and towns; it was small enough to fit in the palace gardens and had a relatively high level of abstraction. But he keeps demanding it be iterated on, and 'improved' each time by increasing its size (also decreasing the level of abstraction). He puts all the kingdom's resources towards this project, and his final attempt, which led to his complete downfall, was a replica on a 1-to-1 scale that was completely accurate to the region (attempting a replica that employed no abstraction). It is an impossible feat and a very stark example of how useless a representation can be once it falls below a certain minimal level of abstraction (as well as how potentially costly these representations can be).

    Replies
    1. Nice allegory about scale and approximation, but it's not about abstraction!

      To "abstract" is to "extract" some features but not others. A square has 4 equal-length sides, a parallelogram has 4 parallel sides, a quadrilateral has 4 sides, etc. A red apple is red and round. Abstract the feature red, and you have all things that are red.

      Abstracting will turn out to be important in learning to categorize ("Do the right thing with the right kind of thing") because you have to find (detect and abstract) the features that distinguish the edible mushrooms from the toxic ones.
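
      (A minimal sketch in Python -- the mushrooms and their features are invented for illustration -- of abstraction as selective feature detection in the service of categorization:)

```python
# Invented examples: each mushroom is a bundle of features.
mushrooms = [
    {"colour": "red",   "spots": True,  "gills": "white"},
    {"colour": "brown", "spots": False, "gills": "brown"},
    {"colour": "red",   "spots": False, "gills": "brown"},
]

def abstract(things, feature):
    """'Extract' one feature from each thing, ignoring everything else."""
    return [thing[feature] for thing in things]

print(abstract(mushrooms, "colour"))               # ['red', 'brown', 'red']
# Suppose (hypothetically) white gills are the feature that marks the toxic ones:
edible = [m for m in mushrooms if m["gills"] != "white"]
print(len(edible))                                 # 2 -- categorize by the one relevant feature
```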

    2. How does this apply to something like a map then? More specifically, let's consider a metro map that depicts each stop and how the stations connect, but does not show the distance between the stations accurately/proportionately. Would this be an abstraction of the metro system or an approximation?

    3. A map is an analogue of what it is a map of.

      Abstracting is filtering (selectively detecting) the features that things have in common (so you can do the right thing with the right kind of thing).

      Mapping is copying things, or their features.

      "Analog computation" is not really computation (i.e., not Turing computation) because the map's shape resembles the thing it is a map of, whereas the shapes of the symbols in Turing computation are arbitrary; they do not resemble the things that they (can be interpreted as) being about.

      A "2" does not resemble two-ness any more than "red" resembles redness. And "2 o'clock" does not resemble the time of day that it refers to.

      But a sundial really resembles some features of the passage of time in a day on earth during daytime, as the earth rotates, casting its shadow.

      This too will become clearer later in the course.

  5. I thoroughly enjoyed this article. The very end was particularly interesting. The distinction between "Knowledge" and "Symbol" levels of abstraction "common to both biological and computational entities" really speaks to the challenge discussed by Prof. Harnad of describing cognition and computation without relying on homuncular theories. If I understood it correctly, the difference between these two levels of abstraction is analogous to Pylyshyn's "criterion of impenetrability," which suggests some kind of threshold below which the "functional architecture" cannot be influenced to change by thought or speech. This presents a solid contradiction within Pylyshyn's theory - bringing us back to the Symbol-Grounding problem, because the "impenetrable" levels of cognition, running on implementation-independent formal (symbol-based) computation, are impossible to describe outside the symbolic level. Our task as cognitive scientists would be to find the bridge between the so-called "knowledge" and "symbol" (or thinking and computing) levels. We have yet to really penetrate and understand the relation and connections between cognition and (or as?) computation. Is that why it is so difficult to separate or describe "cognition" and "computation" from the cog sci approach?

    Replies
    1. Pylyshyn's "impenetrability" was intended to distinguish what was cognitive from what was not cognitive ("vegetative") by whether being told something (true) about it would change how you perceived it. Being told that the two lines in the Müller-Lyer illusion are the same size does not make them look the same size.

      For an example of something that is cognitively penetrable, ask me in class about the "Monty Hall" problem (but don't look it up on Google first or I won't be able to reduce your uncertainty in real time!)

      On "knowledge" vs. "symbol" "levels," ask me in class (and see the "acrostic" above).

  6. “So the real question is whether a sufficiently big PC could simulate the human brain”

    At first glance this sounds impossible because it seems too difficult to strip down the brain simply into inputs and outputs, maybe partially because we like to believe there’s more to us than that. However, all neurons can really do is increase or decrease activity in response to whatever input they’re receiving. This is already starting to sound a lot like 1s and 0s, with our current “state” being a snapshot of synaptic activity the moment the input is received, and according to the “instruction set”, an output of 1s and 0s is triggered. In terms of the instruction set, we attribute our actions, and hence our outputs, to our will; I behaved this way because I chose to or wanted to. So, tying this all together, does this mean that we could potentially reach a point where AI has will? What about sentience (i.e., the feeling of experiencing life)? Or is the concept of will an illusion we created to account for the fact that the reason for our behavior is unknown to us? This is less of a discussion of whether computationalism is true and more of an evaluation of its implications.
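
    (A highly simplified sketch in Python -- the weights and threshold are arbitrary -- of the "increase or decrease activity, then trigger a 1 or a 0" picture: a McCulloch-Pitts-style threshold unit.)

```python
def unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of 0/1 inputs reaches the threshold; stay silent (0) otherwise."""
    activity = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activity >= threshold else 0

# Arbitrary example: two excitatory inputs and one inhibitory input.
print(unit([1, 1, 0], weights=[0.6, 0.6, -1.0], threshold=1.0))  # 1: fires
print(unit([1, 1, 1], weights=[0.6, 0.6, -1.0], threshold=1.0))  # 0: inhibited, stays silent
```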

    Replies
    1. Without becoming too off-topic, is it possible to think of "will" at a level of abstraction that describes it as complicated cost/benefit analysis? If so, then maybe AI already has a level of "will". Furthermore, keeping in mind behavioral equivalence, might it be possible for a computer to reach the functionality of the human brain but not offer us the understanding of how the brain works? With this in mind, perhaps a computer can fulfill all the roles of a human brain computationally yet not have the sentience or will of a human. The idea that there is some threshold of complexity in the circuitry of the brain that leads to consciousness also raises questions. For example, at what level of abstraction do computers fail to match humans? If knowledge and symbol levels are common to both biological and computational entities, what is holding a computer back? I suppose only the human ability to program at this point.

    2. Is will really about cost and benefit though? If we take the word rational to mean making a decision that has the highest probability of maximizing benefits and minimizing costs, then we find that often our decisions are irrational. For example, if I gave you these two choices: take (100% guaranteed) $5 or have a 50% chance of getting $10, more people are likely to choose the guaranteed option even though, from a utility standpoint, both choices have the same value (5x1 vs 10x0.5). I know this is an oversimplified example but in this case, since the utility of both is the exact same, we'd expect a computer to pick both equally (if asked this question, let's say, 100 times, we'd see a 50/50 split for both options). But for us, there seem to be a lot of other factors coming into play, like whether you're risk-averse, how you're feeling that day, etc., which, I'd argue at least for now, a computer cannot experience.

      For the sentience part, I wonder if our ability to understand how our brains work is the reason we're conscious. Assuming that is the case, I'd imagine that at a certain point AI will be able to understand and modify its own code. Actually, considering that AI that learns already exists (I think?), it'd already be modifying its own code, so maybe we're just waiting for it to understand what it's modifying, which would then line up with your idea of sentience, right?

    3. Lyla, there's more to brain activity than just all-or-none action potentials. There are also graded postsynaptic potentials, connectivity, intensity, biochemistry. But according to the Strong C/T Thesis, computation should be able to simulate that too. Yet a simulated waterfall is still not wet...

      Matt, Lyla, yes, you can "think" of "will" as c/b analysis, and that may be fine for the C/T Thesis, but it feels like something to will something, and that's missing. A c/b analysis may predict and explain what you are doing, but it does not even touch the fact that it feels like something to do it. That's the hard problem.

  7. “Behavioral equivalence is absolutely central to the modern notion of computation: if we replace one computational system with another that has the “same” behavior, the computation will still “work.””

    “The only aspect of a program’s behavior the functional model is concerned with is the relation between its input and its output; it considers procedures that generate the same output for the same inputs to be behaviorally equivalent.”

    Reading these articles, I couldn’t seem to shake my problem with the concept of behavioural equivalence. To be fair, by the end of “What is Computation”, the author admits that there is no clear answer to the question, and that the myriad of definitions that exist all have their issues already. I will just add my own small issues in this comment.

    In short, I almost fully disagree with the possibility that the concept of behavioural equivalence can be used when it comes to explaining different computational systems in human cognition, particularly with emotions. For example, it means two very different things when someone receives flowers expectedly or unexpectedly. The input is the same; someone (maybe her name is Sarah) is now in the possession of a bouquet. And the output could be the same too; Sarah is pleased because the flowers are beautiful. But did she have to strongly hint at her significant other for the past 2 weeks that she would really appreciate flowers OR did she just have a super terrible week at work and was surprised with flowers as a pick-me-up?
    I don’t think the lived human experience could be as rich and interesting as it is if these nuances didn’t matter. Perhaps at a microscopic scale, emotions can also be broken down into a set of chemical “procedures” via synaptic connections, but I’m not sure if these discrete changes are or can be perceived. And if they can’t be perceived, maybe emotions become something that are “continuous”, like time? And if they’re continuous, they can’t be measured and if they’re not measurable then how does that fit in a computational system dependent on behavioural equivalence? In the culture today that values productivity and creative output above all else, I can see that kind of narrow emphasis on input/output alone being potentially very harmful.

    Replies
    1. While I definitely agree with you that the functional model, and the idea of behavioural equivalence, are inadequate to describe how we conceive of emotions generally, I don’t think the argument you provide necessarily contradicts the functional model.
      If we consider only observable behaviour to be the relevant inputs and outputs, and assume that emotion can be summed up in these discrete pieces of data, the functional model is clearly insufficient. However, suppose a human being can be envisioned as some sort of Turing machine. The input necessary for a step is both a symbol that has to be read (i.e. in this case, we can consider this the external stimuli – the fact that Sarah received flowers) and a state that the machine has to be in. If we assume this internal state is the additional context that you provide for each scenario (i.e. whether Sarah had asked for the flowers or not), then the two emotional computations are not behaviourally equivalent since the inputs in each of them are different, which is your original objection, if I understood correctly.

    2. Esther, here's another one of the strong/weak terminologies I said there would be in this course. There's "Strong Equivalence" and "Weak Equivalence." Weak Equivalence is when two computers both produce the same output for the same input. Strong Equivalence is when both do it using the same algorithm (computer program, symbol-manipulation rules). Pylyshyn likes strong equivalence for a computer program that cognizes. Most cognitive scientists would be happy with just weak equivalence. But either way, input/output is just doing things. It misses feeling completely.
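
      (A minimal illustration in Python -- the task is arbitrary -- of two procedures that are weakly equivalent, same output for every input, but not strongly equivalent, since they use different algorithms:)

```python
def sum_by_counting(n):
    """Add 1 + 2 + ... + n one step at a time."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_by_formula(n):
    """Get the same value in one step, using Gauss's closed-form rule."""
    return n * (n + 1) // 2

# Weak equivalence: identical input/output behaviour for every input tested...
assert all(sum_by_counting(n) == sum_by_formula(n) for n in range(500))
# ...but not strong equivalence: the algorithms (and their numbers of steps) differ.
```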

      And distinguish the question of weak/strong equivalence from another important property of computation that will be important when we get to Searle's "Chinese Room Argument": implementation-independence. The computation itself is just the algorithm (program, software) that the computer executes. The same algorithm can be executed by many different physical computers (hardware). They are all doing the same computation. (And that doesn't just mean weak equivalence! It's the same computation even if it's very different hardwares executing the same algorithm (software, symbol manipulation rules).)

      William, remember the difference between a thing and its computational simulation. Simulated waterfalls are not wet. And the Turing Test is indeed just about doing (input/output), not feeling. And that's even true of the robotic version of the Turing Test, which is hybrid dynamic/computational: Yet it's still just doings, not feelings.

      More about all this as the course goes on...

  8. In What Is Computation, Horswill mentions that “changing our ideas about computation changes our ideas about thought and so our ideas about ourselves” (2). I think that even the idea of cognition being defined and understood can be threatening to people. We (especially in the West) take great pride in our “individuality” and “uniqueness”. Developing an understanding of cognition and potentially replicating it, threatens this.

    At one point Horswill mentions that a possible (though limiting) definition of computation is “the process of deriving the desired output from a given input” (4). This brought to mind the human struggle of constantly trying new approaches to bring our goals to fruition, whether that be improving a relationship with a partner or perfecting a soup recipe. It’s a life-project: the art of experimentation, reflection, more experimentation, and, hopefully before we die, arrival at our desired outputs. In this sense, our lives revolve around determining the correct inputs.

    The discussion of “infinite loops” reminds me of the negative thought patterns that cognitive behaviour therapy strives to help clients break. In the same way that a computer program may repeatedly run the same piece of code, our brains sometimes get stuck in vicious cycles. Someone with OCD may fixate on an obsessive thought (ex. I am dirty), which results in persistent hand washing. Similar to fine tuning code so the program doesn’t miss a line, CBT often urges people to interrupt their thought patterns in order to avoid this continuous loop.

    Replies
    1. Claire, for you, and for most of the other students at this stage of the course, you are still just getting "impressions" of what computation is, and means. It will become more concrete soon. And at the beginning of the course everyone (quite naturally) wants to see if they can project feelings onto computation. That, too, we'll be looking at more and more closely, as we head to Searle's Chinese Room Argument (that cognition cannot be just computation), and then to the "symbol grounding problem" (which is about why not).

  9. “There are people who argue we could live forever by “downloading” ourselves into silicon.”

    I wanted to stop on this quote as I thought it could offer some insights not only on identity and what differentiates us from a simulation of us, but also on how this thought experiment points to a few limits of the computational theory of cognition.

    It’s an ongoing issue in computer science as to how we generate pure randomness in a computer. There are today two different types of random numbers that a computer can generate. 1) Pseudorandom numbers, which are generated entirely algorithmically by inputting a “seed” and outputting different, seemingly random outputs at every entry - the problem with pseudorandom numbers is that they are generated algorithmically and that they will only be random so long as the seed is unknown (furthermore this seed can be deduced by reverse engineering the algorithm). 2) True random numbers are generated by having the computer measure some outside entropic phenomena such as radioactive decay or atmospheric noise which essentially is determined by the mood of our universe and thus arguably random… As cognitive systems, human beings, whether they want to or not, are continually plugged in and coupled to their surroundings from head to toe and thus continually subject to randomness in a way that a computer is not necessarily designed to be. Considering the above thought-experiment, could I still be me by “downloading” myself onto silicon? I could be an un-aging simulation of me as determined by all my past accumulated sensory interactions with my environment, but I could not be the human living version of me as continually plugged into and stimulated by my random environment. In other words, until I could be downloaded into a vessel with the identical sensory-motor capabilities as my previous body, capable in real-time of sensing random interactions with my environment, I would be stuck in an old simulation of me. To bring it back to the text, what this thought experiment reveals is that what may be missing from a computational theory of cognition is how and to what degree a computational system has to be coupled to its environment to be considered cognitive.

    Replies
    1. I feel you're losing some of the nuance this question involves. I don't feel randomness has much to do with the quote you're bringing up.

      First of all, do we really have any idea how humans generate "random" responses? A lot of the time, seemingly unpredictable statements have some measure of thought behind them. In conversations, people often suddenly veer off-topic, supposedly, when in reality the last discussed thought reminded them of something else, which connected to a different topic (or several), which brought to mind the thought they then voiced. Although it appears to be so, that thought was not random.

      You could ask me to generate a random word right now. Any word. And I could come up with "spaghetti" or "communism" on the spot, without truly knowing why those words sprung to mind. To both you and me, these thoughts would appear random. But are they truly? Perhaps the meal I ate yesterday suddenly came to mind. Perhaps I regularly converse about communism with friends. (Although neither of those is true.) As you've mentioned in your comment, these responses seem random as long as the origin of the process behind them is unknown, just like pseudorandom numbers generated by an algorithm. Can we then say that what I've generated is human randomness?

      Your point on true random numbers seems to imply that to take the human out of their random environment is to take away their randomness. And if it suddenly happens that my brain is taken out of my head and put, fully functioning, into a jar - an environment where I am not subject to the "mood of the universe"? Would that then take away my capacity for "human randomness"?

      Finally, what does randomness have to do with cognition? Would you not consider the me that is a brain in a jar capable of cognition, since I would be taken away from my environment? Are we only capable of thought and sentience when we have an environment to influence us and to bounce off-of? Just food for thought :)

    2. See my reply about randomness and complexity theory in another thread.

      Both of you (and the author too) are conflating the implementation-independence of computation, the difference between things and their computer simulations (Strong C/T), computationalism (is cognition just computation?) and the Hard Problem (doings vs. feelings).

      This will all sort out as the course goes on. The first week is always the most impressionistic and subjective.

  10. To echo some comments mentioned above, I also struggled to reconcile the idea of behavioural equivalence when reading the "What is Computation" paper. I understand that behavioural equivalence is the foundation of modern computation, and I don't dispute the importance of two different procedures arriving at the same behavioural outcome; however, I do take issue with the idea that the output behaviour is the only thing that defines a person or system. In a math class, for example, we were always required to show our work, and even if the output (or answer) was correct, we would be deducted points if the process was unconventional. Searle's Chinese Room thought experiment also suggests that behavioural equivalence is merely mimicry and not replication -- that an intelligent output does not necessitate an intelligent machine that has a grasp of meaning. Where do meaning and understanding fit into all of this? Are there situations where mimicry is all we need (I would agree there are), and if so, does the true essence of the human mind even matter if a machine can pass the Turing test?

    On another note, I have thoughts and questions about the predictability of human behaviour and what that means for artificial intelligence. Is it really possible to model human behaviour in all its randomness and irrationality? A simplistic algorithm for responding to a situation could be: assess your current state, assess your goals and desired outcome, and then assess the difference between your current state and the goal and act accordingly to close the gap (of course this is way oversimplified). However, we know that people don't always act in their own best interests. How can we possibly capture the range of possible outcomes/outputs for a given input? Is this just solved in theory because a Turing machine has infinite capacity?
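
    (A bare-bones sketch in Python of that oversimplified assess-state/assess-goal/close-the-gap loop; the numerical "state" and "goal" are placeholders invented for illustration.)

```python
def act_to_close_gap(state, goal, step=1):
    """One cycle of the toy procedure: compare current state to goal, move toward it."""
    gap = goal - state
    if gap == 0:
        return state, "done"
    action = step if gap > 0 else -step
    return state + action, f"adjust by {action}"

state, goal = 3, 7
while True:
    state, report = act_to_close_gap(state, goal)
    print(state, report)
    if report == "done":
        break
```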

    Lastly, I also take issue with the claims made by Newell & Simon (1976) that an intelligent agent is necessarily a physical symbol system. Here again I bring up the question of meaning: isn't intelligence more than a computer combining meaningless symbols? Also, how does this hypothesis contend with the hard problem of consciousness, namely answering the question of how physical states give rise to mental states?

    Replies
    1. You are struggling with the difference between doing and feeling. Welcome to the course! There's still a lot to be said. But cogsci will turn out to have a good chance to (eventually) solve the "easy problem" of explaining how and why organisms can do what they can do, but it's destined for a rough ride with the "hard problem" of explaining how and why organisms feel rather than just do.

      Implementation-independence, strong/weak equivalence, etc. are fun to think about and sort out, but after that the real work begins...

  11. In "What is Computation" Horswill takes our ability to simulate an individual neuron and expands it to mean that eventually we will be able to simulate the entire brain. Although this is a very interesting concept, I do think there are issues with it.

    The neurons and neural networks we've simulated so far have, to my knowledge, been entirely logical. However, as multiple people have already mentioned, our decision making process is rarely, if ever, entirely logical. I think the fact that the computer simulation of a neuron has been around for decades, and the fact that artificial neural networks still cannot pass the Turing test is indicative of the fact that simply connecting these artificial neurons isn't enough to simulate the brain and that we're missing something.

    What I'm wondering is if this problem would persist if the machine had infinite memory. On one hand, remembering everything might mean that the responses would be too impacted by previous events. That is, the machine might be too limited by previous experiences if it remembered all said previous experiences. On the other hand, having knowledge of all the events and their accompanying reactions might allow for the machine to develop some kind of emotion metric that would allow for it to react to an event based on a simulated emotional environment. Although I'm not sure this should be considered intelligent anyway because it would simply average previous responses (even if it was possible).

    Replies
    1. Computation has been around for nearly a century, computers for almost as long. How long do you think it should take to pass the TT with computation, and why?

      Passing the TT is not the same thing as simulating neurons, or the brain. (Be sure you understand the difference.)

      What is your alternative for logic: toss a coin? do self-contradictory things?

      What do you mean by "infinite memory"? This would be a good time to read "Funes the Memorious" by Borges.

  12. From “What is computation?”:
    “What may be less obvious is that what we’ve called procedures above are themselves representations. They’re just text. And as such, they can be manipulated like any other text: copied, modified, erased, etc... What makes them different from other representations is that they can also be followed (executed). The process of reading and following a stored representation of a procedure is called interpretation. “

    The above citation points to the fact that the so-called procedures for manipulating representations when performing a certain computation are themselves a special type of representation. They are kinds of meta-representations (representations about representations).
    I find it very interesting that we humans intuitively understand when a certain representation counts as a procedure or an instruction. For example, I automatically and unconsciously know when I am driving that a “detour sign” or a “stop sign” instructs me to behave in a certain way. I would never encounter those signs and only think to myself that they are merely strings of letters or symbols. I intuitively grasp the fact that they are prescribing a certain behaviour.
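
    (A tiny sketch in Python -- the procedure text and the mini-interpreter are made up -- of the point that a procedure is itself just text that can be copied and edited, but can also be followed:)

```python
procedure = "LEFT LEFT WRITE RIGHT"          # a procedure, stored as plain text

edited = procedure.replace("LEFT", "RIGHT")  # manipulated like any other text

def interpret(text):
    """Follow (execute) the stored procedure instead of merely editing it."""
    position, tape = 0, {}
    for instruction in text.split():
        if instruction == "LEFT":
            position -= 1
        elif instruction == "RIGHT":
            position += 1
        elif instruction == "WRITE":
            tape[position] = "X"
    return position, tape

print(interpret(procedure))  # (-1, {-2: 'X'})
print(interpret(edited))     # (3, {2: 'X'})
```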

    My question is “What is it about a certain representation that makes a human perceive it as an instruction?” or maybe even better “What is it about humans that makes a certain representation the kind of representation that can be followed, or that instructs or guides their behavior?”

    Replies
    1. What is a "representation"? (I only know about symbols and symbol-manipulation rules.) And by "interpret" I don't mean when a computer executes the code; it's when a person reads it, interprets it, and understands it. ("Representation" is a weasel-word -- and homuncular, too!)

  13. In "What is a Turing Machine?" by Jack Copeland and "What is Computation?" by Ian Horswill, we learned that a Turing Machine can do computation. Computation is the act of manipulating symbols (by "reading (i.e. identify) the symbol currently under the head, writing a symbol on the square currently under the head (after first deleting the symbol already written there, if any), moving the tape left one square, moving the tape right one square, changing state, or halting") according to an instruction table with a finite number of directions to carry out a function.
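
    (To make those operations concrete, here is a minimal Turing-machine sketch in Python; the particular instruction table, which just flips 0s and 1s and then halts, is an invented example.)

```python
# Instruction table: (state, symbol read) -> (symbol to write, head move, next state)
TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # blank square: stop
}

def run(tape_string, state="flip"):
    tape = dict(enumerate(tape_string))   # the tape, one symbol per square
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")                 # read the square under the head
        write, move, state = TABLE[(state, symbol)]  # look up the instruction
        tape[head] = write                           # write (replacing what was there)
        head += move                                 # move along the tape
    return "".join(tape[i] for i in sorted(tape))

print(run("0110"))  # '1001_' -- each symbol flipped, then the machine halts
```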

    It is impressive that a single, universal Turing machine (UTM) can carry out every computation that can be performed by any other Turing machine! Can the human brain ever become a UTM that can receive inputs from the external environment, refer to its instruction table and be able to at once provide the correct answer every time?

    Some downfalls of human computation include: (1) it requires energy and time; (2) humans are often biased and impulsive; and (3) its outputs can sometimes be inaccurate. On the other hand, it is advantageous that a human can improve upon their instruction table as they progress through life. Most likely, the human mind will never be as effective or fast as a Turing machine. But if one day the human mind possessed unlimited focus and time, its computational capability would be limitless.

    Replies
    1. The human brain can do symbol-manipulation, just as a Turing Machine does. The question is whether computation (symbol-manipulation) is what the human brain does (or all that the human brain does).

  14. After familiarizing myself more with the idea of computationalism and how it can be paralleled to the human brain, I see that this is a really complex issue. If we did manage to create a computer that can simulate every synaptic connection in the brain, and that type of computing were also able to replicate what it is like to be conscious, what would we be able to compute?

    Obviously I think we could compute what human beings are "wired" to compute such as being able to interact with objects in the environment to facilitate our needs. But what about when it comes to morality? How would it compute the moral landscape? Would it be able to discover a form of objective morality or would it simply try to compute a morality for itself? Also, if we make this hypothetical computer that is a mirror image to the brain in terms of computing and consciousness, would it act like a human being or would it go in another direction that would be likely unforeseen?

    I ask myself these questions because, while I understand that cog. sci. is attempting to lay out an objective map of how consciousness can/does function, is a computational lens the best way to acquire those answers?

    Replies
    1. I don't think that computationalism makes any explicit claims about consciousness, and as far as I can tell, consciousness might not even be necessary for intelligent action: i.e., moral decisions could be computed without there being a sense of feeling "what it is like" to make that moral decision. Moreover, your interpretation of computationalism doesn't align with what it is (at least how I understand it). You mention creating a computer that would "simulate every synaptic connection in the brain". A functional theory of cognition does not appeal to physical implementation(s), and therefore there is no need to simulate synaptic connections, which are considered to be hardware, not software, in order to reverse-engineer an intelligent system. The reverse engineering that cognitive science is attempting to do occurs at a functional level (i.e. computational level). Computation is symbol manipulation. Simplistically, if we can understand how the mind manipulates symbols to produce intelligent behaviour (and if that is actually what is going on in there), then we can implement whatever algorithm our minds use in any appropriate physical system.

    2. Thank you for that clarification; that makes a lot of sense. You explained things very concisely and clearly.

  15. “Alan Turing (of Turing machine fame) argued that if a computer could fool humans into thinking it was human, then it would have to be considered to be intelligent even though it wasn’t actually human. In other words, intelligence is ultimately a behavioral phenomenon.” (What is Computation)

    This wasn’t the main point of the article, but this is something I am fascinated with. When is something intelligent?

    Whatever “intelligence” is, I find it very hard to believe that intelligence is strictly a behavioural phenomenon, as Turing argues. Say I code a robot to recite the Wikipedia article for any subject you ask. Sure, the robot has a vast database of knowledge, but I don’t think the robot is intelligent. It simply uses a search feature to look up whatever you have asked it and replies whatever is saved in its storage. However, consider a human with this same ability to describe any subject in great detail. We would likely say this person is intelligent. A key difference, I believe, is understanding. The robot can blurt out some accurate descriptions, but it is unable to grasp the concepts it describes.

    Replies
    1. According to Turing, intelligence is as intelligence does. It is our capacity to do all the things we can do that cogsci tries to reverse-engineer, and the TTest tests whether cogsci has succeeded. Explaining how and why we can do all that is cogsci's "easy problem."

      Then there's also the fact that it feels like something to do and be able to do all the things we can do. Explaining how and why we feel is cogsci's "hard problem"...

  16. In lecture, Professor Harnad dismissed Simulation Theory as being nonsense. And while I’m not sure if he was referring to Simulation Theory specifically or pancomputationalism generally, after reading these articles, specifically “What is Computation?” and “What is a Physical Symbol System?,” I think it does not make sense to disregard it without discussion.

    On pg.17 of “What is Computation?,” Horswill addresses cochlear implants and their capacity to replace damaged biological systems with computing systems. I’m under the impression that cochlear implants can repair problems where the bones in the middle ear no longer stimulate the eardrum, by sensing sound waves and stimulating the eardrum at the correct frequency. This is obviously a high-level abstraction, using as little detail as possible, of the neurological system that ultimately stimulates the eardrum, but if we think about applying this to the brain, is there any reason that would prevent us from simulating neurons or entire regions of the brain, outside of time-space complexity?

    Looking at the other blog posts, I’m imagining Prof. Harnad will say that I’m not understanding the difference between doing and feeling: while the transplant may function the same, will it “feel” the same to the person? But I’d argue that that objection is only valid if feeling does not have a physical representation that can be simulated. If the transplanted region of the brain reacts to stimuli in the same way as the original, it may communicate the same functionality AND feeling to the rest of the brain, and the person would be unaware of any difference.
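
    For what it's worth, simulating a neuron at a high level of abstraction is routine; here is a minimal leaky integrate-and-fire sketch (my own toy illustration, with made-up parameter values), just to show what "simulating neurons" can mean. Whether such a simulation could ever stand in for the real tissue is exactly what is at issue:

        # Minimal leaky integrate-and-fire neuron: a high-level abstraction
        # of a real neuron, stepped forward in discrete time (Euler method).

        def simulate(input_current, dt=0.1, tau=10.0, threshold=1.0):
            v, spike_times = 0.0, []
            for step, i in enumerate(input_current):
                v += dt * (-v / tau + i)      # leak toward rest, integrate input
                if v >= threshold:            # fire a spike and reset
                    spike_times.append(step * dt)
                    v = 0.0
            return spike_times

        print(simulate([0.2] * 200))          # spike times under constant input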

    ReplyDelete
    Replies
    1. 1. Cochlear implants are not just computers; they are analog devices, with sensors and effectors. They are not computational simulations; they are synthetic dynamical systems.

      2. It's not about doing vs feeling. It's about computation vs dynamical systems. (And neither real nor synthetic cochlea feel. Presumably feeling happens higher up in the brain.)

      3. The Strong C/T Thesis suggests that just about anything can be simulated computationally. But computationally simulated movement does not move and computationally simulated feeling (whatever that means!) is not felt.

      4. Some parts of brain function are no doubt replaceable by just computers, computing. But, for example, neither sensory transduction nor moving can be. Neither can heat.

      Delete
  17. In "What is Computation?" Ian Horswill notes that two procedures are behaviourally equivalent if, given the same input(s), they return the same output. This is what we called "weak equivalence" in class. When discussing computational neuroscience, Horswill states that "computation is all about behavioral equivalence" (16), suggesting that the goal of computational neuroscience is to construct a computer simulation that is behaviorally equivalent to a brain. However, Pylyshyn holds that weak equivalence is not enough for cognitive science; strong equivalence is required--i.e., a computer model of the brain must not only produce the same outputs from the same inputs as the brain but must also use the same algorithm the brain uses.

    If we are to adhere to Pylyshyn's view of cognitive science, for any algorithm used in a computer model that is (at least partially) behaviorally equivalent to the brain, it must be determined whether this is indeed the algorithm the brain itself uses. This raises the question: how are we to test whether a given algorithm (which we have programmed in a computer) is used in the brain? Must we look at the brain hardware and from that try to infer the "software" -- as is often done in neuroscience? Or are there non-invasive psychological methods to test whether the algorithm we have posited is indeed used in the brain?
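
    To make the weak/strong distinction concrete, here is a toy example of my own (not from the readings): the two procedures below are behaviorally (input/output) equivalent, yet they use different algorithms, and nothing in their input/output behavior tells you which one a black box is running:

        # Two weakly (I/O-) equivalent procedures: the same input gives the
        # same output, but the algorithms differ. From behavior alone you
        # cannot tell which one is being executed.

        def sum_loop(n: int) -> int:
            total = 0
            for k in range(1, n + 1):    # add the numbers one by one
                total += k
            return total

        def sum_formula(n: int) -> int:
            return n * (n + 1) // 2      # Gauss's closed-form shortcut

        assert all(sum_loop(n) == sum_formula(n) for n in range(1000))

    Strong equivalence would require the model not just to match the brain's input/output but to use the same algorithm (the loop rather than the formula, say), which is exactly what input/output testing alone cannot settle.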

    ReplyDelete
    Replies
    1. To adhere to Pylyshyn's view we would have to believe that cognition is just computation. But what if it isn't? And even if it were, why would identifying the right algorithm matter?

      If a computer is doing something complex enough you can't determine what algorithm it is executing based on its input/output.

      If there are two digital computers that are weakly equivalent (and doing something complex enough), you can't determine whether they are running the same algorithm either; not even by comparing their speed, because their speed should be equivalent too, for I/O equivalence (and speed differences could be hardware differences rather than software differences).

      Delete
  18. I am curious why Turing placed limits on this machine. Was it to remind us that we have bounds to our abilities, even those of higher consciousness? Or was he scared of having machines be limitless? I am so intrigued by this concept because I have never even heard of a Turing machine, and this is my fifth year in psychology. How could I not know about a machine that is able to do such complex computations, that can keep going on and on, and that is known as one of the more accurate computers? Was Turing building these machines around the same time that others were pioneering the computer? Do we still use Turing machines? Is it a method that labs/researchers continue to use as the “gold standard” for computing?

    ReplyDelete
  19. In ‘What is Computation?’, Horswill states that ‘computation is an idea in flux. Our culture is in the process of renegotiating what it thinks the concepts computation and computer really mean. Computation lies on a conceptual fault line where small changes can have major consequences to how we view the world’.

    In light of the above discussion, how variable is computation then, and can we say if it is in a constant state of fluctuation? From my understanding, if small variations can have major repercussions, then computation will always be ‘an idea in flux’ till the end of time; with cultural shifts, refutation of certain methods, and introduction of new ones, it is inevitable that it will change.

    It was also discussed that if computation is simply ‘information processing’, then to what extent would this vary between people living and brought up in collectivist versus individualist societies? To what extent can these cultural dimensions affect the way information is processed?

    ReplyDelete
  20. "A model of a world is a representation of the specifics of what is true in the world or of the dynamic of the world. The world does not have to be modeled at the most detailed level to be useful... The models are judged not by whether they are correct, but by whether they are useful."
    - What is a physical symbol system?

    In this reading and this passage in particular, I thought it was curious how the author emphasized a model's usefulness as disconnected from, and more important than, its accuracy, precision, or "correctness". Perhaps it's a colloquial misuse of language, but I've always considered it part of the very definition of a model that it accurately exemplify whatever is being modelled. The author explains how low-level abstraction increases accuracy at the expense of ease of reasoning, but how worthwhile are high-level abstractions that are simply easier to digest and process? I would imagine that the former is ultimately what we continue to strive for and will prove to be more useful. The author's perspective also seems to parallel the Functional Model of computation in the other reading, which describes how systems are measured by the efficacy of their computing rather than by how or why they compute.

    ReplyDelete
    Replies
    1. It just means that models can be helpful (in answering how/why questions, in making predictions) even if they are approximate (i.e., close, but not exact). All of science (and language, and thought) is only approximate. Your model can get closer and closer to the reality it is trying to describe, but never exact or complete. This is no big deal. You can probably trust the epidemiological models about how to minimize the spread of Covid, for example, even if they are not yet the whole story.
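
      For instance, the standard SIR epidemic model boils an epidemic down to a few coupled update equations; it is only approximate, but it is still useful for comparing interventions. A minimal sketch (with made-up, illustrative parameter values, not fitted to any real data):

          # A deliberately simple SIR model: approximate, not "correct",
          # but useful for reasoning about how interventions change the curve.

          def sir(beta=0.3, gamma=0.1, days=160, s=0.999, i=0.001, r=0.0):
              infected_history = []
              for _ in range(days):
                  new_infections = beta * s * i
                  new_recoveries = gamma * i
                  s -= new_infections
                  i += new_infections - new_recoveries
                  r += new_recoveries
                  infected_history.append(i)
              return infected_history

          print(max(sir()))             # peak infected fraction, no intervention
          print(max(sir(beta=0.15)))    # halved contact rate flattens the peak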

      Delete
  21. I'm still working out the technical axioms of the class (weak/strong, C/T, etc.), but just going off my initial thoughts: it seems to me that the idea of cognition or thinking (if I may equate the two as being used in the same sense in these texts) being solely computational, i.e., that all human cognition does is compute in the computational sense, is strongly biased by a sort of out-of-touch, disembodied, "Western", male-centric thought, possibly also influenced by Christian traditions of neglecting the feminine. Strong AI, the claim that symbol manipulation alone can lead to consciousness (if I am correct on this, and I'm not sure where consciousness really comes into the dialogue), also reflects this bias.

    I can have thoughts that are at varying degrees of consciousness. I can say things, or there can be a general unconscious dialogue running in my head that I am not quite aware is happening, or whose relation to and implications for the world are not clear to me. Perhaps something like a "picture" is not clearly formed. Based on this loose starting point, I suggest a movement away from "thinking" or "computation" as a marker of cognition, toward "awareness". How can we create machines that are aware, and have something like attentional flexibility? How is attention guided by non-propositional processes? Have we denigrated the value of awareness as a function of our cognition, in favour of computational thinking?

    The addition of such a feature would make for a connection to work in other fields such as depth psychology (particularly Jungian psychology) and its description of the phenomenological expressions of the development of the human personality, or 'individuation', as essentially a bio-psycho-socio-spiritual process of human development. Interestingly, alchemical processes are often invoked as parallels to the psychic developmental process, suggesting a basis in some chemical reactive/energetic activity. Neuropsychology and psychology also suggest a dual system (e.g. hemispheric lateralization of the brain, and system 1/system 2 processing). It seems the computational system would only correspond to 'logos', 'animus', the left hemisphere, or system 2 processing. "The Master and His Emissary" and "Technic and Magic" are two books that comment on the pitfalls of the neglect of the right hemisphere or "magic" in our world today. The Buddhist tradition of extensive inquiry into the mind ("Buddhist psychology") also places a huge emphasis on the cultivation of attention and awareness, and its continued dialogue with cognitive science may be fruitful.

    By the way, as a small follow-up to my comment in class about when the word "compute" arose: 'The word 'compute' comes from the Latin word computare, meaning "arithmetic, accounting, reckoning". ... The Latin word computare itself comes from Latin com, meaning "with", and Latin putare, meaning "to settle, clear up, reckon".'

    I'm looking forward to reading the Pylyshyn chapter sometime.

    ReplyDelete
  22. In reading the article on computation, I found Horswill’s description of the imperative model quite useful. Before reading the article, I was only familiar with the functional model, which is the version of computation more commonly discussed in culture. Considering an imperative model, one can more readily understand how computationalism is made plausible, that is, why many folks think that computation could be the sum of cognition. Compared to the functional model, the imperative model lends itself easily to looping and to approximating, if not completely replicating, continuous analog processes. While I do not think that computation is the sum of thinking, both on the basis of many good arguments and perhaps also to a degree based on a grandmother-ish bias, this model of computation does seem to readily account for many unconscious and involuntary processes of the brain (e.g. lower-brain and brain-stem processes such as maintaining oxygen levels), as well as some simpler conscious mental tasks, such as holding a number in short-term memory. While these problems are certainly not those that Turing or computationalists tend to be, or even need be, concerned with, Horswill’s explanation of the imperative model is still useful and informative insofar as it helps illustrate how computation can approximately simulate organic processes, pulling back some of the irrational, intuitive objections that separate computation and digital machines from ‘natural machines’ and processes.
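
    As a rough illustration of what I mean by the imperative model "looping and approximating continuous processes" (my own toy sketch, with made-up constants), a simple loop can approximate a continuous analog process, such as an object cooling, by updating its state in small discrete steps:

        # Imperative-style approximation of a continuous process: Newton's
        # law of cooling, discretized into small time steps and updated in a loop.

        temperature = 90.0   # initial temperature of the object (deg C)
        ambient = 20.0       # ambient temperature (deg C)
        k = 0.05             # cooling constant (per minute), illustrative value
        dt = 0.1             # time step (minutes)

        for step in range(int(30 / dt)):                      # simulate 30 minutes
            temperature += dt * (-k * (temperature - ambient))

        print(round(temperature, 2))   # close to the analytic value 20 + 70 * exp(-1.5)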

    ReplyDelete
    Replies
    1. I also found Horswill’s imperative model useful, not only in providing a less restrictive understanding of computation, but also in providing some kind of explanation/justification for the rise of computationalism. Specifically, as mentioned above, simulation and the Strong Church/Turing Thesis account for (approximating and replicating) continuous processes. It’s interesting to me that while behaviorism was falling out of favor and interest in understanding the black box was on the rise, computationalism was developed as a potential solution, with key notions like behavioral equivalence and the functional model’s treatment of input/output aligning quite well with behaviorism. Following rules and the potential for simulation provided some kind of answer to the black box; however, I think this justification comes across as forcing the wrong jigsaw piece to finish the puzzle.

      Delete
  23. I just had a question about the midterm material: Prof Harnad explained while we were going over midterm answers that implementation-independence is not a strength of computationalism, but I didn't quite catch why that was.

    ReplyDelete
    Replies
    1. Eli, the question asked about the strengths and the weaknesses of computation as an explanation of cognition.

      The strengths are the power of computation (which seems almost mental), the Strong C/T Thesis (= Weak AI) that computation can simulate or model just about anything, the successes of neural nets.

      The weaknesses are that symbols are ungrounded squiggles and squoggles, unconnected to (what can be interpreted as) their referents, and sensorimotor and analog function is not computational.

      Implementation-independence is a property of computation, but neither a strength nor a weakness for computation as a candidate explanation of cognition.

      (Some computationalists wrongly think that the implementation-independence of computation is an insight into the hard problem: they think that the reason the "mind/body" problem seems hard, but isn't, is that cognition occurs at the software "level", so it's a mistake to look for the "mind" at the hardware level.)

      (Implementation-independence is, on the other hand, a weakness for computationalism (the theory that cognition is just computation, and that T2 is evidence of this) because Searle uses implementation-independence ("Searle's Periscope") to show that implementing the T2-passing computation would not produce understanding. So that's the "soft underbelly" of computationalism. If put that way, implementation-independence could be interpreted as a weakness of computation as an explanation of cognition. But as far as I can tell, no one put it that way on the mid-term; if they had, it would have gained points, not lost them.)

      Delete

Opening Overview Video of Categorization, Communication and Consciousness
