lectures256p3


Published on December 7, 2007

Author: Arley33

Source: authorstream.com

Neuroscience (Phil/Psych 256, Cameron Shelley)

Overview
- The central thesis: thinking is like computation
- Why did Turing associate intelligence with a general-purpose computer?
- Neuroscience raises questions for this paradigm:
  - Is the thinking/computation analogy useful as a guide to understanding human cognition?
  - Is what the brain does usefully viewed as a kind of computation?

Hardware vs. software
- Churchland and Sejnowski: "The analogy between levels of description in a conventional computer and levels of explanation in nervous systems may well be profoundly misleading."
- Do the distinctions we apply to computers also apply to the brain?
- The most fundamental distinction for computers: hardware vs. software
- For brains, the levels are: molecules, membranes, cells, circuits, networks, maps

Marr on vision
- Marr (1945–1980) proposed a three-level framework for describing vision:
  - Computational: input/output functions
  - Algorithmic: representations and procedures
  - Implementation: device design
- On this sort of approach, implementation can be treated as an afterthought

Evolution of finger motion
- The brain is a product of evolution, not design
- An engineer would design a motor cortex with a somatotopic map: one area for each finger, with adjacent fingers in adjacent areas
- Schieber and Hibbard (1993): individual finger movements require more neural activity than whole-hand movements
- Whole-hand movements were more essential to ancestral monkeys

Brain maps
- A top level of description: brain maps
- Consider the visual cortex (Felleman and Van Essen, 1991)
- The visual areas are not contiguous

Brain maps (continued)
- Consider a "subway" map of the visual cortex (Felleman and Van Essen, 1991)
- Expected features:
  - Regions are roughly hierarchical
  - Depth corresponds roughly with abstraction
- Unexpected features:
  - Regions are densely interconnected
  - Most connections are reciprocal
  - Downward connections outnumber upward ones

Brain scans
- Neurons that work harder consume more oxygen
- Scanners can detect oxygen uptake, e.g., PET (1973) and fMRI (1993)
- They use a subtraction method
- E.g., Damasio et al. (1996) used PET to identify inferotemporal cortex areas associated with name categories

Brain scans (continued)
- The same method is employed with fMRI
- E.g., Kanwisher et al. (1997) used fMRI to associate the 'FFA' with face recognition, as opposed to the 'PPA'
- Gauthier and Tarr (2000) used fMRI to dispute this conclusion

How neurons represent
- Early connectionism focused on firing rates of neurons and ignored firing patterns, e.g., "10100" vs. "00011"
- Neurons might respond to patterns by synchronization
- Spiking or pulse networks represent firing patterns, e.g., LISA (Hummel and Holyoak, 1998)

Electroencephalograms
- EEG records the frequency and intensity of electrical activity in small brain areas
- Event-related potentials (ERPs): pair a cognitive task with an EEG (see the sketch below)
- E.g., Hillyard et al. (1973) used ERPs to determine when attention (to one ear or the other) begins (ca. 90 msec)
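The ERP method mentioned above depends on averaging the EEG over many repetitions of a task, so that activity time-locked to the stimulus stands out from background noise. A minimal sketch of that averaging idea, using entirely synthetic data (the deflection starting near 90 ms is simulated for illustration, not a re-analysis of Hillyard's experiment):

```python
import numpy as np

# Synthetic illustration of ERP averaging (not real EEG data).
# Assume 200 trials of 300 ms sampled at 1 kHz, with a small response
# beginning around 90 ms after the stimulus, buried in noise.
rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300          # 300 samples = 300 ms at 1 kHz
t = np.arange(n_samples)                # time in ms

response = np.where(t > 90, 2.0 * np.exp(-(t - 130) ** 2 / 800.0), 0.0)
trials = response + rng.normal(0.0, 3.0, size=(n_trials, n_samples))

erp = trials.mean(axis=0)               # averaging cancels most of the noise
onset = t[np.argmax(erp > 1.0)]         # crude estimate of when the effect begins
print(f"Estimated onset of the simulated attention effect: ~{onset} ms")
```

The single-trial traces are dominated by noise; only the trial average (the ERP) reveals when the task-related activity begins.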
How molecules matter
- Connectionism has focused on the electrical activity of neurons
- Chemicals affect neuron function, e.g., caffeine, which blocks adenosine receptors
- Three kinds of neuronal signaling:
  - Autocrine: a cell secretes molecules to its own receptors
  - Paracrine: secretes molecules to neighbours
  - Endocrine: secretes molecules to far-away cells

Impact on Cognitive Science
- Neuroscience has produced some surprises
- Possible responses include:
  1. Maintain the classical view: brain details concern implementation only
  2. Abandon the classical view: forget about symbolic representations, etc.
  3. Reconcile the views: research requires negotiation between symbolic and brain-centered accounts of cognition
- Churchland and Sejnowski prefer (2); Thagard seems to prefer (3)

Discussion questions
- Is it appropriate to say that the brain contains mental representations? Of what kind? Explain.
- Is it appropriate to describe brain processing as computation? Explain.
- In the light of modern neuroscience, is the analogy between thinking and computation profoundly misleading? Explain.
- In the light of modern neuroscience, are there good reasons to persist in speaking of knowledge in terms of high-level mental representations? Explain.
- Discuss the impact of neuroscience on the methods of the classical Cognitive Science paradigm. How should cognitive scientists respond to this challenge? Explain.

Emotions (Phil/Psych 256, Cameron Shelley)

Overview
- If you had to design an intelligent robot, would you include emotions?
  - Contribution
  - Evolutionary origin
  - Predictability

Overview (continued)
- Classical Cognitive Science takes no notice of emotions
- Yet the problem has always been around: "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain..."
- Turing's responses:
  - The statement begs the question: how do we know that an intelligent computer will not experience emotions?
  - "I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper."

Emotions
- Emotion-laden categories:
  - Basic emotions: happiness, sadness, anger, disgust, fear, and surprise
  - Complex emotions: love, shame
  - Feelings: hunger, calm
  - Moods: apathy, depression
  - Personality traits: shyness, melancholy
- How would you categorize the following emotion terms? Aggressiveness, anxiety, boredom, déjà vu, dejection, interest, joy, loathing, lust, optimism, resignation, revulsion, terror, whininess

Dimensions of emotion
- Phenomenological quality/hedonic tone
- Valence: positive or negative
- Physiological symptoms
- Action readiness
- Emotions are more transient than moods; perhaps moods are just emotions that do not easily resolve
- Traits are not emotions but dispositions

Emotions vs. feelings
- Emotions are hard to distinguish from feelings
- E.g., are tense, fatigued, alert, and calm feelings?
- Two views of emotion:
  1. States of mind (points in an emotion space)
  2. Kinds of experiences (feelings)
- Perhaps (2) is consistent with the distinction between basic and complex emotions

Emotions and knowledge
- Emotions are appraisals (Oatley): they compare the current situation to goals, concerns, and plans
- Basic emotions relate to basic goals

Appraisals
- Complex emotions: basic emotions attached to cognitive states
- The cognitive states involved are quasi-propositional: I am <emotion> that <proposition>.

Emotions and the body
- Why are emotions feelings and not simply concepts? Why visceral rather than visual (e.g., loathing)?
- Focus on the physiological dimension of emotion: e.g., anger feels hot, fear feels shaky
- The James-Lange theory (ca. 1885): we meet a bear and tremble, and because we tremble we feel afraid

Evidence for James-Lange
- The Blatz trapdoor experiment (1925)
- Facial expressions prompt feelings (Ekman 1979), e.g., smiling increases happiness
- We describe emotional experience in bodily terms, e.g., "I was deeply moved"
- Problems include:
  - Deafferented people (and cats) still have emotions
  - The mapping between bodily states and emotions is vague; e.g., why do fear and excitement not feel the same?

Emotions and the brain
- Perhaps the brain has special emotion hardware
- The Cannon-Bard theory (ca. 1930): the limbic system produces emotions; the thalamus signals both body and brain
- Delgado: surgical evidence for the importance of the amygdala; he tamed a bull with a remote control (1965)!
- Is the amygdala an "emotional computer" (LeDoux 1993)?
- Problem: we can learn emotional responses from experience, e.g., a taste for spicy food (Rozin and Schiller, 1979)

The somatic marker hypothesis
- Emotions result from the attachment of somatic (body) images to cognitive states via the limbic system (Damasio 1994)
- Accounts for the involvement of the limbic system
- Makes sense of basic vs. complex emotions
- Helps to account for hedonic tone
- Embarrassed to miss the exam?
  - A somatic image represents the hot feeling
  - A proposition (?) represents the cause
  - The limbic system facilitates the attachment

Emotions and reason
- The ventromedial cortex connects limbic and higher cortical regions
- Damage to the VM results in loss of rational decision-making ability
  - E.g., E.V.R. (surgical lesion, 1984); Phineas Gage (railroad worker, 1848)
- Implications:
  - Emotions result from the interaction of brain regions
  - We need emotions to be rational

Computing emotions
- Emotion nodes could be added to a localized network
- ITERA (Nerb and Spada, 2001) models how people respond to news of environmental disasters, e.g., the Exxon Valdez spill vs. the Indian Ocean tsunami
- Problem: not neurologically plausible

Computing emotions (continued)
- The organization of brain regions could be modeled
- GAGE (Wagar and Thagard, 2004) integrates (synchronizes) cognitive and emotional information and responds appropriately when 'lesioned'
- Plausibility:
  - The configuration of areas models the configuration of brain regions
  - Connections are bidirectional
  - It reproduces appropriate behavior

How bodies matter
- Neither model includes a representation of the body
- Can such models account for emotions as feelings?
- A body could be simulated, but the importance of a body remains unexplained
- Perhaps we need to consider the action readiness of emotions, e.g., anger primes us to lash out
- Emotions would be pointless without bodies to direct

How bodies matter (continued)
- The kind of body you have is crucial to the kinds of experiences you have
- Is an actual, human-like body required to simulate human emotions, then?
- Consider Kismet (Breazeal)

Discussion questions
- Could a computer ever experience emotions? The same emotions that you do? Explain.
- How do emotions contribute to your performance on important cognitive tasks?
- Could a robot be intelligent without having emotions?
- Is there anything we can learn from reproducing emotions in a robot that we cannot learn from simulating them on a computer? Explain.

Consciousness: Metaphysics (Phil/Psych 256, Cameron Shelley)

Overview
- Why can you not observe the mental experiences of your neighbors?
- Consciousness seems to be private: how can it be private yet real?
- How can we know that any two people have the same conscious experiences?
- How can you have both observable and unobservable aspects to your nature? The mind-body problem
- If consciousness is more than a matter of states of mind, then how can Cognitive Science account for it?

Wakefulness
- Consciousness vs. being asleep, comatose, or dead
- Comes in degrees
- Wakefulness can be judged by public criteria, e.g., responses to speech or prodding
- These criteria are not foolproof: paralysis, locked-in syndrome

Qualia
- Usually, people who are awake have conscious experiences, i.e., qualia
- Qualia: what it is like to be you, e.g., seeing a fire truck, feeling hungry, being in pain, being angry
- Ineffable: difficult to capture in words ("Man, if you've got to ask, you'll never know.")
- How can two intelligent beings be in a room, yet one has qualia and the other not?
- Could a human be intelligent and yet not experience qualia? E.g., zombies!

Self-consciousness
- We can also be self-conscious:
  - Proneness to embarrassment
  - Self-detection: awareness of bodily events
  - Self-monitoring: self-detection extended to the past (memory) and future (imagination)
  - Self-recognition, e.g., Gallup's mirror experiments
  - Awareness of perspective: different people have different beliefs and limitations
  - Self-knowledge: you as the hero of a personal narrative, updating some mental representations of oneself
- Do qualia contribute to self-identity as such?

The mind-body problem
- René Descartes (1596–1650): What kind of thing is a mind? A part of the body?
- Mind and body share few qualities, yet they do affect each other
- Main problem: how can a body have conscious experiences?
- Descartes's solution: substance dualism
- Problems:
  - How do mind and body interact? The pineal gland?
  - The principle of conservation of energy is violated

Functionalism
- Focus on the computational level, i.e., on the function(s) of consciousness
- Functionalists have focused on self-consciousness:
  - Consciousness is the mind's "operating system" (Johnson-Laird 1984)
  - An operating system tracks the computer's state, controls access to its internal resources, and regulates interactions with the world
  - Conscious thoughts track our status, assign cognitive resources to current projects, and frame actions to satisfy goals
- Pros: facilitates multiple realization
- Cons:
  - Facilitates rampant multiple realization
  - Does not account for qualia

Property dualism
- Perhaps qualia should come first
- The fading qualia argument (Chalmers 1996):
  1. You are conscious now
  2. Your android double would not be conscious
  3. If you were changed into your android double, consciousness would not just vanish, nor would it fade away
- How do we resolve the contradiction between (2) and (3)?

Property dualism (continued)
- Material objects have two kinds of properties, physical and mental
- The mental properties emerge in physical objects of sufficient functional complexity
- The study of mental properties must be added to science, like electrical charge to physics (Maxwell, ca. 1850)
- Property dualism preserves strong intuitions and some advantages of functionalism and substance dualism
- It also preserves some of their difficulties:
  - Like substance dualism, it is unfalsifiable
  - Like functionalism, it admits rampant multiple realizability

Identity materialism
- Conscious mental states are simply certain brain states (Place 1959)
- Advantages include:
  - Conscious states do physical work
  - No zombies: a physical duplicate must also be conscious
- Does it explain qualia?
  - Perhaps brain states and conscious mental states are just two ways of naming the same thing; compare temperature and mean kinetic energy
  - "Brain state" is simply impersonal, whereas "conscious mental state" suggests re-enactment
- Problem: not enough multiple realizations?

Consciousness: Science (Phil/Psych 256, Cameron Shelley)

Overview
- Three basic senses of consciousness: wakefulness, qualia, self-consciousness
- Qualia present the biggest obstacle
- Scientific research on consciousness is both functionalist and materialist
- Remaining challenges:
  - To what extent do current theories explain consciousness?
  - The problem of qualia (set aside)

Wakefulness
- EEG studies have revealed how brain activity varies with wakefulness
- Wavelength increases through sleep stages 1 to 4, except in REM sleep (SP)

Wakefulness (continued)
- Some brain regions play special roles in wakefulness
- When falling asleep:
  - Signals from the brainstem to the thalamus diminish
  - The thalamus ceases to relay sensory inputs to the higher cortex
  - Thalamic neurons enter an oscillating activity pattern
- The thalamus partially reactivates in REM sleep

Wakefulness and rhythms
- Chemical rhythms also correlate with wakefulness (Thagard)
- As glycogen stores are depleted, adenosine levels increase, causing drowsiness
- So there is a chemical ebb and flow associated with wakefulness
- Electrical and chemical rhythms are not obviously representational or procedural in nature
- Analogy: if you want to understand traffic flow, the kinds of cars used and their fuels are not the whole story
- Dynamical systems account of mind later...

Neural correlates of consciousness
- Assumptions of NCC research:
  - Set aside qualia
  - Concern for both localization and patterns of neural activity correlated with conscious experiences
- Discussed by Crick and Koch (1990)
- Why are we conscious?
  - To produce the best possible interpretation of the visual scene, and
  - To make this interpretation available to brain regions responsible for voluntary action
- Consciousness has a special survival value

Cognitive mechanisms
- Short-term memory
  - A small amount of information stored for a fraction of a second
  - Accounts for the fluidity of conscious experience
- Attention
  - Visual attention directs gaze to locations or objects
  - Results from competition among coalitions of neurons

Neural structures and patterns
- Crick and Koch identify visual associative and motor (executive) cortices as sites of NCCs
- Short-term memory neurons are organized into a circuit activated by a circular pattern of signals
- Feedback circuits create the opportunity for constant renegotiation of representations of the situation

Blindsight
- Blindsight: people with no visual experience can perform simple visual tasks, e.g., people with extensive V1 damage
- Explanation: vision occurs in two streams
  - Dorsal stream: egocentric, non-conscious
  - Ventral stream: allocentric, conscious
- Blindsight patients still get information from the dorsal stream
- Did you know that you have blindsight?

Binocular rivalry
- Binocular rivalry: a different image is presented to each eye; the percept switches from one image to the other
- NCC prediction: correlated neurons in frontal associative cortex
- Prediction confirmed by Logothetis et al. (1990s):
  - Activity in V4 and MT was only 34% correlated with the percept
  - Activity in STS and IT (integration cortex) was 90% correlated

Global workspace theory
- The NCC approach searches for correlates, supplemented with functional claims
- Global workspace theory (Baars 1988) defines computational/algorithmic functions and supplements them with neurological research
- Consciousness is a kind of "spotlight" in working memory
- Working memory is like a blackboard on which different processes share information

Theater of the mind
- Working memory is like a stage; attention acts like a spotlight
- Audience members are experts at different jobs
- Components of the theater include:
  - Working memory, with the spotlight of consciousness
  - Information entered by sensory processes
  - Information read by action (motor) processes
  - Various unconscious experts that interact with working memory, e.g., long-term memory, knowledge of language, automatic behaviors
  - Context, which also affects interpretation: self-identity, intentions, expectations, and so on

Aspects of GW theory
- The operation of working memory tends to be serial instead of parallel
- More flexible, deliberate behaviour results
- Functions of consciousness on GW theory include:
  - Negotiating the best way of representing the world
  - Keeping representations active long enough for learning
  - Allowing for better critical thinking by mobilizing a variety of cognitive expertise

Cognitive evidence for GW
- Accounts for the automatic/stereotypic nature of some unconscious behaviours, e.g., Hilbert's inappropriate retirement
- Conscious experience has a serial feel; seriality is imposed by resource limitation
- The classical cognitive science approach, e.g., rules and search, models only an atypical mode of neural processing

Challenges
- Purpose: integration of data for the direction of action
- The problem of rampant multiple realizability: is IDA conscious?
- Data-driven: theories are stated somewhat vaguely
- Do the theories fit with observations? Titchener circles: should we be conscious of the actual diameter of the inner circles when reaching for them?

Discussion questions
- Thagard (p. 187) claims that "consciousness can plausibly be understood as a computational-representational process". What does he mean? Do you agree? Explain.
- Wakefulness is associated with points in electrical and chemical rhythms of neuronal activity. What is the significance of these rhythms? Do they constitute a problem for functionalist accounts of consciousness? Explain.
- Crick and Koch, and Baars, agree that consciousness functions to make human behavior more intelligent. How so?
- What is the functionalist view of consciousness? What are its strengths and weaknesses relative to the materialist view?
- What would it take to make a conscious robot?

Embodiment (Phil/Psych 256, Cameron Shelley)

Overview
- How would you program a robot to play outfield in a baseball game? Via the classical approach, like chess?
- Assumptions of the classical paradigm:
  1. The connection between perception and action is not immediate
  2. Extreme demands on world knowledge

Overview (continued)
- (1) is known as the perceive-think-act cycle; (2) requires complete and perfect world knowledge
- A better design: a closer connection of perception with action, and less reliance on knowledge
- Compare with human performance
- We need a more interactionist view of intelligence: direct perception, image schemata, robotics

Direct perception
- Perception is often viewed as a kind of "unconscious inference", i.e., abductive
- This explains bistable percepts, e.g., the Necker cube
- Challenged by J. J. Gibson (1904–1979)
- Pilots gain 'knowledge' through texture, relative motion, etc.
- Optic flow is produced by motion
- Perception is direct: unmediated by mental representations

Affordances
- Affordances: opportunities for action, e.g., door handles, plates, etc.
- Perception includes affordances, suggestions about how to interact with things
- Suppose you are jogging and encounter a hanging apple. What do you see?
- Suppose you are jogging to breakfast. What do you see?
Affordances (continued)
- When we perceive, we see not just categories but opportunities for interaction: "That apple looks delicious", "That apple looks disgusting"
- We cannot literally see tastes, but we do anticipate how we might act, e.g., eat

Direct perception and knowledge
- Gibson closes the gap between perception and action
- What about expertise? A matter of refining perception, e.g., a concert pianist vs. a novice
- Organizing body/world interactions is important for intelligence
- Concepts do allow for associations between categories and actions, e.g., scripts

Metaphor
- The body can serve as a source of cognitive information:
  - "I was bowled over by your suggestion."
  - "That argument is hardly compelling."
  - "We have too much momentum to quit now."
  - "I got carried away."
  - "Once she gets rolling, she'll never shut up."
- What do these expressions have in common?

Image schemata
- Metaphors reveal the importance of image schemata: recurrent patterns relating perceptions to actions (Johnson 1984)
- E.g., COMPULSION: how forces impinge on the body
  - Physical forces, e.g., airplane flight, tectonic plates
  - Social forces, e.g., "Frank's friends pushed him into having his ears pierced"

Image schemata and knowledge
- COMPULSION is not a concept to be applied; it is a neural pathway trained through bodily experience (Lakoff and Johnson 1999), e.g., throwing balls, opening doors, avoiding a fall
- Disembodied concepts are a mistake:
  - They lack the same significance
  - They duplicate available information

Robotics
- The perceive-think-act cycle model
- Consider Shakey (SRI, ca. 1970): it employed an explicit world model and rearranged objects in a room
- Problems: it often failed at its goals and took too long
- The classical-style robot is unresponsive

Brooks on robots
- Intelligent robots must be situated: "The world is its own best model"; close the perceive-act gap
- They must also be embodied: expertise at disembodied activities, e.g., chess, does not typify intelligence
- Expertise at interaction means narrowing the perceive-act gap and having the right kind of body

Subsumption architecture
- Behaviour is the result of perception/action controllers
- Complexity of behaviour is obtained by adding layers of controllers
- No executive unit! (A minimal sketch follows below.)
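To make the layered-controller idea concrete, here is a minimal, hypothetical sketch of a subsumption-style control loop. The sensor fields and behaviours (avoid, wander, cruise) are invented for illustration; Brooks's actual robots used networks of augmented finite-state machines rather than code like this.

```python
from dataclasses import dataclass

@dataclass
class Sensors:
    obstacle_ahead: bool   # hypothetical sensor readings
    bored: bool

# Each behaviour maps sensor readings directly to an action: no world model,
# no central executive. Higher layers subsume (override) lower ones.
def avoid(s: Sensors):
    return "turn-away" if s.obstacle_ahead else None

def wander(s: Sensors):
    return "random-walk" if s.bored else None

def cruise(s: Sensors):
    return "go-forward"   # default lowest-level behaviour

LAYERS = [avoid, wander, cruise]   # highest priority first

def control_step(s: Sensors) -> str:
    # The first layer that produces an action wins; lower layers are suppressed.
    for behaviour in LAYERS:
        action = behaviour(s)
        if action is not None:
            return action
    return "idle"

print(control_step(Sensors(obstacle_ahead=True, bored=False)))   # turn-away
print(control_step(Sensors(obstacle_ahead=False, bored=False)))  # go-forward
```

The design point is that apparently complex behaviour emerges from simple perception-action layers interacting with the world, with higher layers suppressing lower ones and no central planner.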
Cog
- Cog is a recent example (1994–2003): a human-like upper body designed to imitate actions
- Questions for Cog include:
  - Does your design scale up?
  - Is your creator's notion of intelligence too broad?
  - Are you a humanoid insect?

Discussion questions
- Brooks suggests that the complexity of human behavior may be inherent in the complexity of the environment (p. 406). What does he mean? Do you agree? Explain.
- What is the perceive-think-act cycle? In what ways should the think part be reduced in light of the embodiment challenge to Cognitive Science?
- Thagard suggests (p. 196) that increased use of imagery in cognitive models would help to address the embodiment challenge. How so? Do you agree? Explain.
- Compare the classical view of intelligence as expertise with the embodied view of intelligence as the ability to interact. What are the main strengths and weaknesses of each view? To what extent can the two views be reconciled? Explain.

Dynamical systems (Phil/Psych 256, Cameron Shelley)

Overview
- Consider Kelso's (1984) experiments on finger motion
- Antiphase motion is stable only at low frequency. Why?
- The motion is governed by coupled oscillators
- Perhaps the brain is a dynamical system, not a computational system

The CRUM governor
- How would you maintain constant speed in a steam engine?
- The CRUM governor: attach a computer
  1. Measure the current flywheel speed v
  2. IF v < desired speed THEN calculate a wider valve setting s (admit more steam)
  3. IF v > desired speed THEN calculate a narrower valve setting s (admit less steam)
  4. Open the valve to setting s
  5. Return to 1
- Salient features of the CRUM governor include:
  - Explicit, symbolic representations
  - It works via calculations
  - It is controlled via a perceive-think-act cycle

The Watt governor
- Watt's (1736–1819) approach was different: attach the flywheel to the valve via a lever
- An appropriate configuration assures constant speed under diverse circumstances

Two models of cognition
- Van Gelder (1995): the Watt governor should replace the digital computer as a model of human cognition
- Intelligence is a matter of appropriate interaction
- Computer vs. governor:
  - Atemporal vs. temporal
  - Representational vs. non-representational
  - Discrete vs. continuous
- The governor is coupled: its evolution is determined by a continuous interaction of its parts
- A digital computer is not: its evolution is determined by instructions that abstract away from its parts and their interactions

Dynamical systems theory
- Dynamical system: a set of interacting parts that changes over time
- State: the state of its components at time t
- A linear system, e.g., uniform motion: x(t1) = x(t) + s·(t1 − t)
- A non-linear system, e.g., predators and prey (Lotka and Volterra, 1920):
  - dx/dt = (A − By)·x   [x = prey]
  - dy/dt = (Cx − D)·y   [y = predators]
- The evolution of the system depends on the initial settings of the variables (x, y) and the constants (A, B, C, D)

Feedback
- Positive feedback pushes the system in the same direction as before, e.g., the term Cx in the predator equation
- Negative feedback pushes the system in a different direction than before, e.g., the term −By in the prey equation

Attractors
- Attractor: a state-space trajectory the system tends to fall into
  - Point: a single state the system does not leave, e.g., a simple oscillator, predator/prey (sometimes)
  - Periodic: a set of states the system cycles through, e.g., an 'ideal' pendulum, predator/prey (sometimes)
  - Aperiodic (strange): a collection of similar but distinct trajectories, e.g., weather patterns (Lorenz 1963), predator/prey (sometimes)
- Phase shift: a transition between attractors
- (A numerical sketch of the predator-prey equations follows below.)
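The predator-prey equations on the "Dynamical systems theory" slide can be explored numerically. Below is a minimal sketch using a crude Euler integration; the constants and initial populations are arbitrary choices made only for illustration. For settings like these the populations rise and fall cyclically rather than settling, the periodic-attractor behaviour listed above.

```python
# Euler integration of the Lotka-Volterra equations from the slide:
#   dx/dt = (A - B*y)*x   [x = prey]
#   dy/dt = (C*x - D)*y   [y = predators]
# The constants and initial values are arbitrary illustrative choices.
A, B, C, D = 1.0, 0.1, 0.075, 1.5
x, y = 10.0, 5.0            # initial prey and predator populations
dt = 0.001                  # small time step for a crude Euler scheme

samples = []
for step in range(40000):   # simulate 40 time units
    dx = (A - B * y) * x
    dy = (C * x - D) * y
    x, y = x + dx * dt, y + dy * dt
    if step % 5000 == 0:
        samples.append((x, y))

# The populations oscillate (a roughly periodic attractor) instead of
# converging to a single fixed value.
for prey, predators in samples:
    print(f"prey={prey:6.1f}  predators={predators:6.1f}")
```

Changing the constants or the starting populations changes the amplitude and period of the cycle, which is the sense in which the system's evolution depends on its initial settings.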
Olfaction
- The prevailing view of perception: sensory data are processed via somatosensory maps
- E.g., Wilder Penfield's (1958) neurosurgical observations

Rabbit olfaction
- Skarda and Freeman (1991) monitored the rabbit olfactory bulb while presenting stimuli, e.g., carrot, predator
- Conclusions about olfaction:
  - It is a global property of the olfactory bulb
  - The neural pattern depends on context, e.g., reward
  - Basal activity in the olfactory bulb is chaotic and becomes periodic when separated from the cortex
- They constructed a model based on differential equations
- Olfaction is not the result of mental representations and computations; the same may hold for the entire brain...

Emotions and relationships
- Spousal conversation is a dynamical system (Gottman et al. 2003)
- Influence functions: differential equations that model how emotional expression affects each spouse
  - Constants: the uninfluenced steady state
  - Positive feedback: repair
  - Negative feedback: dampening
- Emotional tones appear as attractors; transitions are phase shifts
- Some couples are volatile, others stable; both kinds can succeed in marriage

DST and Cognitive Science
- Is the DST position testable? Model construction requires enormous simplifications in the differential equations (Eliasmith)
- Dynamical systems exist along a continuum of programmability (Clark 1997)
  - Partially programmable: minimal instruction sets
  - These fall between PCs and Watt governors on the continuum
- Recurrent ANNs are instructively viewed as dynamical systems (Elman 1995): they are temporal, continuous, and coupled, but representational

Discussion questions
- What, in Van Gelder's view, is wrong with the CRUM account of knowledge? Do you agree with his assessment? Explain.
- How much of driving a car involves mental representations, and how much involves abilities concerning how to interact with the world? Explain.
- Compare dynamical systems and connectionism as accounts of cognition. Do you agree with Van Gelder that the connectionist account is inadequate? Explain.

Intentionality (Phil/Psych 256, Cameron Shelley)

Overview
- Does your dictionary know anything?
  - "cat": a carnivorous mammal (Felis catus) long domesticated as a pet and for catching rats and mice; b: any of a family (Felidae) of carnivorous, usually solitary and nocturnal mammals (as the domestic cat, lion, tiger, leopard, jaguar, cougar, wildcat, lynx, and cheetah)
- Does an electronic dictionary know "cat"?
- What is the difference between you and the computer? Quantitative or qualitative? Intentionality?

Intentionality
- Brentano (1838–1917): the aboutness of mental states
- When I look at and recognize a cat, my mental state is in some way about the cat
- Intentionality cannot be a simple matter of cause: mental states may be about imaginary things, e.g., unicorns, gold mountains, etc.
- Problem for the classical CogSci paradigm: having a mental representation about something seems inadequate for intentionality

Dasein
- Husserl (1859–1938) advocated phenomenology: introspection into mental states as such
- Heidegger (1889–1976) revised phenomenology: combined it with intentionality; it reveals the fleetingness of existence
- Dasein: "being there" (the mind as an instrument for being)
- Knowledge is knowledge-how, not knowledge-that
- Knowledge is holistic; it cannot be made explicit in symbols

In the zone
- Dreyfus (What Computers Can't Do, 1972)
- Conscious experience changes with experience, e.g., chess experts come to perceive the board differently than novices
- The need for explicit rules fades with experience
- Intelligence is a matter of the intentional qualities of mental states
- Have you ever been "in the zone"? Is that what everyday thinking is like?

The Heideggerian account
- Knowledge is that which:
  - Facilitates interaction with the world (interactionism)
  - Cannot be captured in discrete units (holism)
- Intentionality seems automatic for knowledge-how
- How does holism contribute to intentionality? A script contains knowledge-how, right?
The Chinese room
- Searle: something true of a brain state cannot be true of a digital computer's state: mental states have meaning (semantics)
- Raised against "strong AI", the claim that a computer simulation of thinking itself thinks (not the case with other simulations)

The argument
- A room containing data sheets and an instruction book in English, plus an English-speaking person
- Papers are input through a slot; the person follows the instructions and outputs papers through the slot
- The person does not understand Chinese, although the room passes the Turing test
- So no digital computer understands what it does; understanding is not a matter of behaviour or configuration

Replies
- Systems reply (functionalist): the person + room understands Chinese
  - Searle's response: let the man internalize the rules; he still does not understand Chinese
- Robot reply (interactionist): expose the symbols to world feedback
  - Searle's response: this concedes that strong AI is false; interaction is not sufficient; imagine the room on wheels, etc.

More replies
- Brain simulator reply (holist): apply neural representation; simulate the activity of a Chinese speaker's brain
  - Searle's response: replace the paper and ink with pipes and valves; still no understanding of Chinese
- Combination reply: an interactive, neurally plausible robot; meaningfulness comes from interactionism plus holism
  - Searle's response: neither is necessary nor sufficient for meaningfulness; the robot is only an ingenious mannequin

The other minds reply
- Searle's objection is stronger than the Heideggerian one. Will anything satisfy Searle?
- Reply: we standardly judge knowledge by behaviour in people; just apply this standard to machines
- Searle's response: behavioural criteria are not relevant! An intentional state is a meaningful state, period

Searle's positive program
- So, what is adequate? Two options:
  - Dualism: meaning is non-physical
  - Materialism: meaning is a brain state
- Searle espouses biological naturalism: "What matters about brain operations is not the formal shadow cast by the sequences of synapses but rather the actual properties of the synapses."

Searle's program (continued)
- What do brains have that the circuits lack? No one is sure
- Searle: brains have causal powers
- Does this mean that brains can cause physical events that computers cannot? Such as?
- Perhaps Searle is a dualist after all
- The Chinese room argument may simply reveal our biases about organisms and computers

Discussion questions
- What is holism about knowledge? Is there any kind of knowledge that is not holistic? Explain.
- What is intentionality? Could a computer have it? A robot? Explain.
- Describe Searle's Chinese room experiment and its intended conclusion. Describe what you think is the most potent reply and Searle's response to it. Are you convinced by the argument? Explain.

Externalism (Phil/Psych 256, Cameron Shelley)

Overview
- How do you find addresses in a strange town?
- Externalism: intelligence involves external resources; knowledge involves external representations
- Internalism: knowledge and intelligence are strictly a matter of what is between the ears of individuals

Externalism
- Cognitive technology: objects adapted to enhance knowledge or intelligence
- Consider writing (Plato):
  - Theuth: an enhancement to memory
  - Thamus: it produces forgetfulness and mere reminding
- Social context may also enhance knowledge and intelligence, e.g., in science

Internalism
- Internalism comprises:
  - Epistemic individualism: knowledge in a group is the sum of the knowledge of its members
  - Reductionism: the action of a group is the sum of the actions of its members
- Are the following things known: the average Earth-moon distance, the first occurrence of the cat, the most likely future oil price?
- Do collective actions have the same properties as members' actions? "No!" "Yes!"

Beehive relocation program
- When honeybees relocate, the group decides:
  - Scouts search the area and report results via the waggle dance
  - This gets other scouts to visit the site
  - Dancers compete to recruit uncommitted scouts
  - The first site over a threshold is selected by the swarm
- The group, not individuals, decides

Cognitive technology
- Expert bartenders improve performance by using differently shaped glasses (Beach 1988)
- We sculpt the environment to serve as a kind of extended memory store; in what ways do you do this?
- Cognitive technology can also increase intelligence, e.g., the use of diagrams (Slezak 1995)

Cognitive technology (continued)
- Cognitive technology is consistent with interactionism
- External stores and operations take advantage of our pattern-completing talents (McClelland et al. 1986), e.g., performing long division
- The computer may be the ultimate cognitive tool

Cognitive technology and externalism
- Cognitive technology is more than just situatedness: it involves active configuration of the world
- Does the mind extend beyond the body?
- Also, intelligence is not an individual quality (?)
- Consider symbol use among chimpanzees: does it raise our estimation of their intelligence? Or is the chimp+symbol system intelligent?

Social epistemology: Science
- The internalist view of science:
  - Reductionist: a claim is justified if the evidence should convince any competent scientist
  - Individualist: any competent individual is capable of determining the justification of a given claim
- Accepted by many philosophers, e.g., Popper (1902–1994)
- Rejected by others, e.g., Charles Peirce (1839–1914)

Externalism and science
- Thomas Kuhn (1922–1996) adopted an externalist view:
  - Scientists agree on basic claims and practices
  - New scientists apprentice and pick up common knowledge
  - Work is assessed collectively, as is promotion
- Influence of the paradigm:
  - Fit with the paradigm determines (?) acceptance of claims
  - Fit is determined by the community, not individuals

Does social context determine knowledge?
- Some of Kuhn's remarks suggest that the paradigm determines acceptance of claims
- E.g., positron tracks were seen before Dirac's 1928 prediction but only counted as observations of positrons after it
- Without the group, nothing counts as knowledge! True?
- Consider the Müller-Lyer illusion

Distributed cognition: Navigation
- In some cultures, sea navigation is the job of an individual (Hutchins 1994)
- On an aircraft carrier, it is the job of the whole crew
- How many people know how to dock the carrier? None!

Social networks
- The crew is like a connectionist network: each member is a node, and links are information channels (a minimal sketch follows the discussion questions below)
- Individuals are differentiated, though: each person is expert in a given domain and is related to others via a role
- The crew is organized hierarchically
- The crew can learn from experience

Discussion questions
- What is externalism about minds? What would Searle say about it? Would you agree? Explain.
- What is epistemic individualism? Is it a good account of scientific practice? Explain.
- What is reductionism about intelligence? Use some current examples to explain in what ways it is challenged by the existence of cognitive technology.
- What is distributed cognition? Use an example from your own experience to explain in what ways distributed cognition challenges CRUM.
- How would a CRUM robot be programmed to play soccer? How would you alter the design in view of the externalist challenge? Explain the implications of your answer for the CRUM view of human intelligence.
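Returning to the "Social networks" slide above: a minimal, hypothetical sketch of the idea that a crew can produce a result no single member knows how to compute. The roles, numbers, and the toy docking task are invented for illustration and are not drawn from Hutchins's study.

```python
# Hypothetical illustration of distributed cognition: each crew member knows
# only a local piece of the task, and the final decision emerges from passing
# partial results along role-defined channels, with no single member holding
# the whole answer.

def bearing_taker(landmark_angle_deg: float) -> dict:
    return {"bearing": landmark_angle_deg}                    # knows only the bearing

def plotter(bearing: dict, chart_offset_deg: float) -> dict:
    return {"heading": bearing["bearing"] - chart_offset_deg}  # knows only chart corrections

def helm(heading: dict, current_heading_deg: float) -> str:
    delta = heading["heading"] - current_heading_deg           # knows only how to steer
    return f"turn {'port' if delta < 0 else 'starboard'} by {abs(delta):.1f} degrees"

# Information flows along the crew's channels; the result is a property of the
# whole network of roles, not of any individual node.
order = helm(plotter(bearing_taker(47.0), chart_offset_deg=12.0), current_heading_deg=30.0)
print(order)   # turn starboard by 5.0 degrees
```

Each function stands in for a role with its own narrow expertise; swapping any one of them changes what the group as a whole can do, which is the sense in which the cognition is distributed.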
