
Published on January 3, 2008

Author: Savina


How (not) to Explain Concepts
Steven Horst, Wesleyan University

Preliminaries
- An early version of a paper; some parts perhaps not quite brought to term
- Use of slides: information in multiple modalities

Overview
- How not to explain the semantics of concepts: two familiar approaches that bark up the wrong tree
- The lineaments of a new account:
  - Continuity with animal cognition
  - The discrimination engine
  - Realization through neural nets
  - Modularity and incremental gains
- Philosophical payoffs

Part I: Two Familiar Problems with Accounts of Concepts

Problem 1: The "Logical" Approach
- Conceptual semantics can be handled in the same way as the semantics of predicates
- A "semantic theory" is a mapping from expressions in a language onto their extensions (e.g., D. Lewis in "Languages and Language")
- Tarskian version: direct assignment of primitive denotation, plus recursive rules for complex expressions

An Example: Fodor's Causal Covariation Account
- Basic idea: the semantic value of a "symbol in Mentalese" is its (characteristic) cause
- More formally: there is an asymmetric causal covariation relation between cows and symbols that mean 'cow', and this explains why 'cow'-symbols mean 'cow'

Problems
- At best, this is an explanation of meaning-assignments, not of meaningfulness:
  - The account (putatively) distinguishes things that mean cow from things that mean horse
  - It does not distinguish things that are meaningful from things that are meaningless -- causal covariation is pandemic
  - E.g., there is a causal covariation relation between cows and cowpies, but cowpies do not mean 'cow'
- Such a theory only explains meaning-assignments once meaning is already in the picture to begin with!
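The point that a Tarski/Lewis-style "semantic theory" is just a mapping can be made concrete in a toy sketch (illustrative Python, not from the talk; the names and extensions are hypothetical). Two equally well-formed extension-assignments are both perfectly good mappings, and nothing in either mapping itself explains why one rather than the other captures what the symbols actually mean:

```python
# Illustrative sketch: a "semantic theory" in the Lewis/Tarski style is
# just a mapping from expressions to extensions over a domain.
domain = {"bessie", "trigger", "daisy"}

# One candidate interpretation: 'cow' picks out the cows.
interp_1 = {"cow": {"bessie", "daisy"}, "horse": {"trigger"}}

# A deviant but equally well-formed mapping: 'cow' picks out the horse.
interp_2 = {"cow": {"trigger"}, "horse": {"bessie", "daisy"}}

def satisfies(interp, predicate, individual):
    """Tarski-style satisfaction: the individual is in the extension
    the interpretation assigns to the predicate."""
    return individual in interp[predicate]

# Both are perfectly good mappings; nothing *in* the mappings themselves
# says which one captures what the symbols mean.
print(satisfies(interp_1, "cow", "bessie"))  # True
print(satisfies(interp_2, "cow", "bessie"))  # False
```

The sketch specifies meaning-assignments, but it leaves meaningfulness entirely unexplained -- which is the talk's complaint.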
Generalization
- More generally, a mapping is not enough to explain semantics (i.e., semanticity)
- It specifies, but does not explain, meaning-assignments (cf. Simon Blackburn, Hartry Field)
- Mapping alone is weaker than meaning
- Mappings are cheap: there are indefinitely many possible "interpretations" of a language

Why is "Formal Semantics" Attractive?
- Twentieth-century attention to philosophy of language and semantics largely stems from the interests of logicians
- The special interests of logicians: truth, truth-preserving inference, completeness, consistency

This Leads to Odd Features of the "Languages" Logicians Talk About
- Only sentences with truth values are talked about (cf. Austin 1962)
- Desire for, or assumption of, bivalence
- Fuzzy predicates are problematic for the extensional approach
- A Tarskian definition is not possible for languages with indexicals, or with reference to expressions in the object language
- Linguistic change and idiolectic variation can be accommodated only by changes in, or differences between, entire languages

Historical Extremes
- Some Positivists called for "reform" of natural language
- Quine: I don't know what I mean by 'rabbit'
- Davidson: we each speak our own language (then what is English? How is communication possible?)
- It is amazing to linguists that these issues are largely ignored by philosophers

Limits of the Logical Approach
- The logical approach is not good for talking about non-assertoric utterances (nor about uses of concepts in things other than judgments)
- Many features of actual languages and concepts seem problematic:
  - Fuzziness/vagueness (predicates and concepts)
  - Indeterminacy (predicates and concepts)
  - Non-alethic felicity conditions (utterances/thoughts)
  - Context-dependence (Edidin)
  - Failure of bivalence, sorites paradoxes (statements/judgments)
  - Cartwright on scientific laws

Analysis of Problem 1
- The prevailing approach to semantics in analytic philosophy has been guided by the interests of logicians
- As if we were asking not "What is language like?" but "What would language have to be like if it were to accommodate certain virtues pertaining to truth and inference?"
- This is fine so far as it goes, but there are other possible theoretical perspectives:
  - Pragmatics/sociolinguistics
  - Ethology/animal behavior
  - Psychology
  - Evolution
  - Dynamic systems, cybernetics

Suggestion
- Set aside the logically-inspired approach
- Try other approaches
- See if things that were problematic become more transparent
- …I will try to implement this different approach in the second half of the talk

Second Problem: Too Much or Too Little
- Two basic kinds of approaches to concepts:
  - Rich views: look at concepts within the rich context of the human mind; hold that concepts are inseparable from other features of human mentation (consciousness, natural language, reasoning) -- Searle, Brandom, Blackburn, Wittgenstein
  - Reductive views: stress continuities with animal cognition, computation, or some other kind of system; reduce concepts to something else

Rich Views: Claims and Appeal
- Claims:
  - One cannot have concepts without other things in human mentation (e.g., consciousness, inference, natural language)
- Intuitive appeal:
  - It is not clear that one would call something a concept if we knew it lacked these other things
  - It is not clear how to individuate concepts semantically without these other things (e.g., could it mean 'bachelor' if one didn't infer 'male' or 'unmarried'?)

Rich Views: Problems
- Obvious continuities between human and animal cognition call for explanation (biological and behavioral)
- Things in animals seem concept-like:
  - Tracking kinds and variable properties: my cat seems to be able to tell dogs from mice, and animate mice from inanimate
  - Re-identification: my cat can identify some individuals (e.g., me)
  - Behavior cued to kind- and property-differences

Reductive Views
- Take some set of features of information-processing or animal cognition and treat these as an analysis of concepts in us. E.g.:
  - Concepts are "just" discriminative abilities
  - Thoughts are "just" symbolic representations
  - Concepts are "just" symbol types in a language of thought
  - Languages are just functions from terms to their extensions

Reductive Views: Problems
- It is not clear that our concepts would be what they are without inferential relations, self-reference, consciousness, and language:
  - (Fine-grained) semantic individuation is tied to inferential commitments
  - The role of the division of linguistic labor
- It doesn't seem right to say human concepts are just animal cognition "plus" an add-on: the phylogenetically older elements are transformed by being taken up into a new kind of system.
A Dilemma
- Neither rich nor reductive views seem wholly satisfactory
- This seems to present a choice between the idea that lower-level theories explain everything (reduction) or nothing
- Can one find a middle way which:
  - Stresses continuities with animal precursors of human thought
  - Gives some explanatory insight
  - Is compatible with the constitutive role that inferential, linguistic, and phenomenological features seem to play in human conceptuality?

A Way Out
- The explanation of concepts in terms of features continuous with animal cognition is not a philosophical analysis (in terms of necessary and sufficient conditions)
- It is scientific explanation involving idealization

Idealization
- Take a rich phenomenon (say, moving bodies -- dynamics)
- Idealize away from some set of factors that do in fact matter in vivo (e.g., the mechanical interactions involved in wind resistance, impetus)
- …to arrive at a more accurate understanding of what is left over (e.g., gravity)

[Diagram: the actual (noisy) trajectory of a projectile, influenced by electromagnetism and wind (mechanical force), idealized to Galileo's parabolic trajectory of projectiles]

Idealizations do not:
- Aspire to tell the whole story about a system
- Necessarily describe how things actually behave
- Provide an adequate basis for predictions:
  - May not be computable (the three-body problem)
  - May not be factorable (feedback systems, chaotic systems)
- And they are not properly understood as universally quantified claims about the actual behavior of objects and events

Idealizations do:
- Provide true stories (pace Cartwright) about real invariants in nature

Application to Concepts
- Leave the word 'concept' for the rich things that go on in us
- Investigate the continuities under the name proto-concepts (reached by idealization away from consciousness, etc.)
- Leave open the question of whether the kind 'concept' is:
  - Protoconceptuality plus add-ons, or
  - Determined essentially by relations to other things like consciousness and reasoning

[Diagrams: concepts have a rich web of relations in us to inference, consciousness, and language; idealizing away from language, inference, and consciousness yields proto-concepts]

Part II: Lineaments of a Non-Reductive Account of (Proto)Concepts
(I.e., concepts in us, seen under the idealizing move, and their precursors in the animal kingdom)

Stage 1: Discrimination
- Basic suggestion: protoconcepts are first and foremost things employed in the enterprise of discriminating environmentally salient conditions within the life of a homeostatic system (an organism)
- This requires some system within the organism capable of some set of states that covary with salient states of affairs -- SCHEMAS
- These states must be exploitable in the control of behavior
- This is more than a purely informational relation -- it tracks salient affordances (only very sophisticated animals can track "properties" in any general way!)
[Diagram: causal covariation between states of affairs (A, B) and system states (S1, S2)]

Discrimination
- Takes place only in a homeostasis engine
- A DISCRIMINATOR must respond to salient states of affairs
- It must have further connections in a feedback loop driving behavior on the basis of discrimination
- Note the non-reductive definition: something is a discriminator by dint of its role in a more complex system

A Simple Example: The Fly
- "Roughly speaking, the fly's visual apparatus controls its flight through a collection of about five independent, rigidly inflexible, very fast responding systems (the time from visual stimulus to change of torque is only 21 ms). For example, one of these systems is the landing system; if the visual field 'explodes' fast enough (because a surface looms nearby), the fly automatically 'lands' toward its center. If this center is above the fly, the fly automatically inverts to land upside down. When the feet touch, power to the wings is cut off." [Reichardt and Poggio, 1976, 1979; Poggio and Reichardt 1976; reported in Marr 1982, pp. 32-33]

[Diagram: discriminator circuit and motor control circuit, linked by excitatory and inhibitory connections]

The Fly: No Real Representations
- "It is extremely unlikely that the fly has any explicit representations of the visual world around him -- no true conception of a surface, for example, but just a few triggers and some specifically fly-centered parameters." (Marr, p. 34)
- What might this mean?

Two Kinds of Schemas
- Object-oriented schemas: contain elements that covary with objects and with properties of objects
- Interface-oriented schemas: elements covary with relations at boundaries between organism and environment, and are not articulated into components that represent the relata.
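The landing system quoted above is a rigid stimulus-to-motor mapping with no intervening layer of representation. A minimal sketch (illustrative Python; the threshold value and function names are hypothetical, not taken from Reichardt and Poggio):

```python
def fly_landing_controller(expansion_rate, center_elevation, feet_touching,
                           threshold=2.0):
    """Caricature of the fly's landing trigger as described by Marr:
    perceptual stimuli drive motor output directly, with no intervening
    cognition. The threshold value here is illustrative only.
    expansion_rate:   how fast the visual field "explodes" (looming surface)
    center_elevation: positive if the explosion's center is above the fly
    feet_touching:    True once the fly has made contact"""
    if feet_touching:
        return "cut wing power"          # landing complete
    if expansion_rate > threshold:       # surface looms fast enough
        if center_elevation > 0:
            return "invert and land"     # land upside down on a ceiling
        return "land toward center"
    return "keep flying"                 # no trigger fires

print(fly_landing_controller(3.5, +1, False))  # invert and land
print(fly_landing_controller(0.5, 0, False))   # keep flying
```

Notice that nothing in the controller models a *surface*; it just maps a few fly-centered parameters onto motor commands, which is the sense in which the fly "has no real representations."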
[Diagram: the state of affairs to which the discriminator is attuned is a fly-relevant affordance -- surface-approaching]

"Representations"
- A technical and stipulative definition: 'representation' =df an element in a schema whose function is to covary with objects or properties of objects
- Note that none of the ordinary associations of 'representation' (syntax, language) are intended to be operative

Flies Have No Representations
- Flies have no representations, but only interface-oriented schemas
- Perception, cognition, and action do not seem to be distinguished in the fly: the motor control mechanisms are directly driven by perceptual stimuli, without any apparent intervening level at which cognition takes place
- The fly's brain contains a distinction device, but what it distinguishes are fly-relevant ecological conditions that are not factored out into states of affairs involving objects and properties

Fly "Semantics"
- Either the fly has no semantics at all, or else there is no distinction between semantics and pragmatics for flies
- The activation of the fly's "landing system" might be equally well (or badly) described by us as a REPORT ("There is a surface approaching") or as a WARNING ("Brace for impact, laddie!")

Differences in Higher Animals (1): Types of Proto-Concept
- Seem to involve inner models that have elements that track objects (bird, updrafts, worm)
- Seem to track kinds of things
- In some species, the ability to model states of objects (dead/alive, in heat/not, etc.)
- In social animals, the ability to re-identify particular individuals of the same kind
- The recombinability of these elements grounds the generativity and productivity of thought

Note the Parallels Between Grammatical Classes and the Representational Abilities of Animals
- Tracking objects ~ definite descriptions
- Tracking kinds ~ common nouns
- Tracking states ~ verbs and adjectives
- Tracking individuals ~ proper nouns
- But note that these kinds of representational abilities seem to be present in nonlinguistic animals: productivity does not require language or syntax

Differences in Higher Animals (2): Learning
- There are lots of ways (architectures) to implement discrimination circuits
- Learning is a different (harder) problem: it requires a discrimination ENGINE
- Mere circuit-planning is not enough; rule-based systems have proved bad at learning
- In terrestrial animals, it is accomplished through particular kinds of nervous systems

Neural Networks and Neural Modeling in Cognitive Psychology
- Attempts to model psychology based on architectural features of the brain
- Often model only coarse-grained features: distributed processing, massively parallel connections, Hebbian learning
- Features of cognition "fall out" of the model:
  - Learning to discriminate salient (i.e., reinforced-for) features comes naturally
  - Plasticity: learning new discriminations, adjusting existing discriminations, loosening/tightening vigilance
  - Fuzziness of predicates

Neural Networks and Protoconcepts: Some Claims
- Protoconcepts are elements within a discrimination engine
- In terrestrial animals capable of conditioning, this engine is realized through a neural net architecture.
- Some features of animal cognition are to be understood in terms of the task of discrimination
- Others are artifacts of the realizing system

First Payoff
- Some features of language and concepts may have seemed odd from the standpoint of formal semantics: fuzzy predicates, linguistic change, the difficulty of formulating semantic rules
- They seem natural if we assume that protoconcepts are part of a discrimination engine implemented through neural networks

Fuzzy Predicates
- The dynamics of the training of neural systems tends to partition state space around a set of attractors, which are strongly influenced by the choice of paradigms in the training set
- The competitive feedback nature of neural nets makes what happens at the borders less predictable (in some cases, even chaotic)

[Diagrams: external stimuli cause activations of an input (feature) space; connections to a higher "concept space" are initially random; feedback processes create a kind of partition of feature space -- but not exactly a binary partition, and behavior in some areas remains probabilistic]

Fuzzy Predicates (cont.)
- The overall dynamics is good for a system that is optimized for learning protoconcepts through feedback in a noisy environment
- If the fuzziness of protoconcepts is any sort of biological disadvantage, this is outweighed by the overwhelming advantages of having a learning engine
- (The architecture makes good biological sense, even if it does not make good sense for logic, and it first appears in creatures that don't do logic.)
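The claim that training partitions state space around paradigm-driven attractors, with less predictable behavior at the borders, can be illustrated with a toy nearest-prototype model (a deliberate simplification, not the talk's own architecture; the prototype locations and the sharpness parameter are made up):

```python
import math

# Toy model: protoconcepts as attractors (prototypes) in a 1-D feature
# space; classification confidence falls off near the boundary.
prototypes = {"mine": 0.0, "rock": 1.0}   # paradigm points from training

def activations(x, sharpness=5.0):
    """Softmax over negative distances to each prototype: confident near
    a paradigm, probabilistic near the border between them."""
    scores = {k: -sharpness * abs(x - p) for k, p in prototypes.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {k: math.exp(s) / z for k, s in scores.items()}

print(activations(0.05))  # near the 'mine' paradigm: high confidence
print(activations(0.5))   # at the midpoint: maximally fuzzy, ~50/50
```

The partition is sharp where the paradigms dominate and fuzzy along the edge -- which is the profile the talk attributes to protoconcepts, and the wrong profile for a bivalent predicate.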
Fuzzy Predicates and Conceptual/Linguistic Change
- Fuzzy boundaries may even have good side effects:
  - The organism is a homeostatic system seeking equilibrium
  - Things that fall along the edges (that are neither clearly THIS nor THAT) cause dissonance in the system
  - This can prompt active learning: refinement of the partition of state-space using new seed exemplars along the boundaries
  - If so, this is an increment in intelligence

[Diagrams: instability causes dissonance, forcing new plasticity and learning; the system finds a new equilibrium in which the distinction is more fine-grained, or perhaps forms a new partition]

Conceptual Change and Theories of Meaning
- Familiar accounts of meaning emphasize different things:
  - The relation of concepts to "sense data" or "ideas" in perception (Empiricism)
  - The relation of concepts to other concepts
  - The relation of concepts to things that they stand in a particular causal relation with
  - The relation of concepts to words in a public language
- Each point seems to get something right, but it is hard to combine them in an analysis

Conceptual Change and Theories of Meaning (cont.)
- Three of these factors just seem natural from a biological and/or cybernetic standpoint:
  - The function of learning is to partition feature space so that the resulting protoconcepts track real invariants that can be interacted with causally (causal semantics)
  - In a distributed network, learning a protoconcept is eo ipso also to readjust its constitutive relations to the rest of the network (conceptual role semantics?)
  - If the protoconcept is related to perceptual schemas, learning is eo ipso an adjustment of pathways from sensory inputs to protoconcepts (Empiricist semantics)

Conceptual Change and Theories of Meaning (cont.)
- More importantly, you need multiple factors in your protoconceptual system if you are to learn new protoconcepts
- Say you are learning about a phenomenon through perception ("ideas" --> protoconcepts)
- During learning, the function from inputs to protoconcepts is continually changing
- The conceptual engine needs a second factor to "pin down" the phenomenon you are trying to latch onto as "the same" (an ostensive/causal factor)

An Example
- You perceive something in a dim and confused way -- say, something in the woods, or an unfamiliar squiggle on a slide
- You say, "What's that?" You perform tests and seek new data
- Perceptions vary over time; your neural nets are inchoately trying to unite the phenomenon under a meaning
- But through it all, there is something that allows you to keep "pointing" (mentally!) at the phenomenon as "THAT"

Conceptual Change and Theories of Meaning (cont.)
- While a multiple-factor theory of meaning may not be needed for logic, it seems necessary if learning is to take place
- (One would next want to look at models to see whether some have this kind of feature)
- If concept-learning does involve the inter-relation of multiple ways of latching onto the world, some philosophical puzzles might be a result of the ways these come into conflict.
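The two-factor idea -- a continually changing input-to-protoconcept function plus an ostensive/causal factor that keeps "pointing" at the phenomenon as THAT -- can be sketched as follows (illustrative Python; the class, its fields, and all the numbers are hypothetical):

```python
import random

random.seed(0)

class Learner:
    """Two-factor sketch: a fixed ostensive anchor keeps 'pointing' at
    THAT phenomenon while the qualitative prototype is continually revised."""
    def __init__(self, target_id):
        self.target_id = target_id   # ostensive/causal factor: stays fixed
        self.prototype = None        # qualitative factor: keeps changing

    def observe(self, obj_id, features, rate=0.3):
        if obj_id != self.target_id:
            return                   # not THAT thing; ignore it
        if self.prototype is None:
            self.prototype = list(features)
        else:                        # nudge the prototype toward the percept
            self.prototype = [p + rate * (f - p)
                              for p, f in zip(self.prototype, features)]

learner = Learner(target_id="that-thing-in-the-woods")
true_features = [0.8, 0.2]
for _ in range(50):                  # noisy glimpses of the same phenomenon
    percept = [f + random.gauss(0, 0.1) for f in true_features]
    learner.observe("that-thing-in-the-woods", percept)

# The anchor never moved; the prototype has settled near the true features.
print(learner.target_id, [round(p, 2) for p in learner.prototype])
```

Without the fixed anchor there would be no fact about which phenomenon the shifting prototype was a prototype *of* -- which is the role the talk assigns to the ostensive/causal factor.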
Conceptual Change and Theories of Meaning (cont.)
- E.g., conflicting intuitions about strict identity and relative identity -- i.e., does it make sense to say X and Y are "the same" without saying "the same [sortal term]"?
- If one system within the conceptual engine is ostensive/causal, its "internal logic" might allow strict identity
- If another system is non-ostensive, wrapped up in a network of relations, its "internal logic" might allow only relative identity
- I.e., modularity opens the door to a disunity of reason and language

Moral of the Story
- Whether a feature makes sense depends on the perspective from which you look at it
- Several "odd" features of cognition, concepts, and language "fall out" if you approach them from a biological perspective: a discrimination engine, realized through neural networks with particular dynamics optimized for learning

Building on this Foundation
- Many features of human conceptuality and language are still missing (e.g., reasoning)
- Suggestion: look at further advances as modular, incremental ways of exploiting the discrimination engine
- Some accrued in phylogeny
- Modular increments exploit existing systems; they do not cause ground-level redesign

[Tree diagrams, "Modular Increments in the Story So Far": causation -> discrimination -> object/property articulation (representation) -> representations for individuals; plus learning. Some increments may be independent (representations for actions, for the self); some may occur only in certain kinds of animals (supervised learning and associative inference in social animals); some require convergences of earlier gains (cries + supervised learning -> learned cries; generativity + creativity -> language (grammar))]

Protoconceptual Articulation Precedes Grammar
- Vervet thought is generative: hawk in tree, hawk in air, leopard in tree
- Vervet cries don't exploit all the information that is in their brains
- Language per se is not needed for generativity and productivity; language exploits the pre-existing generativity and productivity of thought

Protoconceptual Articulation Precedes Language
- Analogy with the fly's discriminators, in which semantics and pragmatics are all mushed together
- Vervet cries are equally interpretable as "Lo! There's a leopard!" or "Get up in the treetops, lads!"
- It is the addition of syntax that separates semantics and pragmatics for language.
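The vervet point -- recombinable protoconceptual elements support more distinctions than the fixed cry repertoire expresses -- can be put as a toy count (illustrative Python; the kinds and locations follow the slide's examples, the cry names are made up):

```python
from itertools import product

# The vervet's protoconceptual elements are recombinable: kind x location.
kinds = ["hawk", "leopard", "snake"]
locations = ["in tree", "in air", "on ground"]
thinkable = [f"{k} {loc}" for k, loc in product(kinds, locations)]

# ...but its cry repertoire is a fixed, unstructured signal set: one
# alarm per predator kind, with no way to express location.
cries = {"hawk": "hawk-alarm", "leopard": "leopard-alarm",
         "snake": "snake-alarm"}

print(len(thinkable))  # 9 discriminable situations
print(len(cries))      # 3 cries: they don't exploit all the information
```

The combinatorics is in the protoconceptual system before there is any grammar to mirror it -- the talk's point that articulation precedes language.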
Sociality of Language
- Language emerges in a social species out of pre-existing factors: the ability to utter noises, supervised learning, and coordinated activity in a group
- It takes on an existence as a new kind of entity -- "the natural language L" -- which exists in a feedback relation with the conceptual spaces of L-speakers

Natural Languages
- Languages, in this sense, are:
  - Not just abstract entities (functions from symbols to sets)
  - Not to be understood wholly in terms of what is in any individual (or otherwise reductively)
  - To be understood as an essential element of a particular kind of homeostatic, reproductive, adaptive system -- a linguistic community
  - "Things" in a peculiar sense -- perhaps in the sense in which fields, vortices, and other dynamic systems are "things"

Functions of Natural Languages
- Familiar philosophical themes: when two individuals share one, they can express thoughts and be understood; it is an instrument for action (speech acts)
- But equally importantly: it is part of the extended phenotype of the linguistic community, and plays an essential role in conceptual development as a particularly efficient and flexible form of supervised instruction

Language Learning as Supervised Instruction
- Supervised learning:
  - The protoconceptual space of the teacher (T) is in goal state G
  - The protoconceptual space of the learner (L) is in initial state I
  - L engages in imitation and free experimentation
  - T provides feedback, so that L's protoconceptual space is under the control of feedback mechanisms that compare it to G
  - L moves from I to G

[Diagram: initial state -- the pupil's proto-conceptual space does not match the world or that of other L-speakers]
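The supervised-learning loop just described (teacher in goal state G, learner moving from initial state I toward G under corrective feedback) can be sketched in one dimension (illustrative Python; the boundary values, learning rate, and round count are all made up):

```python
import random

def supervised_learning(goal_boundary, initial_boundary,
                        rate=0.2, rounds=1000, seed=1):
    """Sketch of the teacher/pupil loop: the pupil's partition of a 1-D
    'protoconceptual space' is reshaped by corrective feedback until it
    approximates the teacher's goal state G."""
    random.seed(seed)
    boundary = initial_boundary              # pupil starts in state I
    for _ in range(rounds):
        x = random.random()                  # pupil tries a case
        pupil_says = x > boundary
        teacher_says = x > goal_boundary     # teacher's corrective feedback
        if pupil_says != teacher_says:       # disagreement: nudge boundary
            boundary += rate * (x - boundary)
    return boundary

learned = supervised_learning(goal_boundary=0.7, initial_boundary=0.2)
print(round(learned, 2))  # has moved from 0.2 to near the teacher's 0.7
```

The pupil never inspects G directly: only the stream of agreements and disagreements shapes its partition, which is why the same mechanism scales up from a dyad to a whole network of feedback relationships.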
[Diagrams: partitions of state-space into proto-concepts; language-learning sets up a feedback process which shapes the protoconceptual space of the pupil until it approximates that of the teacher (equilibrium)]

- But these are not just dyadic teacher/pupil relationships: there is a whole network of feedback relationships, operating on individuals' concepts through the vehicle of language
- (Hence?) "language" is a social phenomenon

Language Learning as Supervised Instruction (cont.)
- Language learning is not just learning words for existing concepts
- It is a shaping of conceptual space through supervised learning: the learner's conceptual space is shaped to resemble the teacher's
- The Burgean/Putnamian division of linguistic labor seems to fall out of this as a way of coordinating usage -- a particular form of the concepts of the learner coming under the control of those of the expert

Language Thus Retools Representation
- The protoconceptual system is never the same again once language appears
- It is an empirical question how much downward retooling of the brain there is (Jaynes)

Other Advances Based on Language
- Verbal arts
- External symbols
- A cultural repository of knowledge: symbolic storage of knowledge outside individuals

Another Advance: Reasoning
- So far we have only associative inference; it is not clear this even deserves to be called "reasoning"
- Formal reasoning, like language, builds upon a convergence of prior gains

Formal Reasoning
- A consolidation of earlier gains: associative reasoning and articulated language
- Perhaps it requires external symbols? (At least for long proofs…)
- Oriented to particular practical ends: deriving consequences of knowledge, testing the consistency of beliefs

Part III (Don't worry, it's shorter!): Philosophical Payoffs

What Does this Have to Do with Philosophy?
- A sketchy story about the lineage of concepts (surely not robust enough to count as a theory!)
- Indeed, concepts with some of the juicy stuff (consciousness, inference, language) set aside
- It emphasizes that we are language-using social animals and shabby reasoners, but we already knew that!

What Philosophical Problem Is this Supposed to Solve?
- It is not an attempt to solve a local problem in "normal philosophy"
- It is philosophical jiu-jitsu -- showing that:
  - There is another way to think about familiar philosophical issues than mainline analytic philosophy on the logical paradigm
  - If you thematize your problems differently, you can explain different things; different things become simple or problematic

Symptom, Diagnosis, Therapy
- Symptom: logic and philosophy of language in the 20th century say things about language and concepts that don't fit the data of real languages and human thought, and they ignore aspects of language and thought not closely related to truth and inference
- Diagnosis: the apparent problems stem from using just one narrow paradigm for looking at concepts and language (language as a vehicle for making utterances with truth values)
- Therapy: find an alternative way of thematizing the problem (biological/cybernetic)
  - There are clearly the lineaments of a viable scientific program here (cognitive science, ethology, development)
  - It renders intelligible things that seemed like puzzles or problems on the other view
  - It is not necessarily a rival to the logical analysis of language -- two theories can be different without being rivals (e.g., wave/particle duality)

Applications
- Already covered in passing:
  - Fuzzy concepts are a result of the architecture of the system employed in learning (i.e., as a discrimination engine)
  - This feature also arguably gives an increment in intelligence through active learning as a process of dissonance-reduction
  - One might expect a learning-optimized system to have both ostensive/causal and qualitative elements to conceptuality, and these might have different internal logics (because they are modular and optimized differently)

Modularity and Formal Reasoning
- How are humans endowed with formal reasoning abilities?
- An engineer, faced with this problem and allowed to design from the ground up, would design something optimized to be a reasoning engine (say, a digital computer!): exact extensions, bivalent assertions, good inferential abilities not prone to fallacy
- But evolution doesn't have this luxury of ground-up redesign: it is forced to co-opt existing parts that may be optimized for something else!
  - Like using a screwdriver as a chisel
  - Like designing a new Oracle database that needs to work with your old legacy systems on a VAX!

Concepts and Reason
- Concepts are a product of a selective process that optimized for a discrimination engine that could learn (in noisy conditions): fast, sloppy, flexible, adaptive, self-correcting, its relation to the world driven by pragmatic constraints
- Reasoning "wants" (i.e., would operate optimally if it had) a conceptual system that is stable (values stay the same!), tidy (no vagueness or indeterminacy), and tied to the world exactly
- Note that this is the profile of the "languages" that logicians and logically-inspired philosophy of language have tended to drift towards
- There is, in fact, a kind of mismatch between the conceptual engine and the reasoning engine
- This is perfectly understandable if viewed biologically, and it generates things like philosophical paradoxes (e.g., sorites paradoxes)

Why Are There Failures of Bivalence, and Why Is This So Troubling?
- It is an artifact of the mismatch between the aims of the conceptual system and the reasoning system
- The reasoning system aims at the careful completion of valid inference; it operates as though the conceptual system were well-defined rather than fuzzy
- However, the conceptual system was in fact designed for other ends, and concepts are more often than not fuzzy, as a result of the architecture of neural nets
This is troubling because it indicates that we are not built with a consistent design: from the standpoint of logical reasoning, we have been equipped with the wrong conceptual parts.

Why do people think in terms of contrast pairs (and larger contrast sets) rather than in terms of the logical partition of P/not-P?
Neural nets partition a state space into two or more sectors based on paradigms and feedback. The result is a set of distinctions centered around positive paradigms -- mines and rocks, or animal/vegetable/mineral -- rather than predicates and their complements. A partition of a state space into two segments (P's and Q's) is equivalent to a logical partition into P's and non-P's in terms of its extension, but not in terms of its lineage or function. Nor does it involve the syntactic aspect of a representation conjoined with a negation indicator. A partition into more than two sectors can be analyzed externally (or by another system within the organism) into any sector and its complement, but this is not a natural description of its dynamics as a neural network.

Symptom, Diagnosis, Therapy
Symptom:
- Philosophical approaches to concepts are either "rich" or reductive
- Reductive views don't do justice to human conceptuality
- "Rich" views risk saying nothing explanatory
Diagnosis: The assumption that an "explanation" would be a philosophical analysis (necessary and sufficient conditions)
Therapy:
- Find a way to look (scientifically) at the continuities and incremental stages underlying human cognition without worrying about their relation to consciousness, language, and reasoning
- Treat the "bracketing" move as an idealization rather than a reduction
- This allows much explanation without being a reductive explanation
- To some extent, language and reason did come back into the picture -- as much as you're going to get, anyway
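The paradigm-centered partitions discussed earlier (mines vs. rocks) can also be sketched as a toy discrimination engine. This is a hypothetical illustration, not a model from the talk: the paradigm feature vectors are invented, categories are defined by proximity to positive paradigms, borders between sectors are graded rather than bivalent, and no sector is represented as "not-P".

```python
import math

# Toy discrimination engine: classification by nearness to positive
# paradigms. The paradigm feature vectors are invented for illustration.
PARADIGMS = {
    "mine": (0.9, 0.8),   # e.g., (metallic return, shape regularity)
    "rock": (0.2, 0.3),
}

def classify(sample):
    """Assign a sample to the nearest paradigm, with a graded margin.

    Note there is no 'not-mine' category: every sector is centered on a
    positive paradigm, and the margin shrinks near the fuzzy border.
    """
    distances = {label: math.dist(sample, p) for label, p in PARADIGMS.items()}
    best = min(distances, key=distances.get)
    margin = min(d for k, d in distances.items() if k != best) - distances[best]
    return best, margin

print(classify((0.85, 0.75)))  # clearly mine-like: large margin
print(classify((0.55, 0.55)))  # near the border: margin close to zero
```

Extensionally, the "mine" sector and its complement do partition the space into P and not-P; but nothing in the mechanism represents a negation -- only paradigms and distances -- which is the contrast the slide draws between extension and lineage/function.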