Word Recognition Models


Published on June 16, 2008

Author: lrizoli

Source: slideshare.net

Description

Overview of Coltheart's Dual-Route Model and Seidenberg & McClelland's neural network models of word recognition.

Course presentation for PSYC 365*, Fall 2004, Dr. Butler, Queen's University.

Images used without permission.

Word Recognition Models Lucas Rizoli Thursday, September 30 PSYC 365*, Fall 2004 Queen’s University, Kingston

Human Word Recognition ● Text interpreted as it is perceived – Stroop test (Red, Green, Yellow) – Aware of results, not of processes ● Likely involves many areas of brain – Visual – Semantic – Phonological – Articulatory ● How can we model this?

Creating a Word Recognition Model ● Assumptions – Working in English – Only monosyllabic words ● FOX, CAVE, FEIGN... – Concerned only with simple word recognition ● Symbols → sounds ● Visual, articulatory systems function independently ● Context of word is irrelevant

Creating a Word Recognition Model ● Rules by which to recognize CAVE – C → /k/ – A → /A/ – VE → /v/ ● Describe grapheme-phoneme correspondences (GPC) – Grapheme → phoneme

Creating a Word Recognition Model ● Recognize HAVE – H → /h/ – A → /A/ – VE → /v/ – So HAVE → /hAv/ ? ● Rules result in incorrect pronunciation
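To make the failure concrete, here is a minimal Python sketch of the rule application described above. The rule table and the longest-match-first strategy are illustrative assumptions, not Coltheart's actual rule set.

```python
# Minimal sketch of naive grapheme-phoneme correspondence (GPC) rules.
# The rule table and longest-match strategy are illustrative, not DR93's.
GPC_RULES = {
    "VE": "v",  # word-final VE -> /v/
    "C": "k",
    "A": "A",   # "long a" as in CAVE
    "H": "h",
    "G": "g",
}

def apply_gpc(word: str) -> str:
    """Translate a word to phonemes by greedy longest-match over GPC_RULES."""
    phonemes = []
    i = 0
    while i < len(word):
        for size in (2, 1):  # try the longer grapheme first
            chunk = word[i:i + size]
            if chunk in GPC_RULES:
                phonemes.append(GPC_RULES[chunk])
                i += size
                break
        else:
            i += 1  # no rule for this letter; skip it (sketch only)
    return "/" + "".join(phonemes) + "/"

print(apply_gpc("CAVE"))  # /kAv/ -- correct
print(apply_gpc("HAVE"))  # /hAv/ -- wrong; HAVE is actually /h@v/
```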

Creating a Word Recognition Model ● English is quasi-regular – Can be described as systematic, but with exceptions – English has a deep orthography ● grapheme → phoneme rules inconsistent – GAVE, CAVE, SHAVE end with /Av/ – HAVE ends with /@v/

Creating a Word Recognition Model ● Model needs to recognize irregular words ● Check for irregular words before applying GPCs – List irregular words and their pronunciations ● HAVE → /h@v/, GONE → /gon/, ... – Have separate look-up process

Our Word Recognition Model ● [Diagram] From Visual System → Orthographic Input → two parallel routes (Irregular Words, GPCs) → Phonological Output → To Articulatory System
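The two-route scheme in the diagram can be mocked up by continuing the sketch above: the irregular look-up takes priority, and the GPC rules serve as the fallback (this reuses apply_gpc from the previous snippet; the irregular list here is illustrative).

```python
# Toy dual-route pronunciation: look-up first, GPC rules second.
# Reuses apply_gpc from the previous snippet; the list is illustrative.
IRREGULAR = {
    "HAVE": "/h@v/",
    "GONE": "/gon/",
}

def pronounce(word: str) -> str:
    """Irregular look-up takes priority; otherwise fall back to GPC rules."""
    if word in IRREGULAR:
        return IRREGULAR[word]
    return apply_gpc(word)

print(pronounce("CAVE"))  # /kAv/ via the GPC route
print(pronounce("HAVE"))  # /h@v/ via the irregular look-up
```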

The Dual-Route Model ● Proposed by Max Coltheart in 1978 – Supported by Pinker, Besner – Revised throughout the ’80s, ’90s, and ’00s ● Context-sensitive rules ● Rule frequency checks ● Lots of other complex stuff ● We’ll follow his 1993 model (DR93)

DR93 Examples ● [Table of example GPCs shown as image; the /a/ in the image should be /@/] ● Includes context-sensitive GPCs

What’s Good About DR93 ● Regular word pronunciation – Goes well with rule-based theories ● Berko’s Wug test (This is a wug, these are two wug_) ● Childhood over-regularization ● Nonword pronunciation – NUST, FAIJE, NARF are alright

What’s Not Good About DR93 ● Irregular word pronunciation – GONE → /gOn/, ARE → /Ar/ ● GPCs miss subregularities – OW → /aW/, from HOW, COW, PLOW – SHOW, ROW, KNOW are exceptions ● Biological plausibility – Do humans need explicit rules in order to read?

The SM89 Model ● Implemented by Seidenberg and McClelland in 1989 – Response to dual-route model – Neural network/PDP model – “As little as possible of the solution built in” – “As much as possible is left to the mechanisms of learning” ● We’ll call it SM89

The SM89 Model ● [Network diagram] From Visual System → Orthographic Units (400 units) → Hidden Units (200 units) → Phonological Units (460 units) → To Articulatory System

The SM89 Model ● Orthographic units (400 units) are triples – Three characters – Letters or word-border – CAVE ● _CA, CAV, AVE, VE_ – Context-sensitive
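A short sketch of how such triples might be generated; the border marker and the three-character windowing are assumptions based on the _CA, CAV, AVE, VE_ example.

```python
# Sketch of SM89-style orthographic triples: pad the word with a border
# marker, then take every three-character window.
def triples(word: str, border: str = "_") -> list[str]:
    padded = border + word + border
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

print(triples("CAVE"))  # ['_CA', 'CAV', 'AVE', 'VE_']
```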

The SM89 Model ● Hidden units (200 units) needed for a complete neural network ● Encode information in a non-specified way ● Learning occurs by changing weights on connections to and from hidden units – Process of back-propagation

The SM89 Model ● Phonological units (460 units) are also triples – /kAv/ ● _kA, kAv, Av_ – Triples are generalized ● [stop, vowel, fricative] – Number of units is sufficient for English monosyllables

How SM89 Learns ● Orthographic units artificially stimulated ● Activation spreads to hidden, phonological units – Feedforward from ortho. to phono. units ● Model response is pattern of activation in phonological units

How SM89 Learns ● Difference in activation between the response and the correct activation ● Error computed as the sum of squared differences across units ● Weights of all connections between units adjusted
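The three slides above amount to one forward pass and one error-driven weight update. Here is a toy sketch with layer sizes shrunk from SM89's 400/200/460; the sigmoid activation and the learning rate are assumptions, not SM89's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes standing in for 400 orthographic, 200 hidden, 460 phonological.
n_orth, n_hid, n_phon = 8, 4, 6
W1 = rng.normal(0, 0.1, (n_orth, n_hid))  # orthographic -> hidden weights
W2 = rng.normal(0, 0.1, (n_hid, n_phon))  # hidden -> phonological weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

orth = rng.integers(0, 2, n_orth).astype(float)    # stimulated input units
target = rng.integers(0, 2, n_phon).astype(float)  # correct phonological pattern

# Forward pass: activation spreads orthographic -> hidden -> phonological.
hidden = sigmoid(orth @ W1)
phon = sigmoid(hidden @ W2)

# Error: sum over units of the squared difference between the response
# and the correct activation.
error = np.sum((phon - target) ** 2)

# One back-propagation step: adjust weights on all connections.
lr = 0.5
delta_phon = 2 * (phon - target) * phon * (1 - phon)
delta_hid = (delta_phon @ W2.T) * hidden * (1 - hidden)
W2 -= lr * np.outer(hidden, delta_phon)
W1 -= lr * np.outer(orth, delta_hid)
print(f"error = {error:.3f}")
```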

How SM89 Learns ● Simply, it learns to pronounce words properly – Don’t worry about the equations

How SM89 Learns ● Trained using a list of ~ 3000 English monosyllabic words – Includes homographs (WIND, READ) and irregulars ● Each training session called an epoch ● Words appeared roughly in proportion to their frequency in written language

Practical Limits on SM89’s Training ● Activation calculated in a single step – Impossible to record how long it took to respond – Correlated error scores with latency ● Error → time ● Frequency of words was compressed – Would’ve required ~ 34 times more epochs – Saved computer time
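A sketch of the frequency-compression idea: logarithmic compression is an assumption here (not necessarily SM89's exact function), but it shows how the gap between common and rare words is narrowed so rare words get trained without vastly more epochs.

```python
import math
import random

# Hypothetical counts per million; SM89's list had ~3000 monosyllables.
freqs = {"THE": 69971, "HAVE": 3941, "CAVE": 10, "FEIGN": 1}

# Raw sampling would show THE ~70,000x as often as FEIGN. Compressing
# frequencies (log, as an assumption) narrows that ratio drastically.
weights = {w: math.log(f) + 1 for w, f in freqs.items()}
words, probs = zip(*weights.items())

epoch = random.choices(words, weights=probs, k=10)
print(epoch)  # rare words now appear within a few epochs
```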

How SM89 Performed ● [Graphs comparing human naming latencies with SM89 error scores; panels labelled Human and SM89]

What’s Good About SM89 ● Regular word pronunciation ● Irregular word pronunciation ● Similar results to human studies – Word naming latencies – Priming effects ● Behaviour the result of learning – Ability develops in a human-like fashion

What’s Not Good About SM89 ● Nonword pronunciation – Significantly worse than skilled readers – JINJE, FAIJE, TUNCE pronounced strangely ● Design was awkward – Triples – Feedforward network – Compressed word frequencies – Single-step computation

The SM94 Model ● Seidenberg, Plaut, and McClelland revise SM89 in 1994 – Response to criticism of SM89’s poor nonword performance ● We’ll call this model SM94 ● Compared humans’ nonword responses with model’s responses

The SM94 Model ● [Network diagram] From Visual System → Graphemic Units (108 units) → Hidden Units (100 units) → Phonological Units (50 units) → To Articulatory System

How SM94 Differs From SM89 ● Feedback loops for hidden and phonemic units ● Weights adjusted using cross-entropy method – Complicated math, results in better learning ● Not computed in a single step ● No more triples – Graphemes for word input – Phonemes for word output – Input based on syllable structure
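To see why the cross-entropy method helps, here is a sketch comparing the two error measures on the same illustrative response: cross-entropy punishes confidently wrong activations much more sharply than summed squared error.

```python
import numpy as np

phon = np.array([0.9, 0.2, 0.6])    # network response (illustrative values)
target = np.array([1.0, 0.0, 1.0])  # correct activations

# SM89-style summed squared error.
sse = np.sum((phon - target) ** 2)

# SM94-style cross-entropy: the penalty grows without bound as a unit
# becomes confidently wrong, which steepens learning on bad errors.
ce = -np.sum(target * np.log(phon) + (1 - target) * np.log(1 - phon))

print(f"squared error = {sse:.3f}, cross-entropy = {ce:.3f}")
```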

Examples of SM94’s Units ● [Table of grapheme and phoneme units shown as image]

Nonwords ● May be similar to regular words – SMURF ← TURF ● In many cases there are several plausible responses – BREAT ● ← EAT ? ● ← GREAT ? ● ← YEAH ?

Nonwords ● [Graph of human nonword responses shown as image]

How SM94 and DR93 Performed ● [Comparison graph shown as image; PDP is SM94, Rules is DR93]

Comparing SM94 and DR93 ● Both perform well on the list of ~ 3000 words – SM94 responds 99.7% correctly, DR93 78% ● Both do well with nonwords – SM89’s weakness caused by design issues ● SM94 avoids such issues – Neural networks equally capable with nonwords

Comparing SM94 and DR93 ● SM94 is a good performer – Regular, irregular words – Behaviour similar to human ● Latency effects ● Nonword pronunciation ● DR93 still has problems – Trouble with irregular words – More likely to regularize words

Models and Dyslexia ● Consider specific types of dyslexia – Phonological Dyslexia ● Trouble pronouncing nonwords – Surface Dyslexia ● Trouble with irregular words – Developmental Dyslexia ● Inability to read at age-appropriate level ● How can word recognition models account for dyslexic behaviour?

DR93 and Dyslexia ● Phonological dyslexia as damage to GPC route – Cannot compile sounds from graphemes – Relies on look-up ● Surface dyslexia as damage to look-up route – Cannot remember irregular words – Relies on GPCs ● Developmental dyslexia – Problems somewhere along either route ● Cannot form GPCs, slow look-up, for example
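Continuing the earlier toy dual-route snippets, the lesion account can be illustrated with hypothetical flags that disable one route at a time; this is a sketch of the explanation above, not DR93's actual lesioning procedure.

```python
# Lesionable version of the toy dual-route pronounce(); the flags are
# illustrative. Reuses IRREGULAR and apply_gpc from earlier snippets.
def pronounce_lesioned(word: str, gpc_ok: bool = True, lookup_ok: bool = True):
    if lookup_ok and word in IRREGULAR:
        return IRREGULAR[word]
    if gpc_ok:
        return apply_gpc(word)
    return None  # no intact route produced a pronunciation

# Phonological dyslexia: GPC route damaged, so nonwords fail.
print(pronounce_lesioned("NUST", gpc_ok=False))      # None
# Surface dyslexia: look-up damaged, so irregulars get regularized.
print(pronounce_lesioned("HAVE", lookup_ok=False))   # /hAv/, not /h@v/
```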

SM89 and Dyslexia ● Developmental dyslexia as damaged or missing hidden units – [Graphs comparing networks with 200 vs. 100 hidden units shown as images]
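Using the toy network from the learning sketch, the hidden-unit lesion can be imitated by zeroing the connections of half the units, a stand-in for the 200-vs-100 comparison; how exactly such a lesion is simulated is an assumption here.

```python
# "Remove" half the hidden units in the earlier toy network by severing
# their connections (a stand-in for comparing 200 vs. 100 hidden units).
keep = n_hid // 2
W1_lesioned = W1.copy()
W2_lesioned = W2.copy()
W1_lesioned[:, keep:] = 0.0  # cut input into the removed units
W2_lesioned[keep:, :] = 0.0  # cut output from the removed units

hidden_l = sigmoid(orth @ W1_lesioned)
phon_l = sigmoid(hidden_l @ W2_lesioned)
print(np.sum((phon_l - target) ** 2))  # error with half the capacity
```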

The 1996 Models and Dyslexia ● Plaut, McClelland, Seidenberg, and Patterson study networks and dyslexia (1996) – Variations of the SM89/SM94 models ● Feedforward ● Feedforward with actual word-frequencies ● Feedback with attractors ● Feedback with attractors and semantic processes – Compare each to case studies of dyslexics

Feedforward and Dyslexia Case-Studies

Feedback, with Attractors and Semantics, and Dyslexia Case-Studies

The 1996 Models and Dyslexia ● Most complex damage gave the closest match to the case studies – Not as simple as removing hidden units ● Severing semantics ● Distorting attractors ● Results are encouraging

Questions or Comments

