Perceptron 2-4-2008

Category: Education

Published on April 30, 2008

Author: Belly

Source: authorstream.com

Perceptrons and Linear Classifiers
William Cohen, 2-4-2008

Announcement: no office hours for William this Friday 2/8.

(Slide 3: Dave Touretzky's Gallery of CSS Descramblers.)

Linear Classifiers
Let's simplify life by assuming:
- Every instance is a vector of real numbers, x = (x1, ..., xn). (Notation: boldface x is a vector.)
- There are only two classes, y = +1 and y = -1.
A linear classifier is a vector w of the same dimension as x that is used to make this prediction: y = sign(x · w).

Visually, x · w is the distance you get if you "project x onto w". The line perpendicular to w divides the vectors classified as positive from the vectors classified as negative. In 3d the line becomes a plane; in 4d the plane becomes a hyperplane; and so on.

(Slide 6 image credits: Wolfram MathWorld, Mediaboost.com, Geocities.com/bharatvarsha1947.)

Notice that the separating hyperplane goes through the origin. If we don't want this, we can preprocess our examples, e.g. by appending a constant feature x0 = 1 so that one weight acts as a bias.

What have we given up?
[Figure: a decision-tree-style example with tests like Outlook = overcast and Humidity = normal and leaves labeled +1 / -1.]
Not much!
- Practically, it's a little harder to understand a particular example (or classifier).
- Practically, it's a little harder to debug.
- You can still express the same information.
- You can analyze things mathematically much more easily.

Naïve Bayes as a Linear Classifier
Consider Naïve Bayes with two classes (+1, -1) and binary features (0, 1).
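A minimal sketch of the prediction rule and the constant-feature preprocessing in Python (the vectors and weights below are made-up illustrations):

```python
import numpy as np

def predict(w, x):
    # Linear classifier: y = sign(x . w); ties (x . w == 0) count as +1 here.
    return 1 if np.dot(x, w) >= 0 else -1

def add_bias(x):
    # Preprocessing trick: append a constant feature 1 so the separating
    # hyperplane no longer has to pass through the origin.
    return np.append(x, 1.0)

x = np.array([2.0, -1.0])
w = np.array([0.5, 0.5, -0.2])        # the last weight acts as a bias
print(predict(w, add_bias(x)))        # 2*0.5 - 1*0.5 - 0.2 = 0.3, so prints 1
```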
Writing the two-class decision in terms of the "log odds":

  log [ P(y=+1 | x) / P(y=-1 | x) ] = log [ P(+1) / P(-1) ] + Σ_i log [ P(x_i | +1) / P(x_i | -1) ]

With p_i = P(x_i = 1 | +1) and q_i = P(x_i = 1 | -1), each binary feature's term is linear in x_i, so the log odds equal b + Σ_i w_i x_i with

  w_i = log [ p_i (1 - q_i) / ( q_i (1 - p_i) ) ],
  b = log [ P(+1) / P(-1) ] + Σ_i log [ (1 - p_i) / (1 - q_i) ].

Summary: NB is a linear classifier. The weights w_i have a closed form which is fairly simple, expressed in log-odds.

An Even Older Linear Classifier
1957: The perceptron algorithm (Rosenblatt). Wikipedia: "A handsome bachelor, he drove a classic MGA sports car and was often seen with his cat named Tobermory. He enjoyed mixing with undergraduates, and for several years taught an interdisciplinary undergraduate honors course entitled "Theory of Brain Mechanisms" that drew students equally from Cornell's Engineering and Liberal Arts colleges…this course was a melange of ideas .. experimental brain surgery on epileptic patients while conscious, experiments on .. the visual cortex of cats, ... analog and digital electronic circuits that modeled various details of neuronal behavior (i.e. the perceptron itself, as a machine)."
Built on the work of Hebb (1949); also developed by Widrow-Hoff (1960).
1960: Perceptron Mark 1 Computer, a hardware implementation.

(Slide 17: Bell Labs TM 59-1142-11; Datamation 1961; the April 1, 1984 Special Edition of CACM.)
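The closed form can be checked numerically. A sketch, where the per-feature probabilities p, q and the uniform class prior are made-up numbers:

```python
import math

# Hypothetical per-feature probabilities: p[i] = P(x_i=1 | +1), q[i] = P(x_i=1 | -1)
p = [0.8, 0.3]
q = [0.2, 0.6]
prior_pos, prior_neg = 0.5, 0.5

def nb_log_odds(x):
    # Direct Naive Bayes computation of log P(+1|x) - log P(-1|x)
    s = math.log(prior_pos) - math.log(prior_neg)
    for xi, pi, qi in zip(x, p, q):
        s += math.log(pi if xi else 1 - pi) - math.log(qi if xi else 1 - qi)
    return s

# Closed-form linear weights: w_i = log[p_i(1-q_i) / (q_i(1-p_i))],
# bias b = log(prior ratio) + sum_i log[(1-p_i)/(1-q_i)]
w = [math.log(pi * (1 - qi) / (qi * (1 - pi))) for pi, qi in zip(p, q)]
b = math.log(prior_pos / prior_neg) + sum(
    math.log((1 - pi) / (1 - qi)) for pi, qi in zip(p, q))

x = [1, 0]
assert abs(nb_log_odds(x) - (b + sum(wi * xi for wi, xi in zip(w, x)))) < 1e-12
```

The assertion confirms that the direct Naïve Bayes log odds and the linear form b + Σ w_i x_i agree.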
Continuing the timeline:
1969: Minsky & Papert's book shows perceptrons are limited to linearly separable data; Rosenblatt dies in a boating accident (1971).
1970's: learning methods for two-layer neural networks.
Mid-late 1980's (Littlestone & Warmuth): mistake-bounded learning and analysis of the Winnow method; early-mid 1990's: analyses of the perceptron and Widrow-Hoff.

Experimental evaluation of Perceptron vs. Widrow-Hoff and Experts (Winnow-like methods) in SIGIR-1996 (Lewis, Schapire, Callan, Papka) and (Cohen & Singer). Freund & Schapire (1998-1999) showed that the "kernel trick" and averaging/voting worked.

The voted perceptron
[Figure: an on-line protocol in which instances x_i pass between A and B.]

[Figures, slides 21-24: (1) a target u; (2) the guess v1 after one positive example; (3a) the guess v2 after two positive examples, v2 = v1 + x2; (3b) the guess v2 after one positive and one negative example, v2 = v1 - x2; every example lies at distance greater than γ from the hyperplane defined by u.]

I want to show two things:
- The v's get closer and closer to u: v·u increases with each mistake.
- The v's do not get too large: v·v grows slowly.

On-line to batch learning
- Pick a vk at random according to mk/m, the fraction of examples it was used for.
- Predict using the vk you just picked.
- (Actually, use some sort of deterministic approximation to this.)
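The mistake-driven update, together with the survival counts m_k that the voted and randomized schemes need, can be sketched as follows (the toy dataset is an assumption for illustration):

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    # Returns the list of hypotheses v_k with survival counts m_k,
    # as used by the voted perceptron.
    v = np.zeros(X.shape[1])
    hypotheses, count = [], 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * np.dot(v, xi) <= 0:      # mistake
                hypotheses.append((v.copy(), count))
                v = v + yi * xi              # perceptron update
                count = 1
            else:
                count += 1                   # v survives another example
    hypotheses.append((v.copy(), count))
    return hypotheses

# Toy linearly separable data (constant feature appended for the bias).
X = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 1.0],
              [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
y = np.array([1, 1, -1, -1])
hyps = train_perceptron(X, y)
v_final, _ = hyps[-1]
assert all(yi * np.dot(v_final, xi) > 0 for xi, yi in zip(X, y))
```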
Some more comments
- Perceptrons are like support vector machines (SVMs). SVMs search for something that looks like u: i.e., a vector w where ||w|| is small and the margin for every example is large.
- You can use "the kernel trick" with perceptrons: replace x·w with (x·w + 1)^d.

Experimental Results
Task: classifying hand-written digits for the post office.
More experimental results (linear kernel, one pass over the data).
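Replacing x·w with (x·w + 1)^d keeps w implicit as a sum of the examples where mistakes occurred. A minimal kernel-perceptron sketch on an XOR-style toy set (not linearly separable, but separable with d = 2):

```python
import numpy as np

def poly_kernel(x, z, d=2):
    # The polynomial kernel from the slide: (x . z + 1)^d
    return (np.dot(x, z) + 1.0) ** d

def kernel_perceptron(X, y, epochs=10, d=2):
    # w is never formed explicitly; it lives as coefficients alpha on
    # the training examples where mistakes were made.
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            score = sum(alpha[j] * y[j] * poly_kernel(X[j], xi, d)
                        for j in range(len(X)))
            if yi * score <= 0:
                alpha[i] += 1.0
    return alpha

def kp_predict(alpha, X, y, x, d=2):
    s = sum(alpha[j] * y[j] * poly_kernel(X[j], x, d) for j in range(len(X)))
    return 1 if s >= 0 else -1

# XOR-like labels: positive iff both coordinates share a sign.
X = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
y = np.array([1, 1, -1, -1])
alpha = kernel_perceptron(X, y)
assert all(kp_predict(alpha, X, y, xi) == yi for xi, yi in zip(X, y))
```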
