
Topic Models Based Personalized Spam Filter

Published on December 24, 2007

Author: sudarsun

Source: slideshare.net

Spam filtering poses a critical problem in text categorization because the features of text are continuously changing. Spam evolves continuously, making it difficult for a filter to classify the evolving and evasive new feature patterns. Since most practical applications are based on online user feedback, the task calls for fast, incremental and robust learning algorithms. This paper presents a system for the automatic detection and filtering of unsolicited electronic messages. We have developed a content-based classifier that uses two topic models, LSI and PLSA, complemented with a text pattern-matching based natural language approach. By combining these powerful statistical and NLP techniques we obtain a parallel content-based spam filter that performs the filtration in two stages: in the first stage each model generates its individual predictions, which are then combined by a voting mechanism in the second stage.




What is Spam?

Unsolicited, unwanted email

What is Spam Filtering?

Detection/filtering of unsolicited content

What's Personalized Spam Filtering?

The definition of "unsolicited" becomes personal

Approaches

Origin-Based Filtering [ Generic ]

Content-Based Filtering [ Personalized ]

Content-Based Filtering

What does the message contain?

Images, text, URLs

Is it "irrelevant" to my preferences?

How to define relevancy?

How does the system understand relevancy?

Supervised Learning

Teach the system about what I like and what I don’t

Unsupervised Learning

Decision made using latent patterns

Content-Based Filtering -- Methods

Bayesian Spam Filtering

Simplest Design / Less computation cost

Based on keyword distribution

Cannot work on contexts

Accuracy is around 60%

Topic Models based Text Mining

Based on distribution of n-grams (key phrases)

Addresses Synonymy and Polysemy

Run-time computation cost is low

Unsupervised technique

Rule based Filtering

Supervised technique based on hand-written rules

Best accuracy for known cases

Cannot adapt to new patterns
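As a concrete reference point for the Bayesian filtering entry above, here is a minimal multinomial Naive Bayes spam classifier with Laplace smoothing. It is an illustrative sketch, not the deck's implementation; the training data, tokenization, and function names are all assumptions.

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (token_list, label) with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    ndocs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        ndocs[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, ndocs, vocab

def classify(tokens, counts, ndocs, vocab):
    """Pick the class with the highest log-posterior under a keyword-distribution model."""
    total = sum(ndocs.values())
    best, best_lp = None, float("-inf")
    for label in ("spam", "ham"):
        lp = math.log(ndocs[label] / total)                 # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy training set (an assumption for illustration):
train_docs = [
    (["free", "money", "offer"], "spam"),
    (["win", "free", "prize"], "spam"),
    (["meeting", "project", "report"], "ham"),
    (["lunch", "meeting", "today"], "ham"),
]
model = train(train_docs)
print(classify(["free", "prize"], *model))    # -> spam
print(classify(["project", "lunch"], *model)) # -> ham
```

Because the model only sees keyword distributions, it cannot use context — which is exactly the limitation the slide notes.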

Topic Models

Topic Models

Treats every word as a feature

Represents the corpus as a higher-dimensional distribution

SVD: Decomposes the higher-dimensional data to a small reduced sub-space containing only the dominant feature vectors

PLSA: Documents can be understood as a mixture of topics

Rule Based Approaches

N-Grams – Language Model Approach

The more n-grams two texts have in common, the closer their patterns are.

LSA Model, In Brief

Describes underlying structure among text.

Computes similarities between texts.

Represents documents in high-dimensional Semantic Space (Term – Document Matrix).

High dimensional space is approximated to low-dimensional space using Singular Value Decomposition (SVD).

Decomposes the higher dimensional TDM to U, S, V matrices.

U: Left Singular Vectors ( reduced word vectors )

V: Right Singular Vectors ( reduced document vectors )

S: Array of Singular Values ( variances or scaling factor )
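The decomposition described above can be sketched with NumPy on a toy term-document matrix. The matrix values and the rank k = 2 are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Toy term-document matrix A (rows: terms, columns: documents), raw counts.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# SVD: A = U S V^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                      # keep only the k dominant feature vectors
U_k = U[:, :k]             # left singular vectors (reduced word vectors)
S_k = np.diag(s[:k])       # dominant singular values (scaling factors)
V_k = Vt[:k, :].T          # right singular vectors (reduced document vectors)

A_k = U_k @ S_k @ V_k.T    # rank-k approximation of the TDM
print(np.round(A_k, 2))

# A new mail is folded into the same semantic space as a pseudo-vector:
q = np.array([1, 0, 0, 1], dtype=float)   # term counts of an incoming mail
q_hat = q @ U_k @ np.linalg.inv(S_k)      # 1 x k representation
print(np.round(q_hat, 3))
```

Similarity between mails can then be computed (e.g. by cosine) in the reduced space rather than over raw keywords.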

PLSA Model

In the PLSA model, a document is a mixture of topics, and topics generate words.

The probabilistic latent factor model can be described by the following generative process:

Select a document d_i from D with probability Pr(d_i).

Pick a latent factor z_k with probability Pr(z_k | d_i).

Generate a word w_j from W with probability Pr(w_j | z_k).

The joint document-word probability is then Pr(d_i, w_j) = Pr(d_i) Σ_k Pr(w_j | z_k) Pr(z_k | d_i), and the aspect model parameters are computed using the EM algorithm.
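The EM fit for this aspect model can be sketched in NumPy. The count matrix, the number of aspects K = 2, and the iteration count are toy assumptions; the update equations are the standard PLSA E- and M-steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# n[d, w]: toy document-word count matrix (an assumption, not the paper's data).
n = np.array([[4, 2, 0, 0],
              [3, 1, 1, 0],
              [0, 0, 3, 4],
              [0, 1, 2, 3]], dtype=float)
D, W = n.shape
K = 2                                   # number of latent aspects z_k

# Random initialisation of Pr(z|d) and Pr(w|z), rows normalised to 1.
p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)
p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(1, keepdims=True)

for _ in range(50):                     # EM iterations
    # E-step: Pr(z | d, w) proportional to Pr(w | z) Pr(z | d)
    post = p_z_d[:, :, None] * p_w_z[None, :, :]   # shape D x K x W
    post /= post.sum(1, keepdims=True) + 1e-12
    # M-step: re-estimate Pr(w|z) and Pr(z|d) from expected counts
    nk = n[:, None, :] * post                      # expected counts, D x K x W
    p_w_z = nk.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = nk.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True)

print(np.round(p_z_d, 2))   # each document as a mixture over the two aspects
```

After training, each mail's aspect mixture Pr(z | d) serves as its low-dimensional feature vector for the downstream classifier.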

N-Gram Approach

Language Model Approach

Looks for repeated patterns

Each word depends probabilistically on the n-1 preceding words.

Calculating and Comparing the N-Gram profiles.
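One common way to calculate and compare n-gram profiles is the "out-of-place" rank distance; the slide does not fix a specific measure, so the distance function, n-gram order, and sample texts below are assumptions for illustration.

```python
from collections import Counter

def ngram_profile(text, n=3, top=20):
    """Rank the most frequent character n-grams of a text."""
    text = text.lower()
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def profile_distance(p1, p2):
    """Sum of rank displacements between two profiles (out-of-place measure)."""
    max_d = max(len(p1), len(p2))
    return sum(abs(i - p2.index(g)) if g in p2 else max_d
               for i, g in enumerate(p1))

spam = "win free money now free offer free"
mail = "free money offer win now"
memo = "the quarterly report is attached for review"

# The spammy mail shares many n-grams with the spam profile, the memo few:
print(profile_distance(ngram_profile(spam), ngram_profile(mail)))
print(profile_distance(ngram_profile(spam), ngram_profile(memo)))
```

A smaller distance means more common n-grams, i.e. the patterns are closer, matching the rule on the earlier slide.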

Overall System Architecture (diagram): training and test mails pass through a preprocessor into the parallel classifiers (LSA model, PLSA model, N-Gram, and other classifiers), whose predictions feed a combiner that produces the final result.

Preprocessing

Feature Extraction

Tokenizing

Feature Selection

Pruning

Stemming

Weighting

Feature Representation

Term Document Matrix Generation

Sub Spacing

LSA / PLSA Model Projection

Feature Reduction

Principal Component Analysis

Principal Component Analysis - PCA

Data Reduction - Ignore the features of lesser significance

Given N data vectors in k dimensions, find c <= k orthogonal vectors that best represent the data

The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)

Detects structure in the relationships between variables, which is then used to classify the data.
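The steps above can be sketched via an eigendecomposition of the covariance matrix. The data, N, k, and c are toy assumptions; the slide itself does not prescribe an implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, c = 100, 5, 2                 # N vectors in k dimensions, reduced to c
X = rng.normal(size=(N, k))
X[:, 1] = 2.0 * X[:, 0] + 0.1 * rng.normal(size=N)  # make one direction dominant

Xc = X - X.mean(axis=0)             # centre the data
cov = (Xc.T @ Xc) / (N - 1)         # k x k covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:c]       # c largest -> principal components
components = eigvecs[:, order]              # k x c orthogonal projection matrix

X_reduced = Xc @ components         # N data vectors on c principal components
print(X_reduced.shape)              # (100, 2)
```

Features along low-variance directions (the "lesser significance" ones) are the ones discarded by keeping only the top c eigenvectors.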

LSA Classification (diagram): input mails are tokenized and projected through the LSA model into a 1xR vector (projection matrix MxR; M: vocabulary size, R: rank), reduced by PCA to a 1xR' vector (mapping RxR'; R: input variable size, R': output variable size), and scored by a BPN.

PLSA Classification (diagram): input mails are tokenized and projected through the PLSA model into a 1xZ vector (projection matrix MxZ; M: vocabulary size, Z: aspect count), reduced by PCA to a 1xZ' vector (mapping ZxZ'; Z: input variable size, Z': output variable size), and scored by a BPN.

(P)LSA Classification

Model Training

Build the Global (P)LSA model using the training mails.

Vectorize the training mails using the LSI/PLSA model.

Reduce the dimensionality of the matrix of pseudo vectors of training documents using PCA.

Feed the reduced matrix into neural networks for learning.

Model Testing

Each test mail is fed to the (P)LSA model for vectorization.

The vector is reduced using the PCA model.

The reduced vector is fed into the BPN neural network.

The BPN network emits its prediction with a confidence score.
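The training and testing steps above can be sketched end to end with scikit-learn as a stand-in: `TruncatedSVD` plays the LSA projection and `MLPClassifier` the BPN. This mirrors the pipeline's shape only; the corpus, dimensions, and network size are assumptions, and the paper's own implementation differs in its components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy training mails (an assumption for illustration):
train_mails = [
    "win free money now, limited offer",
    "free prize claim your reward now",
    "project meeting moved to tuesday",
    "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(
    TfidfVectorizer(),                              # preprocessing + weighted TDM
    TruncatedSVD(n_components=2, random_state=0),   # LSA projection to rank R
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
clf.fit(train_mails, labels)                        # model training

# Model testing: vectorize, project, and score a new mail
pred = clf.predict(["claim your free reward"])[0]
conf = clf.predict_proba(["claim your free reward"])[0].max()
print(pred, round(conf, 2))
```

As the conclusion notes, prediction is fast once the (comparatively slow) background training of the projection and network is done.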

N-Gram Method

Construct an N-Gram tree out of training docs

Documents make the leaves

Nodes make the identified N-grams from docs

Weight of an N-gram = Number of children

Higher order of N-gram implies more weight

Weight update: Wt ← Wt * S / ( S + L )

P: total number of docs sharing an N-Gram

S: number of SPAM docs sharing the N-Gram

L: P - S
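The weighting rule on this slide is a one-liner; the function and variable names below are illustrative, but the arithmetic is a direct transcription of Wt * S / ( S + L ) with L = P - S.

```python
def ngram_weight(children, spam_docs, total_docs):
    """Scale an n-gram's base weight (its child count, Wt) by the fraction
    of sharing documents that are spam: Wt * S / (S + L), where L = P - S."""
    s = spam_docs                 # S: spam docs sharing the n-gram
    l = total_docs - spam_docs    # L = P - S
    return children * s / (s + l)

# An n-gram with 5 children, shared by 10 docs of which 8 are spam:
print(ngram_weight(5, 8, 10))     # -> 4.0
```

An n-gram that only ever appears in ham (S = 0) thus gets weight 0, regardless of how many children it has.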

An Example N-Gram Tree (diagram): documents T1-T5 at the leaves; identified n-gram nodes N1-N4, of orders 1st to 3rd, as internal nodes.

Combiner

Mixture of Experts

Get Predictions from all the Experts

Use the maximum common prediction

Use the prediction with maximum confidence score
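The two combination rules listed above are easy to state in code. This is a minimal sketch with illustrative names; each expert is assumed to report a (label, confidence) pair.

```python
from collections import Counter

def combine_majority(predictions):
    """Use the maximum common prediction (majority vote).
    predictions: list of (label, confidence) pairs from the experts."""
    votes = Counter(label for label, _ in predictions)
    return votes.most_common(1)[0][0]

def combine_max_confidence(predictions):
    """Use the prediction with the maximum confidence score."""
    return max(predictions, key=lambda p: p[1])[0]

experts = [("spam", 0.9), ("ham", 0.6), ("spam", 0.7)]
print(combine_majority(experts))        # -> spam
print(combine_max_confidence(experts))  # -> spam
```

The two rules can disagree — e.g. one very confident expert against two lukewarm ones — which is why the choice of combiner matters in a mixture of experts.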

Conclusion

The objective is to filter mail messages based on the preferences of an individual.

Classification performance increases with increased (incremental) training

Initial learning is not necessary for LSA, PLSA & N-Gram.

Performs unsupervised filtering

Performs fast prediction although background training is a relatively slower process


Any queries? You can post your queries to [email_address]
