04 1 IR Basics 3


Published on January 14, 2008

Author: Carmela

Source: authorstream.com

Web Search – Summer Term 2006
II. Information Retrieval (Basics Cont.)
(c) Wolfgang Hürst, Albert-Ludwigs-University

Organizational Remarks
Exercises: Please register for the exercises by sending me (huerst@informatik.uni-freiburg.de) an email by Friday, May 5th, with
- your name,
- Matrikelnummer (student ID),
- Studiengang (degree program: BA, MSc, Diploma, ...),
- your plans for the exam (yes, no, undecided).
This is only to organize the exercises and has no consequences if you decide to drop this course later.

Recap: IR System & Tasks Involved
[Diagram: the user's information need is formulated as a query; query processing (parsing & term processing) yields the logical view of the information need, which is matched against the index. On the document side, data is selected for indexing and undergoes parsing & term processing to build the index. A user interface connects the user to the system, and performance evaluation spans the whole process.]

Evaluation of IR Systems
Standard approaches for algorithm and computer system evaluation:
- speed / processing time,
- storage requirements,
- correctness of the used algorithms and their implementation.
But most importantly: performance / effectiveness. Another important issue: usability and the users' perception.
Questions: What is a good (or better) search engine? How can search engine quality be measured? How should evaluations be performed? Etc.

What Does Performance/Effectiveness of IR Systems Mean?
Typical questions: How good is the quality of a system? Which system should I buy? Which one is better? How can I measure the quality of a system? What does quality mean for me? Etc.
The answers depend on the users, the application, and so on. There are very different views and perceptions: user vs. search engine provider, developer vs. manager, seller vs. buyer, ... And remember: queries can be ambiguous, unspecific, etc. Hence, in practice, one uses restrictions and idealizations, e.g. only binary relevance decisions.

Precision & Recall
PRECISION = (# found & relevant) / (# found)
RECALL = (# found & relevant) / (# relevant)
Example result list: 1. doc. B, 2. doc. E, 3. doc. F, 4. doc. G, 5. doc. D, 6. doc. H.
Restrictions: 0/1 relevance, and a set instead of an order/ranking. But: we can use this for the evaluation of rankings too (via the top N documents).

Calculating Precision & Recall
Precision can be calculated directly from the result. Recall requires relevance ratings for the whole (!) data collection. In practice, there are several approaches to estimate recall:
1. use a representative sample instead of the whole data collection,
2. the document-source method,
3. expanding queries,
4. comparing the result with external sources,
5. the pooling method.

Precision & Recall – Special Cases
Special treatment is necessary if no document is found or no relevant document exists (division by zero). With A = # found & relevant, B = # found & not relevant, C = # relevant & not found:
No relevant document exists (A = C = 0): 1st case B = 0, 2nd case B > 0.
Empty result set (A = B = 0): 1st case C = 0, 2nd case C > 0.

Precision & Recall Graphs
Comparing two systems: System 1: Prec1 = 0.6, Rec1 = 0.3. System 2: Prec2 = 0.4, Rec2 = 0.6. Which one is better? A precision-recall graph visualizes the trade-off.

The F Measure
Alternative measures exist, including ones that combine precision p and recall r in a single value. Example: the F measure
F = ((β² + 1) · p · r) / (β² · p + r)
(β = relative weight for recall, manually set). [Figure: example curves for different β values.]
Source: N. Fuhr (Univ. Duisburg), lecture notes for the course Information Retrieval, SS 2006.

Calculating Average Precision Values
1. Macro assessment: estimates the expected value of the precision of a randomly chosen query (query- or user-oriented). Problem: queries with an empty result set.
2. Micro assessment: estimates the likelihood of a randomly chosen document being relevant (document- or system-oriented). Problem: does not support monotony.

Monotony of Precision & Recall
Monotony: adding a query that delivers the same results for both systems does not change their quality assessment. [Example for precision shown as a figure.]

Precision & Recall for Rankings
Distinguish between linear and weak rankings. Basic idea: evaluate precision and recall by looking at the top n results for different n. Generally, precision decreases and recall increases with growing n. [Figure: precision and recall as functions of n.]

Realizing Evaluations
Now we have a system to evaluate, measures to quantify performance, and methods to calculate them. What else do we need?
- documents dj (a test set),
- tasks (information needs) and respective queries qi,
- relevance judgments rij (normally binary),
- results (delivered by the system).
Evaluation = comparison of the given, perfect result (qi, dj, rij) with the result from the system (qi, dj, rij(S1)).

The TREC Conference Series
In the old days, IR evaluation was problematic: there were no good (i.e. big) test sets, and no comparability because of different test sets. This motivated initiatives such as TREC: the Text REtrieval Conference, held since 1992, see http://trec.nist.gov/.
Goals of TREC:
- create realistic, significant test sets,
- achieve comparability of different systems,
- establish common basics for IR evaluation,
- increase technology transfer between industry and research.

The TREC Conference Series (Cont.)
TREC offers various collections of test data, standardized retrieval tasks (queries & topics), related relevance measures, and different tasks ("tracks") for certain problems. Examples of tracks targeted by TREC: traditional text retrieval, spoken document retrieval, non-English or multilingual retrieval, information filtering, user interactions, web search, SPAM (since 2005), Blog (since 2005), video retrieval, etc.

Advantages and Disadvantages of TREC
TREC (and other IR initiatives) have been very successful, enabling progress that otherwise might not have happened. But disadvantages exist as well, e.g.:
- it only compares performance, not the actual reasons for different behavior,
- unrealistic data (e.g. still too small, not representative enough),
- often just batch-mode evaluation, without interactivity or user experience (note: there are interactivity tracks!),
- often no analysis of significance.
Note: most of these arguments are general problems of IR evaluation and not necessarily TREC-specific.

TREC Home Page
Visit the TREC site at http://trec.nist.gov and browse the different tracks; this gives you an idea of what is going on in the IR community.
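The set-based precision and recall definitions from the slides, including the division-by-zero special cases, can be sketched as follows. The conventions chosen for the special cases are one common choice, not necessarily the one the slides intend, and the relevant-document set is made up for illustration (the slides only give the result list B, E, F, G, D, H).

```python
def precision_recall(found, relevant):
    """Set-based precision and recall (0/1 relevance, no ranking).

    Special cases (division by zero), using one common convention:
    - empty result set: precision is 1.0 if no relevant document
      exists (nothing wrong was returned), else 0.0
    - no relevant document exists: recall is 1.0 (nothing was missed)
    """
    hits = len(found & relevant)  # A = # found & relevant
    if found:
        precision = hits / len(found)
    else:
        precision = 1.0 if not relevant else 0.0
    recall = hits / len(relevant) if relevant else 1.0
    return precision, recall

# Hypothetical example: the result list B, E, F, G, D, H from the slide,
# with B, E, D (plus two unfound documents X, Y) assumed relevant.
found = {"B", "E", "F", "G", "D", "H"}
relevant = {"B", "E", "D", "X", "Y"}
print(precision_recall(found, relevant))  # 3 of 6 found are relevant (P=0.5); 3 of 5 relevant are found (R=0.6)
```

The two special-case branches correspond directly to the slide's case analysis: "empty result set" is the `not found` branch, "no relevant document exists" is the `not relevant` branch.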
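The F measure named above (the slides cite N. Fuhr's lecture notes for it) is standardly defined as F = (β²+1)·p·r / (β²·p + r). A minimal sketch, applied to the two systems from the precision-recall-graph slide:

```python
def f_measure(p, r, beta=1.0):
    """Combine precision p and recall r into a single value.
    beta is the relative weight for recall (beta=1: equal weight)."""
    if p == 0.0 and r == 0.0:
        return 0.0  # avoid division by zero
    b2 = beta * beta
    return (b2 + 1.0) * p * r / (b2 * p + r)

# System 1: p=0.6, r=0.3; System 2: p=0.4, r=0.6 (from the graph slide).
print(f_measure(0.6, 0.3))  # F1 for System 1 is about 0.40
print(f_measure(0.4, 0.6))  # F1 for System 2 is about 0.48
```

Under F1 (equal weighting), System 2 comes out ahead; raising β would weight recall even more strongly and widen that gap.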
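The macro/micro distinction for average precision can be sketched as below. The per-query counts are made-up illustration data, and skipping empty result sets in the macro average is an assumption of this sketch (it is one way to sidestep the "empty result set" problem the slide mentions, not necessarily the slides' resolution).

```python
def macro_micro_precision(per_query):
    """per_query: list of (num_found, num_found_and_relevant) per query.

    Macro: mean of the per-query precisions (query/user-oriented);
    queries with an empty result set are skipped here.
    Micro: pool the counts over all queries (document/system-oriented).
    """
    per_q = [hits / found for found, hits in per_query if found > 0]
    macro = sum(per_q) / len(per_q) if per_q else 0.0
    total_found = sum(found for found, _ in per_query)
    total_hits = sum(hits for _, hits in per_query)
    micro = total_hits / total_found if total_found else 0.0
    return macro, micro

# Two hypothetical queries: 5 relevant of 10 found, and 2 relevant of 2 found.
print(macro_micro_precision([(10, 5), (2, 2)]))  # macro 0.75; micro 7/12, about 0.58
```

Note how the small, perfectly answered query pulls the macro value up, while the micro value is dominated by the larger query: exactly the query-oriented vs. document-oriented difference described above.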
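The top-n idea for evaluating rankings can be sketched as follows, reusing the hypothetical example where B, E, D are the relevant documents and all three appear in the ranked result list:

```python
def precision_recall_at_n(ranking, relevant):
    """For each cutoff n, compute precision and recall over the top-n
    results. Typically precision falls and recall rises as n grows."""
    points, hits = [], 0
    for n, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
        points.append((n, hits / n, hits / len(relevant)))
    return points

# Hypothetical ranking from the slide's result list; B, E, D assumed relevant.
for n, p, r in precision_recall_at_n(["B", "E", "F", "G", "D", "H"],
                                     {"B", "E", "D"}):
    print(n, p, r)
```

Plotting the (precision, recall) pairs for n = 1, ..., N gives exactly the kind of precision-recall curve the ranking slides refer to.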
