AREng_Chap1_Intro

Published on November 28, 2008

Author: aSGuest4298

Source: authorstream.com

Automated reasoning and theorem proving
Introduction: logic in AI
Automated reasoning: resolution, unification, normalization

Introduction:
Motivating example
Automated reasoning
Logic: syntax, model semantics, logical entailment

The AI dream in the 60's:
Logic allows us to express almost everything formally.
Logic also allows us to prove "theorems" based on the information given.
Can we exploit this to build automated reasoning systems?

Underlying premises:

Example:
The following knowledge is given:
1. Marcus was a man.
2. Marcus was a Pompeian.
3. All Pompeians were Romans.
4. Caesar was a ruler.
5. All Romans were either loyal to Caesar or hated him.
6. Everyone is loyal to someone.
7. People only try to assassinate rulers to whom they are not loyal.
8. Marcus tried to assassinate Caesar.
Can we automatically answer the following questions?

Conversion to first-order logic:
Representation of facts:
1. Marcus was a man: man(Marcus)
2. Marcus was a Pompeian: Pompeian(Marcus)
4. Caesar was a ruler: ruler(Caesar)
8. Marcus tried to assassinate Caesar: try_assassinate(Marcus, Caesar)

Conversion to first-order logic (2):
General representation (representation of rules):
3. All Pompeians were Romans:
   ∀x Pompeian(x) → Roman(x)
5. All Romans were either loyal to Caesar or hated him:
   ∀x Roman(x) → loyal_to(x, Caesar) ∨ hates(x, Caesar)
6. Everyone is loyal to someone:
   ∀x ∃y loyal_to(x, y)
7. People only try to assassinate rulers to whom they are not loyal:
   ∀x ∀y person(x) ∧ ruler(y) ∧ try_assassinate(x, y) → ~loyal_to(x, y)

The "theorem"?
Was Marcus loyal to Caesar? Did Marcus hate Caesar?
hates(Marcus, Caesar)
~loyal_to(Marcus, Caesar)

A proof using backward-reasoning problem reduction:
Goal: ~loyal_to(Marcus, Caesar), reduced using modus ponens.

Problems: 1) Knowledge representation:
Natural language is imprecise / ambiguous (see "People only try ...").
Obvious information is easily forgotten (see man <-> person).
Some information is more difficult to represent in logic, e.g. "perhaps ...", "possibly ...", "probably ...", "the chance of ... is 45%".
Logic is inconvenient from a software engineering perspective: too fine-grained (like an assembly language).

Problems: 2) Problem solving:
All the trade-offs that we had with search methods based on state-space representations: backward/forward, tree/graph, OR-tree/AND-OR, control aspects, ...
What deduction rules are needed in general?
Example: prove hates(Marcus, Caesar). How do we handle ∀x and ∃y?

Problems: 2) Problem solving (2):
How do we compute substitutions in the general case?
Which theorem do we try to prove? E.g. loyal_to(Marcus, Caesar) or ~loyal_to(Marcus, Caesar)?
How do we handle equality of objects? Problem: combinatorial explosion of the derived equalities (reflexivity, symmetry, transitivity, ...).
How do we guarantee correctness/completeness?
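The derivation sketched above can be mechanized even without a full prover. Below is a minimal Python sketch, not taken from the slides, that derives hates(Marcus, Caesar) by naive forward chaining over the Marcus knowledge base. The tuple encoding, the restriction of rule 7 to the constant Caesar, the rewriting of rule 5's disjunction into the Horn-style rule Roman(x) ∧ ~loyal_to(x, Caesar) → hates(x, Caesar) (with the negated literal treated as a fresh positive atom not_loyal_to), and the explicitly added "men are persons" rule (the easily forgotten fact mentioned above) are my assumptions.

```python
# A minimal sketch (not the course's prover): the Marcus knowledge base as
# ground facts plus single-variable rules, answered by naive forward chaining.
facts = {
    ("man", "Marcus"),
    ("Pompeian", "Marcus"),
    ("ruler", "Caesar"),
    ("try_assassinate", "Marcus", "Caesar"),
}

# Rules as (premises, conclusion); "x" is the only variable and is grounded
# over the two constants below.
rules = [
    ([("man", "x")], ("person", "x")),                        # the easily forgotten fact
    ([("Pompeian", "x")], ("Roman", "x")),                    # rule 3
    ([("person", "x"), ("ruler", "Caesar"),
      ("try_assassinate", "x", "Caesar")],
     ("not_loyal_to", "x", "Caesar")),                        # rule 7, specialized to Caesar
    ([("Roman", "x"), ("not_loyal_to", "x", "Caesar")],
     ("hates", "x", "Caesar")),                               # rules 5 and 7 combined
]

def forward_chain(facts, rules, constants=("Marcus", "Caesar")):
    """Fire every rule on every constant until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for c in constants:
                ground = lambda atom: tuple(c if t == "x" else t for t in atom)
                if all(ground(p) in derived for p in premises):
                    if ground(conclusion) not in derived:
                        derived.add(ground(conclusion))
                        changed = True
    return derived

print(("hates", "Marcus", "Caesar") in forward_chain(facts, rules))         # True
print(("not_loyal_to", "Marcus", "Caesar") in forward_chain(facts, rules))  # True
```

A real prover avoids grounding over an enumerated set of constants and instead manipulates the quantified formulas directly, which is exactly what resolution and unification in the following chapters are for.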
The formal model semantics of logic
The meaning of "logical entailment"

Propositional logic

Basic concepts:
In propositional logic:

Semantics (meaning):
In general (for all knowledge representation formalisms), there are 2 approaches to define semantics:
1. Describe the meaning by means of a natural language.
Examples (propositional logic):
→ : "implies"
~ : "not true that"
∨ : "or"
p ↔ q : "p if and only if q"
~p ∧ r : "not p and r"
Every symbol and every well-formed formula gets meaning through the associated natural language.

Semantics (2):
2. Describe the meaning by converting to an associated "mathematical" object.
In propositional logic: the set of all propositional symbols that are logically entailed by the given formulas (e.g. q, p, r).

But how to define "logical entailment"?
NOT as "everything that we can derive from the formulas", SINCE at this moment we do not yet know what a complete set of derivation rules is. This is exactly what automated reasoning aims to find out!
BUT by means of interpretations and models.

Interpretation:
= a function that assigns a truth value to each "atomic" formula (a truth table).

Model:
Given a set of formulas S: a model is an interpretation that makes all formulas in S true.

Logical entailment:
Given a set of formulas S and a formula F: F is logically entailed by S ( S |= F ) if all models of S also make F true.
Example: F: r … p

Predicate logic

Examples of well-formed formulas:
p(Yvonne)
p(x) ∨ ~q(x)
∃z p(z)
∀x ∀y p(x) ∧ q(y) → p(f(Yvonne, Yvette))

Formally:
An alphabet consists of variables, constants, function symbols and predicate symbols (all user-defined), and of connectors, punctuation and quantifiers.
Terms are either variables, constants, or function symbols provided with as many terms as arguments as the function symbol expects.
Well-formed formulas are constructed from predicate symbols, provided with terms as arguments, and from connectors, quantifiers and punctuation, according to the rules of the connectors.

Example:
Alphabet: { {0}, {x,y}, {s}, {odd,even}, Con, Pun, Quan }
Terms: { 0, s(0), s(s(0)), s(s(s(0))), …, x, s(x), s(s(x)), s(s(s(x))), …, y, s(y), s(s(y)), s(s(s(y))), … }
Well-formed formulas:
odd(0), even(s(0)), …
odd(x), odd(s(y)), …
odd(x) ∧ even(s(s(x))), …
∀x ( odd(x) → even(s(x)) ), …
odd(y) ∨ ∃x even(s(x)), ...

Interpretation:
= a set D (the domain), and
a function that maps constants to D, and
a function that maps function symbols to functions D -> D, and
a function that maps predicate symbols to predicates D -> Booleans.

Assigning truth values:
1. To ground atomic formulas (form: p(f(a), g(a,b))).
Example: I( odd( s( s( 0 ) ) ) ) = ?
= I(odd)( I(s)( I(s)( I(0) ) ) )
= "even"( succ( succ( 0 ) ) )
= "even"( succ( 1 ) )
= "even"( 2 )
= true

Assigning truth values (2):
2. To closed well-formed formulas (= no non-quantified variables):
∀x F(x) is true if for all d ∈ D: I( F(d) ) = true.
∃x F(x) is true if there exists d ∈ D such that I( F(d) ) = true.
Further: use the truth tables.
Example: I( ∀x odd( s(x) ) ∧ odd(x) ) = ?
= true if for all d in N: I( odd( s(d) ) ∧ odd(d) ) = true
= "even"( succ(d) ) ∧ "even"(d)
Assume d = 0, then: = false ∧ true
Truth tables: false!
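To make the last worked example concrete, here is a small Python sketch, not part of the slides, of the interpretation I used above: the domain is the natural numbers, 0 is mapped to 0, the function symbol s to the successor function, and the predicate symbol odd to the test "is even". The bounded range standing in for "for all d in N", and the reading of the lost connective as a conjunction, are assumptions.

```python
# Sketch of the interpretation I: domain N, I(0) = 0, I(s) = successor,
# I(odd) = "is even" (deliberately not the intended meaning of the symbol).
def I_zero():  return 0
def I_s(n):    return n + 1          # successor
def I_odd(n):  return n % 2 == 0     # the symbol odd is read as "even"

# Ground atomic formula:  I( odd( s( s( 0 ) ) ) )  =  "even"(2)  =  True
print(I_odd(I_s(I_s(I_zero()))))     # True

# Closed formula  ∀x ( odd(s(x)) ∧ odd(x) ):  check "for all d" over a
# finite slice of N; d = 0 already gives "even"(1) ∧ "even"(0) = False.
print(all(I_odd(I_s(d)) and I_odd(d) for d in range(100)))   # False
```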
Semantics / Logical entailment:
Exactly as in propositional logic!
In addition: inconsistency. Given a set of formulas S: S is inconsistent if S has no models.
Example: S = { p(a), ~p(a) }

Marcus example:
The alphabet contains the predicate symbols man, person, ruler, Roman, Pompeian, hates, loyal_to and try_assassinate.
The "intended" interpretation is a model IF ALL FORMULAS ARE CORRECT.

Marcus example (2):
I(man) = I(person) = I(Roman) = "natural number"
I(Pompeian) = "even number"
I(ruler) = "prime number"
I(try_assassinate) = ">"
I(loyal_to) = "divides"
I(hates) = "doesn't divide"
I(Marcus) = 4, I(Caesar) = 3

Model??
1. Marcus was a man: 4 is a natural number.
2. Marcus was a Pompeian: 4 is an even number.
4. Caesar was a ruler: 3 is a prime number.
8. Marcus tried to assassinate Caesar: 4 > 3.
3. All Pompeians were Romans: even numbers are naturals.
5. All Romans were either loyal to Caesar or hated him: a number either divides 3 or doesn't divide 3.
6. Everybody is loyal to somebody: each number is a divisor of some number.
7. People try to assassinate only those rulers to whom they are not loyal: a natural number that is greater than a prime number doesn't divide that prime number.
YES!

"Logic is all form, no content":
Only the underlying structure of a set of logical formulas matters for the conclusions (up to an isomorphism of the names)!
But from the knowledge representation perspective the content is important as well.

Relation to other courses:

Logic as a foundation for AI:
A much deeper and more formal study of logic for knowledge representation and reasoning!

Programming Languages and Programming Methodologies:
Logic-based programming languages (Prolog/CLP).

Selected Topics in Logic Programming:
Formal studies of the semantics and of formal methods (analysis, termination) for logic-based programming languages (also beyond Prolog).

Fundamentals of Artificial Intelligence:
Mostly in the exercises.

Methodology for knowledge representation?
Very complicated; not many simple guidelines.
Choose an alphabet that allows us to represent all objects and all relations from the problem domain:
What are the basic objects, functions and relations in your problem domain?
Ontology: represent only the RELEVANT information.
Choose constants, function and predicate symbols to represent them.
Translate every natural-language sentence into one or more corresponding logical formulas.
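To round off the "logic is all form, no content" point, the number-theoretic interpretation of the Marcus formulas can itself be checked mechanically. The sketch below, not taken from the slides, reads Marcus as 4 and Caesar as 3 (as the Model?? slide suggests) and verifies each formula; since the intended domain (all naturals) cannot be enumerated, the universally quantified formulas are only checked over a finite slice, which is my simplification.

```python
# Checking that the number-theoretic interpretation is a model of the
# eight Marcus formulas, over a finite slice of the naturals.
marcus, caesar = 4, 3
domain = range(0, 50)                    # finite stand-in for N

def is_nat(n):    return n >= 0                                     # I(man) = I(person) = I(Roman)
def is_even(n):   return n % 2 == 0                                 # I(Pompeian)
def is_prime(n):  return n > 1 and all(n % k for k in range(2, n))  # I(ruler)
def greater(x, y):    return x > y                                  # I(try_assassinate)
def divides(x, y):                                                  # I(loyal_to): x divides y
    return y == 0 or (x != 0 and y % x == 0)
def not_divides(x, y): return not divides(x, y)                     # I(hates)

checks = [
    is_nat(marcus),                                               # 1. Marcus was a man
    is_even(marcus),                                              # 2. Marcus was a Pompeian
    all(is_nat(x) for x in domain if is_even(x)),                 # 3. all Pompeians were Romans
    is_prime(caesar),                                             # 4. Caesar was a ruler
    all(divides(x, caesar) or not_divides(x, caesar)
        for x in domain if is_nat(x)),                            # 5. loyal to Caesar or hates him
    all(any(divides(x, y) for y in domain) for x in domain),      # 6. everyone is loyal to someone
    all(not_divides(x, y) for x in domain for y in domain
        if is_nat(x) and is_prime(y) and greater(x, y)),          # 7. assassins are not loyal
    greater(marcus, caesar),                                      # 8. Marcus tried to assassinate Caesar
]

print(all(checks))   # True: the interpretation is a model of all eight formulas
```

All eight formulas come out true, so this interpretation is a model even though it says nothing about Romans: only the form of the formulas matters for what follows from them.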
