
Jarrar: Informed Search


Lecture Notes, Advanced Artificial Intelligence (SCOM7341), Sina Institute, University of Birzeit, 2nd Semester, 2012
Chapter 4: Informed Search
Dr. Mustafa Jarrar, Sina Institute, University of Birzeit
mjarrar@birzeit.edu, www.jarrar.info

Discussion and Motivation
How do we determine the minimum number of coins to give when making change? Is taking the coin of the highest value first always best?
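To make the discussion concrete, here is a minimal Python sketch (not part of the slides) of the "highest-value coin first" greedy rule. The coin sets are illustrative; the rule is optimal for some denomination systems but not for others, which is exactly the point of the question above.

# Sketch: greedy change-making, always taking the highest-value coin first.
# The denominations below are illustrative.

def greedy_change(amount, coins):
    """Return the coins chosen by the 'highest value first' rule."""
    chosen = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            chosen.append(coin)
            amount -= coin
    return chosen

print(greedy_change(6, [1, 2, 5]))    # [5, 1]    -- optimal here
print(greedy_change(6, [1, 3, 4]))    # [4, 1, 1] -- but the optimum is [3, 3]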

Discussion and Motivation: Travelling Salesperson Problem
Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once.
• Any ideas on how to improve this type of search?
• What type of information might we use to improve our search?
• Do you think this idea is useful: at each stage, visit the unvisited city nearest to the current city? (A sketch of this idea follows below.)
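The "visit the nearest unvisited city" idea is the nearest-neighbour heuristic. Below is a small Python sketch of it; the four cities and their distances are made up for illustration.

# Sketch: the nearest-neighbour heuristic for the TSP.
# The four cities and distances below are made up for illustration.

dist = {
    ('A', 'B'): 10, ('A', 'C'): 15, ('A', 'D'): 20,
    ('B', 'C'): 35, ('B', 'D'): 25, ('C', 'D'): 30,
}

def d(x, y):
    return dist.get((x, y)) or dist.get((y, x))

def nearest_neighbour_tour(cities, start):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: d(tour[-1], c))   # nearest unvisited city
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                                       # return to the start city
    length = sum(d(a, b) for a, b in zip(tour, tour[1:]))
    return tour, length

print(nearest_neighbour_tour(['A', 'B', 'C', 'D'], 'A'))
# (['A', 'B', 'D', 'C', 'A'], 80); here this happens to be optimal,
# but in general nearest-neighbour gives no optimality guarantee.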

Best-First Search
• Idea: use an evaluation function f(n) for each node.
– A family of search methods with various evaluation functions (estimates of "desirability"), usually an estimate of the distance to the goal, often referred to as heuristics in this context.
– Expand the most desirable unexpanded node.
• Implementation: order the nodes in the fringe in decreasing order of desirability.
• Special cases:
– greedy best-first search
– A* search
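As a rough illustration of the framework (not code from the lecture), the sketch below keeps the fringe in a priority queue ordered by an evaluation function f; greedy best-first search, A*, and uniform-cost search differ only in the f that is plugged in. The function names and graph representation are my own.

import heapq

def best_first_search(start, goal, successors, f):
    """Generic best-first search (graph search).
    successors(state) yields (next_state, step_cost); f(state, g) scores a node.
    Returns (path, cost) if the goal is reached, else None.
    """
    frontier = [(f(start, 0.0), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # keep only the cheapest known path
                best_g[nxt] = g2
                heapq.heappush(frontier, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None

For example, f(s, g) = h(s) gives greedy best-first search, f(s, g) = g + h(s) gives A*, and f(s, g) = g gives uniform-cost search.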

Romania with step costs in km. Suppose we have this information (straight-line distances, SLD): how can we use it to improve our search?

Greedy Best-First Search
• Greedy best-first search expands the node that appears to be closest to the goal.
• Estimate of the cost from n to the goal, e.g., hSLD(n) = straight-line distance from n to Bucharest.
• Utilizes a heuristic function as the evaluation function:
– f(n) = h(n) = estimated cost from the current node to a goal.
– Heuristic functions are problem-specific.
– Often the straight-line distance for route-finding and similar problems.
– Often better than depth-first search, although the worst-case time complexity is the same and the space complexity is worse.
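The sketch below runs greedy best-first search on a small fragment of the Romania map; the road and straight-line distances are the ones usually quoted in the AIMA textbook and should be treated as assumptions here. From Arad it reaches Bucharest via Sibiu and Fagaras (450 km), missing the cheaper route via Rimnicu Vilcea and Pitesti (418 km).

import heapq

# Fragment of the Romania map (distances as usually quoted in AIMA; treat as assumptions).
roads = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    'Timisoara': {'Arad': 118}, 'Zerind': {'Arad': 75}, 'Bucharest': {},
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    frontier = [(h_sld[start], start, [start])]          # ordered by h only
    visited = set()
    while frontier:
        _, city, path = heapq.heappop(frontier)
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for nxt in roads[city]:
            if nxt not in visited:
                heapq.heappush(frontier, (h_sld[nxt], nxt, path + [nxt]))

path = greedy_best_first('Arad', 'Bucharest')
print(path)                                              # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
print(sum(roads[a][b] for a, b in zip(path, path[1:])))  # 450 km, not the optimal 418 km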

Greedy best-first search example (figures: step-by-step expansion of the Romania example over four slides)

Properties of Greedy Best-First Search
• Complete: No; can get stuck in loops (e.g., Iasi → Neamt → Iasi → Neamt → …).
• Time: O(b^m), but a good heuristic can give significant improvement.
• Space: O(b^m); keeps all nodes in memory.
• Optimal: No.
(b = branching factor, m = maximum depth of the search tree)

Discussion
• Do you think hSLD(n) is admissible?
• Would you use hSLD(n) in Palestine? How? Why?
• Did you find the greedy idea useful? Ideas to improve it?

A* Search
• Idea: avoid expanding paths that are already expensive.
Evaluation function = path cost + estimated cost to the goal:
f(n) = g(n) + h(n)
– g(n) = cost so far to reach n
– h(n) = estimated cost from n to the goal
– f(n) = estimated total cost of the path through n to the goal
• Combines greedy and uniform-cost search to find the (estimated) cheapest path through the current node.
– The heuristic must be admissible: it never overestimates the cost to reach the goal.
• A very good search method, but with complexity problems.
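For comparison with the greedy sketch above, here is a minimal A* sketch on the same Romania fragment (distances again assumed from the standard AIMA map). With f(n) = g(n) + h(n) it finds the cheaper 418 km route that greedy best-first search misses.

import heapq

# Same map fragment as in the greedy sketch (AIMA-style numbers; treat as assumptions).
roads = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101},
    'Timisoara': {'Arad': 118}, 'Zerind': {'Arad': 75}, 'Bucharest': {},
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    frontier = [(h_sld[start], 0, start, [start])]        # ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        for nxt, cost in roads[city].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):        # cheaper path to nxt found
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h_sld[nxt], g2, nxt, path + [nxt]))

print(a_star('Arad', 'Bucharest'))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)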

A* search example (figures: step-by-step expansion of the Romania example over six slides)

A* Exercise: How will A* get from Iasi to Fagaras?

A* Exercise

Node  Coordinates  SL Distance
A     (5,9)        8.0
B     (3,8)        7.3
C     (8,8)        7.6
D     (5,7)        6.0
E     (7,6)        5.4
F     (4,5)        4.1
G     (6,5)        4.1
H     (3,3)        2.8
I     (5,3)        2.0
J     (7,2)        2.2
K     (5,1)        0.0
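The straight-line distances in the table can be recomputed from the coordinates; a small check, assuming K = (5,1) is the goal node:

# Recompute the straight-line (Euclidean) distances in the table above,
# taking K = (5, 1) as the goal node.
from math import dist  # Python 3.8+

coords = {'A': (5, 9), 'B': (3, 8), 'C': (8, 8), 'D': (5, 7), 'E': (7, 6),
          'F': (4, 5), 'G': (6, 5), 'H': (3, 3), 'I': (5, 3), 'J': (7, 2),
          'K': (5, 1)}

goal = coords['K']
for node, xy in coords.items():
    print(node, round(dist(xy, goal), 1))
# A 8.0, B 7.3, C 7.6, D 6.0, E 5.4, F 4.1, G 4.1, H 2.8, I 2.0, J 2.2, K 0.0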

Solution to A* Exercise (figure)

Greedy Best-First Exercise (same nodes, coordinates, and straight-line distances as in the A* exercise above)

Solution to Greedy Best-First Exercise (figure)

Another Exercise: do (1) A* search and (2) greedy best-first search.

Node  Coordinates  g(n)  h(n)
A     (5,10)       0.0   8.0
B     (3,8)        2.8   6.3
C     (7,8)        2.8   6.3
D     (2,6)        5.0   5.0
E     (5,6)        5.6   4.0
F     (6,7)        4.2   5.1
G     (8,6)        5.0   5.0
H     (1,4)        7.2   4.5
I     (3,4)        7.2   2.8
J     (7,3)        8.1   2.2
K     (8,4)        7.0   3.6
L     (5,2)        9.6   0.0

Admissible Heuristics
• A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
• Example: hSLD(n) never overestimates the actual road distance.
• Theorem 1: If h(n) is admissible, then A* using TREE-SEARCH is optimal. (Ideas on how to prove this theorem?)
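One way to check admissibility empirically, sketched below on a tiny hypothetical graph: compute the true cost-to-goal h*(n) for every node (e.g., by running Dijkstra's algorithm backwards from the goal) and verify that h(n) ≤ h*(n) everywhere. The graph and heuristic values are made up for illustration.

import heapq

# Tiny hypothetical graph: check that h never overestimates the true cost h*.
graph = {'S': {'A': 2, 'B': 5}, 'A': {'S': 2, 'G': 4}, 'B': {'S': 5, 'G': 1}, 'G': {'A': 4, 'B': 1}}
h = {'S': 5, 'A': 4, 'B': 1, 'G': 0}            # candidate heuristic (assumed values)

def true_cost_to(goal):
    """h*(n): cheapest cost from each node to the goal, via Dijkstra from the goal."""
    h_star, frontier = {goal: 0}, [(0, goal)]
    while frontier:
        g, n = heapq.heappop(frontier)
        for m, c in graph[n].items():
            if g + c < h_star.get(m, float('inf')):
                h_star[m] = g + c
                heapq.heappush(frontier, (g + c, m))
    return h_star

h_star = true_cost_to('G')
print(h_star)                                   # {'G': 0, 'B': 1, 'A': 4, 'S': 6}
print(all(h[n] <= h_star[n] for n in graph))    # True: h is admissible here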

Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. We want to prove that f(n) < f(G2); then A* will prefer n over G2.
• f(G2) = g(G2), since h(G2) = 0
• g(G2) > g(G), since G2 is suboptimal
• f(G) = g(G), since h(G) = 0
• f(G2) > f(G), from the above

Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
• f(G2) > f(G), from the above
• h(n) ≤ h*(n), since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n)
• f(n) ≤ f(G)
Hence f(G2) > f(n), and n is expanded before G2: contradiction! Thus, A* will never select G2 for expansion.

Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
• In other words:
f(G2) = g(G2) + h(G2) = g(G2) > C*, since G2 is a goal on a non-optimal path (C* is the optimal cost)
f(n) = g(n) + h(n) ≤ C*, since h is admissible
f(n) ≤ C* < f(G2), so G2 will never be expanded.
A* will not expand goals on sub-optimal paths.

Consistent Heuristics
• A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n,a,n') + h(n')
• If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n)
i.e., f(n) is non-decreasing along any path.
• Theorem 2: If h(n) is consistent, then A* using GRAPH-SEARCH is optimal.
• Consistency is also called monotonicity.
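Consistency can be checked edge by edge; a sketch using the same tiny hypothetical graph and heuristic as in the admissibility check above:

# Check consistency: h(n) <= c(n, n') + h(n') for every edge (n, n').
# Same tiny hypothetical graph and heuristic as in the admissibility sketch above.
graph = {'S': {'A': 2, 'B': 5}, 'A': {'S': 2, 'G': 4}, 'B': {'S': 5, 'G': 1}, 'G': {'A': 4, 'B': 1}}
h = {'S': 5, 'A': 4, 'B': 1, 'G': 0}

consistent = all(h[n] <= c + h[m]
                 for n, succ in graph.items()
                 for m, c in succ.items())
print(consistent)   # True; consistency implies admissibility, but not vice versa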

Optimality of A*
• A* expands nodes in order of increasing f value.
• It gradually adds "f-contours" of nodes.
• Contour i contains all nodes with f = f_i, where f_i < f_(i+1).

Complexity of A*
• The number of nodes within the goal-contour search space is still exponential
– with respect to the length of the solution
– better than other algorithms, but still problematic.
• Frequently, space complexity is more important than time complexity:
– A* keeps all generated nodes in memory.

Properties of A*
• Complete: Yes (unless there are infinitely many nodes with f ≤ f(G)).
• Time: exponential, because all nodes with f(n) ≤ C* are expanded.
• Space: keeps all nodes in memory; the fringe is exponentially large.
• Optimal: Yes.
Who can propose an idea to improve the time/space complexity?

Memory-Bounded Heuristic Search
• How can we solve the memory problem of A* search?
• Idea: try something like iterative deepening search, but with an f-cost (g + h) cutoff at each iteration rather than a depth cutoff.
• Two types of memory-bounded heuristic search are covered next:
– Recursive Best-First Search (RBFS)
– MA* (memory-bounded A*)
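The f-cost-cutoff idea described above, applied as a series of depth-first searches with an increasing bound, is known as IDA* (iterative deepening A*). A minimal sketch, with function names of my own choosing:

import math

def ida_star(start, goal, successors, h):
    """Iterative deepening A*: depth-first searches with an increasing f = g + h cutoff.
    successors(state) yields (next_state, step_cost). Returns (path, cost) or None.
    """
    def dfs(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f, None                    # report the smallest f that exceeded the bound
        if state == goal:
            return f, (list(path), g)
        minimum = math.inf
        for nxt, cost in successors(state):
            if nxt in path:                   # avoid cycles along the current path
                continue
            t, found = dfs(path + [nxt], g + cost, bound)
            if found:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        bound, found = dfs([start], 0, bound)
        if found:
            return found
        if bound == math.inf:
            return None                       # no solution

Used with the Romania fragment and hSLD values from the earlier sketches (successors = lambda c: roads[c].items(), h = h_sld.get), it should find the optimal 418 km route while keeping only the current path in memory.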

Recursive Best-First Search (RBFS)
• RBFS keeps track of the best alternative f-value available from fringe nodes that are not descendants of the current node, and uses it to decide whether to back up.
• RBFS changes its mind very often in practice. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal; hence, higher-level nodes have smaller f-values and will be explored first.
• Problem: if we have more memory, RBFS cannot make use of it.

Simple Memory-Bounded A* (SMA*)
• This is like A*, but when memory is full we delete the worst node (the one with the largest f-value).
• Like RBFS, we remember the best descendant in the branch we delete.
• If there is a tie (equal f-values), we delete the oldest nodes first.
• SMA* finds the optimal reachable solution given the memory constraint.
• But time can still be exponential.

SMA* pseudocode

function SMA*(problem) returns a solution sequence
  inputs: problem, a problem
  static: Queue, a queue of nodes ordered by f-cost
  Queue ← MAKE-QUEUE({MAKE-NODE(INITIAL-STATE[problem])})
  loop do
    if Queue is empty then return failure
    n ← deepest least-f-cost node in Queue
    if GOAL-TEST(n) then return success
    s ← NEXT-SUCCESSOR(n)
    if s is not a goal and is at maximum depth then
      f(s) ← ∞
    else
      f(s) ← MAX(f(n), g(s) + h(s))
    if all of n's successors have been generated then
      update n's f-cost and those of its ancestors if necessary
    if SUCCESSORS(n) all in memory then remove n from Queue
    if memory is full then
      delete shallowest, highest-f-cost node in Queue
      remove it from its parent's successor list
      insert its parent on Queue if necessary
    insert s in Queue
  end

Simple Memory-Bounded A* (SMA*): example with 3-node memory. (Figure: progress of SMA* on a small search tree with f = g + h; each node is labeled with its current f-cost, and values in parentheses show the value of the best forgotten descendant.) SMA* can tell you whether the best solution found within the memory constraint is optimal or not.

Admissible Heuristics: how can you invent a good admissible heuristic function? E.g., for the 8-puzzle.

Admissible Heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)
• h1(S) = ?
• h2(S) = ?

Admissible Heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)
• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
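A sketch computing both heuristics in Python; the start state is assumed to be the one pictured on the slide (the standard AIMA example), with 0 denoting the blank and the goal having the blank in the top-left corner. Taking the maximum of the two ties in with the dominance discussion on the next slide.

# h1 = misplaced tiles, h2 = total Manhattan distance, for an 8-puzzle state.
# The start state below is assumed to be the one pictured on the slide
# (the usual AIMA example); 0 denotes the blank.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal square."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

print(h1(START), h2(START), max(h1(START), h2(START)))   # 8 18 18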

Dominance
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
• h2 is better for search: it is guaranteed to expand no more nodes.
• Typical search costs (average number of nodes expanded):
– d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
– d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
• What to do if we have h1 … hm, but none dominates the others? Use h(n) = max{h1(n), …, hm(n)}.

Relaxed Problems
• A problem with fewer restrictions on the actions is called a relaxed problem.
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
• If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution.
• If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.

Admissible Heuristics
How can you invent a good admissible heuristic function?
• Try to relax the problem, so that an optimal solution can be found easily.
• Learn from experience.
Can machines invent an admissible heuristic automatically?

Local Search Algorithms
• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
• State space = set of "complete" configurations.
• Find a configuration satisfying the constraints, e.g., n-queens.
• In such cases, we can use local search algorithms:
– keep a single "current" state and try to improve it according to an objective function.
• Advantages:
1. Uses little memory.
2. Finds reasonable solutions in large or infinite state spaces.

Local Search Algorithms
• Local search can be used on problems that can be formulated as finding a solution that maximizes a criterion among a number of candidate solutions.
• Local search algorithms move from solution to solution in the space of candidate solutions (the search space) until a solution deemed optimal is found or a time bound has elapsed.
• For example, in the travelling salesman problem a solution is a cycle containing all nodes of the graph and the target is to minimize the total length of the cycle; i.e., a candidate solution is a cycle, and the criterion to optimize combines the number of nodes and the length of the cycle.
• A local search algorithm starts from a candidate solution and then iteratively moves to a neighbouring solution.

Local Search Algorithms
• If every candidate solution has more than one neighbouring solution, the choice of which one to move to is made using only information about the solutions in the neighbourhood of the current one, hence the name local search.
• Terminate on a time bound or if the situation does not improve after a number of steps.
• Local search algorithms are typically incomplete algorithms: the search may stop even if the best solution found so far is not optimal.

Example: n-queens
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
• Move a queen to reduce the number of conflicts.

Hill-Climbing Search
• A technique belonging to the family of local search.
• Starts with a random (potentially poor) solution and iteratively makes small changes to the solution, each time improving it a little.
• When the algorithm cannot see any improvement any more, it terminates.
• Problem: depending on the initial state, it can get stuck in a local maximum.
• Hill climbing can be used to solve problems that have many solutions, some of which are better than others (e.g., bisimilarity).

Hill-climbing search: 8-queens problem
(Figure: each number indicates the value of h if we move the queen in the corresponding column to that square.)
• h = number of pairs of queens that are attacking each other, either directly or indirectly (h = 17 for the state shown above).
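A steepest-descent hill-climbing sketch for 8-queens, minimizing the h defined above; the representation and move choice are my own, and depending on the random start it may stop at a local minimum with h > 0.

import random

# Steepest-descent hill climbing for 8-queens (a sketch).
# A state is a tuple: state[col] = row of the queen in that column.
N = 8

def conflicts(state):
    """h = number of pairs of queens attacking each other (same row or diagonal)."""
    return sum(1
               for c1 in range(N) for c2 in range(c1 + 1, N)
               if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1)

def hill_climb(state):
    while True:
        h = conflicts(state)
        if h == 0:
            return state, h                         # solution found
        # best single-queen move within its own column
        best = min((conflicts(state[:c] + (r,) + state[c + 1:]), c, r)
                   for c in range(N) for r in range(N) if r != state[c])
        if best[0] >= h:
            return state, h                         # local minimum (possibly h > 0)
        _, c, r = best
        state = state[:c] + (r,) + state[c + 1:]

random.seed(0)
start = tuple(random.randrange(N) for _ in range(N))
print(hill_climb(start))                            # may get stuck in a local minimum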

Hill-climbing search: 8-queens problem. (Figure: a local minimum with h = 1.)

Simulated Annealing Search
• Finds an acceptably good solution in a fixed amount of time, rather than the best possible solution.
• Locates a good approximation to the global minimum of a given function in a large search space.
• At each step, the SA heuristic considers some neighbour s' of the current state s and probabilistically decides between moving the system to state s' or staying in state s.
• Widely used in VLSI layout, airline scheduling, etc.
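A generic simulated-annealing sketch; the geometric cooling schedule, the parameters, and the toy 1-D objective at the end are illustrative, not tuned.

import math
import random

def simulated_annealing(start, energy, neighbour, t0=10.0, cooling=0.995, steps=10_000):
    """Minimise energy(state). neighbour(state) returns a random nearby state.
    Worse moves are accepted with probability exp(-delta / T); T decays geometrically.
    """
    current, best = start, start
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = energy(candidate) - energy(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if energy(current) < energy(best):
                best = current
        t *= cooling
        if t < 1e-6:
            break
    return best

# Toy usage: find the minimum of a bumpy 1-D function.
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
random.seed(1)
print(round(simulated_annealing(0.0, f, step), 2))
# best x found; typically near the global minimum around x ≈ 3.4, but not guaranteed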

Properties of Simulated Annealing Search
• One can prove: if T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1 (however, this may take a VERY long time).
• Widely used in VLSI layout, airline scheduling, etc.

Genetic Algorithms
• Inspired by evolutionary biology, e.g., inheritance.
• Evolve toward better solutions.
• A successor state is generated by combining two parent states.
• Start with k randomly generated states (the population).

Genetic Algorithms
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
• Evaluation function (fitness function): higher values for better states.
• Produce the next generation of states by selection, crossover, and mutation.
• Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

Genetic Algorithms
(Figure: 8-queens population; fitness = number of non-attacking pairs of queens, which determines the probability of being selected for the next generation.)
• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28).
• Selection probabilities: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.

Genetic Algorithms
(Crossover example: parents [2,6,9,3,5 | 0,4,1,7,8] and [3,6,9,7,3 | 8,0,4,7,1] produce offspring [2,6,9,3,5,8,0,4,7,1] and [3,6,9,7,3,0,4,1,7,8].)
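A small genetic-algorithm sketch for 8-queens along the lines of the last two slides: fitness = number of non-attacking pairs (max 28), roughly fitness-proportional selection (a +1 offset avoids a zero total weight), one-point crossover, and random mutation. Population size, mutation rate, and generation limit are arbitrary choices.

import random

# GA sketch for 8-queens: fitness-proportional selection, one-point crossover, mutation.
N, POP, GENERATIONS, MUTATION = 8, 100, 1000, 0.1

def fitness(ind):
    """Number of non-attacking pairs of queens (max 28)."""
    attacking = sum(1
                    for i in range(N) for j in range(i + 1, N)
                    if ind[i] == ind[j] or abs(ind[i] - ind[j]) == j - i)
    return 28 - attacking

def crossover(p1, p2):
    cut = random.randrange(1, N)                 # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind):
    if random.random() < MUTATION:
        col = random.randrange(N)
        ind = ind[:col] + (random.randrange(N),) + ind[col + 1:]
    return ind

random.seed(0)
population = [tuple(random.randrange(N) for _ in range(N)) for _ in range(POP)]
for _ in range(GENERATIONS):
    fits = [fitness(ind) for ind in population]
    if max(fits) == 28:
        break                                    # a full solution is in the population
    parents = random.choices(population, weights=[f + 1 for f in fits], k=2 * POP)
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP)]

best = max(population, key=fitness)
print(best, fitness(best))                       # fitness 28 means a valid 8-queens placement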

Homework 1
• Draw a map of about 20 towns (including Jerusalem), and illustrate the greedy, A*, and RBFS algorithms going from a town to Birzeit University.
• Estimate the straight-line distances between these towns (use Google Earth). Prove (theoretically, and by example) that your estimate is consistent and admissible, and that the obtained path is optimal.
• Overestimate the distances, and prove (theoretically, and by example) that the obtained path to Birzeit is not optimal.
Notes:
– Upload this homework to Ritaj; don't send it to me directly.
– Your solution should be clear and animated (use PowerPoint).
– Each student should select different towns (the whole of Palestine!).
– Don't send me email (only to Ritaj).
– File name: AAI10.Search.RamiHodrob.v52.ppt
– The spelling of town names should be correct (as found on the map or Wikipedia).
– Deadline: 9/2/2011
