bentivegna thesis04

Category: Education

Published on January 10, 2008

Author: Siro

Source: authorstream.com

Learning from Observation Using Primitives
Darrin Bentivegna

Outline
- Motivation
- Test environments
- Learning from observation
- Learning from practice
- Contributions
- Future directions

Motivation
- Reduce the learning time needed by robots.
- Quickly learn skills from observing others.
- Improve performance through practice.
- Adapt to environment changes.
- Create robots that can interact with and learn from humans in a human-like way.

Real World Marble Maze

Real World Air Hockey

Research Strategy
- Domain knowledge: a library of primitives.
- Manually defining primitives is a natural way to specify domain knowledge.
- The focus of this research is on how to use a fixed library of primitives.
(Figure: marble maze primitives, consisting of Roll Off Wall, Roll To Corner, Guide, Roll From Wall, and Leave Corner.)

Primitives in Air Hockey
- Right Bank Shot, Straight Shot, Left Bank Shot, Defend Goal, Static Shot, Idle.

Take Home Message
- Learning using primitives greatly speeds up learning and allows robots to perform more complex tasks.
- Memory-based learning makes learning from observation easy.
- I created a way to do memory-based reinforcement learning. The problem is that there is no fixed set of parameters to adjust; instead, learn by adjusting the distance function.
- Present algorithms that learn from both observation and practice.

Observe Critical Events in Marble Maze
(Figures: raw observation data, and the same data with wall contacts inferred.)

Observe Critical Events in Air Hockey
(Figure: shots made by the human, human paddle movement, and puck movement, plotted as paddle and puck x/y positions.)

Learning From Observation
- Memory-based learner: learn by storing experiences.
- Primitive selection: k-nearest neighbor.
- Sub-goal generation: kernel regression (distance-weighted averaging) based on remembered primitives of the appropriate type.
- Action generation: learned or fixed policy.

Three-Level Structure
- Primitive selection, sub-goal generation, and action generation.

Learning from Observation Framework
(Diagram: learning from observation within the three-level structure of primitive selection, sub-goal generation, and action generation.)

Observe Primitives Performed by a Human
(Figure legend: ◊ Guide, ○ Roll To Corner, □ Roll Off Wall, * Roll From Wall, X Leave Corner.)

Primitive Database
- Create a data point for each observed primitive.
- Each data point records the primitive type performed (TYPE), the state of the environment at the start of the primitive performance, and the state of the environment at the end of the primitive performance.

Marble Maze Example

Primitive Type Selection
- Look up using the environment state.
- Weighted nearest neighbor.
- Many ways to select a primitive type: use the closest point, or have the n nearest points vote, either by highest frequency or weighted by distance from the query point.

Sub-goal Generation
- Locally weighted average over nearby primitives (data points) of the same type.
- Use a kernel function to control the influence of nearby data points.

Action Generation
- Provides the action (motor command) to perform at each time step.
- LWR, neural networks, a physical model, etc.
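The primitive-selection and sub-goal steps above can be summarized in a small sketch. This is a minimal illustration, not the thesis code: the data-point class, the Gaussian kernel, the plain Euclidean distance over the state vector, and all names (PrimitiveDatum, select_primitive_type, generate_subgoal, bandwidth) are assumptions made for the example.

```python
import numpy as np
from collections import Counter

class PrimitiveDatum:
    """One observed primitive: its type and the environment state at its start and end."""
    def __init__(self, ptype, start_state, end_state):
        self.ptype = ptype
        self.start_state = np.asarray(start_state, dtype=float)
        self.end_state = np.asarray(end_state, dtype=float)

def select_primitive_type(query, database, k=5):
    """Weighted k-nearest-neighbor vote over the observed primitives."""
    dists = np.array([np.linalg.norm(query - d.start_state) for d in database])
    nearest = np.argsort(dists)[:k]
    votes = Counter()
    for i in nearest:
        votes[database[i].ptype] += 1.0 / (dists[i] + 1e-6)  # closer points count more
    return votes.most_common(1)[0][0]

def generate_subgoal(query, database, ptype, bandwidth=1.0):
    """Kernel regression (distance-weighted average) over end states of one primitive type."""
    same = [d for d in database if d.ptype == ptype]
    dists = np.array([np.linalg.norm(query - d.start_state) for d in same])
    weights = np.exp(-(dists / bandwidth) ** 2)   # Gaussian kernel controls each point's influence
    ends = np.stack([d.end_state for d in same])
    return weights @ ends / weights.sum()         # sub-goal = weighted average end state
```

A query state s would then be handled as ptype = select_primitive_type(s, db) followed by goal = generate_subgoal(s, db, ptype), with the action-generation module driving the system toward that sub-goal.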
Creating an Action Generation Module (Roll to Corner)
- Record at each time step, from the beginning to the end of the primitive: the environment state, the actions taken, and the end state.

Transform to a Local Coordinate Frame
- Convert global information into primitive-specific local information (e.g., the distance to the end and a reference point).

Learning the Maze Task from Only Observation

Related Research: Primitive Recognition
- Survey of research in human motion analysis and recognizing human activities from image sequences: Aggarwal and Cai.
- Recognition over time: HMMs, Brand, Oliver, and Pentland.
- Template matching: Davis and Bobick.
- Discovering primitives: Fod, Mataric, and Jenkins.

Related Research: Primitive Selection
- Predefined sequence: virtual characters (Hodgins et al., Faloutsos et al., and Mataric et al.); mobile robots (Balch et al. and Arkin et al.).
- Learn from observation: assembly (Kuniyoshi, Inaba, Inoue, and Kang).
- Use a planning system: assembly (Thomas and Wahl); RL (Ryan and Reid).

Related Research: Primitive Execution
- Predefined execution policy: virtual characters (Mataric et al. and Hodgins et al.); mobile robots (Brooks et al. and Arkin).
- Learn while operating in the environment: mobile robots (Mahadevan and Connell); RL (Kaelbling, Dietterich, and Sutton et al.).
- Learn from observation: mobile robots (Larson and Voyles, Hugues and Drogoul, Grudic and Lawrence); high-DOF robots (Aboaf et al., Atkeson, and Schaal).

Review
(Diagram: the learning-from-observation framework of primitive selection, sub-goal generation, and action generation.)

Using Only Observed Data
- Tries to mimic the teacher.
- Cannot always perform primitives as well as the teacher.
- Sometimes selects the wrong primitive type for the observed state.
- Does not know what to do in states it has not observed.
- Has no way to know it should try something different.
- Solution: learning from practice.

Improving Primitive Selection and Sub-goal Generation Through Practice
- Need task specification information to create a reward function.
- Learn by adjusting the distance to the query: scale the distance function by the value of using a data point.
- f(data point location, query location) is related to the Q value: 1/Q or exp(-Q).
- Associate scale values with each data point; the scale values must be stored and learned (a small sketch follows below).

Store Values in a Function Approximator
- Look-up table: fixed size.
- Locally Weighted Projection Regression (LWPR), Schaal et al.: create a model for each data point, indexed by the difference between the query point and the data point's state (the delta-state).

Learn Values Using a Reinforcement Learning Strategy
- State: the delta-state.
- Action: using this data point.
- Reward assignment: positive for making progress through the maze; negative for falling into a hole, going backwards through the maze, or taking time performing the primitive.
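One way to read the distance-scaling idea above is as a per-data-point multiplier on the nearest-neighbor distance, learned from reward. The sketch below is an assumption-laden illustration, not the thesis algorithm as implemented: it keeps a plain table of one value per data point (rather than an LWPR model indexed by the delta-state), uses the exp(-Q) mapping mentioned above, and applies a simple incremental value update; the class name, method names, and learning rate are invented for the example.

```python
import numpy as np

class ScaledMemory:
    """Nearest-neighbor memory whose distances are scaled by a learned per-data-point value."""
    def __init__(self, start_states, lr=0.1):
        self.states = np.asarray(start_states, dtype=float)  # one row per observed primitive
        self.q = np.zeros(len(self.states))                  # learned value of using each data point
        self.lr = lr

    def scaled_distances(self, query):
        raw = np.linalg.norm(self.states - query, axis=1)
        return raw * np.exp(-self.q)   # high-value points look closer, low-value points look farther

    def choose(self, query):
        """Pick the data point that is nearest under the scaled distance."""
        return int(np.argmin(self.scaled_distances(query)))

    def update(self, index, reward):
        """Incrementally update the value of having used data point `index`."""
        self.q[index] += self.lr * (reward - self.q[index])
```

After each primitive is performed in practice, update would be called with the reward described above (progress is positive; falling into a hole, going backwards, or wasting time is negative), so data points that lead to poor outcomes gradually drop out of the nearest-neighbor lookup.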
Learning the Value of Choosing a Data Point (Simulation)
(Figure: computed scale values, from bad to good, for two marble positions with the incoming velocity vector shown, when the LWPR model associated with the observed Roll Off Wall primitive is queried; the testing area is at (12.9, 18.8).)

Maze Learning from Practice
(Plots: cumulative failures per meter for the look-up table, LWPR, and observation-only learners, in the real world and in simulation.)

Learning New Strategies

Learning Action Generation from Practice

Improving Action Generation Through Practice
- The environment changes over time.
- Need to compensate for structural modeling error.
- Cannot learn everything from only observing others.

Knowledge for Making a Hit
- Knowledge needed after the hit location has been determined: puck movement, puck-paddle collision, paddle placement, and paddle movement timing.
(Figure: hit location, target location, path of the incoming puck, hit line, target line, and absolute post-hit velocity.)

Results of Learning Straight Shots (Simulation)
- Observed 44 straight shots made by the human.
- Running average of 5 shots.
- Too much noise in hardware sensing.

Robot Model (Real World)
(Diagram: puck motion, impact, and robot blocks linking the target location, outgoing puck velocity, incoming paddle velocity, and robot trajectory.)

Obtaining Proper Robot Movement
- Six preset robot configurations; interpolate between the four surrounding configurations.
- Paddle command: the desired end location and time of the trajectory, (x, y, t).
- The trajectory follows a fifth-order polynomial with zero start and end velocity and acceleration (see the sketch below).
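A fifth-order polynomial with zero velocity and acceleration at both ends is fully determined by the start point, the end point, and the duration. The minimal sketch below shows one way to generate such a paddle trajectory; it is an illustration under those stated assumptions, not the robot's actual controller, and the function name, sampling interval, and example numbers are invented.

```python
import numpy as np

def quintic_trajectory(p0, p1, T, dt=0.01):
    """Positions along a fifth-order (quintic) polynomial from p0 to p1 over duration T,
    with zero velocity and zero acceleration at both the start and the end."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.arange(0.0, T + dt, dt)
    s = t / T
    # Quintic blend: s(0)=0, s(1)=1, and the first and second derivatives vanish at both ends.
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return p0 + np.outer(blend, p1 - p0)   # shape: (len(t), dim)

# Example paddle command (x, y, t): move to (0.3, 0.1) meters in 0.4 seconds.
path = quintic_trajectory([0.0, 0.0], [0.3, 0.1], T=0.4)
```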
Robot Model
(Diagram: from the desired state of the puck at hit time and the starting location, compute the movement command (x, y, t), apply a pre-set time delay, and generate the robot trajectory.)

Robot Movement Errors
- Movement accuracy is determined by many factors: the speed of the movement, friction between the paddle and the board, the hydraulic pressure applied to the robot, and whether it is operating within the designed performance parameters.

Robot Model
- Learn to properly place the paddle and learn the timing of the paddle.
- The robot observes its own actions: the actual hit point (the highest-velocity point) and the time from when the command is given to when the paddle is observed at the hit position.

Improving the Robot Model
(Diagram: the pipeline from the desired state of the puck at hit time to the robot trajectory, updated with the learned placement and timing information.)

Using the Improved Robot Model
(Plot: the desired trajectory and the observed path of the paddle, with the starting location, the desired hit location, and the location of highest paddle velocity marked.)

Real-World Air Hockey

Major Contributions
- A framework was created as a tool for research in learning from observation using primitives. Its flexible structure allows the use of various learning algorithms, and it can also learn from practice.
- Presented learning methods that learn quickly from observed information and also have the ability to increase performance through practice.
- Created a unique algorithm that gives a robot the ability to learn the effectiveness of data points in a database and then use that information to change its behavior as it operates in the environment.
- Presented a method of breaking the learning problem into small learning modules; individual modules have more opportunities to learn and generalize.

Some Future Directions
- Automatically defining primitive types.
- Explore how to represent learned information so it can be used in other tasks/environments.
- Can robots learn about the world from playing these games?
- Explore other ways to select primitives and sub-goals.
- Use the observed information to create a planner.
- Investigate methods of exploration at primitive selection and sub-goal generation.
- Simultaneously learn primitive selection and action generation.
