BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210217T150000
DTEND;TZID=America/New_York:20210217T160000
DTSTAMP:20260407T035534Z
CREATED:20210212T152152Z
LAST-MODIFIED:20210212T152152Z
UID:4240-1613574000-1613577600@seasevents.nmsdev7.com
SUMMARY:Spring 2021 GRASP SFI: “Model-Based Deep RL for Robotics”
DESCRIPTION:Abstract: Deep learning has shown promising results in robotics\, but we are still far from having intelligent systems that can operate in the unstructured settings of the real world\, where disturbances\, variations\, and unobserved factors lead to a dynamic environment. \nIn the first part of the talk\, I will show that model-based deep RL can indeed allow for efficient skill acquisition\, as well as the ability to repurpose models to solve a variety of tasks. I will then scale up these approaches to enable locomotion with a 6-DoF legged robot on varying terrains in the real world\, as well as dexterous manipulation with a 24-DoF anthropomorphic hand in the real world. \nIn the second part of the talk\, I will focus on the inevitable mismatch between an agent’s training conditions and the test conditions in which it may actually be deployed\, thus illuminating the need for adaptive systems. Inspired by the ability of humans and animals to adapt quickly in the face of unexpected changes\, I will present a meta-learning algorithm within this model-based RL framework to enable online adaptation of large\, high-capacity models using only small amounts of data from the new task. These fast adaptation capabilities are seen in both simulation and the real world\, with experiments such as a 6-legged robot adapting online to an unexpected payload or suddenly losing a leg. Finally\, I will further extend the capabilities of our robotic systems by enabling the agents to reason directly from raw image observations. Bridging the benefits of representation learning techniques with the adaptation capabilities of meta-RL\, I will present a unified framework for effective meta-RL from images. With robotic arms in the real world that learn peg insertion and Ethernet cable insertion to varying targets\, I will show the fast acquisition of new skills\, directly from raw image observations in the real world. 
 \nI conclude that model-based deep RL provides a framework for making sense of the world\, thus allowing for reasoning and adaptation capabilities that are necessary for successful operation in the dynamic settings of the real world. \nJoin the Zoom Meeting here
URL:https://seasevents.nmsdev7.com/event/spring-2021-grasp-sfi-model-based-deep-rl-for-robotics/
CATEGORIES:Seminar
ORGANIZER;CN="General Robotics, Automation, Sensing and Perception (GRASP) Lab":MAILTO:grasplab@seas.upenn.edu
END:VEVENT
END:VCALENDAR