BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20211201T150000
DTEND;TZID=America/New_York:20211201T160000
DTSTAMP:20260406T084459Z
CREATED:20211129T201637Z
LAST-MODIFIED:20211129T201637Z
UID:5863-1638370800-1638374400@seasevents.nmsdev7.com
SUMMARY:Fall 2021 GRASP SFI: Lucas Manuelli\, NVIDIA\, “Robot Manipulation with Learned Representations”
DESCRIPTION:We would like to have robots that can perform useful manipulation tasks in real-world environments. This requires robots that can perceive the world with both precision and semantic understanding\, methods for communicating desired tasks to these systems\, and closed-loop visual feedback controllers for robustly executing manipulation tasks. This is hard to achieve with previous methods: prior work hasn’t enabled robots to densely understand the visual world with sufficient precision to perform robotic manipulation\, nor endowed them with the semantic understanding needed to perform tasks with novel objects. This limitation arises partly from the object representations that have been used\, the challenge of extracting these representations from the available sensor data in real-world settings\, and the manner in which tasks have been specified. The talk has two parts. In the first\, I will focus on object-centric representations and present a family of approaches that leverage self-supervision\, both in the visual domain and for learning physical dynamics\, to enable robots to perform manipulation tasks. Specifically\, we (i) demonstrate the novel application of dense visual object descriptors to robotic manipulation and provide a fully self-supervised robot system to acquire them\, (ii) introduce the concept of category-level manipulation tasks and develop a novel object representation based on semantic 3D keypoints\, along with a task specification that uses these keypoints to define the task for all objects of a category\, including novel instances\, (iii) utilize our dense visual object descriptors to quickly learn new manipulation skills through imitation\, and (iv) use our visual object representations to learn data-driven models that can perform closed-loop feedback control in manipulation tasks.\n\nThe second part of the talk discusses an alternative action-centric approach that enables the incorporation of language instructions in our manipulation pipelines.
URL:https://seasevents.nmsdev7.com/event/fall-2021-grasp-sfi-lucas-manuelli-nvidia-robot-manipulation-with-learned-representations/
LOCATION:Levine 307\, 3330 Walnut Street\, Philadelphia\, PA\, 19104\, United States
ORGANIZER;CN="General Robotics, Automation, Sensing and Perception (GRASP) Lab":MAILTO:grasplab@seas.upenn.edu
END:VEVENT
END:VCALENDAR