BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220415T103000
DTEND;TZID=America/New_York:20220415T114500
DTSTAMP:20260406T013345Z
CREATED:20220328T200141Z
LAST-MODIFIED:20220328T200141Z
UID:6619-1650018600-1650023100@seasevents.nmsdev7.com
SUMMARY:GRASP on Robotics: Vincent Sitzmann\, Massachusetts Institute
  of Technology\, “Self-supervised Scene Representation Learning for
  Robotics”
DESCRIPTION:Given only a single picture\, people are capable of
  inferring a mental representation that encodes rich information
  about the underlying 3D scene. We acquire this skill not through
  massive labeled datasets of 3D scenes\, but through self-supervised
  observation and interaction. Building machines that can infer
  similarly rich neural scene representations is critical if they are
  to one day parallel people’s ability to understand\, navigate\, and
  interact with their surroundings. In this talk\, I will demonstrate
  how we can equip neural networks with inductive biases that enable
  them to learn 3D geometry\, appearance\, and even semantic
  information\, self-supervised only from posed images. I will show
  how this approach unlocks the learning of priors\, enabling 3D
  reconstruction from only a single posed 2D image. I will then talk
  about a recent application of self-supervised scene representation
  learning in robotic manipulation\, where it enables us to learn to
  manipulate classes of objects in unseen poses from only a handful
  of human demonstrations\, as well as the application of neural
  rendering to learn latent spaces amenable to control. I will then
  discuss recent work on learning the neural rendering operator to
  make rendering and training fast\, and how this speed-up enables us
  to learn object-centric neural scene representations\, learning to
  decompose 3D scenes into objects\, given only images. Finally\, I
  will discuss how neural scene representations may offer a new angle
  to tackle challenges in robotics.
URL:https://seasevents.nmsdev7.com/event/grasp-on-robotics-vincent-s
 itzmann-massachusetts-institute-of-technology-self-supervised-scene
 -representation-learning-for-robotics/
LOCATION:Wu and Chen Auditorium (Room 101)\, Levine Hall\, 3330
  Walnut Street\, Philadelphia\, PA\, 19104\, United States
ORGANIZER;CN="General Robotics, Automation, Sensing and Perception
  (GRASP) Lab":MAILTO:grasplab@seas.upenn.edu
END:VEVENT
END:VCALENDAR