BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231206T150000
DTEND;TZID=America/New_York:20231206T160000
DTSTAMP:20260403T223005Z
CREATED:20231201T154836Z
LAST-MODIFIED:20231201T154836Z
UID:10261-1701874800-1701878400@seasevents.nmsdev7.com
SUMMARY:Fall 2023 GRASP SFI: Ge Yang\, NSF Institute of AI and Fundamental Interactions and MIT CSAIL\, "Feature Fields for Robotics: Language-Grounded Perception and Mapping at Multiple Scales"
DESCRIPTION:This is a hybrid event with in-person attendance in Levine 307 and virtual attendance on Zoom. \nABSTRACT\nWhat kind of representation do robots need in order to be as generally capable as humans in handling unseen scenarios? Recent work in vision and vision-language foundation models has become quite good at telling what is in a scene\, but they do not capture the geometry needed for handling physical contact. State-of-the-art methods in inverse graphics capture detailed 3D geometry\, but they are missing the semantics. In this talk\, I will present a way to combine accurate 3D geometry with rich semantics into a single representation format called distilled feature fields and ways to use this representation for perception during few-shot manipulation with a robotic arm. Using features sourced from the vision-language model\, CLIP\, our method allows the user to designate novel objects for manipulation via free-text natural language\, and can generalize to unseen expressions and novel categories of objects. I will also present ways to scale feature fields up for building maps and the dual purpose of building realistic physics simulators for reinforcement learning. Finally\, I will present our recent effort in building a unified representation for semantics\, geometry\, and physics called Feature Splatting.
URL:https://seasevents.nmsdev7.com/event/fall-2023-grasp-sfi-ge-yang/
LOCATION:Levine 307\, 3330 Walnut Street\, Philadelphia\, PA\, 19104\, United States
CATEGORIES:Seminar
ORGANIZER;CN="General Robotics, Automation, Sensing and Perception (GRASP) Lab":MAILTO:grasplab@seas.upenn.edu
END:VEVENT
END:VCALENDAR