BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20221107T153000
DTEND;TZID=America/New_York:20221107T163000
DTSTAMP:20260405T121206Z
CREATED:20221027T124316Z
LAST-MODIFIED:20221027T124316Z
UID:7765-1667835000-1667838600@seasevents.nmsdev7.com
SUMMARY:ESE Fall Colloquium - "On the Principles of Parsimony and Self-Consistency: Structured Compressive Closed-Loop Transcription"
DESCRIPTION:Ten years into the revival of deep networks and artificial intelligence\, we propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of intelligence in general. We introduce two fundamental principles\, Parsimony and Self-consistency\, that address two fundamental questions regarding Intelligence: what to learn and how to learn\, respectively. We argue that these two principles can be realized in entirely measurable and computable ways for an important family of structures and models\, known as a linear discriminative representation (LDR). The two principles naturally lead to an effective and efficient computational framework\, known as a compressive closed-loop transcription\, that unifies and explains the evolution of modern deep networks and modern practices of artificial intelligence. Within this framework\, we will see how fundamental ideas in information theory\, control theory\, game theory\, sparse coding\, and optimization are closely integrated in such a closed-loop system\, all as necessary ingredients to learn autonomously and correctly. We demonstrate the power of this framework for learning discriminative\, generative\, and autoencoding models for large-scale real-world visual data\, with entirely white-box deep networks\, under all settings (supervised\, incremental\, and unsupervised). We believe that these two principles are the cornerstones for the emergence of intelligence\, artificial or natural\, and the compressive closed-loop transcription is a universal learning engine that serves as the basic learning units for all autonomous intelligent systems\, including the brain. \nRelated papers can be found at: https://arxiv.org/abs/2207.04630 and https://www.mdpi.com/1099-4300/24/4/456/htm
URL:https://seasevents.nmsdev7.com/event/ese-fall-colloquium-on-the-principles-of-parsimony-and-self-consistency-structured-compressive-closed-loop-transcription/
LOCATION:Wu and Chen Auditorium (Room 101)\, Levine Hall\, 3330 Walnut Street\, Philadelphia\, PA\, 19104\, United States
CATEGORIES:Colloquium
ORGANIZER;CN="Electrical and Systems Engineering":MAILTO:eseevents@seas.upenn.edu
END:VEVENT
END:VCALENDAR