ESE Fall Colloquium – “On the Principles of Parsimony and Self-Consistency: Structured Compressive Closed-Loop Transcription”
November 7, 2022 at 3:30 PM - 4:30 PM
Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that situates deep networks within the bigger picture of intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, which address two basic questions about intelligence: what to learn and how to learn, respectively. We argue that these two principles can be realized in entirely measurable and computable ways for an important family of structures and models, known as linear discriminative representations (LDRs). The two principles naturally lead to an effective and efficient computational framework, known as compressive closed-loop transcription, that unifies and explains the evolution of modern deep networks and modern practices of artificial intelligence. Within this framework, we will see how fundamental ideas from information theory, control theory, game theory, sparse coding, and optimization are closely integrated in such a closed-loop system, all as necessary ingredients for learning autonomously and correctly. We demonstrate the power of this framework by learning discriminative, generative, and autoencoding models for large-scale real-world visual data, with entirely white-box deep networks, under all settings (supervised, incremental, and unsupervised). We believe that these two principles are the cornerstones for the emergence of intelligence, artificial or natural, and that compressive closed-loop transcription is a universal learning engine serving as the basic learning unit for all autonomous intelligent systems, including the brain.
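For readers wondering how parsimony can be made "measurable and computable," the related papers linked below quantify it through the coding rate reduction of the learned representation. The following is a sketch of that objective in the notation of those papers (Z for the feature matrix, Π for class-membership matrices); it is included here only as orientation for the talk:

```latex
% Coding rate of features Z = [z_1, ..., z_n] in R^{d x n},
% up to distortion epsilon (a lossy-coding length estimate):
R(Z) = \tfrac{1}{2}\log\det\!\Big(I + \tfrac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big)

% Average coding rate when Z is partitioned into k classes by
% diagonal membership matrices \Pi = \{\Pi_j\}_{j=1}^{k}:
R_c(Z,\Pi) = \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2n}
  \log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^{2}}\, Z \Pi_j Z^{\top}\Big)

% Parsimony: maximize the rate reduction -- expand the volume of
% the whole representation while compressing each class:
\Delta R(Z,\Pi) = R(Z) - R_c(Z,\Pi)
```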
Related papers can be found at: https://arxiv.org/abs/2207.04630 and https://www.mdpi.com/1099-4300/24/4/456/htm

