ASSET Seminar: “Formal Methods for Language Model Systems”
January 14 at 12:00 PM - 1:15 PM
Organizer
AI-enabled Systems: Safe, Explainable, and Trustworthy (ASSET) Center
Email: asset-info@seas.upenn.edu
Formal methods are often dismissed as too rigid, complex, or unscalable for frontier language model systems (e.g., LLMs, VLMs, agentic systems). In this talk, I will challenge this view with both theoretical insights and empirical evidence across a range of domains, including chatbots, autonomous driving, mathematical reasoning, code generation, and agentic AI.
I will present a new set of efficient formal frameworks for LLMs that:
- Specify and verify safety properties (e.g., secure code generation, catastrophic risk), yielding stronger guarantees than standard evaluation methods such as benchmarks or red teaming.
- Guide generation with semantic guardrails so that outputs respect formal constraints, substantially improving both reasoning performance and safety.
- Train models that are more performant and safer, and synthesize agents that provably adhere to formally specified constraints (e.g., privacy, resource consumption).
Together, these advances demonstrate that formal methods provide a principled foundation for improving the utility, safety, and efficiency of frontier language model systems.