
CIS Seminar: "Specializing LLMs for Reliability"

February 25, 2025 at 3:30 PM - 4:30 PM
Details
Date: February 25, 2025
Time: 3:30 PM - 4:30 PM
  • Organizer
    Computer and Information Science
    Phone: 215-898-8560
  • Venue
    Amy Gutmann Hall, Room 414
    3333 Chestnut Street
    Philadelphia, PA 19104

    Large language models (LLMs) have advanced the frontiers of AI reasoning: they can synthesize information from multiple sources, derive new conclusions, and explain those conclusions to their users. However, LLMs do not do this reliably. They hallucinate facts, convincingly state incorrect deductions, and exhibit logical fallacies like confirmation bias. In this talk, I will describe my lab's work on making LLM systems reliable by introspecting their behavior. First, I will demonstrate that a better understanding of LLMs helps us train them to be more reliable reasoners. Our work shows that model interpretation techniques can advance training methodology and dataset curation for reasoning models. Second, I will argue that automating fine-grained evaluation of LLM output provides a level of understanding necessary for further progress. I will describe the ingredients of effective automated evaluators and a state-of-the-art factuality evaluation system, MiniCheck, showing that analyzing the nature of hallucinations can help reduce them. Finally, I will describe how a deeper understanding of LLMs will let us tackle their most fundamental limitations, such as their inconsistency when given different inputs. I will propose how these pieces might soon be combined to form reliable AI systems.