BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231110T100000
DTEND;TZID=America/New_York:20231110T110000
DTSTAMP:20260404T002541Z
CREATED:20231105T175238Z
LAST-MODIFIED:20231105T175238Z
UID:10086-1699610400-1699614000@seasevents.nmsdev7.com
SUMMARY:PRECISE Seminar: Evaluation and calibration of AI models with uncertain ground truth
DESCRIPTION:For safety\, AI systems in health undergo thorough evaluations before deployment\, validating their predictions against a ground truth that is assumed certain. However\, this assumption often does not hold and the ground truth may be uncertain. This uncertainty is largely ignored in the standard evaluation of AI models\, yet it can have severe consequences such as overestimating future performance. To avoid this\, we measure the effects of ground truth uncertainty\, which we assume decomposes into two main components: annotation uncertainty\, which stems from the lack of reliable annotations\, and inherent uncertainty due to limited observational information. This ground truth uncertainty is ignored when the ground truth is estimated by deterministically aggregating annotations\, e.g.\, by majority voting or averaging. In contrast\, we propose a framework where aggregation is done using a statistical model. Specifically\, we frame aggregation of annotations as posterior inference of so-called plausibilities\, representing distributions over classes in a classification setting\, subject to a hyper-parameter encoding annotator reliability. Based on this model\, we propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation. We present a case study applying our framework to skin condition classification from images\, where annotations are provided in the form of differential diagnoses. The deterministic adjudication process from previous work\, called inverse rank normalization (IRN)\, ignores ground truth uncertainty in evaluation. Instead\, we present two alternative statistical models: a probabilistic version of IRN and a Plackett-Luce-based model. We find that a large portion of the dataset exhibits significant ground truth uncertainty and that standard IRN-based evaluation severely overestimates performance without providing uncertainty estimates. \nLinks: https://arxiv.org/abs/2307.09302 https://arxiv.org/abs/2307.02191
URL:https://seasevents.nmsdev7.com/event/precise-seminar-evaluation-and-calibration-of-ai-models-with-uncertain-ground-truth/
LOCATION:https://upenn.zoom.us/j/96715197752
ORGANIZER;CN="PRECISE":MAILTO:wng@cis.upenn.edu
END:VEVENT
END:VCALENDAR