BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Telerik Inc.//Sitefinity CMS 14.4//EN
BEGIN:VTIMEZONE
TZID:Central Standard Time
BEGIN:STANDARD
DTSTART:20251102T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYHOUR=2;BYMINUTE=0;BYMONTH=11
TZNAME:Central Standard Time
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20250301T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYHOUR=2;BYMINUTE=0;BYMONTH=3
TZNAME:Central Daylight Time
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DESCRIPTION:Large language models (LLMs) can hold remarkably fluent conversations—but they also make things up. These “hallucinations” (or more accurately\, confabulations) are one of the biggest challenges to building trustworthy AI systems. In this talk\, Ash will explore why these errors happen\, how we can spot them\, and what can be done to reduce them. Ash will introduce VISTA Score\, a new method for checking factual consistency across multi-turn conversations\, and show how it outperforms existing tools in identifying misleading claims. She will also share practical strategies—from better prompts and retrieval methods to fine-tuning with both human and synthetic data—that can make smaller models nearly as reliable as their larger counterparts. The goal: to understand not just how these systems go wrong\, but how we can make them more transparent\, responsible\, and aligned with the truth. Speaker: Ash Lewis is a computational linguist and Ph.D. candidate at The Ohio State University studying how to make AI systems more reliable and less likely to “make things up.” Her research explores why large language models hallucinate—and how to detect\, measure\, and reduce those errors in dialogue settings. Ash’s work bridges computational modeling and linguistic analysis\, developing lightweight\, trustworthy AI tools for applications like virtual assistants\, education\, and question answering. She is especially interested in how smaller\, well-trained models can rival massive ones in both accuracy and transparency. Monday\, February 9\, 2026 — 1pm EST | 12pm CST | 10am PST. REGISTER HERE
DTEND:20260209T190000Z
DTSTAMP:20260414T073452Z
DTSTART:20260209T180000Z
LOCATION:
SEQUENCE:0
SUMMARY:Hallucination in the Wild: A Field Guide for LLM Users
UID:RFCALITEM639117308926087952
X-ALT-DESC;FMTTYPE=text/html:
Large language models (LLMs) can hold remarkably fluent conversations—but they also make things up. These “hallucinations” (or more accurately\, confabulations) are one of the biggest challenges to building trustworthy AI systems. In this talk\, Ash will explore why these errors happen\, how we can spot them\, and what can be done to reduce them. Ash will introduce VISTA Score\, a new method for checking factual consistency across multi-turn conversations\, and show how it outperforms existing tools in identifying misleading claims. She will also share practical strategies—from better prompts and retrieval methods to fine-tuning with both human and synthetic data—that can make smaller models nearly as reliable as their larger counterparts. The goal: to understand not just how these systems go wrong\, but how we can make them more transparent\, responsible\, and aligned with the truth.
Speaker: Ash Lewis is a computational linguist and Ph.D. candidate at The Ohio State University studying how to make AI systems more reliable and less likely to “make things up.” Her research explores why large language models hallucinate—and how to detect\, measure\, and reduce those errors in dialogue settings. Ash’s work bridges computational modeling and linguistic analysis\, developing lightweight\, trustworthy AI tools for applications like virtual assistants\, education\, and question answering. She is especially interested in how smaller\, well-trained models can rival massive ones in both accuracy and transparency.
Monday\, February 9\, 2026 — 1pm EST | 12pm CST | 10am PST
END:VEVENT
END:VCALENDAR