Events

Hallucination in the Wild: A Field Guide for LLM Users
Large language models (LLMs) can hold remarkably fluent conversations, but they also make things up. These “hallucinations” (or, more accurately, confabulations) are one of the biggest challenges to building trustworthy AI systems. In this talk, Ash will explore why these errors happen, how we can spot them, and what can be done to reduce them. Ash will introduce VISTA Score, a new method for checking factual consistency across multi-turn conversations, and show how it outperforms existing tools in identifying misleading claims. She will also share practical strategies, from better prompts and retrieval methods to fine-tuning with both human and synthetic data, that can make smaller models nearly as reliable as their larger counterparts. The goal: to understand not just how these systems go wrong, but how we can make them more transparent, responsible, and aligned with the truth.
Speaker:
Ash Lewis is a computational linguist and Ph.D. candidate at The Ohio State University studying how to make AI systems more reliable and less likely to “make things up.” Her research explores why large language models hallucinate, and how to detect, measure, and reduce those errors in dialogue settings. Ash’s work bridges computational modeling and linguistic analysis, developing lightweight, trustworthy AI tools for applications like virtual assistants, education, and question answering. She is especially interested in how smaller, well-trained models can rival massive ones in both accuracy and transparency.
Monday, February 9, 2026, at 1pm EST | 12pm CST | 10am PST
