Pepa Atanasova
Name
Reality Check of LLM-driven Fact Verification: Retrieving and Utilising Evidence in the Wild
Description

Retrieval-augmented generation (RAG) is widely used to enhance AI's knowledge with new or changing information, but real-world complexities challenge its reliability. This talk explores how Large Language Models (LLMs) process retrieved evidence, why they often fail to utilise it correctly, and how misinformation risks emerge from unreliable or insufficient context. Based on recent research, I will highlight key pitfalls in LLM-driven fact verification and discuss strategies for improving fact-checking robustness.

Date & Time
Wednesday, 5 November 2025, 14:45–15:15
Hall
Sal 4
Themes
AI & Emerging Technologies

Slides from the seminar
Slides from the seminar will be visible on this page if the speaker in question chooses to share them. Please note that you must be logged in to view them.