Name
Reality Check of LLM-driven Fact Verification: Retrieving and Utilising Evidence in the Wild
Description
Retrieval-augmented generation (RAG) is widely used to enhance AI’s knowledge with new or changing information, but real-world complexities challenge its reliability. This talk explores how Large Language Models (LLMs) process retrieved evidence, why they often fail to utilise it correctly, and how misinformation risks emerge from unreliable or insufficient context. Drawing on recent research, I will highlight key pitfalls in LLM-driven fact verification and discuss strategies for improving the robustness of fact checking.
Speakers
Pepa Atanasova - Assistant Professor (Tenure Track) - University of Copenhagen
Kasper Lindskow - Head of AI - JP/Politikens Media Group
Date & Time
Wednesday, November 5, 2025, 2:45 PM - 3:15 PM
Theater
Theater 4
DTS Tracks 2025
AI & Emerging Technologies
                                
Slides from the presentation
Slides from the presentation will be visible on this site if the speaker chooses to share them.
Please note that you need to be signed in to see them.