Name
Large Language Model Security in a Multilingual World
Description

Large Language Models (LLMs) are spreading like wildfire and are being pushed into deployment across society. While the potential for efficiency gains is clear, the risks involved in adopting this new technology are largely overlooked. Unlike traditional software, LLMs process natural-language input from users, which opens a vast array of potential attack vectors, risking data leakage, user manipulation, and even medical misdiagnosis. Such vulnerabilities typically cannot be reduced to a single line of code to be fixed; they arise from a complex interaction between AI architectures, training data, prompts, and manipulation thereof. We will discuss some of the security risks in LLMs, touching upon the importance of a multilingual perspective and providing an overview of proposed ethical guidelines for LLM security practitioners along the way.

Moderator
Jiri Srba - Aalborg University
Date & Time
Thursday, October 31, 2024, 12:30 PM - 1:00 PM
Hall
Hall 6

Slides from the seminar
Slides from the seminar will be visible on this page if the speaker in question wishes to share them. Please note that you must be logged in to view them.