Name
Large Language Model Security in a Multilingual World
Description

Large Language Models (LLMs) are spreading rapidly and being pushed into deployment across society. While the potential for efficiency gains is clear, the risks of adopting this new technology are largely overlooked. Unlike traditional software, LLMs process natural language input from users, which opens a vast array of potential attack vectors, risking data leakage, user manipulation, and even medical misdiagnosis. These vulnerabilities typically cannot be reduced to a single line of code to be fixed; they arise from complex interactions between AI architectures, training data, prompts, and manipulations thereof. We will discuss some of the security risks in LLMs, highlighting the importance of a multilingual perspective, and provide an overview of proposed ethical guidelines for LLM security practitioners along the way.

Date & Time
Thursday, October 31, 2024, 12:30 PM - 1:00 PM
Theater
Theater 6