Large Language Models (LLMs) are spreading like wildfire and are being pushed into deployment across society. While the potential efficiency gains are clear, the risks involved in adopting this new technology are largely overlooked. Unlike traditional software, LLMs process natural language input from users, which opens a vast array of potential attack vectors, risking data leakage, user manipulation, and even medical misdiagnosis. Such vulnerabilities typically cannot be reduced to a single line of code to be fixed; instead, they arise from a complex interaction between AI architectures, training data, prompts, and manipulation thereof. We will discuss some of the security risks in LLMs, touching upon the importance of a multilingual perspective and providing an overview of proposed ethical guidelines for LLM security practitioners along the way.
Johannes Bjerva - Associate Professor, Data, Knowledge, and Web Engineering - Aalborg Universitet