Large language model (LLM) agents represent the next generation of artificial intelligence (AI) systems, integrating LLMs with external tools and memory components to execute complex reasoning and decision-making tasks. These agents are increasingly deployed in domains such as healthcare, finance, cybersecurity, and autonomous vehicles, where they interact dynamically with external knowledge sources, retain memory across sessions, and autonomously generate responses and actions. While their adoption brings transformative benefits, it also exposes them to new and critical security risks that remain poorly understood. Among these risks, memory poisoning attacks pose a severe and immediate threat to the reliability and security of LLM agents. These attacks exploit the agent's ability to store, retrieve, and adapt knowledge over time, leading to biased decisions, manipulation of real-time behavior, security breaches, and system-wide failures. The goal of this project is to develop an information-theoretic foundation for understanding and mitigating memory poisoning in LLM agents.
This position, funded by the Swedish Research Council (VR), offers an exciting opportunity to work at the forefront of AI security, tackling some of the most pressing challenges in the field.
You will become a key member of a dynamic and ambitious research team across Linköping University (Asst. Prof. Khac-Hoang Ngo), Chalmers University of Technology (Prof. Alexandre Graell i Amat), and Recorded Future (Dr. Johan Östman). Our team currently consists of 5 PhD students and 2 postdocs. You will be hosted at Linköping University, and research visits to Chalmers and Recorded Future will be organized throughout the Ph.D.
Application deadline: January 12, 2026
Full information and a link to apply: https://liu.se/en/work-at-liu/vacancies/27883