Qinyuan Wu
qwu [at] mpi-sws [dot] org
Campus E1 5
66125, Saarbruecken, Germany
I am a third-year PhD student at CS@Max Planck and the Max Planck Institute for Software Systems (MPI-SWS), advised by Krishna Gummadi. I am also fortunate to closely collaborate with and receive guidance from Evimaria Terzi (Boston University), Mariya Toneva (MPI-SWS), and Muhammad Bilal Zafar (Ruhr University Bochum) (ordered alphabetically by last name). Before joining MPI-SWS, I received my bachelor's degree in mathematics and physics from the University of Electronic Science and Technology of China (UESTC).
I investigate how large language models (LLMs) internalize, represent, and utilize knowledge—seeking to enhance their reliability, interpretability, and safety. My work centers on understanding the interplay between internal learning (from training) and external adaptation (via prompts, retrieval, or tool use).
Ultimately, I aim to understand and improve the loop between how LLMs learn, remember, refer, and act—toward more trustworthy and cognitively grounded AI systems.
Beyond core research, I collaborate on:
- Privacy and security in LLMs – balancing data protection with model utility and efficiency.
- Neuroscience-inspired modeling – linking human memory mechanisms to LLM cognition.
- LLM systems and optimization – exploring how PEFT, quantization, and inference techniques affect learning and behavior.
Figure: Overview of my research focus — connecting internal and external knowledge in LLMs.
news
Nov 04, 2025 | I’ll attend EMNLP 2025 in Suzhou. Come and chat! |
---|---|
Sep 08, 2025 | I serve as a TA for a new seminar course on LLM training at Saarland University. Check the course page: Efficient Training of Large Language Models: From Basics to Fine-Tuning. |
Jul 29, 2025 | Our new paper Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs is now on arXiv: ArXiv. |
Jul 21, 2025 | Our new paper Rethinking Memorization Measures in LLMs: Recollection vs. Counterfactual vs. Contextual Memorization is now on arXiv: ArXiv. |
Feb 21, 2025 | Check out our new paper revisiting the privacy, utility, and efficiency trade-offs of fine-tuning LLMs: ArXiv. |