news

Jan 26, 2026 Our papers ‘Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs’ and ‘In Agents We Trust, but Who Do Agents Trust? Latent Source Preferences Steer LLM Generations’ have been accepted at ICLR 2026; the camera-ready versions are coming out soon! Both papers will also be presented at this year’s IASEAI annual conference!
Jan 20, 2026 Our paper ‘The Algorithmic Self-Portrait: Deconstructing Memory in ChatGPT’ has been accepted at The Web Conference 2026; the camera-ready and arXiv versions are coming out soon! This paper will also be presented at this year’s IASEAI annual conference!
Nov 04, 2025 I’ll be attending EMNLP 2025 in Suzhou; come and chat!
Sep 08, 2025 I’m serving as a TA for a new seminar course on LLM training at Saarland University; check out the course page: Efficient Training of Large Language Models: From Basics to Fine-Tuning.
Jul 29, 2025 Our new paper Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs is now on arXiv: arXiv.
Jul 21, 2025 Our new paper Rethinking Memorization Measures in LLMs: Recollection vs. Counterfactual vs. Contextual Memorization is now on arXiv: arXiv.
Feb 21, 2025 Check out our new paper revisiting the privacy, utility, and efficiency trade-offs when fine-tuning LLMs: arXiv.
Feb 12, 2025 Check out our new position paper arguing for the importance of episodic memory in long-term LLM agents: arXiv.
Oct 24, 2024 Our paper Towards Reliable Latent Knowledge Estimation in LLMs: Zero-Prompt Many-Shot Based Factual Knowledge Extraction was accepted at WSDM 2025; see you in Hannover!
Oct 10, 2024 Our new benchmark on the episodic memory of LLMs is on arXiv! You can read it here.
Jul 27, 2024 Our new paper on memorization in LLMs is on arXiv! You can read it here.
Apr 19, 2024 Our new paper on knowledge estimation in LLMs is on arXiv! You can read it here.