| Nov 04, 2025 | I’ll attend EMNLP 2025 in Suzhou; come and chat! |
| Sep 08, 2025 | Serving as a TA for a new seminar course on LLM training at Saarland University; check out the course page: Efficient Training of Large Language Models: From Basics to Fine-Tuning. |
| Jul 29, 2025 | Our new paper Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs is now on arXiv: arXiv. |
| Jul 21, 2025 | Our new paper Rethinking Memorization Measures in LLMs: Recollection vs. Counterfactual vs. Contextual Memorization is now on arXiv: arXiv. |
| Feb 21, 2025 | Check out our new paper revisiting the privacy, utility, and efficiency trade-offs of fine-tuning LLMs: arXiv. |
| Feb 12, 2025 | Check out our new position paper arguing for the importance of episodic memory in long-term LLM agents: arXiv. |
| Oct 24, 2024 | Our paper Towards Reliable Latent Knowledge Estimation in LLMs: Zero-Prompt Many-Shot Based Factual Knowledge Extraction was accepted at WSDM 2025; see you in Hannover! |
| Oct 10, 2024 | A new benchmark on the episodic memory of LLMs is on arXiv! You can read it here. |
| Jul 27, 2024 | A new paper on memorization in LLMs is on arXiv! You can read it here. |
| Apr 19, 2024 | A new paper on knowledge estimation in LLMs is on arXiv! You can read it here. |