News

Funded PhD and postdoc positions for a start in Autumn 2026

Would you like to join our lab as a PhD student or postdoc in autumn 2026? We have openings on a new interdisciplinary project on “Human-Centered Explainable Retrieval-Augmented LLMs”, funded by the Independent Research Fund Denmark and led by Isabelle Augenstein and Irina Shklovski. Read more about reasons to join CopeNLU here.
PhD position: The PhD position is fully funded for three years and open to candidates with a Master’s degree or equivalent in Computer Science or a related field.

8 Papers Accepted to EMNLP 2025

8 papers by CopeNLU authors have been accepted to appear at EMNLP 2025, on topics including explainability and cross-cultural NLP.
Graph-Guided Textual Explanation Generation Framework. Shuzhou Yuan, Jingyi Sun, Michael Färber, Steffen Eger, Pepa Atanasova, Isabelle Augenstein.
Self-Critique and Refinement for Faithful Natural Language Explanations. Yingming Wang, Pepa Atanasova.
FLARE: Faithful Logic-Aided Reasoning and Exploration. Erik Arakelyan, Pasquale Minervini, Pat Verga, Patrick Lewis, Isabelle Augenstein.
Explainability and Interpretability of Multilingual Large Language Models: A Survey.

PhD fellowships for a start in Spring or Autumn 2026

Would you like to join our lab as a PhD student in 2026? We have several openings. Read more about reasons to join CopeNLU here.
Start in Spring 2026: We have two fully funded 3-year PhD fellowships available for a start in Spring 2026.
Position 1: Explainable Natural Language Understanding. A fully funded 3-year PhD fellowship on explainable natural language understanding is available as part of the ExplainYourself project on Explainable and Robust Automatic Fact Checking.

3 Papers to be Presented at ACL 2025

3 papers by CopeNLU authors have been accepted to be presented at ACL 2025, on topics including fact checking, retrieval-augmented generation and cultural NLP.
Can Community Notes Replace Professional Fact-Checkers? Greta Warren, Nadav Borenstein, Desmond Elliott, Isabelle Augenstein.
A Reality Check on Context Utilisation for Retrieval-Augmented Generation. Lovisa Hagström, Sara Vera Marjanović, Haeun Yu, Arnav Arora, Christina Lioma, Maria Maistro, Pepa Atanasova, Isabelle Augenstein.
Survey of Cultural Awareness in Language Models: Text and Beyond.

4 Papers Accepted to NAACL 2025

4 papers by CopeNLU authors have been accepted to appear at NAACL 2025, on topics including interpretability and computational social science.
A Unified Framework for Input Feature Attribution Analysis. Jingyi Sun, Pepa Atanasova, Isabelle Augenstein.
Investigating Human Values in Online Communities. Nadav Borenstein, Arnav Arora, Lucie-Aimée Kaffee, Isabelle Augenstein.
Specializing Large Language Models to Simulate Survey Response Distributions for Global Populations. Yong Cao, Arnav Arora, Isabelle Augenstein, Paul Röttger.
Measuring and Benchmarking Large Language Models’ Capabilities to Generate Persuasive Language.

PhD fellowship on Interpretable Machine Learning available

One PhD fellowship on Interpretable Machine Learning is available for a start in Autumn 2025. The successful candidate will be supervised by Pepa Atanasova and Isabelle Augenstein, and will join the Natural Language Processing Section at the Department of Computer Science, Faculty of Science, University of Copenhagen. The full call and application link can be found here; the application deadline is January 15, 2025.

5 Papers Accepted to EMNLP 2024

5 papers by CopeNLU authors have been accepted to appear at EMNLP 2024, on topics including factuality and probing for bias.
Social Bias Probing: Fairness Benchmarking for Language Models. Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein.
Can Transformers Learn n-gram Language Models? Anej Svete, Nadav Borenstein, Mike Zhou, Isabelle Augenstein, Ryan Cotterell.
DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models. Sara Vera Marjanović, Haeun Yu, Pepa Atanasova, Maria Maistro, Christina Lioma, Isabelle Augenstein.

Pepa has been appointed as a Tenure-Track Assistant Professor

We are delighted to share that Pepa, who has been a key member of the CopeNLU group during her PhD and postdoctoral fellowship, is now joining us as an Assistant Professor in the Department of Computer Science at the University of Copenhagen. Pepa’s research in Natural Language Processing develops explainability techniques that enhance the fairness, transparency, and accountability of machine learning models, particularly large language models. Her work has already garnered significant recognition, including two prestigious awards (ELLIS, Informatics Europe) for her PhD thesis.

Participate in research on explainable fact checking

We are recruiting professional fact checkers to take part in an interview and/or a survey about their experiences with fact checking and fact-checking technologies. If you are interested in participating in this research (interviews, surveys, or both), please complete the short online form linked below. A member of the research team will then contact you with more information about the study and how to take part. Interview participants will be offered an online gift voucher to the value of 50 USD as compensation for their time.

Outstanding paper award at EACL 2024

We are honoured to share that our paper on measuring the fragility of natural language inference models has won an outstanding paper award at EACL 2024. The paper is based on the MSc thesis of Zhaoqi Liu, who was supervised by Isabelle Augenstein and Erik Arakelyan.
Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. Erik Arakelyan, Zhaoqi Liu, Isabelle Augenstein.