News

PhD fellowships for a start in Spring or Autumn 2026

Would you like to join our lab as a PhD student in 2026? We have several openings. Read more about reasons to join CopeNLU here.

Start in Spring 2026

A fully funded 3-year PhD fellowship on explainable natural language understanding is available as part of the ExplainYourself project on Explainable and Robust Automatic Fact Checking. The position requires candidates to have completed a Master’s degree by the start date.

3 Papers to be Presented at ACL 2025

3 papers by CopeNLU authors have been accepted for presentation at ACL 2025, on topics including fact checking, retrieval-augmented generation, and cultural NLP.

Can Community Notes Replace Professional Fact-Checkers? Greta Warren, Nadav Borenstein, Desmond Elliott, Isabelle Augenstein.
A Reality Check on Context Utilisation for Retrieval-Augmented Generation. Lovisa Hagström, Sara Vera Marjanović, Haeun Yu, Arnav Arora, Christina Lioma, Maria Maistro, Pepa Atanasova, Isabelle Augenstein.
Survey of Cultural Awareness in Language Models: Text and Beyond.

4 Papers Accepted to NAACL 2025

4 papers by CopeNLU authors have been accepted to appear at NAACL 2025, on topics including interpretability and computational social science.

A Unified Framework for Input Feature Attribution Analysis. Jingyi Sun, Pepa Atanasova, Isabelle Augenstein.
Investigating Human Values in Online Communities. Nadav Borenstein, Arnav Arora, Lucie-Aimée Kaffee, Isabelle Augenstein.
Specializing Large Language Models to Simulate Survey Response Distributions for Global Populations. Yong Cao, Arnav Arora, Isabelle Augenstein, Paul Röttger.
Measuring and Benchmarking Large Language Models’ Capabilities to Generate Persuasive Language.

PhD fellowship on Interpretable Machine Learning available

One PhD fellowship on Interpretable Machine Learning is available for a start in Autumn 2025. The successful candidate will be supervised by Pepa Atanasova and Isabelle Augenstein, and will join the Natural Language Processing Section at the Department of Computer Science, Faculty of Science, University of Copenhagen. The full call and application link can be found here; the application deadline is January 15, 2025.

5 Papers Accepted to EMNLP 2024

5 papers by CopeNLU authors have been accepted to appear at EMNLP 2024, on topics including factuality and probing for bias.

Social Bias Probing: Fairness Benchmarking for Language Models. Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein.
Can Transformers Learn n-gram Language Models? Anej Svete, Nadav Borenstein, Mike Zhou, Isabelle Augenstein, Ryan Cotterell.
DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models. Sara Vera Marjanović, Haeun Yu, Pepa Atanasova, Maria Maistro, Christina Lioma, Isabelle Augenstein.

Pepa has been appointed as a Tenure-Track Assistant Professor

We are delighted to share that Pepa, who has been a key member of the CopeNLU group during her PhD and postdoctoral fellowship, is now joining us as an Assistant Professor in the Department of Computer Science at the University of Copenhagen. Pepa’s research in Natural Language Processing develops explainability techniques that enhance the fairness, transparency, and accountability of machine learning models, particularly in the context of large language models. Her work has already garnered significant recognition, including two prestigious awards (ELLIS, Informatics Europe) for her PhD thesis.

Participate in research on explainable fact checking

We are recruiting professional fact checkers to take part in an interview and/or a survey about their experiences with fact checking and fact checking technologies. If you are interested in participating in this research (interviews, surveys, or both), please complete the short online form linked below. A member of the research team will then contact you with more information about the study and how to take part. Interview participants will be offered an online gift voucher worth 50 USD as compensation for their time.

Outstanding paper award at EACL 2024

We are honoured to share that our paper on measuring the fragility of natural language inference models has won an outstanding paper award at EACL 2024. The paper is based on the MSc thesis of Zhaoqi Liu, who was supervised by Isabelle Augenstein and Erik Arakelyan.

Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. Erik Arakelyan, Zhaoqi Liu, Isabelle Augenstein.

PhD and postdoc positions available at Pioneer Centre for AI

A PhD and two postdoc positions on natural language understanding are available. The positions are funded by the Pioneer Centre for AI. Read more about reasons to join us here. You can read more about the positions at the Pioneer Centre here.

PhD Fellowship on Factual Text Generation

While recent large language models demonstrate surprising fluency and predictive capabilities in their generated text, they have been shown to produce factual inaccuracies even when they have encoded truthful information.

5 Papers Accepted to EMNLP 2023

5 papers by CopeNLU authors have been accepted to appear at EMNLP 2023, on topics ranging from explainability to language modelling.

Explaining Interactions Between Text Spans. Sagnik Ray Choudhury, Pepa Atanasova, Isabelle Augenstein.
Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions. Lucie-Aimée Kaffee, Arnav Arora, Isabelle Augenstein.
Thorny Roses: Investigating the Dual Use Dilemma in Natural Language Processing. Lucie-Aimée Kaffee, Arnav Arora, Zeerak Talat, Isabelle Augenstein.