A PhD position and two postdoc positions on natural language understanding are available. The positions are funded by the Pioneer Centre for AI. Read more about reasons to join us here, and about the positions at the Pioneer Centre here.
PhD Fellowship on Factual Text Generation
While recent large language models demonstrate surprising fluency and predictive capabilities in their generated text, they have been shown to generate factual inaccuracies even when they have encoded truthful information.
5 papers by CopeNLU authors are accepted to appear at EMNLP 2023, on topics ranging from explainability to language modelling.
Explaining Interactions Between Text Spans. Sagnik Ray Choudhury, Pepa Atanasova, Isabelle Augenstein.
Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions. Lucie-Aimée Kaffee, Arnav Arora, Isabelle Augenstein.
Thorny Roses: Investigating the Dual Use Dilemma in Natural Language Processing. Lucie-Aimée Kaffee, Arnav Arora, Zeerak Talat, Isabelle Augenstein.
A PhD fellowship on explainable natural language understanding is available in CopeNLU. The successful candidate will be supervised by Isabelle Augenstein and Pepa Atanasova. The position is offered in the context of an ERC Starting Grant on ‘Explainable and Robust Automatic Fact Checking (ExplainYourself)’. The ERC Starting Grant is a highly competitive funding programme by the European Research Council that supports the most talented early-career scientists in Europe with five years of funding for blue-skies research to build up or expand their research groups.
On 1 September 2023, the ERC Starting Grant project ExplainYourself on ‘Explainable and Robust Automatic Fact Checking’ is officially kicking off. The ERC Starting Grant is a highly competitive fellowship programme by the European Research Council to support talented early-career scientists who show the potential to become research leaders. It provides funding for blue-skies research for a period of up to 5 years.
ExplainYourself proposes to study explainable automatic fact checking, the task of automatically predicting the veracity of textual claims using machine learning (ML) methods, while also producing explanations about how the model arrived at the prediction.
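To make the task setup concrete, here is a minimal, hedged sketch of what explainable fact checking involves: given a claim and a piece of evidence, predict a veracity label and return a simple explanation (here, the input terms most responsible for the prediction). The tiny training set, the `[SEP]` convention, and the `predict_with_explanation` helper are illustrative assumptions, and this toy TF-IDF classifier is not the approach developed in the project.

```python
# Toy illustration of claim veracity prediction with a crude explanation.
# Assumes scikit-learn and numpy; all data and names below are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny hypothetical training set of "claim [SEP] evidence" pairs with labels.
train_texts = [
    "The Eiffel Tower is in Paris. [SEP] The Eiffel Tower is a landmark in Paris, France.",
    "The Eiffel Tower is in Rome. [SEP] The Eiffel Tower is a landmark in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level. [SEP] At sea level, water boils at 100 C.",
    "Water boils at 50 degrees Celsius at sea level. [SEP] At sea level, water boils at 100 C.",
]
train_labels = ["supported", "refuted", "supported", "refuted"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

def predict_with_explanation(claim: str, evidence: str, top_k: int = 3):
    """Return a veracity label plus the top-k input terms pushing towards it."""
    x = vectorizer.transform([f"{claim} [SEP] {evidence}"])
    label = clf.predict(x)[0]
    # For binary LogisticRegression, coef_[0] scores the second class in clf.classes_;
    # flip the sign when the predicted label is the first class.
    coef = clf.coef_[0] if label == clf.classes_[1] else -clf.coef_[0]
    # Only consider terms that actually occur in this claim-evidence pair.
    present = x.toarray()[0] > 0
    scores = coef * present
    terms = np.array(vectorizer.get_feature_names_out())
    top = terms[np.argsort(scores)[::-1][:top_k]]
    return label, list(top)

print(predict_with_explanation(
    "The Eiffel Tower is in Rome.",
    "The Eiffel Tower is a landmark in Paris, France."))
```

The point of the sketch is only the interface: a fact-checking model returns both a label and some account of why, whereas the project studies far richer, more faithful explanation methods than per-term weights.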
4 papers by CopeNLU authors are accepted to appear at ACL 2023. The papers make contributions on the faithfulness of explanations, measuring intersectional biases, event extraction, and few-shot stance detection.
Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection. Erik Arakelyan, Arnav Arora, Isabelle Augenstein.
Faithfulness Tests for Natural Language Explanations. Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein.
Measuring Intersectional Biases in Historical Documents. Nadav Borenstein, Karolina Stańczak, Thea Rolskov, Natacha Klein Käfer, Natália da Silva Perez, Isabelle Augenstein.
Three PhD fellowships and two postdoc positions on explainable stance detection are available in CopeNLU. The positions are offered in the context of an ERC Starting Grant on ‘Explainable and Robust Automatic Fact Checking (ExplainYourself)’. The ERC Starting Grant is a highly competitive funding programme by the European Research Council that supports the most talented early-career scientists in Europe with five years of funding for blue-skies research to build up or expand their research groups.
Isabelle Augenstein has been promoted to full professor, making her the youngest ever female full professor in Denmark. The previously reported youngest female full professor was appointed in 2008, when she was 34 years old. Read more in the University of Copenhagen’s press release.
2 papers by CopeNLU authors are accepted to appear at EMNLP 2022, on scholarly document understanding.
Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings. Malte Ostendorff, Nils Rethmeier, Isabelle Augenstein, Bela Gipp, Georg Rehm.
2 papers by CopeNLU authors on probing question answering models are accepted to appear at Coling 2022.
Machine Reading, Fast and Slow: When Do Models ‘Understand’ Language? Sagnik Ray Choudhury, Anna Rogers, Isabelle Augenstein.
Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models? Sagnik Ray Choudhury, Nikita Bhutani, Isabelle Augenstein.
3 papers by CopeNLU authors are accepted to appear at NAACL 2022, on the topics of hate speech detection, misinformation detection, and multilingual probing.
Counterfactually Augmented Data and Unintended Bias: The Case of Sexism and Hate Speech Detection. Indira Sen, Mattia Samory, Claudia Wagner, Isabelle Augenstein.
A Survey on Stance Detection for Mis- and Disinformation Identification. Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein.
Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models.