We are honoured to share that our paper on measuring the fragility of natural language inference models has won an outstanding paper award at EACL 2024. The paper is based on the MSc thesis of Zhaoqi Liu, who was supervised by Isabelle Augenstein and Erik Arakelyan.
Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. Erik Arakelyan, Zhaoqi Liu, Isabelle Augenstein.
5 papers by CopeNLU authors are accepted to appear at EMNLP 2023, on topics ranging from explainability to language modelling.
Explaining Interactions Between Text Spans. Sagnik Ray Choudhury, Pepa Atanasova, Isabelle Augenstein.
Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions. Lucie-Aimée Kaffee, Arnav Arora, Isabelle Augenstein.
Thorny Roses: Investigating the Dual Use Dilemma in Natural Language Processing. Lucie-Aimée Kaffee, Arnav Arora, Zeerak Talat, Isabelle Augenstein.
4 papers by CopeNLU authors are accepted to appear at ACL 2023. The papers make contributions to the faithfulness of explanations, measuring intersectional biases, event extraction, and few-shot stance detection.
Topic-Guided Sampling For Data-Efficient Multi-Domain Stance Detection. Erik Arakelyan, Arnav Arora, Isabelle Augenstein.
Faithfulness Tests for Natural Language Explanations. Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein.
Measuring Intersectional Biases in Historical Documents. Nadav Borenstein, Karolina Stańczak, Thea Rolskov, Natacha Klein Käfer, Natália da Silva Perez, Isabelle Augenstein.
2 papers by CopeNLU authors on probing question answering models are accepted to appear at COLING 2022.
Machine Reading, Fast and Slow: When Do Models ‘Understand’ Language? Sagnik Ray Choudhury, Anna Rogers, Isabelle Augenstein.
Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models? Sagnik Ray Choudhury, Nikita Bhutani, Isabelle Augenstein.
3 papers by CopeNLU authors are accepted to appear at NAACL 2022, on the topics of hate speech detection, misinformation detection and multilingual probing.
Counterfactually Augmented Data and Unintended Bias: The Case of Sexism and Hate Speech Detection. Indira Sen, Mattia Samory, Claudia Wagner, Isabelle Augenstein.
A Survey on Stance Detection for Mis- and Disinformation Identification. Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein.
Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models.
2 papers by CopeNLU authors are accepted to appear at AAAI 2022. One paper is on explanation generation, demonstrating how directly optimising for diagnostic properties of explanations, such as faithfulness, data consistency and confidence indication, can improve explanation quality. The other paper presents the most comprehensive study of cross-lingual stance detection to date, and proposes methods for learning with limited labelled data across languages and domains.
Diagnostics-Guided Explanation Generation. Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein.
A paper by CopeNLU authors is accepted to appear at IJCAI 2021. The paper studies how to perform complex claim verification on naturally occurring political claims with multiple hops over evidence chunks.
Multi-Hop Fact Checking of Political Claims. Wojciech Ostrowski, Arnav Arora, Pepa Atanasova, Isabelle Augenstein.
2 papers by CopeNLU authors are accepted to appear at ACL 2021. One paper is on interpretability, examining how sparsity affects our ability to use attention as an explainability tool; the other is on scientific document understanding, introducing a new dataset for the task of cite-worthiness detection in scientific articles.
Is Sparse Attention More Interpretable? Clara Meister, Stefan Lazov, Isabelle Augenstein, Ryan Cotterell.
CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding.