5 papers by CopeNLU authors are accepted to appear at EMNLP 2024, on topics including factuality, internal knowledge conflicts, and probing for social bias.
Social Bias Probing: Fairness Benchmarking for Language Models. Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein.
Can Transformers Learn n-gram Language Models? Anej Svete, Nadav Borenstein, Mike Zhou, Isabelle Augenstein, Ryan Cotterell.
DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models. Sara Vera Marjanović, Haeun Yu, Pepa Atanasova, Maria Maistro, Christina Lioma, Isabelle Augenstein.
Revealing Fine-Grained Values and Opinions in Large Language Models. Dustin Wright, Arnav Arora, Nadav Borenstein, Serge Belongie, Isabelle Augenstein.
Factcheck-Bench: Fine-Grained Evaluation Benchmark for Automatic Fact-Checkers. Yuxia Wang, Revanth Gangi Reddy, Zain Muhammad Mujahid, Arnav Arora, Aleksandr Rubashevskii, Jiahui Geng, Osama Mohammed Afzal, Liangming Pan, Nadav Borenstein, Aditya Pillai, Isabelle Augenstein, Iryna Gurevych, Preslav Nakov.