ExplainYourself Project Kick-Off


On 1 September 2023, the ERC Starting Grant project ExplainYourself on ‘Explainable and Robust Automatic Fact Checking’ officially kicks off. The ERC Starting Grant is a highly competitive fellowship programme by the European Research Council to support talented early-career scientists who show the potential to become research leaders. It provides funding for blue-skies research for a period of up to five years.

ExplainYourself proposes to study explainable automatic fact checking: the task of automatically predicting the veracity of textual claims using machine learning (ML) methods, while also producing explanations of how the model arrived at its prediction. Automatic fact checking methods often rely on opaque deep neural network models, whose inner workings cannot easily be explained. Especially for complex tasks such as automatic fact checking, this hinders greater adoption, as it is unclear to users when the models’ predictions can be trusted. Existing explainable ML methods partly overcome this by reducing the task of explanation generation to highlighting the right rationale, i.e. the portions of the input most responsible for a prediction. While a good first step, this does not fully explain how an ML model arrived at a prediction. For knowledge-intensive natural language understanding (NLU) tasks such as fact checking, an ML model needs to learn complex relationships between the claim, multiple evidence documents, and common-sense knowledge, in addition to retrieving the right evidence. There is currently no explainability method that aims to illuminate this highly complex process. In addition, existing approaches are unable to produce diverse explanations geared towards users with different information needs.

ExplainYourself radically departs from existing work by proposing methods for explainable fact checking that more accurately reflect how fact checking models make decisions, and that are useful to diverse groups of end users. These innovations are expected to carry over to explanation generation for other knowledge-intensive NLU tasks, such as question answering or entity linking.

The following researchers affiliated with the ExplainYourself project are joining CopeNLU on 1 September 2023:

  • Haeun Yu (PhD student), whose main research interests include enhancing explainability in fact-checking and the transparency of knowledge-enhanced language models (LMs);
  • Jingyi Sun (PhD student), whose research interests include explainability, fact-checking, and question answering.

They will both be supervised by Isabelle Augenstein and Pepa Atanasova. A postdoctoral researcher with a focus on human-centered explainability methods for fact checking is expected to join the team in spring 2024, and openings for further positions, starting in autumn 2024, will be announced soon.