Explainable Machine Learning

We study methods for explaining the relationships between inputs and outputs of black-box machine learning models, particularly in the context of challenging NLU tasks such as fact checking.
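
As a minimal illustration of the kind of input attribution such explanation methods compute, the sketch below scores each token of an input by how much a black-box classifier's prediction changes when that token is removed (occlusion). The toy_predict scorer and the simple token-level masking are illustrative assumptions for this sketch, not a method or model from the projects above.

```python
# Minimal sketch of perturbation-based (occlusion) input attribution for a
# black-box text classifier. The classifier below is a hypothetical stand-in;
# the point is only to show the idea: a token is important to the extent that
# masking it changes the model's output.

from typing import Callable, List


def occlusion_attributions(
    predict: Callable[[str], float], tokens: List[str]
) -> List[float]:
    """Return one importance score per token.

    `predict` maps a text to the probability of the positive class; a token's
    score is the drop in that probability when the token is removed.
    """
    full_score = predict(" ".join(tokens))
    attributions = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        attributions.append(full_score - predict(" ".join(occluded)))
    return attributions


if __name__ == "__main__":
    # Hypothetical black-box scorer, used purely for illustration.
    def toy_predict(text: str) -> float:
        return 0.9 if "unverified" in text else 0.2

    tokens = "the unverified claim spread quickly".split()
    for token, score in zip(tokens, occlusion_attributions(toy_predict, tokens)):
        print(f"{token:>12s}: {score:+.2f}")
```
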

We are researching methods for explainable stance detection as part of a DFF Sapere Aude Research Leader project, and methods for explainable fact checking as part of an ERC Starting Grant project.

Moreover, we’re investigating fair and accountable Natural Language Processing methods to understand what influences the employer images that organisations project in job ads, as part of a Carlsberg-funded project.

Publications

Human values play a vital role as an analytical tool in social sciences, enabling the study of diverse dimensions within society as a …

Explainable AI methods facilitate the understanding of model behaviour, yet small, imperceptible perturbations to inputs can vastly …

Recent studies of the emergent capabilities of transformer-based Natural Language Understanding (NLU) models have indicated that they …

How much meaning influences gender assignment across languages is an active area of research in modern linguistics and cognitive …

NLP models are used in a variety of critical social computing tasks, such as detecting sexist, racist, or otherwise hateful content. …

Answering complex queries on incomplete knowledge graphs is a challenging task where a model needs to answer complex logical queries in …

Explanations of neural models aim to reveal a model’s decision-making process for its predictions. However, recent work shows …

Language embeds information about social, cultural, and political values people hold. Prior work has explored social and potentially …

The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic …

Fact-checking systems have become important tools to verify fake and misleading news. These systems become more trustworthy when …

There have been many efforts to try to understand what grammatical knowledge (e.g., ability to understand the part of speech of a …

Two of the most fundamental challenges in Natural Language Understanding (NLU) at present are: (a) how to establish whether deep …

With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This …

Counterfactually Augmented Data (CAD) aims to improve out-of-domain generalizability, an indicator of model robustness. The improvement …

The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages …

Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is …

Explanations shed light on a machine learning model’s rationales and can aid in identifying deficiencies in its reasoning …

Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming human performance at certain tasks. …

As NLP models are increasingly deployed in socially situated settings such as online abusive content detection, ensuring these models …

Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet …

The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns to …

Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural …

Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack is universal adversarial …

While state-of-the-art NLP explainability (XAI) methods focus on supervised, per-instance end or diagnostic probing task evaluation [4, …

This paper provides the first study of how fact checking explanations can be generated automatically based on available claim context, …

News

A PhD and two postdoc positions on natural language understanding are available. The positions are funded by the Pioneer Centre for AI.

A PhD position on explainable natural language understanding is available in CopeNLU. The position is funded by the ERC Starting Grant …

On 1 September 2023, the ERC Starting Grant project ExplainYourself on ‘Explainable and Robust Automatic Fact Checking’ is …

PhD and postdoctoral fellowships on explainable fact checking are available in CopeNLU. The positions are funded by the ERC Starting …

On 1 September 2021, the DFF Sapere Aude project EXPANSE on ‘Learning to Explain Attitudes on Social Media’ is kicking off, …