Funded PhD and postdoc positions for start in Autumn 2026

Would you like to join our lab as a PhD student or postdoc in autumn 2026? We have openings on a new project titled “A Mechanistic Framework for Mitigating the Susceptibility of LLMs to Learning False Information”, funded by the Independent Research Fund Denmark and led by Isabelle Augenstein and Pepa Atanasova. The project’s goal is to develop a novel theoretical framework for LLM security, new mechanistic interpretability methods, and new evaluation protocols through research at the intersection of Natural Language Processing, LLM Security, and Explainable AI. In addition to working with the PIs, the postdoc and the PhD student, the project also offers the opportunity to apply to become an academic collaborator with NVIDIA as part of an existing relationship.

Read more about reasons to join CopeNLU here.

PhD position

The PhD position is fully funded for three years and open to candidates with a Master’s degree or equivalent in Computer Science or a related field. The PhD student’s research is expected to focus on developing mechanistic interpretability methods to curb the effects of false information attacks on LLMs at different stages of the model lifecycle.
The ideal candidate would thus have an educational background in, or prior research or work experience with, ML or NLP.

The PhD student will be supervised by Isabelle Augenstein and co-supervised by Pepa Atanasova, and also collaborate with the larger project team.

Read more about the position and apply here by 31 May 2026 to be considered. The start date is September 2026 or as soon as possible thereafter.

Postdoc position

The postdoc position is also offered for three years and open to candidates with a PhD degree in Computer Science or another relevant field. The postdoc’s duties will be to characterise the spectrum of false information encountered by LLMs at different stages of the model lifecycle, in order to develop mitigation methods and prevent false information attacks on LLMs.
The ideal candidate would thus have an educational background in, or prior research or work experience with, Natural Language Processing, LLM Security, and/or Explainable AI.

The postdoc will work with Isabelle Augenstein and Pepa Atanasova, and also collaborate with the larger project team.

Read more about the position and apply here by 31 May 2026 to be considered. The start date is September 2026 or as soon as possible thereafter.