We are back with a spring edition of “Explaining Explainable AI Research”. This semester, the explanations are even more creative: from paintings to comics to infographics to professional-quality videos to code explanations. We are kicking this semester off with a bang! ✨
A painting interpretation of An Explainable AI System for the Diagnosis of High-Dimensional Biomedical Data (Ultsch et al., 2024)
By Molly Bocock (Cybersecurity MEng)
A comic interpretation of STELA: a community-centred approach to norm elicitation for AI alignment (Bergman et al., 2024)
By Bochu Ding (Design & Technology Innovation MEng)
Artist Statement: This comic depicts the process of implementing STELA (SocioTEchnical Language agent Alignment), a participatory, community-based method for defining rules and principles for AI agents that integrates the preferences of historically marginalized groups. The comic portrays the process as “steps” in a board game, an artifact common in many community-building exercises.
A comic interpretation of Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead (Rudin, 2019)
By Divya Sharma (MIDS)
A video overview of A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends (Saranya et al., 2023)
By Michael Dankwah Agyeman-Prempeh (Design & Technology Innovation MEng)
A video storytelling interpretation of XAI meets LLMs: A Survey of the Relation between Explainable AI and Large Language Models (Cambria et al.)
By John Rohit Ernest (AI MEng)
An infographic explaining Explainable AI: from black box to glass box (Rai, 2020)
By Shaila Guereca Guzman (MIDS)
A video explanation and code resource for Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (Selvaraju et al., 2016; updated 2019)
By Junyu Zhang (FinTech MEng)
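For readers who want the gist before watching: the core of Grad-CAM is a two-step computation — global-average-pool the class-score gradients over each feature map to get per-channel weights, then take a ReLU of the weighted sum of the activation maps. A minimal NumPy sketch of that formula (the function name and array shapes are our own illustrative choices, not taken from the paper's code):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from one conv layer.

    activations: array of shape (K, H, W), the layer's feature maps A^k.
    gradients:   array of shape (K, H, W), d(class score)/dA^k.
    Returns a non-negative (H, W) heatmap.
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # weighted combination of activation maps, summed over channels
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the class
    return np.maximum(cam, 0.0)
```

In practice the heatmap is then upsampled to the input image's resolution and overlaid on it, which is what produces the familiar Grad-CAM visualizations.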
A video overview of Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making (Rogha, 2023)
By Danny Ross (AI MEng)
A video overview of From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation (Wickstrøm et al., 2024)
By Faraz Jawed (MIDS)
A video overview and code example of Explainable Artificial Intelligence in Cybersecurity: A Brief Review (Hariharan et al., 2021)
By Benjamin Tang (Cybersecurity MEng)
A video overview with code illustrating A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME (Salih et al., 2023)
By Shaunak Badani (AI MEng)
Link: https://duke.box.com/s/tieyj6q2ukgmamt2ltcugxytdid1rv4b
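As a companion note on this entry: SHAP approximates Shapley values, which can be computed exactly when the feature count is small. Here is a toy sketch of that exact computation, averaging each feature's marginal contribution over all subsets of the other features; the function name and the baseline-replacement value function are illustrative assumptions, not the shap library's API:

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x.

    'Absent' features are replaced by the baseline value — the value
    function commonly used when explaining tabular models.
    Cost is exponential in len(x); SHAP exists to approximate this.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi
```

For a linear model the attributions reduce to weight times the feature's deviation from the baseline, and the attributions always sum to f(x) minus f(baseline) — the "efficiency" property the paper contrasts with LIME's local surrogates.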
A video overview of Towards Robust Interpretability with Self-Explaining Neural Networks (Alvarez-Melis & Jaakkola, 2018)
By Ryan Dai (Master of Engineering Management)
Link: https://drive.google.com/file/d/15iFDWQutcUDzZmuWMe0sHwmYskDbsWv7/view?usp=sharing