Tutorials for eXplainable Artificial Intelligence (XAI) methods

This repository contains a collection of self-explanatory tutorials for different model-agnostic and model-specific XAI methods. Each tutorial comes in a Jupyter Notebook which contains a short video lecture and practical exercises. The material has already been used in the context of two courses: the Zero to Hero Summer Academy (fully online) and ml4hearth (hybrid setting). The course material can be adjusted according to the available time frame and schedule. The material is self-explanatory and can also be consumed offline.

The learning objectives are:

  • understand the importance of interpretability
  • discover the existing model-agnostic and model-specific XAI methods
  • learn how to interpret the outputs and graphs of those methods with hands-on exercises
  • learn to choose which method is suitable for a specific task

List of Tutorials for Model-Agnostic Methods

  • Permutation Feature Importance
  • SHapley Additive exPlanations (SHAP)
  • Local Interpretable Model-Agnostic Explanations (LIME)
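
To give a taste of the first method in the list, here is a minimal from-scratch sketch of permutation feature importance. The toy model, data, and function names are invented for illustration and are not taken from the tutorials; the idea is simply that a feature's importance is the drop in a score when that feature's column is shuffled.

```python
import random

def accuracy(y_true, y_pred):
    """Fraction of matching labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Importance of feature j = mean drop in the metric after shuffling column j."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            # Shuffle only column j, leaving the other features intact.
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - metric(y, [model(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "model" that only looks at feature 0, plus labels that match it exactly.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.1, 0.9], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2], [0.3, 0.7], [0.7, 0.3]]
y = [model(row) for row in X]

imp = permutation_importance(model, X, y, accuracy)
print(imp)  # shuffling feature 0 hurts the score; shuffling feature 1 does not
```

Because the model is agnostic here (we only call it on inputs), the same procedure works for any classifier, which is exactly what makes the method model-agnostic.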

List of Tutorials for Model-Specific Methods

  • Forest-Guided Clustering
  • Grad-CAM

Requirements and Setup

It is possible either to create an environment and install all the necessary packages locally (using the requirements.txt file) or to execute the notebooks in the browser by clicking the 'Open in Colab' button. The second option requires no further installation, but the user must have access to a Google account.
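
For the local option, the setup typically looks like the following (a sketch assuming Python 3 and the repository's requirements.txt; the environment name is arbitrary and the activation command differs on Windows):

```shell
# Clone the repository first, then from its root directory:
python3 -m venv xai-env          # create an isolated environment
source xai-env/bin/activate      # activate it (Windows: xai-env\Scripts\activate)
pip install -r requirements.txt  # install the packages the tutorials need
jupyter notebook                 # open the tutorial notebooks in the browser
```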

Contributions

Comments and input are very welcome! If you have a suggestion or think something should be changed, please open an issue or submit a pull request.