Altruist: Argumentative Explanations through Local Interpretations of Predictive Models
This paper introduces Altruist (Argumentative expLanaTions thRoUgh local InterpretationS of predicTive models), a method for transforming feature importance (FI) interpretations of ML models into insightful, validated explanations using argumentation based on classical logic. Altruist provides the local maximum truthful interpretation, together with the reasons that justify its truthfulness, and can serve as an easy-to-use tool for choosing between different interpretation techniques based on a few specific criteria. Its innate virtues of truthfulness, transparency and user-friendliness make it an apt tool for the XAI community.
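The core idea of checking whether an FI interpretation is truthful can be sketched with a simple perturbation test: a feature's importance value is truthful if nudging that feature moves the prediction in the direction the importance claims. The snippet below is an illustrative sketch only; the function names, toy model, and `is_truthful` helper are assumptions for demonstration, not the actual Altruist API.

```python
# Illustrative sketch (not the Altruist API): a feature-importance value is
# treated as truthful if perturbing the feature changes the prediction in
# the direction the importance implies.

def predict(x):
    # Toy linear model standing in for any black-box predictor.
    weights = [0.5, -2.0, 1.0]
    return sum(w * v for w, v in zip(weights, x))

def is_truthful(x, importances, feature, step=1.0):
    """Check one feature-importance claim by perturbation.

    A positive importance claims that increasing the feature increases
    the prediction; a negative importance claims the opposite.
    """
    base = predict(x)
    perturbed = list(x)
    perturbed[feature] += step
    delta = predict(perturbed) - base
    # The claim holds if the observed change agrees in sign.
    return delta * importances[feature] >= 0

x = [1.0, 2.0, 3.0]
importances = [0.5, -2.0, 1.0]  # e.g. produced by a technique such as LIME
print([is_truthful(x, importances, f) for f in range(3)])
```

In Altruist, untruthful importance values are the ones attacked in the argumentation framework, while the set of features that pass such checks forms the truthful interpretation.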
> *altruist*: a person unselfishly concerned for or devoted to the welfare of others (opposed to egoist).
Please ensure you have Docker installed on your desktop. Then:

```shell
docker pull johnmollas/altruist
```

After successfully installing Altruist, run:

```shell
docker run -p 8888:8888 johnmollas/altruist
```

Then copy the localhost URL from your terminal and open it in your browser.
Name | Email
---|---
Ioannis Mollas | [email protected] |
Grigorios Tsoumakas | [email protected] |
Nick Bassiliades | [email protected] |
LionLearn Interpretability Library containing:
- LioNets: Local Interpretation Of Neural nETworkS through penultimate layer decoding
- LionForests: Local Interpretation Of raNdom FORESts through paTh Selection