Merge pull request #14 from zehio245/patch-1
Sergio Marco's proposal accepted!
# Beyond the Black Box: The Promise of SHAP in AI Interpretability

## Authors

Sergio Marco Chapapría, Verne Technology Group

## Presenter

Sergio Marco Chapapría, Verne Technology Group

## Abstract (English)
The presentation will explore how SHAP (SHapley Additive exPlanations) is transforming the interpretability of AI models, going beyond conventional techniques by providing accurate, coherent, and fair explanations. Rooted in Shapley value theory from cooperative game theory, SHAP quantifies the contribution of each feature to a prediction, overcoming the limitations of traditional approaches such as global variable importance and local sensitivity analysis.
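The Shapley-value idea behind SHAP can be sketched in a few lines of plain Python: each feature's attribution is its marginal contribution to the prediction, averaged over all coalitions of the other features with the classic Shapley weights. The toy model, the feature names `income` and `age`, and the all-zeros baseline below are hypothetical, chosen only so the exact computation stays tractable:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: weighted sum plus an interaction term.
def model(x):
    return 3.0 * x["income"] + 2.0 * x["age"] + 1.0 * x["income"] * x["age"]

BASELINE = {"income": 0.0, "age": 0.0}  # reference input ("absent" features)

def value(coalition, instance):
    """Model output when only features in `coalition` take the instance's
    values; all other features stay at the baseline."""
    x = {f: (instance[f] if f in coalition else BASELINE[f]) for f in instance}
    return model(x)

def shapley_values(instance):
    features = list(instance)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for coalition in combinations(others, k):
                s = set(coalition)
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}, instance) - value(s, instance))
        phi[f] = total
    return phi

x = {"income": 2.0, "age": 1.0}
phi = shapley_values(x)  # phi == {"income": 7.0, "age": 3.0}
# Additivity: attributions sum to prediction minus baseline prediction.
assert abs(sum(phi.values()) - (model(x) - model(BASELINE))) < 1e-9
```

Note how the 2.0 interaction term is split evenly between the two features, which is exactly the fairness property the abstract alludes to. The exact computation is exponential in the number of features; libraries such as `shap` make it practical for real models with efficient or approximate explainers (e.g. `shap.TreeExplainer` for tree ensembles, `shap.KernelExplainer` for black-box models).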
Case studies and practical applications will illustrate how this approach improves the understanding of complex models, enabling informed and responsible decision-making in fields ranging from medicine to banking. The benefits and challenges of adopting SHAP in different sectors, and how to address them, will be discussed.
The role of SHAP in promoting ethics and transparency in artificial intelligence will also be examined, as a way of fostering trust among users and regulators. The implications of adopting SHAP for generating value in organizations and driving market competitiveness will be highlighted.
Furthermore, the presentation will discuss the role of SHAP in designing and developing responsible AI solutions, and how it can encourage collaboration between AI experts, industry professionals, and business leaders to fully harness the potential of artificial intelligence in their businesses.
## Resumen (Castellano)

La ponencia explorará cómo SHAP (SHapley Additive exPlanations) está transformando la interpretabilidad en modelos de IA, yendo más allá de las técnicas convencionales al ofrecer explicaciones precisas, coherentes y justas. Basado en la teoría de valores de Shapley, que se originó en la teoría de juegos cooperativos, SHAP permite evaluar el impacto de cada característica en las predicciones, superando las limitaciones de enfoques tradicionales como la importancia de las variables y la sensibilidad local.

Se presentarán casos de estudio y aplicaciones prácticas que ilustran cómo este enfoque innovador mejora la comprensión de modelos complejos, permitiendo una toma de decisiones informada y responsable en diversos campos, desde la medicina hasta la banca. Se discutirán los beneficios y desafíos que surgen al adoptar SHAP en diferentes sectores y cómo abordarlos.

También se examinará cómo la implementación de SHAP promueve la ética y la transparencia en la inteligencia artificial, fomentando la confianza de usuarios y reguladores. Se destacarán las implicaciones de la adopción de SHAP en la generación de valor para las organizaciones y cómo puede impulsar la competitividad en el mercado.

Además, se debatirá el papel de SHAP en el diseño y desarrollo de soluciones de IA responsables y cómo puede fomentar la colaboración entre expertos en IA, profesionales de la industria y líderes empresariales para aprovechar al máximo el potencial de la inteligencia artificial en sus negocios.