This repository has been archived by the owner on Oct 21, 2019. It is now read-only.

TEAM NAME - PROJECT NAME #21

Open
4 of 7 tasks
arjunmohnot opened this issue Oct 20, 2019 · 0 comments


@arjunmohnot

Before you start, please follow this format for your issue title:
npHard - DL-Vizard

ℹ️ Project information

Please complete all applicable fields.

🔥 Your Pitch

Kindly write a pitch for your project. Please do not use more than 500 words.

One of the most debated topics in deep learning is how to interpret and understand a trained model, particularly in high-risk industries like healthcare. The term “black box” has often been associated with deep learning algorithms: how can we trust a model's results if we can't explain how it reaches them? Take the example of a deep learning model trained to detect cancerous tumours. The model tells you it is 99% sure it has detected cancer, but it does not tell you why or how it made that decision. Did it find an important clue in the MRI scan? Or was a smudge on the scan incorrectly detected as a tumour? This is a matter of life and death for the patient, and doctors cannot afford to be wrong.
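To make the idea of "explaining a decision" concrete, here is a minimal sketch (not the team's actual implementation, which the issue does not describe): input-gradient saliency on a toy logistic-regression "model". The gradient of the predicted probability with respect to each input feature indicates which features drove the prediction; this is the simplest form of the attribution maps (saliency maps, Grad-CAM, etc.) commonly used to interpret deep networks. The weights and input below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Predicted probability of the positive class for input x."""
    return sigmoid(np.dot(w, x) + b)

def saliency(w, b, x):
    """Gradient of the prediction w.r.t. the input features.

    For a logistic model, dp/dx = p * (1 - p) * w (chain rule through
    the sigmoid). Larger magnitude = more influence on the decision.
    """
    p = predict(w, b, x)
    return p * (1 - p) * w

# Hypothetical trained weights and one input example.
w = np.array([2.0, -1.0, 0.0])
b = 0.1
x = np.array([1.5, 0.5, 3.0])

p = predict(w, b, x)
s = saliency(w, b, x)
print(f"prediction: {p:.3f}")
print("per-feature saliency:", np.round(s, 3))
```

The sign of each saliency entry matches the sign of the corresponding weight, and a feature the model ignores (zero weight) gets zero attribution; for deep networks the same gradient is obtained by backpropagation rather than a closed form.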

🔦 Any other specific thing you want to highlight?

(Optional)
Love 😍 to see more such hacks!

✅ Checklist

Before you post the issue:

  • You have followed the issue title format.
  • You have mentioned the correct labels.
  • You have provided all the information correctly.