Issues: bigscience-workshop/interpretability-ideas
Issues list
#8 Allow probing an arbitrary model's checkpoints with off-the-shelf probes (opened Feb 25, 2022 by oserikov)
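The core of this idea, probing a checkpoint's hidden states with a simple off-the-shelf classifier, can be sketched as follows. This is a minimal illustration with toy activations and a hand-rolled perceptron probe; the function name and the shape of the data are hypothetical, not part of any existing API.

```python
import random

def linear_probe_accuracy(hidden_states, labels, epochs=200, lr=0.1):
    """Train a tiny perceptron-style linear probe on per-token hidden states
    and report training accuracy -- the usual signal in probing studies.

    hidden_states: list of feature vectors (list[float])
    labels: list of 0/1 labels, one per vector
    """
    dim = len(hidden_states[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(hidden_states, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # perceptron update rule
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    correct = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
        for x, y in zip(hidden_states, labels)
    )
    return correct / len(labels)

# Toy, linearly separable "activations" standing in for one checkpoint's
# hidden states; a real run would extract these from each saved checkpoint.
random.seed(0)
acts = [[random.gauss(1, 0.1), random.gauss(1, 0.1)] for _ in range(20)] + \
       [[random.gauss(-1, 0.1), random.gauss(-1, 0.1)] for _ in range(20)]
labels = [1] * 20 + [0] * 20
acc = linear_probe_accuracy(acts, labels)
```

Running the same probe over a series of checkpoints, and plotting accuracy against training step, is what turns this into a training-dynamics analysis.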
#7 Provide a unified interpretability API for both Huggingface and non-Huggingface models (opened Feb 25, 2022 by oserikov)
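One way such a unified API could look is an adapter interface that downstream interpretability code targets, with one adapter per backend. The sketch below is purely illustrative: the class and method names are hypothetical, and only the Huggingface adapter's `output_hidden_states=True` flag reflects the real `transformers` API.

```python
class InterpretableModel:
    """Hypothetical unified interface: backends expose hidden states
    (and, analogously, attentions) behind one set of methods."""
    def hidden_states(self, text):
        raise NotImplementedError

class HFAdapter(InterpretableModel):
    """Wraps a Huggingface model/tokenizer pair; output_hidden_states=True
    is the standard transformers flag for exposing per-layer states."""
    def __init__(self, model, tokenizer):
        self.model, self.tokenizer = model, tokenizer
    def hidden_states(self, text):
        inputs = self.tokenizer(text, return_tensors="pt")
        out = self.model(**inputs, output_hidden_states=True)
        return out.hidden_states

class CallableAdapter(InterpretableModel):
    """Wraps a non-Huggingface model exposed as a plain callable that
    maps text to a list of per-token feature vectors."""
    def __init__(self, fn):
        self.fn = fn
    def hidden_states(self, text):
        return self.fn(text)

def probe_ready_features(adapter, text):
    # Downstream probing/analysis code depends only on the interface,
    # never on which backend produced the states.
    return adapter.hidden_states(text)

# Usage with a dummy non-HF "model": one 4-dim vector per token.
feats = probe_ready_features(
    CallableAdapter(lambda t: [[0.0] * 4 for _ in t.split()]),
    "hello world",
)
```

The payoff of the adapter design is that probes, attention views, and attribution tools written once against `InterpretableModel` work unchanged across backends.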
#5 Implement the unified activations interpretation API for similar models (opened Feb 24, 2022 by oserikov)
#4 Implement the unified attention interpretation API for similar models (opened Feb 24, 2022 by oserikov)
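A first building block for a shared attention-interpretation API is a backend-agnostic normalization step. Huggingface models return attention as a per-layer stack of shape (heads, seq, seq) when called with `output_attentions=True`; a common first move in attention visualization is averaging over heads. The function below is a hypothetical sketch operating on plain nested lists so it stays backend-neutral.

```python
def mean_head_attention(attn):
    """Average a (layers, heads, seq, seq) nested-list attention stack
    over heads, yielding one (seq, seq) attention map per layer."""
    averaged = []
    for layer in attn:
        heads = len(layer)
        seq = len(layer[0])
        layer_avg = [
            [sum(layer[h][i][j] for h in range(heads)) / heads
             for j in range(seq)]
            for i in range(seq)
        ]
        averaged.append(layer_avg)
    return averaged

# Toy input: 1 layer, 2 heads, sequence length 2.
toy = [[[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 1.0], [1.0, 0.0]]]]
maps = mean_head_attention(toy)  # one 2x2 map per layer
```

Any backend that can emit its attention weights in this layout can then share the same downstream visualization and analysis code.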
#3 Integrate existing tools (Captum, LIT, AllenNLP Interpret, NeuroX) into the HF pipeline (opened Feb 24, 2022 by oserikov)
#2 Implement and perform the interpretability analysis of the BigScience models (opened Feb 24, 2022 by oserikov)
#1 Implement tests for abstract structures such as in the Circuits thread (opened Feb 24, 2022 by oserikov)