Add support for torch.compile(fullgraph=True)
WIP
#271
Conversation
At a quick glance, this is in nice shape.
I removed the layer_size requirement.
Sorry for how long it's taken me to get around to this. I fixed a conflict to enable the tests to run. I'll test your PR on a few different corner cases of models; if all of those work, I'll go ahead and modify the tests to suit your approach.
The example in the notebook isn't working yet. Otherwise, all my training-loop tests across CPU/GPU/different architectures are working. Just have to see what's happening there.
The only other error I see is in one of the tests. My intention with this test was to show the case where two inputs are fed to the neuron (i.e., current input and membrane potential), but at the moment the external membrane potential overrides the instance variable. Testing a fix now.
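For context, here is a minimal sketch of the two-input pattern being discussed, assuming the standard snnTorch Leaky interface; the beta value and tensor shapes are illustrative and not taken from this PR:

```python
import torch
import snntorch as snn

# Two-input case: the neuron receives both a current input and an
# externally managed membrane potential at each call.
lif = snn.Leaky(beta=0.9)

cur = torch.rand(1, 10)    # synaptic current input (shape is illustrative)
mem = torch.zeros(1, 10)   # externally tracked membrane potential

# The externally supplied membrane potential is used as the neuron's state
# for this step; the updated state is returned alongside the spikes.
spk, mem = lif(cur, mem)
```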
All tests are passing! I think this is good for merging. It might be good to keep this open here so other neurons can be refactored into the same style.
These are experimental changes that add support for torch.compile(fullgraph=True). Please note that only the default mode is supported right now, so cudagraphs probably won't work. Currently only Leaky is supported, and the changes can be tested in examples/tutorial_5_fullgraph.ipynb.
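As a rough illustration of what this enables, here is a minimal sketch of compiling a network containing a Leaky neuron with fullgraph=True; the layer sizes, beta, batch size, and number of time steps are placeholders, not values from the PR or the notebook:

```python
import torch
import torch.nn as nn
import snntorch as snn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)
        self.lif = snn.Leaky(beta=0.9)

    def forward(self, x, mem):
        cur = self.fc(x)
        spk, mem = self.lif(cur, mem)
        return spk, mem

net = Net()

# Default backend/mode only, per the note above (cudagraphs likely unsupported).
compiled_net = torch.compile(net, fullgraph=True)

x = torch.rand(32, 784)
mem = torch.zeros(32, 10)
for _ in range(5):  # unroll over a few time steps
    spk, mem = compiled_net(x, mem)
```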