NIR <> lava-dl support #218
base: main
Conversation
Thanks, @stevenabreu7, for getting this started. I'll take a close look at the PR soon. One thing that pops up is the license of https://github.com/neuromorphs/NIR: I don't see a license for it. Can we sort out its licensing first? We prefer permissive licensing like BSD-3; non-permissive licensing models, even GPL, are problematic to get legal approval for.
Thanks for the quick reply, @bamsumit. The licensing should not be a problem; I will check with the others. Would MIT or BSD-3 be okay? The code is still work-in-progress, but if you could take a look and advise on the questions/comments that I added in the PR comment on top, that would be great. Thanks a lot!
BSD-3 would be great :)
@stevenabreu7 I have left some comments. As you get the PR ready, I'll add more comments on the docstrings and TODOs. The general rule is that we follow NumPy-style docstrings and avoid commented-out code and TODO markers as much as possible.
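For reference, a minimal sketch of the NumPy docstring style being asked for, applied to the read_cuba_lif signature from this PR (the body is omitted):

def read_cuba_lif(layer: h5py.Group, shape: Tuple[int] = None) -> nir.NIRNode:
    """Reads a CUBA-LIF layer from an HDF5 group into a NIR node.

    Parameters
    ----------
    layer : h5py.Group
        HDF5 group that describes the layer.
    shape : Tuple[int], optional
        Shape of the layer, by default None.

    Returns
    -------
    nir.NIRNode
        The equivalent NIR node.
    """
    ...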
__all__ = ['hdf5', 'blocks', 'utils']
__all__ = ['hdf5', 'blocks', 'utils', 'nir_lava']
I prefer just nir, but it's up to you. Either way, it needs to be consistent with the README, which says nir.
nir is also the name of the NIR library itself, so I thought nir_lava makes it easier to avoid conflicts (rather than import ... as ...).
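A small sketch of the name clash being avoided here, using the module names that appear elsewhere in this PR:

import nir                       # the standalone NIR library
from lava.lib.dl import netx     # lava-dl's netx package

graph = nir.read('oxford.nir')                           # NIR library call
net = netx.nir_lava.nir_graph_to_lava_network(graph)     # lava-dl side; no clash with `nir`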
Sure. Then let's make it consistent in the README.
src/lava/lib/dl/netx/nir_lava.py
PATH_TYPE = typing.Union[str, pathlib.Path]
def read_cuba_lif(layer: h5py.Group, shape: Tuple[int] = None) -> nir.NIRNode:
The name should probably be read_neuron, as it looks like it is going to handle other neuron types in the future.
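A possible shape for that rename, only as a sketch: the dispatch on a neuron-type field is an assumption about the netx HDF5 layout, and only the CUBA branch exists in this PR.

import typing

import h5py
import nir


def read_neuron(layer: h5py.Group,
                shape: typing.Tuple[int] = None) -> nir.NIRNode:
    """Reads a neuron layer from an HDF5 group into the matching NIR node."""
    neuron_type = layer['neuron']['type'][()]   # assumed field layout
    if neuron_type in (b'CUBA', 'CUBA'):
        return read_cuba_lif(layer, shape)
    # other neuron types (e.g. Sigma-Delta) could be dispatched here later
    raise NotImplementedError(f'Unsupported neuron type: {neuron_type}')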
src/lava/lib/dl/netx/nir_lava.py
def nir_graph_to_lava_network(graph: nir.NIRGraph) -> NetworkNIR:
    """Converts a NIRGraph to a Lava network."""
It creates a lava-dl network, which is different from a Lava network. Let's make that clear.
netx.nir_lava.convert_to_nir(oxford_net_path, 'oxford.nir')
oxford_nir = nir.read('oxford.nir')
os.remove('oxford.nir')
self.assertIsNotNone(oxford_nir)
I would also suggest exporting with oxford_nir.export_hdf5('temp.net') and checking the diff against the original source.
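A hedged sketch of what that round-trip check could look like inside the existing test; export_hdf5 follows the suggestion above, and the layer/weight paths in the HDF5 comparison are assumptions about the netx file layout:

import h5py
import numpy as np

netx.nir_lava.convert_to_nir(oxford_net_path, 'oxford.nir')
oxford_nir = nir.read('oxford.nir')
oxford_nir.export_hdf5('temp.net')   # API as suggested above

# compare the re-exported file against the original netx source, layer by layer
with h5py.File(oxford_net_path, 'r') as orig, h5py.File('temp.net', 'r') as back:
    for name in orig['layer']:
        np.testing.assert_allclose(orig['layer'][name]['weight'][()],
                                   back['layer'][name]['weight'][()])

os.remove('oxford.nir')
os.remove('temp.net')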
On it!
A question on this: do input layers modify the resulting dynamics of the network? In PyTorch this is not the case; the input layer is basically just a wrapper for the input data that stores the data shape. But in slayer the input layer also has neuron parameters, which makes me think that the input spikes would be fed through an input layer and the resulting spike train would then differ from the original one. Or is the slayer input layer just "mock input neurons" whose neuron parameters are irrelevant?
The input layers typically apply neuron dynamics: exponentially decaying dynamics in the case of CUBA-LIF, delta dynamics in the case of Sigma-Delta, and so on. This is different from PyTorch.
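To make the difference concrete, a plain-PyTorch sketch of the exponentially decaying CUBA-LIF dynamics an input layer applies; the parameter values are made up and this is not the slayer implementation itself:

import torch


def cuba_lif_input(spikes: torch.Tensor,
                   current_decay: float = 0.5,
                   voltage_decay: float = 0.5,
                   threshold: float = 1.0) -> torch.Tensor:
    """Feeds an input spike train (neurons x time) through CUBA-LIF dynamics.

    A PyTorch input layer would return `spikes` unchanged; here the output
    spike train generally differs from the input one.
    """
    current = torch.zeros(spikes.shape[0])
    voltage = torch.zeros(spikes.shape[0])
    out = torch.zeros_like(spikes)
    for t in range(spikes.shape[1]):
        current = (1 - current_decay) * current + spikes[:, t]
        voltage = (1 - voltage_decay) * voltage + current
        out[:, t] = (voltage >= threshold).float()
        voltage = voltage * (1 - out[:, t])   # reset membrane on spike
    return out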
Thanks for all your help so far @bamsumit. I've been struggling to get the weights and parameters to match between an original lava-dl-generated .net file and the .net file that I get when going to NIR and back. So I have a general question about how to proceed. For lava-dl → NIR, should I 1) use a lava-dl network (i.e. a torch.nn.Module with slayer blocks) to go to NIR, or 2) use the generated netx HDF5 file to go to NIR? So far I have done 2), but now when I try to go back from the generated NIR graph to a lava-dl network, I need to reverse engineer the quantisation done by lava-dl to make the weights and parameters match. Similarly, for NIR → lava-dl, it seems that creating a torch.nn.Module with slayer blocks may be better, because then we could simply use all the Loihi-specific quantisation schemes that are already part of lava-dl. With NIR, we want to be able to go lava-dl → NIR → lava-dl easily, but we also want to be able to go from other SNN simulators to NIR and then to lava-dl, and most SNN simulators will not apply much/any quantisation to the weights/params. What do you think is the best way to proceed?
Hi @stevenabreu7, it is up to you. I am okay with the lava-dl -> NIR -> lava-dl route. My only concern is that it adds an extra layer of dependency compared to netx <-> NIR translation. Can you elaborate on what issue with the weights and parameters you are facing? I am guessing it's a quantization scaling issue. Perhaps this might be helpful:
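For illustration only, a rough sketch of the kind of quantization scaling mismatch being described; the scale factor and rounding scheme here are assumptions, not lava-dl's actual export code:

import numpy as np

w_scale = 1 << 6   # assumed power-of-two weight scale; the real exponent may differ

float_weights = np.array([0.013, -0.27, 0.81])

# what a quantizing export might store in the netx HDF5 file
quantized = np.round(float_weights * w_scale).astype(int)   # -> [  1, -17,  52]

# going from the quantized file back to a lava-dl network requires undoing the scale
recovered = quantized / w_scale

# recovered != float_weights exactly; the rounding error is the mismatch described above
print(float_weights - recovered)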
Issue number: #217 - interoperability of neuromorphic simulators and hardware platforms through NIR.
Objective of pull request: support neuromorphic intermediate representation (NIR) in lava-dl.
Questions
The NIR graph is currently converted into a Network(torch.nn.Module) with slayer blocks for each layer. Should I instead convert the NIR HDF5 file directly to a netx HDF5 file?
Notes
Things left to do
simulation.Ts, simulation.tSample, …
Pull request checklist
Your PR fulfills the following requirements:
Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
Unit tests (pytest) pass locally
Pull request type
Please check your PR type:
What is the current behavior?
What is the new behavior?
Does this introduce a breaking change?
Supplemental information