From eebdd823d179ffe8343a73cf0fe37bcc4d518960 Mon Sep 17 00:00:00 2001
From: Gregor Lenz
Date: Fri, 4 Aug 2023 15:01:02 +0200
Subject: [PATCH] update links to Sinabs and Rockpool

---
 content/post/framework-benchmarking/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/post/framework-benchmarking/index.md b/content/post/framework-benchmarking/index.md
index b051b2b5..28918246 100644
--- a/content/post/framework-benchmarking/index.md
+++ b/content/post/framework-benchmarking/index.md
@@ -13,8 +13,8 @@ Open Neuromorphic's [list of SNN frameworks](https://github.com/open-neuromorphi

 ![Comparison of time taken for forward and backward passes in different frameworks, for 512 neurons.](framework-benchmarking-512.png)

-The first figure shows results for a small 512 neuron network. Overall, [SpikingJelly](https://github.com/fangwei123456/spikingjelly) is the fastest when using the CuPy backend, at just 1.49ms for both forward and backward call. The libraries that use an implementation of [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs](https://github.com/synsense/sinabs-exodus) / [Rockpool](https://rockpool.ai/)) or [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](https://github.com/lava-nc/lava-dl)) equally benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
-In contrast, frameworks such as [snnTorch](https://github.com/jeshraghian/snntorch), [Norse](https://github.com/norse/norse), Sinabs or Rockpool are very flexible when it comes to defining custom neuron models, but that flexibility comes at a cost of slower computation. SpikingJelly also supports a conventional PyTorch GPU backend with which it's possible to define neuron models more flexibly. Such implementations are also much easier to maintain, as relying on the extensive testing of PyTorch means that it will likely work on a given machine configuration.
+The first figure shows results for a small 512 neuron network. Overall, [SpikingJelly](https://github.com/fangwei123456/spikingjelly) is the fastest when using the CuPy backend, at just 1.49ms for both forward and backward call. The libraries that use an implementation of [EXODUS](https://www.frontiersin.org/articles/10.3389/fnins.2023.1110444/full) ([Sinabs EXODUS](https://github.com/synsense/sinabs-exodus) / [Rockpool EXODUS](https://rockpool.ai/reference/_autosummary/nn.modules.LIFExodus.html?)) or [SLAYER](https://proceedings.neurips.cc/paper_files/paper/2018/hash/82f2b308c3b01637c607ce05f52a2fed-Abstract.html) ([Lava DL](https://github.com/lava-nc/lava-dl)) equally benefit from custom CUDA code and vectorization across the time dimension in both forward and backward passes. It is noteworthy that such custom implementations exist for specific neuron models (such as the LIF under test), but not for arbitrary neuron models. On top of that, custom CUDA/CuPy backend implementations need to be compiled and then it is up to the maintainer to test it on different systems. Networks that are implemented in SLAYER, EXODUS or SpikingJelly with a CuPy backend cannot be executed on a CPU (unless converted).
+In contrast, frameworks such as [snnTorch](https://github.com/jeshraghian/snntorch), [Norse](https://github.com/norse/norse), [Sinabs](https://sinabs.ai) or [Rockpool](https://rockpool.ai) are very flexible when it comes to defining custom neuron models, but that flexibility comes at a cost of slower computation. SpikingJelly also supports a conventional PyTorch GPU backend with which it's possible to define neuron models more flexibly. Such implementations are also much easier to maintain, as relying on the extensive testing of PyTorch means that it will likely work on a given machine configuration.

 ![Comparison of time taken for forward and backward passes in different frameworks, for 4k neurons.](framework-benchmarking-4k.png)
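The paragraph touched by this patch contrasts SpikingJelly's CuPy backend (fast, CUDA-only) with its plain PyTorch backend (flexible, runs anywhere). As a rough illustration of that trade-off, here is a minimal sketch using SpikingJelly's `activation_based` API; the tensor sizes and time steps are placeholders, not the benchmark configuration:

```python
import torch
from spikingjelly.activation_based import neuron, functional

# Flexible PyTorch backend: runs on CPU or GPU, easy to swap in custom neuron models.
lif_torch = neuron.LIFNode(tau=2.0, step_mode='m', backend='torch')

# CuPy backend: fused CUDA kernels vectorized over the time dimension, CUDA-only.
lif_cupy = neuron.LIFNode(tau=2.0, step_mode='m', backend='cupy')

x = torch.rand(100, 16, 512)   # [time, batch, features] -- placeholder sizes

out = lif_torch(x)             # works on any device
functional.reset_net(lif_torch)

if torch.cuda.is_available():
    out = lif_cupy.to('cuda')(x.to('cuda'))  # requires a CUDA device and CuPy installed
    functional.reset_net(lif_cupy)
```

The `backend='cupy'` variant mirrors the note in the post that CuPy/CUDA-accelerated networks cannot be executed on a CPU unless converted back to a plain PyTorch implementation.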