
GeNN v4.7.0

@neworderofjamie neworderofjamie released this 11 Feb 17:23
· 2196 commits to master since this release
c86cefe

Release Notes for GeNN v4.7.0

This release adds a number of significant new features to GeNN, as well as fixing a number of bugs identified since the 4.6.0 release.

User Side Changes

  1. While a wide range of convolution-like connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, its performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient alternative, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (#484).
  2. Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (#478).
  3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic ones. These updates can now be made using the $(addToPre,...) function from presynaptic update code, and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (#479).
  4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be set manually and MSBuild projects updated to link to the correct DLL (#476).
  5. Neuron code can now sample the binomial distribution using $(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (#498).
  6. In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths (#500).
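The connectivity pattern behind InitToeplitzConnectivitySnippet::Conv2D can be sketched in plain Python. This is an illustration of the pattern only, not GeNN code: it assumes a single channel, stride 1 and zero ('same') padding, and simply enumerates which connections a 2D convolution implies.

```python
# Illustrative sketch (plain Python, not GeNN code) of the connectivity
# pattern InitToeplitzConnectivitySnippet::Conv2D implements: a 2D
# convolution in which every connection's weight is drawn from one shared
# kernel, so the weight matrix has a Toeplitz-like repeating structure.
def conv2d_connections(ih, iw, kh, kw):
    """Map each presynaptic neuron (pixel of an ih x iw input) to the
    postsynaptic neurons it drives under a stride-1, zero-padded ('same')
    kh x kw convolution, with the index of the kernel element used."""
    pad_h, pad_w = kh // 2, kw // 2
    conns = []  # (pre_idx, post_idx, kernel_idx) triples
    for in_y in range(ih):
        for in_x in range(iw):
            pre = in_y * iw + in_x
            for ky in range(kh):
                for kx in range(kw):
                    out_y = in_y + pad_h - ky
                    out_x = in_x + pad_w - kx
                    if 0 <= out_y < ih and 0 <= out_x < iw:
                        conns.append((pre, out_y * iw + out_x, ky * kw + kx))
    return conns

conns = conv2d_connections(4, 4, 3, 3)
# 100 connections in total, but only 9 distinct kernel weights are ever
# referenced -- the redundancy that a shared kernel exploits
print(len(conns), len({k for _, _, k in conns}))  # -> 100 9
```

Because every connection's weight is one of only kh * kw kernel values, storing and processing the full weight matrix (as sparse or procedural connectivity must) wastes memory and bandwidth that the Toeplitz representation avoids.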
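The data flow that $(addToPre,...) enables can also be sketched in plain Python. The names weights, post_isyn and pre_isyn and the feedback value are invented for this sketch and are not GeNN identifiers:

```python
# Illustrative sketch (plain Python, not GeNN code) of the data flow
# $(addToPre,...) enables. The names weights, post_isyn, pre_isyn and
# the feedback value are invented for this sketch, not GeNN identifiers.
def propagate_spike(pre_idx, weights, post_isyn, pre_isyn, feedback=0.1):
    """Deliver one presynaptic spike. Presynaptic update code runs once
    per synapse; alongside the usual $(addToInSyn, ...) into the
    postsynaptic target, $(addToPre, ...) now also accumulates into an
    additional input variable on the spiking (presynaptic) neuron,
    whose target is chosen with SynapseGroup::setPreTargetVar."""
    for post_idx, w in weights[pre_idx]:
        post_isyn[post_idx] += w       # $(addToInSyn, w)
        pre_isyn[pre_idx] += feedback  # $(addToPre, feedback)

weights = {0: [(0, 0.5), (1, 0.25)]}  # neuron 0 projects to two targets
post_isyn = [0.0, 0.0]
pre_isyn = [0.0]
propagate_spike(0, weights, post_isyn, pre_isyn)
print(post_isyn, pre_isyn)  # -> [0.5, 0.25] [0.2]
```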
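The binomial sampling exposed by $(gennrand_binomial) and InitVarSnippet::Binomial can be illustrated with a plain-Python sketch that draws samples by summing Bernoulli trials (GeNN's actual sampling algorithm may differ):

```python
import random

# Illustrative sketch (plain Python, not GeNN code) of what
# $(gennrand_binomial) and InitVarSnippet::Binomial provide: per-neuron
# initial values drawn from a binomial(n, p) distribution. Sampling here
# is by summing Bernoulli trials; GeNN's implementation may differ.
def binomial_sample(rng, n, p):
    """Count successes in n independent trials of probability p."""
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(1234)
# one independent draw per neuron, as the variable initialiser would make
values = [binomial_sample(rng, n=20, p=0.25) for _ in range(1000)]
assert all(0 <= v <= 20 for v in values)
print(sum(values) / len(values))  # close to n * p = 5.0
```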

Bug fixes:

  1. Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D with stride > 1, which caused incorrect connectivity to be instantiated, as well as crashes when this snippet was used to generate sparse connectivity (#489, #491).
  2. Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created (#494).
  3. Fixed issue where precision wasn't being correctly applied to the initialisation of neuron additional input variables and sparse connectivity row build state variables, meaning double-precision code could unintentionally be generated (#489).