Releases: genn-team/genn
GeNN 5.0.0
Release Notes for GeNN 5.0.0
This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it slightly more liberal by allowing PyGeNN to be used as a component in closed-source systems.
This release breaks backward compatibility, so all models are likely to require updating. However, the documentation has also been completely redone and the pre-release version, which includes a guide to updating existing models, is available at https://genn-team.github.io/genn/documentation/5/.
New features
- GeNN has a whole new code generator. This gives the user much better quality error messages about syntax and typing errors in code strings and will enable us to do smarter optimisations in future, but it does restrict user code to a well-defined subset of C99 (#595)
- As well as simulation kernels, GeNN 4.x generated large amounts of boilerplate for allocating memory and copying data between host and device. This resulted in very long compile times with large models. In GeNN 5, we have replaced this with a new runtime which reduces compilation time by around 10x on very large models (#602)
- In GeNN 4.X, parameters were always of "scalar" type. This resulted in poor code generation when they were used to store integers. Parameters now have types and can also be made dynamic, allowing them to be changed at runtime (see the sketch after this list) (#607)
- Weight update models now have postsynaptic spike-like events, allowing a wider class of learning rules to be implemented (#609)
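As an illustration of the dynamic parameter feature, a minimal PyGeNN sketch follows. Only the existence of typed, dynamic parameters is taken from these notes; the method names set_param_dynamic and set_dynamic_param_value, the built-in Izhikevich model and all parameter values are assumptions, so check the GeNN 5 documentation linked above for the exact API.

```python
# Hedged sketch of a GeNN 5 dynamic parameter; the set_param_dynamic and
# set_dynamic_param_value method names are assumptions, not confirmed by
# these release notes.
from pygenn import GeNNModel

model = GeNNModel("float", "dynamic_param_example")

# Built-in Izhikevich neuron model (parameters a, b, c, d; state V, U)
pop = model.add_neuron_population(
    "pop", 100, "Izhikevich",
    {"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0},
    {"V": -65.0, "U": -13.0})

# Mark a parameter as dynamic so it can be changed after the model is built
pop.set_param_dynamic("a")

model.build()
model.load()

# Update the dynamic parameter at runtime and continue simulating
pop.set_dynamic_param_value("a", 0.1)
for _ in range(100):
    model.step_time()
```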
Bug fixes
- PyGeNN only really works with precision set to float (#289)
- Refine global - register - global transfers (#55)
- Avoid creating unused variables (#47)
- PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
- assign_external_pointer overrides should use explicitly sized integer types (#288)
- Repeat of spike-like-event conditions in synapse code flawed (#379)
- Dangerous conflict potential of user and system code (#385)
- Accessing queued pre and postsynaptic weight update model variables (#402)
- Linker-imposed model complexity limit on Windows (#408)
- 'error: duplicate parameter name' when running the ./generate_run test in userproject/Izh_sparse_project (#416)
- Issues with merging synapse groups where pre or postsynaptic neuron parameters are referenced (#566)
- Presynaptic Synapse Variable undefined in Event Threshold Condition (#594)
GeNN 5.0.0 RC1
Release Notes for GeNN 5.0.0
This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it slightly more liberal by allowing PyGeNN to be used as a component in closed-source systems.
This release breaks backward compatibility, so all models are likely to require updating. However, the documentation has also been completely redone and the pre-release version, which includes a guide to updating existing models, is available at https://genn-team.github.io/genn/documentation/5/.
User Side Changes
- Named parameters (#493)
- Transpiler (#595)
- Variable dimensions (#598)
- Dynamic loader (#602)
- Replace implicit neuron variable references with explicit ones (#604)
- Dynamic and typed (#607)
- Fused event generation and postsynaptic spike-like events (#609)
- Single PSM code string (#612)
Bug fixes
- PyGeNN only really works with precision set to float (#289)
- Refine global - register - global transfers (#55)
- Avoid creating unused variables (#47)
- PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
- assign_external_pointer overrides should use explicitly sized integer types (#288)
- Repeat of spike-like-event conditions in synapse code flawed (#379)
- Dangerous conflict potential of user and system code (#385)
- Accessing queued pre and postsynaptic weight update model variables (#402)
- Linker-imposed model complexity limit on Windows (#408)
- 'error: duplicate parameter name' when running the ./generate_run test in userproject/Izh_sparse_project (#416)
- Issues with merging synapse groups where pre or postsynaptic neuron parameters are referenced (#566)
- Presynaptic Synapse Variable undefined in Event Threshold Condition (#594)
GeNN 4.9.0
Release Notes for GeNN 4.9.0
This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.8.1 release.
It is intended as the last release for GeNN 4.X.X.
Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 5.
User Side Changes
- Implemented pygenn.GeNNModel.unload to manually unload GeNN models, improving control in scenarios such as parameter sweeping where multiple PyGeNN models need to be instantiated (see the sketch after this list) (#581).
- Added Extra Global Parameter references to custom updates (see Defining Custom Updates, Defining your own custom update model and Extra Global Parameter references) (#583).
- Expose $(num_pre), $(num_post) and $(num_batches) to all user code strings (#576)
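The following is a minimal sketch of the parameter-sweep pattern that pygenn.GeNNModel.unload enables, as referenced in the list above. Only the build/load/unload calls come from the release notes; the built-in Izhikevich model, parameter values and sweep structure are illustrative assumptions.

```python
# Hedged sketch: sweeping a parameter by building, loading, simulating and
# then unloading a sequence of PyGeNN models (unload is new in GeNN 4.9.0).
from pygenn.genn_model import GeNNModel

for i, a in enumerate([0.02, 0.05, 0.1]):
    model = GeNNModel("float", f"sweep_{i}")
    model.dT = 0.1

    # Built-in Izhikevich neuron model (parameters a, b, c, d; state V, U)
    model.add_neuron_population(
        "pop", 1000, "Izhikevich",
        {"a": a, "b": 0.2, "c": -65.0, "d": 8.0},
        {"V": -65.0, "U": -13.0})

    model.build()
    model.load()

    for _ in range(1000):
        model.step_time()

    # Free device memory and unload the generated library before the next
    # model in the sweep is instantiated
    model.unload()
```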
Bug fixes
- Fixed handling of indices specified as sequence types other than numpy arrays in pygenn.SynapseGroup.set_sparse_connections (#597).
- Fixed bug in CUDA constant cache estimation which could cause nvLink errors in models with learning rules that required previous spike times (#589).
- Fixed longstanding issue with setuptools that meant PyGeNN sometimes had to be built twice to obtain a functional version. Massive thanks to @erolm-a for contributing this fix (#591).
GeNN 4.8.1
Release Notes for GeNN v4.8.1
This release fixes a number of issues found in the 4.8.0 release and also includes some optimisations which could be very beneficial for some classes of model.
Bug fixes
- Fixed bug relating to merging populations with variable references pointing to variables with different access duplication modes (#557).
- Fixed infinite loop that could occur in code generator if a bracket was missed calling a GeNN function in a code snippet (#559).
- Fixed bug that meant batched models which required previous spike times failed to compile (#565).
- Fixed bug with DLL-searching logic on Windows which meant CUDA backend failed to load on some systems (#579).
- Fixed a number of corner cases in the handling of VarAccessDuplication::SHARED_NEURON variables (#578).
Optimisations
- When building models with large numbers of populations using the CUDA backend, compile times could be very large. This was at least in part due to over-verbose error handling code being generated. CodeGenerator::CUDA::Preferences::generateSimpleErrorHandling enables the generation of much more minimal error-handling code and can speed up compilation by up to 10x (#554).
- Turned on the multi-processor compilation option in Visual Studio solutions, which speeds up compilation of GeNN by a significant amount (#555).
- Fusing postsynaptic models was previously overly-conservative meaning large, highly-connected models using a postsynaptic model with additional state variables would perform poorly. These checks have been relaxed and brought into line with those used for fusing pre and postsynaptic updates coming from weight update models (#567).
GeNN 4.8.0
Release Notes for GeNN 4.8.0
This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.7.1 release.
User Side Changes
- Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables (#524).
- Custom updates extended to perform reduction operations across neurons as well as batches (#539).
- PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env (#471).
- GeNN now comes with a fully-functional Docker image and releases will be distributed via Dockerhub as well as existing channels. Special thanks to @Stevinson, @jamesturner246 and @bdevans for their help on this (see the README for more information) (#548 and #550).
Bug fixes
- Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates (#520).
- Added missing parameter to the PyGeNN pygenn.genn_model.create_custom_postsynaptic_class function so postsynaptic models with extra global parameters can be created (#522).
- Correctly substitute 0 for $(batch) when using single-threaded CPU backend (#523).
- Fixed issues building PyGeNN with Visual Studio 2017 (#533).
- Fixed bug where model might not be rebuilt if sparse connectivity initialisation snippet was changed (#547).
- Fixed longstanding bug in the gen_input_structured tool, used by some userprojects, where data was written outside of array bounds (#551).
- Fixed issue with debug mode of genn-buildmodel.bat when used with the single-threaded CPU backend (#551).
- Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated (#540).
GeNN 4.7.1
Release Notes for GeNN v4.7.1
This release fixes a plethora of issues found in the 4.7.0 release and also includes an optimisation which could be very beneficial for some classes of model.
Bug fixes
- Fixed issue meaning that manual changes to the maximum synaptic row length (via SynapseGroup::setMaxConnections) were not detected and the model might not be rebuilt. Additionally, reduced the strictness of checks in SynapseGroup::setMaxConnections and SynapseGroup::setMaxSourceConnections so maximum synaptic row and column lengths can be overridden when sparse connectivity initialisation snippets are in use, as long as the overriding values are larger than those provided by the snippet (#515).
- Fixed issue preventing PyGeNN being built on Python 2.7 (#510).
- Fixed issue meaning that inSyn, denDelayInSyn and revInSynOutSyn variables were not properly zeroed during initialisation (or reinitialisation) of batched models (#509).
- Fixed issue where initialisation code for synapse groups could be incorrectly merged (#508).
- Fixed issue when using custom updates on batched neuron group variables (#507).
- Fixed issue in spike recording system where some permutations of kernel and neuron population size would result in memory corruption (#502).
- Fixed (long-standing) issue where LLDB wasn't correctly invoked when running genn-buildmodel.sh -d on Mac (#518).
- Fixed issue where sparse initialisation kernels weren't correctly generated if they were only required to initialise custom updates (#517).
Optimisations
- Using synapse dynamics with sparse connectivity previously had very high memory requirements and poor performance. Both issues have been solved with a new algorithm (#511).
GeNN v4.7.0
Release Notes for GeNN v4.7.0
This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.6.0 release.
User Side Changes
- While a wide range of convolutional-type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (#484).
- Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (#478).
- Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic ones. These updates can now be made using the $(addToPre,...) function from presynaptic update code and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (#479).
- On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be set manually and MSBuild projects updated to link to the correct DLL (#476).
- Neuron code can now sample the binomial distribution using $(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (see the sketch after this list) (#498).
- In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths (#500).
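Below is a minimal sketch of the binomial variable initialisation mentioned above. The InitVarSnippet::Binomial snippet and $(gennrand_binomial) come from these notes, but the snippet's parameter names ("n" and "p"), the choice of an Izhikevich population and all values are assumptions.

```python
# Hedged sketch: initialising a state variable from a binomial distribution
# via init_var; the "n" and "p" parameter names are assumptions.
from pygenn.genn_model import GeNNModel, init_var

model = GeNNModel("float", "binomial_example")

# Draw each neuron's initial V from Binomial(n=20, p=0.5); purely illustrative
v_init = init_var("Binomial", {"n": 20, "p": 0.5})

model.add_neuron_population(
    "pop", 100, "Izhikevich",
    {"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0},
    {"V": v_init, "U": -13.0})

model.build()
model.load()
```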
Bug fixes:
- Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1, which caused incorrect connectivity to be instantiated as well as crashes when this snippet was used to generate sparse connectivity (#489, #491).
- Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created (#494).
- Fixed issue where precision wasn't being correctly applied to the initialisation of neuron additional input variables and sparse connectivity row build state variables, meaning double-precision code could unintentionally be generated (#489).
GeNN 4.6.0
Release Notes for GeNN v4.6.0
This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN.
It also includes a number of bug fixes that have been identified since the 4.5.1 release.
User Side Changes
- As well as performing arbitrary updates and calculating transposes of weight update model variables, custom updates can now be used to implement 'reductions' so, for example, duplicated variables can be summed across model batches (#447, #449).
- Previously, to connect a synapse group to a postsynaptic neuron's additional input variable, a custom postsynaptic model had to be used. SynapseGroup::setPSTargetVar and pygenn.SynapseGroup.ps_target_var can now be used to set the target variable of any synapse group (see the sketch after this list) (#458).
- Previously, weight update model pre and postsynaptic updates and variables were duplicated in the neuron kernel. This was very inefficient and they can now be 'fused' together by setting ModelSpec::setFusePrePostWeightUpdateModels (#461).
- PyGeNN now shares a version with GeNN itself and this is accessible via pygenn.__version__ (#472).
- The names of populations and variables are now validated to prevent code with invalid variable names being generated (#443, #448).
- As well as being able to read the current spikes via the pygenn.NeuronGroup.current_spikes property, they can now also be set (#445).
- Spike-like events were previously not exposed to PyGeNN. These can now be pushed and pulled via pygenn.NeuronGroup.pull_spike_events_from_device, pygenn.NeuronGroup.push_spike_events_to_device, pygenn.NeuronGroup.pull_current_spike_events_from_device and pygenn.NeuronGroup.push_current_spike_events_to_device; and accessed via pygenn.NeuronGroup.current_spike_events (#469).
- Added additional error handling to prevent properties of pygenn.GeNNModel that can only be set before the model is built from being set afterwards (#464).
- Variable references can now reference custom update variables (#446).
- Updated the default parameters used in the MBody1 example to be more sensible (#473).
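A sketch of retargeting a synapse group's postsynaptic input with pygenn.SynapseGroup.ps_target_var, as mentioned in the list above. The custom neuron model, its additional input variable "Isyn2" and all other model details are illustrative assumptions; only the ps_target_var property itself is taken from the release notes.

```python
# Hedged sketch: routing a synapse group's postsynaptic current into a named
# additional input variable via ps_target_var (new in GeNN 4.6.0).
from pygenn.genn_model import GeNNModel, create_custom_neuron_class

# Custom neuron exposing a second input variable, "Isyn2", alongside Isyn
two_input_neuron = create_custom_neuron_class(
    "two_input_neuron",
    var_name_types=[("V", "scalar")],
    additional_input_vars=[("Isyn2", "scalar", 0.0)],
    sim_code="$(V) += ($(Isyn) + 2.0 * $(Isyn2)) * DT;",
    threshold_condition_code="$(V) >= 1.0",
    reset_code="$(V) = 0.0;")

model = GeNNModel("float", "ps_target_var_example")
pre = model.add_neuron_population("pre", 10, "SpikeSource", {}, {})
post = model.add_neuron_population("post", 10, two_input_neuron,
                                   {}, {"V": 0.0})

syn = model.add_synapse_population(
    "syn", "DENSE_INDIVIDUALG", 0, pre, post,
    "StaticPulse", {}, {"g": 0.1}, {}, {},
    "DeltaCurr", {}, {})

# Deliver this synapse group's input to Isyn2 instead of the default Isyn
syn.ps_target_var = "Isyn2"

model.build()
model.load()
```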
Bug fixes:
- Fixed an issue that prevented genn-buildmodel.sh from correctly handling paths with spaces (#444)
- Fixed multiple issues with sparse synapse index narrowing (#460)
- Fixed issue where, if GeNN is run in a locale where ',' is used as the decimal point, some generated code was incorrectly formatted (#468).
- Fixed several small issues preventing GeNN from building with GCC 5 and Visual C++ 2017 (#462)
GeNN 4.5.1
Release Notes for GeNN v4.5.1 (PyGeNN 0.4.6)
This release fixes several small issues found in the 4.5.0 release.
Bug fixes:
- Fixed the cause of the warnings about memory leaks which were generated when sparse connectivity initialisation snippets were defined in PyGeNN (#438)
- Fixed bug in model change detection which resulted in the memory usage estimate increasing every time the model subsequently changed (#440)
- Fixed several bugs affecting the implementation of custom update models in CUDA and OpenCL (#439)
GeNN 4.5.0
Release Notes for GeNN v4.5.0 (PyGeNN 0.4.5)
This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN.
It also includes a number of bug fixes that have been identified since the 4.4.0 release.
User Side Changes
- When performing inference on datasets, batching helps fill the GPU and improve performance. This could previously be achieved using "master" and "slave" synapse populations but this didn't scale well. Models can now be automatically batched using ModelSpec::setBatchSize or pygenn.genn_model.GeNNModel.batch_size (see the sketch after this list) (#392).
- As well as more typical neuron, weight update, postsynaptic and current source models, you can now define custom update models which define a process that can be applied to any variable in the model. These can be used for e.g. resetting state variables or implementing optimisers for gradient-based learning (#405).
- Model compilation and CUDA block size optimisation could be rather slow in previous versions. More work is still required in this area, but code will now only be regenerated if the model has actually changed and block sizes will only be re-optimised for modules which have changed. Rebuilding can be forced with the -f flag to genn-buildmodel or the force_rebuild flag to pygenn.GeNNModel.build (#427, #430).
- Binary PyGeNN wheels are now always built with Python 3 (#401).
- To aid debugging, debug versions of PyGeNN can now be built (#396).
- OpenCL performance on AMD devices is improved - this has only been tested on a Radeon RX 5700 XT so any feedback from users with other devices would be much appreciated (#390).
- Exceptions raised by GeNN are now correctly passed through PyGeNN to Python (#433).
- Spike times (and spike-like event times) can now be accessed, pushed and pulled from PyGeNN (see pygenn.genn_groups.NeuronGroup.spike_times, pygenn.genn_groups.NeuronGroup.push_spike_times_to_device and pygenn.genn_groups.NeuronGroup.pull_spike_times_from_device) (#432).
- On models where postsynaptic merging isn't enabled, the postsynaptic input current from a synapse group can now be accessed from PyGeNN via pygenn.genn_groups.SynapseGroup.in_syn; and pushed and pulled with pygenn.genn_groups.SynapseGroup.push_in_syn_to_device and pygenn.genn_groups.SynapseGroup.pull_in_syn_from_device respectively (#432).
- Accessing extra global parameters from PyGeNN was previously rather cumbersome. Now, you don't need to manually pass a size to e.g. pygenn.genn_groups.NeuronGroup.pull_extra_global_param_from_device and, if you are using non-pointer extra global parameters, you no longer need to call e.g. pygenn.genn_groups.NeuronGroup.set_extra_global_param before loading your model (#415).
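A minimal sketch of the automatic batching and forced-rebuild features described in the list above. The batch_size property and force_rebuild flag are taken from the release notes; the Izhikevich population and all values are illustrative assumptions.

```python
# Hedged sketch: batched inference and forced rebuilds (new in GeNN 4.5.0);
# the model itself is an illustrative assumption.
from pygenn.genn_model import GeNNModel

model = GeNNModel("float", "batching_example")

# Duplicate the model 32 times so a whole batch can be simulated in parallel
model.batch_size = 32

model.add_neuron_population(
    "pop", 256, "Izhikevich",
    {"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0},
    {"V": -65.0, "U": -13.0})

# Code is only regenerated when the model has changed; force_rebuild
# overrides this change detection
model.build(force_rebuild=True)
model.load()

for _ in range(100):
    model.step_time()
```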
Bug fixes:
- cudaFree was incorrectly called twice on zero-copy variables, causing crashes on exit (#395)
- Built-in Izhikevich neurons incorrectly used an auto-refractory mechanism, limiting their maximum firing rate (#404)
- On Windows, 64-bit version of compiler is now always used (#407)
- Fixed issues with CUDA 9.0 and 9.1 introduced in v4.4.0 release (#412)
- Fixed race condition relating to accessing previous spike times (#414)
- Fixed bug in column-wise connectivity initialisation (#419)
- Fixed issue with the binomialInverseCDF function (used for calculating the maximum row length of probabilistic connectivity) which could fail when using some parameter combinations (#426)