Release 0.4.0
Added
- Setting `kernel_initializer=None` now removes the dense input kernel (see the first sketch after this list). (#40)
- The `keras_lmu.LMUFFT` layer now supports `memory_d > 1`. `keras_lmu.LMU` now uses this implementation for all values of `memory_d` when the feedforward conditions are satisfied (no hidden-to-memory or memory-to-memory connections, and the sequence length is not `None`); a sketch of such a configuration follows this list. (#40)
- Added the `trainable_theta` option, which allows the `theta` parameter to be learned during training. (#41)
- Added the `discretizer` option, which controls the method used to solve for the `A` and `B` LMU matrices. This is mainly useful in combination with `trainable_theta=True`, where setting `discretizer="euler"` may improve training speed (possibly at the cost of some accuracy); a combined sketch follows this list. (#41)
- The `keras_lmu.LMUFFT` layer can now use raw convolution internally (as opposed to FFT-based convolution), exposed through the new `conv_mode` option. The new `truncate_ir` option allows truncating the impulse response when running in a raw convolution mode, for efficiency. Whether FFT-based or raw convolution is faster depends on the specific model, hardware, and amount of truncation (sketch below). (#42)
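For illustration, a minimal sketch of the `kernel_initializer=None` behavior; the sizes used here (`order`, `theta`, the hidden cell width) are arbitrary, not prescribed by this release:

```python
import tensorflow as tf
import keras_lmu

# With kernel_initializer=None, no dense input kernel is created, so the
# inputs reach the memory component without a learned encoding.
layer = keras_lmu.LMU(
    memory_d=1,
    order=256,
    theta=784,
    hidden_cell=tf.keras.layers.SimpleRNNCell(212),
    kernel_initializer=None,
)
```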
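A hypothetical configuration satisfying the feedforward conditions, under which `keras_lmu.LMU` would use the feedforward implementation internally (all sizes arbitrary):

```python
import tensorflow as tf
import keras_lmu

layer = keras_lmu.LMU(
    memory_d=4,              # memory_d > 1 is now also supported on this path
    order=64,
    theta=128,
    hidden_cell=tf.keras.layers.SimpleRNNCell(100),
    hidden_to_memory=False,  # condition: no hidden-to-memory connection
    memory_to_memory=False,  # condition: no memory-to-memory connection
)

# Condition: the sequence length must be known (not None)
inputs = tf.keras.Input(shape=(128, 10))  # 128 timesteps, 10 input dimensions
outputs = layer(inputs)
```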
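A sketch combining the `trainable_theta` and `discretizer` options as suggested above (sizes arbitrary):

```python
import tensorflow as tf
import keras_lmu

layer = keras_lmu.LMU(
    memory_d=1,
    order=32,
    theta=64,
    hidden_cell=tf.keras.layers.SimpleRNNCell(50),
    trainable_theta=True,  # theta becomes a learnable parameter
    discretizer="euler",   # may improve training speed when theta is trainable,
                           # possibly at the cost of some accuracy
)
```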
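A sketch of the `conv_mode` and `truncate_ir` options, written against the `keras_lmu.LMUFeedforward` name introduced in the Changed section below; the specific `truncate_ir` value and the truncation semantics described in the comment are assumptions on my part:

```python
import tensorflow as tf
import keras_lmu

layer = keras_lmu.LMUFeedforward(
    memory_d=1,
    order=128,
    theta=256,
    hidden_cell=tf.keras.layers.SimpleRNNCell(100),
    conv_mode="raw",   # raw convolution instead of the default FFT-based mode
    truncate_ir=1e-4,  # truncate the small tail of the impulse response
                       # for efficiency (applies to raw convolution mode)
)
```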
Changed
- The `A` and `B` matrices are now stored as constants instead of non-trainable variables. This can improve training/inference speed, but it means that saved weights from previous versions will be incompatible. (#41)
- Renamed `keras_lmu.LMUFFT` to `keras_lmu.LMUFeedforward` (migration sketch below). (#42)
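A migration sketch for the rename; the argument values are arbitrary, and `hidden_cell=None` simply omits the hidden component:

```python
import keras_lmu

# Pre-0.4.0:
# layer = keras_lmu.LMUFFT(memory_d=1, order=128, theta=256, hidden_cell=None)

# 0.4.0 and later:
layer = keras_lmu.LMUFeedforward(memory_d=1, order=128, theta=256, hidden_cell=None)
```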
Fixed
- Fixed dropout support in TensorFlow 2.6. (#42)