Releases: lucidrains/x-transformers
0.0.18
add memory tokens, from the Memory Transformers paper
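Memory tokens are learned vectors prepended to the input sequence so every attention layer can read from and write to them. A minimal numpy sketch of the idea (shapes and the prepend/strip steps only; in the library this is a config flag on the wrapper, and the memory vectors are trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, num_mem = 10, 16, 4

x = rng.normal(size=(seq_len, dim))    # token embeddings
mem = rng.normal(size=(num_mem, dim))  # learned memory tokens (trained parameters)

# prepend the memory tokens so attention layers can attend to them
h = np.concatenate([mem, x], axis=0)   # (num_mem + seq_len, dim)

# ... h would pass through the attention layers here ...

# strip the memory tokens off before producing logits
out = h[num_mem:]
assert out.shape == (seq_len, dim)
```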
0.0.17
add all-attention memory key/values, set with `attn_num_mem_kv = {int}`
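Memory key/values are learned key and value vectors concatenated onto each attention layer's keys and values, giving every query extra slots to attend to regardless of input content. A numpy sketch of one head (the `attn_num_mem_kv` setting above controls how many such slots the library adds; the mechanics shown here are an assumption based on the paper, not the library's exact code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
seq, dim_head, num_mem_kv = 6, 8, 2

q = rng.normal(size=(seq, dim_head))
k = rng.normal(size=(seq, dim_head))
v = rng.normal(size=(seq, dim_head))

# learned memory key/values, independent of the input (trained parameters)
mem_k = rng.normal(size=(num_mem_kv, dim_head))
mem_v = rng.normal(size=(num_mem_kv, dim_head))

k = np.concatenate([mem_k, k], axis=0)  # (num_mem_kv + seq, dim_head)
v = np.concatenate([mem_v, v], axis=0)

attn = softmax(q @ k.T / np.sqrt(dim_head))  # (seq, num_mem_kv + seq)
out = attn @ v
assert out.shape == (seq, dim_head)
```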
0.0.16
add ability to use entmax15 for sparse attention
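Entmax-1.5 (Peters et al., 2019) replaces softmax with a mapping that can assign exactly zero probability to some keys, making the attention distribution sparse. A self-contained sketch computing it by bisection on the threshold tau (the library delegates to the `entmax` package rather than doing this by hand):

```python
import numpy as np

def entmax15(z, n_iter=50):
    """Entmax with alpha = 1.5: p_i = max(0, z_i/2 - tau)**2, with tau
    chosen by bisection so the probabilities sum to 1."""
    z = np.asarray(z, dtype=float) / 2.0
    lo, hi = z.max() - 1.0, z.max()  # sum >= 1 at lo, sum == 0 at hi
    for _ in range(n_iter):
        tau = (lo + hi) / 2.0
        s = np.square(np.maximum(z - tau, 0.0)).sum()
        if s < 1.0:
            hi = tau
        else:
            lo = tau
    return np.square(np.maximum(z - (lo + hi) / 2.0, 0.0))

p = entmax15([2.0, 1.0, -1.0])
# note the exact zero on the low-scoring entry, unlike softmax
```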
0.0.15
bump
0.0.14
add talking heads feature, turned on with `attn_talking_heads = True`
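Talking heads (Shazeer et al., "Talking-Heads Attention") inserts learned head-mixing projections around the softmax, letting information flow between attention heads. A numpy sketch of the two mixing steps (the mixing matrices would be trained parameters; this is an illustration of the technique, not the library's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)
heads, seq, dim_head = 4, 5, 8

q = rng.normal(size=(heads, seq, dim_head))
k = rng.normal(size=(heads, seq, dim_head))
v = rng.normal(size=(heads, seq, dim_head))

# learned head-mixing matrices (trained parameters)
pre_softmax_proj = rng.normal(size=(heads, heads))
post_softmax_proj = rng.normal(size=(heads, heads))

logits = q @ k.transpose(0, 2, 1) / np.sqrt(dim_head)  # (heads, seq, seq)

# mix logits across heads before the softmax...
logits = np.einsum('hij,hg->gij', logits, pre_softmax_proj)
attn = softmax(logits)
# ...and mix the attention weights across heads after it
attn = np.einsum('hij,hg->gij', attn, post_softmax_proj)

out = attn @ v
assert out.shape == (heads, seq, dim_head)
```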
0.0.12
complete encoder / decoder
0.0.11
add T5 relative positional bias
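T5's relative positional bias adds a learned scalar, indexed by the relative distance between query and key, to the attention logits before the softmax. A simplified numpy sketch that clips distances instead of using T5's log-spaced buckets (the bucketing scheme and parameter sharing here are simplifications, not the library's exact scheme):

```python
import numpy as np

def relative_bias(seq_len, max_dist=8):
    """Simplified T5-style bias: one learned scalar per clipped relative
    distance, shared across all query/key pairs at that distance."""
    rng = np.random.default_rng(3)
    table = rng.normal(size=(2 * max_dist + 1,))  # learned parameters
    pos = np.arange(seq_len)
    rel = pos[None, :] - pos[:, None]             # key position - query position
    rel = np.clip(rel, -max_dist, max_dist)
    return table[rel + max_dist]                  # (seq_len, seq_len)

bias = relative_bias(10)
# usage: logits = q @ k.T / np.sqrt(dim_head) + bias
assert bias.shape == (10, 10)
```

Because the bias depends only on relative distance, it is translation invariant: `bias[0, 1] == bias[3, 4]`.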
0.0.10
simplify returning of embeddings or logits
0.0.9
rename layer blocks to attn layers
0.0.7
compute logits by multiplying with the token embedding weight by default (tied input/output embeddings)
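Tying the output projection to the token embedding (Press & Wolf, 2016) reuses the embedding matrix transposed to produce logits, saving a vocab-by-dim parameter matrix. A numpy sketch of the tied projection:

```python
import numpy as np

rng = np.random.default_rng(4)
vocab, dim, seq = 100, 16, 5

emb = rng.normal(size=(vocab, dim))       # token embedding matrix

tokens = rng.integers(0, vocab, size=seq)
x = emb[tokens]                           # embed: (seq, dim)

# ... x would pass through the transformer layers here ...

# tied output projection: reuse the embedding weight for the logits
logits = x @ emb.T                        # (seq, vocab)
assert logits.shape == (seq, vocab)
```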