TensorLayer is a deep learning and reinforcement learning library built on top of TensorFlow. It provides rich neural layers and utility functions to help researchers and engineers build real-world AI applications. TensorLayer received the Best Open Source Software Award from the ACM Multimedia Society in 2017.
- Useful links: Documentation, Examples, Chinese documentation (中文文档), Chinese book (中文书)
- [10 Apr] Load and visualize the MPII dataset in one line of code.
- [05 Apr] Release model APIs for well-known pre-trained networks.
- [18 Mar] Release experimental APIs for binary networks.
- [18 Jan] [《深度学习:一起玩转TensorLayer》](http://www.broadview.com.cn/book/5059) (Deep Learning using TensorLayer)
- [17 Dec] Release experimental APIs for distributed training (by TensorPort). See the tiny example.
- [17 Nov] Release data augmentation APIs for object detection, see tl.prepro.
- [17 Nov] Support Convolutional LSTM, see ConvLSTMLayer.
- [17 Nov] Support Deformable Convolution, see DeformableConv2dLayer.
- [17 Sep] New example Chatbot in 200 lines of code for Seq2Seq.
TensorLayer has prerequisites including TensorFlow, numpy, matplotlib and nltk (optional). For GPU support, CUDA and cuDNN are required. The simplest way to install TensorLayer is:
# for master version (Recommended)
$ pip install git+https://github.com/tensorlayer/tensorlayer.git
# for stable version
$ pip install tensorlayer
A Dockerfile is supplied for building images. Build as usual:
# for CPU version
$ docker build -t tensorlayer:latest .
# for GPU version
$ docker build -t tensorlayer:latest-gpu -f Dockerfile.gpu .
Please check documentation for detailed instructions.
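Once installed, a quick sanity check from the command line confirms the package imports cleanly (a minimal check; it assumes TensorFlow and TensorLayer finished installing without errors):

```shell
# verify that TensorLayer (and its TensorFlow dependency) import correctly
$ python -c "import tensorlayer as tl; print(tl.__version__)"
```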
Examples can be found in this folder and the GitHub topic.
- Multi-layer perceptron (MNIST) - Classification task, see tutorial_mnist_simple.py.
- Multi-layer perceptron (MNIST) - Classification using Iterator, see method1 and method2.
- Denoising Autoencoder (MNIST). Classification task, see tutorial_mnist.py.
- Stacked Denoising Autoencoder and Fine-Tuning (MNIST). Classification task, see tutorial_mnist.py.
- Convolutional Network (MNIST). Classification task, see tutorial_mnist.py.
- Convolutional Network (CIFAR-10). Classification task, see tutorial_cifar10.py and tutorial_cifar10_tfrecord.py.
- VGG 16 (ImageNet). Classification task, see tl.models.VGG16 or tutorial_vgg16.py.
- VGG 19 (ImageNet). Classification task, see tutorial_vgg19.py.
- InceptionV3 (ImageNet). Classification task, see tutorial_inceptionV3_tfslim.py.
- SqueezeNet (ImageNet). Model compression, see tl.models.SqueezeNetV1 or tutorial_squeezenet.py.
- MobileNet (ImageNet). Model compression, see tl.models.MobileNetV1 or tutorial_mobilenet.py.
- BinaryNet. Model compression, see mnist and cifar10.
- Ternary Weight Network. Model compression, see mnist and cifar10.
- DoReFa-Net. Model compression, see mnist and cifar10.
- Wide ResNet (CIFAR) by ritchieng.
- More CNN implementations of TF-Slim can be connected to TensorLayer via SlimNetsLayer.
- Spatial Transformer Networks by zsdonghao.
- U-Net for brain tumor segmentation by zsdonghao.
- Variational Autoencoder (VAE) for CelebA by yzwxx.
- Variational Autoencoder (VAE) for MNIST by BUPTLdy.
- Image Captioning - Reimplementation of Google's im2txt by zsdonghao.
- Recurrent Neural Network (LSTM). Apply multiple LSTM layers to the PTB dataset for language modeling, see tutorial_ptb_lstm.py and tutorial_ptb_lstm_state_is_tuple.py.
- Word Embedding (Word2vec). Train a word embedding matrix, see tutorial_word2vec_basic.py.
- Restore Embedding matrix. Restore a pre-trained embedding matrix, see tutorial_generate_text.py.
- Text Generation. Generate new text scripts using an LSTM network, see tutorial_generate_text.py.
- Chinese Text Anti-Spam by pakrchen.
- Chatbot in 200 lines of code for Seq2Seq.
- FastText Sentence Classification (IMDB), see tutorial_imdb_fasttext.py by tomtung.
- DCGAN (CelebA). Generating images by Deep Convolutional Generative Adversarial Networks by zsdonghao.
- Generative Adversarial Text to Image Synthesis by zsdonghao.
- Unsupervised Image to Image Translation with Generative Adversarial Networks by zsdonghao.
- Improved CycleGAN with resize-convolution by luoxier.
- Super Resolution GAN by zsdonghao.
- DAGAN: Fast Compressed Sensing MRI Reconstruction by nebulaV.
- Policy Gradient / Network (Atari Ping Pong), see tutorial_atari_pong.py.
- Deep Q-Network (Frozen lake), see tutorial_frozenlake_dqn.py.
- Q-Table learning algorithm (Frozen lake), see tutorial_frozenlake_q_table.py.
- Asynchronous Policy Gradient using TensorDB (Atari Ping Pong) by nebulaV.
- AC for discrete action space (Cartpole), see tutorial_cartpole_ac.py.
- A3C for continuous action space (Bipedal Walker), see tutorial_bipedalwalker_a3c*.py.
- DAGGER for Gym Torcs by zsdonghao.
- TRPO for continuous and discrete action space by jjkke88.
- Distributed Training. See mnist and imagenet by jorgemf.
- Merge TF-Slim into TensorLayer. tutorial_inceptionV3_tfslim.py.
- Merge Keras into TensorLayer. tutorial_keras.py.
- Data augmentation with TFRecord. Effective way to load and pre-process data, see tutorial_tfrecord*.py and tutorial_cifar10_tfrecord.py.
- Data augmentation with TensorLayer, see tutorial_image_preprocess.py.
- TensorDB by fangde, see here.
- A simple web service - TensorFlask by JoelKronander.
- Float16 half-precision model, see tutorial_mnist_float16.py.
TensorLayer provides two sets of convolutional layer APIs; see (Advanced) and (Basic) on the ReadTheDocs website.
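As a rough sketch of the difference (based on the TensorLayer 1.x API; check the linked documentation for the exact signatures in your version), the simplified API infers the filter shape for you, while the advanced API exposes the raw TensorFlow filter shape:

```python
# Sketch of the two convolutional APIs (TensorLayer 1.x style; argument
# names here follow that era's docs and may differ in other versions).
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
net = tl.layers.InputLayer(x, name='input')

# simplified API: the filter shape is inferred from n_filter / filter_size
net_simple = tl.layers.Conv2d(net, n_filter=32, filter_size=(5, 5),
                              strides=(1, 1), act=tf.nn.relu,
                              padding='SAME', name='conv1_simple')

# advanced API: the raw TensorFlow filter shape [h, w, in, out] and
# per-dimension strides are explicit, giving full low-level control
net_advanced = tl.layers.Conv2dLayer(net, act=tf.nn.relu,
                                     shape=[5, 5, 1, 32],
                                     strides=[1, 1, 1, 1],
                                     padding='SAME', name='conv1_advanced')
```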
As TensorFlow users, we have been looking for a library that can serve various development phases. Such a library should be easy for beginners, providing rich neural network implementations, examples and tutorials. Later, its APIs should allow users to leverage the powerful features of TensorFlow to get the best performance on real-world problems. In the end, the extra abstraction should not compromise TensorFlow performance, making the library suitable for production deployment. TensorLayer is a novel library that aims to satisfy these requirements. It has three key features:
- Simplicity: TensorLayer lifts the low-level dataflow abstraction of TensorFlow to high-level layers. It also provides users with abundant examples and tutorials to minimize the learning barrier.
- Flexibility: TensorLayer APIs are transparent: they do not mask TensorFlow from users, but leave numerous hooks that support diverse low-level tuning.
- Zero-cost Abstraction: TensorLayer achieves the full performance of TensorFlow.
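To make the simplicity and transparency concrete, here is a minimal multi-layer perceptron in the TensorLayer 1.x style (a sketch adapted from the MNIST tutorials above; see tutorial_mnist_simple.py for the full training loop):

```python
import tensorflow as tf
import tensorlayer as tl

# placeholders for flattened MNIST images and integer labels
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None], name='y_')

# stack high-level layers directly on the TensorFlow placeholder
network = tl.layers.InputLayer(x, name='input')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu1')
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')

# transparency: network.outputs is a plain TensorFlow tensor, so any
# native TensorFlow op (losses, optimizers, summaries) applies directly
y = network.outputs
cost = tl.cost.cross_entropy(y, y_, name='cost')
train_op = tf.train.AdamOptimizer(1e-4).minimize(cost)
```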
TensorLayer has negligible performance overhead. We benchmarked classic deep learning models using TensorLayer and native TensorFlow on a Titan X Pascal GPU. Here are the training speeds for each task:
|             | CIFAR-10      | PTB LSTM      | Word2Vec      |
|-------------|---------------|---------------|---------------|
| TensorLayer | 2528 images/s | 18063 words/s | 58167 words/s |
| TensorFlow  | 2530 images/s | 18075 words/s | 58181 words/s |
Similar to TensorLayer, Keras and TFLearn are also popular TensorFlow wrapper libraries. These libraries are comfortable to start with: they provide high-level abstractions, but mask the underlying engine from users. This makes it hard to customize model behaviors and touch the essential features of TensorFlow. Without compromising simplicity, TensorLayer APIs are generally more flexible and transparent. Users often find it easy to start with the examples and tutorials of TensorLayer, and then dive into the low-level TensorFlow APIs only when needed. TensorLayer does not create library lock-in: users can easily import models from Keras, TF-Slim and TFLearn into a TensorLayer environment.
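For instance, a TF-Slim-defined network can be wrapped as a single TensorLayer layer via SlimNetsLayer (a sketch following tutorial_inceptionV3_tfslim.py; import paths and argument names may vary across TensorFlow/TensorLayer versions):

```python
import tensorflow as tf
import tensorlayer as tl
# slim model definition shipped with TensorFlow 1.x contrib (path may vary)
from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3

x = tf.placeholder(tf.float32, [None, 299, 299, 3])
net_in = tl.layers.InputLayer(x, name='input')

# wrap the whole slim-defined network as one TensorLayer layer; its
# variables then belong to the TensorLayer network like any other layer
network = tl.layers.SlimNetsLayer(net_in, slim_layer=inception_v3,
                                  slim_args={'num_classes': 1001,
                                             'is_training': False},
                                  name='InceptionV3')
```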
The documentation [Online] [PDF] [Epub] [HTML] describes the usages of TensorLayer APIs. It is also a self-contained document that walks through different types of deep neural networks, reinforcement learning and their applications in Natural Language Processing (NLP) problems.
We have included the corresponding modularized implementations of Google TensorFlow Deep Learning tutorial, so you can read the TensorFlow tutorial [en] [cn] along with our document. Chinese documentation is also available.
TensorLayer has an open and fast-growing community. It has been widely used by researchers from Imperial College London, Carnegie Mellon University, Stanford University, Tsinghua University, UCLA and Linköping University, among others, as well as engineers from Google, Microsoft, Alibaba, Tencent, Penguins Innovate, ReFULE4, Bloomberg, GoodAILab and many others.
- 🇬🇧 If you have any questions, we suggest creating an issue to discuss with us.
- 🇨🇳 We also have Chinese discussion communities, including a QQ group and a WeChat group.
If you find this project useful, we would be grateful if you cite the TensorLayer paper:
@article{tensorlayer2017,
  author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url     = {http://tensorlayer.org},
  year    = {2017}
}
TensorLayer is released under the Apache 2.0 license.