Glow


Glow is a machine learning compiler and execution engine for various hardware targets. It is designed to be used as a backend for high-level machine learning frameworks. The compiler is designed to allow state-of-the-art compiler optimizations and code generation of neural network graphs. This library is in active development. The project plan is described in the GitHub issues section and in the Roadmap wiki page.

How does it work?

Glow lowers a traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation (IR). The high-level IR allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only IR allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives. The design philosophy is described in an arXiv paper.

Getting Started

System Requirements

Glow builds and runs on macOS and Linux. The software depends on a modern C++ compiler that supports C++11, on CMake, LLVM, protocol buffers, and libpng.

Get Glow!

git clone git@github.com:pytorch/glow.git  # or: git clone https://github.com/pytorch/glow.git
cd glow

Submodules

Glow depends on a few submodules: googletest, onnx, and a library for FP16 conversions.

To get them, from the glow directory, run:

git submodule update --init --recursive

macOS

Install the required dependencies using Homebrew:

brew install cmake graphviz libpng ninja protobuf wget
brew install --with-toolchain llvm@6

Note that LLVM is installed in a non-default location (/usr/local/opt/llvm) to avoid conflicts with the system's LLVM.

Ubuntu

[The following instructions have been tested on Ubuntu 16.04]

In order to build Glow on Ubuntu it is necessary to install a few packages. The following command should install the required dependencies:

sudo apt-get install clang clang-6.0 cmake graphviz libpng-dev \
    libprotobuf-dev llvm-6.0 ninja-build protobuf-compiler wget

It may be desirable to use update-alternatives to manage the version of clang/clang++:

sudo update-alternatives --install /usr/bin/clang clang \
    /usr/lib/llvm-6.0/bin/clang 50
sudo update-alternatives --install /usr/bin/clang++ clang++ \
    /usr/lib/llvm-6.0/bin/clang++ 50

Glow uses the system default C/C++ compiler (/usr/bin/c++), and so you may also want to switch your default C/C++ compiler to clang:

sudo update-alternatives --config cc
    # Select the option corresponding to /usr/bin/clang ...
sudo update-alternatives --config c++
    # Select the option corresponding to /usr/bin/clang++ ...

Glow should build just fine with gcc (e.g. gcc 5.4), but we mostly use clang and are more attentive to compatibility with clang.
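If you would rather not change the system-wide default compiler at all, a common alternative (plain CMake, not specific to Glow) is to point the build at clang explicitly when configuring; see the Configure and build section below for the full build steps:

cmake -G Ninja ../glow \
    -DCMAKE_BUILD_TYPE=Debug \
    -DCMAKE_C_COMPILER=clang-6.0 \
    -DCMAKE_CXX_COMPILER=clang++-6.0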

Finally, in order to support the ONNX serialization format, Glow requires protobuf >= 2.6.1, but the command above may install an older version on older Ubuntu releases (e.g. 14.04). If that is the case, we suggest looking at utils/install_protobuf.sh to install a newer version from source.
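A quick way to check which protobuf version is on your PATH, and to build a newer one from source with the script mentioned above (run it from the glow checkout; see the script itself for exactly where it installs):

protoc --version              # should report libprotoc 2.6.1 or newer
./utils/install_protobuf.sh   # builds and installs a recent protobuf from source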

OpenCL on Ubuntu

If you decide to use OpenCL, the easiest way is to install pocl, a portable open-source implementation of the OpenCL standard. Glow relies on pocl to run OpenCL tests on CI. All required steps are outlined in the install_pocl function in .circleci/build.sh (https://github.com/pytorch/glow/blob/master/.circleci/build.sh#L9).

Alternatively, you can follow these steps:

  1. Install necessary packages:
sudo apt-get install ocl-icd-opencl-dev ocl-icd-libopencl1 opencl-headers \
    clinfo
  2. Install the appropriate runtime for your CPU/GPU. This will depend on your hardware. If you have an Intel CPU with onboard graphics, you can navigate to Intel's compute-runtime releases page on GitHub at https://github.com/intel/compute-runtime/releases/ and follow their instructions. You will probably want to choose the latest release and then download and install roughly four prebuilt packages. At the time of this writing, the prebuilt packages of compute-runtime Release 18.45.11804 ran successfully with Glow on an Intel Core i7-7600U running Ubuntu 16.04.1.

  3. To determine whether the installation was successful, you can run the following command:

clinfo

This will display information about your OpenCL platforms and devices (if found). Lastly, build Glow with the cmake flag -DGLOW_WITH_OPENCL=ON and run the test OCLTest.
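Putting this together, a sketch of an OpenCL-enabled build and test run might look like the following; the OCLTest binary path shown here is an assumption and may differ in your build layout:

cmake -G Ninja ../glow -DCMAKE_BUILD_TYPE=Debug -DGLOW_WITH_OPENCL=ON
ninja all
./tests/OCLTest    # assumed path; alternatively run the whole suite with: ninja test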

Configure and build

To build the compiler, create a build directory and run cmake on the source directory. It's a good idea to build two configurations (Release and Debug) because some programs take a really long time to run in Debug mode. It's also a good idea to build the project outside of the source directory.

mkdir build_Debug
cd build_Debug
cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug ../glow
ninja all
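A matching Release configuration can be created the same way in a separate directory:

mkdir build_Release
cd build_Release
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../glow
ninja all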

The compiler can be configured and built with any CMake generator, such as GNU Makefiles, Ninja, or Xcode.

Building with dependencies (LLVM)

By default, Glow will use a system provided LLVM. Note that Glow requires LLVM 5.0 or later. If you have LLVM installed in a non-default location (for example, if you installed it using Homebrew on macOS), you need to tell CMake where to find LLVM using -DCMAKE_PREFIX_PATH. For example, if LLVM is installed in /usr/local/opt/llvm:

cmake -G Ninja ../glow \
    -DCMAKE_BUILD_TYPE=Debug \
    -DCMAKE_PREFIX_PATH=/usr/local/opt/llvm

If LLVM is not available on your system, you'll need to build it manually. Run the script utils/build_llvm.sh to clone, build and install LLVM in a local directory. You will then need to configure Glow with the flag -DCMAKE_PREFIX_PATH to tell the build system where to find LLVM (e.g. the location of llvm_install/ if using build_llvm.sh).
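For example, assuming you run the script from the parent directory of the glow checkout and point CMake at the llvm_install/ directory it produces (check the script for the exact install location on your system):

./glow/utils/build_llvm.sh    # clones, builds and installs LLVM locally
mkdir build_Debug && cd build_Debug
cmake -G Ninja ../glow \
    -DCMAKE_BUILD_TYPE=Debug \
    -DCMAKE_PREFIX_PATH=/absolute/path/to/llvm_install   # replace with your llvm_install/ path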

For more platform-specific build instructions and advanced options, such as building with Address Sanitizers, refer to this guide: Building the Compiler.

Testing and Running

Unit tests

The project has a few unit tests in the tests/unittests subdirectory. To run all of them, simply run ninja test.
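Since ninja test simply drives CTest, you can also run a subset of the suite from the build directory; the name pattern below is only an illustration:

ninja test            # run the full suite
ctest -R Operator     # run only tests whose registered name matches "Operator" (illustrative pattern)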

C++ API examples

A few test programs that use Glow's C++ API are found under the examples/ subdirectory. The mnist, cifar10, fr2en and ptb programs train and run digit recognition, image classification, machine translation and language modeling benchmarks, respectively.

To run these programs, build Glow in Release mode, then run the following command to download the cifar10, mnist and ptb databases.

python ../glow/utils/download_test_db.py --all

Now run the examples. Note that the databases should be in the current working directory.

./bin/mnist
./bin/cifar10
./bin/fr2en
./bin/ptb
./bin/char-rnn

If everything goes well you should see:

  • mnist: pictures from the mnist digits database
  • cifar10: image classifications that steadily improve
  • fr2en: an interactive French-to-English translator
  • ptb: decreasing perplexity on the dataset as the network trains
  • char-rnn: generates random text based on some document

Note that the default build mode is Debug, which means that the compiler itself is easy to debug because the binary contains debug info, lots of assertions, and the optimizations are disabled. It also means that the compiler and runtime are very slow, and the execution time can be hundreds of times slower than that of release builds. If you wish to benchmark the compiler, run long benchmarks, or release the product then you should compile the compiler in Release mode. Check the main CMake file for more details.

More details on testing and running Glow can be found in: Testing the Glow Compiler.

Ahead-of-time Compilation

Glow can be used to compile neural networks into object files containing native code. We provide resnet50 (both quantized and non-quantized versions) as an example of this capability in examples/bundles/resnet50. See Creating Standalone Executable Bundles for more detail.
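As a rough sketch, building that bundle looks like the following, assuming the GLOW_WITH_BUNDLES CMake option and the resnet50 target used by the bundle examples (names may vary between Glow versions):

cmake -G Ninja ../glow -DCMAKE_BUILD_TYPE=Release -DGLOW_WITH_BUNDLES=ON
ninja resnet50        # emits a native object file plus the network weights for the bundle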

Contributing

To get started, please refer to the contributing guides in the repository.

Communication

  • Forums: discuss implementations, research, etc.: https://discuss.pytorch.org/c/glow. Make sure to label topics with the "glow" category.
  • GitHub issues: bug reports, feature requests, install issues, RFCs, thoughts, etc.

License

Glow is licensed under the Apache 2.0 License.
