1. Requirements

  • GNU/Linux (we recommend Ubuntu 16.04).
  • Python 3.6.
  • libhdf5.

Additionally, we recommend the use of CUDA on an NVIDIA GPU to speed up TensorFlow experiments. Installation instructions are available from NVIDIA.
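If you are unsure whether your machine has a usable NVIDIA GPU, a quick sanity check (assuming the NVIDIA driver is already installed) is:

$ nvidia-smi

If the command lists your GPU, the driver is working and you can proceed with the CUDA installation.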

Automatic install:

For your convenience, we provide a script, ./bootstrap.sh, to automate the installation of these dependencies. Supported Linux distributions are Ubuntu, CentOS, and Arch Linux. The script prints installation commands for any missing requirements; these can either be typed in by hand or executed automatically using:

$ ./bootstrap.sh | bash

Installing system-wide dependencies requires sudo privileges, so you may be prompted for your password. Please note that CUDA must be installed manually.
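If you prefer to review the commands before executing anything, run the script without the pipe and inspect its output first:

$ ./bootstrap.sh
... # prints the installation commands without running them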

2. Installation

$ ./configure
... # answer yes/no prompts
$ make

The configure script determines which Python packages to install, based on the availability of CUDA. Installation does not require sudo privileges. The only directory modified outside of this repository is ~/.ipython/kernels.
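To confirm that the notebook kernel was registered, you can list the installed kernel specs (this assumes the jupyter executable is on your PATH; the exact kernel name and location may differ on your system):

$ jupyter kernelspec list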

3. Running the Code

The executable code is in the form of Jupyter notebooks. Launch the Jupyter server using:

$ make run

Note: If you wish to run the Jupyter server on a remote machine (for example, if you are working on a server over SSH), you will need to configure the Jupyter server for public access. See the official Jupyter documentation for instructions.
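As an alternative to exposing the server publicly, one common approach is to tunnel the notebook port over SSH. The sketch below assumes Jupyter's default port 8888 and a placeholder user@remote-host:

$ ssh -N -L 8888:localhost:8888 user@remote-host

With the tunnel open, browse to http://localhost:8888 on your local machine.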

3.1. Experiments

The following notebooks are available:

  1. Language Model.ipynb - demonstrates the transformation and encoding of OpenCL source code for machine learning.
  2. Case Study A.ipynb - code for the Heterogeneous Mapping experiments in the paper.
  3. Case Study B.ipynb - code for the OpenCL Thread Coarsening experiments in the paper.

3.2. Data caching

Many of the experiments are long running and computationally expensive, with run times ranging from hours to days depending on hardware. To amortize these costs, experimental data is cached once it has been produced, so that it can be re-used. If you would like to remove any cached data, run:

$ make -C ../data refresh
...

3.3. Headless execution

Running long-running experiments in Jupyter notebooks can be a hassle, as any loss of connection to the notebook may halt execution. Because of this, we provide a headless execution mode which converts the notebooks into standalone Python scripts. These scripts produce the cached data, which can then be viewed from the notebooks. To use this headless execution mode, run:

$ make run-batch
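Since headless runs may outlive your terminal session, you may also want to detach them from the shell. One option (among others, such as tmux or screen) is nohup; the log file name below is just a suggestion:

$ nohup make run-batch > run-batch.log 2>&1 &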

3.4. Using our cached data

In addition to the code necessary to reproduce our experiments, we also supply the actual experimental results used in the paper, as obtained on our hardware. To unpack our cached data, run:

$ make -C ../data all
...

Note that this replaces any cached data you may have produced.

4. Uninstallation

$ make clean

This does not require sudo privileges. The only directory modified outside of this repository is ~/.ipython/kernels. System-wide requirements are not removed.