StarLight helps you obtain lightweight deep neural networks. It consists of three primary modules: network compression, neural architecture search, and visualization. The network compression module uses pruning and quantization techniques to convert a pre-trained network into a lightweight structure. The neural architecture search module designs efficient structures using differentiable architecture search methods. The visualization window displays all of the above processes, along with visualizations of the networks' intermediate features. We further provide a convenient tool, QuiverPyTorch, to visualize the intermediate features of any network.
- Highlighted Features
- Available Algorithms
- Demo
- Installation
- Getting Started
- Guide for compressing your own networks
- Visualize your own networks in StarLight
- Acknowledgments
- Citation
- Contributing
- License
- Contact Us
## Highlighted Features
- We present lightweight results for 6 popular networks, covering image classification, semantic segmentation, and object detection.
- We have collected over 50 bugs and their solutions from our experiments in the Bug Summary, which can save you time when lightweighting your own networks.
- With just one YAML file, you can easily visualize your own lightweight networks in StarLight.
- In addition to 2D convolution pruning, we also provide support for 3D convolution pruning. Please refer to our Manually Export (3D-Conv) for more details.
- To handle the unrecognized operations in ONNX models, we have collected 6 plugins for network quantization, which will be available soon.
- The lightweight models generated by StarLight can be seamlessly deployed on edge devices such as the NVIDIA AGX XAVIER, without the need for additional processing; a plain ONNX export (see the sketch after this list) is one common hand-off format.
- We provide a convenient tool to visualize the network intermediate features, namely QuiverPyTorch.
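The deployment format is not mandated by StarLight; as one illustration of how a compressed PyTorch model is commonly handed to an edge device, here is a minimal sketch using the standard `torch.onnx.export` API. The ResNet-18 and file name are our own placeholders for your compressed model.

```python
# Minimal sketch (not part of StarLight): export an already-lightweight PyTorch
# model to ONNX so it can be consumed by edge runtimes such as TensorRT.
import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()   # stand-in for your compressed model
dummy_input = torch.rand(1, 3, 224, 224)   # example input shape

torch.onnx.export(
    model,
    dummy_input,
    "lightweight_model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```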
## Available Algorithms
- Available tasks
| Task Type | Pruning | Quantization | Neural Architecture Search |
| --- | --- | --- | --- |
| Image Classification | ✅ | ✅ | ✅ |
| Semantic Segmentation | ✅ | ✅ | |
| Object Detection | ✅ | ✅ | |
- Available algorithms
| Method | Algorithms |
| --- | --- |
| Pruning | AGP, FPGM, Taylor, L1, L2 |
| Quantization | PTQ |
| Neural Architecture Search | DARTS, GDAS, DU-DARTS, DDSAS |
## Demo
- Pruning, quantization, and feature visualization in StarLight.
- Neural architecture search and feature visualization in StarLight.
## Installation
- We summarize the detailed installation steps in this link.
After installing the required packages, activate the environment to start using StarLight:
`conda activate starlight`
## Getting Started
### Network Compression
- Go to the folder `algorithms/compression/nets`: `cd algorithms/compression/nets`
- Select a provided network such as ResNet, DeepLabV3Plus, PSPNet, ResNet50_SSD, or VGG_SSD
- Follow the `README.md` in each network's folder to compress it.
- Note that ResNet50_SSD and VGG_SSD both require `cpu_nms`, which needs to be compiled manually. Go to the folder `algorithms/compression/nets/ResNet50_SSD/SSD_Pytorch` or `algorithms/compression/nets/VGG_SSD`, and make sure that `cpython-36m` in `make.sh` matches the version of your installed Python (see the check below). Finally, simply run: `./make.sh`
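If you are unsure which tag your interpreter uses, one quick check (our own suggestion, not part of the StarLight scripts) is to print the interpreter's extension-module suffix and compare it with the string in `make.sh`:

```python
# Print the extension-module suffix of the current Python interpreter,
# e.g. ".cpython-36m-x86_64-linux-gnu.so" for Python 3.6.
# If the "cpython-XXm" tag shown here differs from the one in make.sh,
# edit make.sh to match before running it.
import sysconfig

print(sysconfig.get_config_var("EXT_SUFFIX"))
```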
### Neural Architecture Search
- Go to the folder `algorithms/nas`: `cd algorithms/nas`
- Select a provided NAS algorithm such as DARTS, GDAS, DDSAS or DU-DARTS.
- Follow the `README.md` in each NAS algorithm's folder to conduct experiments.
### Compression Visualization
- Download the logs and pre-trained weights in `compression` from Baidu Netdisk with the password `star`.
- Create the data folder under `StarLight` and add a soft link to `compression`:
  - `cd StarLight && mkdir data`
  - `cd data && ln -s /path/to/compression`
- Go to the `StarLight` folder and run the visualization for compression:
  - `cd StarLight`
  - `python compression_vis/compression.py`
### NAS Visualization
- Download the logs and pre-trained weights in `StarLight_Cache` from Baidu Netdisk with the password `star`.
- Go to the data folder under `StarLight` and add a soft link to `StarLight_Cache`: `cd data && ln -s /path/to/StarLight_Cache`
- Go to the folder `nas_vis` and run the visualization for NAS:
  - `cd StarLight/nas_vis`
  - `python nas.py`
## Guide for compressing your own networks
You can easily compress your own networks by following our Compress Guide; a generic pruning flow is sketched below.
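As a rough illustration of what such a compression run involves, here is a minimal pruning sketch built on NNI, which StarLight's pruning algorithms are based on. It assumes the NNI 2.x API and uses a torchvision ResNet-18 as a placeholder; it is not StarLight's exact pipeline, and class locations differ across NNI versions.

```python
# Minimal pruning sketch (assumptions: NNI 2.x API, torchvision available).
import torch
from torchvision.models import resnet18
from nni.compression.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup

model = resnet18(pretrained=True)

# Ask the pruner to remove 50% of the channels in every Conv2d layer.
config_list = [{"sparsity": 0.5, "op_types": ["Conv2d"]}]
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()

# Detach the pruner's wrappers, then physically shrink the masked channels.
pruner._unwrap_model()
dummy_input = torch.rand(1, 3, 224, 224)
ModelSpeedup(model, dummy_input, masks).speedup_model()

# In practice the pruned model is fine-tuned before evaluation or export.
torch.save(model.state_dict(), "resnet18_pruned.pth")
```

In a real run you would replace the ResNet-18 with your own pre-trained network and pick one of the pruning algorithms listed in the table above.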
## Visualize your own networks in StarLight
With just one YAML file, you can conveniently visualize your own lightweight networks in StarLight. Please refer to Visualization in StarLight for more details.
## Acknowledgments
- This work is supported in part by the National Key R&D Program of China under Grant No. 2018AAA0102701 and in part by the National Natural Science Foundation of China under Grant No. 62176250 and No. 62203424.
- The following people helped test the StarLight toolkit, read the documentation, and provided valuable feedback: Pengze Wu, Haoyu Li, and Jiancong Zhou.
- Our StarLight framework is built on top of NNI, incorporating their pruning and quantization algorithms. We extend our gratitude to NNI for their remarkable contributions.
- We would like to thank Just the Docs for providing the template for our document.
- We would like to thank ChatGPT for polishing the presentation of the document.
## Citation
If you find this project helpful for your research, please cite StarLight as follows:
@misc{StarLight,
author = {Shun Lu and Longxing Yang and Zihao Sun and Jilin Mei and Yu Hu},
year = {2023},
address = {Institute of Computing Technology, Chinese Academy of Sciences},
title = {StarLight: An Open-Source AutoML Toolkit for Lightweighting Deep Neural Networks},
url = {https://github.com/ICT-ANS/StarLight}
}
## Contributing
Thanks for your interest in StarLight and your willingness to contribute! We'd love to hear your feedback.
- Please first check whether your issue is already covered in our Bug Summary or Issues.
- If not, please describe the bug in detail and we will reply promptly.
- We are happy to integrate your network into StarLight. Please send us your network along with its results and hyper-parameters; a detailed description would be even better. Thank you!
## License
This project is under the MIT license. Please see the LICENSE file for details.
## Contact Us
StarLight is an open-source project developed by the ANS@ICT research team. We welcome and value your feedback and suggestions, so please don't hesitate to contact us via email at [email protected].