«NetworkSlimming» re-implements the paper *Learning Efficient Convolutional Networks through Network Slimming*.
More training statistics can be found here:
Network Slimming applies L1 regularization to the scaling factors of the BN layers during training to induce channel-level sparsity. After training, channels with small scaling factors are pruned; finally, the pruned network is fine-tuned to recover accuracy. The method achieves good results in practical applications.
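The sparsity-inducing part of training can be sketched as follows. This is a minimal PyTorch illustration, not this repo's actual code: the function name and the `sparsity_lambda` value are assumptions for the example.

```python
import torch
import torch.nn as nn

def add_bn_l1_grad(model: nn.Module, sparsity_lambda: float = 1e-4) -> None:
    """Add the subgradient of sparsity_lambda * |gamma| to every BN scaling factor.

    Call this after loss.backward() and before optimizer.step(), so the
    optimizer step also pushes BN scaling factors toward zero.
    """
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            # d(lambda * |gamma|) / d(gamma) = lambda * sign(gamma)
            m.weight.grad.add_(sparsity_lambda * torch.sign(m.weight.detach()))
```

Because the penalty only touches BN scaling factors, it adds almost no training cost while making unimportant channels easy to identify later.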
First, set the PYTHONPATH and CUDA_VISIBLE_DEVICES environment variables:
```shell
$ export PYTHONPATH=<project root path>
$ export CUDA_VISIBLE_DEVICES=0
```
Then run the train-prune-finetune pipeline:
- For train
```shell
$ python tools/train.py -cfg=configs/vggnet/vgg16_bn_cifar100_224_e100_sgd_mslr_slim_1e_4.yaml
```
- For prune
```shell
$ python tools/prune/prune_vggnet.py
```
- For fine-tuning
```shell
$ python tools/train.py -cfg=configs/vggnet/refine_pruned_0_2_vgg16_bn_cifar100_224_e100_sgd_mslr_slim_1e_4.yaml
```
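At a high level, the prune step ranks all BN scaling factors by absolute value and removes the channels below a global threshold (the `pruned_0_2` in the config names above suggests a 0.2 pruning ratio). A hedged sketch of the mask computation, with illustrative names rather than the actual code in tools/prune/prune_vggnet.py:

```python
import torch
import torch.nn as nn

def compute_bn_keep_masks(model: nn.Module, prune_ratio: float = 0.2):
    """Return one boolean mask per BN layer; True means keep the channel.

    A channel is dropped when its |gamma| falls below the global
    `prune_ratio` quantile of all BN scaling factors in the model.
    """
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return [m.weight.detach().abs() > threshold
            for m in model.modules()
            if isinstance(m, nn.BatchNorm2d)]
```

The actual prune script additionally has to rebuild the convolution layers so their input/output channels match the surviving BN channels; the masks above only decide which channels survive.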
Finally, set the path of the fine-tuned model in the PRELOADED option of the configuration file, then run the test:
```shell
$ python tools/test.py -cfg=configs/vggnet/refine_pruned_0_2_vgg16_bn_cifar100_224_e100_sgd_mslr_slim_1e_4.yaml
```
- zhujian - Initial work - zjykzj
- Eric-mingjie/network-slimming
- wlguan/MobileNet-v2-pruning
- 666DZY666/micronet
- foolwood/pytorch-slimming
```bibtex
@misc{liu2017learning,
      title={Learning Efficient Convolutional Networks through Network Slimming},
      author={Zhuang Liu and Jianguo Li and Zhiqiang Shen and Gao Huang and Shoumeng Yan and Changshui Zhang},
      year={2017},
      eprint={1708.06519},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
Anyone's participation is welcome! Open an issue or submit PRs.
Small note:
- Git commit messages should comply with the Conventional Commits specification
- If versioned, please conform to the Semantic Versioning 2.0.0 specification
- If editing the README, please conform to the standard-readme specification.
Apache License 2.0 © 2021 zjykzj