When training is finished, the batch normalization layer can be merged into the preceding convolution or fully connected layer, which accelerates the forward pass. For more details about batch normalization, see here.
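The merge works because, at inference time, batch normalization is just a per-channel affine transform with fixed statistics, so it can be folded into the preceding layer's weights and bias. A minimal NumPy sketch (the function name and layout assumptions are illustrative, not part of this repo's API; weights are assumed to be `(out_channels, in_channels, kh, kw)`):

```python
import numpy as np

def merge_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-mode batch-norm statistics into conv weights/bias.

    w: conv weights, shape (out_channels, in_channels, kh, kw)
    b: conv bias, shape (out_channels,)
    gamma, beta, mean, var: per-channel BN parameters, shape (out_channels,)
    """
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    w_merged = w * scale.reshape(-1, 1, 1, 1)   # scale each output filter
    b_merged = (b - mean) * scale + beta        # fold mean/shift into the bias
    return w_merged, b_merged
```

This follows from `BN(conv(x)) = gamma * (conv(x) + b - mean) / sqrt(var + eps) + beta`: distributing the scale gives a new weight `w * scale` and a new bias `(b - mean) * scale + beta`.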
We demonstrate the process with a MobileNet demo.
- the source model config with batch normalization: `./demo/mobilenet_with_bn.py`
- the source model with batch normalization: `./demo/models/mobilenet_flowers102.tar.gz`
- the dest model config without batch normalization: `./demo/mobilenet_without_bn.py`
- modify `SOURCE_MODEL_NAME` and `DEST_MODEL_NAME` in `do_merge.sh`
- run `sh do_merge.sh`
- separately modify the source and dest model paths in `./demo/verify.py`, then run `python ./demo/verify.py`
- merging batch normalization speeds up the forward pass by around 30%
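The verification step boils down to checking that the merged model produces (numerically) the same outputs as the original. A self-contained sketch of that idea using a fully connected layer (all names and shapes here are illustrative, not taken from `verify.py`):

```python
import numpy as np

EPS = 1e-5
rng = np.random.default_rng(42)

x = rng.standard_normal((8, 16))                  # a batch of inputs
w = rng.standard_normal((16, 32))                 # fully connected weights
b = rng.standard_normal(32)                       # fully connected bias
gamma, beta = rng.standard_normal(32), rng.standard_normal(32)
mean, var = rng.standard_normal(32), rng.random(32) + 0.1

# Source model forward: linear layer followed by inference-mode batch norm.
y_src = gamma * (x @ w + b - mean) / np.sqrt(var + EPS) + beta

# Dest model forward: BN folded into the weights and bias, one linear op.
scale = gamma / np.sqrt(var + EPS)
y_dst = x @ (w * scale) + (b - mean) * scale + beta

# The two outputs should agree up to floating-point noise.
print(np.abs(y_src - y_dst).max())
```

The speedup comes from the dest model doing a single matrix multiply plus bias per layer, instead of a multiply followed by a separate per-channel normalize-and-shift.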