diff --git a/README.md b/README.md
index 8a9958526..5cb4f1f31 100644
--- a/README.md
+++ b/README.md
@@ -36,11 +36,11 @@ MNN Workbench could be downloaded from [MNN's homepage](http://www.mnn.zone), wh
 - iOS platform: static library size will full option for armv7+arm64 platforms is about 12MB, size increase of linked executables is about 2M.
 - Android platform: core so size is about 800KB (armv7a - c++_shared).
 - Use MNN_BUILD_MINI can reduce package size about 25% , with limit of fix model input size
-- Support FP16 / Int8 qunatize, can reduce model size 50%-70%
+- Support FP16 / Int8 quantize, can reduce model size 50%-70%
 
 ### Versatility
 - Supports `Tensorflow`, `Caffe`, `ONNX`,`Torchscripts` and supports common neural networks such as `CNN`, `RNN`, `GAN`, `Transformer`.
-- Supports AI model with multi-inputs or multi-outputs, every kind of dimenstion format, dynamic inputs, controlflow.
+- Supports AI model with multi-inputs or multi-outputs, every kind of dimension format, dynamic inputs, controlflow.
 - MNN supports approximate full OPs used for AI Model. The converter supports 178 `Tensorflow` OPs, 52 `Caffe` OPs, 163 `Torchscripts` OPs, 158 `ONNX` OPs.
 - Supports iOS 8.0+, Android 4.3+ and embedded devices with POSIX interface.
 - Supports hybrid computing on multiple devices. Currently supports CPU and GPU.
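
For context on the "dynamic inputs" bullet touched by this hunk, here is a minimal sketch of resizing an input at runtime with MNN's C++ session API; it is an illustration under assumptions, not part of the patch. The file name `model.mnn` and the shape `{1, 3, 224, 224}` are placeholders.

```cpp
// Sketch: run inference with a dynamically resized input using MNN's
// Interpreter/Session API. Model path and shape below are placeholders.
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <memory>

int main() {
    // Load a converted .mnn model and create a CPU session.
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;
    auto session = net->createSession(config);

    // Dynamic inputs: pick a new input shape at runtime, then let the
    // session re-allocate its buffers for that shape.
    auto input = net->getSessionInput(session, nullptr);
    net->resizeTensor(input, {1, 3, 224, 224});
    net->resizeSession(session);

    // ... fill `input` with data (e.g. via Tensor::copyFromHostTensor) ...

    net->runSession(session);
    auto output = net->getSessionOutput(session, nullptr);
    (void)output;
    return 0;
}
```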