bisheng-rt is an open-source inference serving framework that powers model inference and resource allocation. bisheng-rt enables efficient deployment of different models and provides a consistent user experience regardless of model type.
The project is a sub-project of bisheng.
- High performance
- Compatible with most computing cards (NVIDIA, Atlas, Cambricon, Enflame)
- Friendly model management (see the client sketch after this list)
- Easy integration of new models
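Since bisheng-rt builds on triton-inference-server (see acknowledgments below), a deployed server can plausibly be reached with a Triton-compatible client. The following is a minimal sketch under that assumption; the port, the model name `my_model`, and the tensor names `INPUT0`/`OUTPUT0` are hypothetical placeholders, not part of bisheng-rt's documented API.

```python
# Minimal sketch: loading a model and sending one inference request through a
# Triton-compatible HTTP endpoint. Assumes bisheng-rt exposes Triton's standard
# API; the port, "my_model", "INPUT0", and "OUTPUT0" are placeholders.
import numpy as np
import tritonclient.http as httpclient

# Port 8000 is Triton's default HTTP port; bisheng-rt's actual port may differ.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Ask the server to load the model from its repository (model management).
# In Triton this requires the server to run in explicit model-control mode.
client.load_model("my_model")

# Build a batch-of-one float32 input tensor.
data = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)
outputs = [httpclient.InferRequestedOutput("OUTPUT0")]

result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT0"))
```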
Use in Bisheng Platform Model Manager
We provide an open cloud service so you can try it easily. See the free trial.
For guidance on installation, development, deployment, and administration, check out the bisheng-rt Dev Docs.
Reporting problems, asking questions
We appreciate any feedback, questions, or bug reports regarding this project.
When posting issues, please follow the process outlined in the Stack Overflow document.
For questions, we recommend posting in our community GitHub Discussions.
bisheng-rt builds on the following projects:
- Thanks to triton-inference-server for the basic framework.