
NewFeatureIdeas

Valentina edited this page Oct 16, 2023 · 9 revisions

Ideas

New features:

  1. Extend the set of inference frameworks and models covered by regular benchmarks. Support inference using ...
  2. Develop or verify Docker containers for Intel® Optimization for Caffe, Intel® Optimization for TensorFlow, MXNet, TensorFlow Lite and ONNX Runtime.
  3. Collect performance and accuracy metrics for the following frameworks: Intel® Optimization for Caffe, Intel® Optimization for TensorFlow, MXNet, TensorFlow Lite and ONNX Runtime.
  4. Develop demos for different frameworks to demonstrate the main usage scenario of the DLI benchmarking system.
  5. Develop and/or integrate utilities for converting models from one storage format to another, enabling conversion between the framework formats supported by the DLI benchmarking system, or to the ONNX format.
  6. Extend the set of hardware platforms for regular performance measurements (Raspberry Pi 4 8GB), subject to hardware access.
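
The performance-metric collection in idea 3 boils down to timing repeated inference runs and aggregating latency statistics. A minimal, framework-agnostic sketch (the `infer` callable and the warm-up/iteration counts are illustrative placeholders, not part of DLI):

```python
import statistics
import time


def benchmark(infer, num_warmup=5, num_iters=50):
    """Time repeated calls to `infer` and return latency statistics.

    `infer` is any zero-argument callable that performs one inference pass
    (e.g. a wrapped ONNX Runtime session.run call).
    """
    # Warm-up runs are excluded from the statistics.
    for _ in range(num_warmup):
        infer()

    latencies = []
    for _ in range(num_iters):
        start = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - start)

    return {
        "mean_ms": statistics.mean(latencies) * 1000.0,
        "median_ms": statistics.median(latencies) * 1000.0,
        "fps": num_iters / sum(latencies),
    }


# Example with a dummy workload standing in for a real framework call:
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(stats)
```

In a real harness the same `benchmark` helper could wrap each supported framework's inference call, so every framework reports metrics in a uniform format.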