Other ML runtimes such as ONNX Runtime and TensorFlow Lite prepack weights for use with the selected matrix multiplication kernel when a model is loaded. This reduces inference latency when a model is run multiple times in a session, at the cost of longer load time.
RTen packs weights to amortize packing overhead when `MatMul` or `Conv` operators are applied to a batch of inputs within a single run. However, it doesn't prepack weights when the model is loaded, so packing costs are incurred on every inference.
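To illustrate the idea, here is a minimal sketch (not RTen's actual kernel or API) of what packing buys: the weight matrix is reordered once into fixed-width column panels so the matmul inner loop reads it sequentially, and doing that reordering at load time would move its cost out of every inference. The names `pack_b` and `matmul_packed` and the panel width `NR` are illustrative assumptions.

```rust
// Illustrative sketch of weight prepacking; NR is a made-up panel width,
// real kernels choose it to match SIMD register width.
const NR: usize = 4;

/// Reorder the K x N weight matrix `b` (row-major) into column panels of
/// width NR. Doing this once at model load time, rather than per run,
/// is the optimization discussed above.
fn pack_b(b: &[f32], k: usize, n: usize) -> Vec<f32> {
    let panels = (n + NR - 1) / NR;
    let mut packed = vec![0.0f32; panels * k * NR];
    for p in 0..panels {
        for row in 0..k {
            for j in 0..NR {
                let col = p * NR + j;
                if col < n {
                    // Panel layout: all NR columns of one row are adjacent.
                    packed[(p * k + row) * NR + j] = b[row * n + col];
                }
            }
        }
    }
    packed
}

/// Multiply A (M x K, row-major) by the pre-packed B, producing C (M x N).
/// The inner loop walks `packed_b` sequentially, which is the point of packing.
fn matmul_packed(a: &[f32], packed_b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut c = vec![0.0f32; m * n];
    let panels = (n + NR - 1) / NR;
    for i in 0..m {
        for p in 0..panels {
            let mut acc = [0.0f32; NR];
            for kk in 0..k {
                let aik = a[i * k + kk];
                let bp = &packed_b[(p * k + kk) * NR..][..NR];
                for j in 0..NR {
                    acc[j] += aik * bp[j];
                }
            }
            for j in 0..NR {
                let col = p * NR + j;
                if col < n {
                    c[i * n + col] = acc[j];
                }
            }
        }
    }
    c
}

fn main() {
    // 2x3 A times 3x2 B as a correctness check.
    let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let b = [1.0, 0.0, 0.0, 1.0, 1.0, 1.0];
    let packed = pack_b(&b, 3, 2);
    let c = matmul_packed(&a, &packed, 2, 3, 2);
    assert_eq!(c, vec![4.0, 5.0, 10.0, 11.0]);
}
```

In this sketch, calling `pack_b` once and reusing `packed` across many `matmul_packed` calls models load-time prepacking; calling it before each multiply models the per-inference cost described above.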