Session inputs/outputs should probably be reworked. The way they work now is a bit messy for my liking, and there is currently no clear path for supporting important features like IOBinding or non-tensor types. A few things I think should be addressed:

- IOBinding (#15)
- Support `sequence<T>` and `map<K, V>` types via `Vec<T>` and `HashMap<K, V>` (#30)
- Support converting tensors to/from `Vec<T>` with a given shape, since some applications don't really need the input/output to be a complex tensor (e.g. Silero VAD)
- Allocate outside of the `run()` function: discussed in #37

> Originally posted by stexa, April 6, 2023
>
> Hey!
>
> For usage in audio processing, and for performance in general, it would be really nice if the inputs and outputs could be set and allocated before the actual `run()` call, somewhat like what is suggested in nbigaouette/onnxruntime-rs#41. This could be an additional function as well. Might this be something you would be interested in too?
>
> And thank you, I am very happy that someone is still working on an onnxruntime wrapper in Rust; the crate is working out of the box like a charm for me :)
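A rough sketch of what "allocate outside of `run()`" could look like, in plain Rust with no `ort` types at all (the `ShapedBuffer` and `PreallocatedSession` names, and the buffer layout, are hypothetical illustrations, not crate API): the output buffer is allocated once with a given shape, and each call writes into it in place, so nothing allocates on the hot path of an audio callback.

```rust
/// A flat buffer plus a shape, standing in for a simple tensor.
/// (Hypothetical type, not part of the crate.)
struct ShapedBuffer {
    shape: Vec<usize>,
    data: Vec<f32>,
}

impl ShapedBuffer {
    /// Pre-allocate a zeroed buffer for the given shape.
    fn zeros(shape: &[usize]) -> Self {
        let len = shape.iter().product();
        ShapedBuffer { shape: shape.to_vec(), data: vec![0.0; len] }
    }
}

/// Stand-in for a session whose output buffer is bound once, up front.
struct PreallocatedSession {
    output: ShapedBuffer,
}

impl PreallocatedSession {
    /// "Run" writes into the already-allocated output buffer;
    /// no allocation happens per call.
    fn run(&mut self, input: &ShapedBuffer) {
        for (o, i) in self.output.data.iter_mut().zip(&input.data) {
            *o = i * 2.0; // placeholder for real inference
        }
    }
}

fn main() {
    let input = ShapedBuffer { shape: vec![1, 4], data: vec![1.0, 2.0, 3.0, 4.0] };
    let mut session = PreallocatedSession { output: ShapedBuffer::zeros(&[1, 4]) };
    // The same output buffer is reused across calls, as a real-time
    // audio process would need.
    for _ in 0..3 {
        session.run(&input);
    }
    println!("{:?}", session.output.data);
}
```

This also shows how the `Vec<T>`-plus-shape idea from the list above could sidestep full tensor types for simple models like Silero VAD.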
IOBinding: discussed in #15

> Originally posted by dzhao, January 30, 2023
>
> Hi, do you plan to support IOBinding for CUDA/TensorRT?
> https://stackoverflow.com/questions/70740287/onnxruntime-inference-is-way-slower-than-pytorch-on-gpu
> This seems like a critical feature for GPU serving.
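For reference, the shape such an API could take, sketched as a plain-Rust mock (every name here, `IoBinding`, `bind_input`, `bind_output`, `run_with_binding`, is hypothetical, loosely modeled on ONNX Runtime's IOBinding concept): inputs and outputs are bound to named buffers once, and repeated runs reuse those bindings. In a real CUDA/TensorRT backend the bound buffers would live in device memory, which is exactly what avoids the per-run host-to-device copies the linked Stack Overflow question is about.

```rust
use std::collections::HashMap;

/// Hypothetical IoBinding: names are bound to buffers once, then every
/// run reuses them in place instead of allocating and copying.
struct IoBinding {
    inputs: HashMap<String, Vec<f32>>,
    outputs: HashMap<String, Vec<f32>>,
}

impl IoBinding {
    fn new() -> Self {
        IoBinding { inputs: HashMap::new(), outputs: HashMap::new() }
    }

    fn bind_input(&mut self, name: &str, data: Vec<f32>) {
        self.inputs.insert(name.to_string(), data);
    }

    /// Pre-allocate an output buffer of the given length under `name`.
    fn bind_output(&mut self, name: &str, len: usize) {
        self.outputs.insert(name.to_string(), vec![0.0; len]);
    }
}

/// Stand-in for a session run that consumes the binding: reads the bound
/// inputs and writes the bound output in place (element-wise sum as a
/// dummy "model").
fn run_with_binding(binding: &mut IoBinding) {
    let a = binding.inputs["a"].clone();
    let b = binding.inputs["b"].clone();
    let out = binding.outputs.get_mut("sum").unwrap();
    for i in 0..out.len() {
        out[i] = a[i] + b[i];
    }
}

fn main() {
    let mut binding = IoBinding::new();
    binding.bind_input("a", vec![1.0, 2.0]);
    binding.bind_input("b", vec![3.0, 4.0]);
    binding.bind_output("sum", 2);
    // Bind once, run many times; the output buffer is never reallocated.
    run_with_binding(&mut binding);
    println!("{:?}", binding.outputs["sum"]);
}
```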