-
I don't have a GPU that OpenVINO supports here, so I have never tested OpenVINO GPU inference. What I can think of at the moment is to predefine a few shapes and run inference on each of them once in advance; then, for a given image, pad it to the nearest predefined shape, which turns the problem into static-shape inference.
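The padding idea above can be sketched as follows. This is a minimal illustration, not code from the project: the preset shape buckets and the zero-padding strategy are assumptions, and in practice you would pick buckets matching the shapes you compiled the static model for.

```python
import numpy as np

# Predefine a few static (H, W) buckets, sorted ascending. For each image,
# pad with zeros up to the smallest bucket that fits, so one statically
# compiled model per bucket can serve all inputs. Values are illustrative.
PRESET_SHAPES = [(320, 320), (480, 480), (640, 640)]

def pad_to_nearest(img: np.ndarray) -> np.ndarray:
    """Zero-pad an HxWxC image to the smallest preset shape that fits it."""
    h, w = img.shape[:2]
    for ph, pw in PRESET_SHAPES:
        if ph >= h and pw >= w:
            # Pad only on the bottom/right so pixel coordinates are preserved.
            return np.pad(img, ((0, ph - h), (0, pw - w), (0, 0)))
    raise ValueError(f"image {h}x{w} is larger than every preset shape")

padded = pad_to_nearest(np.zeros((300, 400, 3), dtype=np.uint8))
print(padded.shape)  # (480, 480, 3)
```

Padding bottom/right keeps detection coordinates valid in the original image frame; remember to clip or rescale model outputs back to the unpadded size afterwards.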
-
Thanks for the reply. For padding to a predefined shape and running static inference, is there a code reference I could follow? Or is there a plan to add this to the existing branches? Thanks!
-
Environment:
CPU: Intel Core i5 @ 1.60GHz
GPU (integrated): Intel UHD Graphics
Benchmarking in this environment, OpenVINO on CPU is almost 2 seconds faster than ONNXRuntime. Since I'd like to use the integrated GPU to accelerate inference further, I looked it up and OpenVINO does appear to support iGPUs: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Does-Openvino-support-Intel-UHD-Graphics-630/m-p/1386381
However, when I specify
compile_model = ie.compile_model(model=model_onnx, device_name="GPU")
the benchmark errors out and runs very slowly. From what I found, this may be because OpenVINO does not yet support dynamic shapes on GPU (openvinotoolkit/openvino#13434). So I'd like to ask: if I follow the suggestion there and convert the Paddle model to a static-shape IR, how should I set the input shape? Or is there another solution? Thanks!
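For reference, fixing the input shape can also be done at load time with the OpenVINO Python API rather than at conversion time. A minimal sketch, assuming a single-input model; the model path and the `[1, 3, 224, 224]` shape are placeholders, and the shape should match whatever you pad inputs to:

```python
# Sketch: make the model fully static before compiling, so the GPU plugin
# never sees dynamic dimensions. "model.onnx" and the shape below are
# placeholder assumptions, not values from this thread.
static_shape = [1, 3, 224, 224]  # N, C, H, W

try:
    import openvino.runtime as ov

    core = ov.Core()
    model = core.read_model("model.onnx")  # or the converted IR (.xml)
    model.reshape(static_shape)            # fix every input dimension
    compiled = core.compile_model(model, device_name="GPU")
except ImportError:
    compiled = None  # OpenVINO not installed; sketch only
```

Model Optimizer offers the equivalent fix at conversion time via its `--input_shape` option when producing the IR, which yields a static-shape model directly.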