Hello,
First of all, thank you very much for this very user-friendly package! Great work!
I am currently training a deep reinforcement learning agent whose policy contains a differentiable optimization layer. In principle, this works fine, but training takes a very long time because the available computing resources are not used efficiently. When I use the GPU as the PyTorch device, only around 10% of its capacity is used; on the CPU it is around 40%. When I run the same code with an agent that does not include the differentiable optimization layer, utilization is essentially 100%. Of course, I tried increasing the batch size, but this does not change anything.
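For reference, here is a minimal sketch of the kind of setup I mean (the QP, dimensions, and network below are illustrative placeholders, not my actual model): a small CvxpyLayer embedded in a policy network and called with a batched parameter tensor.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 4  # decision-variable dimension (illustrative)

# Simple QP: project the network output q onto a box.
x = cp.Variable(n)
q = cp.Parameter(n)
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - q)), [x >= -1, x <= 1])
layer = CvxpyLayer(problem, parameters=[q], variables=[x])

class Policy(torch.nn.Module):
    def __init__(self, obs_dim):
        super().__init__()
        self.net = torch.nn.Linear(obs_dim, n)
        self.layer = layer

    def forward(self, obs):
        q_batch = self.net(obs)          # shape (batch, n)
        (action,) = self.layer(q_batch)  # one QP solve per row of the batch
        return action

policy = Policy(obs_dim=8)
obs = torch.randn(64, 8)   # batch of 64 observations
actions = policy(obs)
print(actions.shape)       # torch.Size([64, 4])
```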
Do you have any ideas on how I could resolve this issue? I saw that the qpth package (https://github.com/locuslab/qpth) offers batched solving of QPs instead of multiprocessing via pooling, so switching to that package might be an option. However, given the seemingly more active development of cvxpylayers and its user-friendliness, I would like to stick with cvxpylayers if possible.
Thank you very much for your help!