[QST] Why is cuML on GPU dozens of times slower than scikit-learn on CPU for the Lasso, Ridge, and ElasticNet models? #6080
Comments
Thanks for the issue @JiayuChen02, I just tried your code on a machine with a T4 and these are the results I got:
In general, 300 rows is not a dataset where I would expect massive GPU speedups, but things seem to be working fine on my end. One possibility is that you are running into JIT compilation overhead. What GPU are you using? It would be very helpful if you could run this script and post the output here: https://github.com/rapidsai/cuml/blob/branch-24.10/print_env.sh
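If JIT compilation is the culprit, a warm-up fit before timing usually makes the difference obvious. Below is a minimal sketch of that check (assuming cuml.linear_model.Lasso; the data shapes and alpha are illustrative, not taken from the original report):

```python
import time

import numpy as np
from cuml.linear_model import Lasso

# Small synthetic dataset, similar in scale to the one described in the issue.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50), dtype=np.float32)
y = rng.standard_normal(300, dtype=np.float32)

# The first fit includes one-time costs (JIT compilation, CUDA context setup).
t0 = time.perf_counter()
Lasso(alpha=0.1).fit(X, y)
print(f"first fit (includes warm-up): {time.perf_counter() - t0:.4f} s")

# Subsequent fits reflect the steady-state GPU runtime.
t0 = time.perf_counter()
Lasso(alpha=0.1).fit(X, y)
print(f"second fit (warmed up):       {time.perf_counter() - t0:.4f} s")
```

If the second fit is dramatically faster than the first, the slowdown is one-time compilation overhead rather than the solver itself.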
@dantegd Thanks for your reply. This is the output of print_env.sh (environment details below).
Could you please help me identify what might be going wrong and how I can fix it?
I tried to compare the speed difference of the same model on CPU and GPU. Here is my code:
And the result is:
Why is the GPU version so much slower?
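For reference, a minimal sketch of this kind of CPU-vs-GPU comparison (not the original script; the model choice, data shapes, alpha, and repeat count are assumptions for illustration):

```python
import time

import numpy as np
from sklearn.linear_model import Lasso as skLasso
from cuml.linear_model import Lasso as cuLasso

# Synthetic regression data; the shapes here are illustrative only.
rng = np.random.default_rng(42)
X = rng.standard_normal((300, 100), dtype=np.float32)
y = rng.standard_normal(300, dtype=np.float32)

def timed_fit(model, X, y, repeats=5):
    """Return the best-of-N wall-clock time for model.fit(X, y)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        model.fit(X, y)
        best = min(best, time.perf_counter() - t0)
    return best

print(f"scikit-learn Lasso (CPU): {timed_fit(skLasso(alpha=0.1), X, y):.4f} s")
print(f"cuml Lasso (GPU):         {timed_fit(cuLasso(alpha=0.1), X, y):.4f} s")
```

At only 300 rows, host-to-device transfer and kernel-launch overheads typically dominate, so the CPU can legitimately win; GPU speedups usually only appear at much larger data sizes.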