I tried to run your code with Python's threading module (I don't know if you are familiar with it) to speed up hyperparameter sweeps, as you can see in the attached file (saved with a .txt extension, since I could not upload a .py file directly).
However, in most cases the results barely improve and fluctuate strongly even with very low learning rates, unlike when I run the sweeps sequentially without threading (where both loss and val_loss keep decreasing gradually).
Do you know whether this is just an incompatibility issue, or am I doing something wrong? How do you actually handle hyperparameter sweeps? Do you use any scikit-learn tools, or do you do it manually as I do?
Thank you in advance, and best regards! Have a nice weekend!
Hi @MisterTellini - I'm not sure what's going wrong in that instance, but I would suggest using one of the tools in sklearn.model_selection; for instance, GridSearchCV seems to take care of parallelism for you.
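For reference, here is a minimal sketch of what that could look like; the estimator (MLPClassifier) and the parameter grid are placeholders rather than the model from this repository, so swap in your own setup. The `n_jobs=-1` argument is what hands the parallelism over to scikit-learn instead of managing threads yourself:

```python
# Minimal GridSearchCV sketch: the estimator and grid values are illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

param_grid = {
    "learning_rate_init": [1e-2, 1e-3, 1e-4],   # hypothetical sweep values
    "hidden_layer_sizes": [(32,), (64,), (128,)],
}

search = GridSearchCV(
    MLPClassifier(max_iter=200),
    param_grid,
    cv=3,        # 3-fold cross-validation per parameter combination
    n_jobs=-1,   # run the candidate fits in parallel across all available cores
)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```

Each parameter combination is fitted independently, so there is no shared training state between workers, which avoids the kind of interference you can get when threads update one model concurrently.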