SequentialLR Scheduler #111
Conversation
Codecov Report

All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.

Additional details and impacted files:

@@           Coverage Diff           @@
##             main     #111   +/-   ##
=======================================
  Coverage        ?   97.14%
=======================================
  Files           ?      140
  Lines           ?     6028
  Branches        ?        0
=======================================
  Hits            ?     5856
  Misses          ?      172
  Partials        ?        0

☔ View full report in Codecov by Sentry.
LGTM
LGTM, let's add some tests for this - I suppose we need to test the configure_optimizers function as a whole by trying different optimizer and scheduler configurations.
CC: @kozlov721 on where the best place to put them would be. IMO under unit tests, but to run they need a LuxonisLightningModule instance I think, so I'm not sure whether this falls under integration tests instead?
I don't think this can be easily unit-tested. We can add an integration test with a short training run instead.
Yes, this makes sense. But can we use something like a 10-image dataset (or as small as possible) for such training runs so we don't increase the test times? We should use a very small dataset for all tests where we don't actually care about performance and are just testing functionality.
Will create a test for this as mentioned above. We can also reduce the CIFAR dataset to just about 10 images here.
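As a rough sketch of what such a scheduler test could look like (this exercises the chained scheduler directly rather than the repository's configure_optimizers; the model, values, and assertions are illustrative assumptions, not code from this PR):

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR


def test_sequential_lr_switches_at_milestone():
    # Tiny throwaway model; we only care about the LR schedule itself.
    model = nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1.0)

    warmup = LinearLR(optimizer, start_factor=0.5, total_iters=2)
    cosine = CosineAnnealingLR(optimizer, T_max=8)
    scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[2])

    lrs = []
    for _ in range(10):
        optimizer.step()
        scheduler.step()
        lrs.append(optimizer.param_groups[0]["lr"])

    # The warm-up phase ramps the LR up from 0.5 * base_lr ...
    assert lrs[0] > 0.5
    # ... and after the milestone, cosine annealing decays it again.
    assert lrs[-1] < lrs[1]
```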
SequentialLR Support for Training
Overview
This PR adds support for the SequentialLR learning rate scheduler in the training configuration. The SequentialLR scheduler allows chaining multiple learning rate schedulers, making it possible to implement a warm-up phase followed by a more complex learning rate schedule during training.

Changes
Added support for the SequentialLR scheduler in the training configuration to combine multiple schedulers, such as a warm-up phase (LinearLR) followed by CosineAnnealingLR.
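For reference, a minimal PyTorch sketch of the scheduler chaining this enables; the model, optimizer, and hyperparameter values are illustrative placeholders rather than this repository's actual configuration:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

# Placeholder model and optimizer.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Warm-up: linearly ramp the LR from 10% to 100% of the base LR over 5 epochs.
warmup = LinearLR(optimizer, start_factor=0.1, end_factor=1.0, total_iters=5)
# Main schedule: cosine annealing over the remaining epochs.
cosine = CosineAnnealingLR(optimizer, T_max=95)

# SequentialLR hands over from `warmup` to `cosine` at the milestone epoch.
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[5])

for epoch in range(100):
    # ... training steps for one epoch ...
    optimizer.step()
    scheduler.step()
```

The milestones list marks the epochs at which SequentialLR switches from one chained scheduler to the next, so the warm-up length and the annealing horizon can be tuned independently.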