This repository has been archived by the owner on Nov 1, 2024. It is now read-only.
First, thank you for this contribution and the release of code and pre-trained models. I have trained two of the smaller settings and get similar error rates to the released models.
In the MODEL_ZOO.md readme, you mention that "the reported errors are averaged across 5 reruns for robust estimates". If these additional models (5 per setting) are saved somewhere and match the current codebase, releasing them would be a valuable addition to what is currently available. For instance, research on ensemble methods or analysis of variation across reruns would benefit greatly from this contribution.
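For context, one way the reruns could be used for ensemble research is simple probability averaging across the independently trained models. A minimal sketch (the shapes and the toy data here are hypothetical, not from pycls):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average class probabilities from several independently trained models.

    prob_list: list of arrays of shape (num_samples, num_classes),
    one array per rerun. Returns the averaged probabilities and the
    ensemble's predicted class per sample.
    """
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg, avg.argmax(axis=1)

# Toy example: 3 "reruns", 2 samples, 4 classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(4), size=2) for _ in range(3)]
avg, preds = ensemble_predict(probs)
```

Averaging probabilities (rather than majority-voting labels) is a common baseline precisely because it needs nothing beyond the saved checkpoints, which is why the extra rerun weights would be so useful.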
For the larger settings (e.g., RegNetX-32GF at 76 train hours for 8 GPUs), training 5 models would take over two weeks on 8 GPUs, making it difficult for most researchers to do. Thanks!
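To make the cost concrete, a back-of-the-envelope calculation from the numbers above (76 hours per run on an 8-GPU node):

```python
# Rough cost of 5 reruns of RegNetX-32GF, per the figures in the issue.
hours_per_run = 76   # wall-clock hours for one run on an 8-GPU node
runs = 5

total_hours = hours_per_run * runs   # node-hours for all reruns
total_days = total_hours / 24        # wall-clock days on a single node
gpu_hours = total_hours * 8          # aggregate GPU-hours

print(total_hours, round(total_days, 1), gpu_hours)  # 380 15.8 3040
```

That is roughly 16 days of continuous training on one 8-GPU node, which is the "over two weeks" figure cited above.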