Training doubts #25
Comments
Hi,
Okay, will check that and let you know. Thanks.
I'm facing the same problem, is there any updated solution?
Actually, I haven't tried it again since, as I was working on other projects. But I will try it again and let you know.
@Rutvik21 @DynamicCodes @vijendra1125 I realised what the problem is after running into it myself. The gradient explosion happens when the class names in label.pbtxt do not match those in your TFRecord file during training.
Make sure the `name:` entries in label.pbtxt use exactly the same strings (same spelling and case) that were written into the TFRecord, instead of a slightly different variation.
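One quick way to check this is to list every class string stored in the TFRecord and compare it against the `name:` fields in label.pbtxt. A minimal sketch follows; the `train.record` path is a hypothetical placeholder and the feature key assumes the standard TF Object Detection API dataset layout, so adjust both to your setup:

```python
# Minimal sketch: print every class name stored in a TF Object Detection API
# style TFRecord so it can be compared against the `name:` entries in label.pbtxt.
# "train.record" and the feature key are assumptions; adjust them to your files.
import tensorflow as tf

TFRECORD_PATH = "train.record"  # hypothetical path to your training TFRecord

seen = set()
for raw_record in tf.data.TFRecordDataset(TFRECORD_PATH):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    class_texts = example.features.feature["image/object/class/text"].bytes_list.value
    seen.update(name.decode("utf-8") for name in class_texts)

print("Class names found in the TFRecord:")
for name in sorted(seen):
    print(" -", name)
# Every name printed here must appear verbatim (same spelling and case) in label.pbtxt.
```

If the two lists differ even by case or a stray space, fix the label map (or regenerate the TFRecords) before training again.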
Still got the same loss explosion after 25,000 steps. Edit: new issue. Checking out this article gives a much clearer picture, but I can't work out how to apply the fix. Stack Overflow answers suggest:
Has anyone come to any other conclusions, or does anyone know how to make these changes when training MobileNetV2 from the object detection model zoo?

An update on my further work: my training is now quite stable. The loss fluctuates by decimal points, but that's fine with me. The trade-off with this method is that training time has increased, if you're up for that. In short: stop training when the loss explodes, then continue with an edited config (a sketch of that kind of edit is below).
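For reference, here is a sketch of the kind of config edit meant above, assuming a stock ssd_mobilenet_v2 pipeline.config from the detection model zoo. The numbers are illustrative assumptions rather than values from this repo: lower the initial learning rate and add gradient clipping in `train_config`.

```
train_config {
  # Illustrative values only; start from the optimizer block already present
  # in your own pipeline.config.
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.0004   # lowered (stock configs often ship 0.004)
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  # Caps the gradient norm so a single bad batch cannot blow up the weights.
  gradient_clipping_by_norm: 10.0
}
```

A smaller learning rate is also why training takes longer, which matches the trade-off mentioned above.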
Good luck 🤩
Hello,
I have cloned your GitHub repo and run it in Colab with the configuration file available in your repo. But after some steps the loss explodes (reaches > 10^9). What is the problem, and what configuration did you use when you trained the model?