Add functionality to correctly load old metagraphs #117
Comments
I'm a little concerned about this: in the context of the Web API, more than one model is likely to be available for transcribing at once. In the context of defining new models, or decoding from existing models, this could be an issue. The solution is probably to use TensorFlow variable namespacing/scoping. (Note that in the context of decoding from saved models this won't be an issue, as in persephone/persephone/model.py, lines 48 to 81 at e616919.)
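A minimal sketch, assuming TF 1.x-style graph building, of how variable scoping could keep two models apart in one process; the builder function, scope names, and shapes below are illustrative assumptions, not persephone's actual API:

```python
import tensorflow as tf

def build_rnn_ctc(scope_name, hidden_size=250):
    # Hypothetical builder: everything created inside the scope is prefixed
    # with `scope_name/`, so two models can coexist without name clashes.
    with tf.variable_scope(scope_name):
        W = tf.get_variable("W", shape=[hidden_size, hidden_size])
        b = tf.get_variable("b", shape=[hidden_size])
        return W, b

w_a, _ = build_rnn_ctc("model_a")
w_b, _ = build_rnn_ctc("model_b")
print(w_a.name)  # model_a/W:0
print(w_b.name)  # model_b/W:0
```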
I'll have a look into using TensorFlow variable scoping.
Will #197 close this issue?
It will, but only for using saved models to decode, not so that training can be resumed (that can be addressed when we move to Keras down the line).
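For decoding, the rough shape of loading a saved metagraph looks something like the sketch below; the checkpoint prefix and tensor names are assumptions for illustration, and the trailing comment notes why resuming training needs more than this.

```python
import tensorflow as tf

ckpt_prefix = "exp/model/model_best.ckpt"                  # hypothetical checkpoint prefix
saver = tf.train.import_meta_graph(ckpt_prefix + ".meta")  # rebuilds the saved graph

with tf.Session() as sess:
    saver.restore(sess, ckpt_prefix)                       # restores the trained weights
    graph = tf.get_default_graph()
    # Tensor names are illustrative; decoding only needs the inference ops,
    # so the number of layers / hidden units never has to be re-specified.
    batch_x = graph.get_tensor_by_name("batch_x:0")
    batch_x_lens = graph.get_tensor_by_name("batch_x_lens:0")
    hyps = graph.get_tensor_by_name("hyps:0")
    # Resuming training would additionally need the train op and optimizer
    # state (e.g. Adam moments, global step) wired back up, which is the
    # part that is not handled here.
```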
What exactly is stopping training from being resumed?
Based on your other emails I assume you figured this out, but let me know if you want clarification.
I think this issue is closed; please reopen if I closed it in error.
Happy to leave this closed, but a note to the reader: while we can load metagraphs for decoding, training can't be resumed.
I think #180 covers that issue about resuming training.
There occasionally are some problems loading old stored models. I've figured out how to resolve these issues (working on the SavedModel branch) but need to integrate this functionality into model loading. The user should not need to specify the number of layers or hidden units, as this is all stored in the metagraph.

There are related design issues here too. For example, it looks like adding variables when creating an rnn_ctc model puts them in the global graph, undermining the namespacing attempt behind using a class in the first place. This isn't a huge deal, as the user is only really going to be using one model at a time, but it seems like a way bugs can creep in, so that should be changed.
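One possible way to address the global-graph concern, sketched under the assumption of TF 1.x: give each model instance its own tf.Graph so construction can't leak into (or collide with) the default graph. The class name, placeholder shapes, and variable names here are illustrative only.

```python
import tensorflow as tf

class IsolatedModel:
    """Hypothetical wrapper: each instance owns its own graph and session."""
    def __init__(self, hidden_size=250, num_feats=41):
        self.graph = tf.Graph()
        with self.graph.as_default():
            # Variables and ops created here live only in self.graph, so a
            # second instance cannot collide with this one's names.
            self.batch_x = tf.placeholder(
                tf.float32, [None, None, num_feats], name="batch_x")
            self.W = tf.get_variable("W", shape=[hidden_size, hidden_size])
        self.session = tf.Session(graph=self.graph)

model_a = IsolatedModel()
model_b = IsolatedModel()  # no clashes with model_a's variables
```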