
Add functionality to correctly load old metagraphs #117

Closed
oadams opened this issue Mar 20, 2018 · 10 comments


oadams commented Mar 20, 2018

There are occasionally problems loading old stored models. I've figured out how to resolve these issues (working on the SavedModel branch) but need to integrate this functionality into model loading. The user should not need to specify the number of layers or hidden units, since all of this is stored in the metagraph.

There are related design issues here too. For example, it looks like adding variables when creating an rnn_ctc model puts them in the global default graph, undermining the namespacing that was the point of using a class in the first place. This isn't a huge deal, as the user is only really going to be using one model at a time, but it seems like a way bugs can creep in, so it should be changed.

@oadams oadams self-assigned this Mar 20, 2018
@oadams oadams added this to the v1.0.0 milestone Mar 22, 2018
@shuttle1987

I'm a little concerned about this:

> This isn't a huge deal, as the user is only really going to be using one model at a time, but it seems like a way bugs can creep in so that should be changed.

In the context of the Web API, more than one model is likely to be available for transcribing at once.


oadams commented May 23, 2018

In the context of defining new models, or decoding from existing models, this could be an issue. The solution is probably to use TensorFlow variable namespacing/scoping.

(Note that in the context of decoding from saved models this won't be an issue, since in the SavedModel branch the decode() function just reads from a stored metagraph.)

```python
def decode(model_path_prefix: Union[str, Path],
           input_paths: Sequence[Path],
           label_set: Set[str]) -> List[List[str]]:
    model_path_prefix = str(model_path_prefix)
    # TODO Confirm that the WAVs exist.
    # TODO Confirm that the feature files exist. Create them if they don't.
    # TODO Change the second argument to have some upper bound. If the caller
    # requests 1000 WAVs be transcribed, they shouldn't all go in one batch.
    fn_batches = utils.make_batches(input_paths, len(input_paths))
    # Load the model and perform decoding.
    metagraph = load_metagraph(model_path_prefix)
    with tf.Session() as sess:
        metagraph.restore(sess, model_path_prefix)
        for fn_batch in fn_batches:
            batch_x, batch_x_lens = utils.load_batch_x(fn_batch)
            # TODO These placeholder names should be a backup if names from a
            # newer naming scheme aren't present. Otherwise this won't
            # generalize to different architectures.
            feed_dict = {"Placeholder:0": batch_x,
                         "Placeholder_1:0": batch_x_lens}
            dense_decoded = sess.run("SparseToDense:0", feed_dict=feed_dict)
    # Create a human-readable representation of the decoded output.
    indices_to_labels = labels.make_indices_to_labels(label_set)
    human_readable = dense_to_human_readable(dense_decoded, indices_to_labels)
    return human_readable
```
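For the namespacing/scoping idea, a minimal sketch could look like this (the `ScopedModel` class and the names `model_a`/`model_b` are hypothetical, not the project's code): give each model instance its own `tf.Graph`, with variables created under a per-instance `tf.variable_scope`, so two models never clash in the global default graph.

```python
# Hypothetical sketch: each model instance owns a private tf.Graph, and its
# variables live under a per-instance scope, so nothing leaks into the
# global default graph.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

class ScopedModel:
    def __init__(self, name: str, hidden_size: int):
        self.graph = tf.Graph()
        with self.graph.as_default():
            with tf.variable_scope(name):
                # In the real model this would be the RNN/CTC variables.
                self.w = tf.get_variable("w", shape=[hidden_size])

a = ScopedModel("model_a", 4)
b = ScopedModel("model_b", 4)
# Two independently scoped variables in two independent graphs.
print(a.w.name, b.w.name)  # model_a/w:0 model_b/w:0
```

This would let the Web API hold several models in memory at once, each with its own session bound to its own graph.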


shuttle1987 commented May 24, 2018

I'll have a look into using the Tensorflow variable scoping.

@shuttle1987

Will #197 close this issue?


oadams commented Sep 9, 2018

It will, but only for using saved models to decode, not for resuming training (that can be addressed when we move to Keras down the line).

@shuttle1987

What exactly is stopping training from being resumed?


oadams commented Sep 12, 2018

Based on your other emails I assume you figured this out, but let me know if you want clarification.

@oadams oadams removed the bug label Oct 13, 2018
@shuttle1987

I think this issue is resolved; please reopen if I closed it in error.


oadams commented Dec 29, 2018

Happy to leave this closed, but a note to the reader: while we can load metagraphs for decoding, training can't be resumed.

@shuttle1987

I think #180 covers that issue about resuming training.
