This repository has been archived by the owner on Aug 18, 2023. It is now read-only.
Hello! Our static bug checker has found a performance issue in baselines/train.py: train_step is repeatedly called in a for loop, but a tf.function-decorated function, train_inner_step, is defined inside train_step and called there.
In that case, each call to train_step creates a new train_inner_step, so TensorFlow traces a new graph every time, which can trigger the tf.function retracing warning.
However, some variables in the inner function depend on the outer function, so the code may become more complex if this is changed. Is the change necessary? What do you think?
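For illustration, a minimal sketch of the pattern the checker flags; the function bodies here are placeholders, not the actual baselines/train.py code. The Python side effect inside the inner function runs only while tracing, so it counts how many graphs get built:

```python
import tensorflow as tf

trace_log = []  # one entry per graph trace; appended only during tracing

def train_step(x):
    # A brand-new tf.function object is created on every call, so its
    # graph cache starts empty and the inner function is retraced
    # each time train_step runs.
    @tf.function
    def train_inner_step(x):
        trace_log.append(1)  # Python side effect: runs only while tracing
        return x * 2.0

    return train_inner_step(x)

for _ in range(3):
    train_step(tf.constant(1.0))

print(len(trace_log))  # -> 3, one trace per loop iteration
```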
Similarly, train_inner_step is defined in train_model, and the outer function is repeatedly called here and here. Here is the TensorFlow documentation supporting this.
Briefly, for better efficiency, it is better to define the tf.function-decorated function once, outside the repeatedly called function, and reuse it, rather than redefining (and retracing) it on every call.
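A minimal sketch of the recommended structure, again with placeholder bodies: the tf.function lives at module level so its graph cache is reused, and values the inner function previously captured from the outer scope are passed in as tensor arguments instead.

```python
import tensorflow as tf

trace_log = []  # one entry per graph trace; appended only during tracing

# Defined once at module level: the same tf.function object (and its
# graph cache) is reused across all calls.
@tf.function
def train_inner_step(x, scale):
    trace_log.append(1)  # Python side effect: runs only while tracing
    return x * scale

def train_step(x, scale):
    # Outer-scope values are passed as tensor arguments rather than
    # being captured by closure, so the function stays module-level.
    return train_inner_step(x, scale)

for _ in range(3):
    train_step(tf.constant(1.0), tf.constant(2.0))

print(len(trace_log))  # -> 1, traced once and reused across iterations
```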
Looking forward to your reply.