No, I am seeing the same error. I also used the same function (tf.clip_by_global_norm), but I found that the learning rate and that function are not the key reasons. When I generated the vocab I set its size to 4682, and vocab_size in train.py is also 4682. Likewise, I do not know whether decreasing the batch size would help.
One answer I read suggested it might be related to vanishing/exploding gradients, but I have not found a way to fix it.
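For reference, global-norm gradient clipping in TF1-style graph code usually follows the pattern below. This is only a minimal sketch: the toy linear model, clip_norm=5.0, and the optimizer are placeholder assumptions, not taken from this repository's train.py.

```python
import tensorflow as tf

# Minimal sketch of global-norm gradient clipping (TF1 graph mode).
# The tiny linear model and all hyperparameters here are placeholders.
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.random_normal([10, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

params = tf.trainable_variables()
grads = tf.gradients(loss, params)
# Rescale all gradients together so their global norm stays <= 5.0,
# the usual guard against exploding gradients.
clipped, _global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.apply_gradients(zip(clipped, params))
```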
In train.py
the error is
and the error is found here
One questionable point I found is GPU:0. I think it may be related to my GPU, so I tried adding code like this.
I do not know whether it will help, but I want to try.
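The exact code added is not shown in the thread; a common change along these lines (purely a hypothetical sketch, not the commenter's actual code) is pinning the graph to one device and relaxing GPU memory allocation:

```python
import tensorflow as tf

# Hypothetical sketch; the code the commenter actually added is not shown.
# Pin graph construction to a specific device ('/cpu:0' rules the GPU out).
with tf.device('/gpu:0'):
    pass  # ... build the model here ...

# Allow soft placement and grow GPU memory on demand instead of
# reserving it all at once, which avoids some GPU-related failures.
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```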