The way AdvancedTreeSearchJob constructs the AdvancedTreeSearchLmImageAndGlobalCacheJob makes it prone to hash issues.
I'm running decodings on LibriSpeech and found multiple job instances in my work folder that all compute the same 1.8 GB LM image but have different hashes. In my case this is likely due to differing TDP scales, which go into the CRP for the LM image job and change the hash.
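To illustrate the problem, here is a minimal, self-contained sketch (not the actual i6_core/sisyphus code; the field names and `job_hash` helper are hypothetical simplifications): when the hash is derived from the full CRP, two setups that differ only in TDP scale get different hashes, even though the LM image they produce is identical. Restricting the hash to the fields the LM image actually depends on makes them collide as desired.

```python
import hashlib
import json

# Hypothetical stand-in for a sisyphus-style job hash:
# the hash is derived from everything passed into the job.
def job_hash(crp: dict) -> str:
    return hashlib.sha256(json.dumps(crp, sort_keys=True).encode()).hexdigest()[:12]

# Two decoding setups that differ only in the TDP scale.
crp_a = {"lexicon": "lex.xml", "lm": "4gram.arpa", "tdp_scale": 0.5}
crp_b = {"lexicon": "lex.xml", "lm": "4gram.arpa", "tdp_scale": 1.0}

# Full-CRP hashes differ, so the identical LM image is computed twice.
assert job_hash(crp_a) != job_hash(crp_b)

# The LM image only depends on the lexicon and the LM itself;
# hashing just those fields gives both setups the same job hash.
LM_IMAGE_RELEVANT = ("lexicon", "lm")

def lm_image_hash(crp: dict) -> str:
    return job_hash({k: crp[k] for k in LM_IMAGE_RELEVANT})

assert lm_image_hash(crp_a) == lm_image_hash(crp_b)
```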
Another candidate for (a different kind of) hash issues is the feature scorer, which also goes into the CRP but is explicitly left hashed.
The global caches that are computed are indeed different, so splitting the job into two jobs (one for the LM image, one for the global cache) might be a solution.
What's the status here? Is this "solved"?
If there is replacement code that reproduces the joint behavior with the two jobs, can you post an example, @NeoLegends?