Why is the step of sorting instances by sequence length in descending order needed?

This was required in earlier versions of PyTorch so that the examples in a batch could be interleaved properly during the forward and backward passes: knowing the length of each example before zero-padding, and having the sequences sorted, was crucial for the interleaving to work. However, as of PyTorch 1.1.0 this is no longer needed, since you can pass enforce_sorted=False and the sorting is done internally. With newer versions of PyTorch you can also use pack_sequence instead of pack_padded_sequence, so you no longer need to pad the sequences yourself; you just provide a list of tensors, with each tensor being one sequence in the batch.
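Here's a minimal sketch of both approaches on a toy, unsorted batch (the tensor shapes and the small LSTM are just illustrative assumptions, not from the original thread):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pack_sequence

# A toy batch of three variable-length sequences (feature dim = 4),
# deliberately NOT sorted by length.
seqs = [torch.randn(5, 4), torch.randn(2, 4), torch.randn(3, 4)]
lengths = torch.tensor([s.size(0) for s in seqs])

# Older style: pad first, then pack. With enforce_sorted=False (PyTorch >= 1.1.0)
# the sorting is handled internally, so manual descending-length sorting is unnecessary.
padded = pad_sequence(seqs, batch_first=True)  # (batch, max_len, 4)
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)

# Newer convenience: pack the raw (unpadded) list of tensors directly.
packed2 = pack_sequence(seqs, enforce_sorted=False)

# Either packed batch can be fed straight into a recurrent layer.
rnn = torch.nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
out, (h, c) = rnn(packed2)
```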