Hi, thanks for your great work! I would like to point out a potential bug in this code:
Calling `add_special_tokens` without checking the embedding size is very dangerous, especially for LLaMA. In fact, LLaMA uses `<end_of_text>` as both its EOS and BOS token during training, so you may not need to add new tokens at all. Otherwise, you need to resize the embedding after `add_special_tokens`, or the new token ids will be out of bounds in `torch.gather`.
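For reference, a minimal sketch of the usual guard with Hugging Face Transformers (the checkpoint path and the `pad_token` choice below are illustrative, not what `train.py` actually uses):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "path/to/llama-checkpoint" is a placeholder; use the actual checkpoint from train.py.
model = AutoModelForCausalLM.from_pretrained("path/to/llama-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-checkpoint")

# add_special_tokens returns how many tokens were actually added to the vocabulary.
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})

# If any tokens were added, the embedding matrix must be resized to match;
# otherwise the new token ids index past the end of the embedding table,
# which surfaces as an out-of-bounds error in ops like torch.gather.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```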
Code line:
RRHF/train.py, line 302 (commit e1a2b61)