We found that BERT-based models report the same probabilities for all options when the combined length of the passage and question exceeds 512 tokens, which is their maximum input length. We identified the issue, but after discussion we concluded that simply truncating the input to 512 tokens is not a fundamental solution, since the key information needed to answer the question may lie in the truncated part of the passage. We will prepare a solution for longer passages soon.
We are now considering splitting the passage into smaller sentence-level chunks and returning the maximum KDA value computed over all (chunk, question) pairs. A rough sketch of this idea follows below.
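As a minimal sketch of the proposed chunking strategy (not the repo's actual implementation): `kda_fn` below is a hypothetical callable standing in for whatever KDA scoring routine the project exposes, and the sentence splitting and token budgeting are illustrative only.

```python
from typing import Callable, List

from transformers import AutoTokenizer

MAX_LENGTH = 512  # BERT's maximum input length in tokens


def split_into_chunks(passage: str, question: str, tokenizer, max_length: int = MAX_LENGTH) -> List[str]:
    """Split a passage into sentence-based chunks so each chunk plus the question fits the token limit."""
    # Naive sentence split; a real implementation might use nltk or spacy.
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    question_len = len(tokenizer.tokenize(question))
    budget = max_length - question_len - 3  # reserve room for special tokens ([CLS]/[SEP])

    chunks, current, current_len = [], [], 0
    for sent in sentences:
        sent_len = len(tokenizer.tokenize(sent))
        # Start a new chunk if adding this sentence would exceed the budget.
        if current and current_len + sent_len > budget:
            chunks.append(". ".join(current) + ".")
            current, current_len = [], 0
        current.append(sent)
        current_len += sent_len
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks


def kda_over_chunks(passage: str, question: str, options: List[str],
                    kda_fn: Callable, tokenizer) -> float:
    """Compute KDA for every (chunk, question) pair and return the maximum value."""
    chunks = split_into_chunks(passage, question, tokenizer)
    return max(kda_fn(chunk, question, options) for chunk in chunks)


# Example usage (kda_fn would be the project's own KDA scorer):
# tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# score = kda_over_chunks(long_passage, question, options, kda_fn, tokenizer)
```

Note that a single sentence longer than the budget would still form an over-length chunk; handling that case (e.g. hard-splitting such sentences) would be a further refinement.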