Hi, thanks for sharing this useful code! I have a question about the accuracy on the commonsense reasoning tasks. In the README, the accuracy of LLaMA (for example) is:

while the Llama2 paper reports:

Some tasks show lower accuracy after fine-tuning, e.g. 76.5 -> 68.9 on BoolQ. Could you kindly explain this? Thanks a lot!
Given the MMLU performance referenced in the Llama2 paper, I believe the results in its Table 20 reflect a 5-shot setting, while LLM-Adapters' results are primarily zero-shot, so the two sets of numbers are not directly comparable.
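To illustrate the distinction: a few-shot (e.g. 5-shot) evaluation prepends several solved examples to the prompt before the test question, while zero-shot presents the question alone. The sketch below is a hypothetical, minimal prompt builder for a BoolQ-style yes/no task; `build_prompt` and its fields are illustrative names, not the actual evaluation code of either paper or this repo.

```python
# Hypothetical sketch of zero-shot vs. few-shot prompt construction
# for a BoolQ-style yes/no task. Not the actual harness used by
# LLM-Adapters or the Llama2 paper.

def build_prompt(passage, question, shots=()):
    """Build a prompt; `shots` holds (passage, question, answer) demos.

    With shots=() this is a zero-shot prompt; with k demos it is k-shot.
    """
    parts = []
    for p, q, a in shots:
        parts.append(f"Passage: {p}\nQuestion: {q}\nAnswer: {a}")
    parts.append(f"Passage: {passage}\nQuestion: {question}\nAnswer:")
    return "\n\n".join(parts)

# One illustrative demonstration example (made up for this sketch).
demos = [
    ("Water boils at 100C at sea level.",
     "Does water boil at 100C at sea level?",
     "yes"),
]

zero_shot = build_prompt("The sky appears blue in daylight.", "Is the sky green?")
few_shot = build_prompt("The sky appears blue in daylight.", "Is the sky green?", demos)

# The few-shot prompt contains the demo; the zero-shot one does not.
print("Does water boil" in few_shot)   # True
print("Does water boil" in zero_shot)  # False
```

Because the model sees worked examples of the expected answer format, few-shot scores on the same task are typically higher than zero-shot scores, which can account for part of the gap between the README and the Llama2 paper's numbers.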