No, PMC_LLaMA_7B has not been tuned on any instruction datasets, while our latest PMC_LLaMA_13B has been instruction tuned.
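For reference, a minimal sketch of loading the two checkpoints with Hugging Face `transformers`; the hub IDs below are assumptions based on the public model cards, so please verify them against the repository's README:

```python
# Sketch: loading the continual-pre-trained 7B checkpoint vs. the
# instruction-tuned 13B checkpoint. Hub IDs are assumed, not confirmed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# PMC_LLaMA_7B: continual pre-training only, no instruction tuning (assumed ID)
base_id = "chaoyi-wu/PMC_LLAMA_7B"
# PMC_LLaMA_13B: additionally instruction tuned (assumed ID)
instruct_id = "axiong/PMC_LLaMA_13B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
```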
We have not tried LLaMA-2 for continual pre-training, since in our evaluation LLaMA-2, compared with LLaMA, is mainly enhanced in instruction-following ability; the gain in basic knowledge is limited.
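To make the setup concrete: continual pre-training here is ordinary causal-LM training resumed from an existing checkpoint on domain text (papers and books). Below is a minimal sketch with Hugging Face `transformers`; the corpus path, base-model ID, and hyperparameters are illustrative placeholders, not the settings actually used for PMC_LLaMA:

```python
# Sketch of continual pre-training: causal-LM training initialized from an
# existing checkpoint. All names/hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM, AutoTokenizer,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

model_id = "huggyllama/llama-7b"  # base LLaMA; swap in a LLaMA-2 base to compare
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Domain corpus (papers/books) as plain-text files -- placeholder path.
raw = load_dataset("text", data_files={"train": "domain_corpus/*.txt"})["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="continual-pretrain",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=32,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator copy input_ids into labels, i.e. plain
    # next-token prediction rather than masked-LM training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```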
Is the above checkpoint the model pre-trained only on unsupervised data, i.e., one that hasn't seen any instruction-tuning datasets?
I have another question: did you use llama2-base as the base model to conduct continual pre-training with research papers and books?