Hi, thanks for the interesting work!

I've been running the updated code and observe that at the pretraining stage the loss converges to ~3 (slightly above 3). Does my training show a similar tendency to your official experimental setting? If this is expected, how should I interpret the difference from the original LLaVA-1.5 pretraining, where the loss finally converges to ~2?
May I know the rough converged loss value of the fine-tuning stage?
According to your paper, Sec. 3.1: "In our experiments, we show that ViT and position embedding parameters can be kept frozen during pretraining, and updating these parameters during the instruction-tuning stage is sufficient for good performance." This implies the ViT is fine-tuned, but the author claims in another issue that the ViT is frozen all the time. Can you clarify this point? From my understanding, since the ViT positional embedding is changed to adapt to dynamic aspect ratios (similar to Pix2Struct), the ViT needs to be fine-tuned.
Many thanks!
In our new implementation, the pretraining loss of LLaVA-UHD v1 and LLaVA-UHD v2 finally converges to ~2, which is a good reference for checking whether the model has converged.
In the SFT stage, the converged loss of LLaVA-UHD v1 is about 0.75~0.8, and that of LLaVA-UHD v2 is about 0.65~0.7. You can reproduce our model for a detailed check.
In our findings, the ViT does not need to be fine-tuned when its position encoding is only minimally changed; that said, updating it during instruction tuning does further improve MLLM performance.
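For reference, here is a minimal PyTorch-style sketch of this stage-wise freezing scheme. It is only an illustration, not the exact training code in this repository: the `vision_tower` parameter prefix and the `model` object are assumptions.

```python
# Hypothetical sketch: freeze the ViT (including its position embeddings)
# during pretraining, and optionally unfreeze it for instruction tuning (SFT).
def set_vit_trainable(model, trainable: bool):
    """Toggle requires_grad for all vision-tower parameters."""
    for name, param in model.named_parameters():
        if "vision_tower" in name:  # assumed prefix for ViT parameters
            param.requires_grad = trainable

# Pretraining stage: keep the ViT frozen, train the projector only.
set_vit_trainable(model, False)

# Instruction-tuning (SFT) stage: unfreezing the ViT is not strictly required,
# but it can further improve MLLM performance.
set_vit_trainable(model, True)
```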
Moreover, our repository has been substantially improved and almost all known bugs have been fixed. For details, please refer to the main branch and the LLaVA-UHD v1 branch. If you run into any new problems, feel free to open a new issue.