In our paper, during one-shot tuning for video editing, we use an adapter that was pretrained on the LAION dataset in the text-to-image stage. We then fine-tune this pretrained adapter with the newly introduced spatio-temporal information from the original video, which further enhances the desired effects. For convenience in open-sourcing, however, the currently released version trains a new adapter from scratch on the original video. Because one-shot tuning involves only the original video and its corresponding prompt, the retrained adapter has minimal impact on the semantic aspects, so we did not enable it in the previously open-sourced version. We have since updated the repo, and you can use it directly now.
In ContextDiff_finetune.py, context_shift is passed in as True,
but in the prepare_latents_ddim_inverted function of video_diffusion.pipelines.ddim_spatial_temporal.DDIMSpatioTemporalStableDiffusionPipeline, context_shift is commented out. What does this mean — is context_shift actually used?
The ddim_clean2noisy_loop function also has context_shift commented out.
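To make the question concrete, here is a minimal hypothetical sketch of the pattern being asked about. The class and function names are taken from the question; the body is illustrative only and is not the repo's actual implementation:

```python
# Hypothetical illustration (not the real repo code): a flag is accepted by the
# pipeline's signature, but the code path that would consume it is commented out,
# so passing context_shift=True has no observable effect.
class DDIMSpatioTemporalStableDiffusionPipeline:
    def prepare_latents_ddim_inverted(self, latents, context_shift=False):
        # The flag arrives here, but the shift it would trigger is disabled:
        # if context_shift:
        #     latents = self.apply_context_shift(latents)  # hypothetical helper
        return latents

pipe = DDIMSpatioTemporalStableDiffusionPipeline()
x = [1.0, 2.0]
# With the branch commented out, the flag value makes no difference:
assert pipe.prepare_latents_ddim_inverted(x, context_shift=True) == x
```

If the released code matches this pattern, the flag set in ContextDiff_finetune.py would be silently ignored, which is what the question is asking the authors to confirm.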