Hi, the autoregressive video2world inference process is being killed. Has anyone else run into this issue?
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
  --input_type=text_and_video \
  --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
  --prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
  --video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \
  --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \
  --top_p=0.7 \
  --temperature=1.0 \
  --offload_guardrail_models \
  --offload_diffusion_decoder \
  --offload_ar_model \
  --offload_tokenizer \
  --offload_text_encoder_model
[01-17 23:20:56|INFO|cosmos1/models/autoregressive/inference/video2world.py:124:main] Run with input: {'visual_input': 'cosmos1/models/autoregressive/assets/v1p0/input.mp4', 'prompt': "A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions."}
[01-17 23:20:56|INFO|cosmos1/models/autoregressive/inference/world_generation_pipeline.py:875:generate] Run guardrail on prompt
Loading checkpoint shards: 100%|██████████████████| 3/3 [00:05<00:00, 1.84s/it]
[01-17 23:21:16|INFO|cosmos1/models/autoregressive/inference/world_generation_pipeline.py:880:generate] Pass guardrail on prompt
[01-17 23:21:16|INFO|cosmos1/models/autoregressive/inference/world_generation_pipeline.py:882:generate] Run text embedding on prompt
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1617: FutureWarning: 'clean_up_tokenization_spaces' was not set. It will be set to 'True' by default. This behavior will be deprecated in transformers v4.45, and will be then set to 'False' by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Killed
By the way, this is my second time running the inference; the first time, it downloaded a ~45 GB file before being killed. Thanks for the help :)
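A note on the symptom: "Killed" printed with no Python traceback usually means the Linux kernel's OOM killer terminated the process because system RAM was exhausted; the offload flags move model weights to CPU memory, so the 5B model plus the T5 text encoder can still exceed available RAM. A minimal sketch (assuming Linux, where `/proc/meminfo` is available) to check free memory before rerunning:

```python
# Report available system RAM by reading /proc/meminfo (Linux only).
# "Killed" during the "Run text embedding" step suggests the OOM killer
# fired while loading the text encoder into CPU memory.

def available_ram_gib(meminfo_path: str = "/proc/meminfo") -> float:
    """Return the kernel's MemAvailable estimate in GiB."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                kib = int(line.split()[1])  # value is reported in KiB
                return kib / (1024 ** 2)
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

if __name__ == "__main__":
    print(f"Available RAM: {available_ram_gib():.1f} GiB")
```

You can also confirm the OOM kill after the fact with `dmesg | grep -i "killed process"` (may require root). If available RAM is low, adding swap or running on a machine with more system memory is worth trying.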