
Autoregressive inference killed #63

Open
wentao-ju opened this issue Jan 17, 2025 · 1 comment
Labels: question (Further information is requested)

Comments


wentao-ju commented Jan 17, 2025

Hi, my autoregressive video2world inference run gets killed. Has anyone else hit this issue?

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
    --input_type=text_and_video \
    --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
    --prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
    --video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \
    --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \
    --top_p=0.7 \
    --temperature=1.0 \
    --offload_guardrail_models \
    --offload_diffusion_decoder \
    --offload_ar_model \
    --offload_tokenizer \
    --offload_text_encoder_model

[01-17 23:20:56|INFO|cosmos1/models/autoregressive/inference/video2world.py:124:main] Run with input: {'visual_input': 'cosmos1/models/autoregressive/assets/v1p0/input.mp4', 'prompt': "A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions."}
[01-17 23:20:56|INFO|cosmos1/models/autoregressive/inference/world_generation_pipeline.py:875:generate] Run guardrail on prompt
Loading checkpoint shards: 100%|██████████████████| 3/3 [00:05<00:00, 1.84s/it]
[01-17 23:21:16|INFO|cosmos1/models/autoregressive/inference/world_generation_pipeline.py:880:generate] Pass guardrail on prompt
[01-17 23:21:16|INFO|cosmos1/models/autoregressive/inference/world_generation_pipeline.py:882:generate] Run text embedding on prompt

/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1617: FutureWarning: 'clean_up_tokenization_spaces' was not set. It will be set to 'True' by default. This behavior will be deprecated in transformers v4.45, and will be then set to 'False' by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Killed
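For context, a bare "Killed" message with no Python traceback usually means the kernel sent SIGKILL to the process, most often via the OOM killer. A minimal sketch (POSIX, standard library only) of how a SIGKILL'd child is reported:

```python
import signal
import subprocess

# Launch a child that kills itself with SIGKILL, mimicking what the
# kernel OOM killer does to a process that exhausts physical memory.
proc = subprocess.run(
    ["python3", "-c", "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)

# subprocess reports death-by-signal as a negative return code.
print(proc.returncode)        # -9, i.e. -signal.SIGKILL
print(128 + signal.SIGKILL)   # 137, the exit status a shell would report
```

Running `echo $?` right after the killed command, or checking `dmesg` for an "Out of memory" entry, confirms whether the OOM killer was responsible.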

Btw, this is my second time running the inference; the first time, it downloaded a ~45GB file before being killed. Thanks for the help :)

@dhj-worker

I suspect the problem is insufficient physical memory (RAM) on the host.
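One quick way to check whether RAM is the bottleneck before rerunning: a sketch (Linux-specific, standard library only) that reads total and currently available physical memory via sysconf:

```python
import os

# Physical memory figures on Linux, via POSIX sysconf.
page_size = os.sysconf("SC_PAGE_SIZE")       # bytes per page
total_pages = os.sysconf("SC_PHYS_PAGES")    # all physical pages
avail_pages = os.sysconf("SC_AVPHYS_PAGES")  # pages currently free

total_gib = page_size * total_pages / 2**30
avail_gib = page_size * avail_pages / 2**30
print(f"RAM: {avail_gib:.1f} GiB available of {total_gib:.1f} GiB total")
```

If the available figure is well below what the model checkpoints need (note the offload flags move weights to CPU memory, which increases RAM pressure), the OOM killer is the likely culprit; adding swap or running on a machine with more RAM would be the usual workarounds.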

@sophiahhuang sophiahhuang added the question Further information is requested label Jan 24, 2025