Hi, when I was running:
python -m demo.app --resume_from_checkpoint chenjoya/videollm-online-8b-v1plus --attn_implementation sdpa
I got a strange error. When I try to ask a question on the web page, the Assistant's reply always gets stuck at various points and reports an error. The error in the terminal is:
Traceback (most recent call last):
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/queueing.py", line 706, in process_events
    response = await route_utils.call_process_api(
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/blocks.py", line 2018, in process_api
    result = await self.call_function(
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/blocks.py", line 1579, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/utils.py", line 691, in async_iteration
    return await anext(iterator)
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/utils.py", line 685, in __anext__
    return await anyio.to_thread.run_sync(
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/utils.py", line 668, in run_sync_iterator_async
    return next(iterator)
  File "/data/wj/anaconda3/envs/videollm/lib/python3.10/site-packages/gradio/utils.py", line 829, in gen_wrapper
    response = next(iterator)
  File "/data/wj/videollm-online/demo/app.py", line 92, in gr_liveinfer_queue_refresher_change
    query, response = liveinfer()
  File "/data/wj/videollm-online/demo/inference.py", line 129, in __call__
    query, response = self._call_for_response(video_time, query)
  File "/data/wj/videollm-online/demo/inference.py", line 48, in _call_for_response
    assert self.last_ids == 933, f'{self.last_ids} != 933' # HACK, 933 = ]\n
AssertionError: tensor([[0]], device='cuda:0') != 933
Could you please tell me what to do? Thanks a lot!
By the way, my cli.py is working fine.
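In case it helps with debugging: the 933 in that assert is a hard-coded token id that the demo expects to correspond to the ]\n marker, while tensor([[0]]) means the model's last output was id 0 instead. Below is a minimal sketch (not from the repo, and assuming the checkpoint ships a standard Hugging Face tokenizer config) of how one could check which ids this checkpoint's tokenizer actually assigns to "]\n" and what id 0 maps to:

```python
# Minimal sketch, not part of the videollm-online codebase.
# Assumes chenjoya/videollm-online-8b-v1plus exposes a standard Hugging Face tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("chenjoya/videollm-online-8b-v1plus")

ids = tok.encode("]\n", add_special_tokens=False)
print(ids)                              # the assert in demo/inference.py expects the last id here to be 933
print(tok.convert_ids_to_tokens(ids))   # the corresponding token strings
print(tok.convert_ids_to_tokens([0]))   # what id 0 (the value shown in the AssertionError) maps to
```

If the last id printed there is not 933, the hard-coded value in the assert may simply not match this tokenizer; if it is 933, the mismatch more likely comes from the demo's streaming/generation state rather than the tokenizer itself.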