[BUG] Still not working, there is an error (TypeError: unsupported operand type(s) for //: 'NoneType' and 'int') when running python playground.py #35
Comments
This is my pip list: accelerate 0.24.1
This is my /root/MiniGPT-5/config/minigpt4.yaml:
datasets:
run:
  # optimizer
  lr_sched: "linear_warmup_cosine_lr"
  weight_decay: 0.05
  seed: 42
  amp: True
  evaluate: False
  device: "cuda"
This is my /root/MiniGPT-5/minigpt4/configs/models/minigpt4.yaml:
model:
  # vit encoder
  image_size: 224
  # Q-Former
  num_query_token: 32
  # Vicuna
  llama_model: "/root/vicuna-7b-v1.1"
  # generation configs
  prompt: ""
preprocess:
Including "Vicuna-7b-v1.1", everything is in place. The paths and configuration files look fine, so why is there still such an error? My pip versions: torch==2.0.1, lightning==2.0.9.post0
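(Not from the thread, just a hedged way to sanity-check the config: loading the quoted YAML directly with OmegaConf, which MiniGPT-4-style code uses for these files, shows what image_size actually parses to.)

```python
# A quick check: load the model YAML quoted above and see what image_size parses to.
from omegaconf import OmegaConf

cfg = OmegaConf.load("/root/MiniGPT-5/minigpt4/configs/models/minigpt4.yaml")
print(cfg.model.get("image_size", None))  # should print 224; None would explain the error
```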
I'm trying to help. According to your error record:
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 190, in __init__
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'
But according to your config file, you have set image_size: 224.
Yes.
Please wait a moment, I need to set up a debugging environment.
File "/root/MiniGPT-5/model.py", line 68, in init Seems unable to: print('minigpt4_config.model_cfg.image_size' + str(minigpt4_config.model_cfg.image_size)) |
Then, can you check the
We are currently creating the environment and will use PyCharm to debug once it is ready. Then I will take a screenshot to show you the situation.
This is weird. According to your error: Traceback (most recent call last):
File "/root/MiniGPT-5/examples/playground.py", line 40, in
minigpt5 = MiniGPT5_Model.load_from_checkpoint(stage1_ckpt, strict=False, map_location="cpu", encoder_model_config=model_args, **vars(training_args))
File "/root/anaconda3/envs/minigpt5/lib/python3.9/site-packages/lightning/pytorch/core/module.py", line 1552, in load_from_checkpoint
loaded = _load_from_checkpoint(
File "/root/anaconda3/envs/minigpt5/lib/python3.9/site-packages/lightning/pytorch/core/saving.py", line 89, in _load_from_checkpoint
model = _load_state(cls, checkpoint, strict=strict, **kwargs)
File "/root/anaconda3/envs/minigpt5/lib/python3.9/site-packages/lightning/pytorch/core/saving.py", line 156, in _load_state
obj = cls(**_cls_kwargs)
File "/root/MiniGPT-5/model.py", line 68, in init
self.model = MiniGPT5.from_config(minigpt4_config.model_cfg)
File "/root/MiniGPT-5/minigpt4/models/mini_gpt4.py", line 247, in from_config
model = cls(
File "/root/MiniGPT-5/minigpt4/models/mini_gpt5.py", line 46, in init
super().__init__(*args, **kwargs)
File "/root/MiniGPT-5/minigpt4/models/mini_gpt4.py", line 53, in init
self.visual_encoder, self.ln_vision = self.init_vision_encoder(
File "/root/MiniGPT-5/minigpt4/models/blip2.py", line 65, in init_vision_encoder
visual_encoder = create_eva_vit_g(
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 416, in create_eva_vit_g
model = VisionTransformer(
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 259, in init
self.patch_embed = PatchEmbed(
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 190, in init
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'
Your error starts from
Yes, the error starts from the next call, "self.model = MiniGPT5.from_config(minigpt4_config.model_cfg)". Before this, no errors were reported. About the error: "(img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])"
I found that the values of img_size[1] and img_size[0] are None, and I think this is the reason for the error.
Because img_size[1]=None and img_size[0]=None, the error "TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'" occurred.
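(A minimal illustration of that failure mode, with hypothetical values: floor division against a None image size raises exactly this TypeError.)

```python
# Hypothetical values reproducing the report: PatchEmbed receives None when the
# configured image_size never reaches it.
img_size = (None, None)
patch_size = (14, 14)

try:
    num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
except TypeError as e:
    print(e)  # unsupported operand type(s) for //: 'NoneType' and 'int'
```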
Same as I said here.
I didn't see a wrong config in your file. To check whether you are reading the correct file, you should check the
I think that may be the reason. You have multiple
Please check the function default_config_path at line 79 of
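(A hedged sketch of that check; the class name, the classmethod, and the "pretrain_vicuna" key follow the MiniGPT-4 layout and are assumptions here, so adjust them if MiniGPT-5 renames any of them.)

```python
# Print the model_type -> config-file registry and the absolute path it resolves,
# to confirm which minigpt4.yaml is actually being read.
from minigpt4.models.mini_gpt4 import MiniGPT4

print(MiniGPT4.PRETRAINED_MODEL_CONFIG_DICT)             # mapping of model_type to yaml path
print(MiniGPT4.default_config_path("pretrain_vicuna"))   # absolute path of the YAML that will be loaded
```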
**Thank you, the problem has been resolved. But a new error has occurred:**
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
**I wonder whether the file I need is 'https://huggingface.co/julien-c/EsperBERTo-small/resolve/main/pytorch_model.bin'. The network environment here is not good, so I want to download 'pytorch_model.bin' first and put it in a local folder, but I don't know which folder is best to put it in. Please let me know, thank you. If it's not this file, please tell me the other file names so that I can download them and place them locally.**
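(If the goal is just to pre-fetch a Hugging Face file on a machine with a better connection and drop it into a local folder, one hedged way is huggingface_hub; the repo id below is the one from the URL quoted above, and the local directory is only an example.)

```python
# Download a single file from the Hub into a directory of your choice; the returned
# path is where the file ends up, so you can point the code (or copy the file) there.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="julien-c/EsperBERTo-small",        # repo from the URL quoted above
    filename="pytorch_model.bin",
    local_dir="/root/models/EsperBERTo-small",  # hypothetical local folder
)
print(local_path)
```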
I have now cloned 'stabilityai/stable-diffusion-2-1-base'; should I also put it in the /root/MiniGPT-5 directory? Where exactly should it be placed?
You can place it anywhere you want. Just change
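(For example, a sketch rather than the project's actual loading code: a locally cloned Stable Diffusion checkpoint can be loaded by passing its directory instead of the Hub id; the path below is just wherever you cloned it.)

```python
# Load the cloned stabilityai/stable-diffusion-2-1-base from disk rather than the Hub;
# whichever config field or constant MiniGPT-5 uses for the SD model name should be
# changed to this same local directory.
from diffusers import StableDiffusionPipeline

local_sd_dir = "/root/models/stable-diffusion-2-1-base"  # hypothetical location of the clone
pipe = StableDiffusionPipeline.from_pretrained(local_sd_dir)
```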
Now the models are all installed. The result is an error: CUDA out of memory.
File "/root/anaconda3/envs/minigpt555/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
I remember that the 'python3 playground.py --stage1_weight WEIGHT_FOLDER/stage1_cc3m.ckpt' command doesn't take up much memory. My server has 2 graphics cards, each a single card with 24GB of graphics memory.
Regarding the error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 21.99 GiB total capacity; 21.42 GiB already allocated; 107.00 MiB free; 21.57 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I once tried to reduce the video-memory fragment size to 32 MB with the command below, but it failed and reported the same error. I'm afraid I need you to find another way to help solve it:
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:32
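(One thing worth double-checking, as a sketch under the assumption that the variable was exported after Python had already started: PYTORCH_CUDA_ALLOC_CONF must be in the environment before CUDA is initialized, so set it in the shell or at the very top of the script.)

```python
# Set the allocator option before torch touches CUDA; setting it after the model is
# already built has no effect, which can look like the option "failed".
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:32"

import torch
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.total_memory / 1024**3:.1f} GiB total on GPU 0")
```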
Hello, there is an error (TypeError: unsupported operand type(s) for //: 'NoneType' and 'int') when running python playground.py
Operating system: ubuntu 20.04
Python 3.9.18
Other parameters: same as MiniGPT-5/requirements.txt
All three ckpt files are located in MiniGPT-5/config. The configuration files have all been changed. The weight used is Vicuna-7b-v1.1. However, the following error still occurred.
Run "Python playground. py --stage1_weight /root/MiniGPT-5/config/stage1_cc3m.ckpt --test_weight /root/MiniGPT-5/config/stage2_vist.ckpt" The following error occurred during command execution:
Seed set to 42
Loading VIT
Traceback (most recent call last):
File "/root/MiniGPT-5/examples/playground.py", line 40, in
minigpt5 = MiniGPT5_Model.load_from_checkpoint(stage1_ckpt, strict=False, map_location="cpu", encoder_model_config=model_args, **vars(training_args))
File "/root/anaconda3/envs/minigpt5/lib/python3.9/site-packages/lightning/pytorch/core/module.py", line 1552, in load_from_checkpoint
loaded = _load_from_checkpoint(
File "/root/anaconda3/envs/minigpt5/lib/python3.9/site-packages/lightning/pytorch/core/saving.py", line 89, in _load_from_checkpoint
model = _load_state(cls, checkpoint, strict=strict, **kwargs)
File "/root/anaconda3/envs/minigpt5/lib/python3.9/site-packages/lightning/pytorch/core/saving.py", line 156, in _load_state
obj = cls(**_cls_kwargs)
File "/root/MiniGPT-5/model.py", line 68, in init
self.model = MiniGPT5.from_config(minigpt4_config.model_cfg)
File "/root/MiniGPT-5/minigpt4/models/mini_gpt4.py", line 247, in from_config
model = cls(
File "/root/MiniGPT-5/minigpt4/models/mini_gpt5.py", line 46, in init
super().__init__(*args, **kwargs)
File "/root/MiniGPT-5/minigpt4/models/mini_gpt4.py", line 53, in init
self.visual_encoder, self.ln_vision = self.init_vision_encoder(
File "/root/MiniGPT-5/minigpt4/models/blip2.py", line 65, in init_vision_encoder
visual_encoder = create_eva_vit_g(
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 416, in create_eva_vit_g
model = VisionTransformer(
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 259, in init
self.patch_embed = PatchEmbed(
File "/root/MiniGPT-5/minigpt4/models/eva_vit.py", line 190, in init
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'