[Bug]: Padim model returns fixed size (256, 256) and not image size #2532
Comments
I've realized it's the default PreProcessor() added to Padim that is forcing a resize to (256, 256). However, when I disable the preprocessor I run into OOM issues during validation that weren't there in v1.2.
In v1.2 Padim's forward pass accepts my (2048, 110) shape. So something has changed in v2 (main) that causes Padim to OOM on GPUs unless the input is forced down in size.
I'm just going to say it's a skill issue on my part; I'm just trying to learn and work with the v2 beta before the documentation is ready. Looking forward to the updates, keep up the good work.
It is likely that when you disable the preprocessor the transforms are not applied, so your input is not resized to a smaller size, which causes the OOM issues. If you want to read more about preprocessing, you could follow this link.
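As a rough illustration (not tested against main, and the exact PreProcessor/Padim argument names may differ in the v2 beta), passing your own transform lets you control the size instead of disabling preprocessing entirely:

```python
# Illustrative sketch only: supply a custom transform instead of the default
# 256x256 resize. PreProcessor/Padim argument names may differ in the v2 beta.
import torch
from torchvision.transforms.v2 import Compose, Resize, ToDtype

from anomalib.models import Padim
from anomalib.pre_processing import PreProcessor

transform = Compose(
    [
        Resize((110, 2048)),  # (height, width); pick a smaller size if this OOMs
        ToDtype(torch.float32, scale=True),
    ]
)
model = Padim(pre_processor=PreProcessor(transform=transform))
```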
Thank you. I was originally looking through the code base, and I probably got mixed up between v1.2 and the v2 beta when I was looking for the docs. I went the lazy route and just cut my dataset in half to avoid the OOM issues. That finally lets me work on my assigned task of investigating the Anomalib exporter. I was able to get OpenVINO working, but ONNX isn't exporting/inferring correctly. We originally wanted TorchScript, but I'm going to learn OpenVINO and ONNX and push my work in that direction. This is today's problem and not relevant to the above issue, though.
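For context, the export calls I'm experimenting with look roughly like this (a paraphrased sketch, not the exact script; the argument names are from memory and may not match the v2 beta signatures exactly):

```python
# Rough sketch of my export experiments; not a verified snippet.
from anomalib.deploy import ExportType
from anomalib.engine import Engine
from anomalib.models import Padim

model = Padim()
engine = Engine()
# ... fit the model or point export at an existing checkpoint first ...

engine.export(model=model, export_type=ExportType.OPENVINO)  # works for me
engine.export(model=model, export_type=ExportType.ONNX)      # output doesn't infer correctly
```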
Describe the bug
This is for the main branch.
BUG: My images are supposed to be (2048, 110) in size, but the model is returning predictions as (256, 256). Resizing the (256, 256) output back to (2048, 110) just skews/distorts the image.
I'm trying to adapt usage code from v1.2 to v2 so I can look into the export functionality (OpenVINO/ONNX).
Due to the bug reported in #2510, I can't continue with the current v2 beta release. I saw that it was fixed in the #2508 merge request to main, so I have cloned main to use that version of the code.
In engine.py I added a print statement just before the `return self.trainer.predict(model, dataloaders, datamodule, return_predictions, ckpt_path)` line, which shows the images are still the correct size at that point. I have also added a check at the start of the forward pass in Padim, which shows the input is already resized by the time it reaches the model.
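The forward-pass check looked roughly like the following (paraphrased as a small standalone snippet rather than my edited copy of the library; `model.model` being the underlying torch PadimModel is how it appears to me in main and may change):

```python
# Paraphrased version of my check: wrap the torch model's forward so it
# prints the incoming tensor shape.
import torch
from anomalib.models import Padim

model = Padim()
original_forward = model.model.forward  # underlying torch PadimModel, as seen in main

def logging_forward(input_tensor: torch.Tensor, *args, **kwargs):
    # In my runs this prints (N, 3, 256, 256) even though the dataloader
    # batch still had the original resolution before trainer.predict().
    print("shape at model entry:", tuple(input_tensor.shape))
    return original_forward(input_tensor, *args, **kwargs)

model.model.forward = logging_forward
```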
Dataset
Other (please specify in the text field below)
Model
PADiM
Steps to reproduce the behavior
I set up my environment using uv as it's much faster than plain pip. The setup commands are below, followed by a rough sketch of the usage script.
uv venv .venv --python 3.11
source .venv/bin/activate
uv pip install pip
git clone https://github.com/openvinotoolkit/anomalib.git
cd anomalib/
git checkout main
uv pip install -e .[full]
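The usage script is roughly the following (adapted from my v1.2 code; the dataset path, directory names, and Folder arguments are placeholders, and the prediction attribute names are as I see them in main):

```python
# Minimal usage sketch adapted from my v1.2 code; paths and arguments are
# placeholders, not the exact script.
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Padim

datamodule = Folder(
    name="my_dataset",
    root="./datasets/my_dataset",
    normal_dir="good",
    abnormal_dir="bad",
)
model = Padim()
engine = Engine()

engine.fit(model=model, datamodule=datamodule)
predictions = engine.predict(model=model, datamodule=datamodule)

# Comes back as (256, 256) instead of the original (2048, 110).
print(predictions[0].anomaly_map.shape)
```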
OS information
OS information:
Expected behavior
In v1.2, predictions were returned at the correct size.
Screenshots
No response
Pip/GitHub
GitHub
What version/branch did you use?
main
Configuration YAML
None
Logs
Code of Conduct