
Using multiscale training strategy with Plus_Ultra.yaml dataset configuration #53

Open
mingheyuemankong opened this issue Aug 23, 2024 · 1 comment


@mingheyuemankong

Hello, the model's performance under the Plus_Ultra.yaml configuration is now very good.

However, I noticed that in Plus_Ultra.yaml you use multiple low- and high-resolution datasets and uniformly resize the images to [1024, 1024] during training.

I would like to ask whether you have considered adding a multi-scale training strategy to further improve the model's performance on images of varying resolutions. For example, for each batch, randomly select a scale from [384, 512, 640, 768, 1024, 1280] (or even more scales), resize the images to the selected size, and then feed them into the network for training (a rough sketch follows the two questions below).

(1) Is this feasible without altering the existing InSPyReNet network structure?
(2) Could it further improve the model's performance on real-world scenes?
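
Here is a minimal sketch of what I mean, assuming a PyTorch `DataLoader` whose dataset returns fixed-size `(image, mask)` tensor pairs (e.g. 1024×1024, as in Plus_Ultra.yaml); the function name and the scale list are just for illustration, not from the InSPyReNet codebase:

```python
import random

import torch
import torch.nn.functional as F

# Candidate square scales to sample from, one per batch (illustrative).
SCALES = [384, 512, 640, 768, 1024, 1280]

def multiscale_collate(batch):
    """Stack (image, mask) pairs, then resize the whole batch to one
    randomly chosen scale so every sample in the batch matches."""
    images = torch.stack([item[0] for item in batch])  # (B, 3, H, W)
    masks = torch.stack([item[1] for item in batch])   # (B, 1, H, W)
    size = random.choice(SCALES)
    images = F.interpolate(images, size=(size, size),
                           mode='bilinear', align_corners=False)
    # Nearest-neighbor keeps the ground-truth mask hard-edged.
    masks = F.interpolate(masks, size=(size, size), mode='nearest')
    return images, masks
```

It would be used as `DataLoader(dataset, batch_size=..., collate_fn=multiscale_collate)`; sampling the scale per batch (rather than per image) keeps all tensors in a batch the same shape, so no padding is needed.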

@crapthings

Something like aspect ratio bucketing?
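
For reference, a rough sketch of what that could look like, assuming a list of per-sample `(width, height)` pairs is available; the bucket ratios and the function name are hypothetical, not from this repo:

```python
import random
from collections import defaultdict

# Illustrative target aspect ratios (width / height) to bucket into.
BUCKET_RATIOS = [0.5, 0.75, 1.0, 1.33, 2.0]

def bucket_batches(dataset_sizes, batch_size):
    """Group sample indices by nearest aspect ratio, then form batches
    within each bucket so every batch can be resized to one shape with
    minimal distortion."""
    buckets = defaultdict(list)
    for idx, (w, h) in enumerate(dataset_sizes):
        nearest = min(BUCKET_RATIOS, key=lambda r: abs(r - w / h))
        buckets[nearest].append(idx)
    batches = []
    for indices in buckets.values():
        random.shuffle(indices)
        for i in range(0, len(indices), batch_size):
            batches.append(indices[i:i + batch_size])
    random.shuffle(batches)
    return batches
```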
