Hello, the performance of the model under the Plus_Ultra.yaml configuration is now very good.
However, I noticed that in Plus_Ultra.yaml you use multiple low- and high-resolution datasets and uniformly resize all of them to [1024, 1024] during training.
Have you considered adding a multi-scale training strategy to further improve the model's performance across a range of input resolutions? For example, for each batch, randomly select a size from [384, 512, 640, 768, 1024, 1280] (or even more scales), resize the images to that size, and then feed them into the network (see the sketch after the questions below).
(1) Is this feasible without altering the existing InSPyReNet network structure?
(2) Could it further improve the model's performance on real-world scenes?
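For reference, here is a minimal sketch of the kind of strategy I have in mind: a collate function that picks one random target size per batch so every sample in a batch shares the same resolution. The names (`SCALES`, `multi_scale_collate`) are hypothetical and this is not based on the repo's actual dataloader:

```python
import random
import torch
import torch.nn.functional as F

# Hypothetical scale list; any set of sizes the backbone accepts would work.
SCALES = [384, 512, 640, 768, 1024, 1280]

def multi_scale_collate(samples):
    """Resize all (image, mask) pairs in a batch to one randomly chosen size."""
    size = random.choice(SCALES)  # one scale per batch keeps tensors stackable
    images, masks = [], []
    for image, mask in samples:
        # image: [3, H, W] float tensor, mask: [1, H, W] float tensor.
        # Bilinear for images; nearest for masks so labels stay crisp.
        images.append(F.interpolate(image.unsqueeze(0), size=(size, size),
                                    mode='bilinear', align_corners=False))
        masks.append(F.interpolate(mask.unsqueeze(0), size=(size, size),
                                   mode='nearest'))
    return torch.cat(images, 0), torch.cat(masks, 0)

# Usage (assuming a dataset that yields (image, mask) tensor pairs):
# loader = DataLoader(dataset, batch_size=8, collate_fn=multi_scale_collate)
```

Sampling the scale per batch rather than per sample avoids padding and keeps the batch a single dense tensor, which is why I framed the question this way.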