I went through the notebook to generate embeddings for dogs.jpg. However, when hovering over the image, the mask is offset above the original image. I have tried this with various images, and the mask is always offset or skewed in some way. I also ran through the demo on a clean repo and still hit this issue. I suspect it has something to do with the scaled-down model being displayed (even though console.log tells me that image and maskImage have the same dimensions). Does anyone have any pointers?
If you really didn't modify anything in the code, then this shouldn't happen. However, I encountered something similar, so: did you dynamically derive the input_size from the transformed image? If you look into the SAM Predictor class, it takes the last two shape entries after the transform, which is necessary to derive the correctly upscaled masks.
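To illustrate: a minimal sketch of the resizing logic used by SAM's `ResizeLongestSide` transform, assuming the default longest-side length of 1024. The point is that `input_size` must be taken from the *transformed* image's last two shape entries, not from the original image; the example image dimensions below are hypothetical.

```python
import numpy as np

def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int = 1024):
    # Scale so the longest side equals long_side_length; round the other side.
    # This mirrors the shape computation in segment-anything's
    # ResizeLongestSide transform (assumption: default 1024 long side).
    scale = long_side_length / max(oldh, oldw)
    newh, neww = int(oldh * scale + 0.5), int(oldw * scale + 0.5)
    return newh, neww

# Hypothetical original image (H x W x C), e.g. a 534x800 photo.
image = np.zeros((534, 800, 3), dtype=np.uint8)

# input_size must come from the transformed image's last two shape entries;
# using the original image's shape here is what produces offset/skewed masks.
input_size = get_preprocess_shape(*image.shape[:2])
print(input_size)  # the resized (H, W) with longest side 1024
```

If `input_size` were instead set from `image.shape[:2]`, the mask upscaling step would map predictions back through the wrong geometry, which matches the offset/skew symptom described above.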