CUDA Out of Memory Error During Inference in samapi Environment #16
Hi @halqadasi, thank you for reporting the issue. Could you run the following script in QuPath and report the image size it prints? It reproduces the region request for the current viewer, which is what gets sent to the samapi server.

```groovy
import org.elephant.sam.Utils
import qupath.lib.awt.common.AwtTools
import qupath.lib.regions.RegionRequest

def viewer = getCurrentViewer()
def renderedServer = Utils.createRenderedServer(viewer)
def region = AwtTools.getImageRegion(viewer.getDisplayedRegionShape(), viewer.getZPosition(),
        viewer.getTPosition())
def viewerRegion = RegionRequest.createInstance(renderedServer.getPath(), viewer.getDownsampleFactor(),
        region)
viewerRegion = viewerRegion.intersect2D(0, 0, renderedServer.getWidth(), renderedServer.getHeight())
def img = renderedServer.readRegion(viewerRegion)
println "Image size processed on the server: (" + img.getWidth() + ", " + img.getHeight() + ")"
```
The size is 1133 × 731, and I got this error on the terminal:

@halqadasi the pixel size

@halqadasi, it seems that the OOM issue was caused by older versions of the dependencies.
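For anyone hitting the same problem: after upgrading the dependencies, a quick way to confirm that the GPU is visible and has memory to spare is the short check below. It assumes the samapi environment runs inference with PyTorch (SAM models are PyTorch-based, but treat the exact setup as an assumption); `torch.cuda.mem_get_info()` returns free and total device memory in bytes.

```python
# Sanity check after upgrading dependencies (assumes a PyTorch-based stack;
# treat that as an assumption about this particular environment).
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # mem_get_info() returns (free_bytes, total_bytes) for the current device.
    free_b, total_b = torch.cuda.mem_get_info()
    print(f"GPU memory: {free_b / 1e9:.2f} GB free of {total_b / 1e9:.2f} GB")
```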
While running inference tasks in the samapi environment, I encountered a CUDA out of memory error, causing the application to fall back to CPU inference. This significantly impacts performance. I'm looking for advice on mitigating this error or any potential fixes.

Environment
Steps to Reproduce
1. Activate the samapi environment: `source activate samapi`
2. Start the server: `uvicorn samapi.main:app --workers 2`
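A note on the command above (a general property of uvicorn, not a confirmed cause of this issue): `--workers 2` starts two independent server processes, and each process loads its own copy of the model onto the GPU, roughly doubling GPU memory use. When memory is tight, a single worker (`uvicorn samapi.main:app --workers 1`) halves that footprint.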
Expected Behavior
I expected the GPU to handle the inference tasks without running out of memory, allowing for faster processing times.
Actual Behavior
Received a warning/error indicating CUDA out of memory. The system defaulted to using the CPU for inference, significantly slowing down the process. The error message was:
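For readers unfamiliar with the fallback behavior described above, the sketch below shows the generic PyTorch pattern for catching a GPU out-of-memory error and retrying on the CPU. It is purely illustrative and is not samapi's actual implementation; `model` and `batch` are hypothetical placeholders, and `torch.cuda.OutOfMemoryError` requires PyTorch 1.13 or newer.

```python
# Illustrative only: generic GPU-OOM fallback pattern in PyTorch.
# Not samapi's actual code; `model` and `batch` are placeholders.
import torch

def run_inference(model, batch):
    try:
        # Assumes `model` already lives on the GPU.
        return model(batch.to("cuda"))
    except torch.cuda.OutOfMemoryError:  # requires torch >= 1.13
        # Release cached allocator blocks, then retry on the CPU (much slower).
        torch.cuda.empty_cache()
        return model.to("cpu")(batch.to("cpu"))
```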
Additional Information