Thank you for an excellent package!

I am working with very large single-cell datasets, attempting to embed various latent spaces generated from these data using pymde. Data sizes are commonly (30M, 30) or larger (currently up to (60M, 30) and growing). I've been using the single-cell recipe from scVI.
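Roughly, that recipe looks like the following (a sketch paraphrasing scvi-tools' `mde` helper from memory; the exact defaults may differ):

```python
import pymde
import torch

def mde(data, **kwargs):
    # Approximate reconstruction of scvi-tools' mde() wrapper; defaults may differ.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    _kwargs = dict(
        embedding_dim=2,
        constraint=pymde.Standardized(),  # standardized-embedding constraint
        repulsive_fraction=0.7,
        n_neighbors=15,
        device=device,
        verbose=False,
    )
    _kwargs.update(kwargs)
    # preserve_neighbors builds a k-NN graph on `data` and fits a
    # neighbor-preserving MDE problem on it.
    return pymde.preserve_neighbors(data, **_kwargs).embed(verbose=_kwargs["verbose"])
```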
The `preserve_neighbors` API runs out of memory with the default parameters (on both CPU and CUDA devices). As noted in the docs, using `init="random"` improves this somewhat, but it still OOMs with approximately 10% of our full dataset on a GPU with 24 GiB of RAM.
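For concreteness, here is a minimal sketch of the failing call (the variable names and the array here are illustrative stand-ins for one of our scVI latent spaces):

```python
import numpy as np
import pymde

# Stand-in for a latent matrix; in reality N ranges from ~30M to ~60M rows.
latent = np.random.randn(3_000_000, 30).astype(np.float32)

mde = pymde.preserve_neighbors(
    latent,
    init="random",  # per the docs, avoids the memory-heavy quadratic initialization
    device="cuda",
)
embedding = mde.embed()  # OOMs on a 24 GiB GPU at roughly this scale
```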
I was wondering if you could provide further advice on approaches to embedding extremely large matrices.
Thank you.