
Superpoint excessive memory usage #18

Open
Eunchan24 opened this issue Jun 20, 2024 · 3 comments

Comments

@Eunchan24
Contributor

Hello,
As the title indicates, all GPU memory seems to be occupied as soon as the SuperPoint model is loaded.

The image below shows the GPU state when only the SuperPoint model is loaded:
[Screenshot, 2024-06-20 22:38: GPU memory usage]

superpoint_extract.py, line 40:
`tf1.saved_model.loader.load(self._sess, [tf1.saved_model.tag_constants.SERVING, …`
Rather than allocating only as much memory as the model's size, this line appears to claim all available GPU memory.
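This is the default behavior of TF1-style sessions: they reserve nearly all GPU memory up front, regardless of model size. A minimal sketch of one way to avoid that, assuming the TF1 compat API (`tensorflow.compat.v1`) used by this repo — the session is created with `allow_growth` so memory is allocated on demand; the commented `saved_model.loader.load` call and `export_dir` stand in for the repo's actual loading code:

```python
import tensorflow.compat.v1 as tf1

# By default a TF1 Session maps almost all GPU memory at creation time.
# allow_growth makes it start small and grow allocations as needed.
config = tf1.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the process at a fraction of total GPU memory:
# config.gpu_options.per_process_gpu_memory_fraction = 0.3

sess = tf1.Session(config=config)
# The SavedModel could then be loaded into this session as before, e.g.:
# tf1.saved_model.loader.load(sess, [tf1.saved_model.tag_constants.SERVING], export_dir)
```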

@Eunchan24 Eunchan24 changed the title Superpoint's excessive memory usage Superpoint excessive memory usage Jun 20, 2024
@kke19

kke19 commented Jun 26, 2024

Hi, I have the same problem. Do you have a good solution yet? @Eunchan24

@kke19

kke19 commented Jun 26, 2024

Hi,
When testing, I loaded only the original og model, and all of my GPU memory filled up without any other models loaded, so the problem seems to occur when loading the og model itself.
The same thing happens on cards with different amounts of memory: in my case, both a 3090 (24 GB) and an L20 (48 GB) were essentially filled.

@Eunchan24
Contributor Author

Hi, in my case all of my GPU memory was used up when loading the SuperPoint model.
As a temporary fix, I call set_memory_growth() before loading OmniGlue, so memory is allocated on demand instead of all at once.
omniglue_extract.py
Add the lines below after line 36:

    # Enable on-demand GPU memory allocation before any model is loaded.
    physical_devices = tf.config.list_physical_devices('GPU')
    for gpu in physical_devices:
        try:
            # Must be called before the GPU has been initialized.
            tf.config.experimental.set_memory_growth(gpu, True)
            print(f"Memory growth set for {gpu}")
        except RuntimeError as e:
            # Raised if memory growth is set after GPU initialization.
            print(e)

I'm not sure if this is the correct solution, but I'm looking for a more complete solution.
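Another option, if on-demand growth is not enough (for example, to leave a guaranteed amount of memory for other processes), would be a hard per-GPU memory cap via TF2's logical device configuration. A sketch, not tested against this repo; the 4096 MB limit is an arbitrary assumption that would need tuning to the model's actual footprint:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    # Cap this process at 4 GB on each GPU (assumed value; tune as needed).
    # Like set_memory_growth, this must run before the GPU is initialized.
    tf.config.set_logical_device_configuration(
        gpu,
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
```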

@kke19 kke19 mentioned this issue Jun 27, 2024