
decouple embedding creation from optimizer #186

Merged

Conversation

zhuofan1123 (Contributor) commented:

This PR refactors the embedding creation interface, decoupling it from the optimizer. Users can now designate the embeddings to be optimized when initializing the optimizer.
cpp:

// Create the embedding and the optimizer independently, then bind the
// optimizer to the embedding.
wholememory_create_embedding(&wm_embedding, ...);
wholememory_create_embedding_optimizer(&optimizer, ...);
wholememory_embedding_set_optimizer(wm_embedding, optimizer);

python:

# Create the embedding first; designate it when initializing the optimizer.
wm_embedding = wgth.create_embedding(...)
wm_optimizer = wgth.create_wholememory_optimizer(wm_embedding, "adam", {})
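
A slightly fuller Python sketch of the new flow is below. This is a minimal sketch: the communicator helper and the argument values passed to create_embedding (memory type, memory location, dtype, and table shape) are illustrative assumptions, not the exact WholeGraph signature.

import torch
import pylibwholegraph.torch as wgth

# Obtain a WholeMemory communicator (the helper name is an assumption for
# this sketch; any valid communicator would do).
comm = wgth.comm.get_global_communicator()

# 1) Create the embedding table on its own; no optimizer is required here.
wm_embedding = wgth.create_embedding(
    comm,
    "distributed",     # memory type (illustrative value)
    "cuda",            # memory location (illustrative value)
    torch.float32,     # embedding dtype
    [1_000_000, 128],  # [row_count, embedding_dim] (illustrative values)
)

# 2) Create the optimizer afterwards, designating the embedding it should
#    update; the empty dict stands in for optimizer hyperparameters.
wm_optimizer = wgth.create_wholememory_optimizer(wm_embedding, "adam", {})

The point of the refactor is that the embedding no longer needs to know about the optimizer at creation time: the association is established afterwards, via wholememory_embedding_set_optimizer in C++ or by passing the embedding to create_wholememory_optimizer in Python.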


copy-pr-bot bot commented Jun 12, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.


@linhu-nv (Contributor) left a comment:

seems good to me.

@BradReesWork added the improvement (Improves an existing functionality) and non-breaking (Introduces a non-breaking change) labels on Jun 12, 2024.
@BradReesWork (Member) commented:

/okay to test

@BradReesWork (Member) commented:

/merge

rapids-bot merged commit 8d4cd9b into rapidsai:branch-24.08 on Jun 13, 2024.
54 checks passed
@zhuofan1123 deleted the optimizer_api_modification branch on June 14, 2024.
Labels: improvement (Improves an existing functionality), non-breaking (Introduces a non-breaking change)
3 participants