
docs: Added native Pytorch DDP example with neps #163

Merged 2 commits into master on Jan 17, 2025

Conversation

gopaljigaur (Collaborator)

This pull request introduces a new example for using PyTorch's Distributed Data Parallel (DDP) in the neps_examples module. The changes include adding a new script and updating the module's __init__.py file to reference the new example.

New example addition:

  • neps_examples/__init__.py: Added "pytorch_native_ddp" to the list of available examples.
  • neps_examples/efficiency/pytorch_native_ddp.py: Introduced a new script that demonstrates the use of PyTorch's DDP for distributed training. This includes setting up the process group, defining a simple neural network model, and running a basic training loop across multiple GPUs (a sketch of such a script follows below).
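For orientation, here is a minimal sketch of what such a DDP script might look like. The model architecture, hyperparameters, and the CPU-friendly "gloo" backend are assumptions made to keep the sketch self-contained (a multi-GPU run would typically use the "nccl" backend and move the model and data to each rank's device), and the NePS-specific wiring is omitted; see neps_examples/efficiency/pytorch_native_ddp.py for the actual example.

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP


def setup(rank: int, world_size: int) -> None:
    # Set up the process group so all workers can communicate.
    # "gloo" keeps this runnable on CPU; a real multi-GPU job would
    # usually use "nccl" instead.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)


class ToyModel(nn.Module):
    # A simple feed-forward network standing in for a real model.
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train(rank: int, world_size: int) -> None:
    setup(rank, world_size)

    # Wrapping the model in DDP synchronizes gradients across processes.
    model = DDP(ToyModel())
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    # A basic training loop on random data.
    for _ in range(5):
        optimizer.zero_grad()
        outputs = model(torch.randn(20, 10))
        loss = loss_fn(outputs, torch.randn(20, 5))
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2  # number of worker processes (hypothetical)
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```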

gopaljigaur changed the title from "Added native Pytorch DDP example with neps" to "docs: Added native Pytorch DDP example with neps" on Dec 12, 2024
gopaljigaur (Collaborator, Author)

@eddiebergman @DaStoll

gopaljigaur merged commit 5d8ce6d into master on Jan 17, 2025. 11 checks passed.