Fix Chakra Errors #160
Conversation
Hi @srinivas212 and @tushar-krishna.
Updating ETFeeder and supporting newer PyTorch have become separate PRs (#176, #177), and this PR is ready for review.
Summary
This PR addresses multiple issues in the Chakra converter:
1. Improper Handling of NCCL All-to-All Communication
Chakra incorrectly distinguishes between point-to-point and collective communication. In NCCL, all-to-all is implemented as point-to-point communication, but Chakra's current logic treats these as distinct, leading to an incorrect type for `PyTorchNode`. More details on NCCL point-to-point can be found here.
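As a minimal sketch of the intended classification (the function and keyword list are made up for illustration, not the converter's actual code), an all-to-all op should still map to a collective node type even though NCCL lowers it to send/recv pairs:

```python
# Hypothetical sketch: classify a communication op name so that NCCL
# all-to-all, although implemented with point-to-point send/recv, is
# still treated as a collective rather than a point-to-point node.

COLLECTIVE_KEYWORDS = ("allreduce", "allgather", "reducescatter", "broadcast", "alltoall")

def classify_comm_op(op_name: str) -> str:
    """Return a coarse node type for a communication operator name."""
    name = op_name.lower().replace("_", "").replace("-", "")
    if any(keyword in name for keyword in COLLECTIVE_KEYWORDS):
        # all-to-all lands here even though NCCL issues send/recv pairs under the hood
        return "COMM_COLL"
    if "send" in name:
        return "COMM_SEND"
    if "recv" in name:
        return "COMM_RECV"
    return "COMP"
```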
2. Logging Inconsistency
There was a mismatch in logging levels: sync dependencies were logged via `logging.info`, while other dependencies used `logging.debug`. This PR resolves the inconsistency by standardizing the logging approach.
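A minimal sketch of the standardization (the logger name and helper are hypothetical): all dependency messages are emitted at the same level instead of mixing `info` and `debug`.

```python
import logging

logger = logging.getLogger("chakra.trace_linker")  # hypothetical logger name

def log_dependency(kind, src_id, dst_id):
    # All dependency kinds (sync or otherwise) now log at the same level.
    logger.debug("Adding %s dependency: %d -> %d", kind, src_id, dst_id)
```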
3. False Positive Dependencies from HTA
HTA returns false positives for sync dependencies, leading to invalid `later op -> earlier op` dependencies. This causes Chakra to fail on certain traces; the converter was found to encounter two critical failures as a result.
4. Update trace_linker to use external_id for finding a GPU op's parent CPU op
Many operations were matched with the wrong parent CPU op during trace linking. This PR solves the problem by using `external_id` instead of `ev_idx`.
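A hypothetical sketch of the linking strategy (the `external_id` and `id` dictionary keys are assumptions for illustration): index CPU ops by `external_id` and resolve each GPU op's parent through the same field.

```python
def link_gpu_to_cpu_ops(cpu_ops, gpu_ops):
    """Map each GPU op to its parent CPU op via external_id (assumed fields)."""
    cpu_by_external_id = {op["external_id"]: op for op in cpu_ops}
    links = {}
    for gpu_op in gpu_ops:
        parent = cpu_by_external_id.get(gpu_op["external_id"])
        if parent is not None:
            links[gpu_op["id"]] = parent["id"]
    return links
```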
5. Handling HTA Errors in Chakra
The trace linker was terminating unexpectedly due to errors in HTA. Although this may stem from trace inconsistencies, the issue does not occur when HTA is excluded. Updated Chakra to handle these errors by raising exceptions instead of terminating the trace linker.
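A sketch of the error-handling pattern, with a made-up exception class and a caller-supplied `analyze` callable standing in for the HTA entry point:

```python
class HTAError(RuntimeError):
    """Raised when HTA fails while analyzing a trace (hypothetical exception type)."""

def load_sync_dependencies(trace_path, analyze):
    """`analyze` stands in for the HTA analysis call; wrap its failures in HTAError."""
    try:
        return analyze(trace_path)
    except Exception as exc:
        # Surface the failure to the caller instead of terminating the trace linker.
        raise HTAError(f"HTA failed on {trace_path}: {exc}") from exc
```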
6. Proper Encoding of pg_name in Collective Operations
Identified an issue where `SendRecv`, `Reduce-Scatter`, and `All-Gather` operations do not correctly encode `pg_name` following updates on the PyTorch side. Modified Chakra to ensure proper encoding of `pg_name` in these collective operations.
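A small sketch of the intent, using dictionary-style access as a stand-in for the converter's real data structures (the key names are assumptions): read `pg_name` from the PyTorch trace entry and attach it to the collective node's attributes.

```python
def encode_pg_name(node_attrs, pytorch_entry):
    """Attach the process-group name to a collective node's attributes (assumed keys)."""
    pg_name = pytorch_entry.get("pg_name")
    if pg_name is not None:
        node_attrs["pg_name"] = str(pg_name)
```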
Test Plan
I tested the fixes using Mixtral 8x3B traces collected with the NeMo framework (NVIDIA).
traces_device_0.zip