Allow models to use a lightweight sparse structure #3782
Conversation
I think this works for us and we should prioritize getting this done properly soon.
The overall implementation looks great. Thanks for prioritizing work on this.
I've asked a couple of follow-up questions and requested minor code changes.
ops_torch = import_optional("pylibcugraphops.pytorch")

- class BaseConv(nn.Module):
+ class BaseConv(torch.nn.Module):
This is not a user-facing class. It is only used to handle the case where we fall back to the full-graph variant. In addition, with the recent cugraph-ops refactoring disabling the MFG variant, we might remove this class entirely.
Oh, I am sorry, I left the review at the wrong line. I meant the SparseGraph class.
Resolved
csrc_ids: torch.Tensor, optional
    Compressed source indices. It is a monotonically increasing array of
    size (num_src_nodes + 1,). For the k-th source node, its neighborhood
    consists of the destinations between `dst_ids[csrc_ids[k]]` and
    `dst_ids[csrc_ids[k+1]]`.
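To make the indexing concrete, here is a minimal, self-contained sketch of the CSR layout the docstring describes; the tensor values are made up for illustration and are not taken from the PR:

```python
import torch

# Toy CSR pair: 3 source nodes, 5 edges.
csrc_ids = torch.tensor([0, 2, 3, 5])    # length num_src_nodes + 1
dst_ids = torch.tensor([1, 2, 0, 0, 2])  # one destination per edge

k = 2  # look up the neighborhood of the third source node
neighbors = dst_ids[csrc_ids[k] : csrc_ids[k + 1]]
print(neighbors)  # tensor([0, 2])
```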
I have a question regarding the case where num_dst_nodes > len(cdst_ids) - 1.
Let's look at the case below:
cdst_ids (compressed destination indices): 0, 2, 5, 7
src_indices: 1, 2, 2, 3, 4, 4, 5
I believe the following will work (please correct me):
num_src_nodes = 6
num_dst_nodes = 3
And I guess the following will fail (please correct me):
num_src_nodes = 6
num_dst_nodes = 5  # modified to a higher value to ensure alignment for output nodes that are missing
Question: so this will have to be handled by ensuring correct creation, because we want to handle the alignment problem between blocks.
It should be illegal when num_dst_nodes != len(cdst_ids) - 1. I will improve the error handling for this case. For example, PyG does lots of assertions to check sizes; we should throw proper exceptions.

cdst_ids (compressed destination indices): 0, 2, 5, 7
src_indices: 1, 2, 2, 3, 4, 4, 5

In your example with num_src_nodes = 6 and num_dst_nodes = 3, this translates to a COO of
(1, 2, 2, 3, 4, 4, 5)
(0, 0, 1, 1, 1, 2, 2)
With num_src_nodes = 6 and num_dst_nodes = 5, the constructor should fail, unless cdst_ids is augmented (cdst_ids = 0, 2, 5, 7, 7, 7).
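To make the expansion and the size check discussed here concrete, here is a minimal sketch; the helper name csc_to_coo is hypothetical and not part of this PR:

```python
import torch

def csc_to_coo(cdst_ids, src_ids, num_dst_nodes):
    # The size check discussed above: the compressed array must have
    # exactly num_dst_nodes + 1 entries, otherwise the input is rejected.
    if cdst_ids.numel() != num_dst_nodes + 1:
        raise ValueError(
            f"len(cdst_ids)={cdst_ids.numel()} must equal "
            f"num_dst_nodes + 1 = {num_dst_nodes + 1}"
        )
    # Destination k owns the slice src_ids[cdst_ids[k]:cdst_ids[k+1]], so
    # repeating k by its in-degree recovers the COO destination column.
    degrees = cdst_ids[1:] - cdst_ids[:-1]
    dst_ids = torch.repeat_interleave(torch.arange(num_dst_nodes), degrees)
    return src_ids, dst_ids

cdst_ids = torch.tensor([0, 2, 5, 7])
src_ids = torch.tensor([1, 2, 2, 3, 4, 4, 5])
print(csc_to_coo(cdst_ids, src_ids, num_dst_nodes=3))
# (tensor([1, 2, 2, 3, 4, 4, 5]), tensor([0, 0, 1, 1, 1, 2, 2]))

# num_dst_nodes=5 raises unless cdst_ids is padded to [0, 2, 5, 7, 7, 7].
```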
Thanks. Yep, this is what I was expecting. We will just make sure that @seunghwak's changes on the cugraph sampling side ensure that all the MFGs line up.
self.num_dst_nodes,
out_int32=self._dst_ids.dtype == torch.int32,
)
Do you think we should remove dst_ids if we are forcing CSC conversion? Otherwise we incur the memory overhead of always maintaining it.
We are forcing csc conversion for now, but in the future we may want to expand to other formats, so I think we might want to make this configurable via a class variable.
We can probably borrow the convention from formats. We won't follow their default of 'coo' -> 'csr' -> 'csc', but have our own version.
See the formats docs.
As we discussed via Slack, we will provide input_format and output_format to help specify which tensors are needed.
COO-format requires `src_ids` and `dst_ids`.
CSC-format requires `cdst_ids` and `src_ids`.
CSR-format requires `csrc_ids` and `dst_ids`.
I think we should force the user to provide the format requirement to prevent confusion, e.g. add a format variable, something like input_format, which takes the values coo, csc, and csr. Then we can raise errors according to the input the user provided.
Also, I don't love the name input_format, but you get the idea.
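A minimal sketch of the kind of validation being suggested, assuming a free-standing helper and the required-tensor pairs listed in the docstring above; the names here are illustrative and not the PR's actual API:

```python
import torch

_REQUIRED = {
    "coo": ("src_ids", "dst_ids"),
    "csc": ("cdst_ids", "src_ids"),
    "csr": ("csrc_ids", "dst_ids"),
}

def validate_input_format(input_format, **tensors):
    # Reject unknown formats up front.
    if input_format not in _REQUIRED:
        raise ValueError(
            f"input_format must be one of {sorted(_REQUIRED)}, got {input_format!r}"
        )
    # Raise if any tensor required by the chosen format is missing.
    missing = [name for name in _REQUIRED[input_format] if tensors.get(name) is None]
    if missing:
        raise ValueError(f"input_format={input_format!r} requires {missing}")

src_ids = torch.tensor([1, 2, 2, 3, 4, 4, 5])
cdst_ids = torch.tensor([0, 2, 5, 7])
validate_input_format("csc", cdst_ids=cdst_ids, src_ids=src_ids)  # passes
# validate_input_format("csc", src_ids=src_ids)                   # raises ValueError
```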
@VibhuJawa Please review it again. Here are some example usages.
LGTM. Thanks for making these changes quickly, @tingyu66. I think the PR is good to merge.
@tingyu66 should this still be in draft? It seems to be ready for review.
Yes, it's ready. Forgot to click the button.
/merge
This PR introduces a SparseGraph class to allow SAGEConv to use a more lightweight graph structure. The goal is to provide an option to bypass to_block, which is the bottleneck in the sampling workflow. I will submit another PR to extend the pattern to other models.
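For readers landing on this PR, here is a rough usage sketch of what the description points at. The import path, constructor arguments, and SAGEConv signature below are assumptions based on this conversation, not confirmed by the PR text; check the merged code for the real API.

```python
import torch
# Assumed import locations and signatures.
from cugraph_dgl.nn import SAGEConv, SparseGraph

# Build a CSC-backed graph straight from sampler output, skipping dgl.to_block().
cdst_ids = torch.tensor([0, 2, 5, 7], device="cuda")          # compressed dst indices
src_ids = torch.tensor([1, 2, 2, 3, 4, 4, 5], device="cuda")  # one source id per edge
sg = SparseGraph(size=(6, 3), src_ids=src_ids, cdst_ids=cdst_ids, formats="csc")

conv = SAGEConv(in_feats=16, out_feats=32).cuda()
feat = torch.randn(6, 16, device="cuda")
out = conv(sg, feat)  # expected shape: (3, 32)
```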