torch.compile support #304

Open
theolundqvist opened this issue Dec 18, 2024 · 0 comments
I am unable to run `torch.compile(encoder, fullgraph=True)`; it fails with an error when the call to `block` on line 2131 of `x_transformers.py` is traced. The failure seems to come from calling `next()` on a list iterator, possibly related to the block arguments being passed as `*args`. I was unable to work out what `block` is or where it comes from, so I would appreciate any help with this issue. I switched from the PyTorch implementation because of numerical instability, so I would really like a good alternative, and this repository seems very promising.
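
Roughly, this is the setup I am compiling (a minimal sketch, not my exact code; the `Encoder` config and tensor shapes below are placeholders):

```python
import torch
from x_transformers import Encoder

# minimal sketch -- dims/config are placeholders, not my exact setup
encoder = Encoder(dim = 512, depth = 6, heads = 8)

compiled = torch.compile(encoder, fullgraph = True)

x = torch.randn(1, 128, 512)                      # (batch, seq, dim) embeddings
mask = torch.ones(1, 128, dtype = torch.bool)     # attention mask

feats = compiled(x, mask = mask)                  # raises torch._dynamo.exc.Unsupported
```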

python==3.10.15
torch==2.4.1+cu124
GPU: A100-80GB
 File "/miniforge/envs/iris/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 780, in call_self_handler
    unimplemented(
  File "/miniforge/envs/iris/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 221, in unimplemented
    raise Unsupported(msg)
torch._dynamo.exc.Unsupported: invalid handler args <bound method BuiltinVariable.call_next of BuiltinVariable()> [ListIteratorVariable(length=0, index=0), ConstantVariable()] {}

from user code:
   File "/mnt/task_runtime/src/modules/encoder.py", line 55, in forward
    feats = self.encoder.forward(x, mask=mask)
  File "/miniforge/envs/iris/lib/python3.10/site-packages/x_transformers/x_transformers.py", line 2131, in forward
    out, inter = block(x, mask = mask, context_mask = self_attn_kv_mask, attn_mask = attn_mask, rel_pos = self.rel_pos, pos = pos, rotary_pos_emb = rotary_pos_emb, prev_attn = prev_attn, cache = next(iter_attn_cache, None), mem = layer_mem, mem_mask = layer_mem_mask, attn_bias = attn_bias, value_residual = maybe_self_attn_value_residual, return_intermediates = True)

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
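
If I read the traceback correctly, the unsupported pattern is `next()` with a default value on a plain list iterator inside the compiled region (the `cache = next(iter_attn_cache, None)` on line 2131). A standalone sketch of that pattern, independent of this library (this is my assumption about the root cause, not verified beyond the traceback):

```python
import torch

def fn(x, caches):
    it = iter(caches)
    # next() with a default on a list iterator, as in
    # x_transformers.py line 2131: cache = next(iter_attn_cache, None)
    cache = next(it, None)
    return x * 2 if cache is None else x + cache

compiled = torch.compile(fn, fullgraph = True)
compiled(torch.randn(4), [])   # I expect this to hit the same Unsupported error
```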