
Support for Blackwell and Thor #2710

Open
phantaurus opened this issue Jan 21, 2025 · 2 comments
Labels
triaged Issue has been triaged by maintainers

Comments

@phantaurus

Hello!

I’m curious if there are plans to support the upcoming Blackwell architecture (e.g., the Thor GPU platform) in TensorRT-LLM. Is there a timeline or roadmap for this integration?

Thank you so much!

@nv-guomingz
Collaborator

The upcoming 0.17 release will support Blackwell.

@nv-guomingz added the triaged label on Jan 22, 2025
@johnnynunez

@phantaurus Blackwell works with the new CUDA versions. I have an RTX 5090; a quick check is sketched below.
(Screenshots attached: IMG_2517, IMG_2518)

More references:
pytorch/pytorch#145270 (Blackwell support I added to PyTorch)
Dao-AILab/flash-attention#1436
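
Not from the thread, but for anyone wanting to confirm that their installed stack actually recognizes a Blackwell GPU, here is a minimal sketch. It assumes a CUDA-enabled PyTorch build is installed; the RTX 5090 reports compute capability 12.0 (sm_120), and older CUDA builds will not list that architecture in torch.cuda.get_arch_list().

```python
import torch

# Check whether the installed PyTorch/CUDA stack sees a Blackwell-class GPU.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Device: {torch.cuda.get_device_name(0)} (sm_{major}{minor})")
    print(f"CUDA runtime bundled with this build: {torch.version.cuda}")
    # Architectures this PyTorch build was compiled for; Blackwell support
    # requires the corresponding sm_* entry (e.g. sm_120 for RTX 5090).
    print(f"Compiled architectures: {torch.cuda.get_arch_list()}")
else:
    print("No CUDA device visible to PyTorch.")
```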
