Resolve __hfma2 intrinsic error by excluding unsupported Maxwell architecture in setup.py #1557

Open · wants to merge 1 commit into base: main
Conversation

@HIT-cwh commented Jan 14, 2025

When attempting to compile torchao from source for Hopper GPUs using pip install -e ., I encountered a build failure. The nvcc version in my environment is 12.6. The error occurs in torchao/csrc/cuda/sparse_marlin/mma.h:

FAILED: /projects/ao/build/temp.linux-x86_64-cpython-310/torchao/csrc/cuda/sparse_marlin/marlin_kernel_nm.o
        /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /cpfs01/shared/llm_ddd/caoweihan/projects/ao/build/temp.linux-x86_64-cpython-310/torchao/csrc/cuda/sparse_marlin/marlin_kernel_nm.o.d -I/usr/local/lib/python3.10/dist-packages/torch/include -I/usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.10/dist-packages/torch/include/TH -I/usr/local/lib/python3.10/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.10 -c -c /projects/ao/torchao/csrc/cuda/sparse_marlin/marlin_kernel_nm.cu -o /projects/ao/build/temp.linux-x86_64-cpython-310/torchao/csrc/cuda/sparse_marlin/marlin_kernel_nm.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -t=0 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1016"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_72,code=sm_72 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_87,code=sm_87 -gencode=arch=compute_90,code=compute_90 -gencode=arch=compute_90,code=sm_90 -std=c++17
        /projects/ao/torchao/csrc/cuda/sparse_marlin/mma.h(142): error: identifier "__hfma2" is undefined

The issue arises because the __hfma2 intrinsic is not available on the compute_52 (Maxwell) architecture (the half-precision intrinsics require compute capability 5.3 or higher), even though I am only targeting the compute_90 (Hopper) architecture. It seems that we should explicitly specify the supported GPU architectures in setup.py to ensure compatibility with newer architectures. Here's the proposed update:

extra_compile_args["nvcc"].extend([
    "-gencode=arch=compute_60,code=sm_60",  # Pascal
    "-gencode=arch=compute_70,code=sm_70",  # Volta
    "-gencode=arch=compute_75,code=sm_75",  # Turing
    "-gencode=arch=compute_80,code=sm_80",  # Ampere
    "-gencode=arch=compute_90,code=sm_90",  # Hopper
])

Looking forward to your feedback!

pytorch-bot bot commented Jan 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1557

Note: Links to docs will display an error until the docs builds have been completed.

❌ 6 New Failures

As of commit 5ff5a07 with merge base ad61822:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @HIT-cwh!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@supriyar supriyar requested review from drisspg and jcaip January 23, 2025 18:08
@drisspg (Contributor) commented Jan 23, 2025

So this gating is typically done via the TORCH_CUDA_ARCH_LIST environment variable. This is great for local development since it lets you build only what you need, so it's faster.

For instance, we only build sm80+ in the official wheels:

TORCH_CUDA_ARCH_LIST="8.0;8.6"

We don't set it locally since, as you have found, our kernels can build for some of the lower archs, just not all of them. We should maybe lower-bound this setting, but still via the env var above.
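For reference, a minimal sketch of the env-var workflow described above (the arch values here are illustrative, not the official wheel list):

```shell
# Build only for Ampere and Hopper, so Maxwell (5.2) is never targeted and
# __hfma2 stays defined; torch's cpp_extension reads this variable at build time.
export TORCH_CUDA_ARCH_LIST="8.0;9.0"
# pip install -e .
```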

@jcaip (Contributor) commented Jan 24, 2025

cc @drisspg do you have an example of how we can split based on CUDA arch? Maybe we can add a check that way instead, although if it's unset by default (and if that equates to building everything) that might get a little tricky.
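One possible shape for such a check, sketched with a hypothetical filtered_arch_list helper and a hypothetical 6.0 lower bound (not current torchao behavior):

```python
# Hypothetical lower bound: drop Maxwell (5.x) and older, per the discussion above.
MIN_ARCH = (6, 0)

def filtered_arch_list(arch_list):
    """Keep only arches at or above MIN_ARCH from a TORCH_CUDA_ARCH_LIST-style
    string such as "5.2;8.0;8.6+PTX"."""
    kept = []
    for arch in arch_list.split(";"):
        base = arch.split("+")[0]  # strip a "+PTX" suffix if present
        major, minor = (int(x) for x in base.split("."))
        if (major, minor) >= MIN_ARCH:
            kept.append(arch)
    return ";".join(kept)

# In setup.py this would read os.environ.get("TORCH_CUDA_ARCH_LIST");
# a literal string is used here for illustration.
print(filtered_arch_list("5.2;8.0;9.0"))  # -> 8.0;9.0
```

This handles the numeric form of the variable; the unset-by-default case (build everything) would still need a separate decision, as noted above.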
