
Display warning for unknown quants config instead of an error #35963

Open · wants to merge 7 commits into base: main

Conversation

SunMarc (Member) commented on Jan 29, 2025:

What does this PR do?

This PR changes how we handle an unknown quantization config. Instead of raising an error, we skip the quantization logic and display a warning. Partially fixes #35471.
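The behavior described above can be sketched as follows. This is an illustrative stand-alone sketch, not the actual transformers internals: the `SUPPORTED_METHODS` set and the free function `supports_quant_method` are assumptions mirroring the staticmethod shown later in the diff.

```python
import logging

logger = logging.getLogger(__name__)

# Illustrative set of known methods; the real library derives this from its
# registered quantizers, not a hardcoded set.
SUPPORTED_METHODS = {"bitsandbytes_4bit", "bitsandbytes_8bit", "gptq", "awq"}


def supports_quant_method(quantization_config_dict):
    """Return True if the config's quant_method is known; warn and return False otherwise."""
    quant_method = quantization_config_dict.get("quant_method", None)
    if quant_method not in SUPPORTED_METHODS:
        # Previously this case raised an error; now we warn and let the caller
        # skip the quantization logic entirely.
        logger.warning(
            f"Unknown quantization type, got {quant_method}. "
            "Skipping quantization and loading the model without it."
        )
        return False
    return True
```

With this in place, loading a checkpoint whose config carries an unrecognized method (e.g. `fp8`, as in the linked issue) proceeds unquantized with a warning instead of failing.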

@SunMarc SunMarc requested a review from MekkCyber January 29, 2025 16:34

MekkCyber (Contributor):

Thanks for the fix! Looks good!

@staticmethod
def supports_quant_method(quantization_config_dict):
    quant_method = quantization_config_dict.get("quant_method", None)
    # We need a special care for bnb models to make sure everything is BC ..
Review comment (Contributor):
Can you please rewrite or remove the comment? What are bnb models? What is BC? BC = backwards compatible? If so, please write it out.

ArthurZucker (Collaborator) left a comment:
Can you add a test specifically that has an unsupported config? 🤠
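A test along the lines the reviewer requests might look like the sketch below: loading a config with an unsupported `quant_method` should emit a warning and return False rather than raise. The helper `supports_quant_method` here is a minimal stand-in for the `AutoHfQuantizer.supports_quant_method` staticmethod shown in this PR, and the logger name and test harness are assumptions, not the repository's actual test code.

```python
import logging
import unittest

logger = logging.getLogger("quant_sketch")


def supports_quant_method(quantization_config_dict):
    # Minimal stand-in for AutoHfQuantizer.supports_quant_method
    known = {"bitsandbytes_4bit", "bitsandbytes_8bit", "gptq", "awq"}
    method = quantization_config_dict.get("quant_method", None)
    if method not in known:
        logger.warning(f"Unknown quantization type, got {method}")
        return False
    return True


class UnsupportedQuantConfigTest(unittest.TestCase):
    def test_unknown_method_warns_instead_of_raising(self):
        config = {"quant_method": "fp8"}  # unsupported in this sketch
        # The call must warn, not raise, and report the method as unsupported.
        with self.assertLogs("quant_sketch", level="WARNING"):
            self.assertFalse(supports_quant_method(config))
```

Running this test via `unittest` verifies both the warning and the skip path in one assertion block.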

Comment on lines +3637 to +3640
pre_quantized = getattr(
    config, "quantization_config", None
) is not None and AutoHfQuantizer.supports_quant_method(config.quantization_config)

Suggested change
-pre_quantized = getattr(
-    config, "quantization_config", None
-) is not None and AutoHfQuantizer.supports_quant_method(config.quantization_config)
+pre_quantized = getattr(config, "quantization_config", None)
+if pre_quantized is not None and not AutoHfQuantizer.supports_quant_method(config.quantization_config):
+    pre_quantized = None  # Unsupported methods are just skipped

Development

Successfully merging this pull request may close these issues:

- Unknown quantization type, got fp8