
Obtained different PPL for Wikitext and C4 compared to results reported in the paper #95

yc2367 opened this issue Sep 12, 2024 · 2 comments
yc2367 commented Sep 12, 2024

Hi, thank you so much for the amazing paper and repo.

I am trying to reproduce the Wikitext and C4 perplexity reported in the OmniQuant paper. I downloaded the repo and ran the following experiment:

CUDA_VISIBLE_DEVICES=0 python main.py --model meta-llama/Llama-2-7b-hf --epochs 20 --output_dir ./log/llama-7b-w3a16g128 --eval_ppl --wbits 3 --abits 16 --group_size 128 --lwc

According to the paper, the PPL for Wikitext and C4 for Llama-2-7B at W3A16g128 should be 6.03 and 7.75, respectively. However, I obtained the following results in the log:

[2024-09-12 03:20:31 root] (main.py 144): INFO wikitext2 : 6.098666191101074
[2024-09-12 03:23:30 root] (main.py 144): INFO c4 : 7.8100385665893555

Did I set the hyperparameters incorrectly? I hope you can help me clarify this. Thanks!
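
For reference, below is a minimal sketch of the fixed-window perplexity protocol used by GPTQ-style codebases, which this repo's --eval_ppl path appears to follow. The 2048-token window, the wikitext-2-raw-v1 config name, and FP16 loading are illustrative assumptions rather than OmniQuant's exact evaluation code, and it scores the unquantized model, so it only shows how numbers on the logged scale are computed:

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
).eval()

# Tokenize the whole test split as one long token stream.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tok("\n\n".join(test["text"]), return_tensors="pt").input_ids

seqlen = 2048                   # assumed evaluation context length
n = ids.shape[1] // seqlen      # number of non-overlapping windows
nlls = []
with torch.no_grad():
    for i in range(n):
        chunk = ids[:, i * seqlen:(i + 1) * seqlen].to(model.device)
        # labels=chunk makes HF shift the targets and return the mean NLL
        loss = model(chunk, labels=chunk).loss
        nlls.append(loss.float() * seqlen)

# Perplexity = exp(total NLL / total tokens).
ppl = torch.exp(torch.stack(nlls).sum() / (n * seqlen))
print(f"wikitext2 ppl: {ppl.item():.4f}")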

ChenMnZ (Collaborator) commented Oct 11, 2024

The released checkpoints have some mismatches with the current code.

Retraining with the current code should successfully reproduce the reported results.
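
In other words, rerunning the exact command from the first post with the current code should reproduce the reported numbers, since it optimizes the quantization parameters from scratch rather than loading any released checkpoint:

CUDA_VISIBLE_DEVICES=0 python main.py --model meta-llama/Llama-2-7b-hf --epochs 20 --output_dir ./log/llama-7b-w3a16g128 --eval_ppl --wbits 3 --abits 16 --group_size 128 --lwc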

yc2367 (Author) commented Oct 11, 2024

Thank you for the quick response!
