[PTQ4SR] Unfair comparison in paper? #53
Comments
I found that the code you mentioned is no longer available. Could you please share the code with me? My email is [email protected]
Hi, unfortunately, I couldn't find the code either, but I left this issue here since the link in the paper leads to this repository.
Sorry, the code is not quite ready yet for personal reasons. We will release our code within one month, and we will add QAT experiments based on EDSR (layers=16, dimension size=64) and present them in the latest code.
Has the code been released?
Will this code be released?
In the CVPR2023 paper, "Toward Accurate Post-Training Quantization for Image Super Resolution",
Table 5 (Sec. 4.2) of the paper seems unfair.
PAMS and FQSR are implemented on the EDSR-baseline model (layers=16, dimension size=64),
while PTQ4SR (this work) is implemented on the EDSR model (layers=32, dimension size=256).
Since the accuracy of EDSR and EDSR-baseline is largely different, isn't it unfair to compare methods with different backbones?
Are there comparisons made using the same EDSR-baseline backbone?
Looking forward to your code release!
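For reference, a back-of-the-envelope sketch of why the two backbones differ so much in capacity, assuming each EDSR residual block is two 3x3 convolutions with C channels (head/tail layers and biases ignored, so these are rough body-only counts, not the official parameter totals):

```python
# Rough parameter count of the residual-block "body" of EDSR variants.
# Assumption: each residual block = two 3x3 convs with `channels` channels,
# ignoring biases and the head/tail/upsampling layers.
def body_params(num_blocks: int, channels: int, kernel: int = 3) -> int:
    per_block = 2 * kernel * kernel * channels * channels  # two conv layers
    return num_blocks * per_block

baseline = body_params(16, 64)    # EDSR-baseline: 1,179,648 (~1.2M)
full = body_params(32, 256)       # EDSR: 37,748,736 (~37.7M)
print(baseline, full, full // baseline)  # the full body is 32x larger
```

Even this crude estimate shows a ~32x gap in body parameters between the two backbones, which is why comparing methods across them is questionable.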