
predict txt is different between eval_rec.py and infer_rec.py #65

Open
maxh2010 opened this issue Dec 18, 2024 · 5 comments

Comments

@maxh2010

I found that replace_batchnorm is called after the model file is loaded in infer_rec.py (line 222), which causes pred_result to differ between eval_rec.py and infer_rec.py.

Why is the replace_batchnorm function added in infer_rec.py?

@Topdu
Owner

Topdu commented Dec 18, 2024

This is a result of the reparameterization technique: folding batchnorm allows adjacent convolutional operators to be merged. The detailed structure of the network can be found in RepViT.
Could you provide a sample with inconsistent results? Theoretically, replace_batchnorm should only affect inference speed, not accuracy.
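For context on why the fusion should be output-preserving: in eval mode, a BatchNorm layer applies a fixed per-channel affine transform, which can be folded into the weights and bias of the preceding convolution. The sketch below is a generic illustration of this technique, not the actual replace_batchnorm implementation from this repository:

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution (eval mode)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    # Per-channel scale: gamma / sqrt(running_var + eps)
    scale = bn.weight / (bn.running_var + bn.eps).sqrt()
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = (conv.bias.data if conv.bias is not None
                 else torch.zeros(conv.out_channels))
    # y = scale * (conv(x) + b - running_mean) + beta
    fused.bias.data = bn.bias.data + (conv_bias - bn.running_mean) * scale
    return fused

# In eval mode the fused conv reproduces conv -> bn up to float rounding.
conv = nn.Conv2d(3, 8, 3, padding=1, bias=False).eval()
bn = nn.BatchNorm2d(8).eval()
x = torch.randn(2, 3, 16, 16)
with torch.no_grad():
    fused = fuse_conv_bn(conv, bn)
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```

The equality holds only up to floating-point rounding, which is why tiny output differences after fusion are possible even when the fusion is mathematically exact.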

@maxh2010
Author

The images and results are attached.
The model file was downloaded from
https://github.com/Topdu/OpenOCR/releases/download/develop0.0.1/openocr_repsvtr_ch.pth
0bnVjWMQ_4.zip

@Topdu
Owner

Topdu commented Dec 19, 2024

0bnVjWMQ_4.zip

Are these images manually cropped?

@maxh2010
Author

I have a project to recognize Captchas. The images are crops from Captcha images, detected and cropped by a detection model.
The detection model was trained with YOLO using our own data.
0anRvlHA
0AntI9SV
0AQV37hS

@Topdu
Owner

Topdu commented Dec 20, 2024

In our experiments, the third example you provided does show inconsistent results. When the distribution of the test data does not differ significantly from the training set, replace_batchnorm does not affect the recognition results. The examples you have shown differ substantially from the training distribution, so the recognition model cannot recognize them correctly, and in such cases replace_batchnorm may change the output.
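One plausible mechanism for this (my interpretation, not from the maintainer): on out-of-distribution inputs the model's per-step logits can be nearly tied, so the tiny floating-point rounding introduced by BN fusion is enough to flip an argmax and change the decoded text. A toy illustration with made-up logit values:

```python
import torch

# Two class logits nearly tied: a perturbation on the order of
# conv-BN fusion rounding error (~1e-6) flips the argmax.
logits = torch.tensor([2.000001, 2.000000, -1.0])
perturbed = logits + torch.tensor([0.0, 2e-6, 0.0])
print(torch.argmax(logits).item(), torch.argmax(perturbed).item())  # prints: 0 1
```

On in-distribution inputs the top logit usually wins by a wide margin, so the same rounding error changes nothing, which matches the behavior described above.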


2 participants