Table Recognition: after converting the trained model to an inference model, the prediction results are inconsistent? #14505
Replies: 5 comments
-
The issue you are experiencing, where the inference results using the exported model are inconsistent or incorrect compared to the results from training or direct inference with the training checkpoint, can have several potential causes. The steps below cover the main areas to check.
Steps to Debug and Resolve

1. Verify Preprocessing Consistency
2. Ensure Postprocessing Consistency
3. Confirm Character Dictionary
4. Pass All Necessary Parameters During Export
5. Run Direct Inference on Exported Model
6. Validate the Exported Model
7. Check for Known Issues
8. Debug Static Graph Behavior
9. Test with a Pretrained Model
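Step 6 above ("Validate the Exported Model") can begin with a quick file check: make sure the export directory actually contains the inference artifacts before debugging anything deeper. This is a minimal sketch; the file names assume a Paddle 2.x export (`inference.pdmodel`, `inference.pdiparams`), and `check_export_dir` is a hypothetical helper, not part of PaddleOCR:

```python
import os

# Files that a Paddle 2.x `tools/export_model.py` run normally writes.
EXPECTED_FILES = ["inference.pdmodel", "inference.pdiparams"]

def check_export_dir(export_dir):
    """Return the list of expected inference files missing from export_dir."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(export_dir, f))]

if __name__ == "__main__":
    import tempfile
    # An empty directory is missing both files.
    with tempfile.TemporaryDirectory() as d:
        print(check_export_dir(d))
```

If either file is missing, the export step failed and the inference results cannot be trusted, whatever the configuration says.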
Recommended Workflow
If none of these solutions resolve the issue, you might need to open a detailed issue on the PaddleOCR GitHub repository with a focus on the export process. Be sure to include:
Response generated by 🤖 feifei-bot | chatgpt-4o-latest
-
@vuthehuy1997 you may need to try paddleocr 2.8.1.
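When trying a specific release like 2.8.1, it is worth confirming which version is actually installed in the environment you run inference from. A small, framework-free sketch using the standard library (`installed_version` is a made-up helper name):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Prints e.g. "2.8.1" if that release is installed, or None otherwise.
print(installed_version("paddleocr"))
```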
-
Thanks for your support, but it does not work.

I use the config from GitHub, configs/table/SLANet_ch.yml, change the dataset paths and add the pretrained model according to the finetuning instructions, which gives configs/table/SLANet_finetune.yml.

python3 tools/infer_table.py -c configs/table/SLANet_finetune.yml gives correct results, but tools/infer_table.py runs the training model; it does not support running the exported inference model.

As for "Use the paddle_infer API to check whether the exported model is working as expected: paddle_infer --model_dir=../translate-pdf-scan/src/weights/table_rec/ch_ppstructure_mobile_v2.0_SLANet_finetune": I do not use it, so I cannot find the problem from it.

As for "Some differences might arise due to the transition from a dynamic to a static graph. Try retraining with the Global.infer_mode=True option to force the model into inference mode during training": I ran python3 tools/infer_table.py -c configs/table/SLANet_ch.yml -o … from ppstructure/, and the result is still wrong.

paddleocr 2.8.1: I also tried to install it, but it didn't work.

I noticed that the predicted output of the exported model is very small, and there are many boxes. The trained model predicts boxes of shape (8, 10), but the exported model returns 50 values instead of 8 coordinates (xyxyxyxy).
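The 50-values-instead-of-8 observation above can be checked programmatically before any drawing or evaluation: a table-cell box in xyxyxyxy format is 4 corner points, i.e. exactly 8 numbers. A minimal, framework-free sketch (the helper name is made up for illustration):

```python
def validate_boxes(boxes):
    """Split predicted boxes into well-formed xyxyxyxy boxes and malformed ones.

    A box in xyxyxyxy format is 4 (x, y) corner points, i.e. 8 numbers.
    """
    good, bad = [], []
    for box in boxes:
        (good if len(box) == 8 else bad).append(box)
    return good, bad

# A prediction with 50 values, as reported for the exported model, is flagged:
good, bad = validate_boxes([[0.0] * 8, [0.0] * 50])
print(len(good), len(bad))  # 1 1
```

If the exported model systematically produces malformed boxes while the training checkpoint does not, the divergence is in the export/postprocessing path, not in the weights.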
-
Please try the latest paddleocr branch.
-
The issue you are experiencing, where the SLANet table recognition model produces correct results during training but incorrect results after being exported for inference, most likely stems from differences in configuration, preprocessing, or postprocessing between the training/finetuning stage and the inference stage. Below are the key areas to investigate and potential solutions.

1. Preprocessing and Postprocessing Mismatch

The preprocessing and postprocessing steps used during training/finetuning and inference must be consistent; any mismatch can lead to incorrect results.

Fix: Compare the preprocessing pipeline defined in the training config with the one used by the inference script, and make them identical.

2. Export Configuration

The export command you used references the finetuned model, but the exported inference model may not include all necessary configuration details (e.g., postprocessing parameters, dictionary paths).

Fix: Update the export command to explicitly include all necessary parameters:

```
python3 tools/export_model.py \
    -c configs/table/SLANet_finetune.yml \
    -o Global.pretrained_model=../translate-pdf-scan/src/weights/table_rec/SLANet_ch/best_accuracy \
       Global.character_dict_path=ppocr/utils/dict/table_structure_dict_ch.txt \
       Global.save_inference_dir=../translate-pdf-scan/src/weights/table_rec/ch_ppstructure_mobile_v2.0_SLANet_finetune
```

3. Postprocessing Parameters

The postprocessing configuration used at inference must match the one used during training.

Fix: Make sure the inference script loads the same postprocessing parameters as the training config.

4. Dynamic vs Static Shapes

If the model was trained with dynamic shapes (e.g., variable input sizes) but the inference model expects static shapes, this can lead to inconsistencies.

Fix: Ensure that the input image dimensions used at inference match those used during training.

5. AMP (Automatic Mixed Precision)

You have enabled AMP during training, which can introduce small numerical differences after export.

Fix: If AMP is causing issues, try disabling it during training and re-export the model:

```
python3 tools/train.py -c configs/table/SLANet_finetune.yml \
    -o Global.pretrained_model=./weights/whl/table/ch_ppstructure_mobile_v2.0_SLANet_train/best_accuracy \
       Global.use_amp=False
```

6. Model Architecture and Export Logic

There may be subtle issues in how the model is exported; for example, some layers or operations might behave differently during inference due to bugs in the export logic.

Fix: Run this command to test the exported model:

```
python3 tools/infer_table.py \
    -c configs/table/SLANet_finetune.yml \
    -o Global.pretrained_model=../translate-pdf-scan/src/weights/table_rec/ch_ppstructure_mobile_v2.0_SLANet_finetune \
       Global.infer_img=ppstructure/docs/table/table.jpg
```

If the results are correct, the issue lies within the export process itself.

7. Character Dictionary

The character dictionary used at inference must be identical to the one used during training.

Fix: Explicitly pass the correct dictionary path during inference:

```
python table/predict_structure.py \
    --table_model_dir=../../translate-pdf-scan/src/weights/table_rec/ch_ppstructure_mobile_v2.0_SLANet_finetune \
    --table_char_dict_path=../ppocr/utils/dict/table_structure_dict_ch.txt \
    --image_dir=docs/table/table.jpg \
    --output=../output/table
```

8. Debugging and Logs

Enable debugging logs to identify where the issue arises.

Fix: Compare the outputs of the training-mode inference and the exported-model inference step by step to locate where they diverge.

Summary of Recommendations:

By systematically addressing these potential issues, you should be able to resolve the inconsistencies between training and inference results.

Response generated by feifei-bot | chatgpt-4o-latest
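Several of the checks above (notably 3 and 7) reduce to "the training and inference configurations must agree". For the character dictionary in particular, a line-by-line diff finds the first divergence; this is a hypothetical helper, not part of PaddleOCR, operating on lists of dictionary entries:

```python
def first_dict_mismatch(train_lines, infer_lines):
    """Return (index, train_entry, infer_entry) for the first difference
    between two character dictionaries, or None if they are identical."""
    for i, (a, b) in enumerate(zip(train_lines, infer_lines)):
        if a != b:
            return i, a, b
    if len(train_lines) != len(infer_lines):
        i = min(len(train_lines), len(infer_lines))
        return (i,
                train_lines[i] if i < len(train_lines) else None,
                infer_lines[i] if i < len(infer_lines) else None)
    return None

# Usage with real dictionary files would look like:
# with open("ppocr/utils/dict/table_structure_dict_ch.txt", encoding="utf-8") as f:
#     train = [line.rstrip("\n") for line in f]
print(first_dict_mismatch(["<td>", "</td>"], ["<td>", "</td>"]))  # None
```

A mismatch at any index means every token from that point on is decoded against the wrong entry, which produces exactly the kind of globally wrong output described in this thread.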
-
🔎 Search before asking
🐛 Bug (Bug Description)
When I finished training the SLANet table recognition model, I exported it to the inference format, but the results were wrong. I only edited SLANet.yml into SLANet_finetune.yml, following the tutorial in the GitHub documentation.
Model training command (finetune from model ch_ppstructure_mobile_v2.0_SLANet_train)
export command:
If I run the model after training with the command:
then it gives correct results.
If I run the exported model:
then the result is wrong.
Does anyone know what the error is when exporting?
🏃‍♂️ Environment (Runtime Environment)
🌰 Minimal Reproducible Example (Minimal Demo Reproducing the Problem)