How to use an inference model? #17812
Can you help me? I don't understand how to use the model I retrained on my own data. When I use the Python API, an error occurs. My inference dir contains inference.json, inference.yml, and inference.pdiparams.

```
AssertionError                            Traceback (most recent call last)
File c:\progs\python\work\venv\Lib\site-packages\paddleocr_pipelines\ocr.py:163, in PaddleOCR.__init__(self, doc_orientation_classify_model_name, doc_orientation_classify_model_dir, doc_unwarping_model_name, doc_unwarping_model_dir, text_detection_model_name, text_detection_model_dir, textline_orientation_model_name, textline_orientation_model_dir, textline_orientation_batch_size, text_recognition_model_name, text_recognition_model_dir, text_recognition_batch_size, use_doc_orientation_classify, use_doc_unwarping, use_textline_orientation, text_det_limit_side_len, text_det_limit_type, text_det_thresh, text_det_box_thresh, text_det_unclip_ratio, text_det_input_shape, text_rec_score_thresh, return_word_box, text_rec_input_shape, lang, ocr_version, **kwargs)
File c:\progs\python\work\venv\Lib\site-packages\paddleocr_pipelines\base.py:67, in PaddleXPipelineWrapper.__init__(self, paddlex_config, **common_args)
File c:\progs\python\work\venv\Lib\site-packages\paddleocr_pipelines\base.py:105, in PaddleXPipelineWrapper._create_paddlex_pipeline(self)
AssertionError: Model name mismatch, please input the correct model dir.
```
Replies: 1 comment

Hello, the inference logic of PaddleOCR 3.0 is tied to the model names.
NRTR does not appear to be in PaddleOCR 3.0's list of supported models.
If its pre-processing and post-processing logic are consistent with PP-OCRv5_server_rec, you can simply change the
`model_name` in your model's configuration to `PP-OCRv5_server_rec`.
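A minimal sketch of that rename, assuming the exported model's inference.yml stores the name under a top-level `model_name` key (the helper function and the key layout are illustrative assumptions, not part of the answer above; back up the file before editing it):

```python
import re

def patch_model_name(yml_text: str, new_name: str) -> str:
    """Replace the value of a top-level model_name key in a YAML string.

    Hypothetical helper: assumes the key appears at the start of a line,
    as in a PaddleX-exported inference.yml.
    """
    return re.sub(
        r"(?m)^(model_name:\s*).*$",        # match the whole model_name line
        lambda m: m.group(1) + new_name,    # keep the key, swap the value
        yml_text,
    )

# Example on an in-memory config; a real run would read and rewrite
# the inference.yml inside your model directory.
sample = "model_name: NRTR\nhpi_config:\n  backend: paddle\n"
patched = patch_model_name(sample, "PP-OCRv5_server_rec")
print(patched.splitlines()[0])  # model_name: PP-OCRv5_server_rec
```

After the rename, pointing `text_recognition_model_dir` at the model directory in the Python API should pass the name check, provided the pre/post-processing really does match PP-OCRv5_server_rec.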