CN111597956A - Picture text recognition method based on deep learning model and relative orientation calibration - Google Patents

Picture text recognition method based on deep learning model and relative orientation calibration

Info

Publication number
CN111597956A
CN111597956A CN202010397848.8A
Authority
CN
China
Prior art keywords
character
character area
field
extracted
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010397848.8A
Other languages
English (en)
Other versions
CN111597956B (zh)
Inventor
连春华
詹开明
林隆永
郭炫志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiuyuan Yinhai Software Co ltd
Original Assignee
Sichuan Jiuyuan Yinhai Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jiuyuan Yinhai Software Co ltd filed Critical Sichuan Jiuyuan Yinhai Software Co ltd
Priority to CN202010397848.8A priority Critical patent/CN111597956B/zh
Publication of CN111597956A publication Critical patent/CN111597956A/zh
Application granted granted Critical
Publication of CN111597956B publication Critical patent/CN111597956B/zh
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/414 Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses a picture text recognition method based on a deep learning model and relative orientation calibration, comprising: step S100: using an OCR detection model to detect the text regions in a picture and obtain the boundary coordinate points of each text region; step S200: cropping out the text regions and using a text recognition model to recognize their content; step S300: obtaining the fields to be extracted from the recognized content; step S400: finding the extracted value matching each field according to predefined text orientation relations. The invention defines only relative orientation relations, not the absolute positions of recognition regions. It is highly adaptable: within a certain range, recognition accuracy is independent of the shooting angle, and a high recognition rate is achieved even if the picture is skewed, the lighting is strong or weak, or the text is distorted, with no need to deskew the picture.

Description

Picture text recognition method based on deep learning model and relative orientation calibration
Technical Field
The present invention relates to the technical field of image recognition, and in particular to a picture text recognition method based on a deep learning model and relative orientation calibration.
Background Art
In everyday life and work, data shown on various digital displays often needs to be read and recorded. For example, nurses in a hospital hemodialysis department must read data from the digital displays of hemodialysis machines and record it in patient records, and a hospital may operate dozens or even hundreds of such machines, so entering the data manually one item at a time costs nurses a great deal of time. Some existing techniques use OCR recognition models to recognize images, but the training sets of such models require the character coordinates to be annotated in advance, which is a cumbersome operation.
Summary of the Invention
The purpose of the present invention is to provide a picture text recognition method based on a deep learning model and relative orientation calibration, to solve the prior-art problems that manual data entry is time-consuming and that recognition models require cumbersome advance coordinate annotation.
The present invention solves the above problems through the following technical solution:
A picture text recognition method based on a deep learning model and relative orientation calibration, comprising:
Step S100: using an OCR detection model to detect the text regions in a picture and obtain the boundary coordinate points of each text region;
Step S200: cropping out the text regions and using a text recognition model to recognize the content of each text region;
Step S300: obtaining the fields to be extracted from the recognized content;
Step S400: finding the extracted value matching each field according to predefined text orientation relations.
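The predefined text orientation relations of step S400 can be thought of as a small template mapping each field to the compass direction of its value. The sketch below is hypothetical: the patent does not prescribe a concrete template format, and the field names are taken from the embodiment later in this document.

```python
# Hypothetical orientation template for step S400: each field to be
# extracted is paired with the predefined direction (one of the eight
# directions N, NE, E, SE, S, SW, W, NW) in which its value lies.
ORIENTATION_TEMPLATE = {
    "TMP": "S",  # the TMP reading sits below the TMP label
    "UFR": "S",
    "UFV": "N",  # in the embodiment's coordinates, 1.07 sits above UFV
}
```

A new machine model would then need only a new template of this kind, with no retraining and no new code.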
In step S400, for each field to be extracted, the matching extracted value is found by the following method:
A: extend the boundaries of the text region corresponding to the field, dividing its neighborhood into eight directions;
B: according to the predefined text orientation relations, find all extracted values located in the preset direction of the field, compute the center distance from each extracted value to the text region corresponding to the field, and take the extracted value with the smallest center distance as the value matching the field.
The center distance from an extracted value to a text region is computed as follows: compute the center point of the text region containing each extracted value, then compute the distance between that center point and the center point of the text region containing the field, obtaining the center distance.
The center point is computed from the boundary coordinate points of the text region.
The method further comprises exporting the fields, the extracted values, and their correspondence to a table.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) The present invention defines only relative orientation relations, not the absolute positions of recognition regions. It is highly adaptable: within a certain range, recognition accuracy is independent of the shooting angle, and a high recognition rate is achieved even if the picture is skewed, the lighting is strong or weak, or the text is distorted, with no need to deskew the picture.
(2) The present invention transfers well: for recognizing information from a new machine, the original model needs no retraining and no new code; configuring a simple new orientation template is enough to achieve high accuracy.
(3) The model is general: for a new recognition scenario, annotating only a few or a few dozen pictures and performing transfer learning yields high accuracy, without modifying or adding any code.
Brief Description of the Drawings
Figure 1 is a flowchart of the present invention;
Figure 2 is a picture to be recognized;
Figure 3 is a schematic diagram of the text orientation relations.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to an embodiment, but the embodiments of the present invention are not limited thereto.
Embodiment:
As shown in Figure 1, the picture text recognition method based on a deep learning model and relative orientation calibration comprises:
Step S100: beforehand, open-source models such as EAST and DB are trained on a self-annotated dataset combined with open-source datasets such as ICPR, LSVT, ART, and RECTS, yielding a text region detection model, i.e. the OCR detection model; the present invention trains the text region detection model with existing techniques, so the training process is not detailed here. The OCR detection model is used to detect the text regions in the picture and obtain the coordinate points of each corner of the polygon forming each region. Recognizing the picture shown in Figure 2, for example, yields the following coordinate points:
[0.059,0.23,0.059,0.259,0.134,0.259,0.134,0.23]
[0.518,0.263,0.517,0.29,0.585,0.29,0.585,0.263]
[0.186,0.229,0.186,0.257,0.304,0.257,0.303,0.229]
[0.344,0.23,0.344,0.258,0.427,0.258,0.427,0.23]
[0.336,0.26,0.336,0.29,0.446,0.29,0.445,0.26]
[0.478,0.23,0.478,0.259,0.564,0.259,0.563,0.23]
[0.773,0.289,0.773,0.316,0.891,0.316,0.891,0.289]
[0.763,0.317,0.763,0.348,0.898,0.348,0.897,0.317]
[0.602,0.355,0.602,0.381,0.713,0.381,0.713,0.355]
[0.629,0.385,0.629,0.412,0.708,0.412,0.708,0.385]
[0.771,0.458,0.771,0.482,0.887,0.482,0.886,0.458]
[0.759,0.429,0.759,0.455,0.893,0.455,0.893,0.429]
[0.261,0.337,0.261,0.366,0.352,0.366,0.352,0.337]
[0.291,0.367,0.29,0.398,0.426,0.398,0.426,0.367]
[0.269,0.431,0.268,0.46,0.354,0.46,0.354,0.431]
[0.347,0.406,0.347,0.43,0.429,0.43,0.429,0.406]
[0.081,0.466,0.081,0.495,0.161,0.495,0.16,0.466]
[0.092,0.498,0.092,0.524,0.172,0.524,0.171,0.498]
[0.2,0.466,0.2,0.494,0.315,0.494,0.314,0.466]
[0.193,0.498,0.193,0.521,0.327,0.521,0.327,0.498]
[0.075,0.528,0.075,0.542,0.161,0.543,0.16,0.528]
[0.226,0.294,0.226,0.308,0.265,0.308,0.265,0.294]
[0.341,0.295,0.341,0.31,0.46,0.31,0.46,0.294]
[0.827,0.223,0.827,0.244,0.954,0.244,0.954,0.223]
[0.071,0.341,0.071,0.357,0.214,0.358,0.214,0.34]
[0.061,0.358,0.061,0.371,0.215,0.372,0.215,0.357]
[0.115,0.383,0.115,0.398,0.175,0.398,0.175,0.383]
[0.091,0.416,0.091,0.431,0.216,0.431,0.216,0.416]
[0.238,0.525,0.238,0.54,0.275,0.54,0.275,0.525]
[0.379,0.436,0.379,0.448,0.446,0.448,0.445,0.436]
[0.449,0.458,0.449,0.448,0.382,0.448,0.382,0.458]
[0.909,0.325,0.908,0.341,0.964,0.341,0.964,0.325]
[0.371,0.52,0.371,0.535,0.438,0.535,0.438,0.52]
[0.375,0.534,0.374,0.546,0.45,0.546,0.45,0.534]
[0.479,0.52,0.479,0.534,0.555,0.534,0.555,0.52]
[0.499,0.533,0.499,0.546,0.562,0.546,0.562,0.533]
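Each line above is one detected region: the four corners of a quadrilateral, flattened as [x1, y1, x2, y2, x3, y3, x4, y4] in normalized picture coordinates. The center point used later for the distance computations can be derived from these boundary points; averaging the corners is one reasonable choice (a sketch under that assumption, since the patent only states that the center is computed from the boundary coordinate points):

```python
def region_center(box):
    """Center of a text region given as flattened corner coordinates
    [x1, y1, x2, y2, x3, y3, x4, y4], taken as the mean of the corners."""
    xs, ys = box[0::2], box[1::2]  # x- and y-coordinates of the corners
    return sum(xs) / len(xs), sum(ys) / len(ys)

# The region later recognized as "TMP" in step S200:
tmp_box = [0.602, 0.355, 0.602, 0.381, 0.713, 0.381, 0.713, 0.355]
cx, cy = region_center(tmp_box)  # (0.6575, 0.368)
```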
Step S200: crop out the text regions and use a text recognition model to recognize the content of each polygon region. The text recognition model is obtained by training models such as CNN+LSTM and AttentionOCR on a self-annotated dataset combined with open-source datasets such as ICPR, LSVT, ART, and RECTS; the present invention trains the text recognition model with existing techniques, so the training process is not detailed here.
This yields:
[0.059,0.23,0.059,0.259,0.134,0.259,0.134,0.23]Pa
[0.518,0.263,0.517,0.29,0.585,0.29,0.585,0.263]0
[0.186,0.229,0.186,0.257,0.304,0.257,0.303,0.229]ΔBV
[0.344,0.23,0.344,0.258,0.427,0.258,0.427,0.23]Qb
[0.336,0.26,0.336,0.29,0.446,0.29,0.445,0.26]235
[0.478,0.23,0.478,0.259,0.564,0.259,0.563,0.23]Ps
[0.773,0.289,0.773,0.316,0.891,0.316,0.891,0.289]UFR
[0.763,0.317,0.763,0.348,0.898,0.348,0.897,0.317]0.39
[0.602,0.355,0.602,0.381,0.713,0.381,0.713,0.355]TMP
[0.629,0.385,0.629,0.412,0.708,0.412,0.708,0.385]91
[0.771,0.458,0.771,0.482,0.887,0.482,0.886,0.458]UFV
[0.759,0.429,0.759,0.455,0.893,0.455,0.893,0.429]1.07
[0.261,0.337,0.261,0.366,0.352,0.366,0.352,0.337]Vinf
[0.291,0.367,0.29,0.398,0.426,0.398,0.426,0.367]12.7
[0.269,0.431,0.268,0.46,0.354,0.46,0.354,0.431]Rinf
[0.347,0.406,0.347,0.43,0.429,0.43,0.429,0.406]57
[0.081,0.466,0.081,0.495,0.161,0.495,0.16,0.466]Pv
[0.092,0.498,0.092,0.524,0.172,0.524,0.171,0.498]56
[0.2,0.466,0.2,0.494,0.315,0.494,0.314,0.466]Qbacc
[0.193,0.498,0.193,0.521,0.327,0.521,0.327,0.498]49.8
[0.075,0.528,0.075,0.542,0.161,0.543,0.16,0.528]mmHg
[0.226,0.294,0.226,0.308,0.265,0.308,0.265,0.294]%
[0.341,0.295,0.341,0.31,0.46,0.31,0.46,0.294]ml/min
[0.827,0.223,0.827,0.244,0.954,0.244,0.954,0.223]16:46
[0.071,0.341,0.071,0.357,0.214,0.358,0.214,0.34]SYS/DIA
[0.061,0.358,0.061,0.371,0.215,0.372,0.215,0.357]124/75
[0.115,0.383,0.115,0.398,0.175,0.398,0.175,0.383]67
[0.091,0.416,0.091,0.431,0.216,0.431,0.216,0.416]16:42
[0.238,0.525,0.238,0.54,0.275,0.54,0.275,0.525]1
[0.379,0.436,0.379,0.448,0.446,0.448,0.445,0.436]ml/
[0.449,0.458,0.449,0.448,0.382,0.448,0.382,0.458]min
[0.909,0.325,0.908,0.341,0.964,0.341,0.964,0.325]1/h
[0.371,0.52,0.371,0.535,0.438,0.535,0.438,0.52]Na+
[0.375,0.534,0.374,0.546,0.45,0.546,0.45,0.534]138
[0.479,0.52,0.479,0.534,0.555,0.534,0.555,0.52]HCO3-
[0.499,0.533,0.499,0.546,0.562,0.546,0.562,0.533]32
Step S300: region relation matching and recognition:
The fields to be extracted are obtained from the recognized content; the boundaries of the text region corresponding to each field are extended, dividing its neighborhood into eight directions. The text orientation relations are predefined as shown in Figure 3 and comprise the eight directions N, NW, NE, W, E, SW, S, and SE. For each field to be extracted, take the field TMP with relation S as an example: find all extracted values located below TMP, compute the center point of the text region containing each extracted value (the center point is computed from the boundary coordinate points of the text region), then compute the distance from each such center point to the center point of the text region containing TMP, obtaining the center distances; the extracted value with the smallest center distance is the value matching the field.
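The TMP matching step can be sketched in code. This is a simplified illustration: direction S is approximated here as "the candidate's center lies below the field's region and within its horizontal extent", whereas the patent's actual sector boundaries follow the extended region edges of Figure 3.

```python
def center(box):
    """Mean of the four corners of a flattened quadrilateral box."""
    xs, ys = box[0::2], box[1::2]
    return sum(xs) / 4, sum(ys) / 4

def match_value(field_box, candidates):
    """Among (box, text) candidates lying south of field_box (below it
    and overlapping its horizontal extent), return the text whose region
    center is nearest to the field's region center."""
    fx, fy = center(field_box)
    x_min, x_max = min(field_box[0::2]), max(field_box[0::2])
    south = []
    for box, text in candidates:
        cx, cy = center(box)
        if cy > fy and x_min <= cx <= x_max:  # image y grows downward
            dist = ((cx - fx) ** 2 + (cy - fy) ** 2) ** 0.5
            south.append((dist, text))
    return min(south)[1] if south else None

# TMP field and two candidates from the recognition output above
tmp = [0.602, 0.355, 0.602, 0.381, 0.713, 0.381, 0.713, 0.355]
cands = [
    ([0.629, 0.385, 0.629, 0.412, 0.708, 0.412, 0.708, 0.385], "91"),
    ([0.759, 0.429, 0.759, 0.455, 0.893, 0.455, 0.893, 0.429], "1.07"),
]
value = match_value(tmp, cands)  # "91": south of TMP and nearest
```

Here "1.07" is rejected because its center lies outside TMP's horizontal extent, so "91" is matched, as in the embodiment.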
After all the fields and their corresponding extracted values have been found, the fields, the extracted values, and their correspondence are exported to a table. Compared with traditional solutions, the present invention has the advantages shown in Table 1.
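Exporting the matched field/value pairs to a table can be as simple as writing CSV (a sketch; the patent does not specify the table format, and the pairs below are taken from the embodiment's recognition output):

```python
import csv
import io

def export_table(pairs):
    """Write (field, extracted value) pairs as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["field", "value"])  # header row
    writer.writerows(pairs)
    return buf.getvalue()

table = export_table([("TMP", "91"), ("UFR", "0.39"), ("UFV", "1.07")])
```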
Figure BDA0002488335130000071
Table 1: Comparison table
As Table 1 shows, the present invention has the following advantages:
1. It is highly adaptable: within a certain range, recognition accuracy is independent of the shooting angle, and a high recognition rate is achieved even if the picture is skewed, the lighting is strong or weak, or the text is distorted;
2. It transfers well: for recognizing information from a new machine, the original model needs no retraining and no new code; configuring a simple new orientation template is enough to achieve high accuracy;
3. The model is general: for a new recognition scenario, annotating only a few or a few dozen pictures and performing transfer learning yields high accuracy, without modifying or adding any code.
Although the present invention has been described here with reference to illustrative embodiments, the above embodiment is merely a preferred implementation, and the embodiments of the present invention are not limited to it. It should be understood that those skilled in the art can devise many other modifications and implementations, which will fall within the scope and spirit of the principles disclosed in this application.

Claims (5)

1. A picture text recognition method based on a deep learning model and relative orientation calibration, characterized by comprising:
step S100: using an OCR detection model to detect the text regions in a picture and obtain the boundary coordinate points of each text region;
step S200: cropping out the text regions and using a text recognition model to recognize the content of each text region;
step S300: obtaining the fields to be extracted from the recognized content;
step S400: finding the extracted value matching each field according to predefined text orientation relations.
2. The picture text recognition method based on a deep learning model and relative orientation calibration according to claim 1, characterized in that in step S400, for each field to be extracted, the matching extracted value is found by the following method:
A: extending the boundaries of the text region corresponding to the field, dividing its neighborhood into eight directions;
B: according to the predefined text orientation relations, finding all extracted values located in the preset direction of the field, computing the center distance from each extracted value to the text region corresponding to the field, and taking the extracted value with the smallest center distance as the value matching the field.
3. The picture text recognition method based on a deep learning model and relative orientation calibration according to claim 2, characterized in that the center distance from an extracted value to a text region is computed by: computing the center point of the text region containing each extracted value, then computing the distance between that center point and the center point of the text region containing the field, obtaining the center distance.
4. The picture text recognition method based on a deep learning model and relative orientation calibration according to claim 3, characterized in that the center point is computed from the boundary coordinate points of the text region.
5. The picture text recognition method based on a deep learning model and relative orientation calibration according to claim 1, characterized in that the method further comprises exporting the fields, the extracted values, and their correspondence to a table.
CN202010397848.8A 2020-05-12 2020-05-12 Picture text recognition method based on deep learning model and relative orientation calibration Active CN111597956B (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010397848.8A CN111597956B (zh) 2020-05-12 2020-05-12 Picture text recognition method based on deep learning model and relative orientation calibration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010397848.8A CN111597956B (zh) 2020-05-12 2020-05-12 Picture text recognition method based on deep learning model and relative orientation calibration

Publications (2)

Publication Number Publication Date
CN111597956A true CN111597956A (zh) 2020-08-28
CN111597956B CN111597956B (zh) 2023-06-02

Family

ID=72185324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010397848.8A Active CN111597956B (zh) 2020-05-12 2020-05-12 Picture text recognition method based on deep learning model and relative orientation calibration

Country Status (1)

Country Link
CN (1) CN111597956B (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916358A (zh) * 2010-09-16 2010-12-15 Sichuan Jiuyuan Yinhai Software Co., Ltd. Method and system for automatically sorting image documents using barcodes
CN104036292A (zh) * 2014-06-12 2014-09-10 Xi'an Huahai Yingtai Medical Information Technology Co., Ltd. Method and system for extracting text regions from digital medical imaging films
CN106507285A (zh) * 2016-11-22 2017-03-15 Ningbo Yipaike Network Technology Co., Ltd. Positioning method based on location reference points, specific markers, and related device methods
CN107977659A (zh) * 2016-10-25 2018-05-01 Beijing Sogou Technology Development Co., Ltd. Character recognition method and apparatus, and electronic device
CN108417023A (zh) * 2018-05-02 2018-08-17 Chang'an University Method for selecting traffic zone center points based on spatial clustering of taxi pick-up and drop-off points
CN109189965A (zh) * 2018-07-19 2019-01-11 Institute of Information Engineering, Chinese Academy of Sciences Image text retrieval method and system
CN110059685A (zh) * 2019-04-26 2019-07-26 Tencent Technology (Shenzhen) Co., Ltd. Text region detection method and apparatus, and storage medium
CN110458158A (zh) * 2019-06-11 2019-11-15 Central South University Text detection and recognition method for reading assistance for the blind
CN110765907A (zh) * 2019-10-12 2020-02-07 Anhui Qitian Education Technology Co., Ltd. Deep-learning-based system and method for extracting paper test-paper document information from video


Also Published As

Publication number Publication date
CN111597956B (zh) 2023-06-02

Similar Documents

Publication Publication Date Title
US9195907B1 (en) Method for omnidirectional processing of 2D images including recognizable characters
US7970213B1 (en) Method and system for improving the recognition of text in an image
CN110956138B (zh) Auxiliary learning method based on a tutoring device, and tutoring device
KR101165415B1 (ko) Method and apparatus for recognizing a live face in an image
US20190122070A1 (en) System for determining alignment of a user-marked document and method thereof
EP0853293A1 (en) Subject image extraction method and apparatus
CN105009170A (zh) Object recognition device, method, and storage medium
Skoryukina et al. Document localization algorithms based on feature points and straight lines
CN108805519B (zh) Method and device for digitizing a paper schedule, and electronic schedule generation method
CN110909743B (zh) Book inventory method and book inventory system
CN110992373B (zh) Deep-learning-based thoracic organ segmentation method
CN110717492A (zh) Method for correcting character-string orientation in drawings based on joint features
Calvo-Zaragoza et al. Avoiding staff removal stage in optical music recognition: application to scores written in white mensural notation
CN110458158A (zh) Text detection and recognition method for reading assistance for the blind
JP2016071898A (ja) Form recognition apparatus, form recognition system, program for a form recognition system, control method for a form recognition system, and recording medium storing a form recognition system program
CN112270297A (zh) Method and computer system for displaying recognition results
CN111259857A (zh) Face smile scoring method and facial emotion classification method
JP2003346078A (ja) Two-dimensional code reader, image input device, two-dimensional code reading method, image input method, program therefor, and recording medium recording the program
CN111597956A (zh) Picture text recognition method based on deep learning model and relative orientation calibration
CN111401158A (zh) Hard sample discovery method and device, and computer equipment
CN115457585A (zh) Homework correction processing method and device, computer equipment, and readable storage medium
KR101766787B1 (ko) Image correction method using deep-learning analysis based on a GPU device
CN113257392B (zh) Universal automatic preprocessing method for external data of ultrasound machines
JPWO2022163508A5 (zh)
CN108734167B (zh) Method for recognizing characters on contaminated film

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant