CN111353497B - Identification method and device for identity card information - Google Patents

Identification method and device for identity card information

Info

Publication number
CN111353497B
Authority
CN
China
Prior art keywords
target text
text region
identity card
identification
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811572672.4A
Other languages
Chinese (zh)
Other versions
CN111353497A (en)
Inventor
刘聪海
姚小龙
武晨
赵培
杨刘洋
吕骥图
李振威
吕朋伟
彭瑞
吴子凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN201811572672.4A priority Critical patent/CN111353497B/en
Publication of CN111353497A publication Critical patent/CN111353497A/en
Application granted granted Critical
Publication of CN111353497B publication Critical patent/CN111353497B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/34User authentication involving the use of external additional devices, e.g. dongles or smart cards
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Character Input (AREA)
  • Character Discrimination (AREA)

Abstract

The application discloses a method and a device for identifying identity card information, wherein the method comprises the following steps: acquiring an identity card image; detecting text regions of the identity card image, and determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region; identifying the target text region based on the label to obtain an identification result of the target text region and a confidence of the identification result, the identification result comprising at least one of a name, an identity card number, a gender, an ethnicity and an identity card effective date; determining a quality value of the identity card image according to the confidence of the identification result of the target text region; and when the quality value is higher than a preset threshold value, determining the identification result of the target text region as the identity card information of the identity card image. The method can improve the identification accuracy of identity card information.

Description

Identification method and device for identity card information
Technical Field
The application relates to the technical field of certificate identification, in particular to an identification method and device of identity card information.
Background
Certificate identification refers to recognizing the text information on a certificate by utilizing optical character recognition (OCR) technology, that is, the process of analyzing and recognizing an acquired certificate image with OCR so as to obtain the text information on the certificate. Compared with traditional manual entry, automatic information entry based on OCR has great advantages: its speed and accuracy far exceed what human operators can sustain, especially since, as working time increases and people grow fatigued, both the speed and the accuracy of manual entry decline.
However, OCR has weak anti-interference capability: it achieves high accuracy on formatted documents such as Word documents, but it does not cope well with certificates captured in complex natural scenes, identity cards in particular, which leads to low recognition accuracy.
Disclosure of Invention
In view of the foregoing drawbacks or shortcomings in the prior art, it is desirable to provide an identification scheme for identification card information, which can improve the accuracy of identification card information identification.
In a first aspect, the present application provides a method for identifying information of an identification card, where the method includes:
acquiring an identity card image;
detecting a text region of the identity card image, and determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region;
identifying the target text region based on the tag to obtain an identification result of the target text region and a confidence coefficient of the identification result; the identification result comprises at least one of name, ID card number, gender, ethnicity and effective date of ID card;
determining the quality value of the identity card image according to the confidence coefficient of the identification result of the target text region;
and when the quality value is higher than a preset threshold value, determining the identification result of the target text area as the identity card information of the identity card image.
In a second aspect, an embodiment of the present application provides an identification device for identification card information, where the device includes:
the acquisition unit is used for acquiring the identity card image;
the text region detection unit is used for detecting the text region of the identity card image, determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region;
the identification unit is used for identifying the target text area based on the tag to obtain an identification result of the target text area and the confidence of the identification result; the identification result comprises at least one of name, ID card number, gender, ethnicity and effective date of ID card;
the quality value determining unit is used for determining the quality value of the identity card image according to the confidence coefficient of the identification result of the target text region;
and the identity card information determining unit is used for determining the identification result of the target text area as the identity card information of the identity card image when the quality value is higher than a preset threshold value.
According to the method for identifying identity card information provided by the embodiment of the application, text region detection is performed on the acquired identity card image to determine a target text region in the identity card image and a label of the target text region; the target text region is then identified based on the label to obtain an identification result of the target text region and a confidence of the identification result; a quality value of the identity card image is then determined based on the confidence of the identification result of the target text region; and when the quality value is higher than a preset threshold value, the identification result of the target text region is determined to be the identity card information of the identity card image. Compared with the prior art, the judgment of the confidence of the identification result and of the quality value of the identity card image is added: when the quality value of the identity card image, determined from the confidence of the identification result, meets the preset threshold value, the acquired identity card image is sufficiently clear, and only then is the identification result determined to be the identity card information, so that the accuracy of the identification result is ensured.
In some embodiments, angle correction and distortion correction are performed on the identity card image before the target text region is determined from it, so that identity card images with problems such as inversion, tilt or a shooting pitch angle can be corrected in advance, further improving the identification accuracy of the identity card information.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a method for identifying identity card information according to an embodiment of the present application;
fig. 2 is a schematic flow chart of text segmentation for a target text region labeled as a name according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an identification device for identification card information according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
As mentioned in the background, the OCR technology currently used to recognize credentials in complex natural scenes suffers from low recognition accuracy. Taking identity card recognition as an example, identity card information is now particularly important in banking, insurance and other finance- and internet-related services: opening a securities account or purchasing insurance online requires the identity card information to be verified; receiving overseas mail requires the identity card information to be entered for customs clearance; and large volumes of scattered customer orders placed from mobile terminals require the identity card information to be verified in accordance with national regulations. When OCR is used to recognize an identity card, if the acquired identity card picture is unclear or poorly positioned, the obtained recognition result is likely to be wrong, so the determined identity card information is also wrong. Therefore, this approach cannot guarantee the accuracy of identity card information recognition.
In view of the above defects, the embodiment of the application provides an identity card information identification scheme: text region detection is performed on the acquired identity card image to determine a target text region in the identity card image and a label of the target text region; the target text region is then identified based on the label to obtain an identification result of the target text region and a confidence of the identification result; a quality value of the identity card image is then determined based on the confidence of the identification result of the target text region; and when the quality value is higher than a preset threshold value, the identification result of the target text region is determined to be the identity card information of the identity card image. Compared with the prior art, the judgment of the confidence of the identification result and of the quality value of the identity card image is added: when the quality value of the identity card image, determined from the confidence of the identification result, meets the preset threshold value, the acquired identity card image is sufficiently clear, and only then is the identification result determined to be the identity card information, so that the accuracy of the identification result is ensured.
The application will be described in detail below with reference to the drawings in connection with embodiments.
Fig. 1 is a flow chart of a method for identifying identification card information according to an embodiment of the present application. The method comprises the following steps:
s101, acquiring an identity card image.
In the embodiment of the application, the identity card image can be obtained by scanning, shooting or uploading by a user and the like, and comprises an identity card front image and an identity card back image.
S102, detecting the text region of the identity card image, and determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region.
In the embodiment of the application, text region detection on the identity card image can be implemented with a YOLO neural network. For an identity card image, the text regions to be detected include the name, identity card number, gender and ethnicity on the front, and the identity card effective date on the back.
The YOLO neural network is briefly described below.
The YOLO neural network divides the input image into an S×S grid and predicts B bounding boxes for each grid cell. Each bounding box contains five predicted values: x, y, w, h and confidence, where x and y predict the center coordinates of the bounding box, w and h predict its width and height, and confidence is the confidence of the class to which the bounding box belongs. In addition, each grid cell predicts probabilities for C categories. The following operation is then performed in turn for each of the C categories: the bounding boxes of that category are sorted by confidence from large to small, non-maximum suppression (NMS) is applied to remove bounding boxes with a high overlap, and the required bounding boxes are finally determined.
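The per-class filtering step described above can be illustrated with a minimal sketch (boxes are assumed to be given as (x1, y1, x2, y2) arrays; the 0.5 IoU threshold is an illustrative value, not one fixed by the application):

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union between one box and an array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms_per_class(boxes, scores, iou_thresh=0.5):
    # Sort the boxes of one class by confidence (large to small) and drop
    # any box that overlaps an already-kept box too much.
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        if order.size == 1:
            break
        rest = order[1:]
        overlaps = iou(boxes[best], boxes[rest])
        order = rest[overlaps <= iou_thresh]
    return keep
```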
Inputting the identity card image into the YOLO neural network thus determines the required target text regions and their labels, where a label corresponds to a category used in the YOLO neural network. In the embodiment of the application, the label of a target text region may include at least one of name, identity card number, gender, ethnicity and identity card effective date.
In addition, because the acquired identity card image may have the problems of inversion, inclination, shooting pitch angle and the like, in order to improve the identification accuracy, the embodiment of the application can also perform angle correction and/or distortion correction on the identity card image before performing text region detection on the identity card image.
Angle correction is used to solve the problem that the identity card image is inverted or tilted because of the way it was shot. In the embodiment of the application, a GoogLeNet model can be used. Specifically, the identity card image is input into a pre-trained GoogLeNet model, which outputs the angle category of the identity card image, such as tilted by 90 degrees, 180 degrees or 270 degrees, and the identity card image is then rotated according to the obtained angle category so as to adjust its orientation.
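A sketch of the rotation step is given below; the classifier angle_model returning one of the classes 0/90/180/270 is a hypothetical stand-in for the pre-trained GoogLeNet model, which is not reproduced here, and the clockwise convention is an assumption:

```python
import cv2

# Hypothetical: angle_model.predict(image) returns 0, 90, 180 or 270,
# i.e. how far the card in the image is rotated clockwise.
ROTATIONS = {
    90: cv2.ROTATE_90_COUNTERCLOCKWISE,   # undo a 90-degree clockwise tilt
    180: cv2.ROTATE_180,
    270: cv2.ROTATE_90_CLOCKWISE,
}

def correct_angle(image, angle_model):
    angle = angle_model.predict(image)
    if angle in ROTATIONS:
        image = cv2.rotate(image, ROTATIONS[angle])
    return image
```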
Distortion correction solves the problem of distortion of the identity card image caused by the tilt angle, pitch angle and the like during shooting. In the embodiment of the application, HED (Holistically-Nested Edge Detection) can be used. Specifically, the contour edge of the identity card is first determined through HED, the four vertices of the identity card can then be calculated, and the distortion of the identity card image is corrected through a perspective transformation.
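Once the four vertices have been located (the HED network itself is not reproduced here), the perspective correction can be sketched with OpenCV; the corner ordering and the output size, chosen to roughly match the card's 85.6:54.0 aspect ratio, are assumptions:

```python
import cv2
import numpy as np

def warp_card(image, corners, out_w=856, out_h=540):
    # corners: four (x, y) points ordered top-left, top-right,
    # bottom-right, bottom-left, e.g. derived from the HED contour.
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [out_w - 1, out_h - 1], [0, out_h - 1]])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))
```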
S103, identifying the target text region based on the label to obtain an identification result of the target text region and a confidence coefficient of the identification result; the identification result includes at least one of name, identification number, gender, ethnicity, and identification validity date.
After determining the target text region and the label of the target text region, the target text region may be identified based on the label. In view of the different characteristics of the Chinese characters and the digits in the identity card information, the embodiment of the application identifies the target text regions containing Chinese characters and those containing digits in different ways:
1. Target text regions whose labels are name, gender and ethnicity are identified using a convolutional neural network (CNN) model, obtaining the identification results and confidences of the name, gender and ethnicity.
In general, when identifying a target text region, the characters in it need to be segmented to obtain individual characters. Since the gender and the ethnicity in the identity card information each contain only one Chinese character, no segmentation is needed, whereas a name may contain 2, 3 or more characters, so the target text region labeled as name must first be segmented into characters to obtain several single-character pictures.
The obtained single-character pictures are then input into the CNN model to obtain the recognition result and confidence of each character; the name is then recognized from the per-character recognition results, and the product of the per-character confidences is determined as the confidence of the name.
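A minimal sketch of this combination step (the character classifier char_model and its output format are illustrative assumptions, not something defined by the application):

```python
def recognize_name(char_images, char_model):
    # char_model(img) is assumed to return (predicted_character, confidence)
    # for one single-character picture produced by the segmentation step.
    chars, confidence = [], 1.0
    for img in char_images:
        ch, conf = char_model(img)
        chars.append(ch)
        confidence *= conf   # name confidence = product of per-character confidences
    return "".join(chars), confidence
```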
For the target text regions labeled gender and ethnicity, which are single-character pictures, the character segmentation step can be omitted; they are directly input into the CNN model to obtain the identification results and confidences of the gender and ethnicity.
The embodiment of the application defines the network structure of the CNN model as follows:
The CNN model consists of conv-relu-bn unit blocks, each block containing one convolution layer with a 3×3 kernel, a ReLU layer and a batchnorm layer, optionally followed by a maxpooling layer. 13 such blocks are used throughout the CNN model, organized as:
block1-block2(mp)-block4-block5-block6(mp)-block7-block8(mp)-block9-block10-block11-block12-block13(mp)-out_block
where mp denotes whether a maxpooling layer is used, and the out_block is organized as flatten-dense-relu-dense-relu-bn-dense-softmax.
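A rough PyTorch sketch of a block structure of this kind follows; the channel widths, input channels and class count are illustrative assumptions, the pooling positions follow the "(mp)" markers above (with block3, absent from the listing, assumed to be an ordinary block), and this is not the application's exact network:

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    # One conv-relu-bn unit block: 3x3 convolution, ReLU, batch norm,
    # optionally followed by 2x2 max pooling (the "(mp)" marker above).
    def __init__(self, in_ch, out_ch, use_mp=False):
        super().__init__()
        layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True),
                  nn.BatchNorm2d(out_ch)]
        if use_mp:
            layers.append(nn.MaxPool2d(2))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class CharCNN(nn.Module):
    # 13 conv-relu-bn blocks followed by the out_block
    # (flatten-dense-relu-dense-relu-bn-dense-softmax).
    def __init__(self, num_classes, in_ch=1):
        super().__init__()
        spec = [(32, False), (32, True),                               # block1, block2(mp)
                (64, False), (64, False), (64, False), (64, True),     # block3..block6(mp)
                (128, False), (128, True),                             # block7, block8(mp)
                (128, False), (128, False), (256, False), (256, False),# block9..block12
                (256, True)]                                           # block13(mp)
        blocks, ch = [], in_ch
        for out_ch, mp in spec:
            blocks.append(ConvBlock(ch, out_ch, mp))
            ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.out_block = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.BatchNorm1d(256),
            nn.Linear(256, num_classes),
            nn.Softmax(dim=1))

    def forward(self, x):
        return self.out_block(self.features(x))
```

With a 32×32 single-character input, the four pooled blocks reduce the feature map to 2×2 before the dense head; in training one would normally drop the final Softmax and use a cross-entropy loss on logits, but the softmax output is kept here to match the out_block as described.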
Referring to fig. 2, a schematic flow chart of text segmentation for a target text region labeled as a name according to an embodiment of the present application is provided. The process comprises the following steps:
s201, calculating the illuminance of the target text region with the label being the name, and adjusting the illumination value of each pixel in the target text region with the label being the name according to the illuminance.
Specifically, the illuminance may be determined according to the following formula:
ave = Σp(c1 + c2 + c3) / (3 × w × h);
wherein ave is the illuminance of the target text region, rounded to an integer in the interval 0-255; w and h are the width and height of the target text region, p is a pixel point in the target text region, and c1, c2 and c3 are the color values of the three RGB channels of the pixel point p.
If ave is too small, the target text region is too dark; if ave is too large, the target text region is over-exposed. In either case the illumination value of each pixel point in the target text region needs to be adjusted, which can be done with the following formula:
C = αc + β;
wherein C is the illumination value of the pixel point after adjustment, c is the illumination value of the pixel point before adjustment, and α and β are parameters that can be determined from ave.
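A sketch of step S201 under the two formulas above; how α and β are derived from ave is not specified in the text, so the example policy and values below are placeholders:

```python
import numpy as np

def average_illuminance(region):
    # region: h x w x 3 RGB crop of the name text area.
    # ave = sum of the three channel values over all pixels / (3 * w * h),
    # rounded to an integer in 0..255.
    h, w = region.shape[:2]
    ave = region.astype(np.float64).sum() / (3.0 * w * h)
    return int(np.clip(round(ave), 0, 255))

def adjust_illumination(region, alpha, beta):
    # Per-pixel linear adjustment C = alpha * c + beta, clipped back to 0..255.
    adjusted = alpha * region.astype(np.float64) + beta
    return np.clip(adjusted, 0, 255).astype(np.uint8)

# Placeholder policy: brighten a dark region, leave a normally lit one alone.
# ave = average_illuminance(region)
# region = adjust_illumination(region, alpha=1.2, beta=10) if ave < 100 else region
```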
After the illuminance of the target text area is adjusted, the accuracy of subsequent text cutting can be improved.
S202, performing binarization processing and color reversal processing on the adjusted target text region in sequence.
After binarization of the adjusted target text region, an image with black characters on a white background is obtained; to simplify the pixel calculations that follow, a color-inversion step is further applied so that the black characters on a white background become white characters on a black background.
S203, performing pixel projection in the vertical direction and the horizontal direction on the processed target text region to obtain text information in the processed target text region, wherein the text information comprises: character height, character spacing, character width, and name length.
Specifically, the processed target text region is reduced to one dimension: the pixels are accumulated in the vertical direction and in the horizontal direction in the form of histograms, from which the character height and the character spacing are calculated. The character width can be taken as 0.8 times the character height, which avoids the phenomenon that some names cannot be separated because of interference from complex glyphs. The name length can be calculated from the concentration of the pixel projection in the vertical direction, and the cutting points of the name are then determined.
S204, dividing the characters in the processed target text region according to the obtained character information to obtain a plurality of single-character pictures.
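A simplified sketch of S203 and S204 (assuming the region has already been binarized and inverted to white characters on a black background as in S202; the 0.8 factor comes from the text above, while the minimum segment width is an assumption):

```python
import numpy as np

def segment_name_characters(binary, min_width=2):
    # binary: h x w array in which character pixels are > 0 (white on black).
    # Horizontal projection: rows containing character pixels give the character height.
    row_profile = (binary > 0).sum(axis=1)
    text_rows = np.flatnonzero(row_profile > 0)
    char_height = int(text_rows[-1] - text_rows[0] + 1) if text_rows.size else 0
    # Width taken as 0.8 x height (see S203); useful for splitting touching characters.
    char_width = int(0.8 * char_height)

    # Vertical projection: columns without character pixels separate the characters.
    col_has_text = (binary > 0).sum(axis=0) > 0
    pieces, start = [], None
    for x, has_text in enumerate(col_has_text):
        if has_text and start is None:
            start = x
        elif not has_text and start is not None:
            if x - start >= min_width:   # ignore isolated specks
                pieces.append(binary[:, start:x])
            start = None
    if start is not None:
        pieces.append(binary[:, start:])
    return pieces, char_height, char_width
```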
2. Target text regions whose labels are identity card number and identity card effective date are identified using a convolutional recurrent neural network (CRNN) model, obtaining the identification results and confidences of the identity card number and the identity card effective date.
Specifically, angle correction and latitude selection can be performed on a target text area with a label being an identity card number and an identity card effective date, so that the target text area with the label being the identity card number and the identity card effective date contains complete digital information;
and then inputting the processed target text region into a CRNN model to obtain the identification result and the confidence coefficient of the identification card number and the effective date of the identification card.
A typical CRNN model consists of CNN + BiLSTM + CTC, but the LSTM is difficult to train and brings little improvement; in the embodiment of the application the BiLSTM layer is replaced with a biGRU layer to improve recognition efficiency.
In addition, the original network fixes the image height at 32 and does not limit the width, but its recognition of the 18-character identity card number and the 21-character identity card effective date is not ideal. In the embodiment of the application, the input size is fixed at a height-to-width ratio of 32:200, and the symbol in the effective date is excluded, which further improves the recognition accuracy.
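A rough PyTorch sketch of a CRNN along these lines (CNN feature extractor, bidirectional GRU instead of BiLSTM, per-timestep log-probabilities suitable for CTC decoding); only the 32×200 input size and the biGRU substitution come from the text, the layer widths and convolutional stack are illustrative assumptions:

```python
import torch.nn as nn

class DigitCRNN(nn.Module):
    # Input: (B, 1, 32, 200) grayscale crops of the ID number / validity date.
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 32x200 -> 16x100
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 16x100 -> 8x50
            nn.Conv2d(128, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.MaxPool2d((2, 1)),                 # 8x50  -> 4x50 (keep width as time steps)
        )
        self.rnn = nn.GRU(input_size=256 * 4, hidden_size=hidden,
                          bidirectional=True, batch_first=True)   # biGRU replaces BiLSTM
        self.fc = nn.Linear(2 * hidden, num_classes)               # num_classes includes the CTC blank

    def forward(self, x):                         # x: (B, 1, 32, 200)
        feat = self.cnn(x)                        # (B, 256, 4, 50)
        b, c, h, w = feat.shape
        seq = feat.permute(0, 3, 1, 2).reshape(b, w, c * h)   # (B, 50, 1024): one step per width column
        out, _ = self.rnn(seq)                    # (B, 50, 2*hidden)
        return self.fc(out).log_softmax(dim=2)    # per-timestep class log-probabilities for CTC
```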
S104, determining the quality value of the identity card image according to the confidence coefficient of the identification result of the target text region.
In the embodiment of the application, the confidence of the recognition result of the target text region comprises:
confidence of name, confidence of identification card number, confidence of gender, confidence of ethnicity, and confidence of identification card expiration date.
Therefore, the average value of the 5 confidence degrees can be used as the quality value of the identity card image, so as to measure whether the identity card image meets the identification requirement.
Before determining the quality value of the identity card image, the embodiment of the application can further comprise:
judging whether the identification card number is wrongly identified or not by using the check bit of the identified identification card number, if so, updating the confidence coefficient of the identification card number to 0; and/or
Judging whether the identified identity card effective date was recognized incorrectly by utilizing a preset identification rule, and if so, updating the confidence coefficient of the identity card effective date to 0. The identification rules may include, but are not limited to: the interval between the two years of the validity period is a preset value, and the month and the day of the two dates are the same.
The confidences of the identity card number and of the identity card validity period are updated so that the determined quality value of the identity card image is more accurate. For the identity card number, the number carries a check bit against which it can be verified; therefore, after the identity card number and its confidence have been determined with the CRNN model, the determined number can be verified with the check bit, and if the verification shows that the number was recognized incorrectly, its confidence is directly updated to 0, which is more accurate than the confidence produced by the CRNN model. Similarly, the updated confidence is more accurate for the identity card effective date. The quality value of the identity card image determined from the updated confidences is accordingly more reliable.
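The text only states that the check bit of the recognized number is used; one concrete instance is the standard checksum for 18-digit Chinese resident identity card numbers (GB 11643-1999), sketched below together with the confidence update described above:

```python
WEIGHTS = (7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2)
CHECK_CODES = "10X98765432"   # check character for weighted-sum remainder 0..10

def id_number_check_ok(id_number: str) -> bool:
    # The weighted sum of the first 17 digits mod 11 determines the 18th character.
    if len(id_number) != 18 or not id_number[:17].isdigit():
        return False
    total = sum(int(d) * w for d, w in zip(id_number[:17], WEIGHTS))
    return id_number[17].upper() == CHECK_CODES[total % 11]

def update_id_confidence(id_number, confidence):
    # A failed check forces the ID-number confidence to 0 so that it
    # drags down the quality value of the whole image.
    return confidence if id_number_check_ok(id_number) else 0.0
```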
And S105, when the quality value is higher than a preset threshold value, determining the identification result of the target text area as the identity card information of the identity card image.
In the embodiment of the application, the clarity of the identity card image is judged against a preset threshold value: when the obtained quality value is higher than the preset threshold value, the identity card image is sufficiently clear and the obtained identification result is reliable, so the identification result can be directly determined as the identity card information of the identity card image.
Further, when the quality value is not higher than the preset threshold value, the identity card image is not clear enough; a notification message prompting that the identity card image is unqualified can then be output, so that unqualified identity card photos are rejected, which can greatly improve the service efficiency of the service party.
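Putting S104 and S105 together, a minimal sketch (the threshold value and the field names are placeholders, not values fixed by the application):

```python
def decide(results, threshold=0.9):
    # results: mapping from field name to (recognized_text, confidence), e.g.
    # {"name": ..., "id_number": ..., "gender": ..., "ethnicity": ..., "valid_date": ...}
    confidences = [conf for _, conf in results.values()]
    quality = sum(confidences) / len(confidences)      # quality value = mean confidence
    if quality > threshold:
        return {field: text for field, (text, _) in results.items()}   # accept as ID card info
    return None   # image rejected: prompt the user to submit a clearer photo
```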
According to the method for identifying identity card information provided by the embodiment of the application, text region detection is performed on the acquired identity card image to determine a target text region in the identity card image and a label of the target text region; the target text region is then identified based on the label to obtain an identification result of the target text region and a confidence of the identification result; a quality value of the identity card image is then determined based on the confidence of the identification result of the target text region; and when the quality value is higher than a preset threshold value, the identification result of the target text region is determined to be the identity card information of the identity card image. Compared with the prior art, the judgment of the confidence of the identification result and of the quality value of the identity card image is added: when the quality value of the identity card image, determined from the confidence of the identification result, meets the preset threshold value, the acquired identity card image is sufficiently clear, and only then is the identification result determined to be the identity card information, so that the accuracy of the identification result is ensured.
In some embodiments, angle correction and distortion correction are performed on the identity card image before the target text region is determined from it, so that identity card images with problems such as inversion, tilt or a shooting pitch angle can be corrected in advance, further improving the identification accuracy of the identity card information.
Fig. 3 is a schematic structural diagram of an identification device for identification card information according to an embodiment of the present application. As shown in fig. 3, the apparatus may implement the method shown in fig. 1 and 2, and the apparatus may include:
an acquisition unit 31 for acquiring an identification card image;
a text region detection unit 32, configured to perform text region detection on the identification card image, and determine a target text region and a label of the target text region, where the label is used to characterize a text category in the target text region;
a recognition unit 33, configured to recognize the target text region based on the tag, and obtain a recognition result of the target text region and a confidence level of the recognition result; the identification result comprises at least one of name, ID card number, gender, ethnicity and effective date of ID card;
a quality value determining unit 34, configured to determine a quality value of the identification card image according to a confidence level of the recognition result of the target text region;
and the identification card information determining unit 35 is configured to determine the identification result of the target text area as the identification card information of the identification card image when the quality value is higher than a preset threshold.
Optionally, the apparatus may further include:
and the output unit is used for outputting a notification message for prompting that the identity card image is unqualified when the quality value is not higher than the preset threshold value.
Optionally, the apparatus may further include:
and the preprocessing unit is used for carrying out angle correction and/or distortion correction on the identity card image before carrying out text region detection on the identity card image.
Alternatively, the text region detecting unit 32 may be specifically configured to:
and based on the Yolo neural network, detecting the text region of the identity card image, and determining a target text region and a label of the target text region.
Optionally, the identifying unit 33 specifically includes:
the first recognition module is used for recognizing target text regions whose labels are name, gender and ethnicity by utilizing a convolutional neural network CNN model, to obtain recognition results and confidences of the name, gender and ethnicity;
and the second recognition module is used for recognizing the target text area with the tag being the identity card number and the identity card effective date by utilizing the convolutional neural network CRNN model to obtain a recognition result and a confidence coefficient of the identity card number and the identity card effective date.
The first identification module is specifically configured to:
performing text segmentation on the target text region with the label being a name to obtain a plurality of single-word pictures;
inputting the plurality of single-word pictures into the CNN model to obtain the recognition result and the confidence coefficient of each word;
determining the recognition result of the name according to the recognition result of each word, and determining the product of the confidence coefficient of each word as the confidence coefficient of the name;
and respectively inputting the target text regions of which the tag is gender and ethnicity into the CNN model to obtain the identification results and the confidence of the gender and ethnicity.
Further, the first recognition module performs text segmentation on the target text region with the tag being a name, and is specifically configured to:
calculating the illuminance of the target text region with the label being a name, and adjusting the illumination value of each pixel in the target text region with the label being the name according to the illuminance;
sequentially performing binarization processing and color reversal processing on the adjusted target text region;
performing pixel projection in the vertical direction and the horizontal direction on the processed target text region to obtain text information in the processed target text region; the text information comprises: character height, character spacing, character width, and name length;
and dividing the characters in the processed target text region according to the character information to obtain a plurality of single-character pictures.
The second recognition module is specifically configured to:
performing angle correction and latitude selection on a target text area of which the tag is an identity card number and an identity card effective date;
and respectively inputting the processed target text regions into a CRNN model to obtain identification results and confidence of the identification card number and the identification card effective date.
Optionally, the quality value determining unit 34 is specifically configured to:
and determining an average value of the confidence coefficient of the identification result as a quality value of the identity card image.
Further, the apparatus may further include a confidence updating unit configured to:
judging whether the identification card number is wrongly identified or not by using the check bit of the identified identification card number, if so, updating the confidence coefficient of the identification card number to be zero; and/or
And judging whether the identified effective date of the identity card is identified to be wrong or not by utilizing a preset identification rule, if so, updating the confidence coefficient of the effective date of the identity card to be zero.
The identification device for the identity card information provided by the embodiment of the application can execute the embodiment of the method, and the implementation principle and the technical effect are similar and are not repeated here.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 4, a schematic diagram of a computer system 400 suitable for use in implementing a terminal device or server of an embodiment of the present application is shown.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the system 400. The CPU 401, the ROM 402 and the RAM 403 are connected to each other through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse and the like; an output section 407 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card, a modem or the like. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to fig. 1-2 may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the identification method of the identification card information of fig. 1-2. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 409 and/or installed from the removable medium 411.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules involved in the embodiments of the present application may be implemented in software or in hardware. The described units or modules may also be provided in a processor, for example, as: a processor comprises an acquisition unit, a text region detection unit, an identification unit, a quality value determination unit and an identity card information determination unit. The names of these units or modules do not in any way limit the unit or module itself, for example, the acquisition unit may also be described as "unit for acquiring an image of an identification card".
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the identification method of identification card information as described in the above embodiments.
For example, the electronic device may implement the method as shown in fig. 1: S101, acquiring an identity card image; S102, detecting text regions of the identity card image, and determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region; S103, identifying the target text region based on the label to obtain an identification result of the target text region and a confidence of the identification result, the identification result comprising at least one of name, identity card number, gender, ethnicity and identity card effective date; S104, determining the quality value of the identity card image according to the confidence of the identification result of the target text region; and S105, when the quality value is higher than a preset threshold value, determining the identification result of the target text region as the identity card information of the identity card image.
The above description is only illustrative of the preferred embodiments of the present application and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in the present application is not limited to technical solutions formed by the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions in which the above features are replaced with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (8)

1. A method for identifying identity card information, the method comprising:
acquiring an identity card image;
detecting a text region of the identity card image, and determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region;
identifying the target text region based on the tag to obtain an identification result of the target text region and a confidence coefficient of the identification result; the identification result comprises at least one of name, ID card number, gender, ethnicity and effective date of ID card;
determining the quality value of the identity card image according to the confidence coefficient of the identification result of the target text region;
when the quality value is higher than a preset threshold value, determining the identification result of the target text area as the identity card information of the identity card image;
the method for detecting the text region of the identity card image and determining the target text region and the label of the target text region comprises the following steps:
based on a Yolo neural network, detecting a text region of the identity card image, and determining a target text region and a label of the target text region;
identifying the target text region based on the tag to obtain an identification result of the target text region and a confidence of the identification result, wherein the identification method comprises the following steps:
identifying target text regions whose labels are name, gender and ethnicity by utilizing a convolutional neural network CNN model, to obtain identification results and confidences of the name, gender and ethnicity;
and identifying the target text area with the tag being the identity card number and the identity card effective date by utilizing a convolutional recurrent neural network CRNN model to obtain an identification result and a confidence coefficient of the identity card number and the identity card effective date.
2. The method according to claim 1, wherein the method further comprises:
and outputting a notification message for prompting that the identity card image is unqualified when the quality value is not higher than the preset threshold value.
3. The method of claim 1, wherein prior to text region detection of the identification card image, the method further comprises:
and carrying out angle correction and/or distortion correction on the identity card image.
4. The method of claim 1, wherein identifying the target text region labeled name, gender and ethnicity using a CNN model to obtain the identification result and confidence of the name, gender and ethnicity comprises:
performing text segmentation on the target text region with the label being a name to obtain a plurality of single-word pictures;
inputting the plurality of single-word pictures into the CNN model to obtain the recognition result and the confidence coefficient of each word;
determining the recognition result of the name according to the recognition result of each word, and determining the product of the confidence coefficient of each word as the confidence coefficient of the name;
and respectively inputting the target text regions of which the tag is gender and ethnicity into the CNN model to obtain the identification results and the confidence of the gender and ethnicity.
5. The method of claim 4, wherein the text segmentation of the target text region with the tag being a name to obtain a plurality of single-word pictures comprises:
calculating the illuminance of the target text region with the label being a name, and adjusting the illumination value of each pixel in the target text region with the label being the name according to the illuminance;
sequentially performing binarization processing and color reversal processing on the adjusted target text region;
performing pixel projection in the vertical direction and the horizontal direction on the processed target text region to obtain text information in the processed target text region; the text information comprises: character height, character spacing, character width, and name length;
and dividing the characters in the processed target text region according to the character information to obtain a plurality of single-character pictures.
6. The method of claim 1, wherein determining the quality value of the identification card image based on the confidence level of the recognition result of the target text region comprises:
and determining an average value of the confidence coefficient of the identification result as a quality value of the identity card image.
7. The method of claim 6, wherein prior to determining the quality value of the identification card image, the method further comprises:
judging whether the identification card number is wrongly identified or not by using the check bit of the identified identification card number, if so, updating the confidence coefficient of the identification card number to be zero; and/or
And judging whether the identified effective date of the identity card is identified to be wrong or not by utilizing a preset identification rule, if so, updating the confidence coefficient of the effective date of the identity card to be zero.
8. An identification device for identification card information, the device comprising:
the acquisition unit is used for acquiring the identity card image;
the text region detection unit is used for detecting the text region of the identity card image, determining a target text region and a label of the target text region, wherein the label is used for representing the text category in the target text region;
the identification unit is used for identifying the target text area based on the tag to obtain an identification result of the target text area and the confidence of the identification result; the identification result comprises at least one of name, ID card number, gender, ethnicity and effective date of ID card;
the quality value determining unit is used for determining the quality value of the identity card image according to the confidence coefficient of the identification result of the target text region;
the identification card information determining unit is used for determining the identification result of the target text area as the identification card information of the identification card image when the quality value is higher than a preset threshold value;
wherein, the text region detection unit further includes:
based on a Yolo neural network, detecting a text region of the identity card image, and determining a target text region and a label of the target text region;
the identification unit further includes:
identifying target text regions whose labels are name, gender and ethnicity by utilizing a convolutional neural network CNN model, to obtain identification results and confidences of the name, gender and ethnicity;
and identifying the target text area with the tag being the identity card number and the identity card effective date by utilizing a convolutional recurrent neural network CRNN model to obtain an identification result and a confidence coefficient of the identity card number and the identity card effective date.
CN201811572672.4A 2018-12-21 2018-12-21 Identification method and device for identity card information Active CN111353497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811572672.4A CN111353497B (en) 2018-12-21 2018-12-21 Identification method and device for identity card information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811572672.4A CN111353497B (en) 2018-12-21 2018-12-21 Identification method and device for identity card information

Publications (2)

Publication Number Publication Date
CN111353497A CN111353497A (en) 2020-06-30
CN111353497B true CN111353497B (en) 2023-11-28

Family

ID=71193708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811572672.4A Active CN111353497B (en) 2018-12-21 2018-12-21 Identification method and device for identity card information

Country Status (1)

Country Link
CN (1) CN111353497B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881810B (en) * 2020-07-23 2024-03-29 前海人寿保险股份有限公司 Certificate identification method, device, terminal and storage medium based on OCR
CN111683285B (en) * 2020-08-11 2021-01-26 腾讯科技(深圳)有限公司 File content identification method and device, computer equipment and storage medium
CN111950554A (en) * 2020-08-17 2020-11-17 深圳市丰巢网络技术有限公司 Identification card identification method, device, equipment and storage medium
CN112818979B (en) * 2020-08-26 2024-02-02 腾讯科技(深圳)有限公司 Text recognition method, device, equipment and storage medium
CN112699775A (en) * 2020-12-28 2021-04-23 中国平安人寿保险股份有限公司 Certificate identification method, device and equipment based on deep learning and storage medium
CN113051901B (en) * 2021-03-26 2023-03-24 重庆紫光华山智安科技有限公司 Identification card text recognition method, system, medium and electronic terminal
CN113591829B (en) * 2021-05-25 2024-02-13 上海一谈网络科技有限公司 Character recognition method, device, equipment and storage medium
CN113469029A (en) * 2021-06-30 2021-10-01 上海犀语科技有限公司 Text recognition method and device for financial pdf scanned piece
CN113378232A (en) * 2021-08-11 2021-09-10 成方金融科技有限公司 Information acquisition method and device, computer equipment and storage medium
CN113963149A (en) * 2021-10-29 2022-01-21 平安科技(深圳)有限公司 Medical bill picture fuzzy judgment method, system, equipment and medium
CN115063913B (en) * 2022-05-27 2023-05-30 平安银行股份有限公司 Identity information input method and device based on optical character recognition and related equipment
CN115375998B (en) * 2022-10-24 2023-03-17 成都新希望金融信息有限公司 Certificate identification method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751568A (en) * 2008-12-12 2010-06-23 汉王科技股份有限公司 ID No. locating and recognizing method
CN104680130A (en) * 2015-01-09 2015-06-03 安徽清新互联信息科技有限公司 Chinese character recognition method for identification cards
CN106156712A (en) * 2015-04-23 2016-11-23 信帧电子技术(北京)有限公司 A kind of based on the ID (identity number) card No. recognition methods under natural scene and device
CN106886774A (en) * 2015-12-16 2017-06-23 腾讯科技(深圳)有限公司 The method and apparatus for recognizing ID card information
CN108959462A (en) * 2018-06-19 2018-12-07 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109034050A (en) * 2018-07-23 2018-12-18 顺丰科技有限公司 ID Card Image text recognition method and device based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5649509B2 (en) * 2011-05-10 2015-01-07 株式会社日立ソリューションズ Information input device, information input system, and information input method
US10026020B2 (en) * 2016-01-15 2018-07-17 Adobe Systems Incorporated Embedding space for images with multiple text labels

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751568A (en) * 2008-12-12 2010-06-23 汉王科技股份有限公司 ID No. locating and recognizing method
CN104680130A (en) * 2015-01-09 2015-06-03 安徽清新互联信息科技有限公司 Chinese character recognition method for identification cards
CN106156712A (en) * 2015-04-23 2016-11-23 信帧电子技术(北京)有限公司 A kind of based on the ID (identity number) card No. recognition methods under natural scene and device
CN106886774A (en) * 2015-12-16 2017-06-23 腾讯科技(深圳)有限公司 The method and apparatus for recognizing ID card information
CN108959462A (en) * 2018-06-19 2018-12-07 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109034050A (en) * 2018-07-23 2018-12-18 顺丰科技有限公司 ID Card Image text recognition method and device based on deep learning

Also Published As

Publication number Publication date
CN111353497A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN111353497B (en) Identification method and device for identity card information
CN109241894B (en) Bill content identification system and method based on form positioning and deep learning
CN109492643B (en) Certificate identification method and device based on OCR, computer equipment and storage medium
CN110008944B (en) OCR recognition method and device based on template matching and storage medium
CN111325203B (en) American license plate recognition method and system based on image correction
CN110766014B (en) Bill information positioning method, system and computer readable storage medium
CN107798299B (en) Bill information identification method, electronic device and readable storage medium
CN108229509B (en) Method and device for identifying object class and electronic equipment
CN110569878B (en) Photograph background similarity clustering method based on convolutional neural network and computer
CN110909690B (en) Method for detecting occluded face image based on region generation
Zhang et al. Image segmentation based on 2D Otsu method with histogram analysis
US20180137321A1 (en) Method and system for decoding two-dimensional code using weighted average gray-scale algorithm
CN111626190A (en) Water level monitoring method for scale recognition based on clustering partitions
CN108090511B (en) Image classification method and device, electronic equipment and readable storage medium
WO2022156178A1 (en) Image target comparison method and apparatus, computer device and readable storage medium
CN111626292B (en) Text recognition method of building indication mark based on deep learning technology
CN112396047B (en) Training sample generation method and device, computer equipment and storage medium
CN112580507A (en) Deep learning text character detection method based on image moment correction
CN111368632A (en) Signature identification method and device
CN110443184A (en) ID card information extracting method, device and computer storage medium
CN113139535A (en) OCR document recognition method
CN109741273A (en) A kind of mobile phone photograph low-quality images automatically process and methods of marking
CN116030453A (en) Digital ammeter identification method, device and equipment
RU2633182C1 (en) Determination of text line orientation
CN112949649B (en) Text image identification method and device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant