CN115578606A - Two-dimensional code identification method and device, computer equipment and readable storage medium - Google Patents

Two-dimensional code identification method and device, computer equipment and readable storage medium

Info

Publication number
CN115578606A
CN115578606A (application CN202211564732.4A)
Authority
CN
China
Prior art keywords
predicted
point
image
feature
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211564732.4A
Other languages
Chinese (zh)
Other versions
CN115578606B (en)
Inventor
陈帅
刘枢
吕江波
沈小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd filed Critical Shenzhen Smartmore Technology Co Ltd
Priority to CN202211564732.4A
Publication of CN115578606A
Application granted
Publication of CN115578606B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 Methods for optical code recognition
    • G06K 7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K 7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a two-dimensional code identification method and apparatus, a computer device, and a readable storage medium. The method comprises: acquiring an image to be identified; extracting predicted feature point information from the image to be identified using a feature recognition model, the predicted feature point information indicating predicted outer corner points, predicted centripetal vectors, predicted center points, and predicted outward vectors, where a predicted centripetal vector is a predicted vector pointing from a predicted outer corner point to the center of a two-dimensional code, and a predicted outward vector is a predicted vector pointing from a predicted center point to the four corners of a two-dimensional code; and identifying one or more two-dimensional codes in the image to be identified based on the predicted feature point information. The method helps reduce confusion between the feature points of different two-dimensional codes during identification, and can therefore improve the efficiency of two-dimensional code identification.

Description

Two-dimensional code identification method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a two-dimensional code recognition method and apparatus, a computer device, and a readable storage medium.
Background
Two-dimensional codes have developed rapidly and been widely adopted in recent years owing to their large information capacity, easy recognition, and low cost. At present, a two-dimensional code is scanned by a terminal device to obtain the corresponding information, an approach commonly applied in industrial manufacturing and daily life. However, current two-dimensional code image recognition suffers from problems such as low efficiency and low accuracy.
Disclosure of Invention
Embodiments of the present application provide a two-dimensional code identification method and apparatus, a computer device, and a readable storage medium, which can improve the efficiency and accuracy of two-dimensional code image identification.
In a first aspect, a two-dimensional code recognition method is provided, including:
acquiring an image to be identified;
extracting predicted feature point information from the image to be identified using a feature recognition model, wherein the predicted feature point information indicates predicted outer corner points, predicted centripetal vectors, predicted center points, and predicted outward vectors; a predicted centripetal vector is a predicted vector pointing from a predicted outer corner point to the center of the two-dimensional code, and a predicted outward vector is a predicted vector pointing from a predicted center point to the four corners of the two-dimensional code;
and identifying one or more two-dimensional codes in the image to be identified based on the predicted feature point information.
Based on this technical scheme, when multiple two-dimensional codes are present in one image, the feature recognition model can output predicted feature point information indicating the predicted outer corner points and centripetal vectors as well as the predicted center points and outward vectors. Identifying the two-dimensional codes from this information helps reduce the mutual interference between different two-dimensional codes during recognition, for example confusion when reading the feature points of different codes, and thereby improves the efficiency and accuracy of two-dimensional code image identification.
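The predicted feature point information described above can be sketched as a small data structure. The class name, field names, and array layouts below are illustrative assumptions, not the patent's; the `corner_votes` helper shows why pairing corners with centripetal vectors helps separate codes, since each corner "votes" for the centre of the code it belongs to.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class PredictedFeaturePoints:
    outer_corners: np.ndarray        # (N, 2) predicted outer corner points (x, y)
    centripetal_vectors: np.ndarray  # (N, 2) predicted vectors corner -> code centre
    center_points: np.ndarray        # (M, 2) predicted centre points
    outward_vectors: np.ndarray      # (M, 4, 2) predicted vectors centre -> four corners

    def corner_votes(self) -> np.ndarray:
        """Each corner plus its centripetal vector points at a candidate centre;
        corners whose votes land near the same predicted centre can be grouped
        as belonging to the same two-dimensional code."""
        return self.outer_corners + self.centripetal_vectors
```

A corner set belonging to one code should produce four nearly coincident votes, while corners of a different code vote elsewhere.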
With reference to the first aspect, in a possible implementation manner of the first aspect, the feature recognition model includes a corner point branch and a central point branch,
the corner branch outputs a corner classification feature map and a corner regression feature map; the corner classification feature map indicates a first probability, which is the probability that a point on the image to be identified is an outer corner of a two-dimensional code, and the corner regression feature map indicates the centripetal vector corresponding to each point;
the center point branch outputs a center point classification feature map and a center point regression feature map; the center point classification feature map indicates a second probability, which is the probability that a point on the image to be identified is the center point of a two-dimensional code, and the center point regression feature map indicates the outward vectors corresponding to each point.
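The two branches could produce maps shaped as follows. The channel counts are assumptions inferred from the description, not stated in the text: one probability channel per classification map, 2 channels (x, y) for the corner regression map's centripetal vector, and 4 x 2 = 8 channels for the centre regression map's four outward vectors.

```python
import numpy as np


def dummy_branch_outputs(h: int, w: int, seed: int = 0):
    """Illustrative output shapes for the corner branch and centre point branch
    on an h-by-w feature map (random values stand in for real predictions)."""
    rng = np.random.default_rng(seed)
    corner_cls = rng.random((h, w, 1))   # first probability per point
    corner_reg = rng.random((h, w, 2))   # one centripetal vector per point
    center_cls = rng.random((h, w, 1))   # second probability per point
    center_reg = rng.random((h, w, 8))   # four outward vectors per point
    return corner_cls, corner_reg, center_cls, center_reg
```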
Based on this technical scheme, the outer corner points and the center points can be predicted separately by the corner branch and the center point branch of the feature recognition model, which can improve the accuracy of two-dimensional code image recognition.
With reference to the first aspect, in a possible implementation manner of the first aspect, the extracting, by using a feature recognition model, predicted feature point information of an image to be recognized includes:
determining, through the corner branch, a corner classification feature map and a corner regression feature map corresponding to the image to be identified;
determining points whose first probability is greater than a first threshold as predicted outer corner points;
and determining the centripetal vectors corresponding to the predicted outer corner points as predicted centripetal vectors.
With reference to the first aspect, in a possible implementation manner of the first aspect, the extracting, by using a feature recognition model, predicted feature point information of an image to be recognized includes:
determining, through the center point branch, a center point classification feature map and a center point regression feature map corresponding to the image to be identified;
determining points whose second probability is greater than a second threshold as predicted center points;
and determining the outward vectors corresponding to the predicted center points as predicted outward vectors.
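The thresholding step used by both branches can be sketched in a few lines: keep the points whose classification probability exceeds the threshold, together with the regression vectors at those points. Threshold values are model- and application-specific assumptions.

```python
import numpy as np


def extract_predictions(cls_map: np.ndarray, reg_map: np.ndarray, threshold: float):
    """Select points from a (H, W) classification map whose probability exceeds
    `threshold`, and gather the matching vectors from the (H, W, C) regression
    map.  Returns (x, y) coordinates and the vector at each kept point."""
    ys, xs = np.where(cls_map > threshold)
    points = np.stack([xs, ys], axis=1)  # (K, 2) in (x, y) order
    vectors = reg_map[ys, xs]            # (K, C) regression output per point
    return points, vectors
```

Applied to the corner branch this yields the predicted outer corner points and centripetal vectors; applied to the centre branch, the predicted center points and outward vectors.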
With reference to the first aspect, in a possible implementation manner of the first aspect, identifying one or more two-dimensional codes in an image to be identified based on predicted feature point information includes:
grouping the predicted outer corner points and predicted center points into at least one feature point group, wherein the predicted outer corner points and the predicted center point in each feature point group belong to the same two-dimensional code;
and determining the number and positions of the two-dimensional codes in the image to be identified according to the at least one feature point group.
Based on this technical scheme, the feature points belonging to one two-dimensional code can be accurately combined, which reduces the mutual interference between different two-dimensional codes and further improves the accuracy and efficiency of two-dimensional code image identification.
With reference to the first aspect, in a possible implementation manner of the first aspect, the grouping the prediction outer corner points and the prediction center points to obtain at least one feature point group includes:
determining that the included angle between a predicted centripetal vector and the corresponding predicted outward vector is smaller than a third threshold, and that the quadrilateral formed by the four predicted outer corner points is a parallelogram;
and combining the predicted outer corner points and the predicted center point accordingly to obtain at least one feature point group.
With reference to the first aspect, in a possible implementation manner of the first aspect, the image to be identified includes a QR code, and the predicted feature point information further indicates predicted contour corner points, where a predicted contour corner point is a vertex of a position detection pattern lying on the contour line of the two-dimensional code;
identifying one or more two-dimensional codes in the image to be identified based on the predicted feature point information further comprises: and identifying the version information of the QR code based on the predicted contour corner points, the predicted outer corner points and the predicted central points.
With reference to the first aspect, in a possible implementation manner of the first aspect, the image to be identified includes a QR code, and the predicted feature point information further indicates predicted inner corner points, where a predicted inner corner point is a vertex of a position detection pattern lying in the interior of the two-dimensional code;
identifying one or more two-dimensional codes in the image to be identified based on the predicted feature point information further comprises: and identifying the version information of the QR code based on the predicted inner corner points, the predicted outer corner points and the predicted central points.
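One way such corner points could yield version information (a hedged sketch, not the patent's stated formula): the position detection (finder) pattern is 7 modules wide by the QR specification, so its pixel span between an outer corner and the matching pattern corner gives the module size, and a QR symbol of version V measures 17 + 4V modules per side.

```python
def estimate_qr_version(side_length_px: float, finder_span_px: float) -> int:
    """Estimate a QR code's version from its side length in pixels and the
    pixel span of one position detection pattern (7 modules wide)."""
    module_px = finder_span_px / 7.0                 # pixels per module
    modules_per_side = round(side_length_px / module_px)
    return (modules_per_side - 17) // 4              # version V: 17 + 4V modules
```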
With reference to the first aspect, in a possible implementation manner of the first aspect, the feature recognition model is obtained by training on a plurality of sample images and the feature point information of the sample images, and the training includes: constructing a first loss function, where the first loss function is constructed based on an extension of the categorical cross-entropy loss.
With reference to the first aspect, in a possible implementation manner of the first aspect, the feature recognition model is obtained by training on a plurality of sample images and the feature point information of the sample images, and the training includes: constructing a second loss function, where the second loss function is constructed based on a smooth mean absolute error loss function.
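The patent does not give the exact form of either loss. One common pairing for keypoint heatmap detectors, shown as an assumption, is a focal-style reweighted cross-entropy for the classification maps and a smooth L1 (Huber-style) loss, i.e. a smoothed mean absolute error, for the regression maps.

```python
import numpy as np


def focal_bce(p: np.ndarray, y: np.ndarray, alpha: float = 2.0,
              eps: float = 1e-7) -> float:
    """Focal-style extension of binary cross-entropy: confident, correct
    predictions are down-weighted by the (1 - p)**alpha / p**alpha factors."""
    p = np.clip(p, eps, 1.0 - eps)
    loss = -(y * (1.0 - p) ** alpha * np.log(p)
             + (1.0 - y) * p ** alpha * np.log(1.0 - p))
    return float(loss.mean())


def smooth_l1(pred: np.ndarray, target: np.ndarray, beta: float = 1.0) -> float:
    """Smooth mean absolute error: quadratic for errors below beta,
    linear (absolute) above, so large outliers do not dominate."""
    d = np.abs(pred - target)
    return float(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).mean())
```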
In a second aspect, a two-dimensional code recognition apparatus is provided, including:
the acquisition module is used for acquiring an image to be identified;
the extraction module is used for extracting predicted feature point information from the image to be identified using a feature recognition model, wherein the predicted feature point information indicates predicted outer corner points, predicted centripetal vectors, predicted center points, and predicted outward vectors; a predicted centripetal vector is a predicted vector pointing from a predicted outer corner point to the center of the two-dimensional code, and a predicted outward vector is a predicted vector pointing from a predicted center point to the four corners of the two-dimensional code;
and the identification module is used for identifying one or more two-dimensional codes in the image to be identified based on the predicted characteristic point information.
Based on this technical scheme, when multiple two-dimensional codes are present in one image, the image to be identified can be acquired by the acquisition module and the predicted feature point information output by the extraction module, so that the identification module can identify the two-dimensional codes from that information. This reduces the mutual interference between different two-dimensional codes during recognition, such as confusion when reading the feature points of different codes, and improves the accuracy and efficiency of two-dimensional code image identification.
With reference to the second aspect, in a possible implementation manner of the second aspect, the feature recognition model includes a corner point branch and a central point branch,
the corner branch outputs a corner classification feature map and a corner regression feature map; the corner classification feature map indicates a first probability, which is the probability that a point on the image to be identified is an outer corner of a two-dimensional code, and the corner regression feature map indicates the centripetal vector corresponding to each point;
the center point branch outputs a center point classification feature map and a center point regression feature map; the center point classification feature map indicates a second probability, which is the probability that a point on the image to be identified is the center point of a two-dimensional code, and the center point regression feature map indicates the outward vectors corresponding to each point.
With reference to the second aspect, in a possible implementation manner of the second aspect, in terms of extracting the predicted feature point information of the image to be recognized by using the feature recognition model, the extraction module is specifically configured to:
determining, through the corner branch, a corner classification feature map and a corner regression feature map corresponding to the image to be identified;
determining points whose first probability is greater than a first threshold as predicted outer corner points;
and determining the centripetal vectors corresponding to the predicted outer corner points as predicted centripetal vectors.
With reference to the second aspect, in a possible implementation manner of the second aspect, in terms of extracting the predicted feature point information of the image to be recognized by using the feature recognition model, the extraction module is specifically configured to:
determining, through the center point branch, a center point classification feature map and a center point regression feature map corresponding to the image to be identified;
determining points whose second probability is greater than a second threshold as predicted center points;
and determining the outward vectors corresponding to the predicted center points as predicted outward vectors.
With reference to the second aspect, in a possible implementation manner of the second aspect, in identifying one or more two-dimensional codes in the image to be identified based on the predicted feature point information, the identifying module is specifically configured to:
grouping the predicted outer corner points and predicted center points into at least one feature point group, wherein the predicted outer corner points and the predicted center point in each feature point group belong to the same two-dimensional code;
and determining the number and positions of the two-dimensional codes in the image to be identified according to the at least one feature point group.
With reference to the second aspect, in a possible implementation manner of the second aspect, in terms of grouping the predicted outer corner points and predicted center points to obtain at least one feature point group, the identification module is specifically configured to:
determine that the included angle between a predicted centripetal vector and the corresponding predicted outward vector is smaller than a third threshold, and that the quadrilateral formed by the four predicted outer corner points is a parallelogram;
and combine the predicted outer corner points and the predicted center point accordingly to obtain at least one feature point group.
With reference to the second aspect, in a possible implementation manner of the second aspect, the image to be identified includes a QR code, and the predicted feature point information further indicates predicted contour corner points, where a predicted contour corner point is a vertex of a position detection pattern lying on the contour line of the two-dimensional code;
the identification module is further configured to: and identifying the version information of the QR code based on the predicted contour corner points, the predicted outer corner points and the predicted central points.
With reference to the second aspect, in a possible implementation manner of the second aspect, the image to be identified includes a QR code, and the predicted feature point information further indicates predicted inner corner points, where a predicted inner corner point is a vertex of a position detection pattern lying in the interior of the two-dimensional code;
the identification module is further configured to: and identifying the version information of the QR code based on the predicted inner corner points, the predicted outer corner points and the predicted central points.
With reference to the second aspect, in a possible implementation manner of the second aspect, the feature recognition model is obtained by training on a plurality of sample images and the feature point information of the sample images, and the training includes: constructing a first loss function, where the first loss function is constructed based on an extension of the categorical cross-entropy loss.
With reference to the second aspect, in a possible implementation manner of the second aspect, the feature recognition model is obtained by training on a plurality of sample images and the feature point information of the sample images, and the training includes: constructing a second loss function, where the second loss function is constructed based on a smooth mean absolute error loss function.
In a third aspect, a computer device is provided, where the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the two-dimensional code recognition method as in the first aspect or any possible implementation manner of the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, where a computer program is stored, and the computer program, when executed by a processor, implements a two-dimensional code recognition method as in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, a computer program product is provided, the computer program product including a computer program, and the computer program when executed by a processor implements the two-dimensional code identification method as in the first aspect or any one of the possible implementations of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of a system architecture according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a two-dimensional code identification method according to an embodiment of the present application.
Fig. 3 is a schematic view of a scene where a plurality of two-dimensional codes exist in an image according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an outer corner point and a centripetal vector and a center point and an outward vector provided by an embodiment of the present application.
Fig. 5 is a schematic structural block diagram of a two-dimensional code recognition apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural block diagram of a computer device provided in an embodiment of the present application.
Fig. 7 is a schematic block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. It should be understood that the specific examples are provided herein only to assist those skilled in the art in better understanding the embodiments of the present application and are not intended to limit the scope of the embodiments of the present application.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Unless otherwise defined, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
Common two-dimensional codes (2D barcodes) include Quick Response codes (QR codes), Data Matrix codes, and the like.
The two-dimensional code originated in Japan. It is a pattern of alternating black and white modules distributed regularly in a plane (in two dimensions) according to a specific geometric figure. In its encoding, the two-dimensional code makes clever use of the '0' and '1' bit streams that form the internal logical basis of computing: geometric shapes corresponding to binary values represent textual and numerical information, which is read automatically by an image input device or photoelectric scanning device to achieve automatic information processing. Two-dimensional barcodes feature large storage capacity, strong confidentiality, good traceability, high damage resistance, high redundancy, and low cost, making them particularly suitable for forms, security, confidentiality, tracking, licensing, inventory checking, and data redundancy. They have been widely applied in recent years: scanning a two-dimensional code can realize information acquisition, mobile payment, anti-counterfeiting traceability, account login, and other functions. Commonly used matrix two-dimensional code systems include the QR Code, Data Matrix, MaxiCode, Aztec Code, and the like.
To identify a two-dimensional code, the feature points of the code, such as its outer corner points, must first be identified to locate the code region; its orientation, version information, carried content, and so on are then further identified. With current identification methods, however, when multiple two-dimensional codes are present in one image, as shown in Fig. 3, the feature point identification of the codes may interfere: for example, feature points read from different codes may be confused with one another, reducing both the accuracy and the efficiency of identification. In view of this, the two-dimensional code recognition method provided by the present application combines the feature points belonging to the same code when multiple codes are present in the image to be identified, thereby distinguishing the codes and improving the accuracy and efficiency of two-dimensional code recognition.
As shown in fig. 1, the present embodiment provides a system architecture 100. In fig. 1, the image capturing device 110 is configured to acquire a first image 1101 to be identified, and input the first image 1101 to the image processing device 120. The image capturing device 110 may be any device having an image capturing or capturing function, such as a camera, a video camera, a scanner, a mobile phone, a tablet computer, or a barcode scanner, for capturing or capturing the first image 1101; it may also be a device having a data storage function in which the first image 1101 is stored. The type of the image capturing device 110 is not limited in this application. For the two-dimensional code recognition method according to the embodiment of the present application, the first image 1101 may be an image including a two-dimensional code. The image processing device 120 is used to recognize the first image 1101. The image processing device 120 may be any device having image processing capabilities, such as a computer, smartphone, workstation, or other device having a central processor. The present application is not limited as to the type of the image processing apparatus 120.
In some embodiments, the image capturing device 110 may be the same device as the image processing device 120. For example, the image capturing device 110 and the image processing device 120 are both smart phones or both scanners.
In other embodiments, the image capturing device 110 may be a different device than the image processing device 120. For example, the image capturing device 110 is a terminal device, the image processing device 120 is a computer, a workstation, or the like, and the image capturing device 110 may interact with the image processing device 120 through a communication network of any communication mechanism/communication standard, such as a wide area network, a local area network, a peer-to-peer connection, or any combination thereof.
As shown in fig. 2, the two-dimensional code recognition method 200 includes the following steps:
s210: the computer equipment acquires an image to be identified;
s220: the computer equipment extracts the predicted characteristic point information of the image to be recognized by using the characteristic recognition model, wherein the predicted characteristic point information is used for indicating a predicted outer corner point and a predicted centripetal vector as well as a predicted central point and a predicted outward vector; the predicted centripetal vector is a predicted vector pointing to the center of the two-dimensional code from a predicted outer corner point, and the predicted outward vector is a predicted vector pointing to four corners of the two-dimensional code from a predicted center point;
s230: the computer device identifies one or more two-dimensional codes in the image to be identified based on the predicted feature point information.
The computer device trains a feature recognition model on a large number of sample images containing two-dimensional codes together with the feature point information of those samples. The image to be identified is input into the feature recognition model, which outputs the predicted feature point information of the image; two-dimensional code recognition is then performed on the image based on the predicted feature point information to obtain the one or more two-dimensional codes it contains.
Based on this technical scheme, when multiple two-dimensional codes are present in one image, the computer device can output predicted feature point information through the feature recognition model, indicating the predicted outer corner points, centripetal vectors, center points, and outward vectors. This information helps distinguish the feature points of different two-dimensional codes, reduces the mutual interference between codes during recognition, for example confusion when reading the feature points of different codes, and thus improves the accuracy and efficiency of two-dimensional code image identification.
The predicted feature point information is used for indicating the predicted outer corner points, the predicted centripetal vectors, the predicted central points and the predicted outer vectors of all the two-dimensional codes in the image to be recognized. Fig. 4 shows a schematic diagram of four outer corner points and four centripetal vectors of a two-dimensional code and a two-dimensional code center point and four outward vectors.
In some possible implementations, in step S220, the feature recognition model may be a neural network trained based on a plurality of sample images and their feature point information, including: taking a sample image as the input of the neural network to be trained and the two-dimensional code feature point information contained in the sample image as its expected output, and training the neural network to be trained with a plurality of such sample images; constructing a loss function based on the feature point information output by the neural network to be trained and the actual feature point information of the sample image; and, if the number of training iterations reaches a preset iteration count, determining that training is finished, so that the trained neural network becomes the target neural network. Illustratively, if the number of iterations has not reached the preset count, training is determined to be unfinished, the weights of the neural network are adjusted through a back propagation algorithm, and training continues.
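The iteration-count-based training loop described above can be sketched as follows. This is a minimal, hypothetical illustration on a toy linear model with an analytic gradient — not the actual feature recognition network — showing only the train-for-a-preset-number-of-iterations, compute-loss-gradient, adjust-weights control flow:

```python
import numpy as np

def train(samples, targets, lr=0.01, preset_iterations=100):
    """Train until the preset iteration count is reached; each iteration
    runs a forward pass, computes the loss gradient, and adjusts the
    weights (standing in for the back propagation step)."""
    w = np.zeros(samples.shape[1])
    for _ in range(preset_iterations):
        preds = samples @ w                                       # forward pass
        grad = 2 * samples.T @ (preds - targets) / len(targets)   # gradient of squared loss
        w -= lr * grad                                            # weight update
    return w  # the "target" model once training finishes

# toy data: y = 2x, so the learned weight should approach 2
weights = train(np.array([[1.0], [2.0], [3.0]]), np.array([2.0, 4.0, 6.0]))
```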
In some possible embodiments, the plurality of sample images may be images containing two-dimensional codes in a plurality of different scenes.
In some possible embodiments, the neural network includes three parts, namely a backbone network, a detection head network (also called the head network) and a feature fusion layer (also called the neck network). The backbone network is mainly used for feature extraction; it is pre-trained on a large data set, and this convolutional neural network with pre-trained parameters is the skeleton of the whole detection algorithm, and may adopt VGGNet, ResNet, DenseNet and the like. The neck network is arranged between the backbone network and the head network, and is used to better exploit the features extracted by the backbone network and to extract some more complex features. The head network is mainly used to predict the categories and positions of the targets by using the extracted features.
Specifically, the backbone network layer in the present application adopts a MobileNetV2 structure. The image to be recognized is input into the backbone network layer, which outputs feature layers of 5 stages; the neck network layer fuses the feature layers of the 5 stages and outputs an intermediate feature map; the intermediate feature map is input into the head network layer, which outputs a predicted feature map.
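A shape-level sketch of this backbone → neck → head flow. All layer contents here are hypothetical placeholders; only the 5-stage fan-out and the fused intermediate map are taken from the description above:

```python
import numpy as np

def backbone(image):
    """Stand-in for MobileNetV2: emit 5 stage feature layers, each stage
    halving the spatial resolution of the previous one."""
    h, w, _ = image.shape
    return [np.zeros((h >> s, w >> s, 16 << s)) for s in range(1, 6)]

def neck(stages, channels=64):
    """Fuse the 5 stage layers into one intermediate feature map (here
    simply allocated at the finest stage's resolution)."""
    h, w, _ = stages[0].shape
    return np.zeros((h, w, channels))

def head(feature_map, outputs=6):
    """Predict per-position classification scores and regression vectors."""
    h, w, _ = feature_map.shape
    return np.zeros((h, w, outputs))

predicted = head(neck(backbone(np.zeros((256, 256, 3)))))
```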
In some possible embodiments, the head network layer may include a corner branch and a central branch, where the corner branch is used to predict positions of four outer corners of the two-dimensional code and a centripetal vector; the center point branch is used to predict the center point and four outward vectors of the two-dimensional code.
The corner branch outputs a corner classification feature map and a corner regression feature map. The corner classification feature map indicates a first probability, which is the probability that a point on the image to be recognized is an outer corner of a two-dimensional code, and the corner regression feature map indicates the centripetal vector corresponding to each point. When the first probability is greater than a first threshold, the point is determined to be a predicted outer corner point, the coordinates of the predicted outer corner point are obtained, and the centripetal vector corresponding to the predicted outer corner point is determined to be a predicted centripetal vector.
The central point branch outputs a central point classification feature map and a central point regression feature map. The central point classification feature map indicates a second probability, which is the probability that a point on the image to be recognized is the central point of a two-dimensional code, and the central point regression feature map indicates the outward vector corresponding to each point. When the second probability is greater than a second threshold, the point is determined to be a predicted central point, the coordinates of the predicted central point are obtained, and the outward vector corresponding to the predicted central point is determined to be a predicted outward vector.
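The thresholding step is the same pattern in both branches: keep the positions whose classification probability exceeds the branch's threshold, and read off the regressed vector at each kept position. A minimal sketch (the array layouts are assumptions):

```python
import numpy as np

def extract_points(cls_map, reg_map, threshold):
    """Return [((x, y), regressed_vector), ...] for every position whose
    classification probability exceeds the threshold."""
    ys, xs = np.where(cls_map > threshold)
    return [((int(x), int(y)), tuple(float(v) for v in reg_map[y, x]))
            for y, x in zip(ys, xs)]

cls_map = np.zeros((4, 4)); cls_map[1, 2] = 0.9             # one confident corner
reg_map = np.zeros((4, 4, 2)); reg_map[1, 2] = (3.0, -1.0)  # its regressed vector
corners = extract_points(cls_map, reg_map, threshold=0.5)
```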
In some possible implementations, a feature map is a feature matrix obtained by extracting features from the image to be recognized. Its dimension is H × W × C, where H and W are the height and width of the feature map (related to the size of the input image) and C is the number of channels; the probability that each position in the image is a feature point is indicated in the form of a data matrix.
In some possible implementations, during the feature recognition model training process, a loss function is constructed, the loss function including a first loss function. The first loss function may be constructed based on an extension function of the classification cross-entropy loss, which may take the following form (consistent with the variables described below):

$$L_{1}=\frac{-1}{N}\sum_{c=1}^{C}\sum_{i=1}^{H}\sum_{j=1}^{W}\begin{cases}\left(1-p_{cij}\right)^{\alpha}\log\left(p_{cij}\right), & \text{if } y_{cij}=1\\\left(1-y_{cij}\right)^{\beta}\left(p_{cij}\right)^{\alpha}\log\left(1-p_{cij}\right), & \text{otherwise}\end{cases}$$

wherein $p_{cij}$ is the predicted probability value for a single point on the feature map output by the target neural network, and $y_{cij}$ is the probability target value corresponding to that point; for example, if a feature point of a two-dimensional code in the sample image maps to a position on the feature map, the predicted probability value of that position is $p_{cij}$ and its target probability value $y_{cij}$ is 1. H, W and C represent the height, width and channel number of the output feature map, N represents the number of two-dimensional codes contained in one image, and α and β are hyper-parameters.
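The first loss function uses hyper-parameters α and β over a classification feature map. Assuming a CornerNet-style extension of the cross-entropy consistent with the variables above (this specific functional form is an assumption), a sketch is:

```python
import numpy as np

def first_loss(pred, target, alpha=2.0, beta=4.0):
    """Extension of classification cross-entropy: positions with target 1
    are weighted by (1-p)^alpha; all other positions are down-weighted by
    (1-y)^beta. Normalized by the number of positive feature points
    (standing in for N in the formula)."""
    pred = np.clip(pred, 1e-6, 1 - 1e-6)   # avoid log(0)
    pos = target == 1
    pos_term = ((1 - pred[pos]) ** alpha * np.log(pred[pos])).sum()
    neg_term = ((1 - target[~pos]) ** beta
                * pred[~pos] ** alpha * np.log(1 - pred[~pos])).sum()
    return -(pos_term + neg_term) / max(pos.sum(), 1)

good = first_loss(np.array([0.99, 0.01]), np.array([1.0, 0.0]))  # near-perfect prediction
bad = first_loss(np.array([0.5, 0.5]), np.array([1.0, 0.0]))     # uncertain prediction
```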
In some possible implementations, the loss function includes a second loss function, and the second loss function may be constructed based on the smoothed mean absolute error loss function (smooth L1 loss), with the following formula:

$$L_{2}=\mathrm{smooth}_{L1}(x)=\begin{cases}0.5x^{2}, & \text{if } |x|<1\\|x|-0.5, & \text{otherwise}\end{cases}$$

wherein x is the difference between the predicted offset vector and the true offset vector.
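The smooth L1 loss named above — quadratic for |x| < 1, linear beyond — can be written directly:

```python
import numpy as np

def smooth_l1(x):
    """Smoothed mean absolute error: 0.5*x^2 when |x| < 1, |x| - 0.5
    otherwise, so the loss is differentiable at zero yet robust to
    large offset-vector errors."""
    a = np.abs(np.asarray(x, dtype=float))
    return np.where(a < 1.0, 0.5 * a ** 2, a - 0.5)
```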
In step S230, in the process of identifying the two-dimensional codes in the image to be identified, in order to determine the number and the positions of the two-dimensional codes in the image to be identified, the predicted outer corner points and the predicted center points are grouped, the predicted outer corner points and the predicted center points belonging to the same two-dimensional code are grouped into one feature point group, and at least one feature point group is obtained by grouping.
The specific grouping method is as follows: it is determined that the included angle between a predicted centripetal vector and the corresponding predicted outward vector is smaller than a third threshold, and that the quadrangle formed by the four predicted outer corner points is a parallelogram; the predicted outer corner points and the predicted center point are then combined to obtain a feature point group. In this way, the number of two-dimensional codes in the image to be recognized can be determined, and the position of each two-dimensional code can be judged from its predicted outer corner points and predicted central point.
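The two geometric checks above — the included angle between vectors, and the parallelogram test on the four outer corner points — can be sketched as follows (testing for a parallelogram via coinciding diagonal midpoints is one possible implementation, not necessarily the one used here):

```python
import numpy as np

def included_angle(u, v):
    """Included angle in radians between two 2-D vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def is_parallelogram(corners, tol=1e-6):
    """Four corners (given in cyclic order) form a parallelogram iff the
    midpoints of the two diagonals coincide."""
    p1, p2, p3, p4 = (np.asarray(p, float) for p in corners)
    return bool(np.allclose((p1 + p3) / 2, (p2 + p4) / 2, atol=tol))
```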
Specifically, a two-dimensional code can be represented by a nine-degree-of-freedom tuple (x1, y1, x2, y2, x3, y3, x4, y4, α). When only one code exists in a picture, no complex aggregation strategy is needed; but when multiple codes appear in a scene, it is necessary to distinguish which code each detected feature point belongs to.
Specifically, this is realized with the following algorithm:
Input: the feature maps output by the feature recognition model
Output: M, which contains the predicted feature point information (x1, y1, x2, y2, x3, y3, x4, y4, α) of m two-dimensional codes
1. Perform maximum suppression (non-maximum suppression) to obtain indices, and look up the corresponding positions in the map through the indices to obtain k key points (x, y), regression points (j, k) and directions (o);
2. Let M = [ ];
3. For i = 1 -> k:
4. obtain the regression point (j_i, k_i);
5. if the point is already in M, skip it and proceed to the next point;
6. if it is not in M, compute the 4 surrounding points whose distance to it is smaller than a preset distance threshold, obtaining (x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i, α_i);
7. add (x1_i, y1_i, x2_i, y2_i, x3_i, y3_i, x4_i, y4_i, α_i) to M;
8. Return M.
Based on this algorithm, the predicted feature points belonging to the same two-dimensional code in the image are combined, so as to obtain the coordinate set of the feature points of each two-dimensional code.
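A runnable sketch of the combining loop in steps 2–8 (the suppression step is omitted; key points and regression outputs are assumed already extracted, and the names below are hypothetical):

```python
import numpy as np

def group_codes(centers, corners, dist_threshold=5.0):
    """For each predicted center and its four outward vectors, find the
    detected corner nearest each vector endpoint; keep the quadruple when
    every endpoint has a corner within dist_threshold and the quadruple is
    not already in M."""
    M = []
    for (cx, cy), outward_vectors in centers:
        quad = []
        for dx, dy in outward_vectors:
            endpoint = np.array([cx + dx, cy + dy])
            dists = [np.linalg.norm(np.asarray(c, float) - endpoint) for c in corners]
            if min(dists) < dist_threshold:
                quad.append(corners[int(np.argmin(dists))])
        if len(quad) == 4 and quad not in M:
            M.append(quad)
    return M

# one center at (5, 5) whose outward vectors point to the four detected corners
centers = [((5.0, 5.0), [(-5, -5), (5, -5), (5, 5), (-5, 5)])]
corners = [(0, 0), (10, 0), (10, 10), (0, 10)]
groups = group_codes(centers, corners)
```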
The QR code is one kind of two-dimensional code. A QR code has three position detection patterns, located at three corners of the code, each shaped like the Chinese character 回 (nested squares). For each position detection pattern, one of its four vertices coincides with an outer corner point of the two-dimensional code, two vertices lie on a contour line of the two-dimensional code (called contour corner points), and one vertex lies inside the two-dimensional code (called the inner corner point). As an embodiment of the present application, when the image to be recognized contains a QR code, the predicted feature point information is also used to indicate predicted contour corner points. Specifically, an image containing a QR code is input into the feature recognition model, and the predicted feature point information output by the model also indicates the predicted contour corner points. Further, the predicted contour corner points, predicted outer corner points and predicted central point belonging to the same two-dimensional code can be combined. Further, the side length of the position detection pattern and the side length of the two-dimensional code image can be determined according to the distances and positional relations between the predicted outer corner points and the predicted contour corner points, so that the corresponding version information is determined according to the ratio of the side length of the position detection pattern to the side length of the two-dimensional code image.
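The ratio-to-version step can be made concrete: in the QR standard, a version-v symbol is 17 + 4v modules per side and each position detection pattern is 7 modules square, so the version follows from the measured side-length ratio (the function name and rounding are illustrative):

```python
def version_from_sides(finder_side, code_side):
    """finder_side / code_side = 7 / (17 + 4v) for a version-v QR code;
    invert the ratio to estimate modules per side, then solve for v."""
    modules_per_side = 7.0 * code_side / finder_side
    return round((modules_per_side - 17.0) / 4.0)
```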
As another embodiment of the present application, the feature points may further include the three inner corner points of the three position detection patterns of the QR code. Specifically, an image containing a QR code is input into the feature recognition model, and the predicted feature point information output by the model also indicates the predicted inner corner points. Further, the predicted inner corner points, predicted outer corner points and predicted central point belonging to the same two-dimensional code can be combined. Further, the side length of the position detection pattern and the side length of the two-dimensional code image can be determined according to the distances and positional relations between the predicted inner corner points and the predicted outer corner points, so that the corresponding version information can be determined according to the ratio of the side length of the position detection pattern to the side length of the two-dimensional code image. According to this embodiment, even when part of the contour of the QR code is occluded or damaged, the version information of the QR code can still be identified.
Before step S210, the image to be recognized may be preprocessed, such as denoising, graying, binarization processing, and the like.
In some possible embodiments, the image to be recognized may be grayed out by a weighted average method.
In some possible embodiments, a bilateral filtering algorithm may be used to perform noise reduction on the image to be recognized.
In some possible embodiments, a modified binarization algorithm may be used to binarize the image to be recognized.
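The graying and binarization steps can be sketched in pure NumPy. The weights below follow the common luminance formula, and the "modified" binarization algorithm is replaced by a simple global mean threshold — both choices are illustrative stand-ins, and bilateral filtering is omitted:

```python
import numpy as np

def weighted_gray(rgb):
    """Weighted-average graying with the usual 0.299/0.587/0.114 weights."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def binarize(gray):
    """Placeholder binarization: global mean threshold (a production
    pipeline would use an improved/adaptive algorithm instead)."""
    return np.where(gray > gray.mean(), 255, 0).astype(np.uint8)

gray = weighted_gray(np.array([[[255, 255, 255], [0, 0, 0]]], dtype=float))
binary = binarize(np.array([[0.0, 100.0, 200.0, 255.0]]))
```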
The method embodiments of the present application are described above in detail, and the apparatus embodiments are described below. The apparatus embodiments correspond to the method embodiments; therefore, for details not described in the apparatus embodiments, reference may be made to the foregoing method embodiments.
Fig. 5 shows a schematic block diagram of a two-dimensional code recognition apparatus 500 according to an embodiment of the present application, where the apparatus 500 includes:
an obtaining module 510, configured to obtain an image to be identified;
an extracting module 520, configured to extract predicted feature point information of the image to be recognized by using the feature recognition model, where the predicted feature point information is used to indicate a predicted outer corner point and a predicted centripetal vector, and a predicted center point and a predicted outward vector; the predicted centripetal vector is a predicted vector pointing to the center of the two-dimensional code from a predicted outer corner point, and the predicted outward vector is a predicted vector pointing to four corners of the two-dimensional code from a predicted central point;
an identifying module 530, configured to identify one or more two-dimensional codes in the image to be identified based on the predicted feature point information.
Based on this technical scheme, when a plurality of two-dimensional codes exist in the image to be recognized, feature point information can be output through the feature recognition model. According to the predicted outer corner points and predicted centripetal vectors, and the predicted central points and predicted outward vectors indicated by the predicted feature point information, the feature points of different two-dimensional codes can be distinguished, the mutual interference between different two-dimensional codes during recognition can be reduced, situations such as feature points of different codes being read in confusion can be reduced, and the accuracy and efficiency of two-dimensional code image recognition can be improved.
In some embodiments, the feature recognition model is a neural network obtained by training based on a plurality of sample images and their feature point information, including: the extraction module 520 takes a sample image as the input of the neural network to be trained, takes the two-dimensional code feature point information contained in the sample image as its expected output, and trains the neural network to be trained with a plurality of sample images; the extraction module 520 is further configured to construct a loss function based on the output sample image feature point information and the actual sample image feature point information; and when the number of training iterations reaches the preset iteration count, the extraction module 520 determines that the trained neural network becomes the target neural network. Illustratively, if the number of iterations has not reached the preset count, the extraction module 520 determines that training is not finished, adjusts the weights of the neural network through a back propagation algorithm, and continues training.
In some embodiments, the neural network includes a backbone network, a detection head network, and a feature fusion layer. Illustratively, the backbone network employs a mobilenetv2 structure.
In some embodiments, the feature recognition model includes corner and center point branches,
the corner branch outputs a corner classification feature map and a corner regression feature map, the corner classification feature map indicates a first probability, and the first probability is the probability that points on the image to be identified are outer corners of the two-dimensional code; the angular point regression feature map indicates centripetal vectors corresponding to each point;
the central point branch outputs a central point classification characteristic map and a central point regression characteristic map, the central point classification characteristic map indicates a second probability, and the second probability is the probability that a point on the image to be recognized is the central point of the two-dimensional code; the central point regression feature map indicates the outward vectors corresponding to each point.
In some embodiments, in extracting the predicted feature point information of the image to be recognized by using the feature recognition model, the extraction module 520 is specifically configured to:
determining a corner classification feature map and a corner regression feature map corresponding to the image to be identified through the corner branches;
determining points with the first probability larger than a first threshold value as predicted outer corner points;
and determining a centripetal vector corresponding to the predicted outer corner point as a predicted centripetal vector.
In some embodiments, in extracting the predicted feature point information of the image to be recognized by using the feature recognition model, the extraction module 520 is specifically configured to:
determining a central point classification characteristic diagram and a central point regression characteristic diagram corresponding to the image to be identified through the central point branch;
determining a point with the second probability larger than a second threshold value as a predicted central point;
and determining the outward vector corresponding to the prediction central point as a prediction outward vector.
As an embodiment of the present application, in identifying one or more two-dimensional codes in an image to be identified based on the predicted feature point information, the identifying module 530 is specifically configured to:
grouping the predicted outer corner points and the predicted central points to obtain at least one characteristic point group, wherein the predicted outer corner points and the predicted central points in each characteristic point group belong to the same two-dimensional code;
and determining the number and the positions of the two-dimensional codes in the image to be recognized according to at least one characteristic point group.
In some embodiments, after grouping the predicted outer corner points and the predicted center points to obtain at least one feature point group, the identifying module 530 is specifically configured to:
determining that an included angle between the predicted centripetal vector and the predicted outward vector is smaller than a third threshold value, and a quadrangle formed by the four predicted outer corner points is a parallelogram;
the predicted outer corner points and the predicted center points are combined to obtain at least one feature point group.
In some embodiments, when the image to be identified contains a QR code, the predicted feature point information is further used for indicating a predicted contour corner point, which is a vertex of the position detection pattern on the two-dimensional code contour line;
the identification module 520 is further configured to: and identifying the version information of the QR code based on the predicted contour corner, the predicted outer corner and the predicted central point.
In some embodiments, when the image to be identified contains a QR code, the predicted feature point information is further used to indicate a predicted inner corner point, which is a vertex of the position detection pattern located inside the two-dimensional code; the identification module 530 is further configured to: and identifying the version information of the QR code based on the predicted inner corner points, the predicted outer corner points and the predicted central points.
In some embodiments, the extraction module 520 constructs a first loss function when training the feature recognition model, the first loss function constructed based on an extended function of the categorical cross-entropy loss.
In some embodiments, the extraction module 520 constructs a second loss function when training the feature recognition model, the second loss function constructed based on a smoothed mean absolute error loss function.
As shown in fig. 6, computer device 600 includes memory 601, processor 602, and communication interface 603. The memory 601, the processor 602, and the communication interface 603 are communicatively connected to each other via a bus 604.
The memory 601 may be a read-only memory (ROM), a static storage device, or a random access memory (RAM). The memory 601 may store a program; when the program stored in the memory 601 is executed by the processor 602, the processor 602 and the communication interface 603 are configured to perform the steps of the two-dimensional code identification method according to the embodiments of the present application.
The processor 602 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement functions required to be executed by units in the two-dimensional code recognition apparatus according to the embodiment of the present disclosure, or to execute the two-dimensional code recognition method according to the embodiment of the present disclosure.
The processor 602 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the two-dimensional code identification method according to the embodiments of the present application may be implemented by integrated logic circuits of hardware in the processor 602 or by instructions in the form of software.
The processor 602 may also be a general purpose processor, a digital signal processor (DSP), an ASIC, an FPGA (field programmable gate array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly executed by a hardware processor, or executed by a combination of hardware and software modules in a processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 601; the processor 602 reads the information in the memory 601 and, in combination with its hardware, completes the functions required to be executed by the units included in the two-dimensional code identification apparatus according to the embodiments of the present application, or executes the two-dimensional code identification method according to the embodiments of the present application.
Communication interface 603 enables communication between apparatus 600 and other devices or communication networks using transceiver means such as, but not limited to, a transceiver. For example, traffic data of an unknown device may be obtained through the communication interface 603.
Bus 604 may include a pathway to transfer information between various components of apparatus 600 (e.g., memory 601, processor 602, communication interface 603).
It should be noted that although the apparatus 600 described above shows only memories, processors, and communication interfaces, in a specific implementation, those skilled in the art will appreciate that the apparatus 600 may also include other components necessary to achieve proper operation. Also, those skilled in the art will appreciate that the apparatus 600 may also include hardware components to implement other additional functions, according to particular needs. Furthermore, those skilled in the art will appreciate that apparatus 600 may also include only those elements necessary to implement embodiments of the present application, and need not include all of the elements shown in FIG. 6.
As shown in fig. 7, an embodiment of the present application further provides a computer-readable storage medium 700, where the computer-readable storage medium stores a computer program 710, and the computer program 710, when executed by a processor, implements the two-dimensional code identification method in any of the above embodiments.
The embodiment of the present application further provides a computer program product, where the computer program product includes a computer program, and when executed by a processor, the computer program implements the two-dimensional code identification method in any of the above embodiments.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The words used in this application are words of description only and not of limitation of the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed. In addition, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The various aspects, implementations, or features of the described embodiments can be used alone or in any combination. Aspects of the described embodiments may be implemented by software, hardware, or a combination of software and hardware. The described embodiments may also be embodied by a computer-readable medium having computer-readable code stored thereon, the computer-readable code comprising instructions executable by at least one computing device. The computer readable medium can be associated with any data storage device that can store data which can be read by a computer system. Exemplary computer readable media can include Read-Only Memory, random-access Memory, compact Disk Read-Only Memory (CD-ROM), hard Disk Drive (HDD), digital Video Disk (DVD), magnetic tape, and optical data storage. The computer readable medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
The above description of the technology may refer to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration embodiments in which the embodiments are described. These embodiments, while described in sufficient detail to enable those skilled in the art to practice them, are non-limiting; other embodiments may be utilized and changes may be made without departing from the scope of the described embodiments. For example, the order of operations described in a flowchart is non-limiting, and thus the order of two or more operations illustrated in and described in accordance with the flowchart may be altered in accordance with several embodiments. As another example, in several embodiments, one or more operations illustrated in and described with respect to the flowcharts are optional or may be eliminated. In addition, certain steps or functions may be added to the disclosed embodiments, or a sequence of two or more steps may be substituted. All such variations are considered to be encompassed by the disclosed embodiments and the claims.
Furthermore, terminology is used in the above description of the technology to provide a thorough understanding of the described embodiments. However, no unnecessary detail is required to implement the described embodiments. Accordingly, the foregoing description of the embodiments has been presented for purposes of illustration and description. The embodiments presented in the foregoing description and the examples disclosed in accordance with these embodiments are provided solely to add context and aid in the understanding of the described embodiments. The above description is not intended to be exhaustive or to limit the described embodiments to the precise form disclosed. Many modifications, alternative uses, and variations are possible in light of the above teaching. In some instances, well known process steps have not been described in detail in order to avoid unnecessarily obscuring the described embodiments. While the application has been described with reference to a preferred embodiment, various modifications may be made and equivalents may be substituted for elements thereof without departing from the scope of the application. In particular, the technical features mentioned in the embodiments can be combined in any way as long as there is no structural conflict. The present application is not intended to be limited to the particular embodiments disclosed herein but is to cover all embodiments that may fall within the scope of the appended claims.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present invention, and shall cover the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A two-dimensional code recognition method, characterized by comprising:
acquiring an image to be recognized;
extracting predicted feature point information of the image to be recognized by using a feature recognition model, wherein the predicted feature point information indicates a predicted outer corner point and a predicted centripetal vector, as well as a predicted central point and a predicted outward vector; the predicted centripetal vector is a predicted vector pointing from the predicted outer corner point to the center of the two-dimensional code, and the predicted outward vector is a predicted vector pointing from the predicted central point to the four corners of the two-dimensional code; and
identifying one or more two-dimensional codes in the image to be recognized based on the predicted feature point information.
2. The method according to claim 1, wherein the feature recognition model comprises a corner point branch and a central point branch;
the corner point branch outputs a corner point classification feature map and a corner point regression feature map, the corner point classification feature map indicating a first probability, the first probability being the probability that a point on the image to be recognized is an outer corner point of the two-dimensional code; the corner point regression feature map indicates the centripetal vector corresponding to each point; and
the central point branch outputs a central point classification feature map and a central point regression feature map, the central point classification feature map indicating a second probability, the second probability being the probability that a point on the image to be recognized is the central point of the two-dimensional code; the central point regression feature map indicates the outward vector corresponding to each point.
3. The method according to claim 2, wherein the extracting the predicted feature point information of the image to be recognized by using the feature recognition model comprises:
determining, through the corner point branch, the corner point classification feature map and the corner point regression feature map corresponding to the image to be recognized;
determining points at which the first probability is greater than a first threshold as the predicted outer corner points; and
determining the centripetal vectors corresponding to the predicted outer corner points as the predicted centripetal vectors.
4. The method according to claim 2, wherein the extracting the predicted feature point information of the image to be recognized by using the feature recognition model comprises:
determining, through the central point branch, the central point classification feature map and the central point regression feature map corresponding to the image to be recognized;
determining a point at which the second probability is greater than a second threshold as the predicted central point; and
determining the outward vector corresponding to the predicted central point as the predicted outward vector.
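Claims 3 and 4 follow the same decode pattern: threshold a classification feature map to pick candidate points, then read the regression feature map at those locations. A minimal NumPy sketch of that shared step (the function and array names are illustrative, not from the patent):

```python
import numpy as np

def extract_predicted_points(cls_map, reg_map, threshold):
    """Threshold a classification feature map and gather the regression
    vectors at the surviving locations.

    cls_map:   (H, W) array of per-pixel probabilities.
    reg_map:   (H, W, 2) array of per-pixel 2-D vectors.
    threshold: scalar probability cutoff (the claims' first/second threshold).
    """
    ys, xs = np.where(cls_map > threshold)      # locations above the cutoff
    points = np.stack([xs, ys], axis=1)         # (N, 2) pixel coordinates
    vectors = reg_map[ys, xs]                   # (N, 2) predicted vectors
    return points, vectors
```

The same helper serves both branches: pass the corner point maps with the first threshold for claim 3, or the central point maps with the second threshold for claim 4.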
5. The method according to claim 1 or 2, wherein the identifying one or more two-dimensional codes in the image to be recognized based on the predicted feature point information comprises:
grouping the predicted outer corner points and the predicted central points to obtain at least one feature point group, wherein the predicted outer corner points and the predicted central point in each feature point group belong to the same two-dimensional code; and
determining the number and the positions of the two-dimensional codes in the image to be recognized according to the at least one feature point group.
6. The method according to claim 5, wherein the grouping the predicted outer corner points and the predicted central points to obtain at least one feature point group comprises:
determining that the included angle between the predicted centripetal vector and the predicted outward vector is smaller than a third threshold, and that the quadrangle formed by the four predicted outer corner points is a parallelogram; and
combining the predicted outer corner points and the predicted central point to obtain the at least one feature point group.
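The two geometric checks in claim 6 can be sketched in a few lines of NumPy. Note one interpretive assumption: for a matched pair, the centripetal vector (corner toward center) and the outward vector (center toward that corner) point in opposite directions, so this sketch compares the centripetal vector against the *reversed* outward vector; the claim does not spell out this detail.

```python
import numpy as np

def angle_between(u, v):
    """Included angle (radians) between two 2-D vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def vectors_match(centripetal, outward, angle_threshold):
    """A corner/centre pair is consistent when the centripetal vector and
    the reversed outward vector are nearly aligned (assumed reading)."""
    return angle_between(centripetal, -np.asarray(outward, float)) < angle_threshold

def is_parallelogram(corners, tol=1e-3):
    """Four corners in cyclic order form a parallelogram iff the two
    diagonals share a midpoint."""
    p = np.asarray(corners, float)
    return bool(np.allclose((p[0] + p[2]) / 2.0, (p[1] + p[3]) / 2.0, atol=tol))
```

A grouping pass would keep a candidate (four corners + one center) only when all four `vectors_match` checks and the `is_parallelogram` check succeed.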
7. The method according to claim 1 or 2, wherein
the image to be recognized comprises a QR code, and the predicted feature point information further indicates a predicted contour corner point, the predicted contour corner point being a vertex of a position detection pattern located on a contour line of the two-dimensional code; and
the identifying one or more two-dimensional codes in the image to be recognized based on the predicted feature point information further comprises: identifying version information of the QR code based on the predicted contour corner point, the predicted outer corner points, and the predicted central point.
8. The method according to claim 1 or 2, wherein
the image to be recognized comprises a QR code, and the predicted feature point information further indicates a predicted inner corner point, the predicted inner corner point being a vertex of a position detection pattern located inside the two-dimensional code; and
the identifying one or more two-dimensional codes in the image to be recognized based on the predicted feature point information further comprises: identifying version information of the QR code based on the predicted inner corner point, the predicted outer corner points, and the predicted central point.
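Claims 7 and 8 do not give a version formula, but the QR specification (ISO/IEC 18004) fixes the geometry this kind of estimate relies on: a version-v symbol is (17 + 4·v) modules per side, and a position detection pattern spans exactly 7 modules. A sketch of the resulting estimate, purely as an illustration (the measured quantities would come from the predicted corner points):

```python
def estimate_qr_version(finder_width_px, side_len_px):
    """Estimate a QR code's version from image-space measurements.

    finder_width_px: measured width of a position detection pattern,
                     which spans exactly 7 modules in any QR symbol.
    side_len_px:     measured side length of the whole symbol, e.g. the
                     distance between adjacent predicted outer corner points.
    """
    module_pitch = finder_width_px / 7.0          # pixels per module
    modules = round(side_len_px / module_pitch)   # symbol size in modules
    return round((modules - 17) / 4)              # invert: size = 17 + 4*v
```

For example, a version-2 symbol is 25 modules per side, so a 100-pixel symbol with a 28-pixel finder pattern (4 pixels per module) resolves to version 2.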
9. The method according to claim 1 or 2, wherein the feature recognition model is trained based on a plurality of sample images and feature point information of the sample images, and the training comprises:
constructing a first loss function, wherein the first loss function is constructed based on an extension of the categorical cross-entropy loss function.
10. The method according to claim 1 or 2, wherein the feature recognition model is trained based on a plurality of sample images and feature point information of the sample images, and the training comprises:
constructing a second loss function, wherein the second loss function is constructed based on a smoothed mean absolute error loss function.
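Claims 9 and 10 name the two losses only generically. A common concrete pair in keypoint-detection models is the focal loss (an extension of cross-entropy that down-weights easy examples) and the smooth-L1 loss (quadratic near zero, mean-absolute-error-like for large residuals). The sketch below shows that pair as one plausible instantiation, not the patent's exact formulation; the hyperparameter values are illustrative defaults.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss. p: predicted probabilities, y: 0/1 labels.
    With gamma = 0 and alpha = 0.5 this reduces to (half) cross-entropy."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # class-balance weight
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss: 0.5*d^2/beta for |d| < beta, else |d| - 0.5*beta."""
    d = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    loss = np.where(d < beta, 0.5 * d * d / beta, d - 0.5 * beta)
    return float(np.mean(loss))
```

Here `focal_loss` would supervise the classification feature maps (claim 9) and `smooth_l1` the regression feature maps, i.e. the centripetal and outward vectors (claim 10).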
11. A two-dimensional code recognition device, characterized by comprising:
an acquisition module, configured to acquire an image to be recognized;
an extraction module, configured to extract predicted feature point information of the image to be recognized by using a feature recognition model, wherein the predicted feature point information indicates a predicted outer corner point and a predicted centripetal vector, as well as a predicted central point and a predicted outward vector; the predicted centripetal vector is a predicted vector pointing from the predicted outer corner point to the center of the two-dimensional code, and the predicted outward vector is a predicted vector pointing from the predicted central point to the four corners of the two-dimensional code; and
an identification module, configured to identify one or more two-dimensional codes in the image to be recognized based on the predicted feature point information.
12. A computer device, characterized in that the computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the two-dimensional code recognition method according to any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the two-dimensional code recognition method according to any one of claims 1 to 10.
CN202211564732.4A 2022-12-07 2022-12-07 Two-dimensional code identification method and device, computer equipment and readable storage medium Active CN115578606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211564732.4A CN115578606B (en) 2022-12-07 2022-12-07 Two-dimensional code identification method and device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN115578606A true CN115578606A (en) 2023-01-06
CN115578606B CN115578606B (en) 2023-03-31

Family

ID=84590494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211564732.4A Active CN115578606B (en) 2022-12-07 2022-12-07 Two-dimensional code identification method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115578606B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190156092A1 (en) * 2016-07-22 2019-05-23 Alibaba Group Holding Limited Method and system for recognizing location information in two-dimensional code
CN111488753A (en) * 2019-01-29 2020-08-04 北京骑胜科技有限公司 Two-dimensional code identification method and device, electronic equipment and readable storage medium
CN114022558A (en) * 2022-01-05 2022-02-08 深圳思谋信息科技有限公司 Image positioning method and device, computer equipment and storage medium
CN114549857A (en) * 2022-04-25 2022-05-27 深圳思谋信息科技有限公司 Image information identification method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN115578606B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
TWI726422B (en) Two-dimensional code recognition method, device and equipment
CN107665324B (en) Image identification method and terminal
CN107729790B (en) Two-dimensional code positioning method and device
RU2601185C2 (en) Method, system and computer data medium for face detection
US20050242186A1 (en) 2D rectangular code symbol scanning device and 2D rectangular code symbol scanning method
EP2393035A2 (en) QR barcode decoding chip and decoding method thereof
US9430711B2 (en) Feature point matching device, feature point matching method, and non-transitory computer readable medium storing feature matching program
RU2729399C1 (en) Method for detection and recognition of visual markers of long range and high density
WO2015002719A1 (en) Method of improving contrast for text extraction and recognition applications
CN107578011A (en) The decision method and device of key frame of video
EP2695106B1 (en) Feature descriptor for image sections
CN115578606B (en) Two-dimensional code identification method and device, computer equipment and readable storage medium
EP3494545B1 (en) Methods and apparatus for codeword boundary detection for generating depth maps
CN111311573B (en) Branch determination method and device and electronic equipment
CN111325199B (en) Text inclination angle detection method and device
US11893764B1 (en) Image analysis for decoding angled optical patterns
CN111753573B (en) Two-dimensional code image recognition method and device, electronic equipment and readable storage medium
CN100454327C (en) Identification method for data matrix code
US20140212050A1 (en) Systems and methods for processing an image
KR101198595B1 (en) An extracting/decoding device for 2D barcode image of arbitrary contrast
CN109448013B (en) QR code image binarization processing method with local uneven illumination
CN114781417A (en) Two-dimensional code identification method, two-dimensional code identification device and electronic equipment
KR102660603B1 (en) Detection of high-resolution, machine-readable tags using a mosaic image sensor
JP2007094584A (en) Method for detecting two dimensional code, detecting device, and detecting program
CN108388825B (en) Fast response code searching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared