CN114970576A - Identification code identification method, related electronic equipment and computer readable storage medium


Info

Publication number: CN114970576A
Application number: CN202110215292.0A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: identification code, image, information, code image, identification
Inventors: 徐溢璇, 于德权, 贾明波
Assignee: Huawei Technologies Co., Ltd.
Legal status: Pending


Classifications

    • G06K 7/10722 — Sensing record carriers by optical radiation; fixed beam scanning by a photodetector array or CCD
    • G06K 7/1417 — Methods for optical code recognition adapted to the type of code; 2D bar codes
    • G06K 7/146 — Methods for optical code recognition including quality enhancement steps
    • G06K 7/1465 — Quality enhancement using several successive scans of the optical code
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06Q 20/3276 — Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, read by the M-device


Abstract

The application provides an identification code identification method, related electronic equipment and a computer-readable storage medium, which relate to the field of artificial intelligence and in particular to code detection. The method comprises the following steps: acquiring a first identification code image, wherein the first identification code image comprises an identification code; acquiring key information of the first identification code image, wherein the key information comprises attribute information corresponding to the identification code contained in the first identification code image and information indicating the quality of the first identification code image; and parsing the identification code according to the key information so as to execute the operation flow identified in the identification code. By implementing the method and the device, the accuracy of identifying an identification code in a complex scene can be improved.

Description

Identification code identification method, related electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence and in particular to its sub-field of code detection; more specifically, it relates to a method for identifying an identification code, a related electronic device, and a computer-readable storage medium.
Background
Scanning an identification code with the camera of an electronic device, identifying the scanned code, and then prompting the user to perform an operation related to it is an important application scenario for today's smart electronic devices. With smart electronic devices now ubiquitous, scanning and identifying identification codes is applied in fields such as mobile payment, bike sharing, social networking, application downloading, navigation, and shopping.
Taking two-dimensional code identification as an example, existing approaches require the holder of the smart electronic device to align the viewfinder frame with the image to be identified when scanning the two-dimensional code, and require that the image to be identified contain nothing other than the two-dimensional code. Once these special requirements cannot be met (for example, in a complex scene the captured image includes other content, or the holder's shooting angle toward the two-dimensional code is poor), the two-dimensional code in the image to be identified cannot be correctly identified; that is, this implementation greatly reduces the accuracy of identifying the two-dimensional code. Therefore, how to improve the accuracy of identifying an identification code in a complex scene is a technical problem that urgently needs to be solved.
Disclosure of Invention
The application provides an identification code identification method, related electronic equipment and a computer-readable storage medium, which can improve the accuracy of identifying identification codes in complex scenes. The "identification code" referred to herein may be a bar code, a two-dimensional code, or another type of identification code used by identification systems.
It should be noted that, in the embodiments provided in the present application, there may be multiple possible implementation manners of the execution sequence of each step, and some or all of the steps may be executed sequentially or in parallel.
In a first aspect, a method for identifying an identification code is provided, which may include, but is not limited to, the following steps: first, the electronic device acquires a first identification code image, wherein the first identification code image comprises an identification code; the electronic device acquires key information of the first identification code image, wherein the key information comprises attribute information corresponding to the identification code contained in the first identification code image and information indicating the quality of the first identification code image; and the electronic device parses the identification code according to the key information so as to execute the operation flow identified in the identification code.
By implementing this embodiment of the application, the electronic device can parse the identification code according to the acquired key information of the identification code image. Even when the identification code image includes content other than the identification code, or the image quality is poor because the holder of the electronic device shot the identification code image from a poor angle, the identification code can still be parsed accurately so as to execute the operation flow identified in the identification code, thereby improving the accuracy of identifying identification codes in complex scenes.
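For orientation only, the three steps of the first aspect can be sketched as follows; every class and function name here is a hypothetical stand-in, not the patent's implementation:

```python
# Hypothetical sketch of the three-step flow; every name here is an
# illustrative stand-in, not the patent's implementation.
class ScanResult:
    def __init__(self, payload):
        self.payload = payload

    def execute_operation_flow(self):
        print("executing operation flow:", self.payload)

def run_multitask_model(image):
    # Stand-in for the multitask deep learning model: returns attribute
    # information of the code plus quality information of the image.
    return {"bbox": (10, 10, 120, 120), "category": "QR", "rotation_deg": 0.0,
            "illumination": 0.8, "snr": 30.0, "quality": 0.9}

def parse_code(image, key_info):
    # Stand-in parser: would crop/rectify the image using key_info, then decode.
    return ScanResult("https://example.com/operation")

first_image = "raw-camera-frame"             # step 1: acquire the first identification code image
key_info = run_multitask_model(first_image)  # step 2: acquire key information of the image
parse_code(first_image, key_info).execute_operation_flow()  # step 3: parse per the key information
```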
In a possible implementation manner, the electronic device parsing the identification code according to the key information so as to execute the operation flow identified in the identification code includes: acquiring an optimization strategy for the first identification code image based on the key information; processing the first identification code image through the optimization strategy to obtain a second identification code image; and parsing the identification code in the second identification code image so as to execute the operation flow identified in the identification code. By implementing this embodiment, even though the quality of identification code images acquired in daily code-scanning scenarios cannot be guaranteed, the electronic device can acquire an optimization strategy for the first identification code image based on the key information and process the image through that strategy, which ensures the success rate of parsing the identification code. In practical applications, the optimization strategy for the first identification code image is carried in a bitmap table. The bitmap table includes identification information of the first identification code image and state information corresponding to each optimization strategy, where the state information corresponding to each optimization strategy either indicates through a first value that the first identification code image needs to be optimized or indicates through a second value that it does not. Illustratively, there are five optimization strategies for the first identification code image: whether the illumination estimation value of the image needs to be adjusted, whether the bending degree of the identification code needs to be adjusted, whether the identification code needs to be rotated to coincide with a preset reference direction, whether the signal-to-noise ratio information of the image needs to be adjusted, and whether the image quality of the image needs to be adjusted. Specifically: when the illumination estimation value of the first identification code image is smaller than a preset illumination estimation value, the first value indicates that the illumination estimation value needs to be adjusted; when it is larger than the preset illumination estimation value, the second value indicates that no adjustment is needed. When the bending degree of the identification code in the first identification code image is greater than a preset bending value, the first value indicates that the bending degree needs to be adjusted; when it is smaller than the preset bending value, the second value indicates that no adjustment is needed. When the identification code in the first identification code image is a bar code that does not coincide with the preset reference direction, the first value indicates that the identification code needs to be rotated to coincide with the preset reference direction; otherwise, the second value indicates that no rotation is needed. When the signal-to-noise ratio information of the first identification code image is smaller than preset signal-to-noise ratio information, the first value indicates that it needs to be adjusted; otherwise, the second value indicates that no adjustment is needed. When the image quality evaluation value of the first identification code image is smaller than a preset evaluation value, the first value indicates that the image quality needs to be adjusted; otherwise, the second value indicates that no adjustment is needed.
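By way of illustration only, such a bitmap table might be derived from the key information as in the following sketch; the field names, threshold values, and flag layout are assumptions for illustration, not taken from the patent:

```python
# Hypothetical sketch of the five-flag optimization bitmap described above.
# Field names and threshold values are illustrative assumptions only.
from dataclasses import dataclass

FIRST_VALUE, SECOND_VALUE = 1, 0  # 1 = needs optimization, 0 = does not

@dataclass
class KeyInfo:
    illumination: float   # illumination estimation value of the image
    bend_degree: float    # bending degree of the identification code
    is_barcode: bool      # whether the code is a bar code
    rotation_deg: float   # rotation relative to the preset reference direction
    snr: float            # signal-to-noise ratio information of the image
    quality: float        # image quality evaluation value

def build_bitmap(image_id: str, k: KeyInfo,
                 illum_th=0.5, bend_th=10.0, snr_th=20.0, quality_th=0.6):
    return {
        "image_id": image_id,  # identification information of the image
        "adjust_illumination": FIRST_VALUE if k.illumination < illum_th else SECOND_VALUE,
        "flatten_code":        FIRST_VALUE if k.bend_degree > bend_th else SECOND_VALUE,
        "rotate_to_reference": FIRST_VALUE if k.is_barcode and k.rotation_deg != 0.0 else SECOND_VALUE,
        "denoise":             FIRST_VALUE if k.snr < snr_th else SECOND_VALUE,
        "enhance_quality":     FIRST_VALUE if k.quality < quality_th else SECOND_VALUE,
    }

print(build_bitmap("img-001", KeyInfo(0.3, 2.0, True, 15.0, 25.0, 0.4)))
```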
In a possible implementation manner, the attribute information of the identification code includes at least one of position information of the identification code, category information to which the identification code belongs, a rotation angle of the identification code with respect to a preset reference direction, and mask information included in the identification code; the information on the quality of the first identification code image includes at least one of an illumination estimation value of the first identification code image, signal-to-noise ratio information of the first identification code image, and a quality evaluation value of the first identification code image.
In a possible implementation manner, after the electronic device acquires the key information of the first identification code image, the method may further include the following steps: in the case that the electronic device determines that the first identification code image includes a plurality of identification codes, displaying the plurality of identification codes through a touch screen of the electronic device; and receiving a parsing request of a user for a target identification code, the parsing request being used for requesting to parse the target identification code. In this case, the electronic device parsing the identification code according to the key information so as to execute the operation flow identified in the identification code includes: the electronic device parses the target identification code according to the attribute information corresponding to the target identification code and the information on the quality of the first identification code image, so as to execute the operation flow identified in the target identification code. In short, if the identification code image contains only one identification code, the key information corresponding to that identification code acquired by the multitask deep learning model is parsed; if it is detected that the identification code image includes a plurality of identification codes, the user is asked to select the target identification code to be parsed, so that the electronic device can parse the key information corresponding to the target identification code acquired by the multitask deep learning model.
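A minimal sketch of this single-code/multi-code branch follows; the detection-result format and selection callback are hypothetical:

```python
# Hypothetical sketch of the single-code/multi-code branch.
def handle_detections(codes, ask_user_to_pick, parse):
    if len(codes) == 1:
        return parse(codes[0])        # one code: parse its key information directly
    target = ask_user_to_pick(codes)  # several codes: display them, wait for the user's choice
    return parse(target)              # parse only the selected target identification code

codes = [{"id": "A", "bbox": (0, 0, 50, 50)}, {"id": "B", "bbox": (60, 0, 110, 50)}]
print(handle_detections(codes, lambda cs: cs[0], lambda c: f"parsed code {c['id']}"))
```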
In a possible implementation manner, the process of the electronic device acquiring the first identification code image may include: in the case that the motion type of the electronic device is determined to be a code-scanning motion, starting a camera and controlling the camera to begin shooting images; detecting whether an image containing an identification code exists among at least one captured image; and if so, acquiring the first identification code image. Specifically, the motion attitude of the electronic device may be determined by a gyroscope sensor and/or an acceleration sensor. For example, the angular velocity of the electronic device about three axes (the X, Y, and Z axes of the electronic device) may be determined by the gyroscope sensor. When the electronic device is held in portrait orientation, the portrait direction is the Y-axis direction of the gyroscope. When the electronic device tilts left or right and returns, the gyroscope sensor can integrate the angular velocity about the Y axis over time to obtain the angular displacement of the left/right tilt, i.e., the radian by which the device tilts left or right when held in portrait orientation — the tilt angle. Thus, the left/right tilting of the electronic device 100 when held in portrait orientation can be recognized using the gyroscope sensor, and whether the electronic device is in a code-scanning motion can be determined based on the change of this angle. Similarly, the acceleration sensor can detect the magnitude of the acceleration of the electronic device in various directions (typically along three axes), and in this application, whether the motion type of the electronic device is a code-scanning motion can be judged from multiple groups of accelerations acquired by the acceleration sensor. With this implementation, there is no need to open any application to enter its code-scanning page; the user only needs to move the electronic device so that it exhibits a motion with code-scanning characteristics. Since no application's code-scanning page needs to be opened, the time for opening such a page is saved.
In a second aspect, an embodiment of the present application provides an identification apparatus, which may include: a first acquisition unit configured to acquire a first identification code image, wherein the first identification code image comprises an identification code; a second acquisition unit configured to acquire key information of the first identification code image, wherein the key information comprises attribute information corresponding to the identification code contained in the first identification code image and information indicating the quality of the first identification code image; and a processing unit configured to parse the identification code according to the key information so as to execute the operation flow identified in the identification code.
In a possible implementation manner, the processing unit is specifically configured to: acquiring an optimization strategy for the first identification code image based on the key information; processing the first identification code image through the optimization strategy to obtain a second identification code image; and analyzing the identification code in the second identification code image to execute the operation flow identified in the identification code.
In a possible implementation manner, the optimization strategy for the first identification code image is carried in a bitmap table, the bitmap table includes identification information of the first identification code image and state information corresponding to each optimization strategy, and the state information corresponding to each optimization strategy either indicates through a first value that optimization processing needs to be performed on the first identification code image or indicates through a second value that it does not.
In a possible implementation manner, the attribute information of the identification code includes at least one of position information of the identification code, category information to which the identification code belongs, a rotation angle of the identification code with respect to a preset reference direction, and mask information included in the identification code; the information on the quality of the first identification code image includes at least one of an illumination estimation value of the first identification code image, signal-to-noise ratio information of the first identification code image, and a quality evaluation value of the first identification code image.
In a possible implementation manner, the apparatus further includes: a display unit configured to display the plurality of identification codes through a touch screen of the electronic device in the case that the electronic device determines that the first identification code image includes a plurality of identification codes; and a receiving unit configured to receive a parsing request of a user for a target identification code, the parsing request being used for requesting to parse the target identification code. The processing unit is specifically configured to parse the target identification code according to the attribute information corresponding to the target identification code and the information on the quality of the first identification code image, so as to execute the operation flow identified in the target identification code.
In a possible implementation manner, the first obtaining unit is specifically configured to: under the condition that the motion type of the electronic equipment is determined to be code scanning motion, starting a camera and controlling the camera to start to shoot images; detecting whether an image containing an identification code exists in at least one image obtained by shooting; and if so, acquiring the first identification code image.
In a third aspect, an embodiment of the present application further provides an electronic device, which may include a memory and a processor, where the memory is used to store a computer program that supports the electronic device to execute the above method, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method of the first aspect.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions, which, when executed by a processor, cause the processor to perform the method of the first aspect.
In a fifth aspect, the present application further provides a computer program comprising computer software instructions which, when executed by a computer, cause the computer to perform the method according to the first aspect.
Drawings
Fig. 1 is a schematic view illustrating an operation flow of voting included in an identification code recognized by an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic view illustrating an operation flow of payment included in an identification code recognized by an electronic device according to an embodiment of the present disclosure;
fig. 3a is a schematic structural diagram of a convolutional neural network 300 according to an embodiment of the present disclosure;
fig. 3b is a schematic structural diagram of another convolutional neural network 300 provided in the embodiment of the present application;
fig. 3c is a schematic diagram of a chip hardware structure according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an application interface provided by an embodiment of the present application;
fig. 5a is a schematic flowchart of an identification code identification method according to an embodiment of the present disclosure;
fig. 5b is a schematic diagram of a camera adjustment strategy provided in an embodiment of the present application;
fig. 5c is a schematic diagram of a coordinate system of an electronic device according to an embodiment of the present application;
fig. 5d is a schematic diagram of performing optimization processing on the first identification code image according to the embodiment of the present application;
fig. 5e is a schematic diagram of a display page output by an electronic device in a multi-code detection scenario according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an identification apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a software architecture according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects, or for distinguishing different processes applied to the same object, and are not used for describing a specific order of the objects. Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may optionally include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion. In the embodiments of the present application, "A and/or B" means A alone, B alone, or both A and B. "A, and/or B, and/or C" means any one of A, B, and C, any two of them, or all three of them.
The application provides an identification code identification method, which can parse an identification code according to key information of an identification code image so as to execute the operation flow identified in the identification code. The identification code may include, but is not limited to, a bar code, a two-dimensional code, and the like. Taking the two-dimensional code as an example: a two-dimensional code records data symbol information using black and white patterns of specific geometric figures distributed on a plane (in two dimensions) according to a certain rule. Its encoding cleverly uses the concept of the '0'/'1' bit stream that forms the internal logical basis of computers, representing textual and numerical information with several geometric shapes corresponding to binary values, and the information is read automatically through an image input device or an optoelectronic scanning device to realize automatic information processing. It shares some common features of barcode technology: each code system has its specific character set; each character occupies a certain width; and it has certain checking functions. It also has features such as automatically recognizing information on different rows and handling rotation of the graphic. A two-dimensional code can express information in both the horizontal and vertical directions, and thus can express a large amount of information in a small area. As a new technology for automatic identification and as an information carrier, the two-dimensional code is increasingly recognized as economical and reliable. At present, two-dimensional codes are widely applied in various fields, as the following examples illustrate:
a first application scenario: and (6) voting.
For example, a variety show broadcast on television allows users to participate by voting. After the voting stage begins, a two-dimensional code is displayed on the television screen. The user obtains a video stream containing the two-dimensional code through the camera of an electronic device; an image containing the two-dimensional code is obtained from the video stream, and key information of that image is obtained through a multitask deep learning model, where the key information comprises attribute information corresponding to the two-dimensional code and information on the quality of the image containing the two-dimensional code. The electronic device can then parse the two-dimensional code according to the key information so as to execute the voting operation flow identified in the two-dimensional code. For example, the operation flow may be as shown in fig. 1 and may include, but is not limited to, the following steps:
step 1, prompting the user whether to vote;
step 2, the user selects 'confirm', and the electronic device acquires basic information (pictures, background data, introductions, and the like) of the relevant contestants;
step 3, the electronic device presents candidate items to the user; each item may consist of a picture and a short text description, and when the user selects an item, a more detailed description may be displayed;
step 4, the user presses the 'confirm' key after selecting an item;
step 5, the electronic device outputs the prompt message 'You have selected contestant XX; cast one vote?';
step 6, the user selects 'confirm', and the electronic device sends the identification information corresponding to the contestant selected by the user to a relevant server, so that the server can tally the users' selections;
step 7, the electronic device outputs the prompt message 'You have successfully cast one vote for contestant XX'.
A second application scenario: and (4) mobile payment.
For example, when it is determined that the motion type of the electronic device is a code-scanning motion, the camera is turned on and controlled to start shooting images. The electronic device detects whether an image containing a two-dimensional code exists among at least one captured image; if so, the image containing the two-dimensional code is obtained, and key information of that image is obtained through a multitask deep learning model, where, for example, the key information includes attribute information corresponding to the two-dimensional code and information on the quality of the image containing the two-dimensional code. The electronic device can then parse the two-dimensional code according to the key information so as to execute the payment operation flow identified in the two-dimensional code. For example, the operation flow may be as shown in fig. 2 and may include, but is not limited to, the following steps:
step 1, after the electronic device reads the two-dimensional code, prompting the user to input the payment amount;
step 2, after the user confirms the payment, the electronic device calls the electronic payment module to pay;
step 3, the electronic device outputs the prompt 'confirm payment', and after the user selects 'confirm payment', a payment receipt is output, completing the payment process.
It should be noted that the above-described application scenarios are only examples, and the identification of the identification code described in this application is not limited to the above-described application scenarios.
Therefore, the identification code identification method can improve the identification code identification accuracy rate in complex scenes.
Since the embodiments of the present application relate to the application of a large number of neural networks, for the convenience of understanding, the related terms and related concepts such as neural networks related to the embodiments of the present application will be described below.
(1) Neural network
A neural network may be composed of neural units. A neural unit may be an arithmetic unit that takes $x_s$ and an intercept $b$ as inputs, and the output of the arithmetic unit may be:

$$h_{W,b}(x) = f(W^{T}x) = f\left(\sum_{s=1}^{n} W_{s} x_{s} + b\right)$$

where $s = 1, 2, \dots, n$, $n$ is a natural number greater than 1, $W_s$ is the weight of $x_s$, and $b$ is the bias of the neural unit. $f$ is the activation function of the neural unit, which introduces a nonlinear characteristic into the neural network to convert the input signal of the neural unit into an output signal. The output signal of the activation function may serve as the input of the next convolutional layer. The activation function may be a sigmoid function. A neural network is a network formed by joining many single neural units together, that is, the output of one neural unit may be the input of another neural unit. The input of each neural unit may be connected to the local receptive field of the previous layer to extract the features of that local receptive field, and the local receptive field may be a region composed of several neural units.
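For a concrete numeric illustration of this formula (all values chosen arbitrarily, with a sigmoid as $f$):

```python
# Output of a single neural unit: f(sum_s W_s * x_s + b), with sigmoid as f.
import math

def neuron(x, w, b):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

print(neuron(x=[0.5, -1.0, 2.0], w=[0.8, 0.1, -0.3], b=0.05))  # ~0.44
```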
(2) Deep neural network
Deep neural networks (DNNs), also known as multi-layer neural networks, can be understood as neural networks with many hidden layers; "many" here carries no particular metric. Dividing a DNN by the position of its layers, the network can be split into three categories: the input layer, the hidden layers, and the output layer. Generally, the first layer is the input layer, the last layer is the output layer, and all layers in between are hidden layers. The layers are fully connected, that is, any neuron of the $i$-th layer is connected to every neuron of the $(i+1)$-th layer. Although a DNN looks complex, the work of each layer is not complex at all; it is simply the following linear relational expression:

$$\vec{y} = \alpha(W\vec{x} + \vec{b})$$

where $\vec{x}$ is the input vector, $\vec{y}$ is the output vector, $\vec{b}$ is the offset vector, $W$ is the weight matrix (also called the coefficients), and $\alpha(\cdot)$ is the activation function. Each layer simply performs this operation on its input vector $\vec{x}$ to obtain its output vector $\vec{y}$. Because a DNN has many layers, the coefficients $W$ and offset vectors $\vec{b}$ are also numerous. These parameters are defined in the DNN as follows, taking the coefficient $W$ as an example: suppose that in a three-layer DNN, the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer is defined as $W_{24}^{3}$. The superscript 3 represents the layer in which the coefficient $W$ is located, while the subscripts correspond to the output index 2 in the third layer and the input index 4 in the second layer. In summary, the coefficient from the $k$-th neuron of the $(L-1)$-th layer to the $j$-th neuron of the $L$-th layer is defined as $W_{jk}^{L}$. Note that the input layer has no $W$ parameters. In deep neural networks, more hidden layers enable the network to better depict complex situations in the real world. Theoretically, a model with more parameters has higher complexity and greater "capacity", which means it can accomplish more complex learning tasks. Training the deep neural network is the process of learning the weight matrices, and the final goal of training is to obtain the weight matrices of all layers of the trained deep neural network (the weight matrices formed by the vectors $W$ of many layers).
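As an informal illustration of this relation (not part of the patent), a forward pass through a small fully connected network can be sketched as follows; the layer sizes, the random weights, and the choice of tanh as $\alpha(\cdot)$ are arbitrary:

```python
# Layer-by-layer forward pass y = alpha(W x + b); the input layer has no W.
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]  # input layer, two hidden layers, output layer
# Ws[l][j, k] is the coefficient from the k-th neuron of layer l to the
# j-th neuron of layer l+1, matching the W^L_jk notation above.
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    for W, b in zip(Ws, bs):
        x = np.tanh(W @ x + b)  # alpha() chosen as tanh for illustration
    return x

print(forward(rng.standard_normal(4)))
```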
(3) Convolutional neural network
A convolutional neural network (CNN) is a deep neural network with a convolutional structure. The convolutional neural network includes a feature extractor consisting of convolutional layers and sub-sampling layers. The feature extractor may be considered a filter, and the convolution process may be considered as convolving a trainable filter with the input data (e.g., image data, which is used as the example here) or with a convolved feature plane (feature map). The convolutional layer is a neuron layer that performs convolution processing on an input signal in a convolutional neural network. In a convolutional layer of a convolutional neural network, one neuron may be connected to only part of the neurons of the neighboring layer. A convolutional layer usually contains several feature planes, and each feature plane may be composed of neural units arranged in a rectangle. Neural units of the same feature plane share weights, and the shared weights are the convolution kernel. Sharing weights may be understood as saying that the way image information is extracted is independent of location. The underlying principle is that the statistics of one part of an image are the same as those of the other parts, meaning that image information learned in one part can also be used in another part, so the same learned image information can be used at all positions on the image. In the same convolutional layer, multiple convolution kernels can be used to extract different image information; generally, the greater the number of convolution kernels, the richer the image information reflected by the convolution operation.
The convolution kernel can be initialized in the form of a matrix of random size, and can be learned to obtain reasonable weights in the training process of the convolutional neural network. In addition, sharing weights brings the direct benefit of reducing connections between layers of the convolutional neural network, while reducing the risk of overfitting.
As described in the introduction of the basic concepts above, the convolutional neural network is a deep neural network with a convolutional structure and is a deep learning architecture. A deep learning architecture performs multiple levels of learning at different abstraction levels through machine learning algorithms. As a deep learning architecture, the CNN is a feed-forward artificial neural network in which individual neurons respond to the images input to it. To better understand the identification code identification method described in the present application, the structure of the convolutional neural network that may be involved is described below:
in some possible implementations, as shown in fig. 3a, Convolutional Neural Network (CNN)300 may include an input layer 310, convolutional/pooling layer 320 (where pooling layer is optional), and neural network layer 330.
Convolutional layer/pooling layer 320:
Convolutional layer:
the convolutional layer/pooling layer 320 as shown in fig. 3a may comprise layers such as examples 321-326, for example: in one implementation, 321 layers are convolutional layers, 322 layers are pooling layers, 323 layers are convolutional layers, 324 layers are pooling layers, 325 layers are convolutional layers, 326 layers are pooling layers; in another implementation, 321, 322 are convolutional layers, 323 are pooling layers, 324, 325 are convolutional layers, and 326 are pooling layers. I.e., the output of a convolutional layer may be used as input to a subsequent pooling layer, or may be used as input to another convolutional layer to continue the convolution operation.
The inner working principle of one convolution layer will be described below by taking convolution layer 321 as an example.
Convolutional layer 321 may include many convolution operators, also called kernels, whose role in image processing is equivalent to a filter that extracts specific information from the input image matrix. A convolution operator is essentially a weight matrix, which is usually predefined. During the convolution operation on an image, the weight matrix is usually slid over the input image pixel by pixel (or two pixels by two pixels, depending on the value of the stride) in the horizontal direction, to complete the task of extracting specific features from the image. The size of the weight matrix should be related to the size of the image. Note that the depth dimension of the weight matrix is the same as the depth dimension of the input image, and during the convolution operation the weight matrix extends over the entire depth of the input image. Thus, convolving with a single weight matrix produces a convolved output with a single depth dimension, but in most cases a single weight matrix is not used; instead, multiple weight matrices of the same size (rows × columns), i.e., multiple matrices of the same type, are applied. The outputs of the weight matrices are stacked to form the depth dimension of the convolved image, where this dimension can be understood as being determined by the "multiple" above. Different weight matrices may be used to extract different features of the image: for example, one weight matrix extracts image edge information, another extracts a particular color of the image, and yet another blurs unwanted noise in the image. The multiple weight matrices have the same size (rows × columns), so the feature maps they extract also have the same size, and the extracted feature maps of the same size are combined to form the output of the convolution operation.
The weight values in these weight matrices need to be obtained through a large amount of training in practical application, and each weight matrix formed by the trained weight values can be used to extract information from the input image, so that the convolutional neural network 300 can make correct prediction.
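By way of illustration only, the sliding of weight matrices over the input and the stacking of their outputs into the depth dimension can be sketched as follows; the image size, kernel values, and stride are arbitrary assumptions:

```python
# Naive 2D convolution: each weight matrix (kernel) slides over the image,
# and the outputs of several kernels are stacked into the depth dimension.
import numpy as np

def conv2d(image, kernels, stride=1):
    kh, kw = kernels.shape[1:]
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow, len(kernels)))
    for d, k in enumerate(kernels):  # one output channel per weight matrix
        for i in range(oh):
            for j in range(ow):
                patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
                out[i, j, d] = np.sum(patch * k)
    return out

image = np.random.rand(8, 8)  # single-channel image for simplicity
kernels = np.stack([np.ones((3, 3)) / 9.0,                                     # averaging (blur) kernel
                    np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])])  # edge-extraction kernel
print(conv2d(image, kernels).shape)  # (6, 6, 2): depth 2, one channel per kernel
```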
When the convolutional neural network 300 has multiple convolutional layers, the initial convolutional layers (e.g., 321) tend to extract more general features, which may also be called low-level features. As the depth of the convolutional neural network 300 increases, the features extracted by later convolutional layers (e.g., 326) become more and more complex, for example features with high-level semantics; features with higher-level semantics are more applicable to the problem to be solved.
Pooling layer:
Since it is often desirable to reduce the number of training parameters, pooling layers are often periodically introduced after convolutional layers. In the layers 321-326 illustrated as 320 in fig. 3a, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers. Specifically, the pooling layer is used to down-sample data and reduce its amount. For example, taking image data as an example, the spatial size of the image can be reduced by the pooling layer. In general, the pooling layer may include an average pooling operator and/or a max pooling operator for sampling the input image to obtain an image of smaller size. The average pooling operator may compute the pixel values of the image within a certain range to produce an average value as the result of the average pooling. The max pooling operator may take the pixel with the largest value within that range as the result of the max pooling. In addition, just as the size of the weight matrix used in the convolutional layer should be related to the image size, the operators in the pooling layer should also be related to the image size. The size of the image output after processing by the pooling layer may be smaller than the size of the image input to the pooling layer, and each pixel in the output image represents the average or maximum value of a corresponding sub-region of the input image.
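A minimal sketch of the average and max pooling operators described above (window size and input values are arbitrary):

```python
# 2x2 average and max pooling: each output pixel summarizes one sub-region.
import numpy as np

def pool(x, size=2, mode="max"):
    h, w = x.shape[0] // size, x.shape[1] // size
    blocks = x[:h * size, :w * size].reshape(h, size, w, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool(x, mode="max"))   # maximum of each 2x2 sub-region
print(pool(x, mode="mean"))  # average of each 2x2 sub-region
```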
The neural network layer 330:
after processing by convolutional layer/pooling layer 320, convolutional neural network 300 is not sufficient to output the required output information. Because, as previously described, the convolutional layer/pooling layer 320 only extracts features and reduces the parameters associated with the input image. However, to generate the final output information (class information required or other relevant information), the convolutional neural network 300 needs to generate one or a set of the number of required classes of output using the neural network layer 330. Therefore, a plurality of hidden layers (331, 332 to 33n shown in fig. 3 a) and an output layer 340 may be included in the neural network layer 330, and parameters included in the plurality of hidden layers may be obtained by pre-training according to related training data of a specific task type, for example, the task type may include image recognition, image classification, image super-resolution reconstruction, and the like.
After the hidden layers in the neural network layer 330, the last layer of the whole convolutional neural network 300 is the output layer 340. The output layer 340 has a loss function similar to categorical cross entropy, specifically used for calculating the prediction error. Once the forward propagation of the whole convolutional neural network 300 (propagation in the direction from 310 to 340 in fig. 3a) is completed, the backward propagation (propagation in the direction from 340 to 310 in fig. 3a) starts to update the weight values and biases of the aforementioned layers, so as to reduce the loss of the convolutional neural network 300 and the error between the result output by the convolutional neural network 300 through the output layer and the ideal result.
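As an informal illustration of forward propagation, a cross-entropy-style loss, and the weight updates of backward propagation (a single linear layer with hypothetical shapes and learning rate, not the patent's network):

```python
# One-layer sketch: forward propagation -> cross-entropy loss -> backward
# propagation updating the weights and bias to reduce the loss.
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.standard_normal((3, 5)) * 0.1, np.zeros(3)
x, target = rng.standard_normal(5), 2  # one sample whose true class index is 2

for step in range(100):
    logits = W @ x + b                  # forward propagation
    p = np.exp(logits - logits.max())
    p /= p.sum()                        # softmax probabilities
    loss = -np.log(p[target])           # cross-entropy prediction error
    grad = p.copy()
    grad[target] -= 1.0                 # d(loss)/d(logits)
    W -= 0.1 * np.outer(grad, x)        # backward propagation: update weights
    b -= 0.1 * grad                     # ...and bias
print(float(loss))  # approaches 0 as the error is driven down
```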
It should be noted that the convolutional neural network 300 shown in fig. 3a is only an example of a convolutional neural network, and in a specific application, the convolutional neural network may also exist in the form of other network models. For example, as shown in fig. 3b, a plurality of convolutional layers/pooling layers are arranged in parallel, and the features extracted respectively are all input to the neural network layer 330 for processing.
It should be noted that the multitask deep learning model related to the present application may be constructed based on the above convolutional neural network 300, and the multitask deep learning model may include an identification code detection model, an identification code angle detection model, and an image quality evaluation model. In a specific application, the multitask deep learning model may also exist in the form of other network models, which is not limited herein.
A hardware structure of a chip provided in an embodiment of the present application is described below.
Fig. 3c is a hardware structure of a chip provided in an embodiment of the present application, where the chip includes an artificial intelligence processor 30.
Specifically, the artificial intelligence processor 30 may be any processor suitable for large-scale exclusive-OR operation processing, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a graphics processing unit (GPU). Taking the NPU as an example: the NPU may be mounted as a coprocessor to a main CPU (host CPU), which assigns tasks to it. The core part of the NPU is the arithmetic circuit 303, and the arithmetic circuit 303 is controlled by the controller 304 to extract matrix data from the memories (301 and 302) and perform multiply-add operations.
In some implementations, the arithmetic circuit 303 includes a plurality of processing units (PEs) therein. In some implementations, the operational circuitry 303 is a two-dimensional systolic array. The arithmetic circuit 303 may also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuitry 303 is a general-purpose matrix processor.
For example, assume that there is an input matrix A, a weight matrix B, and an output matrix C. The arithmetic circuit 303 fetches the weight data of the matrix B from the weight memory 302 and buffers on each PE in the arithmetic circuit 303. The arithmetic circuit 303 acquires input data of the matrix a from the input memory 301, performs matrix arithmetic on the input data of the matrix a and weight data of the matrix B, and stores a partial result or a final result of the obtained matrix in an accumulator (accumulator) 308.
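A plain re-statement of that multiply-and-accumulate in code (this only mirrors the accumulator's role for one output element of C = A·B; it does not model the PE array itself):

```python
# C = A @ B with explicit partial-result accumulation, mirroring the role of
# the accumulator 308 for one output element (not a model of the PE array).
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # input matrix A
B = np.arange(12.0).reshape(3, 4)  # weight matrix B
C = np.zeros((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        acc = 0.0                     # accumulator for output element C[i, j]
        for k in range(A.shape[1]):
            acc += A[i, k] * B[k, j]  # one multiply-add per step
        C[i, j] = acc
assert np.allclose(C, A @ B)
```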
The unified memory 306 is used to store input data as well as output data. The weight data is directly transferred to the weight memory 302 through a direct memory access controller (DMAC) 305. The input data is also carried through the DMAC into the unified memory 306.
A bus interface unit (BIU) 310 is configured for the interaction between the DMAC and the instruction fetch buffer 309; the bus interface unit 310 is also used by the instruction fetch buffer 309 to fetch instructions from the external memory, and is further used by the storage unit access controller 305 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
The DMAC is mainly used to transfer input data in the external memory DDR to the unified memory 306, or transfer weight data to the weight memory 302, or transfer input data to the input memory 301.
The vector calculation unit 307 may include a plurality of operation processing units, and further processes the output of the operation circuit 303, such as vector multiplication, vector addition, exponential operation, logarithmic operation, magnitude comparison, and the like, if necessary. The vector calculation unit 307 is mainly used for calculating a non-convolutional layer or a fully connected layer (FC) in the neural network, and specifically may process: pooling (Pooling), Normalization, etc. For example, the vector calculation unit 307 may apply a non-linear function to the output of the arithmetic circuit 303, such as a vector of accumulated values, to generate the activation value. In some implementations, the vector calculation unit 307 generates normalized values, combined values, or both.
In some implementations, the vector calculation unit 307 stores the processed vectors to the unified memory 306. In some implementations, the vectors processed by the vector calculation unit 307 can be used as activation inputs of the arithmetic circuit 303, for example for use in subsequent layers of the neural network. As shown in fig. 3a, if the current processing layer is hidden layer 1 (331), the vectors processed by the vector calculation unit 307 can also be used in the calculation of hidden layer 2 (332).
An instruction fetch buffer 309 is connected to the controller 304 and stores the instructions used by the controller 304;
the unified memory 306, the input memory 301, the weight memory 302, and the instruction fetch memory 309 are all On-Chip memories. The external memory is independent of the NPU hardware architecture.
The operations of the layers in the convolutional neural networks shown in fig. 3a and 3b may be performed by the operation circuit 303 or the vector calculation unit 307.
The following describes how to trigger the electronic device to acquire an identification code image through the camera.
The first way: the user opens an application (APP), for example a payment application (refer to a in fig. 4), starts the camera of the electronic device through its "scan" function, and controls the camera to start shooting images (refer to b in fig. 4). Upon detecting that an image containing an identification code exists among at least one captured image, an image 30 containing an identification code (i.e., an identification code image) is acquired. Thereafter, the identification code image can be processed through the multitask deep learning model to obtain key information of the identification code image; for example, the key information includes attribute information corresponding to the identification code contained in the identification code image and information indicating the quality of the identification code image. The identification code can then be parsed according to the key information to obtain a scanning result; for example, the scanning result may be the operation flow identified in the identification code (refer to c in fig. 4).
The second method is as follows: the electronic device determines its motion state through a gyroscope sensor and/or an acceleration sensor. When the motion type of the electronic device is determined to be a code-scanning motion, the camera is started and controlled to begin shooting images; the electronic device then detects whether at least one of the captured images contains an identification code and, if so, acquires the image containing the identification code. The identification code image may then be processed through the multitask deep learning model to obtain key information of the identification code image, the key information including, for example, attribute information corresponding to the identification code contained in the identification code image and information indicating the quality of the identification code image, so that the identification code can be parsed according to the key information to obtain a scanning result, for example the operation flow identified in the identification code.
The method according to the embodiments of the present application is described in detail below. Fig. 5a shows a method for identifying an identification code according to an embodiment of the present application. The method may be implemented in an electronic device and may include, but is not limited to, the following steps:
step S501, the electronic equipment acquires a first identification code image; wherein the first identification code image includes an identification code.
In one possible embodiment, the electronic device determines its angular velocity about three axes (the X-axis, the Y-axis, and the Z-axis of the electronic device) through the gyroscope sensor, so that the motion state of the electronic device can be determined based on the angular velocities about the three axes. When the motion type of the electronic device is determined to be a code-scanning motion, the camera is started and controlled to begin shooting images; the electronic device then detects whether at least one of the captured images contains an identification code and, if so, acquires the image containing the identification code. With this implementation, the user does not need to open any application program to enter the identification code scanning page; it is sufficient to move the electronic device so that it exhibits a motion with code-scanning characteristics. Because no application program has to open the scanning page, the time for opening the scanning page of the corresponding application program is saved.
In one possible embodiment, the electronic device determines its acceleration along the three axes (the X-axis, the Y-axis, and the Z-axis of the electronic device) through the acceleration sensor, so that the motion state of the electronic device can be determined based on the accelerations along the three axes. The camera is then started and controlled as described above, with the same benefit: no application program needs to be opened to reach the scanning page. A hedged sketch of such a motion classifier is given below.
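The following Python sketch illustrates one way such a classifier could combine the gyroscope and acceleration signals; the thresholds, window length, and function name are illustrative assumptions, not details given by the embodiment.

```python
import numpy as np

def is_code_scanning_motion(gyro_xyz, accel_xyz,
                            gyro_peak_thresh=0.5, accel_jitter_thresh=1.0):
    """Classify a short window of sensor samples as code-scanning motion.

    gyro_xyz and accel_xyz are (N, 3) arrays: angular velocity (rad/s)
    about, and acceleration (m/s^2) along, the X, Y and Z axes. A typical
    raise-to-scan gesture shows a burst of rotation followed by a
    near-static hold; all thresholds here are illustrative.
    """
    gyro_peak = np.abs(gyro_xyz).max()                        # burst of rotation
    accel_jitter = np.abs(accel_xyz - accel_xyz.mean(axis=0)).max()
    held_still = len(gyro_xyz) >= 10 and np.abs(gyro_xyz[-10:]).max() < 0.1
    return bool(gyro_peak > gyro_peak_thresh
                and accel_jitter < accel_jitter_thresh
                and held_still)
```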
In one possible embodiment, the user may open the application and enter a scan page of the identification code to capture an image containing the identification code via the camera.
In a possible embodiment, the electronic device may obtain a video stream through the camera, extract video frame images from the video stream, and then classify the video frame images through a preset classifier, so as to obtain a video frame image containing the identification code, that is, the first identification code image.
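A minimal sketch of this extract-and-classify loop, assuming OpenCV for frame capture; the classifier argument is a placeholder for the preset classifier, and the sampling stride is an illustrative choice.

```python
import cv2

def first_code_frame(video_path, classifier, stride=5):
    """Scan a video stream and return the first sampled frame that the
    preset classifier judges to contain an identification code."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                return None                      # stream exhausted, no code found
            if idx % stride == 0 and classifier(frame):
                return frame                     # the first identification code image
            idx += 1
    finally:
        cap.release()
```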
It should be noted that if a position area containing a small identification code, or a suspected identification code, is detected in the identification code image, the detected position area can be focused on and enlarged. This facilitates the subsequent parsing of the identification code and can, to a certain extent, improve the detection rate in small-code scenes. Moreover, if a captured image contains an identification code that the multitask deep learning model fails to detect (for example because the identification code is too small, the camera is out of focus, or the shot is blurred), the camera can be zoomed so that the imaging picture at the centre of the shot is enlarged and the identification code occupies a larger proportion of the identification code image, after which detection by the multitask deep learning model is attempted again. This implementation can likewise improve the detection rate in small-code scenes to a certain extent. In particular, the adjustment strategy of the camera may be as shown in fig. 5b; a sketch of the centre enlargement follows.
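The following Python sketch, assuming OpenCV, shows only the centre-crop-and-enlarge step; the zoom factor is illustrative, and a real device would prefer optical zoom or refocusing where available.

```python
import cv2

def zoom_center(frame, factor=2.0):
    """Enlarge the imaging picture at the centre of the shot so that the
    identification code occupies a larger proportion of the image."""
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_CUBIC)
```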
Step S502, the electronic device acquires key information of the first identification code image, where the key information includes attribute information corresponding to the identification code contained in the first identification code image and information indicating the quality of the first identification code image.
In the embodiment of the present application, the attribute information of the identification code may include at least one of position information of the identification code, category information to which the identification code belongs, a rotation angle of the identification code with respect to a preset reference direction, and mask information included in the identification code.
Illustratively, the position information of the identification code includes the abscissa and ordinate of the upper left corner of the envelope box of the identification code, and the width and height of the envelope box.
Illustratively, the category information to which the identification code belongs may include, but is not limited to, a bar code, a two-dimensional code, and the like.
Illustratively, the rotation angle of the identification code with respect to the preset reference direction lies between 0° and 180°, and may be determined based on the coordinate system shown in fig. 5c. As shown in fig. 5c, the coordinate system of the electronic device may be defined as follows: the X-axis is parallel to the short-side direction of the screen of the electronic device and points from the left side of the screen to the right side; the Y-axis is parallel to the long-side direction of the screen and points from the bottom of the screen to the top; the Z-axis is perpendicular to the plane formed by the X-axis and the Y-axis, that is, perpendicular to the plane of the screen. When the electronic device is placed horizontally with the screen facing upwards, the direction of the Z-axis is opposite to the direction of gravity.
It should be noted that, in the embodiments of the present application, the top, the bottom, the left, and the right are relative, and are exemplary descriptions in specific implementation manners, and should not be construed as limiting the embodiments of the present application. It can be understood that when the posture of the electronic device is changed, the top, the bottom, the left side and the right side of the electronic device mentioned in the embodiment of the present application are not changed.
In the embodiment of the present application, the information indicating the quality of the first identification code image may include at least one of an illumination estimation value of the first identification code image, signal-to-noise ratio information of the first identification code image, and a quality evaluation value of the first identification code image.
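Taken together, the attribute and quality items above suggest a simple record for the key information. The following Python sketch is one possible layout; all field names are illustrative assumptions, and the curvature field is included only because the optimization strategies described later refer to a bending degree.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class KeyInformation:
    """Illustrative container for the key information of an identification
    code image: attribute information plus image-quality information."""
    # Attribute information of the identification code
    bbox: Tuple[int, int, int, int]    # abscissa/ordinate of the envelope box's
                                       # upper left corner, plus its width and height
    category: str                      # e.g. "barcode" or "two-dimensional code"
    rotation_deg: float                # 0-180, relative to the preset reference direction
    mask_info: Optional[bytes] = None  # mask information contained in the code
    curvature: float = 0.0             # bending degree (assumed available to the strategy)
    # Information indicating the quality of the image
    illumination: float = 0.0          # illumination estimation value
    snr_db: float = 0.0                # signal-to-noise ratio information
    quality_score: float = 0.0         # quality evaluation value
```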
In a possible embodiment, the key information of the first identification code image can be obtained through a multitask deep learning model. For example, multi-batch inference prediction may be used in the multitask deep learning model: image blocks of different area sizes taken from the first identification code image are combined into one batch, which increases the proportion of the identification code within each input and can therefore improve the detection accuracy of the identification code.
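A minimal sketch of assembling such a batch, assuming OpenCV and NumPy; the candidate boxes, input size, and normalization are illustrative assumptions rather than details given by the embodiment.

```python
import cv2
import numpy as np

def build_multibatch(image, candidate_boxes, size=224):
    """Combine image blocks of different area sizes from the first
    identification code image into one inference batch; each crop raises
    the proportion of the code within its input."""
    crops = [cv2.resize(image, (size, size))]           # full frame as one element
    for (x, y, w, h) in candidate_boxes:                # regions from a coarse detector
        crops.append(cv2.resize(image[y:y + h, x:x + w], (size, size)))
    return np.stack(crops).astype(np.float32) / 255.0   # batch of shape (B, size, size, C)
```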
In a possible embodiment, the multitask deep learning model includes an identification code detection model, an identification code angle detection model, and an image quality evaluation model. The identification code detection model is used to detect whether the first identification code image contains an identification code and to obtain information related to the identification code, such as its position information. The identification code angle detection model is used to predict the rotation angle of a barcode when the detected identification code is a barcode. It should be noted that the input to the angle detection model is a cutout of each barcode from the first identification code image, rather than the entire image; because this increases the proportion of the identification code in the input, the detection accuracy of the identification code can be improved. The image quality evaluation model is used to determine the information indicating the quality of the first identification code image.
Step S503, the electronic device analyzes the identification code according to the key information to execute the operation process identified in the identification code.
In an embodiment of the present application, the operation flow identified in the identification code may include a series of actions, so that the electronic device can execute a local application service flow according to the action sequence contained in the operation flow; for example, the application service flow may be voting, mobile payment, or the like, as referred to in the present application.
In this embodiment of the application, the electronic device parsing the identification code according to the key information, so as to execute the operation flow identified in the identification code, may be implemented as follows: acquiring an optimization strategy for the first identification code image based on the key information; processing the first identification code image through the optimization strategy to obtain a second identification code image; and parsing the identification code in the second identification code image to execute the operation flow identified in the identification code.
In this embodiment of the application, the optimization strategy for the first identification code image is carried in a bitmap table. The bitmap table includes identification information of the first identification code image and status information corresponding to each optimization strategy, where the status information corresponding to each optimization strategy is one of a first value indicating that optimization processing needs to be performed on the first identification code image and a second value indicating that optimization processing does not need to be performed. For example, the first value "1" may indicate that optimization processing is required, and the second value "0" may indicate that it is not. The first value and the second value are merely examples and should not be construed as limiting.
Illustratively, the optimization strategies for the first identification code image number M (for example, M = 5): whether the illumination estimation value of the first identification code image needs to be adjusted; whether the bending degree of the identification code needs to be adjusted; whether the identification code needs to be rotated to coincide with the preset reference direction; whether the signal-to-noise ratio information of the first identification code image needs to be adjusted; and whether the image quality of the first identification code image needs to be adjusted. Specifically: when the illumination estimation value of the first identification code image is smaller than a preset illumination estimation value, the first value indicates that the illumination estimation value needs to be adjusted, and when it is larger than the preset illumination estimation value, the second value indicates that it does not; when the bending degree of the identification code in the first identification code image is larger than a preset bending value, the first value indicates that the bending degree needs to be adjusted, and when it is smaller than the preset bending value, the second value indicates that it does not; when the identification code in the first identification code image is a barcode that does not coincide with the preset reference direction, the first value indicates that the identification code needs to be rotated to coincide with the preset reference direction, and otherwise the second value indicates that it does not; when the signal-to-noise ratio information of the first identification code image is smaller than preset signal-to-noise ratio information, the first value indicates that the signal-to-noise ratio information needs to be adjusted, and otherwise the second value indicates that it does not; and when the image quality evaluation value of the first identification code image is smaller than a preset evaluation value, the first value indicates that the image quality needs to be adjusted, and otherwise the second value indicates that it does not. A sketch of building this bitmap from the key information follows.
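The following Python sketch shows one way to derive the five status bits from a KeyInformation record as sketched earlier; the threshold names are illustrative assumptions.

```python
def build_optimization_bitmap(key: "KeyInformation", thresholds: dict) -> str:
    """Return a 5-character bitmap, one bit per optimization strategy;
    '1' is the first value (optimization needed), '0' the second."""
    bits = [
        key.illumination < thresholds["illumination"],        # adjust illumination estimation value
        key.curvature > thresholds["curvature"],              # adjust bending degree
        key.category == "barcode" and key.rotation_deg != 0,  # rotate to the preset reference direction
        key.snr_db < thresholds["snr"],                       # adjust signal-to-noise ratio information
        key.quality_score < thresholds["quality"],            # adjust image quality
    ]
    return "".join("1" if b else "0" for b in bits)
```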
It is understood that the optimization strategy for the first identification code image depends on the key information of the acquired first identification code image.
After the optimization strategy for the first identification code image is determined, specific technical means can be adopted to optimize the first identification code image. For example, the illumination estimation value of the first identification code image can be adjusted by adjusting the contrast and/or gamma value of the image; the bending degree of the identification code can be adjusted by performing distortion correction on the first identification code image; the identification code can be rotated to coincide with the preset reference direction through an affine transformation; the signal-to-noise ratio information of the first identification code image can be adjusted by filtering and denoising the image; and the image quality of the first identification code image can be adjusted by performing dilation, erosion, and binarization processing on it.
For example, as shown in fig. 5d, suppose the optimization strategy for the first identification code image is 01110, where "0" indicates that the corresponding optimization is not needed and "1" indicates that it is. The strategy can then be described as follows: after acquiring the optimization strategy, the electronic device processes the first identification code image by correcting the curved-surface distortion so as to adjust the bending degree of the identification code, rotating the identification code to coincide with the preset reference direction, and removing noise from the first identification code image. It should be noted that this is only an example and should not be construed as a limitation.
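The following Python sketch, assuming OpenCV, dispatches such a bitmap to the technical means listed above; the gamma value, kernel sizes, and the use of cv2.undistort as a stand-in for curved-surface correction are illustrative assumptions.

```python
import cv2
import numpy as np

def apply_strategy(image, bitmap, rotation_deg=0.0):
    """Apply each optimization whose bit is the first value '1'.
    Bit order matches the example 01110: illumination, bending degree,
    rotation, signal-to-noise ratio, image quality."""
    out = image
    if bitmap[0] == "1":                       # adjust illumination via a gamma curve
        lut = np.array([((i / 255.0) ** 0.6) * 255 for i in range(256)], dtype=np.uint8)
        out = cv2.LUT(out, lut)
    if bitmap[1] == "1":                       # correct curved-surface distortion
        out = cv2.undistort(out, np.eye(3), np.zeros(5))   # placeholder camera model
    if bitmap[2] == "1":                       # affine rotation to the reference direction
        h, w = out.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), rotation_deg, 1.0)
        out = cv2.warpAffine(out, m, (w, h))
    if bitmap[3] == "1":                       # filter and denoise to raise the SNR
        out = (cv2.fastNlMeansDenoisingColored(out) if out.ndim == 3
               else cv2.fastNlMeansDenoising(out))
    if bitmap[4] == "1":                       # erosion, dilation and binarization
        gray = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY) if out.ndim == 3 else out
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = np.ones((2, 2), np.uint8)
        out = cv2.dilate(cv2.erode(binary, kernel), kernel)
    return out
```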
After the first identification code image is processed through the optimization strategy, a second identification code image is obtained. Taking the second identification code image as an image containing a two-dimensional code as an example, in the process of parsing the two-dimensional code, the code may be located through its position detection patterns (finder patterns, FIP) and then decoded, so that the operation flow identified in the two-dimensional code can be executed. For how to parse a barcode, reference is made to the prior art; details are not repeated herein.
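As a hedged illustration only, OpenCV's QR detector, which locates the three position detection patterns before decoding, could be used as follows; second_image and execute_operation_flow are placeholders, not names from this embodiment.

```python
import cv2

detector = cv2.QRCodeDetector()
payload, points, _ = detector.detectAndDecode(second_image)  # second_image: the optimized image
if payload:
    execute_operation_flow(payload)  # hypothetical handler for the identified operation flow
```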
By implementing this embodiment of the application, the electronic device parses the identification code according to the acquired key information of the identification code image. Even when the identification code image includes content other than the identification code, or the image quality is poor because the holder of the electronic device shot the image at an unfavorable angle, the identification code can still be accurately parsed and the operation flow identified in it executed, improving the accuracy of identifying identification codes in complex scenes.
It should be noted that, in the foregoing method embodiment, if it is detected that the identification code image contains only one identification code, the key information corresponding to that identification code obtained by the multitask deep learning model can be parsed directly. If it is detected that the identification code image contains multiple identification codes, the electronic device may display the application interface shown in fig. 5e and prompt the user to select the target identification code to be parsed, so that the electronic device parses the key information corresponding to the target identification code acquired by the multitask deep learning model.
In summary, the method described above enables an electronic device to support multi-code detection and small-size identification code detection without consuming redundant resources. Because the method parses the identification code according to the key information of the acquired identification code image, the identification code can be accurately parsed even when the identification code image includes content other than the identification code, or when the image quality is poor because the holder of the electronic device shot the image at an unfavorable angle, so that the operation flow identified in the identification code is executed and the accuracy of identifying identification codes in complex scenes is improved.
Fig. 1 to fig. 5e describe the identification method of the identification code according to the embodiment of the present application in detail, and the following describes the apparatus according to the embodiment of the present application with reference to the drawings.
Fig. 6 is a schematic structural diagram of an identification device 60 according to an embodiment of the present application. The identification device 60 shown in fig. 6 may include:
a first obtaining unit 600 configured to obtain a first identification code image; wherein the first identification code image comprises an identification code;
a second obtaining unit 602, configured to obtain key information of the first identification code image, where the key information includes attribute information corresponding to the identification code contained in the first identification code image and information indicating the quality of the first identification code image;
the processing unit 604 is configured to parse the identifier according to the key information, so as to execute the operation procedure identified in the identifier.
In a possible implementation manner, the processing unit 604 is specifically configured to:
acquiring an optimization strategy for the first identification code image based on the key information;
processing the first identification code image through the optimization strategy to obtain a second identification code image;
and analyzing the identification code in the second identification code image to execute the operation flow identified in the identification code.
In a possible implementation manner, the optimization strategy for the first identification code image is carried in a bitmap table, the bitmap table includes identification information of the first identification code image and status information corresponding to each optimization strategy, and the status information corresponding to each optimization strategy is one of a first value indicating that optimization processing needs to be performed on the first identification code image and a second value indicating that optimization processing does not need to be performed on the first identification code image.
In a possible implementation manner, the attribute information of the identification code includes at least one of position information of the identification code, category information to which the identification code belongs, a rotation angle of the identification code with respect to a preset reference direction, and mask information included in the identification code; the information on the quality of the first identification code image includes at least one of an illumination estimation value of the first identification code image, signal-to-noise ratio information of the first identification code image, and a quality evaluation value of the first identification code image.
In one possible implementation, the apparatus 60 further includes:
the display unit is used for displaying the plurality of identification codes through a touch screen of the electronic equipment under the condition that the electronic equipment determines that the first identification code image comprises the plurality of identification codes;
the receiving unit is used for receiving an analysis request of a user for the target identification code; the analysis request is used for requesting to analyze the target identification code;
the processing unit 604 is specifically configured to:
and parse the target identification code according to the attribute information corresponding to the target identification code and the information indicating the quality of the first identification code image, so as to execute the operation flow identified in the target identification code.
In a possible implementation manner, the first obtaining unit 600 is specifically configured to:
under the condition that the motion type of the electronic equipment is determined to be code scanning motion, starting a camera and controlling the camera to start to shoot images;
detecting whether an image containing an identification code exists in at least one image obtained by shooting;
and if so, acquiring the first identification code image.
It should be noted that, in the embodiment of the present application, specific implementations of each unit may refer to the relevant descriptions in the foregoing embodiments, which are not repeated herein.
By implementing this embodiment of the application, the electronic device parses the identification code according to the acquired key information of the identification code image. Even when the identification code image includes content other than the identification code, or the image quality is poor because the holder of the electronic device shot the image at an unfavorable angle, the identification code can still be accurately parsed and the operation flow identified in it executed, improving the accuracy of identifying identification codes in complex scenes.
Fig. 7 shows a schematic structural diagram of the electronic device 100.
The following specifically describes an embodiment by taking the electronic device 100 as an example. It should be understood that electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
In this application, the processor 110 may be configured to obtain the key information of the first identification code image, so that the identification code can be parsed according to the key information to execute the operation flow identified in the identification code.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the camera, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The cameras may include mid-focus, telephoto, wide-angle, ultra-wide-angle, time-of-flight (TOF) depth-sensing, movie, macro, and other cameras. To meet different functional requirements, the electronic device may carry combinations of multiple cameras, such as two, three, four, five, or even six cameras, to improve shooting performance. An object generates an optical image through the camera that is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 8 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 8, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
In the present application, the application layer may also add a floating launcher, which serves as the default application displayed in the above-mentioned small window 30 and provides the user with an entry to other applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 8, the application framework layer may include a window manager (window manager), a content provider, a view system, a phone manager, a resource manager, a notification manager, an activity manager (activity manager), and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, take screenshots, and the like. In the present application, a FloatingWindow can be extended from the native Android PhoneWindow, specifically for displaying the above-mentioned small window 30. To distinguish it from a common window, this window has the property of floating on the topmost layer of the window series. In some alternative embodiments, the window size may be given a suitable value by an optimal display algorithm according to the actual screen size. In some possible embodiments, the aspect ratio of the window may default to the screen aspect ratio of a conventional mainstream mobile phone. Meanwhile, to make it easy for the user to close or hide the small window, a close key and a minimize key can additionally be drawn in the upper right corner. In addition, the window management module can receive certain gesture operations from the user; if a gesture operation on the small window is recognized, the window is frozen and an animation of the small window moving is played.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures. In the application, the button views for closing, minimizing and other operations on the small window can be correspondingly added and bound to the floating window in the window manager.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar and can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications that appear in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the display in the form of a dialog window. For example, text information may be prompted in the status bar, a prompt tone may sound, the electronic device may vibrate, or an indicator light may flash.
The activity manager is used for managing the active services running in the system, and comprises processes (processes), applications, services (services), task information and the like. In the present application, an Activity task stack dedicated to managing the application Activity displayed in the small window 30 may be added in the Activity manager module, so as to ensure that the application Activity and task in the small window do not conflict with the application displayed in the full screen in the screen.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part consists of the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: input manager, input dispatcher, surface manager, Media Libraries, three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The input manager is responsible for acquiring event data from the underlying input driver, parsing and packaging it, and passing it to the input dispatch manager.
The input dispatch manager stores window information; after receiving an input event from the input manager, it searches the stored windows for a suitable window and dispatches the event to that window.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including the touch coordinates, the time stamp of the touch operation, and other information). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example a touch click operation whose corresponding control is the camera application icon, the camera application calls an interface of the application framework layer to start the camera application, the camera driver is then started by calling the kernel layer, and a still image or video is captured through the camera 193.
The software system shown in fig. 8 involves applications (e.g., gallery and file manager) that use the small-window display capability, an instant sharing module that provides sharing capability, a content providing module that provides storage and retrieval of data, an application framework layer that provides WLAN and Bluetooth services, and a kernel and underlying layers that provide WLAN/Bluetooth capability and the basic communication protocols.
The embodiment of the application also provides a computer-readable storage medium. All or part of the processes in the above method embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in the above computer storage medium and, when executed, may include the processes of the above method embodiments. The computer-readable storage medium includes any medium that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in, or transmitted over, a computer-readable storage medium. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The steps in the methods of the embodiments of the application can be reordered, combined, or deleted according to actual needs. The terms "according to", "by", and "through" as used herein are to be understood as meaning "at least according to" and "at least through".
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A method for identifying an identification code, comprising:
the electronic equipment acquires a first identification code image; wherein the first identification code image comprises an identification code;
the electronic equipment acquires key information of the first identification code image, wherein the key information comprises attribute information corresponding to the identification code contained in the first identification code image and information indicating the quality of the first identification code image;
and the electronic equipment analyzes the identification code according to the key information so as to execute the operation flow identified in the identification code.
2. The method of claim 1, wherein the electronic device parses the identification code according to the key information to perform the operation procedure identified in the identification code, comprising:
acquiring an optimization strategy for the first identification code image based on the key information;
processing the first identification code image through the optimization strategy to obtain a second identification code image;
and analyzing the identification code in the second identification code image to execute the operation flow identified in the identification code.
3. The method of claim 2, wherein the optimization strategy for the first identification code image is carried in a bitmap table, the bitmap table includes identification information of the first identification code image and status information corresponding to each optimization strategy, and the status information corresponding to each optimization strategy is one of a first value indicating that optimization processing needs to be performed on the first identification code image and a second value indicating that optimization processing does not need to be performed on the first identification code image.
4. The method according to any one of claims 1 to 3, wherein the attribute information of the identification code includes at least one of position information of the identification code, category information to which the identification code belongs, a rotation angle of the identification code with respect to a preset reference direction, and mask information contained in the identification code; the information on the quality of the first identification code image includes at least one of an illumination estimation value of the first identification code image, signal-to-noise ratio information of the first identification code image, and a quality evaluation value of the first identification code image.
5. The method of any one of claims 1 to 4, wherein after the electronic device acquires the key information of the first identification code image, the method further comprises:
in a case where the electronic device determines that the first identification code image contains a plurality of identification codes, displaying the plurality of identification codes through a touch screen of the electronic device;
receiving an analysis request of a user for a target identification code, the analysis request being used for requesting that the target identification code be analyzed;
and wherein the electronic device analyzing the identification code according to the key information to execute the operation flow identified in the identification code comprises:
the electronic device analyzing the target identification code according to the attribute information corresponding to the target identification code and the information indicating the quality of the first identification code image, so as to execute the operation flow identified in the target identification code.
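A hedged sketch of the claim-5 flow: several detected codes are all shown on the touch screen, and only the code the user selects is analyzed. The detection results and UI call below are placeholders:

```python
# Hypothetical claim-5 flow: several codes detected in one image are all
# shown on the touch screen, and only the user-selected target is parsed.
def find_codes(image: str) -> list:
    return [{"id": 0, "box": (10, 10, 80, 80)},    # placeholder detections
            {"id": 1, "box": (120, 40, 60, 60)}]

def display_on_touchscreen(codes: list) -> None:
    for code in codes:
        print(f"highlight code {code['id']} at {code['box']}")  # stands in for UI

def parse_target(codes: list, tapped_id: int) -> str:
    target = next(c for c in codes if c["id"] == tapped_id)
    return f"decoded-payload-of-code-{target['id']}"  # analyze only the target

codes = find_codes("frame")
display_on_touchscreen(codes)
print(parse_target(codes, tapped_id=1))
```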
6. The method of claim 1, wherein the electronic device acquiring the first identification code image comprises:
in a case where the motion type of the electronic device is determined to be a code-scanning motion, starting a camera and controlling the camera to begin capturing images;
detecting whether an image containing an identification code exists among the at least one captured image;
and if so, acquiring the first identification code image.
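A toy sketch of the claim-6 trigger: classify the device's motion and, only when it resembles a code-scanning gesture, start capturing frames and look for one containing an identification code. The heuristic and detector are stand-ins, not the patent's method:

```python
# Hypothetical claim-6 trigger: classify device motion and, only when it
# resembles a code-scanning gesture, start capturing and scanning frames.
def classify_motion(accel_samples: list) -> str:
    # Toy heuristic standing in for a real motion classifier.
    spread = max(accel_samples) - min(accel_samples)
    return "scan_code" if spread > 1.5 else "other"

def capture_frames(n: int) -> list:
    return [f"frame-{i}" for i in range(n)]        # placeholder camera frames

def contains_code(frame: str) -> bool:
    return frame.endswith("2")                     # placeholder code detector

if classify_motion([0.1, 2.0, -0.3]) == "scan_code":
    first_image = next((f for f in capture_frames(5) if contains_code(f)), None)
    print(first_image)                             # "frame-2" in this toy run
```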
7. An identification device, comprising:
a first acquiring unit, configured to acquire a first identification code image, wherein the first identification code image contains an identification code;
a second acquiring unit, configured to acquire key information of the first identification code image, wherein the key information comprises attribute information of the identification code contained in the first identification code image and information indicating the quality of the first identification code image;
and the processing unit is used for analyzing the identification code according to the key information so as to execute the operation flow identified in the identification code.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:
acquiring an optimization strategy for the first identification code image based on the key information;
processing the first identification code image through the optimization strategy to obtain a second identification code image;
and analyzing the identification code in the second identification code image to execute the operation flow identified in the identification code.
9. The apparatus of claim 8, wherein the optimization strategy for the first identification code image is carried in a bitmap table; the bitmap table comprises identification information of the first identification code image and status information corresponding to each optimization strategy; and the status information corresponding to each optimization strategy is either a first value, indicating that the corresponding optimization processing needs to be performed on the first identification code image, or a second value, indicating that it does not.
10. The apparatus according to any one of claims 7 to 9, wherein the attribute information of the identification code comprises at least one of: position information of the identification code, information on the category to which the identification code belongs, a rotation angle of the identification code with respect to a preset reference direction, and mask information contained in the identification code; and the information indicating the quality of the first identification code image comprises at least one of: an illumination estimation value of the first identification code image, signal-to-noise ratio information of the first identification code image, and a quality evaluation value of the first identification code image.
11. The apparatus of any one of claims 7 to 10, further comprising:
a display unit, configured to display the plurality of identification codes through a touch screen of the electronic device in a case where the electronic device determines that the first identification code image contains a plurality of identification codes;
a receiving unit, configured to receive an analysis request of a user for a target identification code, the analysis request being used for requesting that the target identification code be analyzed;
the processing unit is specifically configured to:
analyzing the target identification code according to the attribute information corresponding to the target identification code and the information indicating the quality of the first identification code image, so as to execute the operation flow identified in the target identification code.
12. The apparatus of claim 7, wherein the first obtaining unit is specifically configured to:
in a case where the motion type of the electronic device is determined to be a code-scanning motion, starting a camera and controlling the camera to begin capturing images;
detecting whether an image containing an identification code exists among the at least one captured image;
and if so, acquiring the first identification code image.
13. An electronic device, comprising a memory and one or more processors, wherein the memory stores one or more programs, and wherein the one or more processors, when executing the one or more programs, cause the electronic device to implement the method of any one of claims 1 to 6.
14. A computer-readable storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 6.
CN202110215292.0A 2021-02-25 2021-02-25 Identification code identification method, related electronic equipment and computer readable storage medium Pending CN114970576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110215292.0A CN114970576A (en) 2021-02-25 2021-02-25 Identification code identification method, related electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114970576A (en) 2022-08-30

Family

ID=82973633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110215292.0A Pending CN114970576A (en) 2021-02-25 2021-02-25 Identification code identification method, related electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114970576A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055211A (en) * 2023-02-14 2023-05-02 成都理工大学工程技术学院 Method and system for identifying identity and automatically logging in application based on neural network
CN116055211B (en) * 2023-02-14 2023-11-17 成都理工大学工程技术学院 Method and system for identifying identity and automatically logging in application based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination