CN115908841A - Image recognition method and device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN115908841A
CN115908841A (Application CN202211355178.9A)
Authority
CN
China
Prior art keywords
image
gradient
target object
recognized
edge sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211355178.9A
Other languages
Chinese (zh)
Inventor
陈思远
刘枢
吕江波
沈小勇
Current Assignee
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd filed Critical Shenzhen Smartmore Technology Co Ltd
Priority to CN202211355178.9A
Publication of CN115908841A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The present application relates to the technical field of image processing and provides an image recognition method and apparatus, a computer device, and a computer-readable storage medium. The method includes: acquiring an image to be recognized, where the image to be recognized contains a target object; identifying one or more edge sequences from the image to be recognized according to the gradient information of each pixel in the image, where an edge sequence is composed of pixels whose gradient information satisfies a preset gradient difference condition; determining target angle information corresponding to the target object according to the one or more edge sequences; and determining the positioning position of the target object in the image to be recognized according to preset morphological features of the target object and the target angle information. The method and apparatus can improve the efficiency and accuracy of determining the positioning position of a target object in an image.

Description

Image recognition method and device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image recognition method and apparatus, a computer device, and a computer-readable storage medium.
Background
With the rapid development of the mobile internet, important information is increasingly carried by target objects such as two-dimensional codes, and reading information by scanning a target object in an acquired image has become an important means of obtaining information. However, because of shooting angle, illumination, background patterns, and similar factors during image acquisition, it is often difficult to determine the positioning position of the target object in the image.
In the conventional technology, straight lines are usually searched for in the image and then matched to determine the positioning position of the target object. However, because many straight-line features tend to appear in an image, the algorithm complexity is very high, and the efficiency of determining the positioning position of the target object in the image is low.
Disclosure of Invention
In view of the above, it is necessary to provide an image recognition method, an image recognition apparatus, a computer device, and a computer-readable storage medium, which can improve the efficiency and accuracy of determining the location of the target object in the image.
In a first aspect, the present application provides an image recognition method, including:
acquiring an image to be recognized, where the image to be recognized contains a target object;
identifying one or more edge sequences from the image to be recognized according to the gradient information of each pixel in the image, where an edge sequence is composed of pixels whose gradient information satisfies a preset gradient difference condition;
determining target angle information corresponding to the target object according to the one or more edge sequences;
and determining the positioning position of the target object in the image to be recognized according to preset morphological features of the target object and the target angle information.
In a second aspect, the present application further provides an image recognition apparatus, including:
the acquisition module is used for acquiring an image to be recognized, where the image to be recognized contains a target object;
the identification module is used for identifying one or more edge sequences from the image to be recognized according to the gradient information of each pixel in the image, where an edge sequence is composed of pixels whose gradient information satisfies a preset gradient difference condition;
the determining module is used for determining target angle information corresponding to the target object according to the one or more edge sequences;
and the positioning module is used for determining the positioning position of the target object in the image to be recognized according to preset morphological features of the target object and the target angle information.
In a third aspect, the present application further provides a computer device, where the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps in the image recognition method when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, realizes the steps in the image recognition method described above.
In a fifth aspect, the present application further provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the image recognition method described above.
In the above scheme, an image to be recognized containing a target object is acquired, and the gradient information of each pixel in the image is determined. Pixels whose gradient information satisfies a preset gradient difference condition are identified and composed into edge sequences, each of which can serve as a candidate edge line of the target object. Target angle information corresponding to the target object, which can describe an edge corner of the target object, is then determined from the edge sequences. Finally, the positioning position of the target object in the image is determined according to preset morphological features of the target object and the target angle information, thereby improving the efficiency and accuracy of determining the positioning position of the target object in the image.
Drawings
Fig. 1 is a schematic flowchart of an image recognition method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of determining an edge sequence according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of determining target angle information according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a pixel point in an edge sequence according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an edge sequence provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of another image recognition method according to an embodiment of the present application;
fig. 7 is a block diagram of an image recognition apparatus according to an embodiment of the present application;
fig. 8 is an internal structural diagram of a computer device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In some embodiments, as shown in fig. 1, an image recognition method is provided, which is exemplified by being applied to a computer device, and includes the following steps:
and step S101, acquiring an image to be recognized.
In this step, the image to be recognized contains the target object. For example, the image to be recognized may be an image captured by the computer device, and the target object may be a two-dimensional code or a barcode, such as a Data Matrix code, which is one type of two-dimensional code.
And S102, identifying one or more edge sequences from the image to be identified according to the gradient information of each pixel point in the image to be identified.
In this step, an edge sequence is composed of pixels whose gradient information satisfies a preset gradient difference condition; the preset gradient difference condition may include a gradient strength difference condition and a gradient direction difference condition.
Specifically, the computer device determines the gradient information of each pixel in the image to be recognized, screens out the pixels whose gradient information satisfies the preset gradient difference condition, and composes them into one or more edge sequences.
Step S103, determining target angle information corresponding to the target object according to one or more edge sequences.
In this step, the target angle information may be composed of two straight lines and their intersection point. For example, when the target object is a quadrangle (e.g., a parallelogram), the target angle information may be two adjacent sides of the target object together with their intersection point, which is a vertex of the target object.
Specifically, the computer device identifies edge lines of the target object from the one or more edge sequences, determines candidate angles that conform to the expected angle characteristics from these edge lines, and selects from the candidate angles the target angle information that best matches the target object.
And step S104, determining the positioning position of the target object in the image to be recognized according to preset morphological features of the target object and the target angle information.
In this step, the preset morphological feature may be an inherent morphological feature or a common morphological feature of the target object, such as a parallelogram, a square or a rectangle.
Specifically, after the target angle information of the target object is determined, since the preset morphological feature of the target object is known in advance, a region conforming to the preset morphological feature can be obtained from the target angle information, and the region is the positioning position of the target object in the image to be recognized.
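As a hedged illustration of this step, suppose the target angle information supplies one vertex and the directions of its two adjacent sides, and the preset morphological feature is a parallelogram with known side lengths. The region can then be completed directly from the angle information; the function name, its inputs, and the assumption of known side lengths are illustrative rather than taken from the patent:

```python
# Illustrative sketch only: completing a parallelogram region from one vertex
# and the directions of its two adjacent sides. Side lengths len_a / len_b are
# assumed to be known from the preset morphological feature.
import numpy as np

def locate_parallelogram(vertex, dir_a, dir_b, len_a, len_b):
    """Return the four corners of the parallelogram as a (4, 2) array."""
    v = np.asarray(vertex, dtype=np.float64)
    a = np.asarray(dir_a, dtype=np.float64)
    b = np.asarray(dir_b, dtype=np.float64)
    a = a / np.linalg.norm(a)  # unit vectors along the two adjacent sides
    b = b / np.linalg.norm(b)
    p0 = v
    p1 = v + len_a * a
    p2 = v + len_a * a + len_b * b
    p3 = v + len_b * b
    return np.stack([p0, p1, p2, p3])
```

For a square or rectangle the same completion applies with perpendicular side directions; the returned corners describe the positioning position of the target object in image coordinates.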
In the image recognition method above, an image to be recognized containing a target object is acquired, and the gradient information of each pixel in the image is determined. Pixels whose gradient information satisfies a preset gradient difference condition are composed into edge sequences, each of which can serve as a candidate edge line of the target object. Target angle information corresponding to the target object, which can describe an edge corner of the target object, is determined from the edge sequences, and the positioning position of the target object in the image is then determined according to preset morphological features of the target object and the target angle information, thereby improving the efficiency and accuracy of determining the positioning position of the target object in the image.
In some embodiments, the identifying one or more edge sequences from the image to be identified according to the gradient information of each pixel point in the image to be identified in step S102 specifically includes:
and according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, enabling the gradient direction to meet the gradient direction difference condition, and enabling the gradient strength to meet the corresponding pixel points of the gradient strength difference condition to form one or more edge sequences included in the image to be recognized.
In this embodiment, the gradient information includes a gradient direction and a gradient strength, and the preset gradient difference condition includes a gradient direction difference condition and a gradient strength difference condition.
Specifically, the computer device composes the pixels whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition, determined from the gradient direction and gradient strength of each pixel in the image to be recognized, into the one or more edge sequences included in the image.
Illustratively, the computer device performs edge extraction on the image to be recognized with a first-order edge extraction algorithm to obtain a gradient strength response map and a gradient direction (angle) response map, thereby determining the gradient direction and gradient strength of each pixel. According to these values, pixels whose gradient strength satisfies the gradient strength difference condition and whose gradient direction satisfies the gradient direction difference condition are composed into one or more edge sequences included in the image to be recognized.
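The text does not tie the first-order edge extraction to a particular operator; a minimal sketch using Sobel kernels (an assumed choice) to produce the gradient strength and gradient direction (angle) response maps might look like this:

```python
# Hedged sketch of the first-order edge extraction step: the Sobel operator is
# an assumption, not named in the patent. Returns the gradient-strength and
# gradient-direction (degrees) response maps of a grayscale image.
import numpy as np

def gradient_maps(image: np.ndarray):
    """Return (strength, direction) response maps for a 2-D grayscale image."""
    img = image.astype(np.float64)
    # First-order derivative kernels (Sobel).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)   # horizontal derivative
            gy[i, j] = np.sum(win * ky)   # vertical derivative
    strength = np.hypot(gx, gy)            # gradient strength response map
    direction = np.degrees(np.arctan2(gy, gx))  # gradient direction map
    return strength, direction
```

In practice the convolution would be vectorized or delegated to an image-processing library; the explicit loops here only make the per-pixel computation visible.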
In this embodiment, composing the edge sequences from pixels whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition helps obtain more accurate edge sequences, which in turn improves the accuracy of the subsequently determined positioning position of the target object in the image.
In some embodiments, the method may further include composing one or more edge sequences included in the image to be recognized by the following steps, specifically including:
according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition form a current segment edge sequence included in the image to be recognized;
according to the gradient direction and the gradient strength of pixel points in the image to be recognized except the pixel points included in the current section of edge sequence, corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition are combined to form the next section of edge sequence included in the image to be recognized;
and taking the next edge sequence as the current edge sequence and returning to the previous step — composing, from the pixels not yet included in any edge sequence, those whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition into the next edge sequence — until the next edge sequence is the last edge sequence included in the image to be recognized.
In this embodiment, there are multiple edge sequences. The current edge sequence may be the first edge sequence or any segment after it; the next edge sequence may be the second edge sequence or any segment after it.
specifically, as shown in fig. 2, the computer device forms a current segment edge sequence by using the gradient direction and the gradient strength of each pixel in the image to be recognized, marks the pixels added/formed into the edge sequence as the traversed (and the pixels not added/formed into the edge sequence as the non-traversed state), forms a next segment edge sequence by using the corresponding pixels in the image to be recognized except for the pixels included in the current segment edge sequence (i.e., the pixels in the image to be recognized except for the pixels included in the current segment edge sequence are not the traversed state), marks the pixels added/formed into the edge sequence as the traversed, and jumps to the next segment edge sequence by using the gradient direction and the gradient strength of the pixels other than the pixels included in the current segment edge sequence as the current segment edge sequence, and adds the latest next segment edge sequence as the current segment edge sequence until all the pixels in the image to be recognized are added as the next segment edge sequence, and marks all the pixels in the next segment edge sequence as the next segment edge sequence to be recognized until the gradient direction and the gradient strength of the pixels in the image to be recognized are added (i.e., the next segment edge sequence is added as the next segment edge sequence, and the image to be recognized is the last image to be recognized, if the sequence length is not greater than 10 (if the number of pixel points in the edge sequence is not greater than 10), the edge sequence is not used for subsequently finding/determining candidate angle information or target angle information.
In this embodiment, after the first edge sequence is obtained, pixels are repeatedly selected from the remaining pixels that have not been added to any edge sequence to compose further edge sequences. Obtaining multiple candidate edge sequences in this way allows angle identification and matching to be performed over more candidates, which helps obtain more accurate target angle information and thus improves the accuracy of the subsequently determined positioning position of the target object in the image.
In some embodiments, the method may further include the following steps to form a current segment edge sequence included in the image to be recognized, specifically including:
selecting pixel points with the maximum gradient strength from all pixel points in the image to be recognized as current round seed points of a current segment of edge sequence included in the image to be recognized;
selecting a corresponding pixel point of which the gradient direction meets a gradient direction difference condition and the gradient strength meets a gradient strength difference condition from neighborhood pixel points of the current round of seed points of the current section of the edge sequence as a next round of seed points of the current section of the edge sequence;
taking the next-round seed point of the current edge sequence as the current-round seed point and returning to the previous step — selecting, from the neighborhood pixels of the current-round seed point, a pixel whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition as the next-round seed point — until no next-round seed point can be selected from the neighborhood pixels of the current-round seed point of the current edge sequence;
and forming the current segment edge sequence by the selected seed points of each round of the current segment edge sequence.
In this embodiment, as shown in fig. 2, the neighborhood pixels of the current-round seed point may be the pixels in its eight-neighborhood, i.e., the pixels above, below, left, right, upper-left, lower-left, upper-right, and lower-right of the seed point. The gradient direction difference condition may require that the absolute difference between the gradient directions of the next-round seed point and the current-round seed point be less than A, where A may be 45 degrees; the gradient strength difference condition may require that the absolute difference between their gradient strengths be less than B, where B may be 30.
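The two difference conditions, with the example thresholds A = 45 degrees and B = 30 from the text, can be expressed as a small check; the function names and the eight-neighborhood helper are illustrative:

```python
# Sketch of the neighbour-acceptance test described above. The thresholds
# A = 45 degrees and B = 30 come from the text; everything else is assumed.
import numpy as np

A_DEG = 45.0   # gradient-direction difference threshold (A)
B_STR = 30.0   # gradient-strength difference threshold (B)

def eight_neighbours(y, x, h, w):
    """Coordinates of the eight neighbours of (y, x) inside an h*w image."""
    return [(y + dy, x + dx)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0) and 0 <= y + dy < h and 0 <= x + dx < w]

def satisfies_difference_conditions(seed, cand, strength, direction):
    """Check the gradient-direction and gradient-strength difference conditions."""
    d_dir = abs(direction[cand] - direction[seed])
    d_str = abs(strength[cand] - strength[seed])
    return d_dir < A_DEG and d_str < B_STR
```

Here `strength` and `direction` are the response maps from the edge extraction step, indexed by (row, column) pixel coordinates.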
Specifically, as shown in fig. 2, the computer device selects the pixel with the largest gradient strength as the current-round seed point of the current edge sequence, then selects the pixel with the largest gradient strength among the neighborhood pixels of that seed point as the next-round seed point to be verified. It checks whether the absolute difference in gradient direction between the candidate and the current-round seed point is less than A, and whether the absolute difference in gradient strength is less than B. If both conditions hold, the candidate qualifies as the next-round seed point of the current edge sequence; it is then taken as the new current-round seed point, and the selection is repeated on its neighborhood pixels. If either condition fails, the candidate does not qualify, meaning no next-round seed point can be selected from the neighborhood pixels; the seed points selected in each round are then composed into the current edge sequence, and the computer device proceeds to form the next edge sequence. The loop thus continues until no next-round seed point can be selected from the neighborhood pixels of the current-round seed point, at which point the selected seed points of each round are composed into the current edge sequence.
Illustratively, every selected seed point is a pixel that has not yet been added to any edge sequence (i.e., it is still in the untraversed state and has not been marked as traversed).
In this embodiment, after the current-round seed point of the current edge sequence is obtained, pixels are repeatedly selected from the remaining pixels that have not been added to any edge sequence as subsequent seed points, and the selected seed points are composed into the current edge sequence. This helps obtain a more accurate current edge sequence and thus improves the accuracy and efficiency of the subsequently determined positioning position of the target object in the image.
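The seed-point procedure above can be sketched as a region-growing loop. This is a minimal, simplified illustration — it grows in one direction only and assumes the example thresholds of 45 degrees and 30 — not the patent's exact implementation:

```python
# Simplified sketch of growing a single edge sequence by the seed-point
# procedure. Thresholds (45 degrees, 30) follow the text's example values.
import numpy as np

def grow_edge_sequence(strength, direction, traversed):
    """Grow one edge sequence; marks visited pixels in `traversed` in place."""
    h, w = strength.shape
    # Current-round seed: the untraversed pixel with the largest gradient strength.
    masked = np.where(traversed, -np.inf, strength)
    seed = np.unravel_index(np.argmax(masked), masked.shape)
    if masked[seed] == -np.inf:
        return []                       # nothing left to traverse
    sequence = [seed]
    traversed[seed] = True
    while True:
        y, x = seed
        # Untraversed candidates in the eight-neighborhood of the seed.
        cands = [(y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)
                 and 0 <= y + dy < h and 0 <= x + dx < w
                 and not traversed[y + dy, x + dx]]
        if not cands:
            break
        # Next-round seed to be verified: the strongest-gradient neighbour.
        nxt = max(cands, key=lambda p: strength[p])
        if (abs(direction[nxt] - direction[seed]) < 45.0
                and abs(strength[nxt] - strength[seed]) < 30.0):
            sequence.append(nxt)
            traversed[nxt] = True
            seed = nxt                  # the new current-round seed
        else:
            break                       # no valid next-round seed: sequence ends
    return sequence
```

Calling this function repeatedly until it returns an empty list yields the multi-segment extraction of the earlier embodiment; sequences of 10 pixels or fewer would then be discarded, per the text.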
In some embodiments, the method may further include the following step of forming a next segment of edge sequence included in the image to be recognized, specifically including:
selecting pixel points with the maximum gradient intensity from pixel points of the image to be recognized except for pixel points included in the current section of edge sequence as current round seed points of the next section of edge sequence included in the image to be recognized;
selecting a corresponding pixel point of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition from neighborhood pixel points of the current round of seed points of the next section of edge sequence as a next round of seed points of the next section of edge sequence;
taking the next-round seed point of the next edge sequence as its current-round seed point and returning to the previous step — selecting, from the neighborhood pixels of the current-round seed point, a pixel whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition as the next-round seed point — until no next-round seed point can be selected from the neighborhood pixels of the current-round seed point of the next edge sequence;
and forming the next section of edge sequence by each round of seed points of the selected next section of edge sequence.
Specifically, the computer device selects the pixel with the largest gradient strength from the pixels of the image to be recognized other than those already included in an edge sequence (i.e., from the untraversed pixels) as the current-round seed point of the next edge sequence. It then selects the pixel with the largest gradient strength among the neighborhood pixels of that seed point as the next-round seed point to be verified, and checks whether the absolute difference in gradient direction between the candidate and the current-round seed point is less than A, and whether the absolute difference in gradient strength is less than B. If both conditions hold, the candidate qualifies as the next-round seed point of the next edge sequence; it is then taken as the new current-round seed point, and the selection is repeated on its neighborhood pixels. If either condition fails, no next-round seed point can be selected from the neighborhood pixels; the seed points selected in each round are composed into the next edge sequence, and the computer device proceeds to the following edge sequence. The loop continues until no next-round seed point can be selected, at which point the selected seed points of each round are composed into the next edge sequence.
For example, after obtaining all the edge sequences, the computer device examines the positional relationship between the start/stop points of each edge sequence and those of adjacent edge sequences (the start and stop points being the first and last pixels of a sequence). When a start or stop point of one edge sequence and a start or stop point of another satisfy the gradient direction difference condition and the gradient strength difference condition (e.g., a gradient strength difference less than 30 and a gradient direction difference less than 45 degrees), the two edge sequences are spliced into one.
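The splicing example above can be sketched as an endpoint check over two sequences; the ordering of the joined sequence is left out of this illustration, and the function names are assumptions:

```python
# Hedged sketch of the splicing step: two edge sequences are joined when a
# start/stop point of one and a start/stop point of the other satisfy both
# gradient difference conditions (strength difference < 30, direction
# difference < 45 degrees, per the example values in the text).
import numpy as np

def can_splice(seq_a, seq_b, strength, direction):
    """True if some endpoint pair of the two sequences satisfies both conditions."""
    for p in (seq_a[0], seq_a[-1]):
        for q in (seq_b[0], seq_b[-1]):
            if (abs(strength[p] - strength[q]) < 30.0
                    and abs(direction[p] - direction[q]) < 45.0):
                return True
    return False

def splice(seq_a, seq_b):
    """Join seq_b onto seq_a (endpoint-ordering heuristics omitted in this sketch)."""
    return seq_a + seq_b
```

A fuller implementation would also reverse one sequence when needed so that the matching endpoints become adjacent in the spliced result.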
In this embodiment, after the current-round seed point of the next edge sequence is obtained, pixels are repeatedly selected from the remaining pixels that have not been added to any edge sequence as subsequent seed points, and the selected seed points are composed into the next edge sequence. This helps obtain a more accurate next edge sequence and thus improves the accuracy and efficiency of the subsequently determined positioning position of the target object in the image.
In some embodiments, the determining, according to one or more edge sequences, target angle information corresponding to the target object in step S103 specifically includes:
performing straight line fitting processing on one or more sections of edge sequences to obtain a plurality of fitting straight lines corresponding to one or more sections of edge sequences;
combining the fitting straight lines to obtain a plurality of candidate angle information corresponding to the target object;
and determining target angle information from the plurality of candidate angle information according to the straight line fitting error and the straight line length information corresponding to each candidate angle information.
In the present embodiment, as shown in fig. 3, the straight line fitting process may be a process of fitting a straight line by the least square method; a fitted straight line is the straight line obtained after fitting the pixel points in an edge sequence; the candidate angle information may be composed of two straight lines and the intersection of the two straight lines; the straight line fitting error may be the fitting error corresponding to a fitted straight line obtained by performing straight line fitting processing on the pixel points; and the straight line length information may be the length of the straight line.
Specifically, the computer device performs straight line fitting processing on the edge sequences to obtain a plurality of fitted straight lines corresponding to the edge sequences. The plurality of fitted straight lines can correspondingly form angle information, where each piece of angle information is formed by combining two straight lines that intersect at an intersection point, yielding candidate angle information corresponding to the target object. The straight line fitting errors and the straight line length information corresponding to the two straight lines in each piece of candidate angle information are then used as a judgment score (a small fitting error and a large line length give a high score), and the candidate angle information with the highest score is output as the target angle information.
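The scoring step can be sketched as follows. The application only states that a small fitting error and a large line length score high; the concrete formula below (length divided by one plus error, summed over both lines) is an illustrative assumption, not the claimed scoring rule.

```python
def score_candidate(line1, line2):
    """Score a candidate corner formed by two fitted lines, each given
    as an (error, length) pair.  Smaller fitting error and larger line
    length yield a higher score; the formula is an assumed example."""
    (err1, len1), (err2, len2) = line1, line2
    return len1 / (1.0 + err1) + len2 / (1.0 + err2)

def pick_target_angle(candidates):
    """Return the candidate (pair of lines) with the highest score as
    the target angle information."""
    return max(candidates, key=lambda c: score_candidate(*c))
```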
Illustratively, as shown in fig. 3, for each edge sequence the computer device selects the leftmost end point of the edge sequence as an initial seed point. It takes C points (C may be 5) in the left direction of the seed point (the left direction may be understood as the pixel points added to the edge sequence earlier), adds them to a left straight line point sequence, performs least square fitting of a straight line, and judges whether the fitting error is smaller than E (E may be 5); if so, it repeatedly takes C more points in the left direction and adds them to the left straight line point sequence until the fitting error is not smaller than E. Similarly, it takes D points (D may be 5) in the right direction of the seed point (the right direction may be understood as the pixel points added to the edge sequence later), adds them to a right straight line point sequence, performs least square fitting of a straight line, and judges whether the fitting error is smaller than E; if so, it repeatedly takes D more points in the right direction and adds them to the right straight line point sequence until the fitting error is not smaller than E. It then judges whether the included angle between the two straight lines falls within an angle interval (for example, larger than 60 degrees and smaller than 120 degrees); if so, a data structure formed by the two straight lines is added to the candidate angle information; if not, the seed point position is moved one bit in the right direction. The computer device then judges whether the seed point position coincides with the rightmost end (that is, whether the last pixel point of the edge sequence has been reached); if not, the above steps are repeated starting from taking C points in the left direction of the seed point. If the seed point position coincides with the rightmost end, the candidate angle information is screened: the fitting errors of the two straight lines and the lengths of the two straight lines are taken as judgment scores, and the candidate angle information with the highest score is output as the target angle information.
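The least square fitting and the included-angle check used in this example can be sketched as follows. This is an illustrative sketch that fits y = a·x + b and therefore assumes the point sequence is not vertical; a practical implementation would handle that case separately (for example by total least squares).

```python
import math

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b over (x, y) points.
    Returns slope, intercept and the RMS residual used as the fitting
    error (assumes the points are not all at the same x)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    err = math.sqrt(sum((a * x + b - y) ** 2 for x, y in points) / n)
    return a, b, err

def included_angle_deg(a1, a2):
    """Included angle (0..90 degrees) between two lines given their
    slopes; used for the angle-interval check, e.g. 60 < angle < 120
    after accounting for orientation."""
    ang = abs(math.degrees(math.atan(a1) - math.atan(a2)))
    return min(ang, 180.0 - ang)
```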
According to the technical scheme of the embodiment, the candidate angle information is obtained through the edge sequence, and then the target angle information is determined from the candidate angle information, so that the accuracy of determining the positioning position of the target object in the image is improved.
In some embodiments, the method may further identify one or more edge sequences from the processed image to be identified by the following steps, specifically including:
carrying out noise elimination processing and gray mapping processing on an image to be recognized to obtain a processed image to be recognized;
and identifying one or more sections of edge sequences from the processed image to be identified according to the gradient information of each pixel point in the processed image to be identified.
In the present embodiment, the noise removal processing may be smoothing filter processing.
Specifically, after acquiring the image to be recognized, the computer device performs noise elimination processing and gray mapping processing on the image to be recognized to obtain a processed image to be recognized, and recognizes an edge sequence from the processed image to be recognized according to gradient information of each pixel point in the processed image to be recognized.
Exemplarily, as shown in fig. 2, after acquiring an image to be recognized, the computer device performs noise elimination and gray mapping on the image to obtain a processed image to be recognized. It then filters out pixel points whose gradient strength is smaller than an F value (the F value may be 30), and sorts the remaining pixel points by gradient strength (so that the pixel point with the maximum gradient strength among the untraversed pixel points can subsequently be determined quickly). The strongest untraversed gradient point is selected as a seed point, and the device judges whether the full-image gradients have been traversed; if not, the seed point is added to an edge sequence, eight-neighborhood region growing is performed based on the gradient direction and gradient strength of the seed point, each traversed point is marked as traversed, the eight neighborhood gradient strengths are sorted, and the strongest gradient point is selected as the next seed point, so that the edge sequences are obtained step by step. Finally, once the full-image gradients have been traversed, an edge sequence group (containing all edge sequences) is output.
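The gradient filtering and sorting steps can be sketched as follows. This is a minimal illustration using central differences; the actual gradient operator, the F threshold of 30, and the angle convention are assumptions consistent with the example above, not the claimed implementation.

```python
import math

def gradient_map(img, f_threshold=30.0):
    """Central-difference gradients of a grayscale image given as a list
    of rows.  Pixels whose gradient strength is below the F threshold
    (30 in the example) are discarded; the rest are returned as
    ((y, x), direction_deg, magnitude) tuples sorted by strength, so the
    strongest untraversed point can be picked as the next seed."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag = math.hypot(gx, gy)
            if mag >= f_threshold:
                direction = math.degrees(math.atan2(gy, gx)) % 360.0
                out.append(((y, x), direction, mag))
    out.sort(key=lambda t: -t[2])     # strongest gradient first
    return out
```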
According to the technical scheme of this embodiment, noise elimination processing and gray mapping processing are performed on the image to be recognized, which alleviates the noise and uneven gray levels introduced during image acquisition, so that a more accurate edge sequence is obtained and the accuracy of subsequently determining the positioning position of the target object in the image is improved.
In some embodiments, the determining, according to the preset morphological feature and the target angle information of the target object in step S104, the positioning position of the target object in the image to be recognized specifically includes:
determining a positioning frame meeting the preset morphological characteristics of the target object according to the target angle information, and taking the positioning frame meeting the preset morphological characteristics of the target object as the positioning frame of the target object;
and determining the positioning position of the target object in the image to be recognized according to the positioning frame of the target object.
In this embodiment, the positioning frame may be a parallelogram, a square, or a rectangle.
Specifically, the computer device determines a positioning frame meeting the preset morphological characteristics of the target object according to the target angle information, uses the positioning frame meeting the preset morphological characteristics of the target object as the positioning frame of the target object, and determines the positioning position of the target object in the image to be recognized according to the positioning frame of the target object.
Illustratively, as shown in fig. 4 and 5, according to the preset morphological feature of the target object, the computer device calculates the symmetric point of the corner vertex of the target angle information (i.e., the intersection of the two straight lines, such as the corner point in fig. 4 and 5) with respect to the straight line connecting the two end points (e.g., vertex 1 and vertex 2 in fig. 4 and 5), and completes the resulting angle information into a parallelogram serving as the positioning frame of the target object; the positioning position of the target object in the image to be recognized is then determined according to this positioning frame. The hollow dots shown in fig. 4 and 5 correspond to pixel points in an edge sequence, and straight line fitting processing on these pixel points yields the two arrowed lines shown in fig. 5 (the two arrowed lines may belong to the same edge sequence).
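The parallelogram completion can be sketched as follows. Note that reflecting the corner point through the midpoint of the segment joining vertex 1 and vertex 2 (i.e., v1 + v2 − corner) is what yields a parallelogram; reading the "symmetric point" above in that sense is an interpretive assumption.

```python
def complete_parallelogram(corner, v1, v2):
    """Given the corner point (intersection of the two fitted lines) and
    the two far end points of those lines, return the fourth vertex that
    completes the parallelogram used as the positioning frame:
    the point-reflection of the corner through the midpoint of v1-v2."""
    return (v1[0] + v2[0] - corner[0], v1[1] + v2[1] - corner[1])
```

For example, a right-angle corner at the origin with end points (4, 0) and (0, 3) is completed to the rectangle vertex (4, 3).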
According to the technical scheme of the embodiment, the positioning position of the target object in the image to be identified is determined through the positioning frame, so that the accuracy and the efficiency of determining the positioning position of the target object in the image are improved.
In some embodiments, the method may further include obtaining object information of the target object by the following steps, specifically including:
decoding the target object according to the positioning position to obtain decoding information of the target object;
and identifying the decoding information to obtain the object information of the target object.
In this embodiment, the object information of the target object may be information included in the target object, for example, information included in a two-dimensional code.
Specifically, after determining the positioning position of the target object in the image to be recognized according to the preset morphological feature and the target angle information of the target object, the computer device performs decoding processing on the target object according to the positioning position to obtain decoding information of the target object, and recognizes the decoding information to obtain object information of the target object.
Illustratively, the computer device performs a projective transformation on the image based on the positioning result (positioning position), performs sampling binarization on the projectively transformed thumbnail, then decodes the target object based on the decoding definition of the target object, translates the decoded output as ASCII (American Standard Code for Information Interchange) codes, and outputs the result (the object information of the target object) to the front end. For example, as shown in fig. 6, the computer device first performs image acquisition, then performs angle search and matching positioning, and finally performs target object identification and result output.
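The sampling binarization and ASCII translation steps can be sketched as follows. This is an illustrative sketch only: the binarization threshold of 128 and the restriction to printable characters are assumptions, and the projective transformation and the code-specific decoding are omitted.

```python
def sample_binarize(patch, threshold=128):
    """Sampling binarization of the rectified thumbnail: each sampled
    grey value becomes a 0/1 module (threshold 128 is an assumed value,
    not one stated in this application)."""
    return [[1 if v >= threshold else 0 for v in row] for row in patch]

def bytes_to_ascii(codes):
    """Translate decoded byte values into ASCII text, keeping only
    printable characters."""
    return "".join(chr(c) for c in codes if 32 <= c < 127)
```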
According to the technical scheme of the embodiment, the target information of the target object is obtained by decoding and identifying the target object, so that the information of the target object is read after the target object is positioned, and the efficiency of identifying/scanning the target object is improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts related to the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides an image recognition device. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so the specific limitations in the embodiment of the image recognition device provided below can be referred to the limitations of the image recognition method in the above, and are not described again here.
In some embodiments, as shown in fig. 7, an image recognition apparatus 700 is provided, the apparatus 700 may include:
an obtaining module 701, configured to obtain an image to be identified; the image to be recognized comprises a target object;
the identification module 702 is configured to identify one or more edge sequences from the image to be identified according to gradient information of each pixel point in the image to be identified; the edge sequence is composed of corresponding pixel points of which the gradient information meets a preset gradient difference condition;
a determining module 703, configured to determine target angle information corresponding to a target object according to one or more edge sequences;
and the positioning module 704 is configured to determine a positioning position of the target object in the image to be recognized according to the preset morphological feature and the target angle information of the target object.
In some embodiments, the gradient information includes a gradient direction and a gradient strength; the preset gradient difference condition comprises a gradient direction difference condition and a gradient strength difference condition; in terms of identifying one or more edge sequences from the image to be identified according to the gradient information of each pixel point in the image to be identified, the identifying module 702 is specifically configured to:
and according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, forming one or more edge sequences included in the image to be recognized by corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition.
In some embodiments, the number of edge sequences is multiple segments; in terms of forming one or more edge sequences included in the image to be recognized by using pixel points corresponding to the gradient direction and the gradient strength satisfying the gradient direction difference condition according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, the apparatus 700 further includes: the edge sequence composition module is specifically used for composing the current segment edge sequence included by the image to be recognized according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, wherein the gradient direction meets the gradient direction difference condition, and the gradient strength meets the corresponding pixel points of the gradient strength difference condition; according to the gradient direction and the gradient strength of pixel points in the image to be recognized except for the pixel points included in the current section of edge sequence, corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition are combined to form the next section of edge sequence included in the image to be recognized; and taking the next section of edge sequence as the current section of edge sequence, skipping to corresponding pixel points of which the gradient directions meet the gradient direction difference condition and the gradient strength meets the gradient strength difference condition according to the gradient direction and the gradient strength of the pixel points in the image to be recognized except the pixel points included by the current section of edge sequence, and forming the next section of edge sequence included by the image to be recognized until the next section of edge sequence is the last section of edge sequence 
included by the image to be recognized.
In some embodiments, in terms of forming a current segment edge sequence included in the image to be recognized by using corresponding pixel points whose gradient directions satisfy the gradient direction difference condition and whose gradient strengths satisfy the gradient strength difference condition according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, the apparatus 700 further includes: the current segment edge sequence composition module is specifically used for selecting a pixel point with the maximum gradient intensity from all pixel points in the image to be recognized as a current round seed point of the current segment edge sequence included in the image to be recognized; selecting a corresponding pixel point of which the gradient direction meets a gradient direction difference condition and the gradient strength meets a gradient strength difference condition from neighborhood pixel points of the current round of seed points of the current section of the edge sequence as a next round of seed points of the current section of the edge sequence; taking the next round of seed points of the current segment edge sequence as the current round of seed points of the current segment edge sequence, jumping to the neighborhood pixel points of the current round of seed points of the current segment edge sequence, selecting the corresponding pixel points of which the gradient directions meet the gradient direction difference condition and the gradient strength meets the gradient strength difference condition as the next round of seed points of the current segment edge sequence, and till the next round of seed points of the current segment edge sequence cannot be selected from the neighborhood pixel points of the current round of seed points of the current segment edge sequence; and forming the current segment edge sequence by the selected seed points of each round of the current segment edge sequence.
In some embodiments, in terms of determining target angle information corresponding to a target object according to one or more edge sequences, the determining module 703 is specifically configured to perform straight line fitting processing on one or more edge sequences to obtain multiple fitted straight lines corresponding to one or more edge sequences; combining the fitting straight lines to obtain a plurality of candidate angle information corresponding to the target object; and determining target angle information from the plurality of candidate angle information according to the straight line fitting error and the straight line length information corresponding to each candidate angle information.
In some embodiments, the apparatus 700 further comprises: the device comprises a to-be-identified image obtaining module, a processing module and a processing module, wherein the to-be-identified image obtaining module is specifically used for carrying out noise elimination processing and gray mapping processing on the to-be-identified image to obtain a processed to-be-identified image; the identifying module 702 is further configured to identify one or more edge sequences from the processed image to be identified according to the gradient information of each pixel point in the processed image to be identified.
In some embodiments, in terms of determining the location position of the target object in the image to be recognized according to the preset morphological feature of the target object and the target angle information, the location module 704 is specifically configured to determine a location frame satisfying the preset morphological feature of the target object according to the target angle information, and use the location frame satisfying the preset morphological feature of the target object as the location frame of the target object; and determining the positioning position of the target object in the image to be recognized according to the positioning frame of the target object.
In some embodiments, the apparatus 700 further comprises: the object information obtaining module is specifically used for decoding the target object according to the positioning position to obtain the decoding information of the target object; and identifying the decoding information to obtain the object information of the target object.
The modules in the image recognition device can be wholly or partially realized by software, hardware and a combination thereof. The modules may be embedded in hardware or independent of a processor in the computer device, or may be stored in a memory in the computer device in software, so that the processor calls and executes operations corresponding to the modules.
In some embodiments, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer equipment comprises a processor, a memory, an Input/Output (I/O) interface, a communication interface, a display screen and an Input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the input device and the display screen are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The input and output interface of the computer device is used for exchanging information between the processor and the external device. The computer program is executed by a processor to implement the steps in the image recognition method described above. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution of the present application is applied; a specific computer device may include more or fewer components than those shown in the figure, or combine certain components, or have a different arrangement of components.
In some embodiments, a computer device is further provided, the computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
In some embodiments, as illustrated in fig. 9, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the above-mentioned method embodiments.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps in the above-described method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, etc., without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to fall within the scope of this specification.
The above examples only express several embodiments of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. An image recognition method, comprising:
acquiring an image to be identified; the image to be recognized comprises a target object;
identifying one or more edge sequences from the image to be identified according to the gradient information of each pixel point in the image to be identified; the edge sequence is composed of corresponding pixel points of which the gradient information meets a preset gradient difference condition;
determining target angle information corresponding to the target object according to the one or more edge sequences;
and determining the positioning position of the target object in the image to be recognized according to the preset morphological characteristics of the target object and the target angle information.
2. The method of claim 1, wherein the gradient information comprises a gradient direction and a gradient strength; the preset gradient difference condition comprises a gradient direction difference condition and a gradient strength difference condition;
the identifying one or more segments of edge sequences from the image to be identified according to the gradient information of each pixel point in the image to be identified comprises:
and according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, forming one or more edge sequences included in the image to be recognized by corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition.
3. The method of claim 2, wherein the number of edge sequences is a number of segments;
the method for forming one or more edge sequences included in the image to be recognized by using the corresponding pixel points, in which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition, according to the gradient direction and the gradient strength of each pixel point in the image to be recognized includes:
according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition are combined to form a current segment edge sequence included in the image to be recognized;
according to the gradient direction and the gradient strength of the pixel points in the image to be recognized except the pixel points included in the current segment of edge sequence, forming the next segment of edge sequence included in the image to be recognized by the corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition;
and taking the next section of edge sequence as the current section of edge sequence, skipping to the corresponding pixel points of which the gradient direction meets the gradient direction difference condition and the gradient strength meets the gradient strength difference condition according to the gradient direction and the gradient strength of the pixel points in the image to be recognized except the pixel points included in the current section of edge sequence, and forming the next section of edge sequence included in the image to be recognized until the next section of edge sequence is the last section of edge sequence included in the image to be recognized.
4. The method according to claim 3, wherein the forming of the current segment of edge sequence included in the image to be recognized from the corresponding pixel points whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition, according to the gradient direction and the gradient strength of each pixel point in the image to be recognized, comprises:
selecting the pixel point with the maximum gradient strength among the pixel points in the image to be recognized as a current-round seed point of the current segment of edge sequence included in the image to be recognized;
selecting, from neighborhood pixel points of the current-round seed point of the current segment of edge sequence, a corresponding pixel point whose gradient direction satisfies the gradient direction difference condition and whose gradient strength satisfies the gradient strength difference condition as a next-round seed point of the current segment of edge sequence;
taking the next-round seed point of the current segment of edge sequence as the current-round seed point and returning to the step of selecting the next-round seed point from the neighborhood pixel points of the current-round seed point, until no next-round seed point of the current segment of edge sequence can be selected from the neighborhood pixel points of the current-round seed point;
and forming the current segment of edge sequence from the seed points selected in each round.
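The seed-point growing described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's implementation: the thresholds `MAX_DIR_DIFF` and `MAX_STR_DIFF`, the 8-neighborhood, and the tie-break of picking the strongest qualifying neighbor are all choices the claim leaves open.

```python
import math

MAX_DIR_DIFF = math.radians(15)   # assumed gradient-direction difference condition
MAX_STR_DIFF = 2.0                # assumed gradient-strength difference condition

def grow_current_sequence(direction, strength):
    """direction/strength: dicts mapping (row, col) -> gradient value."""
    seed = max(strength, key=strength.get)           # first-round seed point
    sequence = [seed]
    visited = {seed}
    while True:
        r, c = sequence[-1]                          # current-round seed point
        candidates = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                q = (r + dr, c + dc)
                if q == (r, c) or q in visited or q not in strength:
                    continue
                if (abs(direction[q] - direction[(r, c)]) <= MAX_DIR_DIFF
                        and abs(strength[q] - strength[(r, c)]) <= MAX_STR_DIFF):
                    candidates.append(q)
        if not candidates:            # no next-round seed point can be selected
            break
        nxt = max(candidates, key=strength.get)      # next-round seed point
        visited.add(nxt)
        sequence.append(nxt)
    return sequence

# A short edge whose direction drifts slowly, plus one outlier pixel at (1, 1).
direction = {(0, 0): 0.0, (0, 1): 0.1, (0, 2): 0.2, (1, 1): 1.5}
strength = {(0, 0): 9.0, (0, 1): 8.5, (0, 2): 8.0, (1, 1): 7.0}
seq = grow_current_sequence(direction, strength)
print(seq)  # [(0, 0), (0, 1), (0, 2)]: the outlier is rejected
```

Growing stops exactly as the claim requires: when no neighborhood pixel of the current-round seed satisfies both difference conditions.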
5. The method according to claim 1, wherein the determining of the target angle information corresponding to the target object according to the one or more edge sequences comprises:
performing straight-line fitting processing on the one or more edge sequences to obtain a plurality of fitted straight lines corresponding to the one or more edge sequences;
combining the fitted straight lines to obtain a plurality of pieces of candidate angle information corresponding to the target object;
and determining the target angle information from the candidate angle information according to a straight-line fitting error and straight-line length information corresponding to each piece of candidate angle information.
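A minimal sketch of the fit-then-score idea in claim 5, assuming a least-squares line fit per edge sequence and an error-per-length score to pick the winning angle. The scoring formula and the x-extent used as "length" are illustrative assumptions; the patent fixes neither.

```python
import math

def fit_line(points):
    """Least-squares fit y = a*x + b; return (angle_radians, rms_error, length)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    err = math.sqrt(sum((y - (a * x + b)) ** 2 for x, y in points) / n)
    xs = [x for x, _ in points]
    return math.atan(a), err, max(xs) - min(xs)

def best_angle(sequences):
    """Pick the candidate angle with the lowest fitting-error-per-length score."""
    fits = [fit_line(seq) for seq in sequences]
    return min(fits, key=lambda f: (f[1] + 1e-6) / (f[2] + 1e-6))[0]

seqs = [
    [(0, 0), (1, 1), (2, 2), (3, 3)],   # long, exact 45-degree line
    [(0, 0), (1, 0.4), (2, 0.1)],       # short, noisy line
]
angle = best_angle(seqs)
print(round(math.degrees(angle)))  # 45: the long clean fit wins
```

The intent matches the claim: a long line fitted with small error outranks a short or noisy one when selecting the target angle.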
6. The method according to claim 1, further comprising, after the acquiring of the image to be recognized:
performing noise elimination processing and gray mapping processing on the image to be recognized to obtain a processed image to be recognized;
wherein the identifying of the one or more edge sequences from the image to be recognized according to the gradient information of each pixel point in the image to be recognized comprises:
identifying the one or more edge sequences from the processed image to be recognized according to the gradient information of each pixel point in the processed image to be recognized.
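One common way to realize the two preprocessing steps of claim 6 is a median filter for noise elimination and a min-max stretch for gray mapping. The 3x3 kernel and the [0, 255] target range below are assumptions — the claim names the operations but not their parameters.

```python
def median_filter_3x3(img):
    """img: list of rows of ints. Border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(img[rr][cc]
                            for rr in (r - 1, r, r + 1)
                            for cc in (c - 1, c, c + 1))
            out[r][c] = window[4]          # median of the 9 window values
    return out

def gray_map(img, lo=0, hi=255):
    """Linearly stretch pixel values onto the [lo, hi] gray range."""
    flat = [v for row in img for v in row]
    mn, mx = min(flat), max(flat)
    span = (mx - mn) or 1                  # avoid division by zero on flat images
    return [[lo + (v - mn) * (hi - lo) // span for v in row] for row in img]

noisy = [[10, 10, 10],
         [10, 200, 10],    # single-pixel impulse noise
         [10, 10, 10]]
clean = median_filter_3x3(noisy)
print(clean[1][1])  # 10: the impulse is removed before gradients are computed
```

Removing impulse noise before the gradient pass keeps spurious strong gradients from seeding false edge sequences.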
7. The method according to claim 1, wherein the determining of the positioning position of the target object in the image to be recognized according to the preset morphological features of the target object and the target angle information comprises:
determining, according to the target angle information, a positioning frame that satisfies the preset morphological features of the target object, and taking that positioning frame as the positioning frame of the target object;
and determining the positioning position of the target object in the image to be recognized according to the positioning frame of the target object.
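As an illustration of how a positioning frame could be built from the target angle and preset morphological features, the sketch below assumes the features are a rectangle of known width and height and that a center point has already been detected; the center and dimensions are hypothetical inputs, not derived by the claim.

```python
import math

def positioning_frame(center, width, height, angle):
    """Four corners of a width x height rectangle rotated by `angle` (radians)."""
    cx, cy = center
    ca, sa = math.cos(angle), math.sin(angle)
    corners = []
    for dx, dy in ((-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2, height / 2), (-width / 2, height / 2)):
        # Rotate the corner offset, then translate to the detected center.
        corners.append((cx + dx * ca - dy * sa, cy + dx * sa + dy * ca))
    return corners

frame = positioning_frame(center=(50, 50), width=20, height=10, angle=0.0)
print(frame[0])  # (40.0, 45.0): first corner at angle 0
```

The frame's corner coordinates are then what the method reports as the positioning position of the target object in the image.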
8. The method according to claim 1, further comprising, after the determining of the positioning position of the target object in the image to be recognized according to the preset morphological features of the target object and the target angle information:
decoding the target object according to the positioning position to obtain decoded information of the target object;
and recognizing the decoded information to obtain object information of the target object.
9. An image recognition apparatus, comprising:
an acquisition module configured to acquire an image to be recognized, wherein the image to be recognized includes a target object;
a recognition module configured to identify one or more edge sequences from the image to be recognized according to gradient information of each pixel point in the image to be recognized, wherein each edge sequence is composed of corresponding pixel points whose gradient information satisfies a preset gradient difference condition;
a determining module configured to determine target angle information corresponding to the target object according to the one or more edge sequences;
and a positioning module configured to determine a positioning position of the target object in the image to be recognized according to preset morphological features of the target object and the target angle information.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202211355178.9A 2022-11-01 2022-11-01 Image recognition method and device, computer equipment and computer readable storage medium Pending CN115908841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211355178.9A CN115908841A (en) 2022-11-01 2022-11-01 Image recognition method and device, computer equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN115908841A true CN115908841A (en) 2023-04-04

Family

ID=86475455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211355178.9A Pending CN115908841A (en) 2022-11-01 2022-11-01 Image recognition method and device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115908841A (en)

Similar Documents

Publication Publication Date Title
CN110738207B (en) Character detection method for fusing character area edge information in character image
EP3258604A1 (en) System and method for compressing graphs via cliques
CN115759148B (en) Image processing method, device, computer equipment and computer readable storage medium
CN113223078B (en) Mark point matching method, device, computer equipment and storage medium
CN115239644B (en) Concrete defect identification method, device, computer equipment and storage medium
CN112580382B (en) Two-dimensional code positioning method based on target detection
CN110807342B (en) Bar code positioning method, bar code positioning device, computer equipment and storage medium
US9734550B1 (en) Methods and apparatus for efficiently determining run lengths and identifying patterns
CN115880362B (en) Code region positioning method, device, computer equipment and computer readable storage medium
CN115908841A (en) Image recognition method and device, computer equipment and computer readable storage medium
CN115908363A (en) Tumor cell counting method, device, equipment and storage medium
CN113705270A (en) Method, device, equipment and storage medium for identifying two-dimensional code positioning code area
CN110909097B (en) Polygonal electronic fence generation method and device, computer equipment and storage medium
CN111639506A (en) Method and device for positioning bar code in image and code scanning equipment
CN106777280B (en) Data processing method and device based on super large data set
CN107766863B (en) Image characterization method and server
CN115759145B (en) Bar code identification method, device, storage medium and computer equipment
CN112927149B (en) Method and device for enhancing spatial resolution of hyperspectral image and electronic equipment
CN114564978B (en) Method and device for decoding two-dimensional code, electronic equipment and storage medium
CN116527908B (en) Motion field estimation method, motion field estimation device, computer device and storage medium
CN115600620B (en) Code scanning method, device, electronic equipment and storage medium
CN115577728B (en) One-dimensional code positioning method, device, computer equipment and storage medium
CN111161351B (en) Target component coordinate acquisition method and system
KR101038198B1 (en) An Information Dot Pattern
CN117710655A (en) Connected domain laser spot detection method based on FPGA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination