CN114972157A - Edge defect detection method, device and storage medium - Google Patents

Edge defect detection method, device and storage medium

Info

Publication number
CN114972157A
CN114972157A
Authority
CN
China
Prior art keywords
region
area
label
determining
quadrilateral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210021105.XA
Other languages
Chinese (zh)
Inventor
许鹏
董一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210021105.XA priority Critical patent/CN114972157A/en
Publication of CN114972157A publication Critical patent/CN114972157A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an edge defect detection method, an edge defect detection device, and a storage medium. The method comprises: determining, from an image to be detected, a first region where a label is located, the label being rectangular in shape; performing quadrilateral region extraction on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region; and determining an edge defect detection result of the label according to the first region and the second region. Embodiments of the application detect edge defects on the label by self-matching; the approach is applicable to a wide range of scenarios, is highly robust, requires neither modeling nor registration, and is fast.

Description

Edge defect detection method, device and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a method and an apparatus for detecting edge defects, and a storage medium.
Background
Labels are widely used in manufacturing. They can be adhered to a product's components, exterior surfaces, outer packaging, and the like, and describe information such as the name, model, code/bar code, function, place of production, and production date of the product or its components. For example, labels are usually attached to the exterior surfaces of consumer products such as mobile phones, computers, televisions, and refrigerators. An abnormally adhered label spoils the product's appearance, harms the manufacturer's brand, and may even lead to customer complaints and product returns.
In the related art, edge defect detection is generally performed on a label attached to a product or a component thereof by means of template matching. However, this method can handle neither labels of different sizes nor labels that appear in the image with uneven brightness or size distortion caused by changes in the imaging position. Furthermore, since the matching algorithm usually takes a long time to execute, this approach is also difficult to adapt to fast-paced production lines.
Disclosure of Invention
In view of the above, an edge defect detecting method, an edge defect detecting device and a storage medium are provided.
In a first aspect, an embodiment of the present application provides an edge defect detection method.
According to a first aspect, in a first possible implementation manner of the edge defect detection method, the method includes: determining, from an image to be detected, a first region where a label is located, wherein the label is rectangular in shape; performing quadrilateral region extraction on the first region to obtain a second region, wherein the second region is the quadrilateral region with the highest degree of fit to the first region; and determining an edge defect detection result of the label according to the first region and the second region.
According to embodiments of the application, a first region where a label is located can be determined from an image to be detected, the label being rectangular in shape; quadrilateral region extraction is performed on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region; and an edge defect detection result for the label is then determined from the first region and the second region. Edge defects on the label are thus detected by self-matching, which imposes no restrictions on the label's size, position, or rotation, or on lighting changes when the image to be detected is captured. The approach is therefore applicable to a wide range of scenarios and highly robust; it requires neither modeling nor registration, and its processing speed is high enough for fast-paced production lines.
According to the first aspect, in a first possible implementation manner of the edge defect detection method, the performing quadrilateral region extraction on the first region to obtain a second region includes: extracting peripheral contour points of the first region; performing line fitting on the peripheral contour points to obtain four straight lines; and determining the closed region enclosed by the four straight lines as the second region.
According to embodiments of the application, peripheral contour points of the first region are extracted, line fitting is performed on the contour points to obtain four straight lines, and the closed region enclosed by the four lines is determined as the second region. This approach is simple and fast: effective line fitting is possible as long as most of the first region's edges are correctly segmented, which improves the effectiveness of quadrilateral region extraction and, in turn, the robustness of the edge defect detection method.
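The line-fitting scheme above can be illustrated with a short sketch. This is a hypothetical minimal example, not the patent's implementation: it assumes the peripheral contour points have already been grouped by edge, fits each group with a total-least-squares line, and intersects adjacent lines to obtain a vertex of the second region.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line through 2-D points: returns (a, b, c) with
    a*x + b*y = c and a^2 + b^2 = 1, using the smallest right singular
    vector of the centered point cloud as the line normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]                     # unit normal of the fitted line
    return a, b, a * centroid[0] + b * centroid[1]

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c) with a*x + b*y = c."""
    A = np.array([[l1[0], l1[1]], [l2[0], l2[1]]])
    c = np.array([l1[2], l2[2]])
    return np.linalg.solve(A, c)

# toy contour-point groups: points on the top edge (y = 0) and on the
# left edge (x = 0) of a hypothetical label region
top = [(x, 0.0) for x in range(1, 10)]
left = [(0.0, y) for y in range(1, 10)]
corner = intersect(fit_line(top), fit_line(left))  # ≈ (0, 0): top-left vertex
```

Fitting all four edge groups and intersecting each adjacent pair yields the four corners of the quadrilateral that best fits the first region.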
According to the first aspect, in a second possible implementation manner of the edge defect detection method, the performing quadrilateral region extraction on the first region to obtain a second region includes: extracting four vertices of the first region; and determining the quadrilateral region formed by connecting the four vertices in sequence as the second region.
According to embodiments of the application, the four vertices of the first region are extracted, and the quadrilateral region formed by connecting the four vertices in sequence is determined as the second region. This effectively enlarges the detected defect region (especially in corner-missing and corner-tilting scenarios) and thereby improves the sensitivity of edge defect recognition.
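As a rough illustration of how the second region could be formed from four vertices (an assumption for illustration, not the patent's code), the sketch below rasterizes the convex quadrilateral obtained by connecting the vertices in order using a half-plane test against each directed edge:

```python
import numpy as np

def fill_convex_quad(vertices, shape):
    """Rasterize the convex quadrilateral formed by connecting four vertices
    in order (top-left, top-right, bottom-right, bottom-left), given as
    (row, col) pairs. With this winding, interior pixels give a non-positive
    signed cross product against every directed edge."""
    v = np.asarray(vertices, dtype=float)
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    inside = np.ones(shape, dtype=bool)
    for k in range(4):
        p, q = v[k], v[(k + 1) % 4]
        # signed cross product of edge direction (p -> q) with (pixel - p)
        cross = (q[0] - p[0]) * (cc - p[1]) - (q[1] - p[1]) * (rr - p[0])
        inside &= cross <= 0
    return inside

# four extracted vertices of a toy label region, in connection order
quad = [(2, 3), (2, 6), (5, 6), (5, 3)]
second_region = fill_convex_quad(quad, (8, 8))
print(int(second_region.sum()))  # 16 pixels: rows 2..5 x cols 3..6
```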
According to the second possible implementation manner of the first aspect, in a third possible implementation manner of the edge defect detection method, the extracting four vertices of the first region includes: determining, from among the pixel points in the first region, a first pixel point closest to the upper-left corner of the image to be detected; and determining the first pixel point as the upper-left vertex of the first region.
According to embodiments of the application, the first pixel point closest to the upper-left corner of the image to be detected can be determined from among the pixel points of the first region and taken as the upper-left vertex of the first region. The upper-left vertex can thus be extracted simply and quickly through a distance calculation, improving processing efficiency.
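A minimal sketch of this distance-based vertex extraction (a hypothetical binary mask stands in for the segmented first region; the helper name is illustrative):

```python
import numpy as np

def top_left_vertex(mask):
    """Return the region pixel (row, col) with the smallest Euclidean
    distance to the image's top-left corner (0, 0). The other three
    vertices are found the same way using the other image corners."""
    rows, cols = np.nonzero(mask)
    d2 = rows.astype(float) ** 2 + cols.astype(float) ** 2  # squared distance
    i = int(np.argmin(d2))
    return int(rows[i]), int(cols[i])

# hypothetical binary mask of the first region (the segmented label)
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 3:7] = 1               # label occupies rows 2..5, cols 3..6
print(top_left_vertex(mask))     # (2, 3)
```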
According to the second possible implementation manner of the first aspect, in a fourth possible implementation manner of the edge defect detection method, the extracting four vertices of the first region includes: performing binarization on the image to be detected according to the first region to obtain a first feature map; performing convolution on the first feature map with a preset first convolution kernel to obtain a second feature map; determining a second pixel point with the largest feature value in the second feature map; and determining the pixel point in the first region corresponding to the second pixel point as the upper-left vertex of the first region.
According to embodiments of the application, binarization can be performed on the image to be detected according to the first region to obtain a first feature map; the first feature map is convolved with a preset first convolution kernel to obtain a second feature map; the second pixel point with the largest feature value in the second feature map is determined; and the pixel point in the first region corresponding to the second pixel point is determined as the upper-left vertex of the first region. The upper-left vertex can thus be extracted quickly through convolution filtering, improving processing efficiency.
According to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the edge defect detection method, in the first feature map, the feature value corresponding to the first region is 1 and the feature value corresponding to the background region outside the first region is -1; the size of the first convolution kernel is 2 × 2, and the four parameters of the first convolution kernel are -1, -1, -1, and 3, respectively.
In the embodiment of the present application, the feature value corresponding to the first region in the first feature map is set to 1, the feature value corresponding to the background region other than the first region is set to-1, the size of the first convolution kernel is set to 2 × 2, and the four parameters of the first convolution kernel are set to-1, and 3, respectively, thereby improving the processing efficiency when extracting the vertex at the upper left corner of the first region.
According to the first aspect or one or more of the foregoing possible implementation manners of the first aspect, in a sixth possible implementation manner of the edge defect detection method, the determining an edge defect detection result of the label according to the first region and the second region includes: judging whether a non-coincident region exists between the position of the first region and the position of the second region; and if a non-coincident region exists, determining the non-coincident region as a defect region of the label.
According to embodiments of the application, it is judged whether a non-coincident region exists between the position of the first region and the position of the second region, and if so, the non-coincident region is determined to be the defect region of the label. This approach is simple and fast and improves the processing efficiency of edge defect detection.
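The non-coincident-region check reduces to a per-pixel XOR of the two region masks. A toy sketch with hypothetical masks (not the patent's code):

```python
import numpy as np

# first region: segmented label mask with a simulated corner-missing defect
first = np.zeros((8, 8), dtype=bool)
first[2:6, 3:7] = True
first[2, 6] = False               # the missing top-right corner pixel

# second region: the fitted quadrilateral (here, the intact rectangle)
second = np.zeros((8, 8), dtype=bool)
second[2:6, 3:7] = True

defect = first ^ second           # pixels where the two regions do not coincide
print(bool(defect.any()), np.argwhere(defect).tolist())  # True [[2, 6]]
```

If the XOR mask is empty, the label has no edge defect; otherwise its nonzero pixels form the defect region.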
In a second aspect, an embodiment of the present application provides an edge defect detection apparatus, including: a label region extraction module, configured to determine, from an image to be detected, a first region where a label is located, the label being rectangular in shape; a quadrilateral region extraction module, configured to perform quadrilateral region extraction on the first region to obtain a second region, the second region being the quadrilateral region with the highest degree of fit to the first region; and a defect determination module, configured to determine an edge defect detection result of the label according to the first region and the second region.
According to embodiments of the application, a first region where a label is located can be determined from an image to be detected, the label being rectangular in shape; quadrilateral region extraction is performed on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region; and an edge defect detection result for the label is then determined from the first region and the second region. Edge defects on the label are thus detected by self-matching, which imposes no restrictions on the label's size, position, or rotation, or on lighting changes when the image to be detected is captured. The approach is therefore applicable to a wide range of scenarios and highly robust; it requires neither modeling nor registration, and its processing speed is high enough for fast-paced production lines.
In a first possible implementation manner of the edge defect detection apparatus according to the second aspect, the quadrilateral region extraction module includes: a contour point extraction submodule, configured to extract peripheral contour points of the first region; a line fitting submodule, configured to perform line fitting on the peripheral contour points to obtain four straight lines; and a first determination submodule, configured to determine the closed region enclosed by the four straight lines as the second region.
According to embodiments of the application, peripheral contour points of the first region are extracted, line fitting is performed on the contour points to obtain four straight lines, and the closed region enclosed by the four lines is determined as the second region. This approach is simple and fast: effective line fitting is possible as long as most of the first region's edges are correctly segmented, which improves the effectiveness of quadrilateral region extraction and, in turn, the robustness of the edge defect detection method.
According to the second aspect, in a second possible implementation manner of the edge defect detection apparatus, the quadrilateral region extraction module includes: a vertex extraction submodule, configured to extract four vertices of the first region; and a second determination submodule, configured to determine the quadrilateral region formed by connecting the four vertices in sequence as the second region.
According to embodiments of the application, the four vertices of the first region are extracted, and the quadrilateral region formed by connecting the four vertices in sequence is determined as the second region. This effectively enlarges the detected defect region (especially in corner-missing and corner-tilting scenarios) and thereby improves the sensitivity of edge defect recognition.
In a third possible implementation manner of the edge defect detecting apparatus according to the second possible implementation manner of the second aspect, the vertex extracting sub-module is configured to: determining a first pixel point closest to the upper left corner of the image to be detected from all pixel points of the first area; and determining the first pixel point as the top left corner vertex of the first area.
According to embodiments of the application, the first pixel point closest to the upper-left corner of the image to be detected can be determined from among the pixel points of the first region and taken as the upper-left vertex of the first region. The upper-left vertex can thus be extracted simply and quickly through a distance calculation, improving processing efficiency.
In a fourth possible implementation manner of the edge defect detection apparatus according to the second possible implementation manner of the second aspect, the vertex extraction submodule is configured to: perform binarization on the image to be detected according to the first region to obtain a first feature map; perform convolution on the first feature map with a preset first convolution kernel to obtain a second feature map; determine a second pixel point with the largest feature value in the second feature map; and determine the pixel point in the first region corresponding to the second pixel point as the upper-left vertex of the first region.
According to embodiments of the application, binarization can be performed on the image to be detected according to the first region to obtain a first feature map; the first feature map is convolved with a preset first convolution kernel to obtain a second feature map; the second pixel point with the largest feature value in the second feature map is determined; and the pixel point in the first region corresponding to the second pixel point is determined as the upper-left vertex of the first region. The upper-left vertex can thus be extracted quickly through convolution filtering, improving processing efficiency.
According to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the edge defect detection apparatus, in the first feature map, the feature value corresponding to the first region is 1 and the feature value corresponding to the background region outside the first region is -1; the size of the first convolution kernel is 2 × 2, and the four parameters of the first convolution kernel are -1, -1, -1, and 3, respectively.
In embodiments of the present application, the feature value corresponding to the first region in the first feature map is set to 1, the feature value corresponding to the background region outside the first region is set to -1, the size of the first convolution kernel is set to 2 × 2, and the four parameters of the first convolution kernel are set to -1, -1, -1, and 3, respectively, so that processing efficiency when extracting the upper-left vertex of the first region can be improved.
In a sixth possible implementation manner of the edge defect detection apparatus according to the second aspect or one or more of the foregoing possible implementation manners of the second aspect, the defect determination module includes: a judging submodule, configured to judge whether a non-coincident region exists between the position of the first region and the position of the second region; and a defect region determination submodule, configured to determine the non-coincident region as the defect region of the label if such a non-coincident region exists.
According to embodiments of the application, it is judged whether a non-coincident region exists between the position of the first region and the position of the second region, and if so, the non-coincident region is determined to be the defect region of the label. This approach is simple and fast and improves the processing efficiency of edge defect detection.
In a third aspect, an embodiment of the present application provides an edge defect detecting apparatus, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the edge defect detection method of the first aspect or one or more of the many possible implementation manners of the first aspect when executing the instructions.
According to embodiments of the application, a first region where a label is located can be determined from an image to be detected, the label being rectangular in shape; quadrilateral region extraction is performed on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region; and an edge defect detection result for the label is then determined from the first region and the second region. Edge defects on the label are thus detected by self-matching, which imposes no restrictions on the label's size, position, or rotation, or on lighting changes when the image to be detected is captured. The approach is therefore applicable to a wide range of scenarios and highly robust; it requires neither modeling nor registration, and its processing speed is high enough for fast-paced production lines.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the edge defect detection method of the first aspect or one or more of the many possible implementations of the first aspect.
According to embodiments of the application, a first region where a label is located can be determined from an image to be detected, the label being rectangular in shape; quadrilateral region extraction is performed on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region; and an edge defect detection result for the label is then determined from the first region and the second region. Edge defects on the label are thus detected by self-matching, which imposes no restrictions on the label's size, position, or rotation, or on lighting changes when the image to be detected is captured. The approach is therefore applicable to a wide range of scenarios and highly robust; it requires neither modeling nor registration, and its processing speed is high enough for fast-paced production lines.
In a fifth aspect, embodiments of the present application provide a computer program product, which includes computer readable code or a non-transitory computer readable storage medium carrying computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes an edge defect detection method of one or more of the first aspect or the multiple possible implementations of the first aspect.
According to embodiments of the application, a first region where a label is located can be determined from an image to be detected, the label being rectangular in shape; quadrilateral region extraction is performed on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region; and an edge defect detection result for the label is then determined from the first region and the second region. Edge defects on the label are thus detected by self-matching, which imposes no restrictions on the label's size, position, or rotation, or on lighting changes when the image to be detected is captured. The approach is therefore applicable to a wide range of scenarios and highly robust; it requires neither modeling nor registration, and its processing speed is high enough for fast-paced production lines.
These and other aspects of the present application will be more readily apparent from the following description of the embodiment(s).
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the application and, together with the description, serve to explain the principles of the application.
FIG. 1a shows a schematic diagram of a normal tag.
Fig. 1b, 1c, 1d, 1e and 1f each show a schematic view of a label with an edge defect.
Fig. 2 is a schematic diagram illustrating an application scenario of an edge defect detection method according to an embodiment of the present application.
FIG. 3 shows a flow chart of an edge defect detection method according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of quadrilateral region extraction according to an embodiment of the present application.
Fig. 5 shows a schematic diagram of quadrilateral region extraction according to an embodiment of the present application.
FIG. 6 shows a schematic diagram of a first convolution kernel according to an embodiment of the present application.
FIG. 7 illustrates a schematic diagram of vertex extraction for a first region according to an embodiment of the present application.
Fig. 8 illustrates a schematic diagram of quadrilateral region extraction according to an embodiment of the present application.
Fig. 9 illustrates a schematic diagram of quadrilateral region extraction according to an embodiment of the present application.
FIG. 10 is a schematic diagram illustrating a processing procedure of an edge defect detection method according to an embodiment of the present application.
FIG. 11 shows a schematic diagram of a defective area of a label according to an embodiment of the present application.
FIG. 12 is a schematic diagram illustrating a processing procedure of an edge defect detection method according to an embodiment of the present application.
FIG. 13 shows a schematic diagram of a defective area of a label according to an embodiment of the present application.
FIG. 14 shows a block diagram of an edge defect detection apparatus according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
The edge position of the label may be defective when the label is applied to a product or its components. For example, FIG. 1a shows a schematic view of a normal label, as shown in FIG. 1a, where the edge position of the label 101 is defect free; fig. 1b, 1c, 1d, 1e and 1f each show a schematic view of a label with an edge defect.
As shown in fig. 1b, the upper edge of the label 102 has partial missing, and the defect of the label 102 can be regarded as edge missing; as shown in fig. 1c, the top right corner of the label 103 is missing, and the defect of the label 103 can be regarded as a corner missing; as shown in fig. 1d, there is distortion on both the upper side and the right side of the label 104, and the defect of the label 104 can be regarded as edge distortion; as shown in fig. 1e, the upper edge and the lower edge of the label 105 are both tilted downward, and the defect of the label 105 can be regarded as edge tilting; as shown in fig. 1f, the entire tag 106 is deformed, and the defect of the tag 106 can be regarded as the entire deformation.
In the related art, edge defect detection is generally performed on a label attached to a product or a component thereof by means of template matching. For example, modeling may be performed according to the normal label 101 in fig. 1a to obtain a corresponding template, and then comparing the label in the image to be detected with the template to obtain a detection result.
However, in practical applications, products of multiple different models or types may be produced simultaneously on one production line, so the size of the label may vary. A preset template (for example, one built for a label of a particular size) cannot match labels of other sizes, and edge defect detection therefore cannot be performed on labels of different sizes.
Moreover, matching the label in the image to be detected against the template usually relies on gray-level matching or contour matching, which is only suitable when the label in the image is merely translated, rotated, or scaled relative to a normal label. In practical applications, however, the positions of the light source and the camera are usually fixed while the label to be detected may shift or rotate, so the imaging position changes from shot to shot. The label in the captured image may then exhibit uneven brightness (for example, a brightness gradient), size distortion, and so on, in which case template matching fails. That is, edge defect detection cannot be performed on a label whose brightness is uneven or whose size is distorted because of a change in imaging position.
In addition, when edge defect detection is performed on the label by template matching, the execution time of the matching algorithm is usually long, making this approach difficult to apply to fast-paced production lines.
To solve this technical problem, the present application determines, from an image to be detected, a first region where the label is located (the label being rectangular in shape), performs quadrilateral region extraction on the first region to obtain a second region, namely the quadrilateral region with the highest degree of fit to the first region, and then determines the edge defect detection result of the label according to the first region and the second region.
Fig. 2 is a schematic diagram illustrating an application scenario of an edge defect detection method according to an embodiment of the present application. As shown in fig. 2, the edge defect detection method is applied to an edge defect detection system 200. The edge defect detection system 200 includes an image capture device 210 and a defect detection device 220.
The image capturing device 210 may capture an image of the label to be detected to obtain an image to be detected, and send the image to be detected to the defect detection device 220. The label to be detected is pasted on a product or a component thereof, and the shape of the label is rectangular. The image capture device 210 may be, for example, a camera or a video camera. In some examples, the image acquisition device 210 may also include a light source. The image capture device 210 may be deployed at a first preset location of the production line. It should be noted that a person skilled in the art may select an appropriate image capturing device 210 and determine its specific deployment position according to the actual situation, which is not limited in the present application.
After receiving the image to be detected sent by the image acquisition device 210, the defect detection device 220 may first determine a first region where the label is located from the image to be detected, perform quadrilateral region extraction on the first region to obtain a second region, where the second region is a quadrilateral region with the highest fitting degree with the first region, and then determine an edge defect detection result of the label according to the first region and the second region. The edge defect detection result of the label can comprise whether the label has edge defects or not. In the case that the label has an edge defect, the edge defect detection result of the label may further include an edge defect area.
The defect detection device 220 may be a terminal device such as an industrial personal computer or a general computer, or may be a server. In some examples, defect detection device 220 may also be any other electronic device that may perform data processing. The defect inspection apparatus 220 may be deployed at a second predetermined location on the production line, or may be deployed remotely. The present application is not limited to the specific type and deployment location of the defect detection device 220.
FIG. 3 shows a flow chart of an edge defect detection method according to an embodiment of the present application. As shown in fig. 3, the edge defect detecting method includes:
step S310, a first region where the label is located is determined from the image to be detected.
In a possible implementation manner, when detecting an edge defect of a label that is pasted on a product or a component thereof and has a rectangular shape, the image of the label to be detected may be acquired by the image acquisition device 210 shown in fig. 2, so as to obtain an image to be detected.
After the image to be detected is obtained, region detection, target detection, or other processing may be applied to it using existing related technology, and the first region where the label is located is determined from the image to be detected. That is, the region where the label is located and the background region other than the label may be distinguished in the image to be detected, and the region where the label is located is determined as the first region.
In a possible implementation manner, the first region where the label is located may be determined from the image to be detected by a traditional image processing method such as threshold segmentation; or the image to be detected may be processed by a pre-trained image segmentation neural network, such as Unet, Unet++, or Deeplab, to obtain the first region where the label is located; or the position of the label in the image to be detected may be detected first, and the first region where the label is located may then be obtained by matting, segmentation, or similar means.
It should be noted that, a person skilled in the art may set a specific manner of determining the first region from the image to be detected according to practical situations, and the present application does not limit this.
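As a concrete illustration of the threshold-segmentation option above, the following sketch builds a binary mask of the label region with NumPy. The brightness threshold and the assumption that the label is brighter than the background are choices made for this example only; a production pipeline might instead use Otsu's method or a segmentation network as noted in the text.

```python
import numpy as np

def extract_label_region(image, threshold=128):
    """Binary mask of the candidate label region via simple thresholding.

    A minimal sketch: assumes the label is brighter than the background.
    """
    return (image >= threshold).astype(np.uint8)

# Toy 6x6 grayscale image with a bright 3x3 "label" patch.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 2:5] = 200
mask = extract_label_region(img)
```

Here `mask` contains 1 for the nine label pixels and 0 elsewhere, i.e. it plays the role of the first region.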
Step S320, performing quadrilateral region extraction on the first region to obtain a second region.
After the first region where the label in the image to be detected is located is determined, in step S320, quadrilateral region extraction may be performed on the first region, and the quadrilateral region with the highest fitting degree with the first region is determined as the second region.
In a possible implementation manner, the quadrilateral region extraction may be performed on the first region in a manner of extracting a closed region after four sides are fitted, so as to obtain the second region. Specifically, the peripheral contour points of the first region may be extracted first, linear fitting may be performed on the peripheral contour points to obtain four straight lines, which are up, down, left, and right, and then the closed region formed by the four straight lines is determined as the second region.
Fig. 4 shows a schematic diagram of quadrilateral region extraction according to an embodiment of the present application. As shown in fig. 4, when the label in the image 400 to be detected is a normal label, the first region where the label determined from the image 400 to be detected is located is a region 410, and when the quadrilateral region extraction is performed on the first region, the peripheral contour points of the region 410 are extracted first, and linear fitting is performed on the peripheral contour points to obtain four straight lines, i.e., a straight line a1, a2, a3, and a4, and then the closed region (the region 420 in fig. 4) formed by the straight lines a1, a2, a3, and a4 is determined as a second region.
Fig. 5 shows a schematic diagram of quadrilateral region extraction according to an embodiment of the present application. As shown in fig. 5, the label in the image 500 to be detected is a label with an edge defect (missing upper right corner), the first region where the label determined from the image 500 to be detected is located is the region 510, when the quadrilateral region extraction is performed on the first region, the peripheral contour point of the region 510 is extracted first, and linear fitting is performed on the peripheral contour point to obtain four straight lines, i.e., upper, lower, left, and right straight lines, which are respectively the straight lines b1, b2, b3, and b4, and then the closed region (the region 520 in fig. 5) formed by the straight lines b1, b2, b3, and b4 is determined as the second region.
Extracting the peripheral contour points of the first region, performing linear fitting on them to obtain four straight lines, and then determining the closed region formed by the four straight lines as the second region is simple and fast. Effective line fitting is possible as long as most of the edges of the first region are segmented, so this approach improves the effectiveness of quadrilateral region extraction and the robustness of the edge defect detection method.
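The "fit four edges, take the enclosed region" idea can be sketched as follows. This is a minimal NumPy illustration that assumes the region is roughly axis-aligned, so that per-column topmost/bottommost and per-row leftmost/rightmost boundary points can be grouped into the four edges; the function names and grouping heuristic are this example's own, not the patent's prescribed implementation.

```python
import numpy as np

def fit_quadrilateral(mask):
    """Fit top/bottom/left/right boundary lines to a binary region mask
    and return the four corners (row, col) of the enclosed quadrilateral,
    in the order top-left, top-right, bottom-right, bottom-left."""
    rows_idx, cols_idx = np.nonzero(mask)
    cols = np.unique(cols_idx)
    rows = np.unique(rows_idx)

    # Topmost / bottommost boundary point in every occupied column.
    top = np.array([(rows_idx[cols_idx == c].min(), c) for c in cols], float)
    bot = np.array([(rows_idx[cols_idx == c].max(), c) for c in cols], float)
    # Leftmost / rightmost boundary point in every occupied row.
    left = np.array([(r, cols_idx[rows_idx == r].min()) for r in rows], float)
    right = np.array([(r, cols_idx[rows_idx == r].max()) for r in rows], float)

    # Fit row = a*col + b for the horizontal edges,
    # col = a*row + b for the vertical edges.
    a_t, b_t = np.polyfit(top[:, 1], top[:, 0], 1)
    a_b, b_b = np.polyfit(bot[:, 1], bot[:, 0], 1)
    a_l, b_l = np.polyfit(left[:, 0], left[:, 1], 1)
    a_r, b_r = np.polyfit(right[:, 0], right[:, 1], 1)

    def cross(a_h, b_h, a_v, b_v):
        # Intersect row = a_h*col + b_h with col = a_v*row + b_v.
        row = (a_h * b_v + b_h) / (1.0 - a_h * a_v)
        return (row, a_v * row + b_v)

    return [cross(a_t, b_t, a_l, b_l), cross(a_t, b_t, a_r, b_r),
            cross(a_b, b_b, a_r, b_r), cross(a_b, b_b, a_l, b_l)]

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 1:7] = 1  # axis-aligned rectangle: rows 2..5, cols 1..6
corners = fit_quadrilateral(mask)
```

For the intact rectangle above, the four line intersections recover the rectangle's own corners; for a label with a missing corner, the fitted lines still close the quadrilateral, which is exactly what produces the second region of fig. 5.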
In a possible implementation manner, the quadrilateral region extraction may be performed on the first region by extracting four vertices and then connecting them, so as to obtain the second region. When this method is used, the four vertices of the first region may be extracted first, where the four vertices of the first region are: the top-left vertex (namely, the pixel point in the first region closest to the top-left corner of the image to be detected), the top-right vertex (namely, the pixel point in the first region closest to the top-right corner of the image to be detected), the bottom-left vertex (namely, the pixel point in the first region closest to the bottom-left corner of the image to be detected), and the bottom-right vertex (namely, the pixel point in the first region closest to the bottom-right corner of the image to be detected).
In one possible implementation, four vertices of the first region may be extracted by a shortest distance method. When the top left corner vertex of the first region is extracted, the distance (such as Euclidean distance, Manhattan distance and the like) between each pixel point in the first region and the top left corner of the image to be detected can be calculated respectively, then the first pixel point closest to the top left corner of the image to be detected is selected from all the pixel points of the first region, the first pixel point is determined as the top left corner vertex of the first region, and the position of the first pixel point is determined as the position of the top left corner vertex of the first region.
When the top-right vertex of the first region is extracted, the distance (such as the Euclidean distance or the Manhattan distance) between each pixel point in the first region and the top-right corner of the image to be detected may be calculated respectively; then a third pixel point closest to the top-right corner of the image to be detected is selected from all pixel points of the first region, the third pixel point is determined as the top-right vertex of the first region, and the position of the third pixel point is determined as the position of the top-right vertex of the first region.
When the bottom-left vertex of the first region is extracted, the distance (such as the Euclidean distance or the Manhattan distance) between each pixel point in the first region and the bottom-left corner of the image to be detected may be calculated respectively; then a fourth pixel point closest to the bottom-left corner of the image to be detected is selected from all pixel points of the first region, the fourth pixel point is determined as the bottom-left vertex of the first region, and the position of the fourth pixel point is determined as the position of the bottom-left vertex of the first region.
When the bottom-right vertex of the first region is extracted, the distance (such as the Euclidean distance or the Manhattan distance) between each pixel point in the first region and the bottom-right corner of the image to be detected may be calculated respectively; then a fifth pixel point closest to the bottom-right corner of the image to be detected is selected from all pixel points of the first region, the fifth pixel point is determined as the bottom-right vertex of the first region, and the position of the fifth pixel point is determined as the position of the bottom-right vertex of the first region.
In a possible implementation manner, after the top-left vertex of the first region is determined in the above manner, the other three vertices of the first region may also be determined as follows: the image to be detected may be flipped horizontally, and the top-right vertex of the first region determined in a manner similar to that used for the top-left vertex; the image to be detected may be flipped vertically, and the bottom-left vertex of the first region determined in the same manner; and the image to be detected may be flipped both horizontally and vertically, and the bottom-right vertex of the first region determined in the same manner.
Performing vertex extraction on the first region by the shortest distance method to obtain its four vertices is simple and fast, and can improve processing efficiency.
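The shortest-distance method can be sketched as follows (a NumPy example using Euclidean distance; the helper name and the toy mask are assumptions of this illustration):

```python
import numpy as np

def extract_vertices(mask):
    """Find the four vertices of a region as the region pixels closest to
    the four image corners (shortest-distance method).

    Euclidean distance is used here; Manhattan distance works the same way.
    """
    h, w = mask.shape
    pts = np.argwhere(mask > 0).astype(float)  # (row, col) pairs
    corners = {"top_left": (0, 0), "top_right": (0, w - 1),
               "bottom_left": (h - 1, 0), "bottom_right": (h - 1, w - 1)}
    result = {}
    for name, (cr, cc) in corners.items():
        d2 = (pts[:, 0] - cr) ** 2 + (pts[:, 1] - cc) ** 2
        result[name] = tuple(pts[np.argmin(d2)].astype(int))
    return result

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:7, 3:9] = 1  # label occupies rows 2..6, cols 3..8
v = extract_vertices(mask)
```

For this intact rectangular region, the four extracted vertices are simply its own corners.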
In one possible implementation, four vertices of the first region may be extracted by convolution filtering. Firstly, according to the first area where the label is located, binarization processing can be performed on an image to be detected to obtain a first feature map. In the first feature map, the feature value corresponding to the first region is 1, and the feature value corresponding to the background region other than the first region is-1.
After the first feature map is determined, the first feature map can be convolved with a preset first convolution kernel to obtain a second feature map. The convolution stride is 1, the size of the first convolution kernel is 2 × 2, and the parameters of the first convolution kernel are -1, -1, -1, and 3. If the first convolution kernel is regarded as being composed of parameters located in four quadrants, the parameters located in the first quadrant, the second quadrant, the third quadrant, and the fourth quadrant of the first convolution kernel are -1, -1, -1, and 3, respectively.
FIG. 6 shows a schematic diagram of a first convolution kernel according to an embodiment of the present application. As shown in fig. 6, the size of the first convolution kernel 610 is 2 × 2, and the first convolution kernel 610 can be regarded as being composed of parameters located in four quadrants; the parameters located in the first quadrant, the second quadrant, the third quadrant, and the fourth quadrant of the first convolution kernel 610 are -1, -1, -1, and 3, respectively.
After the second characteristic diagram is determined, the maximum characteristic value can be selected from the second characteristic diagram, and the pixel point corresponding to the maximum characteristic value is determined as a second pixel point, namely, the pixel point with the maximum characteristic value in the second characteristic diagram is determined as a second pixel point; and then determining pixel points corresponding to the second pixel points in the first area as top left corners of the first area.
FIG. 7 illustrates a schematic diagram of vertex extraction for a first region according to an embodiment of the present application. As shown in fig. 7, after binarization processing is performed on an image to be detected (not shown), a first feature map 710 is obtained, in the first feature map 710, the feature value of a region 720 corresponding to a first region is 1, and the feature value of a region 730 corresponding to a background region other than the first region is-1.
The first signature 710 may be convolved with the first convolution kernel 610 shown in fig. 6 at step 1 to obtain a second signature 740. In the second feature map 740, the feature value of the pixel P1 is 6, the feature value of the pixel P2 is-2, and except for P1 and P2, the feature values of the other pixels corresponding to the region 720 in the first feature map 710 are 0, and the feature value of the pixel corresponding to the region 730 in the first feature map 710 is also 0. The maximum feature value in the second feature map 740 is 6, the pixel point P1 corresponding to the maximum feature value may be determined as the second pixel point, and then the pixel point corresponding to the second pixel point P1 in the first region may be determined as the top left corner vertex of the first region.
In a similar manner, the other three vertices of the first region can be determined, as follows:
the first feature map may be convolved according to a preset second convolution kernel to obtain a third feature map. The convolution step is 1, the size of the second convolution kernel is 2 multiplied by 2, and the parameters of the second convolution kernel are-1, 3 and-1 respectively. If the second convolution kernel is regarded as being composed of parameters located in four quadrants, then the parameters located in the first, second, third, and fourth quadrants of the second convolution kernel are-1, 3, -1, respectively. The maximum characteristic value can be selected from the third characteristic graph, a pixel point corresponding to the maximum characteristic value is determined as a sixth pixel point, and then a pixel point corresponding to the sixth pixel point in the first area is determined as the top right corner vertex of the first area.
The first feature map may be convolved with a preset third convolution kernel to obtain a fourth feature map. The convolution stride is 1, the size of the third convolution kernel is 2 × 2, and the parameters of the third convolution kernel are 3, -1, -1, and -1. If the third convolution kernel is regarded as being composed of parameters located in four quadrants, the parameters located in the first, second, third, and fourth quadrants of the third convolution kernel are 3, -1, -1, and -1, respectively. The maximum feature value may be selected from the fourth feature map, the pixel point corresponding to the maximum feature value determined as a seventh pixel point, and the pixel point corresponding to the seventh pixel point in the first region determined as the bottom-left vertex of the first region.
The first feature map may be convolved with a preset fourth convolution kernel to obtain a fifth feature map. The convolution stride is 1, the size of the fourth convolution kernel is 2 × 2, and the parameters of the fourth convolution kernel are -1, 3, -1, and -1. If the fourth convolution kernel is regarded as being composed of parameters located in four quadrants, the parameters located in the first, second, third, and fourth quadrants of the fourth convolution kernel are -1, 3, -1, and -1, respectively. The maximum feature value may be selected from the fifth feature map, the pixel point corresponding to the maximum feature value determined as an eighth pixel point, and the pixel point corresponding to the eighth pixel point in the first region determined as the bottom-right vertex of the first region.
It should be noted that the above merely describes, by way of example, the feature values of the first feature map and the convolution stride, sizes, and parameters of the four convolution kernels (the first, second, third, and fourth convolution kernels). A person skilled in the art may set the specific values according to the actual application scenario, as long as the four vertices of the first region can be extracted through the convolution processing, which is not limited in the present application.
Through the convolution filtering mode, four vertexes of the first area are extracted, the method is simple and quick, and the processing efficiency can be improved.
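The convolution-filtering extraction above can be sketched as follows. The one-pixel background padding is an assumption of this example (the text does not specify a padding convention); it lets corners lying on the image border still respond. With the first kernel (weight 3 in the fourth quadrant), the 2 × 2 window straddling the top-left corner of the region scores 3·1 + 3·(−1)·(−1) = 6, matching the maximum feature value of fig. 7.

```python
import numpy as np

def corner_response(feat, kernel):
    """Valid 2x2 sliding-window correlation with stride 1."""
    h, w = feat.shape
    out = np.zeros((h - 1, w - 1))
    for r in range(h - 1):
        for c in range(w - 1):
            out[r, c] = np.sum(feat[r:r + 2, c:c + 2] * kernel)
    return out

def extract_vertices_conv(mask):
    """Convolution-filtering vertex extraction: label pixels map to +1,
    background to -1; each 2x2 kernel puts its weight 3 where the label
    pixel sits when the window straddles the matching corner."""
    feat = np.where(mask > 0, 1, -1)
    # Pad with background so border corners still respond (an assumption).
    feat = np.pad(feat, 1, constant_values=-1)
    kernels = {
        "top_left":     np.array([[-1, -1], [-1, 3]]),
        "top_right":    np.array([[-1, -1], [3, -1]]),
        "bottom_left":  np.array([[-1, 3], [-1, -1]]),
        "bottom_right": np.array([[3, -1], [-1, -1]]),
    }
    verts = {}
    for name, k in kernels.items():
        resp = corner_response(feat, k)
        r, c = np.unravel_index(np.argmax(resp), resp.shape)
        # Map the winning window back to the label pixel under the weight 3,
        # then undo the 1-pixel padding.
        dr, dc = np.argwhere(k == 3)[0]
        verts[name] = (r + dr - 1, c + dc - 1)
    return verts

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:7, 3:9] = 1  # label occupies rows 2..6, cols 3..8
verts = extract_vertices_conv(mask)
```

For this intact rectangle the method recovers the same four vertices as the shortest-distance method.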
After the four vertices of the first region are extracted, they may be connected in sequence in the order top-left, bottom-left, bottom-right, top-right, or in the order top-left, top-right, bottom-right, bottom-left; the quadrilateral region formed by connecting the four vertices in sequence is then determined as the second region.
Fig. 8 illustrates a schematic diagram of quadrilateral region extraction according to an embodiment of the present application. As shown in fig. 8, when the label in the image 800 to be detected is a normal label, the first region where the label determined from the image 800 to be detected is located is a region 810, and when the quadrilateral region extraction is performed on the first region, vertices of the region 810 are extracted to obtain four vertices, which are respectively C1, C2, C3, and C4, and then the four vertices are sequentially connected according to the order of C1, C2, C3, and C4, and the four vertices are sequentially connected to form a quadrilateral region, which is a region 820 in fig. 8, and is determined as a second region.
Fig. 9 illustrates a schematic diagram of quadrilateral region extraction according to an embodiment of the present application. As shown in fig. 9, the label in the image 900 to be detected is a label with an edge defect (missing upper right corner), the first region where the label determined from the image 900 to be detected is located is a region 910, when a quadrilateral region is extracted from the first region, vertices of the region 910 are extracted to obtain four vertices D1, D2, D3, and D4, and then the four vertices are sequentially connected according to the sequence of D1, D2, D3, and D4, and the four vertices are sequentially connected to form a quadrilateral region, which is determined as a second region 920 in fig. 9.
By extracting the four vertexes of the first region and determining the quadrilateral region formed by sequentially connecting the four vertexes as the second region, the defect region (particularly the defect region in the scene of corner deletion and corner tilting) can be effectively amplified, and the sensitivity of edge defect detection can be improved.
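Once the four vertices are connected in sequence, the resulting quadrilateral can be rasterised into a mask for the region comparison that follows. The half-plane test below is one simple way to do this; it is a sketch that assumes the vertices are supplied in clockwise order in (row, col) image coordinates.

```python
import numpy as np

def quad_mask(shape, vertices):
    """Rasterise the quadrilateral obtained by connecting four vertices
    in clockwise order; returns a boolean mask of the given shape."""
    rr, cc = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing="ij")
    inside = np.ones(shape, dtype=bool)
    for i in range(4):
        r0, c0 = vertices[i]
        r1, c1 = vertices[(i + 1) % 4]
        # Cross-product sign tells which side of edge (v0 -> v1) a pixel is on;
        # the interior lies on the non-negative side for clockwise vertices.
        cross = (c1 - c0) * (rr - r0) - (r1 - r0) * (cc - c0)
        inside &= cross >= 0
    return inside

# Quadrilateral through the corners of a rectangle: rows 2..6, cols 3..8.
m = quad_mask((10, 10), [(2, 3), (2, 8), (6, 8), (6, 3)])
```

The resulting mask covers the 5 × 6 block of pixels enclosed by the four connecting segments, boundary included.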
Step S330, determining the edge defect detection result of the label according to the first area and the second area.
After the first area and the second area are determined, in step S330, an edge defect detection result of the label may be determined according to the first area and the second area.
In a possible implementation manner, whether a non-overlapping area exists between the position of the first area and the position of the second area can be judged. If no non-overlapping area exists, that is, the first area and the second area coincide completely, the edge defect detection result of the label is: the label has no edge defect. If a non-overlapping area exists, that is, the first area and the second area do not coincide completely, the edge defect detection result of the label is: the label has an edge defect, and the non-overlapping area is determined as the defect area of the label.
Judging whether a non-overlapping region exists between the position of the first region and the position of the second region, and determining that region as the defect region of the label when it exists, is simple and fast, and can improve the processing efficiency of edge defect detection.
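The comparison of the first and second regions reduces to a per-pixel XOR of the two masks, sketched below (the function name and toy masks are this example's own):

```python
import numpy as np

def edge_defect(first_region, second_region):
    """Compare the segmented label mask (first region) with the fitted
    quadrilateral mask (second region): any non-overlapping pixels form
    the defect region. Expects boolean or 0/1 masks of equal shape."""
    mismatch = first_region.astype(bool) ^ second_region.astype(bool)
    return bool(mismatch.any()), mismatch

# Quadrilateral covers rows 2..5, cols 1..6; the label is missing its
# top-right corner pixel, so exactly that pixel is flagged.
second = np.zeros((8, 8), dtype=np.uint8)
second[2:6, 1:7] = 1
first = second.copy()
first[2, 6] = 0  # simulate a missing corner
defective, area = edge_defect(first, second)
```

When the label is intact the two masks coincide and no defect is reported; any missing or protruding edge pixels appear directly in `area`.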
The following will exemplarily describe the processing procedure of the edge defect detection method according to the embodiment of the present application with reference to fig. 10 to 13.
Fig. 10 is a schematic diagram illustrating a processing procedure of an edge defect detection method according to an embodiment of the present application. As shown in fig. 10, when detecting an edge defect of a rectangular label adhered to a product or a component thereof, step S1001 may be performed first to determine a first area where the label is located from an image to be detected, where the image to be detected is obtained by image capture of the label to be detected by an image capture device.
After the first area where the label is located is determined, quadrilateral area extraction is carried out on the first area in a mode of extracting a closed area after four edges are fitted to obtain a second area, and the method specifically comprises the following steps: executing step S1002, extracting peripheral contour points of the first area; step S1003, performing linear fitting on the peripheral contour points to obtain four straight lines, namely an upper straight line, a lower straight line, a left straight line and a right straight line; in step S1004, a closed region formed by four straight lines, i.e., upper, lower, left, and right, is determined as a second region.
After the first area and the second area are determined, in step S1005, it is determined whether there is a non-overlapping area between the location of the first area and the location of the second area; if there is no non-overlapping area between the position of the first area and the position of the second area, that is, the first area and the second area are completely overlapped, step S1006 is executed, and it is determined that the edge defect detection result of the label is: the label has no edge defect; if there is a non-overlapping area between the position of the first area and the position of the second area, that is, the first area and the second area are not completely overlapped, step S1007 is executed, and it is determined that the edge defect detection result of the label is: the label has edge defects, and the misaligned area is determined as the defective area of the label.
FIG. 11 shows a schematic diagram of a defective area of a label according to an embodiment of the present application. As shown in fig. 11, in a case that a misalignment region exists between a position of the first region 1120 and a position of the second region 1130 in the image to be detected 1110, the misalignment region 1140 can be obtained by removing the overlapping region of the first region and the second region, and the misalignment region 1140 can be determined as a defect region of the label.
Fig. 12 is a schematic diagram illustrating a processing procedure of an edge defect detection method according to an embodiment of the present application. As shown in fig. 12, when detecting an edge defect of a rectangular label attached to a product or a component thereof, step S1201 may be performed first to determine a first area where the label is located from an image to be detected, where the image to be detected is obtained by image-capturing the label to be detected by an image-capturing device.
After the first area where the label is located is determined, quadrilateral area extraction is carried out on the first area in a mode of extracting a back connecting line of four vertexes to obtain a second area, and the method specifically comprises the following steps: step S1202 is executed to extract four vertices of the first region by the shortest distance method or the convolution filtering method; in step S1203, the four vertices are sequentially connected to form a quadrilateral area, and the quadrilateral area is determined to be a second area.
After the first area and the second area are determined, in step S1204, it is determined whether there is a non-overlapping area between the location of the first area and the location of the second area; if there is no non-overlapping area between the position of the first area and the position of the second area, that is, the first area and the second area are completely overlapped, step S1205 is executed, and it is determined that the edge defect detection result of the label is: the label has no edge defect; if there is a non-overlapping area between the position of the first area and the position of the second area, that is, the first area and the second area are not completely overlapped, step S1206 is executed to determine that the edge defect detection result of the label is: the label has edge defects, and the misaligned area is determined as the defective area of the label.
FIG. 13 shows a schematic diagram of a defective area of a label according to an embodiment of the present application. As shown in fig. 13, in the case that the position of the first region 1320 and the position of the second region 1330 in the image 1310 to be detected have a misalignment region, the overlapping region of the first region 1320 and the second region 1330 can be removed to obtain a misalignment region 1340, and the misalignment region 1340 is determined as a defect region of the label.
As can be seen from fig. 11 and fig. 13, although the two quadrilateral region extraction methods yield different second regions for the same label, and the determined defect regions therefore differ, both methods can quickly detect the edge defect of the label.
In addition, the defective region in fig. 13 is effectively enlarged as compared with fig. 11, that is, the embodiment shown in fig. 12 can improve the sensitivity of edge defect detection.
It should be noted that the edge defect detection method according to the embodiment of the present application has been exemplarily described above only for the case of a missing corner. The edge defect detection method in the embodiment of the application may also detect other edge defects of the label (for example, missing edges, edge deformation, edge warping, overall deformation, and the like); the specific manner is similar to the above and is not described here again.
The edge defect detection method of the embodiment of the application detects the edge defects of the label in a self-matching manner. It imposes no limitation on the size, position, or rotation of the label, or on lighting changes when the image to be detected is captured; it has wide application scenarios and strong robustness, requires no modeling or registration, has a high processing speed, and is suitable for a production line with a short takt time.
The edge defect detection method of the embodiment of the present application is not limited to whether the label to be detected is transparent, and the edge defect detection method of the embodiment of the present application may be used as long as the area where the label to be detected is located can be identified in the image to be detected.
The edge defect detection method provided by the embodiment of the application can also be used for detecting edge defects when other flexible films (soft films) similar to labels are pasted; such films include, for example, protective films for the outer surfaces of electronic products. The application is not limited to the specific type of other flexible films.
FIG. 14 shows a block diagram of an edge defect detection apparatus according to an embodiment of the present application. As shown in fig. 14, the edge defect detecting apparatus includes:
a label region extraction module 1410, configured to determine, from an image to be detected, a first region where a label is located, where the label is rectangular;
a quadrilateral region extraction module 1420, configured to perform quadrilateral region extraction on the first region to obtain a second region, where the second region is a quadrilateral region with the highest fitting degree with the first region;
the defect determining module 1430 is configured to determine an edge defect detection result of the label according to the first area and the second area.
In one possible implementation manner, the quadrilateral area extraction module 1420 includes: the contour point extraction submodule is used for extracting peripheral contour points of the first area; the linear fitting submodule is used for performing linear fitting on the peripheral contour points to obtain four straight lines; and the first determining submodule is used for determining a closed area formed by the four straight lines as a second area.
In one possible implementation manner, the quadrilateral area extraction module 1420 includes: the vertex extraction submodule is used for extracting four vertexes of the first area; and the second determining submodule is used for determining the quadrilateral area formed by sequentially connecting the four vertexes as a second area.
In one possible implementation, the vertex extraction sub-module is configured to: determining a first pixel point which is closest to the upper left corner of the image to be detected from all pixel points in the first area; and determining the first pixel point as the top left corner vertex of the first area.
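The distance-based vertex extraction above reduces to an argmin over the region's pixels; a minimal NumPy sketch (function name and mask representation are assumptions of this sketch) could be:

```python
import numpy as np

def top_left_vertex_by_distance(mask):
    """Return the pixel of the first region closest to the image's
    upper-left corner (0, 0), used as the region's top-left vertex.

    mask: 2-D boolean array, True where the pixel belongs to the
    first region.
    """
    rows, cols = np.nonzero(mask)
    # Squared Euclidean distance to the upper-left image corner.
    d2 = rows.astype(np.int64) ** 2 + cols.astype(np.int64) ** 2
    i = int(np.argmin(d2))
    return int(rows[i]), int(cols[i])
```

The other three vertices would follow symmetrically, measuring distance to the remaining image corners.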
In one possible implementation, the vertex extraction sub-module is configured to: perform binarization processing on the image to be detected according to the first region to obtain a first feature map; perform convolution processing on the first feature map according to a preset first convolution kernel to obtain a second feature map; determine a second pixel point with the maximum feature value in the second feature map; and determine the pixel point corresponding to the second pixel point in the first region as the top-left vertex of the first region.
In one possible implementation manner, in the first feature map, the feature value corresponding to the first region is 1 and the feature value corresponding to the background region other than the first region is -1; the size of the first convolution kernel is 2 × 2, and the four parameters of the first convolution kernel are -1, -1, -1 and 3, respectively.
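The binarization-plus-convolution variant can be sketched as follows. The plain-Python correlation loop stands in for a library convolution and, like the function name, is an assumption of this sketch; only the ±1 feature map and the 2 × 2 kernel with parameters -1, -1, -1, 3 come from the description above.

```python
import numpy as np

def top_left_vertex_by_convolution(mask):
    """Locate the top-left vertex of the first region using the 2x2
    kernel described above (three taps of -1 and one tap of 3).

    mask: 2-D boolean array, True inside the first region. Assumes
    the vertex does not lie on the image's first row or column.
    """
    # First feature map: +1 for the region, -1 for the background.
    fmap = np.where(mask, 1, -1).astype(np.int64)
    k = np.array([[-1, -1],
                  [-1,  3]])
    h, w = fmap.shape
    resp = np.full((h, w), np.iinfo(np.int64).min, dtype=np.int64)
    # "Valid" cross-correlation with the +3 tap over pixel (i, j): the
    # response is maximal (+6) exactly where a region pixel has
    # background above, to the left, and diagonally up-left -- i.e. at
    # the region's top-left corner. Interior pixels score 0, straight
    # edges score 4, so the corner stands out.
    for i in range(1, h):
        for j in range(1, w):
            resp[i, j] = int(np.sum(fmap[i - 1:i + 1, j - 1:j + 1] * k))
    i, j = np.unravel_index(int(np.argmax(resp)), resp.shape)
    return int(i), int(j)
```

Rotating the kernel by 90° at a time yields detectors for the other three vertices in the same way.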
In one possible implementation, the defect determining module 1430 includes: a judging submodule, configured to judge whether a non-coincident region exists between the position of the first region and the position of the second region; and a defect region determining submodule, configured to determine the non-coincident region as the defect region of the label if the non-coincident region exists between the position of the first region and the position of the second region.
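The comparison performed by the defect determining module amounts to an exclusive-or of the two region masks; a minimal sketch (function name and mask representation are assumptions) could be:

```python
import numpy as np

def label_defect_region(first_mask, second_mask):
    """Compare the detected label region (first) with the fitted
    quadrilateral region (second): any non-coincident pixels form
    the label's defect region.

    Both inputs are 2-D boolean masks of the same shape. Returns a
    boolean defect mask, or None when the regions coincide exactly
    (no edge defect detected).
    """
    defect = np.logical_xor(first_mask, second_mask)
    return defect if defect.any() else None
```

For example, a notch in the label edge appears as pixels that lie inside the fitted quadrilateral but outside the detected region, so they survive the exclusive-or and are reported as the defect region.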
An embodiment of the present application provides an edge defect detecting apparatus, including: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to implement the above method when executing the instructions.
Embodiments of the present application provide a non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which, when run on a processor of an electronic device, causes the processor to perform the above-described method.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised-in-groove structure having instructions stored thereon, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present application may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present application.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It is also noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by hardware (e.g., a circuit or an application-specific integrated circuit (ASIC)) that performs the corresponding function or action, or by a combination of hardware and software, such as firmware.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (17)

1. An edge defect detection method, the method comprising:
determining a first region where a label is located from an image to be detected, wherein the shape of the label is rectangular;
performing quadrilateral region extraction on the first region to obtain a second region, wherein the second region is a quadrilateral region with the highest degree of fit with the first region; and
determining an edge defect detection result of the label according to the first region and the second region.
2. The method according to claim 1, wherein the quadrilateral region extraction of the first region to obtain a second region comprises:
extracting peripheral contour points of the first area;
performing linear fitting on the peripheral contour points to obtain four straight lines;
and determining a closed region formed by the four straight lines as the second region.
3. The method according to claim 1, wherein the quadrilateral region extraction of the first region to obtain a second region comprises:
extracting four vertexes of the first area;
and determining a quadrilateral region formed by sequentially connecting the four vertices as the second region.
4. The method of claim 3, wherein said extracting four vertices of said first region comprises:
determining a first pixel point closest to the upper left corner of the image to be detected from all pixel points of the first area;
and determining the first pixel point as the top left corner vertex of the first area.
5. The method of claim 3, wherein said extracting four vertices of said first region comprises:
according to the first region, carrying out binarization processing on the image to be detected to obtain a first feature map;
performing convolution processing on the first feature map according to a preset first convolution kernel to obtain a second feature map;
determining a second pixel point with the maximum feature value in the second feature map;
and determining a pixel point corresponding to the second pixel point in the first region as the top-left vertex of the first region.
6. The method according to claim 5, wherein in the first feature map, a feature value corresponding to the first region is 1 and a feature value corresponding to a background region outside the first region is -1,
the size of the first convolution kernel is 2 × 2, and the four parameters of the first convolution kernel are -1, -1, -1 and 3, respectively.
7. The method according to any one of claims 1 to 6, wherein determining the edge defect detection result of the label according to the first area and the second area comprises:
judging whether a non-coincident region exists between the position of the first region and the position of the second region;
and if a non-coincident region exists between the position of the first region and the position of the second region, determining the non-coincident region as a defect region of the label.
8. An edge defect detection apparatus, comprising:
the label area extraction module is used for determining a first area where a label is located from an image to be detected, and the shape of the label is rectangular;
the quadrilateral region extraction module is used for extracting a quadrilateral region from the first region to obtain a second region, and the second region is the quadrilateral region with the highest fitting degree with the first region;
and the defect determining module is used for determining the edge defect detection result of the label according to the first area and the second area.
9. The apparatus of claim 8, wherein the quadrilateral region extraction module comprises:
the contour point extraction submodule is used for extracting peripheral contour points of the first area;
the linear fitting submodule is used for performing linear fitting on the peripheral contour points to obtain four straight lines;
and the first determining submodule is used for determining the closed area formed by the four straight lines as a second area.
10. The apparatus of claim 8, wherein the quadrilateral region extraction module comprises:
a vertex extraction submodule for extracting four vertices of the first region;
and the second determining submodule is configured to determine a quadrilateral region formed by sequentially connecting the four vertices as the second region.
11. The apparatus of claim 10, wherein the vertex extraction sub-module is configured to:
determining a first pixel point closest to the upper left corner of the image to be detected from all pixel points of the first area;
and determining the first pixel point as the top left corner vertex of the first area.
12. The apparatus of claim 10, wherein the vertex extraction sub-module is configured to:
performing binarization processing on the image to be detected according to the first region to obtain a first feature map;
performing convolution processing on the first feature map according to a preset first convolution kernel to obtain a second feature map;
determining a second pixel point with the maximum feature value in the second feature map;
and determining a pixel point corresponding to the second pixel point in the first region as the top-left vertex of the first region.
13. The apparatus according to claim 12, wherein in the first feature map, a feature value corresponding to the first region is 1 and a feature value corresponding to a background region other than the first region is -1,
the size of the first convolution kernel is 2 × 2, and the four parameters of the first convolution kernel are -1, -1, -1 and 3, respectively.
14. The apparatus of any of claims 8 to 13, wherein the defect determination module comprises:
the judging submodule is used for judging whether a non-coincident region exists between the position of the first region and the position of the second region;
and the defect region determining submodule is configured to determine the non-coincident region as the defect region of the label if the non-coincident region exists between the position of the first region and the position of the second region.
15. An edge defect detecting apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1-7 when executing the instructions.
16. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1-7.
17. A computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which, when run in an electronic device, causes a processor in the electronic device to perform the method of any one of claims 1-7.
CN202210021105.XA 2022-01-10 2022-01-10 Edge defect detection method, device and storage medium Pending CN114972157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210021105.XA CN114972157A (en) 2022-01-10 2022-01-10 Edge defect detection method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114972157A true CN114972157A (en) 2022-08-30

Family

ID=82974345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210021105.XA Pending CN114972157A (en) 2022-01-10 2022-01-10 Edge defect detection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114972157A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140084060A1 (en) * 2012-09-26 2014-03-27 Motorola Solutions, Inc. Rfid-based inventory monitoring systems and methods with self-adjusting operational parameters
CN103824373A (en) * 2014-01-27 2014-05-28 Chentong Intelligent Equipment (Shenzhen) Co., Ltd. Bill image amount classification method and system
CN105574813A (en) * 2015-12-31 2016-05-11 Qingdao Hisense Mobile Communication Technology Co., Ltd. Image processing method and device
CN107609546A (en) * 2017-08-29 2018-01-19 Beijing QIYI Century Science and Technology Co., Ltd. Subtitle recognition method and device
CN110220917A (en) * 2019-06-11 2019-09-10 Jiangsu Vocational College of Agriculture and Forestry Online detection method for crown plug surface defects based on image processing
CN111185398A (en) * 2020-02-25 2020-05-22 Weihai Yuanhang Technology Development Co., Ltd. Online omnidirectional intelligent detection system and detection method for vacuum blood collection tubes
CN113256635A (en) * 2021-07-15 2021-08-13 Wuhan Zhongdao Optoelectronic Equipment Co., Ltd. Defect detection method, device, equipment and readable storage medium
CN113312962A (en) * 2021-04-13 2021-08-27 Xiao Zhijian Detection method for book retrieval label falling off based on image processing
CN113537301A (en) * 2021-06-23 2021-10-22 Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co., Ltd. Defect detection method based on adaptive template matching of bottle labels

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Jianchun et al., "Research on a machine-vision-based method for detecting fine defects on metal edges", Manufacturing Technology & Machine Tool *
Zhai Weiliang et al., "Design of distance measurement and deformation detection algorithms for edge contour circles", Machine Design and Manufacturing Engineering *
Chen Ping, "Detection of label die-cutting defects based on HALCON", Information & Communications *

Similar Documents

Publication Publication Date Title
CN110717489B (en) Method, device and storage medium for identifying text region of OSD (on Screen display)
CN110008956B (en) Invoice key information positioning method, invoice key information positioning device, computer equipment and storage medium
US9305360B2 (en) Method and apparatus for image enhancement and edge verification using at least one additional image
US11156564B2 (en) Dirt detection on screen
CN109951635B (en) Photographing processing method and device, mobile terminal and storage medium
US9916513B2 (en) Method for processing image and computer-readable non-transitory recording medium storing program
CN108038826B (en) Method and device for correcting perspective deformed shelf image
TWI548269B (en) Method, electronic apparatus, and computer readable medium for processing reflection in image
EP3067858A1 (en) Image noise reduction
US9760997B2 (en) Image noise reduction using lucas kanade inverse algorithm
CN111260675B (en) High-precision extraction method and system for image real boundary
CN112036400A (en) Method for constructing network for target detection and target detection method and system
CN113744142A (en) Image restoration method, electronic device and storage medium
Enjarini et al. Planar segmentation from depth images using gradient of depth feature
Attard et al. Image mosaicing of tunnel wall images using high level features
US9094617B2 (en) Methods and systems for real-time image-capture feedback
CN114842213A (en) Obstacle contour detection method and device, terminal equipment and storage medium
US9792675B1 (en) Object recognition using morphologically-processed images
CN117671299A (en) Loop detection method, device, equipment and storage medium
CN114972157A (en) Edge defect detection method, device and storage medium
CN116894849A (en) Image segmentation method and device
CN111242963B (en) Container contour detection method and device
JP2006266943A (en) Apparatus and method for inspecting defect
JP6403207B2 (en) Information terminal equipment
CN107369150B (en) Method for detecting rectangular target and rectangular target detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220830)