CN110517265B - Method and device for detecting surface defects of product and storage medium


Info

Publication number
CN110517265B
Authority
CN
China
Prior art keywords
detected
image
product
images
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910831247.0A
Other languages
Chinese (zh)
Other versions
CN110517265A (en)
Inventor
孟凡武
许一尘
王立忠
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN201910831247.0A
Publication of CN110517265A
Application granted
Publication of CN110517265B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/001: Industrial image inspection using an image reference approach
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, an apparatus, and a storage medium for detecting surface defects of products. The method comprises the following steps: acquiring a target image containing a product to be detected, the surface of which carries a plurality of objects to be detected; processing the target image to obtain a plurality of images to be detected, each containing a single object to be detected of the product; and calculating d invariant moments for each image to be detected, then performing surface-defect detection on the product according to the d invariant moments of each image to be detected and the d standard invariant moments corresponding to a pre-obtained standard product image. For processed products whose surface carries many objects to be detected, the embodiments of the application achieve fast and simple surface-defect detection without one-by-one manual identification, giving markedly higher detection efficiency.

Description

Method and device for detecting surface defects of product and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting surface defects of a product, and a storage medium.
Background
Most existing surface-defect detection of processed products relies on careful manual inspection, piece by piece; when a surface defect is found, the defective product is removed in time and sorted into a rejects area. For a product with a smooth surface, the difference between defective and non-defective regions under reflected light is large and easy to distinguish by eye. For a product whose surface texture varies, however, for example one whose surface carries many objects that must be observed, manual identification becomes more difficult, and densely packed objects accelerate eye fatigue, increasing the difficulty of the work.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for detecting surface defects of a product, and a storage medium, which can achieve rapid detection of surface defects by capturing an image of a product to be detected, so as to solve the above technical problems.
In a first aspect, an embodiment of the present application provides a method for detecting surface defects of a product, where the method includes: acquiring a target image containing a product to be detected, wherein the surface of the product to be detected is provided with a plurality of objects to be detected; processing the target image to obtain a plurality of images to be detected, wherein each image to be detected comprises a single object to be detected of a product to be detected; respectively calculating d invariant moments corresponding to each image to be detected, and carrying out surface defect detection on the product to be detected according to the d invariant moments of each image to be detected and d standard invariant moments corresponding to a standard product image obtained in advance, wherein d is a positive integer.
In this scheme, for a product whose surface carries a plurality of objects to be detected, images each containing a single object to be detected are obtained first, and whether a defect exists is then judged using invariant moments, which characterise the geometric features of an image. Because the scheme is based on structural change in the image to be detected, defective products can be selected directly from the images, which is faster and simpler than manual inspection and markedly more efficient. Moreover, defects can be detected directly without knowing their cause or any prior information such as type, shape, or size, so the scheme has high universality.
In a possible embodiment, the detecting of the surface defects of the to-be-detected product according to the d invariant moments of each to-be-detected image and the d standard invariant moments corresponding to the pre-obtained standard product image includes: respectively calculating the variance of d invariant moments of each image to be detected relative to the d standard invariant moments; and if the variance of any image to be detected in the plurality of images to be detected is greater than a first threshold value, determining that the product to be detected has surface defects.
And the variance represents the deviation degree of the invariant moment of the image to be detected relative to the standard invariant moment, and when the variance of any image to be detected is greater than a first threshold value, the object to be detected is obviously different from the object without surface defects, so that the defect of the product to be detected can be determined.
In a possible implementation manner, the plurality of objects to be detected have the same shape, and the processing the target image to obtain a plurality of images to be detected includes: performing threshold segmentation on the target image to obtain K1 connected regions, and determining K2 connected regions meeting first preset requirements from the K1 connected regions, wherein the first preset requirements are related to shape parameters of the connected regions, K1 and K2 are positive integers, and K1> K2; determining the centroid of each of the K2 connected regions, and mapping to the position of the target image according to each centroid to obtain K2 target segmentation images; and determining K3 images to be detected which meet second preset requirements from the K2 target segmentation images, wherein the second preset requirements are related to texture information of the target segmentation images, K3 is a positive integer, and K2 is more than K3.
In the above scheme, the first preset requirement filters out connected regions whose shape differs greatly from the connected regions formed by the objects to be detected, achieving a preliminary screening; the second preset requirement filters out images whose texture differs greatly from that of the objects to be detected, achieving a secondary screening. Step by step, the images corresponding to the objects to be detected are thus isolated.
In a possible implementation, the threshold-value segmenting the target image to obtain K1 connected regions includes: filtering the target image to obtain a filtered gray image; obtaining a binary image of the target image according to the size relationship between the gray value of each pixel point in the gray image and a second threshold value, wherein the second threshold value is the product of a preset coefficient and the maximum gray value in the gray image, and a single object to be detected on a product to be detected forms a connected region in the binary image; and acquiring K1 connected regions in the binary image.
Through filtering and binarization processing of the target image, a plurality of connected regions formed by the object to be detected in the product to be detected can be obtained.
In a possible embodiment, the determining K2 connected regions from the K1 connected regions that satisfy a first preset requirement includes: calculating the area of each of the K1 connected regions and the eccentricity of the ellipse having the same standard second-order central moments as the connected region; determining K2 connected regions which meet the following requirements from the K1 connected regions: the area of the connected region is within a first threshold range, and the eccentricity corresponding to the connected region is not smaller than a third threshold.
The area of a connected region and its corresponding eccentricity serve as feature descriptors for the preliminary screening of connected regions. By setting thresholds on area and eccentricity, connected regions whose area differs too much from the objects to be detected, or whose shape is obviously different, can be excluded; the two parameters complement each other and jointly determine the screening condition for the connected regions.
In a possible implementation, the obtaining K2 target segmentation images according to the position of each centroid mapping to the target image includes: segmenting the target image by taking the centroids of the K2 connected regions as centers to obtain K2 original segmented images; calculating a gradient direction histogram of each original segmentation image, wherein the gradient direction histogram represents statistics of the number of pixel points of the original segmentation image in different gradient directions; determining a gradient direction with the largest number of pixel points in the gradient direction histogram, and correcting the angle of the object to be detected in the original segmentation image according to the gradient direction and a preset rotation direction to obtain a corrected segmentation image; and respectively cutting each corrected segmentation image by taking the center of mass of the corrected segmentation image as the center to obtain K2 target segmentation images.
In this scheme, K2 larger original segmented images are obtained first. Angle correction is applied to each original segmented image according to the distribution of its gradient directions, so that the angle, size, and shape of the objects to be detected tend to be consistent across images. A rectangular frame of smaller size is then used to cut the angle-corrected images; because the object to be detected lies at the centre of each corrected image, it can be cut out with the smallest rectangular frame, introducing components that are not the object to be detected to the smallest extent and reducing the influence of noise.
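The angle-correction step described above can be sketched as follows. This is an illustrative NumPy/SciPy reading of the patent, not its actual implementation; the bin count, target direction, and crop size are assumed values:

```python
import numpy as np
from scipy import ndimage

def dominant_gradient_direction(patch, bins=36):
    """Gradient-direction histogram of a patch: count the pixels falling in
    each direction bin and return the bin-centre angle (degrees) with the
    most pixels."""
    gy, gx = np.gradient(patch.astype(float))
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    mag = np.hypot(gx, gy)
    hist, edges = np.histogram(ang, bins=bins, range=(0.0, 360.0),
                               weights=(mag > 0).astype(float))
    i = int(np.argmax(hist))
    return (edges[i] + edges[i + 1]) / 2.0

def correct_and_crop(patch, target_deg=90.0, crop=24):
    """Rotate the patch so its dominant gradient direction matches a preset
    direction, then cut a smaller rectangle centred on the patch centre."""
    angle = dominant_gradient_direction(patch) - target_deg
    rot = ndimage.rotate(patch.astype(float), angle, reshape=False, mode="nearest")
    cy, cx = rot.shape[0] // 2, rot.shape[1] // 2
    return rot[cy - crop // 2: cy + crop // 2, cx - crop // 2: cx + crop // 2]
```

Because every corrected patch ends up with the same orientation, a single small rectangle suffices to cut out each object with minimal background.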
In a possible embodiment, the determining K3 images to be detected that satisfy a second preset requirement from among the K2 target segmented images includes: calculating a gradient direction histogram of the target segmentation image, wherein the gradient direction histogram represents statistics of the number of pixel points of the target segmentation image in different gradient directions; calculating the variance of the gradient direction histogram of each target segmentation image relative to a standard gradient direction histogram corresponding to a standard product image obtained in advance; k3 images to be detected which meet the following requirements are determined from the K2 target segmentation images: and the variance corresponding to the target segmentation image is smaller than a fourth threshold value.
According to the scheme, the images including the object to be detected in the target segmentation images can be distinguished from the images including the object not to be detected by using the gradient direction histogram, and when the variance obtained by the target segmentation images is not smaller than a fourth threshold value, the texture information of the target segmentation images is obviously different from the texture of the actual object to be detected, so that the target segmentation images are removed, and finally K3 target segmentation images meeting the requirements are left.
In a possible embodiment, the separately calculating d invariant moments for each image to be detected includes: respectively calculating second-order gradients of each image to be detected in the horizontal direction and the vertical direction, and calculating D invariant moments of the second-order gradients using the Hu invariant moments, wherein D is a positive integer and D is larger than d; and determining principal components of the D invariant moments by using a principal component analysis algorithm to obtain the d invariant moments.
The second-order gradient zeroes smoothly varying areas in the image to be detected and highlights abrupt changes. Since surface defects of a product are generally sharp, a defective part of the image is highlighted after the second-order gradient calculation, so the difference between the computed invariant moments and the standard invariant moments of a defect-free surface becomes larger, making it easy to judge whether a defect exists in the image to be detected.
In a possible embodiment, the plurality of objects to be detected are a plurality of protrusions with the same shape on the surface of the product to be detected. For example, the product to be detected is a corrugated pipe, and a single corrugation on the corrugated pipe protrudes from the surface of the product.
In a second aspect, an embodiment of the present application provides an apparatus for detecting surface defects of a product, the apparatus including an acquisition module, an image detection module, and a defect detection module. The acquisition module is used for acquiring a target image containing a product to be detected, the surface of which is provided with a plurality of objects to be detected; the image detection module is used for processing the target image to obtain a plurality of images to be detected, each comprising a single object to be detected of the product; and the defect detection module is used for respectively calculating d invariant moments for each image to be detected and performing surface-defect detection on the product according to the d invariant moments of each image to be detected and the d standard invariant moments corresponding to a pre-obtained standard product image.
In a third aspect, an embodiment of the present application provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a processor, the computer program performs the method according to the first aspect or any one of the possible implementation manners of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a flowchart of a method for detecting surface defects of a product according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating the detection method of the present application in detail at step 102;
FIG. 3 is a flowchart illustrating the step 103 of the detection method of the present application;
FIG. 4 is a flowchart illustrating steps 1021-1023 of the detection method of the present application;
FIG. 5 is a schematic diagram of a system for detecting surface defects of a product according to an embodiment of the present disclosure;
FIG. 6 is a schematic view of an apparatus for detecting surface defects of a product according to an embodiment of the present disclosure;
fig. 7 is a schematic view of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
An embodiment of the present application introduces a method for detecting surface defects of a product, referring to fig. 1, the method includes the following steps:
step 101: and acquiring a target image containing a product to be detected.
In this embodiment, the product to be detected refers to a processed product having a plurality of objects to be detected on a surface thereof, for example, the objects to be detected may be a plurality of patterns printed on the surface of the product, or a plurality of convex or concave regions formed on the surface of the product due to periodic fluctuation of the surface of the product. One possible product to be inspected is a corrugated pipe, the surface of which has a plurality of corrugations with the same shape, wherein a single corrugation can be used as an object to be inspected, so that the corrugated pipe product can be inspected for surface defects by using the method provided by the embodiment.
Step 102: and processing the target image to obtain a plurality of images to be detected.
In this embodiment, each image to be detected includes an image of a single object to be detected of the product to be detected, for example, a single ripple on the corrugated tube.
In one embodiment, referring to fig. 2, step 102 may include the steps of:
step 1021: performing threshold segmentation on the target image to obtain K1 connected regions, and determining K2 connected regions meeting a first preset requirement from K1 connected regions.
In this embodiment, after performing threshold segmentation on a target image, a corresponding binary image is obtained, each object to be detected will form a connected region of a specific shape in the binary image, the connected regions in the binary image are extracted, and K1 connected regions are obtained in total.
The plurality of objects to be detected on the product have the same shape, and therefore form similarly shaped connected regions in the binary image. Besides the connected regions of the objects to be detected, the K1 connected regions also include interference regions from non-target objects. By setting a first preset requirement related to the shape parameters of the connected regions, the objects to be detected can be distinguished from the rest: interference regions differing greatly from the connected regions of the objects to be detected are filtered out, finally leaving the K2 connected regions. This achieves the effect of a preliminary screening.
Step 1022: and determining the mass center of each of the K2 connected regions, and mapping each mass center to the position of the target image to obtain K2 target segmentation images.
After step 1021, the number of connected regions is reduced from the initial K1 to K2. The centroid of each of the K2 connected regions is determined, each centroid point is mapped onto the target image, and the target image is then segmented with each centroid point as a centre, yielding K2 target segmentation images. Some of these images contain a single object to be detected on the product, while the others contain non-target content; that is, interference images are mixed among the K2 target segmentation images, and step 1023 screens out the images that genuinely contain an object to be detected.
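The centroid-centred segmentation of step 1022 can be sketched minimally as below; the patch size is a hypothetical parameter, since the patent does not fix it:

```python
import numpy as np

def crop_around(image, centroids, patch_h, patch_w):
    """Cut one patch per centroid out of the target image; each patch is
    centred on a centroid and clamped so it stays inside the image."""
    h, w = image.shape[:2]
    patches = []
    for yc, xc in centroids:
        y0 = int(round(yc)) - patch_h // 2
        x0 = int(round(xc)) - patch_w // 2
        y0 = min(max(y0, 0), h - patch_h)   # clamp so the patch fits
        x0 = min(max(x0, 0), w - patch_w)
        patches.append(image[y0:y0 + patch_h, x0:x0 + patch_w])
    return patches
```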
Step 1023: k3 images to be detected which meet second preset requirements are determined from the K2 target segmentation images.
For K2 target segmentation images, as the textures of the object to be detected and the object not to be detected in the images are obviously different, the interference images with larger texture difference with the object to be detected can be filtered by setting a second preset requirement related to the texture information of the target segmentation images, so that the secondary screening effect is achieved, and finally K3 images to be detected which are really used for defect detection are obtained.
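The secondary screening by gradient-direction histograms can be sketched as follows; this is a hedged reading in which the "variance" is taken as the mean squared deviation between normalised histograms, and all names are illustrative:

```python
import numpy as np

def hog_variance(hist, std_hist):
    """Mean squared deviation of a patch's gradient-direction histogram
    from the standard histogram (both normalised to sum to 1)."""
    h = np.asarray(hist, float)
    s = np.asarray(std_hist, float)
    h, s = h / h.sum(), s / s.sum()
    return float(np.mean((h - s) ** 2))

def secondary_screen(hists, std_hist, t4):
    """Keep the indices of patches whose histogram deviation from the
    standard is below the fourth threshold t4."""
    return [i for i, h in enumerate(hists) if hog_variance(h, std_hist) < t4]
```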
In the aforementioned steps 1021-1023, the images to be detected are obtained through a preliminary screening on shape and a secondary screening on texture.
After step 102, step 103 is performed: and respectively calculating d invariant moments corresponding to each image to be detected, and carrying out surface defect detection on the product to be detected according to the d invariant moments of each image to be detected and the d standard invariant moments corresponding to the standard product image.
The calculation process of the d invariant moments is as follows: compute the second-order gradient of each image to be detected in the horizontal direction and in the vertical direction, and compute the Hu invariant moments of each second-order gradient, i.e. b invariant moments in each of the two directions, giving D = 2b invariant moments per image to be detected, where b and D are positive integers and D is larger than d. The principal components of the D invariant moments are then determined using a Principal Component Analysis (PCA) algorithm to obtain the d invariant moments. Because some of the D invariant moments obtained from the product to be detected may contribute little in practice, and may even interfere with subsequent calculation, the PCA algorithm reduces the original D invariant moments to d. This, first, reduces the dimensionality of the feature parameters, so that the resulting d invariant moments reflect the structural change of the image to be detected more accurately; and second, it reduces the amount of computation and speeds up the algorithm.
The second-order gradient in this embodiment zeroes smoothly varying areas of the image to be detected while highlighting abrupt changes. Since surface defects of a product are generally sharp, a defective part of the image is highlighted after the second-order gradient calculation; and because invariant moments characterise the geometric features of an image, the difference between the computed invariant moments and the standard invariant moments of a defect-free surface becomes larger, making it easy to judge whether a defect exists in the image to be detected.
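The moment computation can be sketched as follows: the seven Hu invariants are computed from the second-order gradients in both directions (D = 14 here, an assumed value), and PCA then reduces the feature matrix to d components. This is an illustrative reconstruction, not the patent's code:

```python
import numpy as np

def hu_moments(img):
    """The seven Hu invariant moments of a non-negative 2-D array."""
    img = np.abs(np.asarray(img, float))
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):                      # normalised central moment
        mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

def invariant_features(patch):
    """D = 14 invariants: Hu moments of the second-order gradient in the
    horizontal and in the vertical direction."""
    p = np.asarray(patch, float)
    d2x = np.gradient(np.gradient(p, axis=1), axis=1)   # horizontal 2nd gradient
    d2y = np.gradient(np.gradient(p, axis=0), axis=0)   # vertical 2nd gradient
    return np.concatenate([hu_moments(d2x), hu_moments(d2y)])

def pca_reduce(features, d):
    """PCA over the feature rows, keeping the top-d principal components."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:d].T
```

The test of translation invariance (same blob, shifted position, identical moments) is exactly the property that makes Hu moments suitable here.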
In one embodiment, referring to FIG. 3, step 103 comprises:
step 1031: and respectively calculating the variance of the d invariant moments of each image to be detected relative to the d standard invariant moments.
The d standard invariant moments are obtained from a pre-obtained standard product image. The standard product refers to a processed product of the same batch and model as the product to be detected but without surface defects, and the standard product image is captured in the same detection environment as the target image. The same steps applied to the target image are applied to the standard product image to obtain K3 standard images to be detected; d invariant moments are obtained for each standard image, and the average of each invariant moment over these images gives the d standard invariant moments.
The variance of the d invariant moments with respect to the d standard invariant moments is:

σ = (1/d) · Σ_{p=1}^{d} (m_p - M_p)²

wherein m_p represents the p-th invariant moment corresponding to the image to be detected, and M_p represents the p-th standard invariant moment corresponding to the standard product image.
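Assuming the variance is the mean squared deviation of the d invariant moments from the d standard ones, steps 1031-1032 reduce to a few lines (names and the averaging convention are illustrative):

```python
import numpy as np

def moment_variance(m, M):
    """sigma = (1/d) * sum over p of (m_p - M_p)^2, for the d invariant
    moments of one image against the d standard invariant moments."""
    m, M = np.asarray(m, float), np.asarray(M, float)
    return float(np.mean((m - M) ** 2))

def has_surface_defect(moment_rows, standard, t1):
    """The product is flagged defective if any of its K3 images deviates
    from the standard by more than the first threshold t1."""
    return any(moment_variance(row, standard) > t1 for row in moment_rows)
```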
Step 1032: and if the variance of any image to be detected in the plurality of images to be detected is greater than a first threshold value, determining that the product to be detected has surface defects.
K3 variances σ are obtained from the K3 images to be detected. A threshold t1 is set; when any one of the K3 variances is larger than t1, the surface of the product to be detected is defective. Optionally, when a surface defect is found on a product, the detection result can be sent in time to the terminal of the corresponding worker, reminding the worker to verify it. To help the worker observe the defect area quickly and accurately, the position in the target image of the object to be detected in each image to be detected is recorded when the images are obtained; when the variance of an image to be detected exceeds t1, the position of the defective object is marked in the target image, saving the worker's observation time and improving working efficiency.
In this embodiment, calculating the variance between the invariant moments and the standard invariant moments essentially measures their degree of deviation. Step 1031 therefore need not compute a variance specifically; any other index capable of representing the degree of deviation can be applied in its place.
It should be understood that the method of this embodiment can also detect a plurality of objects to be detected with different shapes. For example, when the K3 images to be detected are obtained, a number is recorded for each image that identifies the position of its object in the target image. When the variance between the d invariant moments of an image and the d standard invariant moments is calculated, the d standard invariant moments used by different images are then not all the same: the standard image with the matching number is found in the standard product image, and the d standard invariant moments are obtained from the d invariant moments of that standard image.
Optionally, referring to fig. 4, a specific implementation of steps 1021-1023 is described below to realise the acquisition of the images to be detected, including the following steps:
step 201: and filtering the target image to obtain a filtered gray image.
For example, the target image is filtered by using a Gabor filter, and the parameters of the Gabor filter are configured as follows: the wavelength λ is set to the pixel size of the horizontal length of an object to be detected in an image, for example, if one section of ripple of the corrugated pipe is 32 pixels in the horizontal direction, the wavelength is set to 32; the remaining parameters (such as direction, variance coefficient, aspect ratio, phase offset, etc.) may be set according to actual requirements, and this embodiment is not limited.
Step 202: and obtaining a binary image of the target image according to the size relation between the gray value of each pixel point in the gray image and the second threshold value.
Threshold segmentation is performed on the filtered gray image Img_gb(i, j) to obtain a binary image Img(i, j), where the binary image is obtained as:

Img(i, j) = 1, if Img_gb(i, j) > t2;  Img(i, j) = 0, otherwise,

where Img_gb(i, j) denotes the gray value of the pixel in the i-th row and j-th column of the image, and the threshold t2 = q · max(Img_gb(i, j)), with max(Img_gb(i, j)) being the maximum gray value in the gray image. The coefficient q ranges from 0 to 1 and, in one embodiment, takes any value from 0.3 to 0.5.
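The thresholding rule described above can be written as a short sketch; the helper name `binarize` and the default q = 0.4 are illustrative:

```python
import numpy as np

def binarize(gray, q=0.4):
    """Threshold segmentation of the filtered gray image: t2 = q * max(gray),
    with q typically chosen in [0.3, 0.5] per the embodiment."""
    t2 = q * float(gray.max())
    return (gray > t2).astype(np.uint8)
```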
Step 203: k1 connected regions in the binary image are acquired.
Step 204: the area of each of the K1 connected regions is calculated, as well as the eccentricity of the ellipse having the same standard second-order center distance as the connected region.
Step 205: and determining K2 connected regions from the K1 connected regions, wherein the areas of the connected regions are within the first threshold range, and the eccentricity corresponding to the connected regions is not less than a third threshold value.
The area of the connected region and the corresponding eccentricity are used as feature descriptors for the preliminary screening of connected regions. After filtering and binarization, objects to be detected of the same shape yield connected regions whose areas, and whose eccentricities of the ellipse with the same standard second-order central moments, fluctuate within a certain range but do not differ greatly overall. The area and the eccentricity are two complementary shape parameters that respectively measure the size and the shape of a connected region. By setting a threshold range for the area, connected regions whose areas differ too much from the object to be detected can be eliminated; at the same time, the eccentricity of a connected region must be no smaller than the set third threshold. For example, a single corrugation of a bellows is flat in shape, so the third threshold can be set to a relatively large value. Finally, the connected regions meeting both the area and the eccentricity requirements are screened out, giving the K2 qualifying connected regions.
The first threshold range and the third threshold are determined according to the specific result of filtering and binarizing the target image; the area and eccentricity thresholds can be determined from the processing result of the current target image or of the standard product image. In one embodiment, the first threshold range is set to 50-1000 and the third threshold for the eccentricity is set to 0.95; that is, a connected region with an area greater than 1000 or less than 50 is judged to be a region other than an object to be detected, and likewise a connected region with an eccentricity less than 0.95 is judged to be a region other than an object to be detected.
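The screening of steps 204-205 with the example thresholds can be sketched as a simple filter. The tuple layout `(label, area, eccentricity)` is an assumption for illustration; in practice these values might come from a region-properties routine such as `skimage.measure.regionprops`:

```python
def screen_regions(regions, area_range=(50, 1000), ecc_min=0.95):
    """Keep connected regions whose area lies within the first threshold
    range and whose eccentricity is at least the third threshold.
    `regions` is a list of (label, area, eccentricity) tuples; the default
    thresholds are the example values from the embodiment."""
    lo, hi = area_range
    return [r for r in regions if lo <= r[1] <= hi and r[2] >= ecc_min]
```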
Step 206: and (3) segmenting the target image by taking the centroids of the K2 connected regions as centers to obtain K2 original segmented images.
In this embodiment, before the target segmentation images are obtained, the target image is first segmented into K2 images of the same size; that is, an image area is framed with a rectangular frame of a first size centered on the centroid of each connected region, where the frame must be large enough to enclose at least two objects to be detected of the product, yielding K2 original segmented images. It should be noted, however, that the rectangular frame cannot be too large: if too many objects to be detected are framed, adjacent original segmented images will contain many repeated parts, and the frames corresponding to objects located at the edge will enclose too many regions that are not to be detected, which is unfavorable for the subsequent application of the gradient direction histogram.
Step 207: and calculating a gradient direction histogram of each original segmentation image, and determining the gradient direction with the largest number of pixel points in the gradient direction histogram.
The gradient direction of each pixel in each original segmented image is calculated, and the number of pixels falling into each of the pre-divided gradient direction intervals is counted to obtain the gradient direction histogram. For example, the gradient direction range from -180° to 180° is divided into one interval every 10°; the more intervals the gradient direction is divided into, the finer the angular resolution. Any gradient direction within the interval containing the largest number of pixels is then taken as the dominant gradient direction of the histogram.
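A minimal sketch of the histogram computation described above; the function name and the use of `np.gradient` for the per-pixel gradients are illustrative choices:

```python
import numpy as np

def gradient_direction_histogram(img, bin_width=10):
    """Histogram of per-pixel gradient directions over [-180°, 180°], one
    interval every `bin_width` degrees (36 intervals for 10°). Returns the
    histogram and the centre of the most populated interval as the
    dominant direction."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)                    # vertical, horizontal gradients
    angles = np.degrees(np.arctan2(gy, gx))      # in [-180, 180]
    bins = np.arange(-180, 180 + bin_width, bin_width)
    hist, _ = np.histogram(angles, bins=bins)
    dominant = bins[np.argmax(hist)] + bin_width / 2.0
    return hist, dominant
```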
Step 208: and correcting the angle of the object to be detected in the original segmentation image according to the gradient direction and the preset rotation direction to obtain a corrected segmentation image.
Angle correction is performed on the original segmented image according to the determined dominant gradient direction and the preset rotation direction; that is, the object to be detected in the original segmented image is rotated to an upright orientation, so that the angle, size and shape of the objects to be detected in the images tend to be consistent, which facilitates subsequent calculation.
It should be noted that in the angle correction process, the image rotation is performed in a discrete manner rather than by rotating the entire target image. In the discrete manner, the target image of the product to be detected is divided into single, independent original segmented images, and these discrete original segmented images are rotated separately. Compared with processing the whole image, this reduces the amount of calculation: because each discrete original segmented image covers only a small area, the histogram calculation and the image rotation are faster, so the angle correction speed is improved. Further, since in this embodiment the object to be detected is rotated so that it lies at the center of the corrected segmented image, the object can be cut out in step 209 with the smallest possible rectangular frame, introducing the fewest components that are not to be detected and thereby reducing the influence of noise.
Step 209: and respectively cutting each corrected segmentation image by taking the center of mass of the corrected segmentation image as the center to obtain K2 target segmentation images.
And (3) cutting the rotated image, namely selecting an image area by adopting a rectangular frame with a second size and taking the center of mass of the segmented image as a central frame, wherein the size of the rectangular frame is required to be capable of selecting a single object to be detected of a product to be detected, and finally obtaining K2 target segmented images.
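The centroid-centered framing used in steps 206 and 209 can be sketched as a clipped crop; the helper name `crop_centered` is illustrative:

```python
import numpy as np

def crop_centered(img, centroid, height, width):
    """Select an image area with a rectangular frame of the given size
    centred on the connected-region centroid (row, col), clipping the
    frame at the image border. Used with the first-size frame for the
    original segmented images and the second-size frame for the target
    segmented images."""
    cy, cx = int(round(centroid[0])), int(round(centroid[1]))
    top = max(cy - height // 2, 0)
    left = max(cx - width // 2, 0)
    bottom = min(top + height, img.shape[0])
    right = min(left + width, img.shape[1])
    return img[top:bottom, left:right]
```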
It should be understood that the above steps 206-209 are only one possible implementation of obtaining the target segmentation images. In practical applications, the camera that captures the target image may be adjusted so that, at imaging time, the objects to be detected of the product are directly parallel or perpendicular to the horizontal direction of the target image; the angle correction step can then be omitted, that is, the rectangular frame of the second size is used directly and the K2 target segmentation images are obtained by segmenting with the centroids of the K2 connected regions as centers.
Step 210: and calculating a gradient direction histogram of the target segmentation image, and calculating the variance of the gradient direction histogram of each target segmentation image relative to the standard gradient direction histogram.
The step of calculating the gradient direction histogram is the same as step 207 and is not repeated here; the statistics finally give the number of pixels of the target segmented image in the different gradient directions. The gradient direction histogram divides -180° to 180° into N gradient direction intervals.
The variance of the gradient direction histogram of the target segmentation image relative to the standard gradient direction histogram is:

S = (1/N) · Σ_{q=1}^{N} (n_q − N_q)²

where n_q is the number of pixels in the q-th gradient direction interval of the histogram of the target segmented image, and N_q is the number of pixels in the q-th gradient direction interval of the standard gradient direction histogram of the standard product image. The standard gradient direction histogram is obtained by performing the same steps on the standard product image as on the target image to obtain K3 standard target segmentation images, computing a gradient direction histogram for each, and then averaging the pixel counts in each gradient direction interval.
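The per-interval comparison against the standard histogram can be sketched as follows; the helper name `histogram_variance` is illustrative, and the mean squared difference over the N intervals is the assumed form of the variance:

```python
import numpy as np

def histogram_variance(hist, standard_hist):
    """Variance of a target segmented image's gradient direction histogram
    relative to the standard histogram: the mean, over the N gradient
    direction intervals, of the squared difference of pixel counts."""
    n = np.asarray(hist, dtype=float)
    N = np.asarray(standard_hist, dtype=float)
    return float(np.mean((n - N) ** 2))
```

Target segmented images whose variance is not less than the fourth threshold would then be rejected, as described in step 211.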
Step 211: and determining K3 target segmentation images with the variance smaller than a fourth threshold value from K2 target segmentation images to obtain K3 images to be detected.
The purpose of this step is to distinguish, among the obtained target segmentation images, the images containing an object to be detected from the images containing something else; the feature descriptor used for the distinction is the distribution of gradient directions in the image. When the variance obtained for a target segmentation image is not less than the fourth threshold, the texture information of that image differs significantly from the texture of an actual object to be detected, so the image is removed. Finally, K3 qualifying target segmentation images remain, that is, the K3 images to be detected used for the final defect detection.
In this embodiment, the gradient direction describes the texture information in a target segmentation image well and is also robust to illumination changes. In the environment where the product to be detected is actually photographed, good lighting conditions cannot be guaranteed, so several objects to be detected in the target image are likely to be partly bright and partly dark; however, because the objects to be detected have the same shape and texture, the distribution of their gradient directions does not change much, which guarantees the reliability and accuracy of the method.
It is to be understood that the above steps 201-211 provide only one possible implementation, and the acquisition of the images to be detected may be realized in ways other than the above scheme. For example, a template-matching-based method may be used: a template of the image to be detected is correlated with the target image to search for targets similar to the image to be detected, the correlation values are thresholded to obtain a corresponding binary image, and the screening of connected regions and the segmentation of the image can then be performed on that binary image to obtain the images to be detected.
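The template-matching alternative can be sketched with a plain normalized cross-correlation. The helper `match_template` is an illustrative, unoptimized loop, not the patent's implementation; a library routine (e.g. OpenCV's `cv2.matchTemplate`) would be used in practice:

```python
import numpy as np

def match_template(image, template):
    """Normalized cross-correlation of a to-be-detected-image template
    against the target image (valid positions only). Peaks above a
    threshold would then be binarised and the resulting connected regions
    screened and segmented, as in the main scheme."""
    th, tw = template.shape
    t = template - template.mean()
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + th, j:j + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            out[i, j] = (w * t).sum() / denom if denom else 0.0
    return out
```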
The method for detecting surface defects of products provided by this embodiment can be used for online defect detection on a production line of the product to be detected, that is, after a product is processed, its image is captured in real time and the detection result is obtained in real time; it can also be used for offline surface defect detection after a target image of the product to be detected has been obtained.
The detection method provided by this embodiment has the following characteristic: in practice, the surface defects of a product may be of different types, shapes and sizes, but they all damage the original texture information of the product, so an object to be detected that carries a defect differs noticeably from the other, defect-free objects.
Referring to fig. 5, the present embodiment further provides a system 300 for detecting surface defects of a product, which is used to implement the detection method of the foregoing embodiment. The detection system 300 comprises a processing terminal 301 and a plurality of cameras 302, wherein each camera 302 is respectively arranged around a product to be detected and is used for shooting target images of different surfaces of the product to be detected, so that the obtained target images can completely reflect surface information of the product to be detected. Each camera 302 is in communication connection with the processing terminal 301, the processing terminal 301 receives a plurality of target images shot by the plurality of cameras 302, and performs the detection steps in the above method embodiment on each target image, and if any one of the target images indicates that the product to be detected has a surface defect, a detection result that the product has the defect can be output. The processing terminal 301 may be any one of a notebook computer, a tablet computer, a desktop computer, a server, and other computing devices with image processing capabilities.
Based on the same inventive concept, referring to fig. 6, an embodiment of the present application further provides an apparatus 400 for detecting surface defects of a product, the apparatus including:
the acquiring module 401 is configured to acquire a target image including a product to be detected, where the surface of the product to be detected has a plurality of objects to be detected;
an image detection module 402, configured to process the target image to obtain multiple images to be detected, where each image to be detected includes a single object to be detected of a product to be detected;
and the defect detection module 403 is configured to calculate d invariant moments corresponding to each image to be detected, and perform surface defect detection on the product to be detected according to the d invariant moments of each image to be detected and the d standard invariant moments corresponding to the pre-obtained standard product image.
Optionally, the shapes of the multiple objects to be detected are the same, and the defect detection module 403 is specifically configured to: respectively calculating the variance of d invariant moments of each image to be detected relative to the d standard invariant moments; and if the variance of any image to be detected in the plurality of images to be detected is greater than a first threshold value, determining that the product to be detected has surface defects.
Optionally, the shapes of the multiple objects to be detected are the same, and the image detection module 402 is specifically configured to: performing threshold segmentation on the target image to obtain K1 connected regions, and determining K2 connected regions meeting first preset requirements from the K1 connected regions, wherein the first preset requirements are related to shape parameters of the connected regions, K1 and K2 are positive integers, and K1> K2; determining the centroid of each of the K2 connected regions, and mapping to the position of the target image according to each centroid to obtain K2 target segmentation images; and determining K3 images to be detected which meet second preset requirements from the K2 target segmentation images, wherein the second preset requirements are related to texture information of the target segmentation images, K3 is a positive integer, and K2 is more than K3.
Optionally, the image detection module 402 is specifically configured to: filtering the target image to obtain a filtered gray image; obtaining a binary image of the target image according to the size relationship between the gray value of each pixel point in the gray image and a second threshold value, wherein the second threshold value is the product of a preset coefficient and the maximum gray value in the gray image, and a single object to be detected on a product to be detected forms a connected region in the binary image; and acquiring K1 connected regions in the binary image.
Optionally, the image detection module 402 is specifically configured to: calculate the area of each of the K1 connected regions and the eccentricity of the ellipse having the same standard second-order central moments as the connected region; and determine, from the K1 connected regions, the K2 connected regions meeting the following requirements: the area of the connected region is within the first threshold range, and the eccentricity corresponding to the connected region is not smaller than the third threshold.
Optionally, the image detection module 402 is specifically configured to: segmenting the target image by taking the centroids of the K2 connected regions as centers to obtain K2 original segmented images; calculating a gradient direction histogram of each original segmentation image, wherein the gradient direction histogram represents statistics of the number of pixel points of the original segmentation image in different gradient directions; determining a gradient direction with the largest number of pixel points in the gradient direction histogram, and correcting the angle of the object to be detected in the original segmentation image according to the gradient direction and a preset rotation direction to obtain a corrected segmentation image; and respectively cutting each corrected segmentation image by taking the center of mass of the corrected segmentation image as the center to obtain K2 target segmentation images.
Optionally, the image detection module 402 is specifically configured to: calculating a gradient direction histogram of the target segmentation image, wherein the gradient direction histogram represents statistics of the number of pixel points of the target segmentation image in different gradient directions; calculating the variance of the gradient direction histogram of each target segmentation image relative to a standard gradient direction histogram corresponding to a standard product image obtained in advance; k3 images to be detected which meet the following requirements are determined from the K2 target segmentation images: and the variance corresponding to the target segmentation image is smaller than a fourth threshold value.
Optionally, the defect detecting module 403 is specifically configured to: respectively calculate second-order gradients of each image to be detected in the horizontal direction and the vertical direction, and calculate D invariant moments of the second-order gradients by using the Hu invariant moments, where D is a positive integer and D is larger than d; and determine the principal components of the D invariant moments by using a principal component analysis algorithm to obtain the d invariant moments.
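As an illustration of the Hu-moment step, the following sketch computes the seven classic Hu invariant moments of an image in NumPy. The patent additionally applies this to second-order gradient images and then selects d principal components with PCA, both of which are omitted here; the function name is illustrative:

```python
import numpy as np

def hu_moments(img):
    """The seven Hu invariant moments of a 2-D image, computed from the
    normalized central moments; invariant to translation (and, for the
    classic set, to scale and rotation)."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):   # central moment
        return (((x - cx) ** p) * ((y - cy) ** q) * img).sum()

    def eta(p, q):  # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

A PCA over the moment vectors of many images would then retain the d most significant components, as the module description states.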
Optionally, the multiple objects to be detected are multiple protrusions with the same shape on the surface of the product to be detected, for example, the product to be detected is a corrugated pipe, a single corrugation on the corrugated pipe protrudes from the surface of the product, and the single corrugation on the corrugated pipe serves as one object to be detected.
The basic principle and the technical effects of the detection apparatus provided above are the same as those of the previous method embodiment, and for the sake of brief description, corresponding contents in the method embodiment may be referred to where not mentioned in this embodiment, and are not described herein again.
Embodiments of the present application also provide a storage medium, which stores a program, and when the program is executed by a processor, the method for detecting surface defects of a product according to the above embodiments of the present application is performed.
Referring to fig. 7, the present embodiment provides an electronic device 500, which includes a processor 501 and a memory 502, where the memory 502 stores at least one instruction, at least one program, code set, or instruction set, and the at least one instruction, at least one program, code set, or instruction set is loaded and executed by the processor 501, so as to implement the method for detecting surface defects of a product provided by the foregoing embodiment. The electronic device 500 may further comprise a communication bus 503, wherein the processor 501 and the memory 502 are in communication with each other via the communication bus 503. The memory 502 may include high-speed random access memory (as a cache) and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. A communication bus 503 is a circuit that connects the described elements and enables transmission between these elements. For example, the processor 501 receives commands from other elements through the communication bus 503, decodes the received commands, and performs calculations or data processing according to the decoded commands.
The electronic device 500 may correspond to a processing terminal in the above-mentioned system for detecting surface defects of products, and is configured to perform image processing on the acquired target image to detect surface defects of products to be detected.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form. The functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. A method for detecting surface defects of a product, the method comprising:
acquiring a target image containing a product to be detected, wherein the surface of the product to be detected is provided with a plurality of objects to be detected;
processing the target image to obtain a plurality of images to be detected, wherein each image to be detected comprises a single object to be detected of a product to be detected, and the plurality of objects to be detected are identical in shape;
the target image is processed to obtain a plurality of images to be detected, and the method comprises the following steps:
performing threshold segmentation on the target image to obtain K1 connected regions, and determining K2 connected regions meeting first preset requirements from the K1 connected regions, wherein the first preset requirements are related to shape parameters of the connected regions, K1 and K2 are positive integers, and K1> K2; determining the centroid of each of the K2 connected regions, and mapping to the position of the target image according to each centroid to obtain K2 target segmentation images; determining K3 images to be detected which meet second preset requirements from the K2 target segmentation images, wherein the second preset requirements are related to texture information of the target segmentation images, K3 is a positive integer, and K2 is greater than K3;
respectively calculating d invariant moments corresponding to each image to be detected, carrying out surface defect detection on the product to be detected according to the d invariant moments of each image to be detected and d standard invariant moments corresponding to a pre-obtained standard product image, and respectively calculating the variance of the d invariant moments of each image to be detected relative to the d standard invariant moments; and if the variance of any one image to be detected in the plurality of images to be detected is greater than a first threshold value, determining that the product to be detected has surface defects, wherein d is a positive integer, and the range of the first threshold value is set to be 50-1000.
2. The method according to claim 1, wherein the threshold segmentation of the target image to obtain K1 connected regions comprises:
filtering the target image to obtain a filtered gray image;
obtaining a binary image of the target image according to the size relationship between the gray value of each pixel point in the gray image and a second threshold value, wherein the second threshold value is the product of a preset coefficient and the maximum gray value in the gray image, and a single object to be detected on a product to be detected forms a connected region in the binary image;
and acquiring K1 connected regions in the binary image.
3. The method of claim 1, wherein said determining K2 connected regions from said K1 connected regions that satisfy a first predetermined requirement comprises:
calculating the area of each of the K1 connected regions and the eccentricity of an ellipse having the same standard second-order central moments as the connected region;
determining K2 connected regions which meet the following requirements from the K1 connected regions: the area of the connected region is within a first threshold range, and the eccentricity corresponding to the connected region is not smaller than a third threshold.
4. The method of claim 1, wherein said mapping each centroid to the position of the target image to obtain K2 target segmentation images comprises:
segmenting the target image by taking the centroids of the K2 connected regions as centers to obtain K2 original segmented images;
calculating a gradient direction histogram of each original segmentation image, wherein the gradient direction histogram represents statistics of the number of pixel points of the original segmentation image in different gradient directions;
determining a gradient direction with the largest number of pixel points in the gradient direction histogram, and correcting the angle of the object to be detected in the original segmentation image according to the gradient direction and a preset rotation direction to obtain a corrected segmentation image;
and respectively cutting each corrected segmentation image by taking the center of mass of the corrected segmentation image as the center to obtain K2 target segmentation images.
5. The method according to claim 1, wherein the determining K3 images to be detected which meet a second preset requirement from the K2 target segmented images comprises:
calculating a gradient direction histogram of the target segmentation image, wherein the gradient direction histogram represents statistics of the number of pixel points of the target segmentation image in different gradient directions;
calculating the variance of the gradient direction histogram of each target segmentation image relative to a standard gradient direction histogram corresponding to a standard product image obtained in advance;
k3 images to be detected which meet the following requirements are determined from the K2 target segmentation images: and the variance corresponding to the target segmentation image is smaller than a fourth threshold value.
6. The method according to claim 1, wherein the calculating d invariant moments for each image to be detected comprises:
respectively calculating second-order gradients of each image to be detected in the horizontal direction and the vertical direction, and calculating D invariant moments of the second-order gradients by using the hu invariant moments, wherein D is a positive integer and D is larger than d;
and determining principal components of the D invariant moments by using a principal component analysis algorithm to obtain D invariant moments.
7. The method according to any one of claims 1 to 6, wherein the plurality of objects to be detected are a plurality of protrusions with the same shape on the surface of the product to be detected.
8. An apparatus for detecting surface defects of a product, the apparatus comprising:
the device comprises an acquisition module, a detection module and a display module, wherein the acquisition module is used for acquiring a target image containing a product to be detected, and the surface of the product to be detected is provided with a plurality of objects to be detected;
the image detection module is used for processing the target image to obtain a plurality of images to be detected, wherein each image to be detected comprises a single object to be detected of a product to be detected, and the plurality of objects to be detected are identical in shape; in processing the target image to obtain a plurality of images to be detected, the image detection module is specifically configured to: performing threshold segmentation on the target image to obtain K1 connected regions, and determining K2 connected regions meeting first preset requirements from the K1 connected regions, wherein the first preset requirements are related to shape parameters of the connected regions, K1 and K2 are positive integers, and K1> K2; determining the centroid of each of the K2 connected regions, and mapping to the position of the target image according to each centroid to obtain K2 target segmentation images; determining K3 images to be detected which meet second preset requirements from the K2 target segmentation images, wherein the second preset requirements are related to texture information of the target segmentation images, K3 is a positive integer, and K2 is greater than K3;
the defect detection module is used for respectively calculating d invariant moments corresponding to each image to be detected, and carrying out surface defect detection on the product to be detected according to the d invariant moments of each image to be detected and d standard invariant moments corresponding to a pre-obtained standard product image; the defect detection module is specifically configured to calculate, for each image to be detected, the variance of its d invariant moments relative to the d standard invariant moments, and if the variance of any one image to be detected in the plurality of images to be detected is greater than a first threshold value, determine that the product to be detected has surface defects, wherein d is a positive integer, and the first threshold value is set in the range of 50-1000.
9. A storage medium, having stored thereon a computer program which, when executed by a processor, performs the method according to any one of claims 1-7.
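The segmentation steps recited in the claims above (threshold segmentation into K1 connected regions, shape-based filtering down to K2 regions, then centroid extraction for cropping) can be illustrated with a minimal numpy sketch. This is not the patented implementation: the function names, the 4-connectivity choice, and the specific shape criteria (minimum area and bounding-box aspect ratio) are illustrative assumptions standing in for the unspecified "first preset requirements".

```python
import numpy as np
from collections import deque

def threshold_segment(img, thresh):
    """Binary mask of pixels above a grey-level threshold."""
    return np.asarray(img) > thresh

def connected_regions(mask):
    """4-connected components of a boolean mask (the K1 regions),
    returned as lists of (row, col) pixel coordinates."""
    mask = np.asarray(mask, dtype=bool)
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                pixels = []
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                regions.append(pixels)
    return regions

def filter_by_shape(regions, min_area, max_aspect):
    """Keep the K2 regions whose area and bounding-box aspect ratio are
    plausible for an object to be detected (an assumed shape criterion)."""
    kept = []
    for px in regions:
        ys = [p[0] for p in px]
        xs = [p[1] for p in px]
        hgt = max(ys) - min(ys) + 1
        wid = max(xs) - min(xs) + 1
        aspect = max(hgt, wid) / min(hgt, wid)
        if len(px) >= min_area and aspect <= max_aspect:
            kept.append(px)
    return kept

def centroids(regions):
    """Centroid of each kept region; these positions would be mapped back
    onto the target image to crop the K2 target segmentation images."""
    return [(sum(p[0] for p in px) / len(px),
             sum(p[1] for p in px) / len(px)) for px in regions]
```

The texture-based step from K2 down to K3 images is left out here, since the claim does not specify which texture statistic the "second preset requirements" use.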
CN201910831247.0A 2019-09-04 2019-09-04 Method and device for detecting surface defects of product and storage medium Active CN110517265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910831247.0A CN110517265B (en) 2019-09-04 2019-09-04 Method and device for detecting surface defects of product and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910831247.0A CN110517265B (en) 2019-09-04 2019-09-04 Method and device for detecting surface defects of product and storage medium

Publications (2)

Publication Number Publication Date
CN110517265A CN110517265A (en) 2019-11-29
CN110517265B true CN110517265B (en) 2022-03-01

Family

ID=68629640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910831247.0A Active CN110517265B (en) 2019-09-04 2019-09-04 Method and device for detecting surface defects of product and storage medium

Country Status (1)

Country Link
CN (1) CN110517265B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127571B (en) * 2019-12-03 2023-12-29 歌尔股份有限公司 Small sample defect classification method and device
CN111598832A (en) * 2020-04-01 2020-08-28 江门荣信电路板有限公司 Slot defect marking method and device and storage medium
CN111693533B (en) * 2020-06-11 2023-01-20 南通通富微电子有限公司 Workpiece surface quality detection method and device and appearance machine
TWI743837B (en) * 2020-06-16 2021-10-21 緯創資通股份有限公司 Training data increment method, electronic apparatus and computer-readable medium
CN112001841A (en) * 2020-07-14 2020-11-27 歌尔股份有限公司 Image to-be-detected region extraction method and device and product defect detection system
CN112001902A (en) * 2020-08-19 2020-11-27 上海商汤智能科技有限公司 Defect detection method and related device, equipment and storage medium
CN114419039B (en) * 2022-03-28 2022-06-24 武汉市融科优品装饰材料有限公司 Decorative wallpaper defect detection method based on template matching
CN115880288B (en) * 2023-02-21 2023-11-14 深圳市兆兴博拓科技股份有限公司 Detection method, system and computer equipment for electronic element welding

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508110B (en) * 2011-10-10 2014-07-09 上海大学 Texture-based insulator fault diagnostic method
KR101361922B1 (en) * 2012-10-19 2014-02-21 서울여자대학교 산학협력단 Automatic detection system of porosity defect using density and shape information in industrial ct and controlling method therefor
CN106778734B (en) * 2016-11-10 2020-04-21 华北电力大学(保定) Sparse representation-based insulator string-falling defect detection method
CN106600600A (en) * 2016-12-26 2017-04-26 华南理工大学 Wafer defect detection method based on characteristic matching
CN107301422A (en) * 2017-05-24 2017-10-27 广西师范大学 A kind of bamboo cane method for sorting colors and system based on color characteristic and Hu squares
CN109493339B (en) * 2018-11-20 2022-02-18 北京嘉恒中自图像技术有限公司 Endoscope imaging-based method for detecting defects of pores on inner surface of casting
CN109785285B (en) * 2018-12-11 2023-08-08 西安工程大学 Insulator damage detection method based on ellipse characteristic fitting

Also Published As

Publication number Publication date
CN110517265A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110517265B (en) Method and device for detecting surface defects of product and storage medium
WO2022042579A1 (en) Lcd screen defect detection method and apparatus
CN113109368B (en) Glass crack detection method, device, equipment and medium
CN111879735B (en) Rice appearance quality detection method based on image
CN113608378B (en) Full-automatic defect detection method and system based on LCD (liquid crystal display) process
CN110660072B (en) Method and device for identifying straight line edge, storage medium and electronic equipment
CN116703909B (en) Intelligent detection method for production quality of power adapter
CN108710852B (en) Particle size distribution image recognition method and system for limiting shooting depth
CN116152242B (en) Visual detection system of natural leather defect for basketball
EP2973223B1 (en) Enhanced analysis for image-based serpentine belt wear evaluation
CN110807354B (en) Industrial assembly line product counting method
CN114821274A (en) Method and device for identifying state of split and combined indicator
CN113902652A (en) Speckle image correction method, depth calculation method, device, medium, and apparatus
CN116612125B (en) Artificial intelligence-based food and drug capsule quality detection method
CN112950594A (en) Method and device for detecting surface defects of product and storage medium
CN115690747B (en) Vehicle blind area detection model test method and device, electronic equipment and storage medium
CN117274211A (en) Screen defect detection method and device, terminal equipment and storage medium
CN114092385A (en) Industrial machine fault detection method and device based on machine vision
CN113409297A (en) Aggregate volume calculation method, particle form grading data generation method, system and equipment
CN117974646B (en) Visual inspection method for coating quality of optical fiber surface
CN108765365A (en) A kind of rotor winding image qualification detection method
CN117876376B (en) High-speed multifunctional connector quality visual detection method
CN110660073B (en) Straight line edge recognition equipment
CN115471469A (en) Sandstone specification machine vision detection method and system, electronic equipment and storage medium
CN116503371A (en) Method for detecting front and back sides of photovoltaic glass

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant