CN118130503A - Object defect detection method, system, storage medium and electronic equipment - Google Patents


Info

Publication number
CN118130503A
CN118130503A (application CN202410200247.1A)
Authority
CN
China
Prior art keywords
image
detection
defect
area
reference image
Prior art date
Legal status
Pending
Application number
CN202410200247.1A
Other languages
Chinese (zh)
Inventor
夏轩
何星
童浩然
张晓光
丁宁
邝安朋
Current Assignee
Shenzhen Institute of Artificial Intelligence and Robotics
Original Assignee
Shenzhen Institute of Artificial Intelligence and Robotics
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Artificial Intelligence and Robotics filed Critical Shenzhen Institute of Artificial Intelligence and Robotics
Priority to CN202410200247.1A priority Critical patent/CN118130503A/en
Publication of CN118130503A publication Critical patent/CN118130503A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides an object defect detection method comprising the following steps: acquiring a detection image of an object to be detected; collecting a normal sample image of the object to be detected as a reference image and automatically marking a group of alignment frames; aligning the reference image and the detection image through the alignment frames; and inputting the reference image and the detection image into a defect detection network to locate the defect position of the object to be detected. The application can automatically align the images shot by two passes of a line-scan camera over the same article and eliminate detection errors caused by motion distortion. Defect detection is performed without training a dedicated detection model in advance, which simplifies the detection flow and improves detection speed. The application also provides an object defect detection system, a storage medium and an electronic device, which have the same beneficial effects.

Description

Object defect detection method, system, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a method, a system, a storage medium, and an electronic device for detecting object defects.
Background
Currently, line-scan cameras are often used for defect detection on object surfaces. A line-scan camera is an imaging device widely used in industry and machine vision that captures only one line of the image at a time, so relative motion between the camera and the subject is required to construct a complete two-dimensional image.
Line-scan imaging therefore requires relative motion between the inspected object and the scanning system, but uneven motion causes imaging distortion. As a result, multi-frame images of the moving object at different moments must be acquired for matching against template images, which increases the overall time consumed by image matching, while the imaging distortion itself easily causes defect detection errors.
Disclosure of Invention
The application aims to provide an object defect detection method, an object defect detection system, a storage medium and an electronic device, wherein, by collecting a normal sample only once, the images shot by two passes of a line-scan camera over the same object can be automatically aligned and detection errors caused by motion distortion can be eliminated.
In order to solve the technical problems, the application provides an object defect detection method, which comprises the following specific technical scheme:
Acquiring a detection image of an object to be detected;
collecting a normal sample image of the object to be detected as a reference image, and automatically marking a group of alignment frames;
aligning the reference image and the detection image by the alignment frame;
the reference image and the detection image are input to a defect detection network to locate a defect position of the object to be detected.
Optionally, the aligning the reference image and the detection image through the alignment frame includes:
calculating an edge map of the reference image by adopting a set edge detection algorithm, and performing projection integration along the vertical direction of the motion of the linear scanning camera to obtain an edge integral map of the reference image;
searching the peak position in the reference image edge integral graph, filtering the peak value lower than a threshold value, and determining a peak value area according to a preset value; the peak area is the area with the most obvious image texture characteristics;
Determining a plurality of corresponding reference image template areas in the reference image according to the peak areas, together with first initial coordinates of the reference image template areas on the motion direction axis;
Matching the reference image template area in the test image by using a template matching method to obtain a corresponding test image matching area and a second initial coordinate of the test image matching area on a motion direction axis;
Aligning the beginning of the reference image template area and the beginning of the test image matching area to obtain the beginning of the aligned test image;
And scaling the length corresponding to the second initial coordinate to the length corresponding to the first initial coordinate in the reference image, and splicing the length corresponding to the second initial coordinate with the beginning of the aligned test image to obtain a complete aligned test image.
Optionally, the aligning the beginning of the reference image template area and the beginning of the test image matching area, and obtaining the beginning of the aligned test image includes:
If the first start coordinate L_b1 of the second initial coordinates is larger than the first start coordinate L_a1 of the first initial coordinates, cutting a length of L_b1 - L_a1 from the beginning of the test image and keeping the remaining length L_a1 as the beginning of the aligned test image;
if the first start coordinate L_b1 of the second initial coordinates is smaller than the first start coordinate L_a1 of the first initial coordinates, padding zeros of length L_a1 - L_b1 at the beginning of the test image and then taking the first L_a1 length as the beginning of the aligned test image.
Optionally, inputting the reference image and the detection image into a defect detection network to locate a defect position of the object to be detected includes:
determining a general pre-training model, and setting the model as a feature extraction backbone network;
Respectively inputting the reference image and the aligned test image into the feature extraction backbone network to obtain a first feature image corresponding to the reference image under different scales and a second feature image corresponding to the aligned test image under different scales;
Calculating the similarity between the first characteristic diagram and the second characteristic diagram under different scales;
uniformly upsampling the similarity under different scales to be the size of an input image, and calculating a fusion value according to a fusion formula;
Comparing the fusion value with a preset defect threshold value; the areas with the fusion value larger than the defect threshold value correspond to defect areas, and the areas with the fusion value smaller than or equal to the defect threshold value are normal areas.
Optionally, if the object to be detected has a target detection area, the method further includes:
determining a target labeling frame of the target detection area;
aligning the reference image with the test image to obtain an aligned test image;
cutting out a target detection area in the reference image according to the coordinate information of the target labeling frame to obtain a reference area;
cutting out the corresponding areas in the aligned test images to obtain test areas;
respectively inputting the reference area and the test area into a feature extraction module to correspondingly obtain reference features and test features;
inputting the reference feature and the test feature into a defect detection module to obtain a thermodynamic diagram;
calculating the maximum thermodynamic value in the thermodynamic diagram and comparing the maximum thermodynamic value with a preset defect threshold;
and if the maximum thermal value is larger than the defect threshold value, confirming that the target detection area has defects.
Optionally, the determining the target labeling frame of the target detection area includes:
and labeling the target detection area in the reference image or automatically labeling the part area through a detection model to obtain a target labeling frame.
Optionally, cutting out a target detection area in the reference image according to the coordinate information of the target labeling frame, and when obtaining the reference area, further including:
acquiring each application state of the reference part corresponding to the target detection area to obtain a reference part group;
extracting a reference feature group corresponding to the reference part group;
Correspondingly, the inputting the reference feature and the test feature into the defect detection module, and obtaining the thermodynamic diagram includes:
And inputting the reference feature group and the test feature into a defect detection module to obtain a plurality of thermodynamic diagrams.
The application also provides an object defect detection system, comprising:
the image acquisition module is used for acquiring a detection image of the object to be detected;
the image marking module is used for collecting a normal sample image of the object to be detected as a reference image and automatically marking a group of alignment frames;
An image alignment module for aligning the reference image and the detection image by the alignment frame;
And the defect positioning module is used for inputting the reference image and the detection image into a defect detection network so as to position the defect position of the object to be detected.
The application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method as described above.
The application also provides an electronic device comprising a memory in which a computer program is stored and a processor which when calling the computer program in the memory implements the steps of the method as described above.
The application provides an object defect detection method, which comprises the following steps: acquiring a detection image of an object to be detected; collecting a normal sample image of the object to be detected as a reference image, and automatically marking a group of alignment frames; aligning the reference image and the detection image by the alignment frame; the reference image and the detection image are input to a defect detection network to locate a defect position of the object to be detected.
After the detection image of the object to be detected is obtained, image alignment can be performed by collecting a normal sample image only once as the reference image, and the two line-scan camera images of the same object can be automatically aligned, thereby eliminating detection errors caused by motion distortion. In theory, images of unlimited length can be aligned, and since only a detection image and a reference image are involved, no defect samples or labels are needed; the computation load is small, making the method suitable for processing high-resolution line-scan images. In addition, defect detection is performed without training a dedicated detection model in advance: a lightweight defect detection network can be applied without training, and any public or private pre-trained network can be used, which simplifies the detection flow and improves detection speed.
The application also provides an object defect detection system, a storage medium and electronic equipment, which have the beneficial effects and are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an object defect detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image alignment principle provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for part level defect detection according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an object defect detecting system according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Due to imaging distortion caused by motion, the computational pressure at high resolution and false detections in complex surface defect detection, existing deep learning techniques perform poorly at detecting defects in line-scan camera images. The invention provides a defect detection method and device for line-scan camera images, intended for detecting defects on large-size objects. The method first collects a normal sample image as a reference image and automatically marks a group of alignment frames; the reference image is aligned with the detection image through this group of alignment frames; finally, the reference image and the detection image are input into a defect detection network together, and defects are located according to the anomaly response.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, fig. 1 is a flowchart of a method for detecting an object defect according to an embodiment of the present application, where the method includes:
S101: acquiring a detection image of an object to be detected;
S102: collecting a normal sample image of the object to be detected as a reference image, and automatically marking a group of alignment frames;
S103: aligning the reference image and the detection image by the alignment frame;
S104: the reference image and the detection image are input to a defect detection network to locate a defect position of the object to be detected.
The present embodiment aims to compare the line-scan image of a normal sample with the line-scan image of a test sample, i.e. the test image, so as to obtain the defect region.
To this end, the reference image and the test image must first be aligned to eliminate the influence of imaging distortion caused by the differing motion of the two shots, yielding the aligned test image. Features are then extracted from the reference image and the aligned test image to obtain the reference features and test features. Finally, the reference features and test features are input into the defect detection network to obtain the detection result.
Firstly, a line scanning camera is used for obtaining a detection image of an object to be detected. And then collecting a normal sample image as a reference image, wherein the normal sample image is a shooting image of a line scanning camera corresponding to the standard sample.
The image alignment performed in step S103 is described first below:
In step S103, the reference image and the detection image are intended to be aligned by the above-marked alignment frame. Referring specifically to fig. 2, fig. 2 is a schematic diagram of an image alignment principle provided in an embodiment of the present application, and may include the following steps:
In the first step, an edge map of the reference image is calculated with a chosen edge detection algorithm, and projection integration is performed along the direction perpendicular to the motion of the line-scan camera to obtain the edge integral map of the reference image. In practice, a Sobel operator or a Canny operator can be used to calculate the edge map of reference image A, and projection integration perpendicular to the line-scan camera's motion yields the edge integral map E of the reference image;
In the second step, the peak positions in the reference image edge integral map are found, peaks below the peak threshold are filtered out, and peak areas are determined according to a predetermined value. Specifically, the peak positions in the edge integral map E are found, peaks that are too low are filtered out using the peak threshold T_E, and the peak areas are determined using the predetermined value L_p. A peak area corresponds to a region with the most distinctive image texture and serves as a good template for the subsequent template matching. As shown in the figure, these may be the edges of parts a_1, a_2, a_3 or plain line areas. The values of T_E and L_p must be set according to the actual application scenario;
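One way to realize this peak search, sketched with SciPy's peak finder (an assumed implementation choice; the function name and the centring of the L_p window on each peak are illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

def select_peak_regions(E: np.ndarray, T_E: float, L_p: int):
    """Find peaks in the edge integral profile E, discard those below the
    peak threshold T_E, and return (start, end) intervals of length L_p
    centred on each surviving peak. These intervals mark the most
    texture-distinctive regions, used later as matching templates.
    """
    peaks, _ = find_peaks(E, height=T_E)
    half = L_p // 2
    regions = []
    for p in peaks:
        start = max(0, p - half)
        end = min(len(E), start + L_p)
        regions.append((start, end))
    return regions
```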
In the third step, a plurality of corresponding reference image template areas are determined in the reference image according to the peak areas, together with the first initial coordinates of the template areas on the motion direction axis. Specifically, the reference image template areas p_a1, p_a2, p_a3, p_a4, p_a5, p_a6, … are determined in reference image A according to the peak areas, along with their first initial coordinates L_a1, L_a2, L_a3, L_a4, L_a5, L_a6, … on the motion direction axis;
In the fourth step, the reference image template areas are matched in the test image using a template matching method to obtain the corresponding test image matching areas and their second initial coordinates on the motion direction axis. Matching in test image B yields the test image matching areas p_b1, p_b2, p_b3, p_b4, p_b5, p_b6, … and their second initial coordinates L_b1, L_b2, L_b3, L_b4, L_b5, L_b6, … on the motion direction axis;
Template matching is a simple and widely used image processing technique for finding the most similar region to a given template image in a large image. Such methods are commonly used for image recognition, image tracking, or object detection tasks. The basic steps of template matching are as follows:
Selecting a template: first a template image is selected, typically a small image fragment, containing the object or pattern to be found in the large image.
Sliding window search: and sliding the template image on the original image, moving the template image by taking pixels as a unit, and calculating the matching degree of each position. This can be achieved by sliding a window, the size of which is the same as the template image.
Calculating similarity: and calculating the similarity between the template image and the corresponding region on the original image at each window position. Common similarity calculation methods include square error matching, normalized square error matching, correlation matching, normalized correlation matching, and the like.
Find the best match: in all window positions, the region most similar to the template image, i.e. the region with the highest similarity, is found. The similarity value for this region will be a peak, indicating that the template matches the region to the highest degree.
Determining a threshold: to reduce the likelihood of a mismatch, a threshold is typically set, and only if the similarity exceeds this threshold is a valid match found.
In the fifth step, the beginning of the reference image template area and the beginning of the test image matching area are aligned to obtain the beginning of the aligned test image. Specifically, the beginning of reference template area p_a1 and the beginning of test matching area p_b1 are aligned to obtain the beginning of aligned test image C. If L_b1 > L_a1, a length of L_b1 - L_a1 is cut from the beginning of test image B and the remaining length L_a1 is kept as the beginning of aligned test image C; if L_b1 < L_a1, zeros of length L_a1 - L_b1 are padded at the beginning of B, and the first L_a1 length is then taken as the beginning of aligned test image C.
In the sixth step, the length corresponding to each second initial coordinate is scaled to the length corresponding to the first initial coordinate in the reference image and spliced onto the beginning of the aligned test image to obtain the complete aligned test image. The image part corresponding to the second initial coordinate L_b2 in test image B is scaled to the length of the first initial coordinate L_a2 and spliced onto the beginning of aligned test image C; the image part corresponding to L_b3 is scaled to the length of L_a3 and spliced onto C; the image part corresponding to L_b4 is scaled to the length of L_a4 and spliced onto C; subsequent coordinates are processed in the same way, finally yielding the complete aligned test image C, as shown by c_1, c_2 and c_3 in fig. 2.
After the above processing, the aligned test image C and the reference image a are aligned in each template matching region.
In performing step S104, a general pre-trained model may first be determined and set up as the feature extraction backbone network, e.g. a ResNet pre-trained on the ImageNet dataset, set up as feature extraction backbone network M. The reference image and the aligned test image are then respectively input into the backbone to obtain the first feature maps F_a: F_a1, F_a2, F_a3, … corresponding to the reference image at different scales and the second feature maps F_c: F_c1, F_c2, F_c3, … corresponding to the aligned test image at different scales. It should be noted that which scales are extracted, and how many feature maps, can be set by those skilled in the art according to the actual application scenario.
The similarity between the first feature map F_a and the second feature map F_c is calculated at each scale, the similarities at different scales are uniformly up-sampled to the size of the input image, and a fusion value is calculated according to a fusion formula. The fusion value is compared with a preset defect threshold: areas whose fusion value is larger than the defect threshold correspond to defect areas, and areas whose fusion value is smaller than or equal to the defect threshold are normal areas.
The similarity can be calculated, for example, as a per-position anomaly score derived from the cosine similarity of the two feature maps at each scale i:

S_i = 1 - cos(F_ai, F_ci)

and the fusion formula may be as follows:

X = Σ_i UP(S_i)

where UP() represents up-sampling to the input image size and X is the thermodynamic diagram of the same size as A and C.
Finally, the values in X are compared with a preset defect threshold T_X. The value of T_X is set according to the actual application scenario; regions where the value in X is larger than T_X correspond to defect regions, and the remaining regions are normal.
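The similarity-upsample-fuse pipeline admits the following sketch, assuming PyTorch and taking per-pixel 1 - cosine similarity as the anomaly score at each scale (one common instantiation; the patent's exact formulas are not reproduced here):

```python
import torch
import torch.nn.functional as F

def fuse_anomaly_map(feats_a, feats_c, input_size):
    """Build the thermodynamic diagram X: at each scale, compute per-pixel
    cosine similarity between reference features F_a and aligned-test
    features F_c, convert it to an anomaly score (1 - cos), up-sample to
    the input image size with UP(), and sum over scales.
    """
    maps = []
    for fa, fc in zip(feats_a, feats_c):
        sim = F.cosine_similarity(fa, fc, dim=1, eps=1e-8)   # (N, H_i, W_i)
        score = (1.0 - sim).unsqueeze(1)                     # anomaly = 1 - cos
        maps.append(F.interpolate(score, size=input_size,
                                  mode='bilinear', align_corners=False))
    return torch.stack(maps).sum(dim=0).squeeze(1)           # (N, H, W)
```

Identical features yield X ≈ 0 everywhere, while strongly differing features push X upward, which is what the comparison against T_X then exploits.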
The following is one scheme for setting the defect threshold T_X:
first, determining a test set of detection images;
Secondly, predicting on the test set by using a detection algorithm, and acquiring a probability value of each image area belonging to a positive class;
thirdly, selecting an initial classification threshold value, such as 0.5;
fourth, calculating the F1 value of the model on the test set using the selected classification threshold:

F1 = 2 × Precision × Recall / (Precision + Recall)

wherein Precision = TP / (TP + FP) and Recall = TP / (TP + FN), with TP, FP and FN denoting the numbers of true positives, false positives and false negatives on the test set.
The F1 value ranges from 0 to 1, where 1 represents the best performance and 0 the worst. F1 can be intuitively understood as an index that jointly considers precision and recall; its value between 0 and 1 reflects the degree to which the model balances the two;
and fifthly, determining the change of the F1 value by changing the classification threshold value, and finally selecting the threshold value reaching the highest F1 value on the test set as the optimal threshold value.
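The five steps above can be sketched as a simple threshold sweep; the function name and the candidate grid are illustrative:

```python
import numpy as np

def best_f1_threshold(probs: np.ndarray, labels: np.ndarray,
                      candidates=None):
    """Sweep classification thresholds and return (best_threshold, best_f1).

    probs: predicted probability of the positive (defect) class per region;
    labels: ground-truth 0/1 labels for the same regions.
    """
    if candidates is None:
        candidates = np.linspace(0.05, 0.95, 19)   # includes the 0.5 start
    best_t, best_f1 = 0.5, -1.0
    for t in candidates:
        pred = probs >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```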
Thus, the defect detection of the object to be detected is realized.
After the detection image of the object to be detected is obtained, image alignment can be performed by collecting a normal sample image only once as the reference image, and the two line-scan camera images of the same object can be automatically aligned, thereby eliminating detection errors caused by motion distortion. In theory, images of unlimited length can be aligned, and since only a detection image and a reference image are involved, no defect samples or labels are needed; the computation load is small, making the method suitable for processing high-resolution line-scan images. In addition, defect detection is performed without training a dedicated detection model in advance: a lightweight defect detection network can be applied without training, and any public or private pre-trained network can be used, which simplifies the detection flow and improves detection speed.
In the practical application process, only a partial region in the image may need to be detected, and at this time, detecting the whole image may introduce unnecessary calculation overhead and false detection. Therefore, taking the detection of the parts in the image as an example, the embodiment of the application provides a part-level defect detection method. Compared to the above embodiments, referring to fig. 3, fig. 3 is a flowchart of a method for detecting a part level defect according to an embodiment of the present application, and an image cropping step is added to the embodiment of the present application, and a specific process may be as follows:
marking a part area to be detected in the reference image A, or automatically marking the part area through an existing detection model to obtain a part marking frame;
Secondly, aligning the reference image A and the test image B by using an alignment module to obtain an aligned test image C;
Thirdly, cutting off the part in the reference image A by using a cutting module according to the coordinate information of the part marking frame to obtain a reference part, and cutting off the part in the aligned test image C to obtain a test part;
Inputting the reference part and the test part into a feature extraction module to obtain reference features and test features;
In the fifth step, the reference features and test features are input into the defect detection module to obtain a thermodynamic diagram X. The maximum value X_max in X is calculated and compared with the preset defect threshold T_X: if X_max is larger than T_X, the part is detected as defective, otherwise it is normal; the regions where the value of X is larger than T_X correspond to defect regions, and the rest are normal regions.
Since the same part may take different shapes in different states (for example, a switch key looks different when turned on and off), using only one reference part for inspection may cause false detections. Therefore, the misjudged parts can be selected to form a reference part group together with the original reference part. During feature extraction, the reference features of the whole reference part group are extracted and compared with the test features in the defect detection module, yielding multiple thermodynamic diagrams X_i. The maximum value of each X_i is calculated; if any maximum value is smaller than T_X, the part is judged normal. If the maximum value of every X_i is greater than the defect threshold T_X, the part is determined to be defective, and the defect area is the intersection of the regions in each X_i whose value exceeds T_X.
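The reference-group decision rule just described can be sketched as follows (illustrative names; heat maps X_i are assumed to be NumPy arrays of identical shape):

```python
import numpy as np

def judge_part(heatmaps, T_X: float):
    """Decide a part's status from the heat maps X_i computed against each
    reference in the reference group.

    Returns (is_defective, defect_mask). The part is normal as soon as any
    X_i stays at or below T_X (it matches that reference state); otherwise
    the defect region is the intersection of the per-map regions exceeding
    T_X.
    """
    masks = []
    for X in heatmaps:
        if X.max() <= T_X:
            return False, None  # matches at least one reference state
        masks.append(X > T_X)
    defect_mask = np.logical_and.reduce(masks)
    return True, defect_mask
```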
The embodiment of the application provides a part-level defect detection method that does not require full-image detection of the object to be detected: defect detection can be performed by marking and cropping the area where the part of interest is located, which further reduces the amount of data computed during defect detection and improves detection efficiency.
The object defect detection system provided in the embodiment of the present application is described below, and the object defect detection system described below and the object defect detection method described above may be referred to correspondingly.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an object defect detection system according to an embodiment of the present application, and the application further provides an object defect detection system, including:
the image acquisition module is used for acquiring a detection image of the object to be detected;
the image marking module is used for collecting a normal sample image of the object to be detected as a reference image and automatically marking a group of alignment frames;
An image alignment module for aligning the reference image and the detection image by the alignment frame;
And the defect positioning module is used for inputting the reference image and the detection image into a defect detection network so as to position the defect position of the object to be detected.
Based on the above embodiments, as a preferred embodiment, the image alignment module is a module for performing the steps of:
calculating an edge map of the reference image using a set edge detection algorithm, and performing projection integration along the direction perpendicular to the motion of the line-scan camera to obtain an edge integral map of the reference image;
searching for peak positions in the edge integral map of the reference image, filtering out peaks below a threshold, and determining peak areas according to a preset value, the peak areas being the areas where the image texture features are most salient;
determining a plurality of corresponding reference image template areas in the reference image according to the peak areas, together with first initial coordinates of the reference image template areas on the motion-direction axis;
matching the reference image template areas in the test image by a template matching method to obtain corresponding test image matching areas and second initial coordinates of the test image matching areas on the motion-direction axis;
aligning the beginning of the reference image template area with the beginning of the test image matching area to obtain the beginning of the aligned test image;
and scaling the length corresponding to the second initial coordinates to the length corresponding to the first initial coordinates in the reference image, and splicing the result with the beginning of the aligned test image to obtain the complete aligned test image.
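A minimal one-dimensional sketch of these alignment steps — the edge integral curve, peak-based template selection, and template matching along the motion axis — might look like the following. This is a simplified illustration under stated assumptions: the function names are invented, gradients stand in for the unspecified edge detection algorithm, and a plain sum-of-squared-differences matcher stands in for the template matching method.

```python
import numpy as np

def edge_integral(img):
    # Gradient magnitude along the motion axis (rows), integrated across the
    # direction perpendicular to the line-scan camera's motion.
    edges = np.abs(np.diff(img.astype(float), axis=0))
    return edges.sum(axis=1)  # one value per row: the edge integral curve

def find_template_rows(profile, thresh, half_width):
    # Local maxima of the integral curve above `thresh` mark the rows with
    # the most salient texture; each peak defines a template region.
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > thresh
             and profile[i] >= profile[i - 1] and profile[i] >= profile[i + 1]]
    return [(max(0, p - half_width), p + half_width) for p in peaks]

def match_template_row(test_img, ref_img, row_range):
    # Slide the reference template over the test image along the motion axis
    # and return the offset with the smallest sum of squared differences.
    r0, r1 = row_range
    tmpl = ref_img[r0:r1].astype(float)
    h = r1 - r0
    scores = [((test_img[s:s + h].astype(float) - tmpl) ** 2).sum()
              for s in range(test_img.shape[0] - h + 1)]
    return int(np.argmin(scores))
```

The matched offset plays the role of the second initial coordinate: comparing it to the template's row range in the reference image gives the shift used in the subsequent beginning alignment and scaling steps.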
Based on the foregoing embodiments, as a preferred embodiment, the image alignment module includes:
A beginning alignment unit, configured to: if the first coordinate L_b1 of the second initial coordinates is greater than the first coordinate L_a1 of the first initial coordinates, cut a length of L_b1 - L_a1 from the beginning of the test image and keep the remaining length L_a1 as the beginning of the aligned test image; if L_b1 is smaller than L_a1, zero-pad a length of L_a1 - L_b1 at the beginning of the test image so that the beginning again has length L_a1, and use it as the beginning of the aligned test image.
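The crop-or-pad rule of the beginning alignment unit can be sketched as follows (hypothetical NumPy code; axis 0 plays the role of the motion-direction axis, and the function name is illustrative):

```python
import numpy as np

def align_beginning(test_img, l_a1, l_b1):
    """Align the start of the test image to the reference template start.

    l_a1: first coordinate of the template region in the reference image.
    l_b1: first coordinate of the matched region in the test image.
    """
    if l_b1 > l_a1:
        # The test image starts too early: drop the surplus l_b1 - l_a1
        # rows so the remaining beginning has length l_a1.
        return test_img[l_b1 - l_a1:]
    # The test image starts too late: zero-pad l_a1 - l_b1 rows in front
    # so the beginning again has length l_a1.
    pad = np.zeros((l_a1 - l_b1,) + test_img.shape[1:], dtype=test_img.dtype)
    return np.concatenate([pad, test_img], axis=0)
```

Either branch leaves the matched template starting at row L_a1, so the aligned test image lines up with the reference image from the template onward.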
Based on the above embodiments, as a preferred embodiment, the defect localization module is a module for performing the steps of:
determining a general-purpose pre-trained model and setting it as the feature extraction backbone network;
inputting the reference image and the aligned test image respectively into the feature extraction backbone network to obtain first feature maps of the reference image at different scales and second feature maps of the aligned test image at those scales;
calculating the similarity between the first feature maps and the second feature maps at each scale;
uniformly upsampling the similarity maps at the different scales to the size of the input image, and calculating a fusion value according to a fusion formula;
comparing the fusion value with a preset defect threshold: areas whose fusion value is greater than the defect threshold are defect areas, and areas whose fusion value is smaller than or equal to the defect threshold are normal areas.
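The multi-scale comparison and fusion could be sketched as below. This is an illustrative sketch, not the patent's implementation: per-location cosine dissimilarity stands in for the unspecified similarity measure, nearest-neighbour repetition stands in for the upsampling, and a plain mean stands in for the fusion formula, which this excerpt does not give.

```python
import numpy as np

def cosine_dissimilarity(f_ref, f_test):
    # Per-location cosine similarity between (C, H, W) feature maps,
    # returned as a dissimilarity (0 = identical, up to 2 = opposite).
    num = (f_ref * f_test).sum(axis=0)
    den = np.linalg.norm(f_ref, axis=0) * np.linalg.norm(f_test, axis=0) + 1e-8
    return 1.0 - num / den

def upsample_nearest(m, size):
    # Nearest-neighbour upsampling of an (h, w) map to (H, W);
    # assumes H and W are integer multiples of h and w.
    ry = size[0] // m.shape[0]
    rx = size[1] // m.shape[1]
    return np.repeat(np.repeat(m, ry, axis=0), rx, axis=1)

def fuse_anomaly_maps(ref_feats, test_feats, size):
    # Upsample each scale's dissimilarity map to the input-image size and
    # fuse; a plain mean is used here in place of the patent's formula.
    maps = [upsample_nearest(cosine_dissimilarity(fr, ft), size)
            for fr, ft in zip(ref_feats, test_feats)]
    return np.mean(maps, axis=0)
```

Thresholding the fused map then yields the defect/normal partition described above: locations where the fused value exceeds the defect threshold are reported as defect areas.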
Based on the above embodiment, as a preferred embodiment, further comprising:
The region detection module is used for determining a target annotation frame of the target detection area; aligning the reference image with the test image to obtain an aligned test image; cropping the target detection area in the reference image according to the coordinate information of the target annotation frame to obtain a reference area; cropping the corresponding area in the aligned test image to obtain a test area; inputting the reference area and the test area respectively into a feature extraction module to obtain a reference feature and a test feature; inputting the reference feature and the test feature into a defect detection module to obtain a heat map; calculating the maximum value in the heat map and comparing it with a preset defect threshold; and, if the maximum value is greater than the defect threshold, confirming that the target detection area has a defect.
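The region detection module's crop-then-compare flow can be sketched as follows. This is a hypothetical sketch: `detect_region` and its parameters are invented names, and the feature extractor and comparator are passed in as callables because the patent leaves their internals to the feature extraction and defect detection modules.

```python
import numpy as np

def detect_region(ref_img, aligned_test_img, box, extract, compare, t_x):
    """Part-level check of one annotated region.

    box     -- (x0, y0, x1, y1) coordinates of the target annotation frame
    extract -- feature-extraction callable (stand-in for the backbone)
    compare -- callable producing a heat map from (ref_feature, test_feature)
    t_x     -- preset defect threshold
    """
    x0, y0, x1, y1 = box
    ref_region = ref_img[y0:y1, x0:x1]            # crop the reference area
    test_region = aligned_test_img[y0:y1, x0:x1]  # crop the matching test area
    heatmap = compare(extract(ref_region), extract(test_region))
    is_defective = float(heatmap.max()) > t_x     # max heat value vs threshold
    return is_defective, heatmap
```

Because only the annotated region is cropped and processed, anomalies outside the part of interest never enter the comparison, which is the source of the efficiency gain claimed for part-level detection.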
Based on the foregoing embodiments, as a preferred embodiment, the area detection module includes:
An annotation frame determining unit, used for annotating the target detection area in the reference image, or automatically annotating the part area through a detection model, to obtain the target annotation frame.
Based on the foregoing embodiments, as a preferred embodiment, the area detection module further includes:
The reference feature extraction unit is used for obtaining each application state of the reference part corresponding to the target detection area to obtain a reference part group, and for extracting the reference feature group corresponding to the reference part group.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the steps provided by the above embodiments. The storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
The present application also provides an electronic device. Referring to fig. 5, which shows a block diagram of an electronic device provided in an embodiment of the present application, the device may include a processor 1410 and a memory 1420.
Processor 1410 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 1410 may be implemented in at least one hardware form among a DSP (digital signal processor), an FPGA (field-programmable gate array), and a PLA (programmable logic array). Processor 1410 may also include a main processor and a coprocessor: the main processor, also called a CPU (central processing unit), handles data in the awake state, while the coprocessor is a low-power processor that handles data in the standby state. In some embodiments, the processor 1410 may integrate a GPU (graphics processing unit) responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 1410 may also include an AI (artificial intelligence) processor for handling computing operations related to machine learning.
Memory 1420 may include one or more computer-readable storage media, which may be non-transitory. Memory 1420 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 1420 is used at least to store a computer program 1421 which, when loaded and executed by the processor 1410, implements the relevant steps of the object defect detection method performed on the electronic device side as disclosed in any of the foregoing embodiments. In addition, the resources stored in memory 1420 may include an operating system 1422, data 1423, and the like, stored transiently or permanently. The operating system 1422 may be Windows, Linux, Android, or the like.
In some embodiments, the electronic device may further include a display 1430, an input-output interface 1440, a communication interface 1450, a sensor 1460, a power supply 1470, and a communication bus 1480.
Of course, the structure shown in fig. 5 does not limit the electronic device in the embodiments of the present application; in practical applications the electronic device may include more or fewer components than shown in fig. 5, or combine certain components.
In the description, the embodiments are described in a progressive manner: each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to each other. Since the system provided by the embodiments corresponds to the method provided by the embodiments, its description is relatively brief; for relevant details, refer to the description of the method section.
The principles and embodiments of the present application have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present application and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the application can be made without departing from the principles of the application and these modifications and adaptations are intended to be within the scope of the application as defined in the following claims.
It should also be noted that in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. An object defect detection method, characterized by comprising:
Acquiring a detection image of an object to be detected;
collecting a normal sample image of the object to be detected as a reference image, and automatically marking a group of alignment frames;
aligning the reference image and the detection image by the alignment frame;
the reference image and the detection image are input to a defect detection network to locate a defect position of the object to be detected.
2. The object defect detection method according to claim 1, wherein the aligning the reference image and the detection image by the alignment frame includes:
calculating an edge map of the reference image using a set edge detection algorithm, and performing projection integration along the direction perpendicular to the motion of the line-scan camera to obtain an edge integral map of the reference image;
searching for peak positions in the edge integral map of the reference image, filtering out peaks below a threshold, and determining peak areas according to a preset value, the peak areas being the areas where the image texture features are most salient;
determining a plurality of corresponding reference image template areas in the reference image according to the peak areas, together with first initial coordinates of the reference image template areas on the motion-direction axis;
matching the reference image template areas in the test image by a template matching method to obtain corresponding test image matching areas and second initial coordinates of the test image matching areas on the motion-direction axis;
aligning the beginning of the reference image template area with the beginning of the test image matching area to obtain the beginning of the aligned test image;
and scaling the length corresponding to the second initial coordinates to the length corresponding to the first initial coordinates in the reference image, and splicing the result with the beginning of the aligned test image to obtain the complete aligned test image.
3. The method according to claim 2, wherein said aligning the beginning of the reference image template region and the beginning of the test image matching region, obtaining the beginning of the aligned test image, comprises:
If the first coordinate L_b1 of the second initial coordinates is greater than the first coordinate L_a1 of the first initial coordinates, cutting a length of L_b1 - L_a1 from the beginning of the test image and keeping the remaining length L_a1 as the beginning of the aligned test image;
If the first coordinate L_b1 of the second initial coordinates is smaller than the first coordinate L_a1 of the first initial coordinates, zero-padding a length of L_a1 - L_b1 at the beginning of the test image so that the beginning again has length L_a1, and using it as the beginning of the aligned test image.
4. The object defect detection method of claim 1, wherein inputting the reference image and the detection image to a defect detection network to locate a defect location of an object to be detected comprises:
determining a general-purpose pre-trained model and setting it as the feature extraction backbone network;
inputting the reference image and the aligned test image respectively into the feature extraction backbone network to obtain first feature maps of the reference image at different scales and second feature maps of the aligned test image at those scales;
calculating the similarity between the first feature maps and the second feature maps at each scale;
uniformly upsampling the similarity maps at the different scales to the size of the input image, and calculating a fusion value according to a fusion formula;
comparing the fusion value with a preset defect threshold: areas whose fusion value is greater than the defect threshold are defect areas, and areas whose fusion value is smaller than or equal to the defect threshold are normal areas.
5. The method according to claim 1, wherein, if the object to be detected has a target detection area, the method further comprises:
determining a target annotation frame of the target detection area;
aligning the reference image with the test image to obtain an aligned test image;
cropping the target detection area in the reference image according to the coordinate information of the target annotation frame to obtain a reference area;
cropping the corresponding area in the aligned test image to obtain a test area;
inputting the reference area and the test area respectively into a feature extraction module to obtain a reference feature and a test feature;
inputting the reference feature and the test feature into a defect detection module to obtain a heat map;
calculating the maximum value in the heat map and comparing it with a preset defect threshold;
and if the maximum value is greater than the defect threshold, confirming that the target detection area has a defect.
6. The method of claim 5, wherein determining the target annotation frame of the target detection area comprises:
annotating the target detection area in the reference image, or automatically annotating the part area through a detection model, to obtain the target annotation frame.
7. The object defect detection method according to claim 5, wherein, when the target detection area in the reference image is cropped according to the coordinate information of the target annotation frame to obtain the reference area, the method further comprises:
obtaining each application state of the reference part corresponding to the target detection area to obtain a reference part group;
extracting a reference feature group corresponding to the reference part group;
correspondingly, the inputting the reference feature and the test feature into the defect detection module to obtain the heat map comprises:
inputting the reference feature group and the test feature into the defect detection module to obtain a plurality of heat maps.
8. An object defect detection system, comprising:
the image acquisition module is used for acquiring a detection image of the object to be detected;
the image marking module is used for collecting a normal sample image of the object to be detected as a reference image and automatically marking a group of alignment frames;
An image alignment module for aligning the reference image and the detection image by the alignment frame;
And the defect positioning module is used for inputting the reference image and the detection image into a defect detection network so as to position the defect position of the object to be detected.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1-7.
10. An electronic device comprising a memory in which a computer program is stored and a processor that, when invoking the computer program in the memory, performs the steps of the method according to any of claims 1-7.
CN202410200247.1A 2024-02-22 2024-02-22 Object defect detection method, system, storage medium and electronic equipment Pending CN118130503A (en)

Publications (1)

Publication Number Publication Date
CN118130503A true CN118130503A (en) 2024-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination