CN117011216A - Defect detection method and device, electronic equipment and storage medium


Info

Publication number
CN117011216A
CN117011216A (application number CN202211392176.7A)
Authority
CN
China
Prior art keywords
defect
target
area
detection
point location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211392176.7A
Other languages
Chinese (zh)
Inventor
刘永
汪铖杰
吴永坚
吴运声
黄小明
王川南
王亚彪
常健
吴凯
李嘉麟
李剑
徐尚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cloud Computing Beijing Co Ltd filed Critical Tencent Cloud Computing Beijing Co Ltd
Priority to CN202211392176.7A priority Critical patent/CN117011216A/en
Publication of CN117011216A publication Critical patent/CN117011216A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of data processing technologies, and in particular to a defect detection method, a defect detection device, an electronic device, and a storage medium. The method includes: acquiring, for each preset target point location, a point location image captured of an object to be detected, and obtaining a defect detection result for the object based on the sub-detection results obtained for the individual point location images, where for each point location image the following operations are executed: acquiring the point location template diagram associated with the corresponding target point location, and determining the target area in the point location image that corresponds to the detection indication area; and determining the sub-detection result of the object at that target point location based on the classification category information, degree category information, and defect pixel position information set obtained for each candidate defect identified in the target area. In this way, the robustness of defect detection is improved, the difficulty of implementing defect detection is reduced, and the accuracy of defect detection is improved.

Description

Defect detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a defect detection method, a defect detection device, an electronic device, and a storage medium.
Background
With the development of modern science and technology, defect detection for various devices can now be performed automatically, for example the automated quality inspection of 3C structural parts.
In the related art, based on a point location image captured of an object to be detected, a trained defect detection model can output a defect category, a confidence value for that category, and a locating frame for the defect; alternatively, a trained defect detection algorithm can output a defect detection result comprising the defect category, its confidence value, the defect's locating frame, and the defect pixel positions. Whether the object to be detected has defects is then judged based on the obtained detection result.
However, when judging whether the object has defects from only the defect categories and locating frames, different types of defects can be assessed only at a coarse, category-wide level. Moreover, it cannot be guaranteed that the object does not deviate from the specified target point location while a point location image is being captured, so the detection results obtained from the point location images at different target point locations are prone to missed detections and false detections, leading to misjudgments of whether defects exist and greatly reducing defect detection efficiency.
Disclosure of Invention
The embodiments of the present application provide a defect detection method, a defect detection device, an electronic device, and a storage medium, which are used to solve the prior-art problems of low defect detection efficiency, low accuracy, and susceptibility to misjudgment.
In a first aspect, a defect detection method is provided, including:
acquiring, for each preset target point location, a point location image captured of an object to be detected;
obtaining a defect detection result corresponding to the object to be detected based on the sub-detection results obtained by detecting each point location image, wherein each time a point location image is obtained, the following operations are executed:
acquiring the point location template diagram associated with the corresponding target point location, and determining, by performing image alignment processing on the point location image and the point location template diagram, a target area in the point location image corresponding to the detection indication area, the point location template diagram being configured with a corresponding detection indication area;
obtaining, with a trained defect classification model, the classification category information, degree category information, and locating frame corresponding to each candidate defect identified in the point location image, and identifying, with a trained defect segmentation model, a defect pixel position information set from the image within each locating frame;
and determining the sub-detection result of the object to be detected at the target point location based on the classification category information, degree category information, and defect pixel position information set of each candidate defect in the target area.
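As a rough illustration of the per-point-location flow in the first aspect, the sketch below stubs out the two models as plain callables; the interfaces (`classify_defects`, `segment_defect`), the data shapes, and the containment test are hypothetical stand-ins, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateDefect:
    category: str          # classification category information
    degree: str            # degree category information (e.g. "light", "severe")
    confidence: float      # confidence value for the classification category
    box: tuple             # locating frame (x0, y0, x1, y1)
    pixels: set = field(default_factory=set)  # defect pixel position information set

def detect_point_image(point_image, template, classify_defects, segment_defect):
    """Sketch of the per-point-location operations: align, classify, segment."""
    # 1. In the full method, image alignment maps the template's detection
    #    indication area into the point location image; here the target area
    #    is taken directly from the template for brevity.
    target_area = template["indication_area"]
    # 2. The classification model yields category, degree, confidence and a
    #    locating frame for each candidate defect.
    candidates = [CandidateDefect(*d) for d in classify_defects(point_image)]
    # 3. The segmentation model yields the defect pixel position set for the
    #    image content inside each locating frame.
    for c in candidates:
        c.pixels = segment_defect(point_image, c.box)
    # 4. Only candidates whose locating frame falls inside the target area
    #    feed into the sub-detection result.
    return [c for c in candidates if _box_in_area(c.box, target_area)]

def _box_in_area(box, area):
    x0, y0, x1, y1 = box
    ax0, ay0, ax1, ay1 = area
    return x0 >= ax0 and y0 >= ay0 and x1 <= ax1 and y1 <= ay1
```

A candidate whose frame lies outside the target area is discarded before any further judgment, which mirrors the claim's restriction to defects "in the target area".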
In a second aspect, a defect detection apparatus is provided, including:
the acquisition unit is configured to acquire, for each preset target point location, a point location image captured of an object to be detected;
the detection unit is configured to obtain a defect detection result corresponding to the object to be detected based on the sub-detection results obtained by detecting each point location image, wherein each time a point location image is obtained, the following operations are executed:
acquiring the point location template diagram associated with the corresponding target point location, and determining, by performing image alignment processing on the point location image and the point location template diagram, a target area in the point location image corresponding to the detection indication area, the point location template diagram being configured with a corresponding detection indication area;
obtaining, with a trained defect classification model, the classification category information, degree category information, and locating frame corresponding to each candidate defect identified in the point location image, and identifying, with a trained defect segmentation model, a defect pixel position information set from the image within each locating frame;
and determining the sub-detection result of the object to be detected at the target point location based on the classification category information, degree category information, and defect pixel position information set of each candidate defect in the target area.
Optionally, when the defect classification model is obtained through training, the apparatus further includes a training unit, the training unit being specifically configured to:
construct a defect classification model to be trained based on a preset target detection algorithm, and acquire a sample point location image set, wherein the defect classification model includes a sub-network implementing a defect degree prediction function, the sample point location image set includes sample point location images collected by different types of acquisition equipment for different target point locations, and each sample point location image is annotated with a defect classification category label, a defect degree category label, and a locating frame label;
and perform multiple rounds of iterative training on the defect classification model to be trained using the sample point location image set until a preset convergence condition is met, obtaining the trained defect classification model.
Optionally, when determining the sub-detection result of the object to be detected at the target point location based on the classification category information, degree category information, and defect pixel position information set of each candidate defect in the target area, the detection unit is configured to:
screen out, from the candidate defects in the target area, target defects whose confidence value associated with the classification category information is higher than a set threshold, or whose degree category information does not belong to the preset normal degree information;
for each target defect, perform the following operations: determine the area information corresponding to the target defect based on the defect pixel position information set corresponding to the target defect, and obtain a corresponding area detection result based on the area information and a preset area detection condition;
and determine the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects.
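The screening condition above, confidence above a threshold or an abnormal degree category, can be sketched as follows; the dict-based candidate representation, the field names, and the example thresholds are illustrative assumptions only.

```python
def screen_target_defects(candidates, conf_threshold, normal_degrees):
    """Screen out target defects: keep each candidate defect in the target
    area whose classification confidence exceeds the set threshold, OR whose
    degree category information does not belong to the preset normal degree
    information. Candidates are plain dicts here purely for illustration."""
    return [c for c in candidates
            if c["confidence"] > conf_threshold
            or c["degree"] not in normal_degrees]
```

Note the disjunction: a low-confidence candidate is still kept as a target defect if its predicted degree category is abnormal, so the degree sub-network acts as a second trigger alongside the classification confidence.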
Optionally, after obtaining the corresponding area detection result based on the area information and the preset area detection condition, and before determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects, the detection unit is further configured to:
determine a defect area frame corresponding to the target defect, and determine the length information corresponding to the target defect based on the number of pixels constituting the edges of the defect area frame and the mapping relationship between pixels and physical size;
and obtain a corresponding length detection result based on the length information and a preset length detection condition.
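A minimal sketch of the length check described above, assuming the defect area frame is the minimal axis-aligned frame around the defect pixels and that `mm_per_pixel` is a calibration constant supplied by the imaging setup (both assumptions, not stated in the disclosure):

```python
def defect_length(pixels, mm_per_pixel):
    """Length information for a target defect: the pixel count along the
    longer edge of the minimal axis-aligned defect area frame enclosing the
    defect pixel position set, scaled by the pixel-to-size mapping."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    edge_pixels = max(max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
    return edge_pixels * mm_per_pixel

def length_detection_result(length_mm, length_threshold_mm):
    """Length detection condition: lengths at or above the preset threshold
    are flagged as pending abnormal."""
    return "pending_abnormal" if length_mm >= length_threshold_mm else "normal"
```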
Optionally, when obtaining the corresponding area detection result based on the preset area detection condition, the detection unit is configured to:
acquire the area threshold configured for the area information of the target defect, according to the area information of the locating frame corresponding to the target defect in the point location image and the classification category information corresponding to the target defect;
and when the area information of the target defect is determined to reach the preset area threshold, judge the corresponding area detection result to be area pending abnormal; when the area information of the target defect is determined not to reach the preset area threshold, judge the area detection result to be area normal.
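The area judgment can be sketched as below; taking the area information to be the count of defect pixels, and keying the threshold table by classification category alone, are simplifying assumptions (the claim also allows the threshold to depend on the locating frame's area).

```python
def judge_area(defect, area_thresholds):
    """Area detection: compare the target defect's area information (here
    simply the number of defect pixels) against the area threshold configured
    for its classification category. The threshold table is illustrative."""
    area = len(defect["pixels"])
    threshold = area_thresholds[defect["category"]]
    if area >= threshold:
        return "area_pending_abnormal"  # area reaches the preset threshold
    return "area_normal"                # area does not reach the threshold
```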
Optionally, when determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects, the detection unit is configured to:
acquire the total defect threshold set for each kind of classification category information, and separately count, among the target defects belonging to the same classification category information, the total number whose corresponding area detection result is area pending abnormal;
and when the total number of target defects corresponding to any classification category information is determined to be higher than the corresponding total defect threshold, judge the sub-detection result of the object to be detected at the target point location to be detection abnormal; when the total number of target defects corresponding to every kind of classification category information is determined to be not higher than the corresponding total defect threshold, judge the sub-detection result of the object to be detected at the target point location to be detection normal.
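The per-category counting rule above can be sketched as follows; the field names, result strings, and the default threshold of 0 for unlisted categories are assumptions made for the illustration.

```python
from collections import Counter

def point_sub_result(target_defects, total_defect_thresholds):
    """Count, per classification category, the target defects whose area
    detection result is area pending abnormal; the sub-detection result for
    the point location is abnormal if any category's count exceeds its
    configured total defect threshold, and normal otherwise."""
    counts = Counter(d["category"] for d in target_defects
                     if d["area_result"] == "area_pending_abnormal")
    for category, count in counts.items():
        if count > total_defect_thresholds.get(category, 0):
            return "detection_abnormal"
    return "detection_normal"
```

This makes "pending abnormal" a soft flag: a single borderline scratch need not fail the point location if the scratch category's total defect threshold tolerates it.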
Optionally, when determining, by performing image alignment processing on the point location image and the point location template diagram, the target area in the point location image corresponding to the detection indication area, the detection unit is configured to:
calculate the parameter transformation matrix configured for the point location image when the point location image is transformed into alignment with the point location template diagram;
and determine the target area corresponding to the detection indication area in the point location image based on the parameter transformation matrix and the position information of the detection indication area in the point location template diagram.
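If the parameter transformation matrix is taken to be a 3x3 homography (a plausible choice for planar alignment, though the disclosure does not fix the matrix form), mapping the detection indication area's corners into the point location image reduces to applying the matrix in homogeneous coordinates:

```python
def map_area_to_image(area_corners, H):
    """Apply the parameter transformation matrix H (assumed here to be a 3x3
    homography given as nested lists) to the detection indication area's
    corner points in the template diagram, yielding the corresponding target
    area corners in the point location image."""
    mapped = []
    for x, y in area_corners:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        mapped.append((xh / w, yh / w))  # dehomogenize
    return mapped
```

In practice the matrix itself would be estimated from matched features between the point location image and the template diagram (e.g. with a routine such as OpenCV's homography estimation), which is outside this sketch.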
Optionally, when identifying the defect pixel position information sets from the images within the locating frames using the trained defect segmentation model, the detection unit is configured to:
obtain the trained defect segmentation model, the model being constructed based on a preset segmentation algorithm and used to locate the set of defect pixel positions covered by a defect;
and input the image content selected by each defect area frame into the defect segmentation model, obtaining the defect pixel position set that the model determines for each image content.
Optionally, when obtaining the defect detection result corresponding to the object to be detected based on the sub-detection results obtained by detecting each point location image, the detection unit is configured to:
when any of the sub-detection results obtained by detecting the point location images is judged to be detection abnormal, judge the defect detection result corresponding to the object to be detected to be unqualified; and,
when each of the sub-detection results obtained by detecting the point location images is determined to be detection normal, judge the defect detection result corresponding to the object to be detected to be qualified.
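The final aggregation rule is a simple any/all judgment over the per-point-location sub-results; the result strings below are illustrative labels, not terms fixed by the disclosure.

```python
def overall_result(sub_results):
    """Final judgment for the object to be detected: unqualified if any point
    location's sub-detection result is detection abnormal, qualified only when
    every sub-detection result is detection normal."""
    if any(r == "detection_abnormal" for r in sub_results):
        return "unqualified"
    return "qualified"
```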
In a third aspect, an electronic device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above defect detection method when executing the program.
In a fourth aspect, a computer readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, implements the above-mentioned defect detection method.
In a fifth aspect, a computer program product is proposed, comprising a computer program, which, when executed by a processor, implements the above-mentioned defect detection method.
The application has the following beneficial effects:
in the embodiments of the present application, a defect detection method, a defect detection device, an electronic device, and a storage medium are provided. Point location images captured of an object to be detected at the preset target point locations are respectively acquired, and a defect detection result corresponding to the object is obtained based on the sub-detection results obtained for the individual point location images, where each time a point location image is obtained, the following operations are executed: acquiring the point location template diagram associated with the corresponding target point location, and performing image alignment processing on the point location image and the point location template diagram to obtain the target area corresponding to the detection indication area, the point location template diagram being configured with a corresponding detection indication area; obtaining, with a trained defect classification model, the classification category information, degree category information, and locating frame corresponding to each identified candidate defect, and identifying, with a trained defect segmentation model, a defect pixel position information set from the image within each locating frame; and determining the sub-detection result of the object to be detected at the target point location based on the classification category information, degree category information, and defect pixel position information set of each candidate defect in the target area.
In this way, aligning each point location image with the point location template diagram of the same target point location ensures imaging consistency across point location images of that target point location, reduces the processing difficulty of automated defect detection, and improves the robustness of defect detection. At the same time, by effectively locating the target area corresponding to the detection indication area in the point location image, subsequent defect detection can be focused within the target area, avoiding misjudgments caused by defects in other areas of the image. In addition, considering that the quantities of samples needed to train the defect classification model and the defect segmentation model differ, two different models, rather than a single model, are used to identify the locating frames and the defect pixel position information; this still allows the locating frames and defect pixel positions to be identified together while reducing the difficulty of generating samples, and thereby the difficulty of implementing defect detection. Moreover, the obtained defect detection result includes degree category information for the detected defects, which amounts to learning and classifying the degree of abnormality of each defect and provides an additional basis for judging the detection result. Finally, when the defect detection result is ultimately determined for the object to be detected, a comprehensive judgment is made based on the detection information of every candidate defect in every point location image, which takes the combined influence of the candidate defects into account and helps improve the accuracy of defect detection.
Drawings
Fig. 1 is a schematic diagram of a possible application scenario in an embodiment of the present application;
FIG. 2A is a schematic diagram of a defect detection process according to an embodiment of the present application;
FIG. 2B is a schematic view of point images acquired at different target points in an embodiment of the present application;
FIG. 2C is a flow chart of obtaining sub-detection results for a point location image according to an embodiment of the present application;
FIG. 2D is a schematic diagram of the composition of a defect classification model constructed in an embodiment of the application;
FIG. 3 is a general flow chart of defect detection according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an overall flow involved in a defect detection process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a logic structure of a defect detecting device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a hardware configuration of an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a computing device according to an embodiment of the application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the technical solutions of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments described in this document without creative effort fall within the scope of protection of the technical solutions of the present application.
The terms "first", "second", and the like in the description, the claims, and the drawings above are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so designated may be interchanged where appropriate, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence research covers the design principles and implementation methods of various intelligent machines, giving machines the abilities to perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline spanning a wide range of fields, covering both hardware-level and software-level techniques. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technology mainly covers computer vision, speech processing, natural language processing, and machine learning/deep learning. The defect detection method provided by the present application can be applied within the processing flow of artificial intelligence technology.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
Defect: a condition in which an object under inspection is found not to provide the safety a user is entitled to expect, or is found to pose an unreasonable danger. In the embodiments of the present application, detected defects may carry different classification category information and degree category information, and after defect detection is performed on an object to be detected, whether the object passes inspection can be determined from the obtained defect detection result. When the object is a part, a part whose detection result is unqualified cannot meet normal delivery requirements and needs to be removed from the parts under inspection or repaired; a part whose detection result is qualified is one whose detected defects are negligible, or in which no defect was detected.
Over-kill rate: after defect detection is performed, the number of parts flagged as defective that are actually qualified (marked OK), as a percentage of the total number of inspected parts.
Leak detection rate: after defect detection is performed, the number of defective parts that go undetected, as a percentage of the total number of inspected parts.
Batch return rate: the percentage of delivered batches that are returned, representing the probability that a batch of product is rejected.
Target point location: in the embodiments of the present application, a shooting position configured for an object to be detected. To realize defect detection of an object, the detection areas configured for the object can be determined in advance, the target point locations from which the different detection areas can be captured are determined, a point location template diagram is photographed and generated for each target point location, and the content on the object that detection should focus on at the corresponding target point location is identified by configuring a detection indication area.
Point location image: in the embodiments of the present application, an image captured of the object to be detected at a target point location.
Image alignment: in the embodiments of the present application, two images that have undergone image alignment processing have consistent shooting angles, and the product structures in the images can be fully aligned without obvious offset, rotation, or size differences.
The following briefly describes the design concept of the embodiment of the present application:
under the related technology, when the defect detection is carried out, the detection can be realized by means of an artificial intelligence algorithm, and in some possible implementation modes in the prior art, the defect detection can be carried out by adopting a mode of combining a deep learning algorithm with rule judgment.
Specifically, in some possible implementation manners, a deep learning algorithm may be adopted to train to obtain a defect detection model, and then the defect detection model is adopted to output defect detection information based on a point diagram of an object to be detected, wherein the defect detection information comprises classification type information of a defect, a confidence value corresponding to the classification type information and a positioning frame of the defect;
in other possible implementations, a deep learning algorithm is used to train a defect detection model for accurate defect detection, so that during processing the trained model outputs a defect detection result based on a point location image of the object to be detected, where the defect detection result includes a defect category, a defect confidence, a defect positioning frame and refined defect pixel positions.
Furthermore, rule-based judgment is performed on the detection results produced by the different implementations; in the specific judgment process, a preset defect confidence threshold and a defect area threshold can be used for comprehensive judgment, so as to finally determine whether the object to be detected is defective.
However, in the prior-art processing, when point location images are acquired, the camera or the object to be detected may be offset as images of different parts are captured at the same target point location, so that point location images shot for different objects at the same target point location have poor imaging consistency; under the current technical framework this leads, with high probability, to false detection, and the object to be detected is easily misjudged as defective. Moreover, in the prior art, model training uses only point location images shot on the current production line, so that when the imaging differences caused by process parameter differences at different target point locations of the object to be detected are large, the defect detection effect is poor.
In addition, in scenarios where defect pixel positions must be considered, the defect category, defect position and defect pixel positions all need to be annotated in the sample point location images at the same time, which incurs great labor and material costs; since the training result depends heavily on the sample data, the low generation efficiency of sample data makes model-based detection inefficient. Furthermore, since a point location image may also cover content belonging to other target point locations, the point location images of different target point locations can interfere with defect detection, causing an over-kill problem due to false detection; existing methods cannot solve these problems well in this scenario.
In view of this, embodiments of the present application provide a defect detection method and device, an electronic apparatus, and a storage medium. Point location images acquired for an object to be detected at preset target point locations are respectively obtained, and a defect detection result for the object is derived from the sub-detection results obtained for the individual point location images. Each time a point location image is acquired, the following operations are performed: the point location template map associated with the corresponding target point location is obtained, and image alignment processing is performed on the point location image and the template map to obtain the target area corresponding to the detection indication area, where the template map is configured with a corresponding detection indication area; a trained defect classification model is adopted to obtain, based on the point location image, the classification category information, degree category information and positioning frame of each identified candidate defect, and a trained defect segmentation model is adopted to identify a defect pixel position information set from the image within each positioning frame; the sub-detection result of the object at the target point location is then determined based on the classification category information, degree category information and defect pixel position information set of each candidate defect falling within the target area.
In this way, by aligning each point location image with the point location template map of the same target point location, the imaging consistency of point location images at that target point location is ensured, which reduces the processing difficulty of automatic defect detection and improves its robustness; meanwhile, by effectively locating the target area corresponding to the detection indication area in the point location image, subsequent defect detection based on the point location image can be focused within the target area, avoiding misjudgment caused by defects in other areas of the image. In addition, considering that the defect classification model and the defect segmentation model are trained on different sample quantities, two models rather than one are adopted to identify the positioning frames and the defect pixel position information; the positioning frames and the defect pixel positions can still be identified simultaneously, while the difficulty of sample generation, and hence of defect detection, is reduced. Moreover, the obtained defect detection result includes degree category information for the detected defects, which amounts to learning and classifying the degree of abnormality of a defect and provides a further basis for judging the detection result. Finally, when the defect detection result of the object to be detected is determined, comprehensive judgment is performed based on the detection information of every candidate defect in every point location image, which amounts to considering the combined influence of the candidate defects and helps to improve the accuracy of defect detection.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and that the embodiments of the present application and the features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of a possible application scenario in an embodiment of the present application. The application scenario schematic diagram includes an image acquisition device 110, and a processing device 120.
It should be noted that, when the image capturing device 110 is an image capturing device having an information sending function, a communication connection may be established between the image capturing device 110 and the processing device 120 by a wired communication or a wireless communication manner, and when the image capturing device 110 is an image capturing device not having an information sending function, the captured image may be directly derived from the memory card of the image capturing device 110 and provided to the processing device 120 for processing.
In the embodiment of the present application, the image capturing device 110 is configured to capture two-dimensional images, and may specifically be a terminal device camera, a two-dimensional line scan camera, a common camera capable of capturing two-dimensional images, a gray-scale camera conventionally used in industry, and the like;
The processing device 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), basic cloud computing services such as big data and artificial intelligence platforms, and the like. Or a computer device with analysis processing function, such as a personal computer or a notebook.
The technical scheme provided by the application can realize defect detection of the object to be detected based on point location images acquired for the object to be detected at different target point locations in various application scenes, and the following description is made on several possible application scenes schematically:
scene one, is applied to the automated inspection of the part.
In the automatic defect detection of structural parts such as 3C structural parts, the technical scheme provided by the application can be applied in a factory workshop. After a part to be detected is placed in a part mold, a conveyor belt transfers it to an automated detection workshop where defect detection is carried out; a manipulator grabs the part from the mold and rotates it according to preset positions, so that the image acquisition equipment can acquire point location images of the part at different target point locations. The rotation operation of the manipulator corresponds to the shooting operation of the image acquisition equipment: each time the manipulator rotates the part to a target point location, the image acquisition equipment acquires the corresponding point location images, at least one point location image being acquired at each target point location.
Further, each time an image acquisition device acquires a point image, the point image is uploaded to a processing device, wherein in some possible implementations the image acquisition device may be integrated on the processing device; the processing equipment detects defects aiming at the obtained point location image to obtain a sub-detection result aiming at the point location image; finally, the processing equipment respectively determines corresponding sub-detection results based on the point position images acquired on the target point positions, and then obtains a final corresponding defect detection result of the object to be detected based on the obtained sub-detection results.
Scene two, be applied to the defect detection to the product outward appearance.
Specifically, the object to be detected may be an assembled product that needs to detect an external defect, such as home appliances (televisions, refrigerators, etc.), furniture (cabinets, sofas, etc.), office equipment (desks, computers, etc.), vehicles (various types of automobiles, etc.), and the like.
For each assembled product, the processing equipment can respectively configure corresponding shooting point positions (or target point positions) and point position template diagrams, and further can respectively train each assembled product according to actual processing requirements to obtain a defect classification model and a defect segmentation model; on the basis, when automatic defect detection is carried out, the processing equipment acquires point position images shot at different target point positions, and further realizes defect detection of an assembled product based on the point position images to obtain a defect detection result.
In the embodiment of the present application, the trained defect classification model and defect segmentation model used by the processing device for defect detection may be obtained by training the processing device based on training samples, or may be obtained from other devices, which is not particularly limited in this regard.
The defect detection flow related to the embodiment of the present application will be described in detail from the point of view of the processing device with reference to the accompanying drawings:
referring to fig. 2A, which is a schematic diagram of a defect detection flow in an embodiment of the present application, a process of performing defect detection in an embodiment of the present application is described below with reference to fig. 2A:
step 201: the processing equipment respectively acquires point position images acquired aiming at the object to be detected on each preset target point position.
In the embodiment of the application, in order to meet the defect detection requirement of the object to be detected, each target point position can be preset for the object to be detected in advance, wherein the total number of the target point positions set for the object to be detected is set according to the actual processing requirement, and the application is not particularly limited to the total number; and generating a corresponding point location template diagram aiming at the object to be detected at each target point location, wherein a corresponding detection indication area is marked in each point location template diagram.
The processing equipment respectively acquires point position images acquired for the object to be detected on each preset target point position, so that the defect detection of the object to be detected can be realized according to the acquired point position images.
For example, referring to fig. 2B, which is a schematic diagram of point location images collected at different target point locations in the embodiment of the present application: assuming that the object to be detected is a hexagonal prism, in order to realize defect detection on its surface, target point locations may be set for the 8 faces of the hexagonal prism, so that the point location images collected at the different target point locations cover all the content to be identified. As illustrated in fig. 2B, assuming that the detection content for target point location 1 is the rectangular area indicated by the thick line, the point location image acquired at target point location 1 is shown by the content on its right; assuming that the detection content for target point location 7 is the hexagonal area indicated by the thick line, the point location image acquired at target point location 7 is shown by the content on its right.
Step 202: the processing equipment obtains a defect detection result corresponding to the object to be detected based on the sub-detection results obtained by respectively detecting the point location images.
In the embodiment of the application, processing equipment detects the corresponding sub-detection results for each obtained point location image respectively, and then determines the defect detection result corresponding to the object to be detected based on the obtained sub-detection results.
When the processing equipment determines the defect detection result of the object to be detected based on the sub-detection results, it judges the defect detection result as unqualified if any sub-detection result detected for a point location image is abnormal, and judges the defect detection result as qualified when the sub-detection results detected for all point location images are normal.
Specifically, if the processing device determines that every sub-detection result is normal, that is, if no abnormal sub-detection result is obtained when detecting the point location images of the object to be detected, the object can be determined to be qualified; conversely, if the processing device determines that at least one abnormal sub-detection result exists when detecting the point location images, the object can be determined to be unqualified. Here, a sub-detection result is one of two types, detection normal or detection abnormal, and a defect detection result is one of two types, detection qualified or detection unqualified.
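The pass/fail judgment described above can be sketched as follows (an illustrative sketch only; the function and label names are not from the original document):

```python
def aggregate_detection_result(sub_results):
    # sub_results: one sub-detection result per point location image,
    # each either "normal" or "abnormal". The object passes only when
    # every sub-detection result is normal; any abnormal result fails it.
    if any(r == "abnormal" for r in sub_results):
        return "unqualified"
    return "qualified"
```

The judgment is intentionally conservative: a single abnormal point location is sufficient to reject the object.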
In this way, when the defect detection result of the object to be detected is determined, the influence of the sub-detection results of the point location images at different target point locations is considered, and after the sub-detection results corresponding to the point location images are comprehensively considered, the effective judgment of the defect result of the object to be detected can be realized.
In the embodiment of the present application, when the processing device detects each point location image to obtain a corresponding sub-detection result, referring to fig. 2C, which is a schematic flow chart of obtaining a sub-detection result for a point location image in the embodiment of the present application, with reference to fig. 2C, in the embodiment of the present application, a detection process executed by the processing device when each point location image is obtained is described below:
step 2021: the processing equipment acquires a point location template diagram associated with a corresponding target point location, and obtains a target area corresponding to the detection indication area by performing image alignment processing on the point location image and the point location template diagram, wherein the point location template diagram is configured with the corresponding detection indication area.
In the embodiment of the application, when the processing equipment aligns the point location image with the corresponding point location template map and obtains the target area corresponding to the detection indication area, it computes the parameter transformation matrix configured for the point location image when transforming the point location image into alignment with the template map, and then determines the target area corresponding to the detection indication area in the point location image based on the parameter transformation matrix and the position information of the detection indication area in the template map.
In particular, the processing device may employ implementations including, but not limited to, any of the following in computing a parametric transformation matrix that converts a point image into alignment with a point template map:
in the first implementation mode, the parameter transformation matrix is calculated by carrying out correlation comparison of the areas.
Specifically, the processing device may mark the positioning region and each corresponding adjacent region in the point location template map, and determine a region to be matched corresponding to the position of the positioning region in the point location image; and then, a Scale-invariant feature transform (SIFT) algorithm is adopted, a matching area with highest correlation with the area to be matched is determined in the positioning area and each adjacent area, and then, a feature mapping relation between the area to be matched and the matching area is established, so that a parameter transformation matrix is obtained.
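The core of implementation one is scoring the correlation between the region to be matched and each candidate region (the positioning region and its neighbours). The patent names SIFT for this; as a minimal stand-in, the sketch below scores candidates with zero-mean normalized cross-correlation in plain numpy (the function names and the NCC choice are illustrative assumptions, not the patent's method):

```python
import numpy as np

def normalized_correlation(a, b):
    # Zero-mean normalized cross-correlation between two equal-sized regions;
    # 1.0 for identical patterns, 0.0 when either region is constant.
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_matching_region(region_to_match, candidate_regions):
    # Return the index of the candidate region (the positioning region or
    # one of its adjacent regions) most correlated with region_to_match.
    scores = [normalized_correlation(region_to_match, c) for c in candidate_regions]
    return int(np.argmax(scores)), scores
```

The region pair found this way would then feed the feature-mapping step that yields the parameter transformation matrix.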
And in the second implementation mode, the parameter transformation matrix is calculated in a characteristic point extraction and matching mode.
In the embodiment of the application, the processing device may adopt a preset feature extraction algorithm to perform feature extraction on the point location template map to obtain a first feature map, and perform feature extraction on the point location image to obtain a second feature map; a preset feature comparison algorithm is then adopted to establish matching relationships between the feature points of the first feature map and those of the second feature map, and the parameter transformation matrix is established based on the obtained matching relationships.
Specifically, the processing device may combine SuperPoint and SuperGlue to implement feature point extraction and matching. The processing device can adopt a trained feature point extraction model constructed based on the SuperPoint algorithm to perform feature extraction on the point location template map, obtaining a feature map of dimension H×W×C, and perform feature extraction on the point location image in the same way, where H denotes the height of the feature map, W its width and C its number of channels; the processing device then inputs the two extracted feature maps into a trained feature matching model constructed based on the SuperGlue algorithm, obtains point-to-point matching relationships through graph convolution processing, filters out the point-to-point matches that do not satisfy geometric constraints through geometric verification, and estimates the parameter transformation matrix based on the remaining point-to-point matches.
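The feature-point route ends with estimating a parameter transformation matrix from the surviving point-to-point matches. A minimal numpy sketch, assuming an affine transformation model and a plain least-squares fit (the SuperPoint/SuperGlue models themselves are omitted, and the affine restriction is an assumption for illustration):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    # Solve dst ≈ A @ [x, y, 1]^T for a 2x3 affine parameter matrix A,
    # in the least-squares sense, from matched point pairs.
    # src_pts, dst_pts: arrays of shape (N, 2) with N >= 3 matches.
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # N x 3
    sol, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)     # 3 x 2
    return sol.T                                              # 2 x 3
```

In practice a robust estimator (e.g. RANSAC over the matches) would replace the plain least-squares fit, which is sensitive to remaining outlier matches.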
And in the third implementation mode, calculating a parameter transformation matrix by adopting a nearest point iteration method.
In the embodiment of the application, the processing device can adopt a preset edge feature extraction algorithm to obtain, for the point location template map and the point location image respectively, a contour map composed of the extracted contour points; a nearest neighbour search algorithm is then adopted to determine the matching point pairs between the two contour maps at which the pixel value error between the matching point pairs is minimal, and the parameter transformation matrix is established according to the positional relationship of the matching point pairs.
Specifically, the processing device can perform edge extraction on the point location image and the point location template map to obtain contour maps; after constructing an error with an iterator, matching point pairs between the contour maps are established through a nearest neighbour search algorithm, and the error constructed in the iterator is updated according to the pixel value errors between the matching point pairs, so that the error is optimized in the decreasing direction; the matching point pairs with minimal error are finally determined, and the parameter transformation matrix is established based on their positional relationship. Establishing a parameter transformation matrix from a known positional relationship is conventional in the art and is not described in detail in this application.
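The nearest-point iteration can be sketched in its simplest form as follows: repeatedly match every source contour point to its nearest destination point and update the transform so the error shrinks. The sketch below restricts the transform to a 2-D translation and uses brute-force nearest-neighbour search, purely for illustration (a full implementation would estimate rotation/scale as well and use a spatial index):

```python
import numpy as np

def icp_translation(src, dst, iters=20):
    # Simplified nearest-point iteration: estimate a 2-D translation t that
    # aligns contour points src onto dst. Each round, every shifted source
    # point is matched to its nearest destination point, and t is moved by
    # the mean residual, driving the matching error downward.
    t = np.zeros(2)
    for _ in range(iters):
        moved = src + t
        d = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nearest = dst[d.argmin(axis=1)]        # nearest dst point per src point
        t += (nearest - moved).mean(axis=0)    # shrink the mean residual
    return t
```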
In this way, the dot pattern image can be transformed to be aligned with the dot pattern map by means of calculating the parameter transformation matrix by any one of the implementation modes one to three.
Further, after the processing device acquires the parameter transformation matrix, a target area corresponding to the detection indication area is determined in the point image based on the parameter transformation matrix and the position information corresponding to the detection indication area in the point template map.
Specifically, after determining the parameter transformation matrix when the point location image is transformed to be aligned with the point location template map, the processing device may reversely determine the target area corresponding to the detection indication area in the point location image according to the parameter transformation matrix and the position information of the detection indication area in the point location template map.
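The reverse mapping of the detection indication area can be sketched as follows, assuming the parameter transformation matrix is a 3×3 homography H that maps point-location-image coordinates to template coordinates, so that its inverse carries the indication area's corner points (given in template coordinates) back into the point location image (the homography form is an assumption for illustration):

```python
import numpy as np

def map_region_to_image(H, region_corners):
    # H: 3x3 matrix mapping point-location-image coords -> template coords.
    # region_corners: (N, 2) corner points of the detection indication area,
    # in template coordinates. Returns the corresponding target-area corners
    # in point-location-image coordinates.
    H_inv = np.linalg.inv(H)
    pts = np.hstack([region_corners, np.ones((len(region_corners), 1))])
    mapped = pts @ H_inv.T
    return mapped[:, :2] / mapped[:, 2:3]  # homogeneous -> Cartesian
```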
Therefore, the target area aimed by detection can be determined in the point position image, a processing basis is provided for only focusing on the defect condition in the target area in the subsequent detection process and not focusing on the defect not in the target area, and further misjudgment on the defect condition at the current target point position due to the defects of other areas can be avoided.
Step 2022: the processing equipment adopts a trained defect classification model, obtains classification category information, degree category information and positioning frames corresponding to each identified candidate defect based on the point location image, adopts a trained defect segmentation model, and respectively identifies and obtains a defect pixel position information set based on images in each positioning frame.
In the embodiment of the application, when the processing equipment trains the defect classification model to be trained to obtain a trained defect classification model, the processing equipment can construct the defect classification model to be trained based on a preset target detection algorithm and acquire a sample point location image set, wherein the defect classification model comprises a sub-network for realizing a defect degree prediction function, the sample point location image set comprises sample point location images which are acquired by different types of acquisition equipment and correspond to different target points, and each sample point location image is marked with a defect classification type label, a defect degree type label and a positioning frame label; and performing multiple rounds of iterative training on the defect classification model to be trained by adopting the sample point location image set until a preset convergence condition is met, so as to obtain the trained defect classification model.
In the embodiment of the present application, when the defect classification model to be trained is constructed, the target detection algorithm used may specifically be the Faster R-CNN (Faster Region-based Convolutional Neural Network) algorithm, the Cascade R-CNN algorithm, or the YOLOv5 algorithm. However, detection models built under the related art cannot output defect degree category information; in this application, to enable the constructed defect classification model to output defect degree category information, a sub-network for implementing the defect degree prediction function is added on the basis of the original algorithm, so that prediction of the defect degree category information can be realized. The sub-network is a branch added in the head structure for implementing the defect degree prediction function and generally consists, in sequence, of 4 convolution layers, 1 average pooling layer and 1 activation layer. In the YOLOv5 algorithm, the sub-network for implementing the defect degree prediction function is a degree branch added in the head structure, generally composed of 4 convolution layers.
Referring to fig. 2D, which is a schematic diagram of a defect classification model constructed according to an embodiment of the present application, it can be seen that the constructed defect classification model includes a backbone network of the Faster R-CNN, a neck network of the Faster R-CNN, and a head network of the Faster R-CNN, where the backbone and neck parts use network structures from the related art, and a branch (or sub-network) for implementing the defect degree prediction function is newly added in the head network, formed sequentially of 4 convolution layers, 1 average pooling layer and 1 activation layer.
In the process of training a defect classification model, processing equipment firstly constructs a sample point location image set for model training, wherein the sample point location image set comprises sample point location images acquired by different types of image acquisition equipment aiming at objects to be detected at different target point locations, and each sample point location image is marked with a defective classification type label, a defective degree type label and a positioning frame label; defect classification categories may include categories such as crush, scratch, etc., and defect level categories may include categories such as severe, slight, etc.; according to the actual processing requirement, the positioning frame can be a rectangular frame for selecting a small-range area where the defect is located.
In a specific training process, the processing equipment can adopt a sample point location image set to perform multi-round iterative training on the defect classification model to be trained until a preset convergence condition is met, so as to obtain a trained defect classification model; the preset convergence condition may be a convergence condition for determining that the model converges in the related art, for example, the number of times that the loss value is continuously lower than the first set value reaches the second set value, or the number of training rounds reaches the third set value, where the first set value, the second set value, and the third set value are set according to the actual processing requirement.
In one round of the iterative training process, the processing equipment inputs sample point location images into the defect classification model to be trained to obtain the classification category information, positioning frames and degree category information of the defects; a cross entropy loss function is then adopted, and a loss value is calculated based on the category difference between the predicted classification category information and the corresponding classification category labels, the position difference between the predicted positioning frames and the corresponding positioning frame labels, and the category difference between the predicted degree category information and the corresponding degree category labels; the model parameters are then adjusted through back propagation according to the obtained loss value. The number of sample point location images input in parallel in one round of training is set according to actual processing requirements, and the application does not specifically limit it.
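The per-round loss can be sketched as the sum of three terms over one sample, in plain numpy. The patent names cross entropy for the category terms; using smooth L1 for the positioning-frame term is an assumption borrowed from common detector practice, not stated in the original:

```python
import numpy as np

def cross_entropy(probs, label):
    # Cross entropy for a single sample: negative log-probability of the
    # ground-truth category index.
    return -np.log(probs[label])

def smooth_l1(pred_box, gt_box):
    # Smooth L1 regression loss over the 4 box coordinates (assumption).
    d = np.abs(np.asarray(pred_box, float) - np.asarray(gt_box, float))
    return float(np.where(d < 1.0, 0.5 * d * d, d - 0.5).sum())

def total_loss(cls_probs, cls_label, deg_probs, deg_label, pred_box, gt_box):
    # Classification-category loss + degree-category loss + box loss,
    # matching the three label types annotated on each sample image.
    return (cross_entropy(cls_probs, cls_label)
            + cross_entropy(deg_probs, deg_label)
            + smooth_l1(pred_box, gt_box))
```

In a real training loop the three terms are typically weighted; equal weights are used here for simplicity.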
In the embodiment of the application, when the processing equipment processes with the defect classification model to be trained, first-stage positioning frame extraction can be performed on the sample point location image, and regression analysis is then performed on the positioning frames to determine the classification category information and degree category information of the defects.
In this way, the trained defect classification model can output the classification category information, degree category information and positioning frames of defects, which amounts to simultaneously characterizing the degree of abnormality of the defects and gives defect detection a richer basis for reference. In addition, since the generated sample point location image set includes images acquired by various acquisition devices at different target point locations, the trained defect classification model is able to process images from different acquisition devices; and because the image content of different target point locations is taken into account, the classification result is not affected by the process parameter differences of the object to be detected at different target point locations, reducing the influence of imaging differences on the detection result and ensuring the detection effect for the defect classification categories, degree categories and positioning frames.
Meanwhile, when the processing equipment adopts the trained defect segmentation model to identify a defect pixel position information set from the image within each positioning frame, it can obtain the trained defect segmentation model, which is constructed based on a preset segmentation algorithm and is used to locate the set of defect pixel positions covered by a defect; the image content selected by each positioning frame is then input into the defect segmentation model respectively, obtaining the set of defect pixel positions determined by the model for each piece of image content.
Specifically, the processing device may construct the defect segmentation model to be trained based on a preset segmentation algorithm, where the segmentation algorithms that may be employed include, but are not limited to, the object-contextual representations network (OCRNet) algorithm and the UNet semantic segmentation network algorithm.
For the UNet semantic segmentation algorithm, assuming that the size of the input picture is 224×224, the picture is first convolved and pooled, and is processed into feature maps of four different sizes: 112×112, 56×56, 28×28, and 14×14. Then, up-sampling or deconvolution processing is carried out on the feature map with the size of 14×14 to obtain a feature map with the size of 28×28; the obtained 28×28 feature map is then subjected to channel splicing (concat) with the 28×28 feature map obtained in the encoding stage; the spliced feature map is then convolved and up-sampled to obtain a feature map with the size of 56×56, which is spliced with the 56×56 feature map obtained in the encoding stage, and the spliced feature map is convolved and up-sampled in the same way to obtain a feature map with the size of 112×112; in the same way, after four up-sampling steps, a 224×224 prediction result with the same size as the input image can be obtained.
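As a minimal sketch of the size flow just described (not the patent's actual network), the encoder/decoder shapes can be traced in plain numpy, with 2×2 average pooling standing in for the learned convolution-and-pooling blocks and nearest-neighbour repetition standing in for up-sampling/deconvolution; all function names here are illustrative:

```python
import numpy as np

def pool2x(x):
    """2x2 average pooling: halves the spatial size (stand-in for conv + pool)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up2x(x):
    """Nearest-neighbour up-sampling: doubles the spatial size (stand-in for deconv)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

x = np.zeros((8, 224, 224))              # input picture as an 8-channel feature map
skips = []
for _ in range(4):                       # encoder path
    skips.append(x)
    x = pool2x(x)
encoder_sizes = [s.shape[1] for s in skips[1:]] + [x.shape[1]]   # 112, 56, 28, 14

for skip in reversed(skips):             # decoder path: up-sample + channel concat
    x = np.concatenate([up2x(x), skip], axis=0)
out_size = x.shape[1:]                   # same spatial size as the 224x224 input
```

After the four encoder halvings and four decoder up-sample-and-concat steps, the spatial size returns to that of the input, which is exactly the property the per-pixel prediction relies on.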
When the processing equipment trains the defect segmentation model to be trained, each defect pixel position may be respectively marked in the image contents containing defects of each object to be detected, so as to obtain a defect pixel position set label for each image content; further, multiple rounds of iterative training are carried out according to the sample image content set constructed together with the corresponding labels until a preset convergence condition is met, and the trained defect segmentation model is obtained; in addition, the trained defect segmentation model can be applied to defect segmentation scenes of other products (or other objects to be detected) without being trained again, and in the actual processing process, the trained defect segmentation model can directly perform inference based on an input defect region of interest (namely, the image content selected by a positioning frame) to obtain the refined defect pixels, namely, the defect pixel position information set.
For the training of the defect segmentation model, a training mode in the related art can be followed: multiple rounds of iterative training are carried out on the defect segmentation model to be trained by adopting the sample image content set until a preset convergence condition is met, so that the trained defect segmentation model is obtained, wherein the obtained defect segmentation model can be used as a pre-training model and applied to defect segmentation scenes of other objects to be detected; the preset convergence condition may be any convergence condition used for determining model convergence in the related art, for example, the number of times that the loss value is continuously lower than the fourth set value reaches the fifth set value, or the number of training rounds reaches the sixth set value, where the fourth, fifth, and sixth set values are set according to actual processing requirements.
In one round of iterative training, the processing equipment inputs the sample image content into the defect segmentation model to be trained to obtain a predicted defect pixel position set; further, a cross entropy loss function is adopted to calculate a loss value based on the position difference between the predicted defect pixel position set and the corresponding defect pixel position set label; the model parameters are then adjusted through back propagation according to the obtained loss value, wherein the number of sample image contents input in parallel in one round of training is set according to actual processing requirements, and the application is not particularly limited thereto.
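The loss computation in one training round can be sketched as follows, assuming (for illustration only) that the predicted defect pixel position set is represented as a per-pixel defect probability map and the label as a binary mask; this is a numpy stand-in for the cross entropy loss function, not the patent's implementation:

```python
import numpy as np

def pixel_cross_entropy(pred_prob, label_mask, eps=1e-7):
    """Mean per-pixel binary cross entropy between a predicted defect
    probability map and the defect pixel position set label (1 = defect)."""
    p = np.clip(pred_prob, eps, 1.0 - eps)
    y = label_mask.astype(float)
    return float(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).mean())

# Illustrative label: a 4x4 patch whose top-left 2x2 block is defective.
label = np.zeros((4, 4))
label[:2, :2] = 1.0
loss_perfect = pixel_cross_entropy(label.copy(), label)          # near zero
loss_uniform = pixel_cross_entropy(np.full((4, 4), 0.5), label)  # ln 2 everywhere
```

A perfect prediction drives the loss toward zero, while a maximally uncertain prediction yields ln 2 per pixel, which is the gradient signal used for the back-propagation step described above.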
In this way, by means of the defect segmentation model, the defect part in the image content is used as the foreground content, and the other parts except the defect in the image content are used as the background content, and the foreground and background segmentation processing is performed, so that the defect in the image content can be accurately positioned on the pixel level.
In addition, considering that the defect classification model and the defect segmentation model are trained according to different sample quantities, identifying the defect area frame and the defect pixel position information set with different models can greatly reduce the difficulty of sample generation, and further reduce the difficulty of implementing defect detection.
Step 2023: the processing equipment determines a sub-detection result of the object to be detected at the target point position based on the classification category information, the degree category information and the defect pixel position information set of each candidate defect in the target area.
In the embodiment of the application, after the processing device determines the target area corresponding to the detection indication area in the point location template image, the processing device can determine the defect of at least part of the content of the corresponding positioning frame in the target area or the defect of at least part of the pixel positions in the corresponding defect pixel position set in the target area as the candidate defect; and further screening candidate defects, screening out target defects which need to be further judged, and combining a classification result and a segmentation result corresponding to the target defects to realize specific judgment of the target defects.
Specifically, the processing device may screen out, from among the candidate defects in the target area, target defects for which the confidence value associated with the classification category information is higher than the set threshold value, or for which the degree category information does not belong to the preset normal degree information; then, for each target defect, the following operations are performed: determining area information corresponding to the target defect based on a defect pixel position information set corresponding to the target defect, and obtaining a corresponding area detection result based on the area information and a preset area detection condition; and further determining the sub-detection result of the object to be detected at the target point location based on the area detection result corresponding to each target defect.
It should be noted that, in the embodiment of the present application, the trained defect classification model outputs the classification category information, the degree category information and the positioning frame of a defect, where a corresponding confidence value is identified for each possible piece of classification category information and for each possible piece of degree category information; generally, the classification category information with the highest confidence value is determined as the classification category information detected for the defect, and the degree category information with the highest confidence value is determined as the degree category information detected for the defect.
Based on the above, the processing device screens the target defects out of the candidate defects by measuring the magnitude relation between the confidence value associated with the classification category information of a defect and the set threshold value, and by measuring whether the degree category information of the defect matches the normal degree information, wherein the degree category information can generally be divided, according to actual processing requirements, into normal degree information and abnormal degree information, each of which corresponds to at least one degree classification category.
In the embodiment of the application, when the confidence value associated with the classification category information determined for a certain candidate defect (assumed to be candidate defect A) is not higher than the set threshold value and the degree category information determined for candidate defect A belongs to the preset normal degree information, candidate defect A is not a target defect; otherwise, candidate defect A may be determined as a target defect.
For example, assume that the degree category information includes four kinds: slight, moderate, severe, and extreme; then, according to actual processing needs, "slight" may be determined as normal degree information, and the three kinds of degree category information "moderate", "severe", and "extreme" may be determined as abnormal degree information.
For another example, assume that the set threshold configured for the classification category information is 85%, and the classification category information includes scratch, crush, and missing part; assume that the confidence values detected for candidate defect 1 are: scratch, 60% (i.e., the defect may be a scratch with 60% confidence); crush, 45%; missing part, 10%; then the classification category information obtained for candidate defect 1 is: scratch. Assuming that the degree category information detected for candidate defect 1 is moderate, although the confidence value of candidate defect 1 is not higher than the set threshold, since the degree category information corresponding to candidate defect 1 is "moderate" and does not belong to the preset normal degree information, candidate defect 1 is screened out as a target defect.
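The screening logic above can be sketched in a few lines of Python; the function name, threshold, and degree labels are illustrative placeholders taken from the example:

```python
def is_target_defect(confidence, degree, threshold=0.85,
                     normal_degrees=("slight",)):
    """A candidate defect is screened out as a target defect when its top
    classification confidence is higher than the set threshold, OR when its
    degree category does not belong to the preset normal degree information."""
    return confidence > threshold or degree not in normal_degrees

# Candidate defect 1 from the example: scratch at 60% confidence, degree "moderate".
screened = is_target_defect(0.60, "moderate")   # abnormal degree -> target defect
ignored = is_target_defect(0.60, "slight")      # low confidence, normal degree
```

Note the OR: either condition alone is enough to promote a candidate defect to a target defect, which is why candidate defect 1 is retained despite its sub-threshold confidence.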
When the processing equipment performs area detection on a target defect, the real physical area of the current target defect is calculated according to the defect pixel position set determined by segmentation and the pixel-to-area conversion ratio, so as to obtain the corresponding area information; a corresponding area detection result is then obtained according to the obtained area information and the preset area detection condition, wherein, since the image acquisition equipment generally gives a pixel precision value, namely the physical length corresponding to one pixel, when acquiring the point location image, the square of the physical length corresponding to one pixel can be used as the area corresponding to one pixel.
Specifically, the processing device may obtain the area threshold configured for the target defect according to the area where the positioning frame corresponding to the target defect is located in the point location image and the classification category information corresponding to the target defect; then, when it is determined that the area information of the target defect reaches the preset area threshold, the corresponding area detection result is judged as the area undetermined abnormality, and when it is determined that the area information of the target defect does not reach the preset area threshold, the area detection result is judged as the area detection normal.
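A minimal sketch of this area detection condition, assuming the pixel precision value (physical length per pixel, in millimetres) is known; the function name, numbers, and result strings are illustrative:

```python
def area_detection(defect_pixels, pixel_precision_mm, area_threshold_mm2):
    """Convert a defect pixel position set into a physical area: each pixel
    covers the square of the physical length that one pixel represents. The
    result is the area undetermined abnormality once the area reaches the
    preset threshold, otherwise the area detection normal."""
    area_mm2 = len(defect_pixels) * pixel_precision_mm ** 2
    if area_mm2 >= area_threshold_mm2:
        return area_mm2, "area undetermined abnormality"
    return area_mm2, "area detection normal"

# Illustrative numbers: 400 defect pixels at 0.05 mm per pixel -> 1.0 mm^2.
pixels = [(r, c) for r in range(20) for c in range(20)]
area, verdict = area_detection(pixels, 0.05, 0.5)
```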
In the embodiment of the present application, different area thresholds may be set for defects of different areas in the point location image and defects of different classification information according to actual processing requirements. The processing equipment can mark a defect high-incidence area on the object to be detected according to the historical detection result and/or mark an over-killing area caused by texture differences of different batches of the object to be detected; furthermore, in the point location template diagram, an area corresponding to the defect high-incidence area is set as a sensitive area, an area corresponding to the over-killing area is set as a shielding area, and a confidence threshold value and an area threshold value in the preset area are dynamically adjusted to solve the problems of over-killing and omission of defects.
When specifically determining the area detection result corresponding to a target defect, after determining the area threshold corresponding to the target defect, the processing device determines the area detection result of the target defect according to the magnitude relation between the area information of the target defect and the corresponding area threshold, wherein the area detection result may be either the area detection normal or the area undetermined abnormality.
Therefore, by pertinently acquiring the area threshold value corresponding to the target defect, the effective definition of the defect area can be realized under the condition of considering the classification type information of the target defect and the area of the target defect in the point position image, which is equivalent to the judgment of the target defect from the aspect of the defect area.
In a case where the area detection results of the target defects are considered, when the processing equipment determines the sub-detection result of the object to be detected at the target point location based on the area detection result corresponding to each target defect, the processing equipment may acquire the defect total number threshold set for each piece of classification category information, and respectively count, among the target defects belonging to the same classification category information, the total number whose area detection result is the area undetermined abnormality; when it is determined that the total number of such target defects corresponding to a certain piece of classification category information is higher than the corresponding defect total number threshold, the sub-detection result of the object to be detected at the target point location is judged as detection abnormality; and when it is determined that the total number of such target defects corresponding to each piece of classification category information is not higher than the corresponding defect total number threshold, the sub-detection result of the object to be detected at the target point location is determined as detection normal.
Specifically, when determining the sub-detection result of the object to be detected at the target point location, the processing device may synthesize the area detection results of all target defects in the target area for comprehensive judgment; in the specific judging process, the classification category information corresponding to each target defect in the target area is first determined, and the defect total number threshold configured in advance for each piece of classification category information is respectively acquired; the total number of target defects which belong to each piece of classification category information and whose area detection result is the area undetermined abnormality is then respectively counted; when it is determined that the total number for any piece of classification category information is higher than the corresponding defect total number threshold, the sub-detection result of the object to be detected at the target point location is judged as detection abnormality, and when it is determined that the total number corresponding to each piece of classification category information is not higher than the corresponding defect total number threshold, the sub-detection result of the object to be detected at the target point location is judged as detection normal.
In this way, when determining the sub-detection result of the object to be detected at the target point location, comprehensive judgment is performed based on the area detection results of the plurality of target defects, which is equivalent to the judgment of introducing multiple instance levels in the defect detection process, the superposition influence of the target defects of the same type is considered, and the defect detection effect is ensured.
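The multi-instance decision described above can be sketched as follows; the dictionary layout and threshold values are illustrative assumptions, not the patent's data structures:

```python
from collections import Counter

def sub_detection_result(target_defects, total_number_thresholds):
    """Count, per classification category, the target defects whose area
    detection result is the area undetermined abnormality; the sub-detection
    result is detection abnormality as soon as one category's count is higher
    than its configured defect total number threshold."""
    counts = Counter(d["category"] for d in target_defects
                     if d["area_result"] == "area undetermined abnormality")
    for category, total in counts.items():
        if total > total_number_thresholds.get(category, 0):
            return "detection abnormality"
    return "detection normal"

# Two area-abnormal scratches exceed a per-category threshold of 1.
defects = [
    {"category": "scratch", "area_result": "area undetermined abnormality"},
    {"category": "scratch", "area_result": "area undetermined abnormality"},
    {"category": "crush",   "area_result": "area detection normal"},
]
result = sub_detection_result(defects, {"scratch": 1, "crush": 2})
```

A single area-abnormal scratch would still pass here; it is the superposition of same-category defects that tips the point location into the abnormal verdict.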
In addition, in the embodiment of the application, other consideration factors, such as the defect length, can be combined in the multi-instance comprehensive determination when the sub-detection result is determined.
Specifically, after obtaining the corresponding area detection result based on the area information and the preset area detection condition, the processing device may determine a positioning frame corresponding to the target defect before determining the sub-detection result of the target point position of the object to be detected based on the area detection result corresponding to each target defect, and determine the length information corresponding to the target defect based on the number of pixels constructing the edge of the positioning frame and the mapping relationship between the pixels and the size; and obtaining a corresponding length detection result based on the length information and the preset length detection condition.
In the same manner as the above method for determining the area detection result, the processing device may, according to actual processing requirements, set in advance different length thresholds for defects in different areas of the point location image and for defects corresponding to different classification category information.
When specifically determining the length detection result corresponding to a target defect, the physical length (or length information) of the target defect is calculated according to the pixel-to-millimetre ratio and the total number of edge pixels of the positioning frame of the target defect; if it is determined that the length information is greater than the corresponding set length threshold, the length detection result of the target defect is determined as the length undetermined abnormality, and if it is determined that the length information is not greater than the corresponding set length threshold, the length detection result of the target defect is determined as the length detection normal.
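A minimal sketch of this length detection condition, under the illustrative assumption that the defect length is taken from the longer edge of the positioning frame; names and values are placeholders:

```python
def length_detection(frame_edge_pixels, mm_per_pixel, length_threshold_mm):
    """Estimate the physical length of a defect from the pixel count along the
    longer edge of its positioning frame and the pixel-to-millimetre mapping;
    a length greater than the threshold is the length undetermined abnormality."""
    length_mm = frame_edge_pixels * mm_per_pixel
    if length_mm > length_threshold_mm:
        return length_mm, "length undetermined abnormality"
    return length_mm, "length detection normal"

# Illustrative numbers: a 300-pixel frame edge at 0.05 mm per pixel -> 15 mm.
length_mm, verdict = length_detection(300, 0.05, 10.0)
```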
Furthermore, the processing device may acquire the defect total number threshold set for each piece of classification category information, and respectively count, among the target defects belonging to the same classification category information, the total number whose length detection result is the length undetermined abnormality; then, when it is determined that the total number of such target defects corresponding to a certain piece of classification category information is higher than the corresponding defect total number threshold, the sub-detection result of the object to be detected at the target point location is judged as detection abnormality; and when it is determined that the total number of such target defects corresponding to each piece of classification category information is not higher than the corresponding defect total number threshold, the sub-detection result of the object to be detected at the target point location is judged as detection normal.
Therefore, by integrating the length information conditions of different target defects, the superposition effect of the length information of different target defects is considered, and the defect detection effect is improved.
Based on this, when the sub-detection result of the object to be detected at the target point is finally determined, the processing apparatus may determine that the sub-detection result of the object to be detected at the target point is detection abnormality in the case where the sub-detection result is determined to be detection abnormality based on the area detection result, or in the case where the sub-detection result is determined to be detection abnormality based on the length detection result.
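That combination rule can be sketched as a one-line check; the result strings follow the terminology used above and are otherwise illustrative:

```python
def final_sub_detection_result(area_based, length_based):
    """The sub-detection result at a target point location is detection
    abnormality when either the area-based or the length-based multi-instance
    decision is abnormal; otherwise it is detection normal."""
    if "detection abnormality" in (area_based, length_based):
        return "detection abnormality"
    return "detection normal"

combined = final_sub_detection_result("detection normal", "detection abnormality")
```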
Taking quality inspection of a 3C structural member as an example, a process of realizing defect detection in the embodiment of the application is described, wherein the 3C structural member can be a mobile phone camera support, a mobile phone middle frame, a SIM card slot, a computer shell, a smart watch shell and the like.
Referring to fig. 3, which is a general flow chart of defect detection in the embodiment of the present application, a general process of defect detection in the embodiment of the present application will be described with reference to fig. 3.
As can be seen from the description of fig. 3, the defect detection process implemented in the embodiment of the present application may be implemented by four functional modules, which are respectively: an image alignment module, a defect classification module, a defect foreground and background segmentation module and a rule determination module.
In a specific processing process, processing equipment firstly performs image alignment processing on a point image at a target point and a corresponding point template image by means of an image alignment module, and calculates a parameter transformation matrix when the point image is transformed to be aligned with the point template image;
the processing device then invokes the trained defect classification model to infer defect information by means of the defect classification module, wherein the defect information includes classification category information, degree category information, and a positioning frame of the defect.
It should be noted that, in some possible embodiments of the present application, the processing device may separately train a defect classification model for each target point location; further, after a point location image corresponding to a certain target point location is obtained, the defect classification model corresponding to that target point location may be used to process the point location image, where the defect classification model may be trained by adopting sample images collected at the corresponding target point location, and the related processing is the same as the above-mentioned method for training the defect classification model, which is not described in detail herein.
For example, in the case that the shell of a certain product and the keyboard belong to different materials, different defect classification models can be constructed by using the same algorithm, and training of the different defect classification models can be realized by adopting different data.
Therefore, different defect classification models are obtained through training aiming at different target points, effective detection of point location images on different target points can be realized when the process parameter difference is large under different target points, and the detection effect of defects on different target points can be improved.
Further, the processing equipment calls a trained defect segmentation model by means of a defect foreground and background segmentation module, and a fine pixel position information set is obtained based on image content in a positioning frame;
finally, the processing equipment judges by means of a rule judging module based on the obtained defect information and pixel position information set and adopting a single-instance-level rule and a multi-instance-level rule, wherein the single-instance-level rule judges single detection information, and mainly judges a target area, a shielding area, a sensitive area, classification type information, a confidence value, a degree type, length information, area information and the like; the multi-instance level rule makes a comprehensive decision for a plurality of detection information, for example, decides the number of a plurality of defects satisfying a length detection condition and an area detection condition, wherein one instance characterizes one defect.
Referring to fig. 4, which is a schematic diagram of an overall flow involved in a defect detection process in an embodiment of the present application, in conjunction with fig. 4, a process involved in implementing the defect detection process in an embodiment of the present application is described below:
in combination with what is illustrated in fig. 4, when defect detection is performed, the processing device inputs the current point location image, the point location number of the target point location corresponding to the current point location image, the point location template diagram corresponding to the current target point location, and the detection indication area in the point location template diagram (four pieces of information in total) into the image alignment module, and outputs the parameter transformation matrix to be adopted when the point location image is transformed into alignment with the point location template diagram;
secondly, the processing equipment sends the point location number of the target point location and the current point location image into the defect classification module to obtain the defect information output by the trained defect classification model, wherein the defect information comprises the classification category information, the positioning frame, the confidence value and the degree category information of the defect;
after determining the region of interest according to the current point location image and the positioning frame, the processing device inputs the region of interest into the trained defect segmentation model, and calculates the refined pixels of the current defect, namely, obtains the pixel position information set, wherein, as illustrated in fig. 4, N represents the total number of identified defects, corresponding defect information being determined for each defect, and M represents the total number of defects remaining after the negligible defects are filtered out through rule judgment.
Finally, the processing device collects the obtained defect information of the N defects and sends it into the rule judging module, which judges the defect information; in the specific execution process, the following seven rules can be adopted for parallel judgment.
The first rule is a focusing area rule for determining whether a defect is located in the target area. Specifically, the focusing area (or target area) in the current point location image is calculated according to the detection indication area in the point location template diagram and the alignment transformation parameters (or parameter transformation matrix) corresponding to the current point location image; whether the current defect is in the focusing area of the current point location image is then judged; if yes, the defect is judged to be a candidate defect, and if not, the defect is judged to be a negligible defect.
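A minimal sketch of rule one, assuming the parameter transformation matrix is a 3×3 homography and the mapped detection indication area is approximated by its axis-aligned bounding box; all names here are illustrative:

```python
import numpy as np

def in_focus_area(defect_xy, transform, template_region):
    """Map the detection indication area from the point location template into
    the current point location image via the 3x3 parameter transformation
    matrix, then test whether the defect centre falls inside the axis-aligned
    bounding box of the mapped focusing (target) area."""
    pts = np.hstack([np.asarray(template_region, float),
                     np.ones((len(template_region), 1))])
    mapped = pts @ np.asarray(transform, float).T
    mapped = mapped[:, :2] / mapped[:, 2:3]       # perspective divide
    (x0, y0), (x1, y1) = mapped.min(axis=0), mapped.max(axis=0)
    x, y = defect_xy
    return bool(x0 <= x <= x1 and y0 <= y <= y1)

# Illustrative transform: pure translation by (10, 5); region of 100x50 pixels.
T = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 5.0], [0.0, 0.0, 1.0]])
region = [(0, 0), (100, 0), (100, 50), (0, 50)]
candidate = in_focus_area((50, 30), T, region)    # inside -> candidate defect
negligible = in_focus_area((200, 200), T, region) # outside -> negligible defect
```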
The second rule is a confidence rule for determining whether the defect confidence value reaches a confidence threshold. After the corresponding confidence threshold value is obtained according to the classification category information of the target point location and the defects, judging whether the confidence of the classification category of the defects meets the corresponding confidence threshold value, if so, determining the current defects as target defects, and if not, judging the current defects as negligible defects.
And the third rule is an area rule for judging whether the area information reaches a corresponding area threshold value. Specifically, according to the proportional relation between the pixel point position and the area input by the system, the actual physical area of the current defect is calculated, and whether the current defect is an area undetermined abnormality is judged according to an area threshold value.
The fourth rule is a length rule for judging whether the length information reaches the corresponding length threshold. Specifically, the physical length (or length information) of the current defect is calculated according to the pixel-to-millimetre ratio input by the system and the length of the positioning frame of the defect; whether the defect is a length undetermined abnormality is then judged according to the magnitude relation between the length information and the corresponding length threshold.
The fifth rule judges, through the degree category information in the defect information, whether the degree category information matches the degree range; specifically, the degree of a defect may be defined according to the quality requirements of the customer for the 3C part, and the acceptable degree range (or normal degree information) may then be determined, so that when the degree category information is determined to be within the degree range, the defect may be considered negligible, and when the degree category information is determined not to be within the degree range, the defect is determined to be a non-negligible defect, namely, a target defect.
The sixth rule judges, based on the shielding region and the sensitive region, whether the defect is located in such a region; through the shielding region and the sensitive region preset by the system, the defect high-incidence region on the product (the 3C structural member) or the over-killing region caused by texture differences between batches can be marked; on this basis, the shielding region and the sensitive region can be set in a targeted manner, and the problems of over-killing and missed detection of defects can be solved by dynamically adjusting the confidence threshold, area threshold, length threshold, and the like for defects in the shielding region and the sensitive region.
The seventh rule is a multi-instance level judgment rule for realizing multi-instance comprehensive judgment: whether defects of a corresponding category can be ignored is determined according to whether the number of defects reaching a certain length threshold, or the number of defects reaching a certain area threshold, meets the standard.
Through the overall rule judgment described above, the negligible defects and the non-negligible defects in the point location image can be determined; when a non-negligible defect exists in the point location image of a certain target point location, namely, the sub-detection result of the part at that target point location is determined to be detection abnormality, the part can be judged to be unqualified and marked as NG; otherwise, if the defects detected at all target point locations are determined to be negligible defects, the part can be judged to be qualified and marked as OK.
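The final OK/NG aggregation can be sketched as follows; the result strings mirror the terminology above and are otherwise illustrative:

```python
def part_verdict(sub_detection_results):
    """A part is marked NG as soon as the sub-detection result at any target
    point location is detection abnormality (a non-negligible defect exists);
    it is marked OK only when every point location detects normally."""
    if any(r == "detection abnormality" for r in sub_detection_results):
        return "NG"
    return "OK"

ng = part_verdict(["detection normal", "detection abnormality"])
ok = part_verdict(["detection normal", "detection normal"])
```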
In this way, by means of this automatic defect detection mode for the 3C structural member, a general artificial intelligence algorithm can be adopted, and image alignment, defect classification, defect foreground and background segmentation, and rule judgment are performed in sequence, so that the consistency of imaging quality of point location images acquired for similar objects to be detected at the same target point location is ensured; in addition, in the training process of the defect classification models, training according to the point location images of different partitions (namely different target point locations) of the structural member reduces the influence of the process differences between partitions on detection, and improves the defect detection precision and detection robustness.
Further, in combination with tables 1 to 5, the performance difference between the defect detection mode provided by the application and the existing defect detection modes is compared, wherein the manual omission rate is obtained from production-line observation statistics, and the manual over-kill rate is not available.
TABLE 1

| Mobile phone camera bracket | Critical defect miss rate | Overkill rate | Batch return rate | Per-piece inspection time |
|---|---|---|---|---|
| Technology of the present application | 0.0015% | 16.4% | 0.35% | 4 s |
| Prior-art technology | 0.01% | 22% | - | - |
| Manual inspection | 0.1% | - | 2.48% | 30 s |
Table 1 compares detection performance when defect detection is performed on a mobile phone camera bracket in an embodiment of the present application. As Table 1 shows, the defect detection method proposed by the present application greatly reduces the critical defect miss rate, the batch return rate, and the per-piece inspection time.
TABLE 2

| Mobile phone middle frame | Miss rate | Overkill rate | Per-piece inspection time |
|---|---|---|---|
| Technology of the present application | 0.5% | 20% | 5 s |
| Manual inspection | 5% | - | 120 s |
Table 2 compares detection performance when defect detection is performed on a mobile phone middle frame in an embodiment of the present application. As Table 2 shows, the defect detection scheme provided by the present application reduces both the miss rate and the per-piece inspection time.
TABLE 3

| SIM card slot | Miss rate | Overkill rate | Per-piece inspection time |
|---|---|---|---|
| Technology of the present application | 0.5% | 5% | 3 s |
| Manual inspection | 2% | - | 15 s |
Table 3 compares detection performance when defect detection is performed on SIM card slots in an embodiment of the present application. As Table 3 shows, the defect detection method proposed by the present application greatly reduces both the miss rate and the per-piece inspection time.
TABLE 4

| Computer housing | Miss rate | Overkill rate | Per-piece inspection time |
|---|---|---|---|
| Technology of the present application | 2.3% | 20% | 15 s |
| Manual inspection | 15% | - | 60 s |
Table 4 compares detection performance when defect detection is performed on a computer housing in an embodiment of the present application. As Table 4 shows, the defect detection method proposed by the present application greatly reduces both the miss rate and the per-piece inspection time.
TABLE 5

| Smart watch case | Miss rate | Overkill rate | Per-piece inspection time |
|---|---|---|---|
| Technology of the present application | 0.001% | 2% | 3 s |
| Manual inspection | 0.3% | - | 10 s |
Table 5 compares detection performance when defect detection is performed on a smart watch case in an embodiment of the present application. As Table 5 shows, the defect detection scheme provided by the present application greatly reduces both the miss rate and the per-piece inspection time.
Based on the same inventive concept, referring to fig. 5, which is a schematic diagram of the logical structure of a defect detection apparatus according to an embodiment of the present application, the defect detection apparatus 500 includes:
an acquiring unit 501, configured to acquire point location images respectively collected for an object to be detected at each preset target point location;
a detecting unit 502, configured to obtain a defect detection result corresponding to the object to be detected based on sub-detection results obtained by respectively detecting the point location images, where the following operations are performed each time a point location image is obtained:
acquiring the point location template map associated with the corresponding target point location, and determining, by performing image alignment processing on the point location image and the point location template map, the target area in the point location image that corresponds to the detection indication area, where the point location template map is configured with the corresponding detection indication area;
obtaining, by using a trained defect classification model, classification category information, degree category information and a positioning box corresponding to each identified candidate defect based on the point location image, and respectively identifying, by using a trained defect segmentation model, a set of defect pixel position information based on the image within each positioning box;
and determining a sub-detection result of the object to be detected at the target point location based on the classification category information, the degree category information and the defect pixel position information set of each candidate defect within the target area.
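The per-point-location flow performed by the detecting unit (align, classify, segment, then judge) can be summarized in a minimal sketch. Every callable here is a toy stand-in for the real alignment, classification, segmentation, and rule-judgment components, and all names are hypothetical.

```python
# Minimal sketch of the per-point-location detection flow. The real
# system uses a trained classifier and segmenter; these are stubs.

def detect_at_point(image, template, align, classify, segment, judge):
    """Returns the sub-detection result for one point location image."""
    in_target = align(image, template)   # predicate: is a box inside the target area?
    candidates = [c for c in classify(image) if in_target(c["box"])]
    for c in candidates:
        c["pixels"] = segment(image, c["box"])   # defect pixel positions
    return judge(candidates)

# Toy stand-ins: the target area is "x0 >= 10"; one candidate falls inside.
align = lambda img, tpl: (lambda box: box[0] >= 10)
classify = lambda img: [{"box": (12, 0, 20, 5)}, {"box": (2, 0, 8, 5)}]
segment = lambda img, box: {(box[0], box[1])}
judge = lambda cands: "abnormal" if cands else "normal"

print(detect_at_point(None, None, align, classify, segment, judge))  # abnormal
```

Only candidates whose positioning box lies in the target area are segmented and judged, mirroring the description above.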
Optionally, when the defect classification model is obtained through training, the apparatus further includes a training unit 503, where the training unit 503 is specifically configured to:
construct a defect classification model to be trained based on a preset target detection algorithm, and acquire a sample point location image set, where the defect classification model comprises a sub-network for implementing a defect degree prediction function, the sample point location image set comprises sample point location images collected by different types of acquisition devices at different target point locations, and each sample point location image is annotated with a defect classification category label, a defect degree category label and a positioning box label;
and perform multiple rounds of iterative training on the defect classification model to be trained using the sample point location image set until a preset convergence condition is met, thereby obtaining the trained defect classification model.
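The multi-round iterative training with a preset convergence condition can be sketched as a generic loop. The `train_step` callable and the convergence criterion (loss improvement below `eps`) are placeholders; the patent does not specify the actual condition.

```python
# Skeleton of multi-round training until a preset convergence condition
# is met. train_step is a stand-in for one round over the sample set.

def train_until_converged(train_step, max_rounds=100, eps=1e-3):
    prev_loss = float("inf")
    rounds, loss = 0, None
    for _ in range(max_rounds):
        loss = train_step()
        rounds += 1
        if prev_loss - loss < eps:   # hypothetical convergence condition
            break
        prev_loss = loss
    return rounds, loss

losses = iter([1.0, 0.5, 0.4995])    # simulated per-round losses
print(train_until_converged(lambda: next(losses)))
```

In this simulation the loss improvement drops below `eps` on the third round, so training stops early rather than running all `max_rounds`.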
Optionally, when determining the sub-detection result of the object to be detected at the target point location based on the classification category information, the degree category information, and the defect pixel position information set of each candidate defect in the target area, the detection unit 502 is configured to:
screen out, from the candidate defects in the target area, target defects whose classification category confidence value is higher than a set threshold or whose degree category information does not belong to preset normal degree information;
perform the following operations for each target defect: determine area information corresponding to the target defect based on the defect pixel position information set corresponding to the target defect, and obtain a corresponding area detection result based on the area information and a preset area detection condition;
and determine the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects.
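The screening rule (keep a candidate if its classification confidence exceeds the threshold, or if its degree category is not a normal one) can be sketched as follows; the field names, threshold, and degree labels are illustrative assumptions.

```python
# Hedged sketch of the screening rule: keep candidates whose class
# confidence exceeds a threshold OR whose degree is not "normal".
# Field names and values are hypothetical.

def screen_targets(candidates, conf_threshold=0.5, normal_degrees=("slight",)):
    return [c for c in candidates
            if c["confidence"] > conf_threshold
            or c["degree"] not in normal_degrees]

cands = [
    {"cls": "scratch", "confidence": 0.9, "degree": "slight"},
    {"cls": "dent",    "confidence": 0.3, "degree": "slight"},
    {"cls": "stain",   "confidence": 0.2, "degree": "severe"},
]
targets = screen_targets(cands)
print([c["cls"] for c in targets])   # ['scratch', 'stain']
```

Note that a low-confidence candidate still becomes a target defect when its degree category is abnormal, matching the "or" in the rule above.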
Optionally, after obtaining the corresponding area detection result based on the area information and the preset area detection condition, and before determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects, the detection unit 502 is further configured to:
determine the defect region box corresponding to the target defect, and determine length information corresponding to the target defect based on the number of pixels forming the edges of the defect region box and the mapping relationship between pixels and physical size;
and obtain a corresponding length detection result based on the length information and a preset length detection condition.
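A length estimate of this kind can be sketched by taking the longer edge of the defect's region box, in pixels, and scaling it by a pixel-to-millimetre mapping. The mapping value is a hypothetical calibration constant, not a value from the patent.

```python
# Illustrative length estimate: the longer edge of the defect region
# box (in pixels) scaled by an assumed pixel-to-millimetre mapping.

def defect_length_mm(pixel_positions, mm_per_pixel=0.05):
    xs = [x for x, y in pixel_positions]
    ys = [y for x, y in pixel_positions]
    edge_px = max(max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
    return edge_px * mm_per_pixel

pixels = [(10, 4), (30, 5), (18, 6)]   # sparse defect pixel positions
print(defect_length_mm(pixels))        # longer edge is 21 px, about 1.05 mm
```

The resulting length is then compared against a preset length detection condition, analogously to the area rule.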
Optionally, when obtaining the corresponding area detection result based on the preset area detection condition, the detection unit 502 is configured to:
acquire the area threshold configured for the target defect according to the position information of the positioning box corresponding to the target defect in the point location image and the classification category information corresponding to the target defect;
and judge the corresponding area detection result to be pending-abnormal when it is determined that the area information of the target defect reaches the preset area threshold, and judge the area detection result to be normal when it is determined that the area information of the target defect does not reach the preset area threshold.
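The area rule can be sketched with a per-class threshold lookup. In the full scheme the threshold may also depend on where the positioning box sits in the image; the dictionary, class names, and values here are made up for illustration.

```python
# Sketch of the area rule: threshold looked up per defect class, then
# compared against the defect's measured area. Values are illustrative.

AREA_THRESHOLDS_MM2 = {"scratch": 1.0, "dent": 0.25}

def area_detection_result(defect):
    threshold = AREA_THRESHOLDS_MM2[defect["cls"]]
    return "pending-abnormal" if defect["area_mm2"] >= threshold else "normal"

print(area_detection_result({"cls": "dent", "area_mm2": 0.3}))    # pending-abnormal
print(area_detection_result({"cls": "scratch", "area_mm2": 0.3})) # normal
```

A "pending-abnormal" result does not fail the part by itself; it feeds into the per-class aggregation described next.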
Optionally, when determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects, the detection unit 502 is configured to:
acquire the total-defect threshold set for each type of classification category information, and separately count, for each classification category, the total number of target defects whose area detection result is pending-abnormal;
and judge the sub-detection result of the object to be detected at the target point location to be abnormal when it is determined that the total number of target defects corresponding to any classification category is higher than the corresponding total-defect threshold; and judge the sub-detection result of the object to be detected at the target point location to be normal when it is determined that the total number of target defects corresponding to every classification category is not higher than the corresponding total-defect threshold.
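The per-class aggregation can be sketched by counting pending-abnormal defects per classification category and comparing against that category's total-defect threshold. The data shapes and threshold values are illustrative assumptions.

```python
# Sketch of the per-class aggregation: count pending-abnormal defects
# per class and compare each count against that class's threshold.
from collections import Counter

def point_sub_result(target_defects, totals_threshold):
    counts = Counter(d["cls"] for d in target_defects
                     if d["area_result"] == "pending-abnormal")
    abnormal = any(counts[cls] > totals_threshold.get(cls, 0)
                   for cls in counts)
    return "abnormal" if abnormal else "normal"

defects = [
    {"cls": "scratch", "area_result": "pending-abnormal"},
    {"cls": "scratch", "area_result": "pending-abnormal"},
    {"cls": "dent",    "area_result": "normal"},
]
print(point_sub_result(defects, {"scratch": 1}))  # 2 > 1 -> abnormal
print(point_sub_result(defects, {"scratch": 3}))  # normal
```

Only the pending-abnormal counts matter; defects whose area result is normal never contribute toward the threshold.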
Optionally, when determining the target area corresponding to the detection indication area in the point location image by performing image alignment processing on the point location image and the point location template map, the detection unit 502 is configured to:
calculate the parameter transformation matrix configured for the point location image when the point location image is transformed into alignment with the point location template map;
and determine the target area corresponding to the detection indication area in the point location image based on the parameter transformation matrix and the position information corresponding to the detection indication area in the point location template map.
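Assuming the alignment step yields a 3x3 homogeneous transformation matrix relating template coordinates to image coordinates, the detection indication area's corners can be mapped into the point location image as follows. The matrix here is a toy translation; in practice it would come from the alignment algorithm.

```python
# Illustrative mapping of the detection indication area from template
# space into the point location image, given a 3x3 matrix H. Pure
# Python; H here is an assumed toy translation, not a real alignment.

def transform_points(H, points):
    out = []
    for x, y in points:
        xh = H[0][0]*x + H[0][1]*y + H[0][2]
        yh = H[1][0]*x + H[1][1]*y + H[1][2]
        w  = H[2][0]*x + H[2][1]*y + H[2][2]
        out.append((xh / w, yh / w))   # homogeneous divide
    return out

H = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]           # shift by (+5, -3)
region = [(0, 0), (100, 0), (100, 50), (0, 50)]  # template-space corners
print(transform_points(H, region))               # target area corners in the image
```

With a full homography the divide by `w` matters; for the pure translation above it is always 1.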
Optionally, when identifying the defect pixel position information sets based on the images within the positioning boxes using the trained defect segmentation model, the detection unit 502 is configured to:
obtain the trained defect segmentation model, where the defect segmentation model is constructed based on a preset segmentation algorithm and is used to locate the set of pixel positions covered by a defect;
and respectively input the image content selected by each defect region box into the defect segmentation model, to obtain the defect pixel position set determined by the defect segmentation model for each piece of image content.
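A stand-in for this stage can crop each positioning box and collect the foreground pixel coordinates. A fixed intensity threshold substitutes for the trained segmentation model here, purely for illustration; the box format and threshold are assumptions.

```python
# Stand-in for the segmentation stage: crop a positioning box and
# collect foreground pixel coordinates. A fixed threshold replaces the
# trained segmentation model for illustration only.

def defect_pixels(image, box, fg_threshold=128):
    """image: 2D list of grayscale values; box: (x0, y0, x1, y1) inclusive."""
    x0, y0, x1, y1 = box
    return {(x, y)
            for y in range(y0, y1 + 1)
            for x in range(x0, x1 + 1)
            if image[y][x] >= fg_threshold}

img = [[0] * 5 for _ in range(5)]
img[2][2] = img[2][3] = 255               # a tiny bright defect
print(defect_pixels(img, (1, 1, 3, 3)))   # {(2, 2), (3, 2)}
```

The returned coordinate set is the per-defect "defect pixel position set" that the area and length rules consume downstream.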
Optionally, when obtaining the defect detection result corresponding to the object to be detected based on the sub-detection results obtained by respectively detecting the point location images, the detection unit 502 is configured to:
judge the defect detection result corresponding to the object to be detected as failed when it is determined that any sub-detection result obtained by respectively detecting the point location images is judged to be abnormal; and
judge the defect detection result corresponding to the object to be detected as qualified when it is determined that all sub-detection results obtained by respectively detecting the point location images are normal.
Having described the defect detection method and apparatus of the exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application is described next.
Those skilled in the art will appreciate that various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
Based on the same inventive concept as the above-mentioned method embodiment, an electronic device is further provided in the embodiment of the present application, and referring to fig. 6, which is a schematic diagram of a hardware composition structure of an electronic device to which the embodiment of the present application is applied, the electronic device 600 may at least include a processor 601 and a memory 602. The memory 602 stores program code that, when executed by the processor 601, causes the processor 601 to perform any of the defect detection steps described above.
In some possible implementations, a computing device according to the application may include at least one processor and at least one memory. The memory stores program code that, when executed by the processor, causes the processor to perform the steps of the defect detection method according to the various exemplary embodiments of the application described above. For example, the processor may perform the steps shown in fig. 2A and 2C.
A computing device 700 according to such an embodiment of the application is described below with reference to fig. 7. As shown in fig. 7, computing device 700 is in the form of a general purpose computing device. Components of computing device 700 may include, but are not limited to: the at least one processing unit 701, the at least one memory unit 702, and a bus 703 that connects the different system components (including the memory unit 702 and the processing unit 701).
Bus 703 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor bus, and a local bus using any of a variety of bus architectures.
The storage unit 702 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 7021 and/or cache memory 7022, and may further include Read Only Memory (ROM) 7023.
The storage unit 702 may also include a program/utility 7025 having a set (at least one) of program modules 7024, such program modules 7024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The computing device 700 may also communicate with one or more external devices 704 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the computing device 700, and/or with any device (e.g., a router, a modem, etc.) that enables the computing device 700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 705. Moreover, the computing device 700 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, through the network adapter 706. As shown, the network adapter 706 communicates with the other modules of the computing device 700 over the bus 703. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the computing device 700, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Based on the same inventive concept as the method embodiments described above, the various aspects of defect detection provided by the present application may also be implemented in the form of a program product comprising program code. When the program product runs on an electronic device, the program code causes the electronic device to carry out the steps of the defect detection method according to the various exemplary embodiments of the present application described herein; for example, the electronic device may carry out the steps shown in fig. 2A and 2C.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), optical fiber, portable Compact Disk Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (15)

1. A defect detection method, comprising:
acquiring point location images respectively collected for an object to be detected at each preset target point location;
obtaining a defect detection result corresponding to the object to be detected based on sub-detection results obtained by respectively detecting the point location images, wherein the following operations are performed each time a point location image is obtained:
acquiring a point location template map associated with the corresponding target point location, and determining, by performing image alignment processing on the point location image and the point location template map, a target area in the point location image corresponding to a detection indication area, wherein the point location template map is configured with the corresponding detection indication area;
obtaining, by using a trained defect classification model, classification category information, degree category information and a positioning box corresponding to each identified candidate defect based on the point location image, and respectively identifying, by using a trained defect segmentation model, a set of defect pixel position information based on the image within each positioning box;
and determining a sub-detection result of the object to be detected at the target point location based on the classification category information, the degree category information and the defect pixel position information set of each candidate defect in the target area.
2. The method of claim 1, wherein training to obtain a defect classification model comprises:
constructing a defect classification model to be trained based on a preset target detection algorithm, and acquiring a sample point location image set, wherein the defect classification model comprises a sub-network for implementing a defect degree prediction function, the sample point location image set comprises sample point location images collected by different types of acquisition devices at different target point locations, and each sample point location image is annotated with a defect classification category label, a defect degree category label and a positioning box label;
and performing multiple rounds of iterative training on the defect classification model to be trained using the sample point location image set until a preset convergence condition is met, to obtain the trained defect classification model.
3. The method of claim 1, wherein the determining the sub-detection result of the object to be detected at the target point location based on the classification category information, the degree category information and the defect pixel position information set of each candidate defect in the target area comprises:
screening out, from the candidate defects in the target area, target defects whose classification category confidence value is higher than a set threshold or whose degree category information does not belong to preset normal degree information;
performing the following operations for each target defect: determining area information corresponding to the target defect based on the defect pixel position information set corresponding to the target defect, and obtaining a corresponding area detection result based on the area information and a preset area detection condition;
and determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects.
4. The method of claim 3, wherein after obtaining the corresponding area detection result based on the area information and the preset area detection condition, and before determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects, the method further comprises:
determining a defect region box corresponding to the target defect, and determining length information corresponding to the target defect based on the number of pixels forming the edges of the defect region box and a mapping relationship between pixels and physical size;
and obtaining a corresponding length detection result based on the length information and a preset length detection condition.
5. The method of claim 3, wherein the obtaining the corresponding area detection result based on the preset area detection condition comprises:
acquiring an area threshold configured for the target defect according to the position information of the positioning box corresponding to the target defect in the point location image and the classification category information corresponding to the target defect;
and judging the corresponding area detection result to be pending-abnormal when it is determined that the area information of the target defect reaches the preset area threshold, and judging the area detection result to be normal when it is determined that the area information of the target defect does not reach the preset area threshold.
6. The method of claim 5, wherein determining the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects comprises:
acquiring a total-defect threshold set for each type of classification category information, and separately counting, for each classification category, the total number of target defects whose area detection result is pending-abnormal;
and judging the sub-detection result of the object to be detected at the target point location to be abnormal when it is determined that the total number of target defects corresponding to any classification category is higher than the corresponding total-defect threshold; and judging the sub-detection result of the object to be detected at the target point location to be normal when it is determined that the total number of target defects corresponding to every classification category is not higher than the corresponding total-defect threshold.
7. The method of any one of claims 1-6, wherein the determining a target area corresponding to a detection indication area in the point location image by performing image alignment processing on the point location image and the point location template map comprises:
calculating a parameter transformation matrix configured for the point location image when the point location image is transformed into alignment with the point location template map;
and determining the target area corresponding to the detection indication area in the point location image based on the parameter transformation matrix and the position information corresponding to the detection indication area in the point location template map.
8. The method of any one of claims 1-6, wherein using the trained defect segmentation model to respectively identify a set of defect pixel position information based on the image within each positioning box comprises:
obtaining the trained defect segmentation model, wherein the defect segmentation model is constructed based on a preset segmentation algorithm and is used for locating the set of pixel positions covered by a defect;
and respectively inputting the image content selected by each defect region box into the defect segmentation model, to obtain the defect pixel position set determined by the defect segmentation model for each piece of image content.
9. The method according to any one of claims 1-6, wherein the obtaining the defect detection result corresponding to the object to be detected based on the sub-detection results obtained by respectively detecting the point location images comprises:
judging the defect detection result corresponding to the object to be detected as failed when it is determined that any sub-detection result obtained by respectively detecting the point location images is judged to be abnormal; and
judging the defect detection result corresponding to the object to be detected as qualified when it is determined that all sub-detection results obtained by respectively detecting the point location images are normal.
10. A defect detection apparatus, comprising:
an acquiring unit, configured to respectively acquire point location images collected for an object to be detected at each preset target point location;
a detecting unit, configured to obtain a defect detection result corresponding to the object to be detected based on sub-detection results obtained by respectively detecting the point location images, wherein the following operations are performed each time a point location image is obtained:
acquiring a point location template map associated with the corresponding target point location, and determining, by performing image alignment processing on the point location image and the point location template map, a target area in the point location image corresponding to a detection indication area, wherein the point location template map is configured with the corresponding detection indication area;
obtaining, by using a trained defect classification model, classification category information, degree category information and a positioning box corresponding to each identified candidate defect based on the point location image, and respectively identifying, by using a trained defect segmentation model, a set of defect pixel position information based on the image within each positioning box;
and determining a sub-detection result of the object to be detected at the target point location based on the classification category information, the degree category information and the defect pixel position information set of each candidate defect in the target area.
11. The apparatus of claim 10, wherein, when the defect classification model is obtained through training, the apparatus further comprises a training unit, the training unit being specifically configured to:
construct a defect classification model to be trained based on a preset target detection algorithm, and acquire a sample point location image set, wherein the defect classification model comprises a sub-network for implementing a defect degree prediction function, the sample point location image set comprises sample point location images collected by different types of acquisition devices at different target point locations, and each sample point location image is annotated with a defect classification category label, a defect degree category label and a positioning box label;
and perform multiple rounds of iterative training on the defect classification model to be trained using the sample point location image set until a preset convergence condition is met, to obtain the trained defect classification model.
12. The apparatus of claim 10, wherein, when determining the sub-detection result of the object to be detected at the target point location based on the classification category information, the degree category information and the defect pixel position information set of each candidate defect in the target area, the detection unit is configured to:
screen out, from the candidate defects in the target area, target defects whose classification category confidence value is higher than a set threshold or whose degree category information does not belong to preset normal degree information;
perform the following operations for each target defect: determine area information corresponding to the target defect based on the defect pixel position information set corresponding to the target defect, and obtain a corresponding area detection result based on the area information and a preset area detection condition;
and determine the sub-detection result of the object to be detected at the target point location based on the area detection results corresponding to the target defects.
13. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the defect detection method of any one of claims 1-9.
14. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the defect detection method according to any one of claims 1-9.
15. A computer program product comprising a computer program, which when executed by a processor implements the defect detection method according to any of claims 1-9.
CN202211392176.7A 2022-11-08 2022-11-08 Defect detection method and device, electronic equipment and storage medium Pending CN117011216A (en)

Publications (1)

Publication Number: CN117011216A; Publication Date: 2023-11-07


Nayak et al. Fruit recognition using image processing
CN113139540B (en) Backboard detection method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination