CN111986178B - Product defect detection method, device, electronic equipment and storage medium

Product defect detection method, device, electronic equipment and storage medium

Info

Publication number
CN111986178B
Authority
CN
China
Prior art keywords
image
defect
information
channel
detected
Prior art date
Legal status
Active
Application number
CN202010849926.3A
Other languages
Chinese (zh)
Other versions
CN111986178A (en)
Inventor
矫函哲
黄锋
邹建法
聂磊
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010849926.3A priority Critical patent/CN111986178B/en
Publication of CN111986178A publication Critical patent/CN111986178A/en
Application granted granted Critical
Publication of CN111986178B publication Critical patent/CN111986178B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a product defect detection method, apparatus, electronic device and storage medium, relating to the fields of computer vision, image processing, deep learning and the like. The specific implementation scheme is as follows: acquiring an image to be detected of a target product; determining a color characteristic difference value between the image to be detected and a pre-stored template image according to the color characteristic values of the two images; taking the color characteristic difference value as the characteristic value of a first channel to obtain an input image comprising the first channel; and obtaining defect information of the target product according to the input image and a target detection model. Embodiments of the application can improve the defect detection effect for products with small size and weak texture characteristics in industrial scenes.

Description

Product defect detection method, device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to the fields of computer vision, image processing, deep learning and the like.
Background
In industrial manufacturing scenarios, such as component manufacturing for consumer electronics, defect detection of product appearance is an important link before products leave the factory. Traditional appearance defect detection relies on manual visual inspection, which suffers from high labor cost, difficulty in unifying quality inspection standards, and difficulty in storing detection data for later mining and reuse. Compared with manual visual inspection, automatic detection schemes based on computer vision offer stable performance and can be iteratively optimized, and have therefore attracted wide attention in the field of defect detection.
Disclosure of Invention
The application provides a product defect detection method, a product defect detection device, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided a product defect detection method including:
Acquiring an image to be detected of a target product;
determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
Taking the color characteristic difference value as a characteristic value of the first channel to obtain an input image comprising the first channel;
and obtaining defect information of the target product according to the input image and the target detection model.
According to another aspect of the present application, there is provided a product defect detecting apparatus comprising:
The image acquisition module is used for acquiring an image to be detected of the target product;
The difference value determining module is used for determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
The first channel processing module is used for taking the color characteristic difference value as a characteristic value of the first channel to obtain an input image comprising the first channel;
And the defect detection module is used for obtaining defect information of the target product according to the input image and the target detection model.
According to another aspect of the present application, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods provided by any of the embodiments of the present application.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method provided by any of the embodiments of the present application.
According to another aspect of the application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
According to the technical scheme of the application, the characteristic value of the first channel of the input image is the color characteristic difference value between the image to be detected of the target product and the template image. Therefore, when defect information of the target product is obtained from the input image and the target detection model, deep features can be extracted from the difference between the image to be detected and the template image. By focusing the model's attention on this difference, the defect detection effect for products with small size and weak texture characteristics in industrial scenes is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a schematic diagram of a method for detecting defects in a product according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for detecting defects in a product according to another embodiment of the present application;
FIG. 3 is a schematic diagram of an HRNet model in an embodiment of the present application;
FIG. 4 is a schematic diagram showing an application example of a product defect detection method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a product defect detection apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a product defect detection apparatus according to another embodiment of the present application;
Fig. 7 is a block diagram of an electronic device for implementing a product defect detection method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of a method for detecting defects of a product according to an embodiment of the present application, as shown in fig. 1, the method includes:
Step S11, obtaining an image to be detected of a target product;
By way of example, the target product may include a component of a product to be inspected for defects in industrial manufacturing, such as consumer electronics or home appliances. The image to be detected of the target product may be an original image captured by an image acquisition device on the production line, such as a camera or video camera, or an image obtained by processing such an original image.
Step S12, determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
Illustratively, the template image may include an image, captured in advance, of a qualified (good) product of the same type or model as the target product, or may include a simulated design drawing of the product, or the like.
The color feature values of an image may include the feature values of its pixels in one or more color channels. For example, for an RGB (Red-Green-Blue) color image, the color feature values may include the R-channel, G-channel and/or B-channel feature values of each pixel. For a grayscale image, the color feature values may include the feature values of the gray channel, i.e., the gray values.
The color feature difference may include the difference between the color feature value of each pixel of the image to be detected and that of the corresponding pixel in the template image. If the image to be detected and the template image are RGB color images, color feature differences may be calculated separately for the R, G and B channels. If they are grayscale images, the color feature difference includes the gray-level difference of each pixel.
For example, if the image to be detected and the template image have the same resolution, for instance because they were acquired by the same image acquisition device, the difference between color feature values can be calculated pixel by pixel to obtain the color feature difference. If their resolutions differ, the resolution of either image can first be adjusted so that they match, and the color feature difference determined afterwards.
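For illustration, a minimal sketch of this pixel-wise difference computation is given below (assuming OpenCV and NumPy; the function name and the choice to resize the template rather than the image to be detected are illustrative, not specified by the application):

```python
import cv2
import numpy as np

def color_feature_difference(image_to_detect: np.ndarray,
                             template: np.ndarray) -> np.ndarray:
    """Per-pixel, per-channel color feature difference (illustrative sketch)."""
    # If resolutions differ, resize the template to match the image to be detected.
    if template.shape[:2] != image_to_detect.shape[:2]:
        template = cv2.resize(
            template, (image_to_detect.shape[1], image_to_detect.shape[0]))
    # Compute a signed difference in a wider dtype to avoid uint8 wrap-around.
    diff = image_to_detect.astype(np.int16) - template.astype(np.int16)
    return diff  # shape (H, W, 3) for RGB input, (H, W) for grayscale
```

For grayscale images the same code yields the single-channel gray-level difference described above.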
Step S13, taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel;
The input image obtained in step S13 may be a single-channel or multi-channel image that includes a first channel, where the feature values of the first channel are the color feature differences. Illustratively, the input image may include one or more first channels; for example, it may include 3 first channels corresponding to the R, G and B channels of the image to be detected, respectively, or a single first channel corresponding to the gray channel.
Illustratively, the input image may further include a second channel, a third channel and other channels. The feature values of these other channels may include the color feature values of the image to be detected, the color feature values of the template image, or other feature information of the target product, such as the depth or surface inclination of the target product identified from the image to be detected.
And step S14, obtaining defect information of the target product according to the input image and the target detection model.
For example, the target detection model takes the input image and outputs defect information of the product shown in it. The target detection model may be trained based on a deep convolutional neural network (Deep CNN), such as U-Net (a U-shaped network), FCN (Fully Convolutional Network), or Mask R-CNN (an instance segmentation network that extends R-CNN, Regions with CNN features, with a mask branch). After the input image is fed to the target detection model, the model can output the defect information of the target product.
Illustratively, the defect information of the target product may include defect location, defect size, defect type, and the like.
In the embodiment of the application, the characteristic value of the first channel of the input image is the color characteristic difference value between the image to be detected of the target product and the template image. Therefore, when defect information of the target product is obtained from the input image and the target detection model, deep features can be extracted from the difference between the image to be detected and the template image. By focusing the model's attention on this difference, the defect detection effect for products with small size and weak texture characteristics in industrial scenes is improved, with the advantages of accurate detection, stability and strong robustness.
In an alternative exemplary embodiment, the input image further comprises a second channel and/or a third channel. The product defect detection method may further include:
taking the color characteristic value of the image to be detected as the characteristic value of the second channel, and/or,
And taking the color characteristic value of the template image as the characteristic value of the third channel.
For example, the input image may be a 9-channel image including 3 first channels, 3 second channels, and 3 third channels. The 3 first channels contain the color characteristic differences of the R, G and B channels respectively; the 3 second channels contain the feature values of the R, G and B channels of the image to be detected; and the 3 third channels contain the feature values of the R, G and B channels of the template image. The target detection model can extract features from each channel separately, fuse the extracted feature information of all channels, and output the defect information of the target product in the image to be detected.
As another example, the input image may be a 3-channel image including 1 first channel, 1 second channel, and 1 third channel. The first channel comprises a gray value difference value between the image to be detected and the template image; the second channel comprises gray values of the image to be detected; the third channel includes gray values of the template image.
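A sketch of how such a multi-channel input might be assembled is shown below (NumPy assumed; the channel order and dtype are illustrative choices of this example, not prescribed by the application):

```python
import numpy as np

def build_input_image(image_to_detect: np.ndarray,
                      template: np.ndarray,
                      diff: np.ndarray) -> np.ndarray:
    """Stack difference, detected-image and template channels into one input.

    For RGB inputs this yields a 9-channel image (3 first + 3 second + 3 third
    channels); for grayscale inputs it yields a 3-channel image.
    """
    if image_to_detect.ndim == 2:  # grayscale: add a trailing channel axis
        image_to_detect = image_to_detect[..., None]
        template = template[..., None]
        diff = diff[..., None]
    # Channel order (difference, detected image, template) is an illustrative choice.
    return np.concatenate([diff.astype(np.float32),
                           image_to_detect.astype(np.float32),
                           template.astype(np.float32)], axis=-1)
```

A detection model consuming such a tensor would need its first convolution layer configured for the corresponding number of input channels.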
According to this exemplary embodiment, during training the target detection model can simultaneously learn from the defect image, the template image and the difference between them, and, based on its attention mechanism, automatically focus attention on the more important channels or spatial positions of the image, which greatly improves the defect detection effect for products with small size and weak texture features in industrial scenes.
Illustratively, as shown in fig. 2, in an alternative embodiment of the step S11, acquiring the image to be detected of the target product may include:
step S121, obtaining an original image of a target product;
Step S122, determining the position information of the identification point of the target product in the original image;
and step S123, correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain an image to be detected, which is aligned with the template image.
Illustratively, the identification points may be fixed, highly recognizable markers on the target product, such as screws or part corner points. The position information of the identification points in the original image can be obtained by image recognition, for example by detecting them with a key point detection model. The position information of the identification points in the template image can be determined by manual annotation or by image recognition.
For example, an image acquisition device on the production line captures the target product to obtain an original image, and a key point detection model determines the position information of the identification points in the original image. From the position information of the identification points in the original image and in the template image, differences in rotation angle, position, size and other properties between the products in the two images can be determined, and the original image corrected accordingly; for example, the original image can be rotated according to the difference in rotation angle. The corrected image, aligned with the template image, serves as the image to be detected. The key point detection model may be obtained by training a deep convolutional neural network, such as an FCN or HRNet (High-Resolution Net). Aligning the corrected image with the template image may mean that one or more of the rotation angle, position and size of the product are consistent between the two images.
According to the embodiment, the image to be detected and the template image are aligned, so that each pixel in the image to be detected corresponds to each pixel in the template image one by one, the accuracy of the color characteristic difference value is improved, and the accuracy of product defect detection is improved.
Optionally, in the step S122, determining the location information of the identification point of the target product in the original image may include:
And inputting the original image into a high-resolution network HRNet model to obtain the position information of the identification point in the original image, which is output by the HRNet model.
This embodiment uses an HRNet model to detect the identification points in the original image; the processing of the original image by the HRNet model can refer to the schematic diagram shown in fig. 3. As shown in fig. 3, the network structure of the HRNet model includes convolution layers, strided convolution layers and upsampling layers. After the original image is input into the model, it is processed by these layers and a feature map is output; a loss function is calculated from the feature map, operations such as concatenation and deconvolution are performed on the feature maps when preset conditions are reached, and finally a feature map in which the identification points are enhanced is output.
The convolution layers scan the original image or feature map with convolution kernels of different weights, extract meaningful image features, and output them to the next feature map. The strided convolution layers enlarge the receptive field of the convolution kernels without increasing the number of parameters, improving model performance. The upsampling layers upsample the original image or feature map; for example, a feature map of width w and height h becomes a feature map of width 2w and height 2h after an upsampling layer, so that more detail information is retained.
The HRNet model is a deep neural network with convolution and upsampling layers; it is robust to original images with different brightness and inclination angles and generalizes well in the identification-point detection task.
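The application does not spell out how identification-point coordinates are read from the HRNet output; the following is a generic sketch of decoding per-keypoint heatmaps, which is a common convention for such models and is an assumption here (the `stride` parameter and the array layout are illustrative):

```python
import numpy as np

def decode_keypoints(heatmaps: np.ndarray, stride: int = 4) -> np.ndarray:
    """Read one (x, y) position per identification point from keypoint heatmaps.

    heatmaps: array of shape (K, H, W), one channel per identification point,
              as produced by a keypoint model such as HRNet (assumed layout).
    stride:   downsampling factor between the input image and the heatmaps.
    """
    num_points, h, w = heatmaps.shape
    points = np.zeros((num_points, 2), dtype=np.float32)
    for k in range(num_points):
        idx = np.argmax(heatmaps[k])          # location of the peak response
        y, x = divmod(idx, w)
        points[k] = (x * stride, y * stride)  # map back to input-image coordinates
    return points
```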
Optionally, the step S123 may be performed to correct the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image, so as to obtain the image to be detected aligned with the template image, and may include:
Determining an affine transformation matrix according to the position information of the identification points in the original image and the position information of the identification points in the template image;
and correcting the original image according to the affine transformation matrix to obtain an image to be detected, which is aligned with the template image.
For example, the original image can be represented as a matrix in which each element is the feature value of a pixel; applying the affine transformation matrix to this representation yields the matrix corresponding to the image to be detected, and thus the image to be detected itself.
The affine transformation matrix characterizes the rotation, translation, scaling and other transformations between the identification points in the original image and those in the template image. Correcting the original image with the affine transformation matrix therefore applies an accurate transformation to the target product in the original image, making it consistent with the product in the template image in inclination angle, position, size and other respects. This improves the accuracy of the color characteristic difference and hence of product defect detection.
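An illustrative sketch of the alignment step using OpenCV follows; the use of `cv2.estimateAffinePartial2D` with RANSAC is one possible way to obtain the affine matrix from corresponding identification points and is not prescribed by the application:

```python
import cv2
import numpy as np

def align_to_template(original: np.ndarray,
                      points_original: np.ndarray,
                      points_template: np.ndarray,
                      template_size: tuple) -> np.ndarray:
    """Warp the original image so its identification points match the template's.

    points_original, points_template: arrays of shape (K, 2) with corresponding
    identification-point coordinates; template_size is (width, height).
    """
    src = points_original.astype(np.float32)
    dst = points_template.astype(np.float32)
    # Estimate a 2x3 affine matrix (rotation, translation, scale) from the
    # corresponding identification points; RANSAC discards mislocated points.
    matrix, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    # Apply the transform so the corrected image is aligned with the template.
    return cv2.warpAffine(original, matrix, template_size)
```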
Illustratively, in an alternative embodiment of the step S14, obtaining defect information of the target product according to the input image and the target detection model may include:
Inputting the input image into a target detection model to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
and determining the defect type corresponding to the defect position according to the corresponding relation between the mask information and the defect type.
In this embodiment, the defect information of the target product may be determined using a model whose output includes defect positions and mask information. The defect type can be obtained from the mask information output by the model, which makes it easy to adapt to different types of defective products.
For example, a Mask R-CNN model is used as the target detection model. Its network structure includes convolution layers, pooling layers, fully connected layers and so on. The convolution layers scan the original image or feature map with convolution kernels of different weights, extract meaningful image features, and output them to the next feature map. The pooling layers reduce the dimensionality of the feature maps while retaining the main features of the image. Specifically, the Mask R-CNN model uses the convolution operations of a classification model to obtain the corresponding feature maps, then uses a region proposal network to determine whether a region of interest (ROI) of the original image contains a defect. If it does, a convolutional neural network extracts features and the model predicts the defect boundary, bounding box and mask information of the product; if it does not, no related calculation is performed for that region. During training, the prediction losses of the various kinds of information can be combined to optimize the model parameters, and training stops when the error between the model output and the ground truth falls below a threshold.
Because the Mask R-CNN model adopts a deep neural network structure with convolution and pooling operations, it is robust to original images with different brightness and inclination angles and generalizes well in the defect-position detection task. The output of the model includes the defect position, mask information, confidence and so on, where the mask information corresponds to the defect type. During training, the loss on the predicted mask information can be combined with the losses on other information to optimize the model parameters, so determining the defect type from the mask information output by the model is both accurate and robust.
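A minimal inference sketch with the off-the-shelf torchvision Mask R-CNN (3-channel input) is given below. This is not the patented model: the multi-channel input described above would additionally require a modified first convolution layer, and the label-to-defect-type mapping is purely hypothetical.

```python
import torch
import torchvision

# Standard 3-channel Mask R-CNN; a 9-channel input as described above would
# require replacing the backbone's first convolution layer (not shown here).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Illustrative mapping from predicted class label (mask information) to defect type.
DEFECT_TYPES = {1: "scratch", 2: "dent", 3: "missing_part"}  # assumed labels

def detect_defects(input_image: torch.Tensor, score_threshold: float = 0.5):
    """input_image: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        output = model([input_image])[0]   # dict with boxes, labels, scores, masks
    defects = []
    for box, label, score, mask in zip(output["boxes"], output["labels"],
                                       output["scores"], output["masks"]):
        if score >= score_threshold:
            defects.append({
                "position": box.tolist(),                        # defect bounding box
                "type": DEFECT_TYPES.get(int(label), "unknown"), # from label/mask info
                "confidence": float(score),
                "mask": mask[0] > 0.5,                           # binary defect mask
            })
    return defects
```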
Illustratively, the product defect detection method may further include:
and determining a processing mode of the target product according to the defect information of the target product.
For example, according to the type, position or size of the defect of the target product and the quality and safety requirements of the production line, it is determined whether to operate the robot arm to remove the target product from the production line or to issue alarm information.
According to the exemplary embodiment, the processing mode of the target product can be determined based on the detected defect information, corresponding business decisions can be automatically made, the automation level of the production line is improved, and labor cost is saved.
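A sketch of such a rule-based decision layer on top of the detected defect information is shown below; the thresholds, defect-type names and returned action strings are all assumptions of this example, reusing the hypothetical `detect_defects` output format from the earlier sketch:

```python
def decide_processing(defects, size_threshold_px: float = 50.0) -> str:
    """Illustrative business rule mapping defect information to a processing mode."""
    for defect in defects:
        x1, y1, x2, y2 = defect["position"]
        size = max(x2 - x1, y2 - y1)
        # Critical defect types or large defects: remove the part from the line.
        if defect["type"] == "missing_part" or size > size_threshold_px:
            return "remove_from_line"   # e.g. trigger the robot arm
    return "raise_alarm" if defects else "pass"
```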
Illustratively, the product defect detection method may further include:
Storing defect information of a target product and marking information corresponding to the defect information in a training database;
And calling defect information and marking information corresponding to the defect information from the training database, and updating the target detection model.
For example, after each product is inspected, its defect information is stored in the training database. After the product defect detection method has been running for some time, the defect information and detection accuracy can be manually reviewed, and the labeling information produced during manual review is stored in the training database. When an update instruction is received, the defect information and labeling information are retrieved from the training database to retrain the target detection model. In this way the model can expand and generalize as the business evolves, improving the accuracy of defect detection.
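One possible way to persist detection results and review annotations for later retraining is sketched below; the SQLite schema and field names are illustrative, as the application does not specify a storage format:

```python
import json
import sqlite3
from typing import Optional

# Illustrative schema: one row per inspected product image.
conn = sqlite3.connect("training_database.db")
conn.execute("""CREATE TABLE IF NOT EXISTS defect_records (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    image_path TEXT,
    defect_info TEXT,   -- model output (position, type, confidence)
    annotation TEXT     -- labeling information added during manual review
)""")

def store_record(image_path: str, defect_info: dict,
                 annotation: Optional[dict] = None) -> None:
    """Append one detection result (and optional review annotation) to the database."""
    conn.execute(
        "INSERT INTO defect_records (image_path, defect_info, annotation) "
        "VALUES (?, ?, ?)",
        (image_path, json.dumps(defect_info),
         json.dumps(annotation) if annotation else None))
    conn.commit()
```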
Fig. 4 is a schematic diagram of an application example of the product defect detection method of the embodiment of the present application. As shown in FIG. 4, in practical application, the product defect detection method can be implemented by using several main modules, such as an image acquisition system, a console, a correction module, a detection module, a training engine, a control module, a database, a service related system and the like.
The image acquisition system uses image acquisition devices on the production line to capture all-round images of the component products to be detected.
The console converts the images acquired by the image acquisition system into detection requests (queries), performs load balancing and scheduling in real time according to the deployment of the online prediction models, and sends each detection request to the most suitable server hosting a prediction model.
The server is provided with the correction module and the detection module, both trained by the training engine. After performing preset image preprocessing on the received detection request, the server uses the correction module to correct the acquired image so that its field of view is consistent with the template image. The correction module outputs the corrected image to the detection module, which performs the target detection calculation, gives information such as the positions and confidences of defects, and returns the result to the control module.
The control module is designed in combination with the service scenario: according to the prediction result given by the model, it determines a processing mode that meets the requirements of the production environment, such as raising an alarm or storing a log, and outputs response information. The control module also stores the prediction result and the corresponding processing mode in the database.
The service-related system carries out the corresponding service operation according to the response information output by the control module, for example operating the robot arm to remove from the production line a part in which the detection module has detected a defect.
The database is used for storing the prediction result of the product image, the corresponding template image and the processing mode generated by the control module. After the system is operated for a period of time, the accuracy of defect detection and positioning can be manually checked, and then the database is updated.
The training engine trains the deep learning models in the correction module and the detection module, and the finally output models are deployed into the production environment. The training engine may use the data in the database as training data to retrain the target detection model and improve defect detection accuracy. Each newly trained model can gradually replace the old model running online through a small-traffic rollout, so that the model expands and generalizes as the business evolves dynamically.
According to the product defect detection method provided by the embodiment of the application, the characteristic value of the first channel of the input image is the color characteristic difference value between the image to be detected of the target product and the template image. Therefore, when defect information of the target product is obtained from the input image and the target detection model, deep features can be extracted from the difference between the image to be detected and the template image; by focusing the model's attention on this difference, the defect detection effect for products with small size and weak texture characteristics in industrial scenes is improved.
Fig. 5 is a schematic diagram of a product defect detecting apparatus according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
The image acquisition module 510 is configured to acquire an image to be detected of a target product;
The difference determining module 520 is configured to determine a color feature difference between the image to be detected and the template image according to the color feature value of the image to be detected and the color feature value of the pre-stored template image;
a first channel processing module 530, configured to take the color feature difference value as a feature value of the first channel, to obtain an input image including the first channel;
The defect detection module 540 is configured to obtain defect information of the target product according to the input image and the target detection model.
Illustratively, as shown in FIG. 6, the image acquisition module 510 includes:
an acquisition unit 511 for acquiring an original image of a target product;
a first determining unit 512, configured to determine position information of an identification point of the target product in the original image;
And a correction unit 513 for correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image, so as to obtain an image to be detected, which is aligned with the template image.
Illustratively, the orthotic unit 513 includes:
a determining subunit, configured to determine an affine transformation matrix according to the position information of the identification point in the original image and the position information of the identification point in the template image;
And the alignment subunit is used for correcting the original image according to the affine transformation matrix to obtain an image to be detected which is aligned with the template image.
Illustratively, the first determining unit is configured to input the original image to the high-resolution network HRNet model, and obtain location information of the identification point output by the HRNet model in the original image.
Illustratively, as shown in FIG. 6, the defect detection module 540 includes:
an input unit 541, configured to input an input image to the target detection model, to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
And a second determining unit 542, configured to determine a defect type corresponding to the defect position according to the correspondence between the mask information and the defect type.
Illustratively, as shown in fig. 6, the input image further includes a second channel and/or a third channel;
The apparatus further comprises:
A second channel processing module 550, configured to take the color feature value of the image to be detected as a feature value of the second channel, and/or,
And a third channel processing module 560, configured to take the color feature value of the template image as a feature value of the third channel.
Illustratively, as shown in FIG. 6, the apparatus further comprises:
a third determining unit 570, configured to determine a processing manner of the target product according to the defect information of the target product.
Illustratively, as shown in FIG. 6, the apparatus further comprises:
A storage module 580, configured to store defect information of the target product and labeling information corresponding to the defect information in a training database;
And the updating module 590 is used for calling the defect information and the labeling information corresponding to the defect information from the training database and updating the target detection model.
The device provided by the embodiment of the application can realize the method provided by the embodiment of the application and has corresponding beneficial effects.
According to embodiments of the present application, the present application also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 is a block diagram of an electronic device for the product defect detection method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 7, the electronic device includes: one or more processors 701, memory 702, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 701 is illustrated in fig. 7.
Memory 702 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the product defect detection method provided by the application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the product defect detection method provided by the present application.
The memory 702 is used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the image acquisition module 510, the difference determination module 520, the first channel processing module 530, and the defect detection module 540 shown in fig. 5) corresponding to the product defect detection method according to the embodiment of the present application. The processor 701 executes various functional applications of the server and data processing, i.e., implements the product defect detection method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 702.
The memory 702 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device for product defect detection, and the like. In addition, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 optionally includes memory located remotely from the processor 701, which may be connected to the electronic device for product defect detection via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the product defect detection method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or otherwise, in fig. 7 by way of example.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for product defect detection, such as a touch screen, keypad, mouse, trackpad, touchpad, pointer stick, one or more mouse buttons, track ball, joystick, and like input devices. The output device 704 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibration motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a backend component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such backend, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak business expandability found in traditional physical hosts and virtual private server (VPS) services.
According to the technical scheme of the application, the characteristic value of the first channel of the input image is the color characteristic difference value between the image to be detected of the target product and the template image. Therefore, when defect information of the target product is obtained from the input image and the target detection model, deep features can be extracted from the difference between the image to be detected and the template image. By focusing the model's attention on this difference, the defect detection effect for products with small size and weak texture characteristics in industrial scenes is improved.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (15)

1. A method of product defect detection comprising:
Acquiring an image to be detected of a target product;
Determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
Taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel; wherein the input image further comprises a second channel and/or a third channel; the method further comprises the steps of: taking the color characteristic value of the image to be detected as the characteristic value of the second channel and/or taking the color characteristic value of the template image as the characteristic value of a third channel;
Obtaining defect information of the target product according to the input image and the target detection model; the target detection model performs feature extraction on each channel respectively, fuses the extracted feature information of each channel, and outputs defect information of a target product on the image to be detected;
The obtaining defect information of the target product according to the input image and the target detection model comprises the following steps:
inputting the input image into the target detection model to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
Determining a defect type corresponding to the defect position according to the corresponding relation between the mask information and the defect type;
The target detection model is a Mask R-CNN model and is configured to obtain a corresponding feature map by using the convolution operations of a classification model, then calculate, by using a region proposal network, whether a region of interest of the input image contains a defect; if so, perform feature extraction by using a convolutional neural network and then predict a defect boundary, a bounding box and mask information of the target product; if no defect is contained, the related calculation is not performed.
2. The method of claim 1, wherein the acquiring the image to be detected of the target product comprises:
acquiring an original image of the target product;
determining the position information of the identification point of the target product in the original image;
and correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected, which is aligned with the template image.
3. The method according to claim 2, wherein the correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected aligned with the template image includes:
determining an affine transformation matrix according to the position information of the identification points in the original image and the position information of the identification points in the template image;
And correcting the original image according to the affine transformation matrix to obtain the image to be detected, which is aligned with the template image.
4. The method of claim 2, wherein the determining location information of the identification point of the target product in the original image comprises:
and inputting the original image into a high-resolution network HRNet model to obtain the position information of the identification point in the original image, which is output by the HRNet model.
5. The method of any one of claims 1 to 4, further comprising:
and determining a processing mode of the target product according to the defect information of the target product.
6. The method of any one of claims 1 to 4, further comprising:
Storing the defect information of the target product and the marking information corresponding to the defect information in a training database;
And calling the defect information and the labeling information corresponding to the defect information from the training database, and updating the target detection model.
7. A product defect detection apparatus comprising:
The image acquisition module is used for acquiring an image to be detected of the target product;
The difference value determining module is used for determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
The first channel processing module is used for taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel; wherein the input image further comprises a second channel and/or a third channel; the apparatus further comprises: the second channel processing module is used for taking the color characteristic value of the image to be detected as the characteristic value of the second channel, and/or the third channel processing module is used for taking the color characteristic value of the template image as the characteristic value of the third channel;
the defect detection module is used for obtaining defect information of the target product according to the input image and the target detection model; the target detection model performs feature extraction on each channel respectively, fuses the extracted feature information of each channel, and outputs defect information of a target product on the image to be detected;
wherein the defect detection module comprises:
the input unit is used for inputting the input image into the target detection model to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
The second determining unit is used for determining the defect type corresponding to the defect position according to the corresponding relation between the mask information and the defect type;
The target detection model is a Mask R-CNN model and is configured to obtain a corresponding feature map by using the convolution operations of a classification model, then calculate, by using a region proposal network, whether a region of interest of the input image contains a defect; if so, perform feature extraction by using a convolutional neural network and then predict a defect boundary, a bounding box and mask information of the target product; if no defect is contained, the related calculation is not performed.
8. The apparatus of claim 7, wherein the image acquisition module comprises:
an acquisition unit, configured to acquire an original image of the target product;
A first determining unit, configured to determine position information of an identification point of the target product in the original image;
and the correction unit is used for correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected, which is aligned with the template image.
9. The apparatus of claim 8, wherein the corrective unit comprises:
a determining subunit, configured to determine an affine transformation matrix according to the position information of the identification point in the original image and the position information of the identification point in the template image;
And the alignment subunit is used for correcting the original image according to the affine transformation matrix to obtain the image to be detected, which is aligned with the template image.
10. The apparatus according to claim 8, wherein the first determining unit is configured to input the original image into a high resolution network HRNet model, and obtain location information of the identified point output by the HRNet model in the original image.
11. The apparatus of any of claims 7 to 10, further comprising:
And the third determining unit is used for determining the processing mode of the target product according to the defect information of the target product.
12. The apparatus of any of claims 7 to 10, further comprising:
a storage module, used for storing the defect information of the target product and the annotation information corresponding to the defect information in a training database;
and an updating module, used for retrieving the defect information and the corresponding annotation information from the training database and updating the target detection model with them.
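Claim 12 stores detected defects together with their annotation information and later replays them to update the detection model. A hypothetical sketch is shown below; the SQLite table layout and the `fine_tune` hook are assumptions and not part of the patent.

```python
import json
import sqlite3

def store_defect(db: sqlite3.Connection, product_id: str,
                 defect_info: dict, annotation: dict) -> None:
    """Persist one detection result and its annotation in the training database."""
    db.execute("CREATE TABLE IF NOT EXISTS training_samples "
               "(product_id TEXT, defect_info TEXT, annotation TEXT)")
    db.execute("INSERT INTO training_samples VALUES (?, ?, ?)",
               (product_id, json.dumps(defect_info), json.dumps(annotation)))
    db.commit()

def update_detection_model(db: sqlite3.Connection, fine_tune) -> None:
    """Fetch the stored samples and hand them to a user-supplied fine-tuning routine."""
    rows = db.execute("SELECT defect_info, annotation FROM training_samples").fetchall()
    samples = [(json.loads(d), json.loads(a)) for d, a in rows]
    fine_tune(samples)   # e.g. continue training the Mask R-CNN detector on the new samples
```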
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
CN202010849926.3A 2020-08-21 2020-08-21 Product defect detection method, device, electronic equipment and storage medium Active CN111986178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010849926.3A CN111986178B (en) 2020-08-21 2020-08-21 Product defect detection method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010849926.3A CN111986178B (en) 2020-08-21 2020-08-21 Product defect detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111986178A CN111986178A (en) 2020-11-24
CN111986178B true CN111986178B (en) 2024-06-18

Family

ID=73442796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010849926.3A Active CN111986178B (en) 2020-08-21 2020-08-21 Product defect detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111986178B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446865A (en) * 2020-11-25 2021-03-05 创新奇智(广州)科技有限公司 Flaw identification method, flaw identification device, flaw identification equipment and storage medium
CN112365491A (en) * 2020-11-27 2021-02-12 上海市计算技术研究所 Method for detecting welding seam of container, electronic equipment and storage medium
CN112598627A (en) * 2020-12-10 2021-04-02 广东省大湾区集成电路与***应用研究院 Method, system, electronic device and medium for detecting image defects
CN112613498B (en) * 2020-12-16 2024-07-02 浙江大华技术股份有限公司 Pointer identification method and device, electronic equipment and storage medium
CN112669384A (en) * 2020-12-31 2021-04-16 苏州江奥光电科技有限公司 Three-dimensional positioning method, device and system combining industrial camera and depth camera
CN112884743B (en) * 2021-02-22 2024-03-05 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN113252678A (en) * 2021-03-24 2021-08-13 上海万物新生环保科技集团有限公司 Appearance quality inspection method and equipment for mobile terminal
CN113240642A (en) * 2021-05-13 2021-08-10 创新奇智(北京)科技有限公司 Image defect detection method and device, electronic equipment and storage medium
CN113344094A (en) * 2021-06-21 2021-09-03 梅卡曼德(北京)机器人科技有限公司 Image mask generation method and device, electronic equipment and storage medium
CN113609897A (en) * 2021-06-23 2021-11-05 阿里巴巴新加坡控股有限公司 Defect detection method and defect detection system
CN113591569A (en) * 2021-06-28 2021-11-02 北京百度网讯科技有限公司 Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN113689397A (en) * 2021-08-23 2021-11-23 湖南视比特机器人有限公司 Workpiece circular hole feature detection method and workpiece circular hole feature detection device
CN113870225B (en) * 2021-09-28 2022-07-19 广州市华颉电子科技有限公司 Method for detecting content and pasting quality of artificial intelligent label of automobile domain controller
CN113933294B (en) * 2021-11-08 2023-07-18 中国联合网络通信集团有限公司 Concentration detection method and device
CN114354623A (en) * 2021-12-30 2022-04-15 苏州凌云视界智能设备有限责任公司 Weak mark extraction algorithm, device, equipment and medium
CN114419035B (en) * 2022-03-25 2022-06-17 北京百度网讯科技有限公司 Product identification method, model training device and electronic equipment
CN114612469B (en) * 2022-05-09 2022-08-12 武汉中导光电设备有限公司 Product defect detection method, device and equipment and readable storage medium
CN114782445B (en) * 2022-06-22 2022-10-11 深圳思谋信息科技有限公司 Object defect detection method and device, computer equipment and storage medium
CN114998097A (en) * 2022-07-21 2022-09-02 深圳思谋信息科技有限公司 Image alignment method, device, computer equipment and storage medium
CN117011214A (en) * 2022-08-30 2023-11-07 腾讯科技(深圳)有限公司 Object detection method, device, equipment and storage medium
CN115937629B (en) * 2022-12-02 2023-08-29 北京小米移动软件有限公司 Template image updating method, updating device, readable storage medium and chip
WO2024138462A1 (en) * 2022-12-28 2024-07-04 深圳华大生命科学研究院 Chip quality inspection method and apparatus, electronic device, and storage medium
CN115690101A (en) * 2022-12-29 2023-02-03 摩尔线程智能科技(北京)有限责任公司 Defect detection method, defect detection apparatus, electronic device, storage medium, and program product
CN116245876B (en) * 2022-12-29 2024-06-11 摩尔线程智能科技(北京)有限责任公司 Defect detection method, device, electronic apparatus, storage medium, and program product
CN115661161B (en) * 2022-12-29 2023-06-02 成都数联云算科技有限公司 Defect detection method, device, storage medium, apparatus and program product for parts
CN116046790B (en) * 2023-01-31 2023-10-27 北京百度网讯科技有限公司 Defect detection method, device, system, electronic equipment and storage medium
CN116883416B (en) * 2023-09-08 2023-11-24 腾讯科技(深圳)有限公司 Method, device, equipment and medium for detecting defects of industrial products

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871895A (en) * 2019-02-22 2019-06-11 北京百度网讯科技有限公司 The defect inspection method and device of circuit board

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11010888B2 (en) * 2018-10-29 2021-05-18 International Business Machines Corporation Precision defect detection based on image difference with respect to templates
CN109916921A (en) * 2019-03-29 2019-06-21 北京百度网讯科技有限公司 Circuit board defect processing method, device and equipment
CN110763700A (en) * 2019-10-22 2020-02-07 深选智能科技(南京)有限公司 Method and equipment for detecting defects of semiconductor component
CN111369545B (en) * 2020-03-10 2023-04-25 创新奇智(重庆)科技有限公司 Edge defect detection method, device, model, equipment and readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871895A (en) * 2019-02-22 2019-06-11 北京百度网讯科技有限公司 The defect inspection method and device of circuit board

Also Published As

Publication number Publication date
CN111986178A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111986178B (en) Product defect detection method, device, electronic equipment and storage medium
CN111833306B (en) Defect detection method and model training method for defect detection
CN111693534B (en) Surface defect detection method, model training method, device, equipment and medium
JP7051267B2 (en) Image detection methods, equipment, electronic equipment, storage media, and programs
CN112149636B (en) Method, device, electronic equipment and storage medium for detecting target object
CN111598164B (en) Method, device, electronic equipment and storage medium for identifying attribute of target object
CN112330730B (en) Image processing method, device, equipment and storage medium
CN112597837B (en) Image detection method, apparatus, device, storage medium, and computer program product
CN111833303A (en) Product detection method and device, electronic equipment and storage medium
CN110264444B (en) Damage detection method and device based on weak segmentation
CN112949767B (en) Sample image increment, image detection model training and image detection method
CN113537374B (en) Method for generating countermeasure sample
CN113408662B (en) Image recognition and training method and device for image recognition model
US20210374977A1 (en) Method for indoor localization and electronic device
CN115908988B (en) Defect detection model generation method, device, equipment and storage medium
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN113643260A (en) Method, apparatus, device, medium and product for detecting image quality
CN112749701B (en) License plate offset classification model generation method and license plate offset classification method
CN111191619A (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN114120086A (en) Pavement disease recognition method, image processing model training method, device and electronic equipment
KR20210032374A (en) System and method for finding and classifying lines in an image with a vision system
CN111079059A (en) Page checking method, device, equipment and computer readable storage medium
CN111862196A (en) Method, apparatus and computer-readable storage medium for detecting through-hole of flat object
CN113947771B (en) Image recognition method, apparatus, device, storage medium, and program product
CN116091416A (en) Method and device for training assembly defect detection and change detection models of printed circuit board

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant