CN110689539A - Workpiece surface defect detection method based on deep learning - Google Patents

Workpiece surface defect detection method based on deep learning

Info

Publication number
CN110689539A
Authority
CN
China
Prior art keywords
workpiece
deep learning
convolutional
surface defect
defect detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910993517.8A
Other languages
Chinese (zh)
Other versions
CN110689539B (en)
Inventor
王健
陈原
刘席发
高博文
吕琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN201910993517.8A
Publication of CN110689539A
Application granted
Publication of CN110689539B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a workpiece surface defect detection method based on deep learning. A deep learning technique is used to construct a workpiece surface defect detection system that overcomes the high labor cost, low efficiency and poor adaptability of traditional methods, and can quickly identify and feed back workpiece surface defects in a production environment while ensuring accuracy and efficiency. The system captures images of the workpiece surface through an image acquisition device; the capture terminal preprocesses the images and uploads them to a processing computer. The processing computer calls a predictor based on a deep neural network model to identify the images and output a prediction vector. Finally, the processing center issues the prediction vector to a display terminal, which visually displays the surface defect state of the workpiece.

Description

Workpiece surface defect detection method based on deep learning
Technical Field
The invention relates to the technical field of machine vision and detection, in particular to a workpiece surface defect detection method based on deep learning.
Background
Quality control is an important link in industrial upgrading. Taking workpiece defect detection as an example, many manufacturers still rely mainly on manual inspection, which is not only inefficient but also raises labor costs. With the trend toward manufacturing informatization, replacing human vision with machine vision in product quality control has become a development direction of the manufacturing industry, and some manufacturers are already trying traditional machine vision schemes. The technical schemes currently applied are mainly foreground detection algorithms based on background modeling and learning algorithms based on support vector machines. The efficiency of a foreground detection algorithm based on background modeling depends on the scale of the captured image: the computation grows polynomially with the input size, so when the resolution of the captured image increases, system efficiency often drops sharply. A learning algorithm based on a support vector machine improves system efficiency by comparison, but because the space consumption of a support vector machine is dominated by storing the training samples and the kernel matrix, a large amount of storage space and memory is consumed when the matrix order (related to the number of input samples) is large. In addition, support vector machines handle multi-class problems poorly.
Deep neural networks (deep learning) have entered a period of rapid growth in recent years and achieved great commercial success. They are widely applied in intelligent industry and bring considerable economic benefit. The convolutional neural network (CNN) is a typical deep neural network; it extracts complex features better than conventional algorithms and is widely used in machine vision, image processing and related fields. Compared with traditional machine vision algorithms, a properly tuned CNN offers better adaptability and accuracy and greatly improves system efficiency. Compared with a learning algorithm based on a support vector machine, it has lower computation and storage costs and handles multi-class problems naturally. It is therefore necessary to develop an efficient and accurate workpiece defect detection technology based on deep learning and apply it in the quality control link.
Disclosure of Invention
The purpose of the invention is as follows: in order to solve the problems in the background art, the invention constructs a workpiece surface defect detection system using deep learning, addressing the high labor cost, low system efficiency and poor adaptability of traditional methods; the system can quickly identify and feed back workpiece surface defects in a production environment while ensuring accuracy and efficiency. The technical scheme is as follows: in order to achieve the above purpose, the invention adopts the following technical scheme:
a workpiece surface defect detection method based on deep learning comprises the following specific steps:
step 1, constructing an image acquisition system based on multi-view vision using multiple groups of cameras and illumination sources, and capturing images of the workpiece at different angles;
step 2, constructing a distributed image processing system; a pipeline listening mechanism is adopted: the processing computer maintains a specific message pipeline and the capture terminals listen on it; at each acquisition, the capture terminal preprocesses the collected images in batch and uploads them to a database; through a persistence mechanism, the processing computer stores the preprocessed pictures locally as input to the classifier;
step 3, collecting training samples; placing prepared and marked workpieces containing some defects into the acquisition system for training data acquisition; splitting the data into training and validation sets at an 8:2 ratio and setting labels;
step 4, constructing a deep learning model; building a network model comprising convolutional layers and fully connected layers based on a convolutional neural network, with pooling, regularization and normalization modules inserted between layers to optimize feature extraction and increase nonlinearity;
step 5, constructing a predictor program framework; building the detection system framework with the TensorFlow API; organizing the program logic around a factory pattern so that the system software is easy to upgrade, extend and optimize;
step 6, placing the workpiece to be detected in the image acquisition system and running the predictor program; during online operation, the predictor program interface takes the received and preprocessed workpiece capture images as input, then calls the trained static model to output a prediction vector;
step 7, the predictor distributes the prediction vector message to the display terminal through a preset channel, and the display terminal automatically identifies the position of the workpiece defect according to the prediction vector.
Further, the batch preprocessing of the collected images in step 2 includes converting the color space of the captured images: the capture terminal converts the input image from the RGB color space to the YUV color space.
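A minimal sketch of this color-space conversion, assuming the capture terminal uses OpenCV (the library is not named here; the function below is illustrative only):

```python
import cv2

def rgb_to_yuv(image_rgb):
    """Convert a captured RGB frame to the YUV color space before upload.
    Note: frames read with cv2.imread/VideoCapture are BGR by default,
    in which case cv2.COLOR_BGR2YUV would be used instead."""
    return cv2.cvtColor(image_rgb, cv2.COLOR_RGB2YUV)
```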
Further, the network model construction in step 4 is specifically as follows: store the data set in the specified directory of the training script, run the training script, set the initial learning rate to 0.001, and use the Adam optimizer; the layers of the training model, in order, are:
(1) convolutional layer 1: 64 convolution kernels of size 11 × 11, stride 4, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(2) convolutional layer 2: 256 convolution kernels of size 5 × 5, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(3) convolutional layer 3: 256 convolution kernels of size 3 × 3, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(4) fully connected layer 1, mapped to 4096 dimensions;
(5) fully connected layer 2, mapped to N dimensions, where N is the number of labels; the loss function is softmax;
an optimized dynamic model is finally obtained by parameter tuning, and running the conversion script yields the static model in .pb format.
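A hedged sketch of such a conversion script using the TensorFlow 1.x-style graph-freezing API; the checkpoint path and the output node name `softmax_out` are placeholders, since the patent does not publish the script itself:

```python
import tensorflow as tf

tf1 = tf.compat.v1          # TensorFlow 1.x-style API (also available under TF 2)
tf1.disable_eager_execution()

def freeze_to_pb(checkpoint_path, pb_path, output_node="softmax_out"):
    """Restore the trained (dynamic) model and write a frozen static graph (.pb)."""
    with tf1.Session() as sess:
        saver = tf1.train.import_meta_graph(checkpoint_path + ".meta")
        saver.restore(sess, checkpoint_path)
        frozen = tf1.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, [output_node])   # bake variables into constants
        with tf1.gfile.GFile(pb_path, "wb") as f:
            f.write(frozen.SerializeToString())
```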
Further, the plurality of groups of cameras in step 1 are preferably digital cameras.
Further, the illumination source in step 1 is preferably an LED light source.
Advantageous effects: aiming at the high labor consumption and low efficiency of traditional workpiece surface defect detection, the invention provides a workpiece surface defect detection method based on deep learning. The image acquisition system based on multi-view vision ensures inference accuracy, and the distributed image processing system greatly improves system efficiency and robustness; expanding the training set improves the generalization ability of the predictor network model and ensures system accuracy. Labor cost is therefore greatly reduced and system efficiency is improved while accuracy is guaranteed.
Drawings
FIG. 1 is a main flow chart of a workpiece surface defect detection method based on deep learning according to the present invention;
FIG. 2 is a schematic view of the spatial orientation of an image acquisition system based on multi-view vision;
FIG. 3 is a diagram of a predictor software flow;
FIG. 4 is a schematic diagram of the capture terminal cluster connection;
FIG. 5 is a diagram of a predictor network model;
FIG. 6 is a diagram of a system for detecting surface defects of a workpiece based on deep learning according to the present invention;
FIG. 7 is a diagram of the steps of a deep learning-based system for detecting surface defects of a workpiece.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
One specific example is provided below:
the workpiece surface defect detection method based on deep learning is divided into the following three modules: the system comprises an image acquisition and preprocessing module, a predictor module and a communication and data persistence module.
The image acquisition and preprocessing module:
(1) an image acquisition module:
the image acquisition system is designed into a cubic container and comprises 6 surfaces which are up, down, left, right, front and back in total when viewed from the space direction. However, since the bottom surface is used to hold the workpiece, only 5 surfaces actually need to be inspected. The surface to be detected can be divided into four angles: side view (i.e., front, back, left, right), side view (front left, front right, left back, right back), top view (top front, top back, top left, top right). Because the side view and the side view angle need to be divided into upper and lower capture points in the spatial layout, a total of (4+4) × 2+4+1 is 21 cameras is required. Adopt the raspberry group as the terminal in this embodiment to carry on the camera.
As shown in FIG. 2, the numbers 1, 4, 5 denote visible surfaces and the numbers 2, 3, 6 denote currently invisible surfaces. The numbering is: 1: front; 2: back; 3: left; 4: right; 5: top; 6: bottom. Each edge is uniquely determined by the two faces that intersect to form it and is therefore labeled with two digits: front-left edge 1-3; front-right edge 1-4; left-rear edge 3-2; right-rear edge 4-2; upper-front edge 5-1; upper-rear edge 5-2; upper-left edge 5-3; upper-right edge 5-4.
(2) Light source type selection and illumination control
The light source is an important prerequisite for image acquisition: the acquired image is the input of the defect detection system, and its quality directly determines, to a large extent, the accuracy of the prediction result. An appropriate light source can not only amplify defect details but also shield interfering features, thereby reducing noise in the detected sample.
Light sources can be divided into natural and artificial light; since the image acquisition system is an enclosed container, an artificial light source is chosen. In view of the long lifetime and low power consumption of LEDs, an LED is selected as the illumination source in this embodiment.
(3) Camera model selection
Cameras are mainly classified into analog and digital cameras. An analog camera is simple, but since a computer can only process digital signals, its imaging result must pass through A/D conversion before being input to the computer. A digital camera directly acquires digital signals and can feed them to the computer without a conversion circuit. By comparison, digital cameras are mainly used in short-range, low-interference environments. Considering that the spatial scale of the image acquisition system is small, digital cameras are adopted to simplify the system design.
A predictor module:
(1) predictor hardware design
The predictor mainly comprises a display module and a data-processing host. The processing host runs Linux, and its hardware platform carries a high-performance GPU and CPU.
(2) Predictor software design
As shown in FIG. 3, the predictor software mainly consists of an input module, a prediction module and an output module. The operating mechanism and function of each module are as follows:
Input module: provides a uniform interface for detection-sample input. Each fully preprocessed raw image is uniformly compressed to the model input size (224 × 224) using the CVMat_to_Tensor API built on OpenCV (cv2) and converted into a Tensor as the model input.
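An illustrative sketch of this input interface, with cv2.resize standing in for the CVMat_to_Tensor helper (whose exact signature is not given in the patent):

```python
import cv2
import numpy as np
import tensorflow as tf

def to_model_input(image_block, size=(224, 224)):
    """Compress a preprocessed image block to the model input size and wrap it as a Tensor."""
    resized = cv2.resize(image_block, size, interpolation=cv2.INTER_AREA)
    batch = resized.astype(np.float32)[np.newaxis, ...]   # add a batch dimension
    return tf.convert_to_tensor(batch)
```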
Prediction module: loads the trained and optimized static model (.pb format), sets the input-layer and output-layer node names to match the corresponding node names in the static network model, and calls the corresponding API to run the network.
Output module: after the prediction module runs, a queue of Tensor-type prediction vectors is produced; the largest component of each vector is extracted with argmax as the prediction result. The system uses binary classification, i.e. the prediction vector has two dimensions: [1, 0] indicates that the workpiece surface is defective, and [0, 1] indicates that it is intact. Table 1 lists several important functions related to the predictor:
TABLE 1 (published as an image in the original patent document)
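A minimal sketch of the prediction and output steps with the TensorFlow 1.x-style graph API; the node names `input:0` and `softmax_out:0` are placeholders for the names fixed in the static model:

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1

def load_frozen_graph(pb_path):
    """Load the static .pb model into a fresh graph."""
    graph_def = tf1.GraphDef()
    with tf1.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf1.Graph()
    with graph.as_default():
        tf1.import_graph_def(graph_def, name="")
    return graph

def predict(graph, batch, input_node="input:0", output_node="softmax_out:0"):
    """Run the network and reduce each prediction vector with argmax.
    Index 0 ([1, 0]) means defective; index 1 ([0, 1]) means intact."""
    with tf1.Session(graph=graph) as sess:
        vectors = sess.run(graph.get_tensor_by_name(output_node),
                           feed_dict={graph.get_tensor_by_name(input_node): batch})
    return np.argmax(vectors, axis=1), vectors
```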
A communication and data persistence module:
(1) communication system construction
As shown in FIG. 4, the basic communication scheme used by the method is a bridged network, in which several network ports extended from one terminal are interconnected. One Raspberry Pi serves as the cluster head and connects the remaining four Raspberry Pi units into a cluster structure, and the cluster head then communicates with the switch.
(2) Data persistence
This embodiment uses a Redis database for data persistence. Redis is an in-memory key-value database with high performance, rich data types and support for atomic operations, which helps ensure system efficiency.
The system maintains two pipelines, Channel@1 and Channel@2, whose functions are as follows:
Channel@1 serves as the message channel between the host computer and the image acquisition terminals: the host computer periodically issues an image acquisition command; every acquisition terminal listening on this pipeline calls the capture script to capture workpiece images, which are stored in the database as binary strings after preprocessing. After acquisition completes, persistence is executed and the host computer obtains the image blocks captured from all angles of the current workpiece as the raw input of the predictor.
Channel@2 serves as the message channel between the host computer and the display terminal: when the predictor produces a result, the data-processing host publishes the prediction vector to the display terminal through this channel. After receiving the prediction-vector message, the display terminal calls a parser to analyze the vector and feeds back the prediction result as an image.
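A minimal publish/subscribe sketch of the two pipelines with the redis-py client; the host address and the string form of the prediction-vector message are assumptions:

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379)   # Redis service on the processing computer (placeholder address)

# Host computer: periodically trigger acquisition over Channel@1,
# and publish prediction vectors over Channel@2 once the predictor has run.
r.publish("Channel@1", "capture")
r.publish("Channel@2", "[1, 0]")               # example prediction-vector message

# Capture terminal (the display terminal subscribes to Channel@2 analogously):
sub = r.pubsub()
sub.subscribe("Channel@1")
for message in sub.listen():
    if message["type"] == "message" and message["data"] == b"capture":
        # call the capture-and-preprocess script, then write the image blocks to Redis
        pass
```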
A detailed embodiment of the present invention is given below with reference to FIGS. 1-7.
Step 1: a Raspberry Pi serves as the platform of each capture terminal, with an external camera as the image capture device. A total of 21 digital cameras are arranged for the side-face, side-edge, upper-edge and top viewpoints on the inner surface of the image acquisition container. Linear light sources are evenly distributed on the inner surface of the acquisition container, taking the dimensions of the capture surfaces into account.
Step 2: start the Redis service on the processing computer and set the listening network segment and port number (the default port is 6379). After the Redis service starts, the processing computer runs a command-issuing process in which the two message pipelines Channel@1 and Channel@2 are created. The capture terminals and display terminals are then connected remotely in batch and their listening processes are started: the capture terminals listen on Channel@1 and the display terminals listen on Channel@2. Acquisition of training images then begins. Whenever a capture terminal captures a frame of the original image (size 2952 × 1944), it calls a script to background-fill and segment it. After 800 × 800 blocks are obtained, the image is converted from RGB to YUV and stored in the database as a binary string.
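An illustrative sketch of this capture-terminal preprocessing (padding, 800 × 800 segmentation, RGB-to-YUV conversion, binary storage); the key naming scheme and the PNG encoding are assumptions, since the embodiment only states that blocks are stored as binary strings:

```python
import cv2
import redis

BLOCK = 800  # segmentation block size used in this embodiment

def preprocess_and_store(frame, r, key_prefix="workpiece:cam01"):
    """Pad a captured frame (e.g. 2952 x 1944), cut it into 800 x 800 blocks,
    convert to YUV, and store each block in Redis as a binary string."""
    h, w = frame.shape[:2]
    pad_h = (BLOCK - h % BLOCK) % BLOCK
    pad_w = (BLOCK - w % BLOCK) % BLOCK
    padded = cv2.copyMakeBorder(frame, 0, pad_h, 0, pad_w,
                                cv2.BORDER_CONSTANT, value=0)   # background fill
    yuv = cv2.cvtColor(padded, cv2.COLOR_RGB2YUV)               # RGB -> YUV
    idx = 0
    for y in range(0, yuv.shape[0], BLOCK):
        for x in range(0, yuv.shape[1], BLOCK):
            ok, buf = cv2.imencode(".png", yuv[y:y + BLOCK, x:x + BLOCK])
            if ok:
                r.set(f"{key_prefix}:block{idx}", buf.tobytes())
            idx += 1

# r = redis.Redis(host="127.0.0.1", port=6379)  # database on the processing computer
```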
Step 3: the processing computer executes the persistence operation to obtain the images acquired by the capture terminals. The acquired workpiece image blocks are manually labeled according to the defects they show. After 100 workpieces have been acquired, totaling 5300 image blocks, image enhancement and deformation are applied in batch to expand the training set to about 20,000 samples.
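A sketch of one batch enhancement/deformation pass; the specific augmentations (flip, small rotation, brightness jitter) are representative choices, as the embodiment does not enumerate them:

```python
import random
import cv2
import numpy as np

def augment(block):
    """Return a randomly enhanced/deformed copy of an 800 x 800 image block."""
    out = block.copy()
    if random.random() < 0.5:
        out = cv2.flip(out, 1)                       # horizontal flip
    angle = random.uniform(-10, 10)                  # small rotation
    h, w = out.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out = cv2.warpAffine(out, m, (w, h), borderMode=cv2.BORDER_REFLECT)
    gain = random.uniform(0.8, 1.2)                  # brightness jitter
    out = np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return out
```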
Step 4: store the data set in the specified directory of the training script, run the training script, set the initial learning rate to 0.001, and use the Adam optimizer; the layers of the training model, in order, are:
(1) convolutional layer 1: 64 convolution kernels of size 11 × 11, stride 4, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(2) convolutional layer 2: 256 convolution kernels of size 5 × 5, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(3) convolutional layer 3: 256 convolution kernels of size 3 × 3, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(4) fully connected layer 1, mapped to 4096 dimensions;
(5) fully connected layer 2, mapped to N dimensions, where N is the number of labels; the loss function is softmax;
an optimized dynamic model is finally obtained by parameter tuning, and running the conversion script yields the static model in .pb format, as shown in FIG. 5.
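A hedged Keras sketch of the network described above and shown in FIG. 5; the 224 × 224 × 3 input follows the input module, while the ReLU on fully connected layer 1 and the categorical cross-entropy loss paired with the softmax output are assumptions not stated in the patent:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_labels):
    """Three SAME-padded convolutional layers (64x11x11 stride 4, 256x5x5, 256x3x3),
    each followed by 2x2 max pooling and local response normalization,
    then FC-4096 and an N-way softmax output."""
    model = models.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        layers.Conv2D(64, 11, strides=4, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Lambda(tf.nn.local_response_normalization),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Lambda(tf.nn.local_response_normalization),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Lambda(tf.nn.local_response_normalization),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dense(num_labels, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```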
Step 5: modify the configuration file of the predictor program, specifying the path of the input images and the storage path of the static model. After configuration, run the predictor program and load the static model.
Step 6: after the predictor program successfully loads the static model, it enters prediction mode. The workpiece to be detected is placed in the acquisition container, and an acquisition instruction is issued to the capture terminals from the processing computer. On receiving the acquisition instruction, the capture terminals automatically capture images of the workpiece. The preprocessed image blocks are stored in the database, while the processing computer periodically executes persistence.
Step 7: once the predictor reads the image-block input, it converts the image data into a Tensor with the library function as the network-model input and then runs the network. After the network outputs the prediction vector, the processing computer passes the message to the message-issuing process running in the background, which publishes the prediction vector to the display terminal through Channel@2. After receiving the prediction vector, the display terminal feeds back the defect condition of the inspected workpiece graphically.
The above description covers only preferred embodiments of the present invention. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the invention, and these are also considered to fall within the scope of the invention.

Claims (5)

1. A workpiece surface defect detection method based on deep learning, characterized in that the method comprises the following steps:
step 1, constructing an image acquisition system based on multi-view vision using multiple groups of cameras and illumination sources, and capturing images of the workpiece at different angles;
step 2, constructing a distributed image processing system; a pipeline listening mechanism is adopted: the processing computer maintains a specific message pipeline and the capture terminals listen on it; at each acquisition, the capture terminal preprocesses the collected images in batch and uploads them to a database; through a persistence mechanism, the processing computer stores the preprocessed pictures locally as input to the classifier;
step 3, collecting training samples; placing prepared and marked workpieces containing some defects into the acquisition system for training data acquisition; splitting the data into training and validation sets at an 8:2 ratio and setting labels;
step 4, constructing a deep learning model; building a network model comprising convolutional layers and fully connected layers based on a convolutional neural network, with pooling, regularization and normalization modules inserted between layers to optimize feature extraction and increase nonlinearity;
step 5, constructing a predictor program framework; building the detection system framework with the TensorFlow API; organizing the program logic around a factory pattern so that the system software is easy to upgrade, extend and optimize;
step 6, placing the workpiece to be detected in the image acquisition system and running the predictor program; during online operation, the predictor program interface takes the received and preprocessed workpiece capture images as input, then calls the trained static model to output a prediction vector;
step 7, the predictor distributes the prediction vector message to the display terminal through a preset channel, and the display terminal automatically identifies the position of the workpiece defect according to the prediction vector.
2. The workpiece surface defect detection method based on deep learning of claim 1, wherein the batch preprocessing of the collected images in step 2 includes converting the color space of the captured pictures: the capture terminal converts the input image from the RGB color space to the YUV color space.
3. The workpiece surface defect detection method based on deep learning of claim 1, wherein the network model construction in step 4 is specifically as follows: store the data set in the specified directory of the training script, run the training script, set the initial learning rate to 0.001, and use the Adam optimizer; the layers of the training model, in order, are:
(1) convolutional layer 1: 64 convolution kernels of size 11 × 11, stride 4, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(2) convolutional layer 2: 256 convolution kernels of size 5 × 5, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(3) convolutional layer 3: 256 convolution kernels of size 3 × 3, padding mode SAME, activation function ReLU; followed by 2 × 2 pooling and local response normalization;
(4) fully connected layer 1, mapped to 4096 dimensions;
(5) fully connected layer 2, mapped to N dimensions, where N is the number of labels; the loss function is softmax;
an optimized dynamic model is finally obtained by parameter tuning, and running the conversion script yields the static model in .pb format.
4. The workpiece surface defect detection method based on deep learning of claim 1, wherein: the multiple groups of cameras in the step 1 are preferably digital cameras.
5. The workpiece surface defect detection method based on deep learning of claim 1, wherein: the illumination source in the step 1 is preferably an LED light source.
CN201910993517.8A 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning Active CN110689539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910993517.8A CN110689539B (en) 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910993517.8A CN110689539B (en) 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN110689539A (en) 2020-01-14
CN110689539B CN110689539B (en) 2023-04-07

Family

ID=69113507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910993517.8A Active CN110689539B (en) 2019-11-12 2019-11-12 Workpiece surface defect detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN110689539B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004191112A (en) * 2002-12-10 2004-07-08 Ricoh Co Ltd Defect examining method
CN107392896A (en) * 2017-07-14 2017-11-24 佛山市南海区广工大数控装备协同创新研究院 A kind of Wood Defects Testing method and system based on deep learning
CN109829907A (en) * 2019-01-31 2019-05-31 浙江工业大学 A kind of metal shaft surface defect recognition method based on deep learning

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111830048A (en) * 2020-07-17 2020-10-27 苏州凌创电子***有限公司 Automobile fuel spray nozzle defect detection equipment based on deep learning and detection method thereof
CN111858361B (en) * 2020-07-23 2023-07-21 中国人民解放军国防科技大学 Atomic violation defect detection method based on prediction and parallel verification strategy
CN111858361A (en) * 2020-07-23 2020-10-30 中国人民解放军国防科技大学 Atomic violation defect detection method based on prediction and parallel verification strategies
CN111951234A (en) * 2020-07-27 2020-11-17 上海微亿智造科技有限公司 Model detection method
CN111951234B (en) * 2020-07-27 2021-07-30 上海微亿智造科技有限公司 Model detection method
CN112017172A (en) * 2020-08-31 2020-12-01 佛山科学技术学院 System and method for detecting defects of deep learning product based on raspberry group
CN113486457A (en) * 2021-06-04 2021-10-08 宁波海天金属成型设备有限公司 Die casting defect prediction and diagnosis system
CN115382685A (en) * 2022-08-16 2022-11-25 苏州智涂工业科技有限公司 Control technology of automatic robot spraying production line
CN115496763A (en) * 2022-11-21 2022-12-20 湖南视比特机器人有限公司 Workpiece wrong and neglected loading detection system and method based on multi-view vision
CN117011263A (en) * 2023-08-03 2023-11-07 东方空间技术(山东)有限公司 Defect detection method and device for rocket sublevel recovery section
CN117011263B (en) * 2023-08-03 2024-05-10 东方空间技术(山东)有限公司 Defect detection method and device for rocket sublevel recovery section
CN117250200A (en) * 2023-11-07 2023-12-19 山东恒业金属制品有限公司 Square pipe production quality detection system based on machine vision
CN117250200B (en) * 2023-11-07 2024-02-02 山东恒业金属制品有限公司 Square pipe production quality detection system based on machine vision

Also Published As

Publication number Publication date
CN110689539B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110689539B (en) Workpiece surface defect detection method based on deep learning
CN111179253B (en) Product defect detection method, device and system
CN112241699A (en) Object defect category identification method and device, computer equipment and storage medium
CN111507399A (en) Cloud recognition and model training method, device, terminal and medium based on deep learning
CN111738994B (en) Lightweight PCB defect detection method
CN112116620A (en) Indoor image semantic segmentation and painting display method
CN110929795A (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN111507357A (en) Defect detection semantic segmentation model modeling method, device, medium and equipment
CN111242057A (en) Product sorting system, method, computer device and storage medium
CN113822842A (en) Industrial defect detection method based on multi-task learning
CN114332086B (en) Textile defect detection method and system based on style migration and artificial intelligence
CN113838015B (en) Electrical product appearance defect detection method based on network cooperation
CN114972246A (en) Die-cutting product surface defect detection method based on deep learning
CN114332659A (en) Power transmission line defect inspection method and device based on lightweight model issuing
CN108401106B (en) Shooting parameter optimization method and device, terminal and storage medium
CN116091748B (en) AIGC-based image recognition system and device
CN112183374A (en) Automatic express sorting device and method based on raspberry group and deep learning
CN115565168A (en) Sugarcane disease identification method based on attention system residual error capsule network
CN212846839U (en) Fabric information matching system
CN114138458A (en) Intelligent vision processing system
CN112016515A (en) File cabinet vacancy detection method and device
CN111709620B (en) Mobile portable online detection system for structural parameters of woven fabric
CN112184691A (en) Defect mode analysis method based on poor Map
CN112712124B (en) Multi-module cooperative object recognition system and method based on deep learning
CN110991361A (en) Multi-channel multi-modal background modeling method for high-definition high-speed video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant