CN111291830B - Method for improving glass surface defect detection efficiency and accuracy - Google Patents


Info

Publication number
CN111291830B
CN111291830B (application CN202010144610.4A)
Authority
CN
China
Prior art keywords
accuracy
network
training
faster
rcnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010144610.4A
Other languages
Chinese (zh)
Other versions
CN111291830A (en)
Inventor
尹玲
吕思杰
张斐
黎沛成
刘宜杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan University of Technology
Original Assignee
Dongguan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan University of Technology filed Critical Dongguan University of Technology
Priority to CN202010144610.4A priority Critical patent/CN111291830B/en
Publication of CN111291830A publication Critical patent/CN111291830A/en
Application granted granted Critical
Publication of CN111291830B publication Critical patent/CN111291830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of target detection and identification, and discloses a method for improving the detection efficiency and accuracy of glass surface defects. The method comprises the following steps: extracting defect samples, inputting them into the faster-rcnn, ssd and Yolov3 target recognition networks for training and learning, and saving the learned models; performing image detection and recognition with each trained model to obtain the image detection accuracy of each target recognition network; comparing the image detection accuracies of the faster-rcnn, ssd and Yolov3 target recognition networks, assigning weights in descending order of accuracy, combining the three networks into a combined classifier, and training the combined classifier to obtain a comprehensive accuracy, recorded as accuracy1; combining the trained faster-rcnn, ssd and Yolov3 models by dynamic weights; and collecting a sample image, inputting it into the dynamically weight-combined model, and outputting the positions and categories of the defects on the sample image. The method improves both detection efficiency and accuracy.

Description

Method for improving glass surface defect detection efficiency and accuracy
Technical Field
The invention belongs to the field of target detection and identification, and particularly relates to a method for improving the detection efficiency and accuracy of glass surface defects.
Background
For example, Chinese patent publication No. CN107123111A discloses a deep residual network construction method for mobile phone screen defect detection. The method collects defect and normal pictures, labels them, and trains a custom deep residual network on the training data until convergence and high accuracy are reached; it generates shallower network models by randomly removing each residual module of the deep residual network with a certain probability, repeating the operation to produce several network models of different depths; it scales the mobile phone screen pictures taken by a high-resolution camera at different ratios to form a picture pyramid, divides the pictures of each scale into small blocks with a certain overlap between adjacent blocks, and feeds all the blocks into the network models of different depths as one group; it then takes the feature map output by each network model as a response map of the defect, obtains the defective regions of the screen by threshold segmentation, and finally superimposes the detection results of the network models of different depths to obtain the final result. However, this method is unfavorable to improving detection efficiency.
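The pyramid-and-tiles scheme described in the cited prior art can be sketched as follows. This is a minimal pure-Python illustration; the function name, tile size and overlap values are assumptions for demonstration and are not taken from the patent.

```python
def split_into_tiles(image, tile, overlap):
    """Split a 2-D image (a list of rows) into overlapping square tiles.

    `tile` is the tile edge length; `overlap` is how many pixels
    adjacent tiles share, mirroring the prior art's requirement that
    picture blocks have a certain overlapping area.
    """
    step = tile - overlap
    h, w = len(image), len(image[0])
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            block = [row[x:x + tile] for row in image[y:y + tile]]
            tiles.append(((y, x), block))
    return tiles

# 8x8 dummy image, 4x4 tiles with a 2-pixel overlap -> 3x3 = 9 tiles
img = [[r * 8 + c for c in range(8)] for r in range(8)]
tiles = split_into_tiles(img, tile=4, overlap=2)
```

In the prior-art pipeline this tiling would be repeated for every scale of the picture pyramid before the blocks are fed to the networks of different depths.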
Disclosure of Invention
The invention aims to provide a detection method capable of improving efficiency and accuracy.
In order to solve the problems, the method for improving the efficiency and the accuracy of detecting the defects on the surface of the glass comprises the following steps:
step one: extracting a defect sample, inputting it into the faster-rcnn target recognition network for training and learning, and saving the learned model;
step two: extracting a defect sample, inputting it into the ssd target detection network for training and learning, and saving the learned model;
step three: extracting a defect sample, inputting it into the Yolov3 target detection network for training and learning, and saving the learned model;
step four: performing image detection and recognition with the trained faster-rcnn model to obtain the image detection accuracy of the faster-rcnn target recognition network;
step five: performing image detection and recognition with the trained ssd model to obtain the image detection accuracy of the ssd target detection network;
step six: performing image detection and recognition with the trained Yolov3 model to obtain the image detection accuracy of the Yolov3 target detection network;
step seven: comparing the image detection accuracies of the faster-rcnn target recognition network, the ssd target detection network and the Yolov3 target detection network, and assigning weights in descending order of accuracy, denoted w1, w2 and w3 in turn;
step eight: in the 1st round, combining the faster-rcnn, ssd and Yolov3 networks according to the weights w1:w2:w3 = 1:1:1 to obtain a combined classifier, and training it to obtain the comprehensive accuracy, recorded as accuracy1;
step nine: in the nth round (n ≥ 2), combining the faster-rcnn, ssd and Yolov3 networks according to the weights w1 = n/(n+1), w2 = (2/3)(1 - n/(n+1)), w3 = (1/3)(1 - n/(n+1)) to obtain a combined classifier, and training it to obtain the comprehensive accuracy, recorded as accuracy(n);
step ten: recording the ideal accuracy as p, where p is one of the accuracy(n) values; when |accuracy(n) - p| < ε, accuracy(n) has converged to p; the weights w1, w2 and w3 of that round are recorded as the optimal weight ratio, and the faster-rcnn, ssd and Yolov3 networks are combined according to the optimal weight ratio to obtain the optimal network;
step eleven: combining the trained faster-rcnn model, the trained ssd model and the trained Yolov3 model by the dynamic weights;
step twelve: collecting a sample image, inputting it into the dynamically weight-combined model, and outputting the positions and categories of the defects on the sample image.
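The weight schedule used in steps eight to ten above can be sketched as follows. This is a minimal illustration of the published formulas only; the function names and the ε default are assumptions, and the patent does not specify how accuracy(n) is measured.

```python
def weights(n):
    """Dynamic weights for round n (n >= 1), per steps eight and nine:
    round 1 splits the weight equally; from round 2 the most accurate
    network gets w1 = n/(n+1) and the remainder is split 2:1."""
    if n == 1:
        return (1 / 3, 1 / 3, 1 / 3)
    w1 = n / (n + 1)
    rest = 1 - w1
    return (w1, (2 / 3) * rest, (1 / 3) * rest)

def converged(acc_n, p, eps=1e-3):
    """Step ten's stopping test: |accuracy(n) - p| < epsilon."""
    return abs(acc_n - p) < eps

# round 2 reproduces the w1 = 2/3, w2 = 2/9, w3 = 1/9 split of step nine
w1, w2, w3 = weights(2)
```

Note that the three weights sum to 1 in every round, so the combined classifier's output stays a convex combination of the three networks' outputs.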
Further, the surface defects of the defect samples are scratches, chipped edges, air bubbles or stains.
Further, in step twelve, the sample image is denoised by the residual method.
Further, in step twelve, the sample image is denoised by median filtering.
The recognition and classification algorithm combines three of the most commonly used deep learning target recognition algorithms with dynamic weights, drawing on the advantages of all three to improve detection precision and speed.
Drawings
Fig. 1 shows the background image src1 used in the residual method.
Fig. 2 shows the image to be detected, src2, used in the residual method.
Fig. 3 shows the residual-method denoised image dst.
Fig. 4 is a convergence plot of the comprehensive accuracy.
Detailed Description
The method and the algorithm are as follows:
As shown in figs. 1-3, the surface defect detection procedure is as follows:
1. Images are acquired using a camera, video camera, or similar device.
2. Image denoising process
(1) The residual method reduces external dust interference: a background image is collected first, then a sample image; the two images are aligned, and a pixel-by-pixel difference is computed to suppress the interference of external dust.
Image denoising with reference to the residual method:
Residual-method denoising principle: under the same environment, the background image src1 and the image to be detected src2 are acquired one after the other. Because src1 and src2 are captured under identical conditions, the background image src1 and the background of src2 carry the same noise, including noise caused by dust. Taking the pixel-by-pixel difference of src2 and src1 (in OpenCV, subtract(src2, src1, dst, Mat(), -1)) removes the noise shared by src2 and src1. The resulting residual image dst is the image with the dust noise points removed, providing a higher-contrast source image for later target recognition.
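The residual principle can be sketched in a few lines of pure Python. This is an illustrative stand-in for OpenCV's saturated subtract on 8-bit images, not the patent's implementation; the function name and the toy pixel values are assumptions.

```python
def residual_denoise(src2, src1):
    """Pixel-wise saturated difference dst = src2 - src1.

    Noise shared by the background image src1 and the inspected image
    src2 (e.g. a dust speck present in both) cancels out, and negative
    results clamp to 0 as in 8-bit saturation arithmetic.
    """
    return [[max(p2 - p1, 0) for p2, p1 in zip(row2, row1)]
            for row2, row1 in zip(src2, src1)]

background = [[10, 10, 200],   # src1: dust speck at column 2
              [10, 10,  10]]
sample     = [[10, 90, 200],   # src2: defect at column 1, same dust speck
              [10, 10,  10]]
dst = residual_denoise(sample, background)
# the dust speck cancels; only the defect pixel survives in dst
```

In the real pipeline the two frames must be captured under identical illumination and registered before subtraction, exactly as the text requires.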
Because the external light source may illuminate unevenly, the camera may introduce impulse noise, salt-and-pepper noise and the like when acquiring images. Median filtering can effectively remove such image noise while preserving edge details. (Several denoising algorithms commonly used in the image-processing book OpenCV 3 were compared experimentally, and the median filtering algorithm was finally selected.)
Median filtering: median filtering is a typical nonlinear filtering technique, and the basic principle is to replace the value of a point in a digital image or digital sequence with the median of the values of the points in a field of the point, so that the surrounding pixel values are close to the true values, thereby eliminating isolated noise points. The method is therefore particularly useful for removing impulse noise, salt and pepper noise, since it does not depend on values in the field that differ significantly from typical values. To take a simple example: the one-dimensional sequence 0,3,4,0,7 is sorted into 0,0,3,4,7 by median filtering, and the median is 3.
Target identification and classification:
With reference to the faster-rcnn, ssd and yolov3 deep-learning literature, combining the three networks with dynamic weights draws on the advantages of all three to improve detection accuracy and speed.
faster-rcnn target recognition network: defect samples (scratches, chipped edges, air bubbles, stains and the like) are extracted, 2000 samples of each defect type are input into the network for training and learning, and the learned model is saved.
ssd target detection network: the ssd network is trained using the same samples as above, and the learned model is saved.
Yolov3 target detection network: the yolov3 network is trained using the same samples, and the learned model is saved.
The three target recognition models are combined by dynamic weights; the acquired image is input for testing, and the positions and categories of the defects on the sample to be detected are output accurately, completing the recognition and classification of glass surface defect detection.
Dynamic weight combination: the recognition classification algorithm selects the three most commonly used deep learning target recognition algorithms to perform dynamic weight combination, and combines the advantages of the three algorithms to improve the detection precision and speed.
Basic idea: the three target recognition algorithms are trained separately and tested to obtain the recognition accuracy of each, and the three accuracies are then compared (suppose the ranking is faster-rcnn > ssd > yolov3). Initially, equal weights w1, w2 and w3 (one third each) are assigned to faster-rcnn, ssd and yolov3, and the resulting combined classifier is trained to obtain a comprehensive accuracy, accuracy1. Next, the weight of the most accurate algorithm is raised to two thirds (w1 = 2/3), with the remaining weight split between the least accurate algorithm (yolov3) and ssd in the ratio w3:w2 = 1:2 (i.e., w2 = 2/9, w3 = 1/9); the three algorithms combined with the new weights are trained to obtain a new comprehensive accuracy, accuracy2, where accuracy2 > accuracy1. Then w1 is increased to 3/4, with w2 and w3 still splitting the remainder 2:1, and the combined classifier yields a comprehensive accuracy accuracy3. By analogy, w1 is increased in turn to 4/5, 5/6, 6/7, 7/8, 8/9, ..., n/(n+1), until the comprehensive accuracy accuracy(n) converges.
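One way the weighted combination can act on the three detectors' outputs is score-level fusion, sketched below. The patent does not specify the fusion mechanism, so the function name, the per-class confidence inputs and the winning-class rule are all illustrative assumptions.

```python
def combine_scores(scores_frcnn, scores_ssd, scores_yolo, w):
    """Weighted vote over per-class confidence scores from the three
    detectors for one candidate region; the class with the highest
    combined score is the ensemble's prediction."""
    w1, w2, w3 = w
    classes = set(scores_frcnn) | set(scores_ssd) | set(scores_yolo)
    combined = {c: w1 * scores_frcnn.get(c, 0.0)
                 + w2 * scores_ssd.get(c, 0.0)
                 + w3 * scores_yolo.get(c, 0.0)
                for c in classes}
    return max(combined, key=combined.get), combined

# hypothetical per-class confidences for one region, round-2 weights
label, _ = combine_scores({"scratch": 0.9, "bubble": 0.1},
                          {"scratch": 0.6, "bubble": 0.4},
                          {"scratch": 0.2, "bubble": 0.8},
                          w=(2 / 3, 2 / 9, 1 / 9))
```

Because the weights sum to 1, the combined scores remain valid confidences, and raising w1 toward 1 makes the ensemble defer increasingly to the most accurate detector, as the schedule above intends.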
This method overcomes the problems of traditional surface defect detection and, by combining the advantages of the three algorithms, comprehensively improves the efficiency and accuracy of glass surface defect detection.
The foregoing further describes the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not limited to these descriptions. Those skilled in the art may make several equivalent substitutions or obvious modifications without departing from the spirit of the invention, and all such variants of the same properties or uses shall be deemed to fall within the scope of the invention.

Claims (4)

1. A method for improving the detection efficiency and accuracy of glass surface defects is characterized by comprising the following steps:
step one: extracting a defect sample, inputting it into the faster-rcnn target recognition network for training and learning, and saving the learned model;
step two: extracting a defect sample, inputting it into the ssd target detection network for training and learning, and saving the learned model;
step three: extracting a defect sample, inputting it into the Yolov3 target detection network for training and learning, and saving the learned model;
step four: performing image detection and recognition with the trained faster-rcnn model to obtain the image detection accuracy of the faster-rcnn target recognition network;
step five: performing image detection and recognition with the trained ssd model to obtain the image detection accuracy of the ssd target detection network;
step six: performing image detection and recognition with the trained Yolov3 model to obtain the image detection accuracy of the Yolov3 target detection network;
step seven: comparing the image detection accuracies of the faster-rcnn target recognition network, the ssd target detection network and the Yolov3 target detection network, and assigning weights in descending order of accuracy, denoted w1, w2 and w3 in turn;
step eight: in the 1st round, combining the faster-rcnn, ssd and Yolov3 networks according to the weights w1:w2:w3 = 1:1:1 to obtain a combined classifier, and training it to obtain the comprehensive accuracy, recorded as accuracy1;
step nine: in the nth round (n ≥ 2), combining the faster-rcnn, ssd and Yolov3 networks according to the weights w1 = n/(n+1), w2 = (2/3)(1 - n/(n+1)), w3 = (1/3)(1 - n/(n+1)) to obtain a combined classifier, and training it to obtain the comprehensive accuracy, recorded as accuracy(n);
step ten: recording the ideal accuracy as p, where p is one of the accuracy(n) values; when |accuracy(n) - p| < ε, accuracy(n) has converged to p; the weights w1, w2 and w3 of that round are recorded as the optimal weight ratio, and the faster-rcnn, ssd and Yolov3 networks are combined according to the optimal weight ratio to obtain the optimal network;
step eleven: combining the trained faster-rcnn model, the trained ssd model and the trained Yolov3 model by the dynamic weights;
step twelve: collecting a sample image, inputting it into the dynamically weight-combined model, and outputting the positions and categories of the defects on the sample image.
2. The method according to claim 1, wherein the surface defects of the defect sample are scratches, chipped edges, air bubbles or stains.
3. The method according to claim 2, wherein in step twelve the sample image is denoised by the residual method.
4. The method for improving the detection efficiency and accuracy of glass surface defects according to claim 2 or 3, wherein in step twelve the sample image is denoised by median filtering.
CN202010144610.4A 2020-03-04 2020-03-04 Method for improving glass surface defect detection efficiency and accuracy Active CN111291830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010144610.4A CN111291830B (en) 2020-03-04 2020-03-04 Method for improving glass surface defect detection efficiency and accuracy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010144610.4A CN111291830B (en) 2020-03-04 2020-03-04 Method for improving glass surface defect detection efficiency and accuracy

Publications (2)

Publication Number Publication Date
CN111291830A CN111291830A (en) 2020-06-16
CN111291830B true CN111291830B (en) 2023-03-03

Family

ID=71022529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010144610.4A Active CN111291830B (en) 2020-03-04 2020-03-04 Method for improving glass surface defect detection efficiency and accuracy

Country Status (1)

Country Link
CN (1) CN111291830B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123111A (en) * 2017-04-14 2017-09-01 浙江大学 A kind of depth residual error net structure method for mobile phone screen defects detection
CN108765391A (en) * 2018-05-19 2018-11-06 科立视材料科技有限公司 A kind of plate glass foreign matter image analysis methods based on deep learning
CN108918527A (en) * 2018-05-15 2018-11-30 佛山市南海区广工大数控装备协同创新研究院 A kind of printed matter defect inspection method based on deep learning
CN110728657A (en) * 2019-09-10 2020-01-24 江苏理工学院 Annular bearing outer surface defect detection method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11210777B2 (en) * 2016-04-28 2021-12-28 Blancco Technology Group IP Oy System and method for detection of mobile device fault conditions

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123111A (en) * 2017-04-14 2017-09-01 浙江大学 A kind of depth residual error net structure method for mobile phone screen defects detection
CN108918527A (en) * 2018-05-15 2018-11-30 佛山市南海区广工大数控装备协同创新研究院 A kind of printed matter defect inspection method based on deep learning
CN108765391A (en) * 2018-05-19 2018-11-06 科立视材料科技有限公司 A kind of plate glass foreign matter image analysis methods based on deep learning
CN110728657A (en) * 2019-09-10 2020-01-24 江苏理工学院 Annular bearing outer surface defect detection method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Glass Defect Recognition Methods Based on an Improved Convolutional Neural Network; Zhang Dandan (张丹丹); China Masters' Theses Full-Text Database, Engineering Science and Technology I; 2019-09-15; pp. B015-303 *

Also Published As

Publication number Publication date
CN111291830A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN113450307B (en) Product edge defect detection method
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN109870461B (en) Electronic components quality detection system
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN109767422A (en) Pipe detection recognition methods, storage medium and robot based on deep learning
CN109598287A (en) The apparent flaws detection method that confrontation network sample generates is generated based on depth convolution
CN115063409A (en) Method and system for detecting surface material of mechanical cutter
El-Sayed et al. New edge detection technique based on the shannon entropy in gray level images
CN112614062A (en) Bacterial colony counting method and device and computer storage medium
CN107038688A (en) The detection of image noise and denoising method based on Hessian matrixes
CN114495098B (en) Diaxing algae cell statistical method and system based on microscope image
CN115147409A (en) Mobile phone shell production quality detection method based on machine vision
CN114627383A (en) Small sample defect detection method based on metric learning
CN111612759B (en) Printed matter defect identification method based on deep convolution generation type countermeasure network
CN111429372A (en) Method for enhancing edge detection effect of low-contrast image
CN113870202A (en) Far-end chip defect detection system based on deep learning technology
CN113780423A (en) Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model
CN114022383A (en) Moire pattern removing method and device for character image and electronic equipment
CN115731198A (en) Intelligent detection system for leather surface defects
CN113076860B (en) Bird detection system under field scene
CN108491796B (en) Time domain periodic point target detection method
CN117152136B (en) Biological aerosol monitoring method based on colony unit counting
CN111291830B (en) Method for improving glass surface defect detection efficiency and accuracy
CN113673396A (en) Spore germination rate calculation method and device and storage medium
CN113538342A (en) Convolutional neural network-based quality detection method for coating of aluminum aerosol can

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant