CN113592859B - Deep learning-based classification method for defects of display panel - Google Patents

Deep learning-based classification method for defects of display panel

Info

Publication number
CN113592859B
CN113592859B (application CN202111125529.2A)
Authority
CN
China
Prior art keywords
defect
classification
classification model
picture
suspected
Prior art date
Legal status
Active
Application number
CN202111125529.2A
Other languages
Chinese (zh)
Other versions
CN113592859A (en)
Inventor
左右祥
杨义禄
李波
关玉萍
查世华
Current Assignee
Zhongdao Optoelectronic Equipment Co ltd
Original Assignee
Zhongdao Optoelectronic Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongdao Optoelectronic Equipment Co ltd
Priority to CN202111125529.2A
Publication of CN113592859A
Application granted
Publication of CN113592859B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/0004 Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G06F18/00 Pattern recognition)
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/045 Combinations of networks (G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a classification method for display panel defects based on deep learning, which comprises the following steps: acquiring at least one suspected defect picture through automatic optical detection; inputting the suspected defect picture into a first classification model, wherein the first classification model is used for judging whether the suspected defect picture is a real defect or a false defect; and if the suspected defect picture is a real defect, inputting the suspected defect picture into a second classification model, wherein the second classification model is used for judging whether the suspected defect picture is a black defect or a white defect. The classification models, trained as deep-learning convolutional neural networks and arranged as a cascade of classifiers, automatically classify the defect pictures detected on the display panel by AOI, with high precision, high speed and good robustness.

Description

Deep learning-based classification method for defects of display panel
Technical Field
The invention belongs to the technical field of automatic classification in machine vision and relates to a classification method for display panel defects based on deep learning, and more particularly to a method for classifying target defect images using a classification model trained on defect images with deep learning techniques.
Background
AOI (Automated Optical Inspection) is equipment that inspects, based on optical principles, the common defects encountered in soldering production. AOI is a relatively new inspection technology, but it has developed rapidly, and many manufacturers have released AOI inspection equipment. During automatic inspection, the machine scans the PCB with a camera and acquires images, compares the inspected solder joints with the qualified parameters in a database, identifies the defects on the PCB through image processing, and displays or marks them so that repair personnel can repair them.
At present, many factories still rely on manual visual classification of the defective products detected by machine-vision equipment, and improve their production lines according to the defect types. However, manual visual classification is inefficient and highly subjective, which severely restricts the automation of industrial manufacturing. Automatic classification based on machine vision is therefore indispensable to intelligent manufacturing: because the defect type of a product can be obtained in real time, feedback can be given to the production system according to the defect type to eliminate its cause and thereby improve production, significantly reducing the number of defects in a continuous process.
The existing solution closest to the present invention trains a model for each defect type by manually selecting features and then applying a machine learning method, and uses these models to classify defects automatically. For example, Chinese patent application No. CN201910537298.2 discloses an automatic defect classification method based on machine learning: a sample image is preprocessed, each defect picture is enhanced, machine learning is performed for each defect type to generate a classification model for that defect, and the features extracted from a target image are then judged one by one with this series of classification models to determine the specific category of the target image. The drawbacks of this method are that the features extracted from the defect pictures are selected manually based on experience, and that each defect corresponds to one classification model, so judging a target image requires running every defect model, which gives poor real-time performance.
Disclosure of Invention
The purpose of the invention is realized by the following technical scheme.
To solve these problems and address the shortcomings of manual feature extraction, the invention uses convolutional neural networks from deep learning to extract features automatically, first makes a preliminary classification judgment on the defect picture, and then realizes automatic classification of defect pictures in machine vision through a cascade of classifiers.
Specifically, according to a first aspect of the present invention, the present invention provides a classification method for a display panel defect based on deep learning, comprising the following steps:
acquiring at least one suspected defect picture through automatic optical detection;
inputting the suspected defect picture into a first classification model, wherein the first classification model is used for judging whether the suspected defect picture is a real defect or a false defect;
and if the suspected defect picture is a real defect, inputting the suspected defect picture into a second classification model, wherein the second classification model is used for judging whether the suspected defect picture is a black defect or a white defect.
Further, the method further comprises:
if the suspected defect picture is a black defect, inputting the suspected defect picture into a third classification model, wherein the third classification model is used for distinguishing the specific category to which the defect in the black defect picture belongs;
and if the suspected defect picture is a white defect, inputting the suspected defect picture into a fourth classification model, wherein the fourth classification model is used for distinguishing the specific category to which the defect belongs in the white defect picture.
Further, the first, second, third and fourth classification models are deep neural networks trained in advance, and the training process of each classification model is as follows:
(1) creating a classification model;
(2) inputting the grouped data into a classification model for model training;
(3) comparing the classification result produced by the classification model with the true classification label to obtain the loss value of the model, and adjusting the connection weights in the model so that the loss value of the network keeps decreasing, thereby completing the training of the model.
Furthermore, the classification model combines a feature extraction network, formed by cascading 8 feature extraction units, with a classification network, where each feature extraction unit consists of a convolution layer, an activation layer and a maximization pooling layer.
Further, the convolution layer slides a convolution kernel over the image and performs a convolution calculation, in which each element of the convolution kernel is multiplied by the corresponding element of the image region covered by the kernel and the products are summed;
the activation layer in the feature extraction unit is an activation operation cascaded after the convolution layer, where the activation function is ReLu;
the maximization pooling layer in the feature extraction unit slides an n × n pixel neighborhood over the output of the activation layer and takes the maximum value over all pixels in each neighborhood.
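As an illustration of these two operations, the short NumPy sketch below performs the multiply-and-sum convolution and the n × n neighborhood maximum described above; it is provided for explanation only and is not part of the claimed embodiment (the 6 × 6 input, the averaging kernel and the non-overlapping stride are assumptions).

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image; at each position multiply the kernel
    element-wise with the covered image region and sum the products."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map: np.ndarray, n: int) -> np.ndarray:
    """Slide an n x n neighborhood over the feature map (non-overlapping,
    stride n assumed here) and keep the maximum value of each neighborhood."""
    h, w = feature_map.shape[0] // n, feature_map.shape[1] // n
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = feature_map[i * n:(i + 1) * n, j * n:(j + 1) * n].max()
    return out

if __name__ == "__main__":
    img = np.arange(36, dtype=float).reshape(6, 6)      # toy 6 x 6 "image"
    k = np.ones((3, 3)) / 9.0                           # toy averaging kernel
    fmap = np.maximum(conv2d_valid(img, k), 0)          # convolution followed by ReLu
    print(max_pool(fmap, 2).shape)                      # (2, 2)
```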
Furthermore, the classification network in the classification model is a fully-connected neural network consisting of an input layer, two hidden layers and an output layer; a ReLu activation layer is cascaded after the input layer and the two hidden layers, and a sigmoid layer, formed by the sigmoid activation function, is cascaded after the output layer to output the probability that a sample belongs to a given class.
Further, the classification result produced by the classification model is compared with the true classification label to obtain the loss value of the model, using a cross-entropy calculation with the following formula:
H(p, q) = -∑_x p(x) log q(x)
where H(p, q) is the loss value, p(x) is the desired output, and q(x) is the actual output.
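For instance, for a binary real/false decision the cross entropy can be evaluated as follows; this is a small illustrative calculation with made-up numbers, not data from the patent.

```python
import math

# Desired output p(x) for a picture whose true class is "real defect",
# and actual output q(x) produced by the sigmoid layer (made-up values).
p = {"real": 1.0, "false": 0.0}
q = {"real": 0.8, "false": 0.2}

# H(p, q) = -sum_x p(x) * log q(x); terms with p(x) = 0 contribute nothing.
loss = -sum(p[x] * math.log(q[x]) for x in p if p[x] > 0)
print(round(loss, 4))  # 0.2231 (smaller when q assigns more probability to the true class)
```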
According to a second aspect of the present invention, there is provided a classification apparatus for a display panel defect based on deep learning, comprising:
the image acquisition module is used for acquiring at least one suspected defect image through automatic optical detection;
a true and false defect judgment module, configured to input the suspected defect picture into a first classification model, where the first classification model is used to judge whether the suspected defect picture is a true defect or a false defect;
and the black and white defect judging module is used for inputting the suspected defect picture into a second classification model if the suspected defect picture is a real defect, and the second classification model is used for judging whether the suspected defect picture is a black defect or a white defect.
According to a third aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method according to any one of the first aspect.
According to a fourth aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program for execution by a processor to perform the method according to any one of the first aspect.
The advantages of the invention are as follows: features are extracted automatically by the convolutional neural network, the defect pictures first undergo a preliminary classification judgment and are then subdivided in a cascaded manner, so that automatic classification of defect pictures in machine vision is realized. The classification models, trained as deep-learning convolutional neural networks and arranged as a cascade of classifiers, automatically classify the defect pictures detected on the display panel by AOI, with high precision, high speed and good robustness.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of the classification method of the present invention.
FIG. 2 is a diagram of a classification model network framework according to the present invention.
Fig. 3 is a schematic diagram of an activation function of an activation layer in the feature extraction unit of the present invention.
FIG. 4 is a schematic diagram of a sigmoid activation function according to the present invention.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 shows a schematic diagram of a storage medium provided in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention discloses a classification method for display panel defects based on deep learning. Among the defects detected on the display panel by AOI automatic optical detection equipment, false defects are often reported because of limits in equipment precision. Therefore, the collected pictures are first divided into two groups (defective and non-defective), and a convolutional neural network classification model is trained on this data to obtain classification model A. The defective pictures are then divided into two groups (black defects and white defects) according to their visual characteristics, and a second convolutional neural network classification model is trained on the black-and-white defect data to obtain classification model B, which distinguishes whether a defect picture shows a black defect or a white defect. The black defects and the white defects are then each classified at a finer level, giving model C1 and model C2. When classifying the defect pictures detected by AOI, model A first judges whether a picture shows a real defect; if it does, the picture is input into model B, and, according to the result of model B, model C1 or model C2 is used to subdivide the defect. Finally, threshold filtering is applied to the classification result of model C1 or model C2 to determine the final defect category of the defect picture. The classification models, trained as deep-learning convolutional neural networks and arranged as a cascade of classifiers, automatically classify the defect pictures detected on the display panel by AOI with high precision, high speed and good robustness.
Specifically, as shown in fig. 1, the present invention is directed to classifying display panel defects. The calculation process is as follows:
1. The defect pictures detected by AOI are divided into two groups, according to whether a real defect is present, to form data set D1; the pictures containing real defects are divided into black-defect and white-defect groups according to their visual characteristics, to form data set D2; and the black defects and the white defects are each further subdivided according to the defect types to be classified, to form data sets D3 and D4.
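For illustration, the four grouped data sets could be organized as follows. This is only a sketch with assumed annotations and names (Sample, is_real, colour, label); the patent does not prescribe any particular data format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sample:
    path: str               # image file path of the AOI-detected suspected defect
    is_real: bool           # real defect (True) vs. AOI false detection (False)
    colour: Optional[str]   # "black" or "white" for real defects, else None
    label: Optional[str]    # fine-grained defect class name, else None

def group_samples(samples: List[Sample]):
    """Build D1-D4 as (path, target) pairs according to step 1."""
    D1 = [(s.path, int(s.is_real)) for s in samples]                  # real vs. false defect
    real = [s for s in samples if s.is_real]
    D2 = [(s.path, int(s.colour == "black")) for s in real]           # black vs. white defect
    D3 = [(s.path, s.label) for s in real if s.colour == "black"]     # black-defect subclasses
    D4 = [(s.path, s.label) for s in real if s.colour == "white"]     # white-defect subclasses
    return D1, D2, D3, D4
```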
2. Classification model training is carried out with a deep-learning convolutional neural network on the grouped data D1, D2, D3 and D4, yielding classification models A, B, C1 and C2 respectively.
2.1 create a classification model according to fig. 2.
The classification model is formed by combining a feature extraction network and a classification network, wherein the feature extraction network is formed by cascading 8 feature extraction units, and each feature extraction unit comprises a convolution layer, an activation layer and a maximization pooling layer.
The convolution layer slides a convolution kernel over the image and performs a convolution calculation, in which each element of the convolution kernel is multiplied by the corresponding element of the image region covered by the kernel and the products are summed.
The activation layer in the feature extraction unit is an activation operation cascaded after the convolution layer, where the activation function is a Rectified Linear Unit (ReLu), as shown in Fig. 3, with the formula
f(x) = max(0, x)
The maximization pooling layer in the feature extraction unit slides an n × n pixel neighborhood over the output of the activation layer and takes the maximum value over all pixels in each neighborhood.
The classification network in the classification model is a fully-connected neural network consisting of an input layer, two hidden layers and an output layer. A ReLu activation layer is cascaded after the input layer and the two hidden layers, and a sigmoid layer, formed by the sigmoid activation function, is cascaded after the output layer to output the probability that a sample belongs to a given class, as shown in Fig. 4, with the formula:
sigmoid(x) = 1 / (1 + e^(-x))
the sizes of convolution kernels of 8 feature extraction units are 5 × 5, 3 × 3 and 3 × 3 in sequence, and the size of each feature extraction unit pooling layer is in sequence: 5, 3, and the number of feature maps output by each feature extraction unit is: 32, 32, 64, 64, 64, 64, 128, 256.
The number of neurons of an input layer in the classification network is 4096, the number of neurons of two hidden layers is 2048 and 1024 respectively, and the number of neurons of an output layer is the number of classes to be classified.
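A minimal PyTorch sketch of such a classification model is shown below for illustration. It is not the patent's reference implementation: the per-unit kernel and pooling sizes are only partially listed above, so the configuration used here (a 5 × 5 kernel for the first unit and 3 × 3 kernels afterwards, pooling with stride 2, and a 1024 × 1024 single-channel input so that the flattened features reach the stated 4096 neurons) is an assumption.

```python
import torch
import torch.nn as nn

# Channel widths are taken from the description above; kernel sizes, pooling
# parameters and the 1024 x 1024 grayscale input are assumptions of this sketch.
CHANNELS = [32, 32, 64, 64, 64, 64, 128, 256]

class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        units, in_ch = [], 1
        for i, out_ch in enumerate(CHANNELS):
            k = 5 if i == 0 else 3                                        # assumed kernel sizes
            units += [
                nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),  # convolution layer
                nn.ReLU(inplace=True),                                    # activation layer (ReLu)
                nn.MaxPool2d(kernel_size=3, stride=2, padding=1),         # maximization pooling layer
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*units)                 # 8 cascaded feature extraction units
        self.classifier = nn.Sequential(                      # fully-connected classification network
            nn.Flatten(), nn.ReLU(),                          # input layer: 256 * 4 * 4 = 4096 neurons
            nn.Linear(4096, 2048), nn.ReLU(),                 # first hidden layer
            nn.Linear(2048, 1024), nn.ReLU(),                 # second hidden layer
            nn.Linear(1024, num_classes), nn.Sigmoid(),       # output layer + sigmoid probabilities
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model_A = DefectClassifier(num_classes=2)                 # e.g. real vs. false defect
    probs = model_A(torch.randn(1, 1, 1024, 1024))
    print(probs.shape)                                        # torch.Size([1, 2])
```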
2.2 The grouped data is input into the classification network for model training.
2.3 The classification result produced by the network is compared with the true classification label to obtain the loss value of the network, and the connection weights in the network model are adjusted so that the loss value keeps decreasing until training is complete.
The loss value is calculated by comparing the network prediction with the true label, using cross entropy with the following formula:
H(p, q) = -∑_x p(x) log q(x)
where p(x) is the desired output and q(x) is the actual output. The network weights are continuously adjusted so that the softmax cross-entropy loss value keeps decreasing.
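Steps 2.2 and 2.3 might then be implemented roughly as follows. This sketch reuses the DefectClassifier class from the previous sketch and substitutes random tensors for one grouped data set; the binary cross-entropy on the sigmoid outputs, the Adam optimizer, the learning rate and the epoch count are all assumptions rather than values given in the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data standing in for one grouped data set (e.g. D1):
# 4 grayscale pictures with one-hot labels for 2 classes (real / false defect).
images = torch.randn(4, 1, 1024, 1024)
labels = torch.eye(2)[torch.randint(0, 2, (4,))]
loader = DataLoader(TensorDataset(images, labels), batch_size=2, shuffle=True)

model = DefectClassifier(num_classes=2)      # class defined in the previous sketch
criterion = nn.BCELoss()                     # cross entropy between p(x) and q(x)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(2):                       # epoch count chosen arbitrarily for the sketch
    for x, p_true in loader:
        q_pred = model(x)                    # actual output of the sigmoid layer
        loss = criterion(q_pred, p_true)     # compare with the true classification label
        optimizer.zero_grad()
        loss.backward()                      # adjust the connection weights ...
        optimizer.step()                     # ... so that the loss value keeps decreasing
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```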
3. During detection, for a defect picture detected by AOI, the final specific defect type is obtained by cascade classification according to the four models A, B, C1 and C2.
The classification model A, trained on the D1 data set, distinguishes whether the defect picture detected by AOI is a true defect or a false defect, where a false defect is a picture falsely detected by AOI.
The classification model B, trained on the D2 data set, distinguishes whether the visual characteristic of the defect in the defect picture is black or white.
The classification model C1, trained on the D3 data set, distinguishes the specific category to which the defect in a black-defect picture belongs.
The classification model C2, trained on the D4 data set, distinguishes the specific category to which the defect in a white-defect picture belongs.
The picture of the defect detected by AOI is input into model A. If model A judges it to be a false defect, it is output as a false defect; otherwise the picture enters model B, which judges whether the defect is a black defect or a white defect. If it is a black defect, model C1 further subdivides the specific category; if it is a white defect, model C2 subdivides the specific category. Threshold filtering is then applied to the result of C1 or C2 to filter out low-probability results: a result below the threshold is regarded as a false defect, and a result above the threshold is output as the final classification.
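Written out as code, the cascade and the final threshold filtering could look like the following sketch. The models are assumed to behave like the classifiers sketched earlier (returning per-class probabilities), and the class-name lists, the class-index conventions and the 0.5 threshold are hypothetical placeholders.

```python
import torch

BLACK_CLASSES = ["black_spot", "black_line", "black_mura"]   # hypothetical category names
WHITE_CLASSES = ["white_spot", "white_line", "white_mura"]   # hypothetical category names

@torch.no_grad()
def classify_defect(picture, model_A, model_B, model_C1, model_C2, threshold=0.5):
    """Cascade A -> B -> (C1 or C2), then threshold-filter the fine-grained result."""
    if int(torch.argmax(model_A(picture), dim=1)) == 0:          # index 0 assumed: false defect
        return "false defect"
    is_black = int(torch.argmax(model_B(picture), dim=1)) == 0   # index 0 assumed: black defect
    probs = (model_C1 if is_black else model_C2)(picture)[0]
    names = BLACK_CLASSES if is_black else WHITE_CLASSES
    best = int(torch.argmax(probs))
    if float(probs[best]) < threshold:       # small-probability result is treated as a false defect
        return "false defect"
    return names[best]

# Example call (models and the 1 x 1 x 1024 x 1024 picture tensor as in the sketches above):
# print(classify_defect(picture, model_A, model_B, model_C1, model_C2))
```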
The method realizes automatic feature extraction through the convolutional neural network, first makes a preliminary classification judgment on the defect pictures and then subdivides them in a cascaded manner, thereby realizing automatic classification of defect pictures in machine vision.
The embodiment of the present application further provides an electronic device corresponding to the method for classifying defects of a display panel based on deep learning provided in the foregoing embodiment, so as to execute the above classification method. The embodiments of the present application are not limited thereto.
Please refer to fig. 5, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 5, the electronic device 2 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the computer program to execute the method for classifying defects of a display panel based on deep learning provided in any of the foregoing embodiments of the present application.
The Memory 201 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 203 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used.
Bus 202 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is used for storing a program, and the processor 200 executes the program after receiving an execution instruction, and the method for classifying a defect of a display panel based on deep learning disclosed in any embodiment of the present application may be applied to the processor 200, or implemented by the processor 200.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 200. The processor 200 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiment of the application and the deep learning-based classification method for the defects of the display panel provided by the embodiment of the application have the same beneficial effects as the method adopted, operated or realized by the electronic device.
The present embodiment further provides a computer-readable storage medium corresponding to the method for classifying a defect of a display panel based on deep learning provided in the foregoing embodiment, please refer to fig. 6, which illustrates the computer-readable storage medium being an optical disc 30 having a computer program (i.e., a program product) stored thereon, where the computer program, when executed by a processor, executes the method for classifying a defect of a display panel based on deep learning provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the deep learning based classification method for the display panel defects provided by the embodiment of the present application have the same beneficial effects as the method adopted, run or implemented by the application program stored in the computer-readable storage medium.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the creation apparatus of a virtual machine according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A classification method for display panel defects based on deep learning is characterized by comprising the following steps:
acquiring at least one suspected defect picture through automatic optical detection;
inputting the suspected defect picture into a first classification model, wherein the first classification model is used for judging whether the suspected defect picture is a real defect or a false defect;
if the suspected defect picture is a real defect, inputting the suspected defect picture into a second classification model, wherein the second classification model is used for judging whether the suspected defect picture is a black defect or a white defect;
if the suspected defect picture is a black defect, inputting the suspected defect picture into a third classification model, wherein the third classification model is used for distinguishing the specific category to which the defect in the black defect picture belongs;
if the suspected defect picture is a white defect, inputting the suspected defect picture into a fourth classification model, wherein the fourth classification model is used for distinguishing the specific category to which the defect belongs in the white defect picture;
the first, second, third and fourth classification models are pre-trained deep neural networks, and the training process of each classification model is as follows:
(1) creating a classification model;
(2) inputting the grouped data into a classification model for model training;
(3) and comparing the classification result obtained by the classification model with the real classification label to obtain the loss value of the classification model, so that the link weight in the classification model is adjusted, the loss value of the network is continuously reduced, and the model training is further completed.
2. The method of claim 1, wherein the classification method for the display panel defects based on deep learning,
the classification model is formed by combining a feature extraction network and a classification network which are formed by cascading 8 feature extraction units, wherein each feature extraction unit is composed of a convolution layer, an activation layer and a maximization pooling layer.
3. The method of claim 2, wherein the classification method for the display panel defects based on deep learning,
the convolution layer is used for sliding on the image by utilizing a convolution kernel and carrying out convolution calculation on the image, wherein the convolution calculation mode is that each element in the convolution kernel is multiplied by a corresponding element in an image area covered by the convolution kernel and then summed;
an activation layer in the feature extraction unit is an activation operation cascaded behind a convolutional layer, wherein an activation function is ReLu;
the maximized pooling layer in the feature extraction unit is to slide the neighborhood of n × n pixels on the output result of the active layer and to obtain the maximum value for all pixels in each neighborhood.
4. The method of claim 2, wherein the classification method for the display panel defects based on deep learning,
the classification network in the classification model is a fully-connected neural network consisting of an input layer, two hidden layers and an output layer; the method comprises the steps that a ReLu activation layer is cascaded behind an input layer and two hidden layers, a sigmoid layer is cascaded behind an output layer to output probability values of samples belonging to a certain class, wherein the sigmoid layer is formed by a sigmoid activation function.
5. The method of claim 1, wherein the classification method for the display panel defects based on deep learning,
and comparing the classification result obtained by the classification model with the real classification label to obtain a loss value of the classification model, wherein a cross entropy calculation mode is adopted, and the formula is as follows:
H(p, q) = -∑_x p(x) log q(x)
where H(p, q) is the loss value, p(x) is the desired output, and q(x) is the actual output.
6. A classification apparatus for a display panel defect based on deep learning, comprising:
the image acquisition module is used for acquiring at least one suspected defect image through automatic optical detection;
a true and false defect judgment module, configured to input the suspected defect picture into a first classification model, where the first classification model is used to judge whether the suspected defect picture is a true defect or a false defect;
a black and white defect judgment module, configured to input the suspected defect picture into a second classification model if the suspected defect picture is a real defect, where the second classification model is used to judge whether the suspected defect picture is a black defect or a white defect;
if the suspected defect picture is a black defect, inputting the suspected defect picture into a third classification model, wherein the third classification model is used for distinguishing the specific category to which the defect in the black defect picture belongs;
if the suspected defect picture is a white defect, inputting the suspected defect picture into a fourth classification model, wherein the fourth classification model is used for distinguishing the specific category to which the defect belongs in the white defect picture;
the first, second, third and fourth classification models are pre-trained deep neural networks, and the training process of each classification model is as follows:
(1) creating a classification model;
(2) inputting the grouped data into a classification model for model training;
(3) and comparing the classification result obtained by the classification model with the real classification label to obtain the loss value of the classification model, so that the link weight in the classification model is adjusted, the loss value of the network is continuously reduced, and the model training is further completed.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method according to any of claims 1-5.
CN202111125529.2A 2021-09-26 2021-09-26 Deep learning-based classification method for defects of display panel Active CN113592859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111125529.2A CN113592859B (en) 2021-09-26 2021-09-26 Deep learning-based classification method for defects of display panel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111125529.2A CN113592859B (en) 2021-09-26 2021-09-26 Deep learning-based classification method for defects of display panel

Publications (2)

Publication Number Publication Date
CN113592859A CN113592859A (en) 2021-11-02
CN113592859B true CN113592859B (en) 2022-01-14

Family

ID=78242282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111125529.2A Active CN113592859B (en) 2021-09-26 2021-09-26 Deep learning-based classification method for defects of display panel

Country Status (1)

Country Link
CN (1) CN113592859B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902742B (en) * 2021-12-08 2022-05-20 中导光电设备股份有限公司 TFT-LCD detection-based defect true and false judgment method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005265503A (en) * 2004-03-17 2005-09-29 Seiko Epson Corp Display inspection method and display inspection device
JP4480150B2 (en) * 2004-11-12 2010-06-16 大日本印刷株式会社 Defect correction method for color filter substrate and color filter substrate
JP4252045B2 (en) * 2005-05-13 2009-04-08 三洋電機株式会社 Pixel defect correction method
CN109856156A (en) * 2019-01-22 2019-06-07 武汉精立电子技术有限公司 A kind of display panel tiny flaw determination method and device based on AOI
CN110376217B (en) * 2019-07-24 2021-12-21 千享科技(北京)有限公司 Active detection method and system for damage of display screen of electronic equipment
CN213689432U (en) * 2020-10-27 2021-07-13 中核建中核燃料元件有限公司 Fast reactor irradiation tank hole plugging welding spot compensation block

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734696A (en) * 2017-04-18 2018-11-02 三星显示有限公司 System and method for white point Mura detections
CN111275660A (en) * 2018-12-05 2020-06-12 合肥欣奕华智能机器有限公司 Defect detection method and device for flat panel display
KR20200092143A (en) * 2019-01-24 2020-08-03 가천대학교 산학협력단 System and method for diagnosising display panel using deep learning neural network
CN112053318A (en) * 2020-07-20 2020-12-08 清华大学 Two-dimensional PCB defect real-time automatic detection and classification device based on deep learning
CN112884712A (en) * 2021-01-22 2021-06-01 深圳精智达技术股份有限公司 Method and related device for classifying defects of display panel
CN113379686A (en) * 2021-05-26 2021-09-10 广东炬森智能装备有限公司 PCB defect detection method and device

Also Published As

Publication number Publication date
CN113592859A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN111179253B (en) Product defect detection method, device and system
US10878283B2 (en) Data generation apparatus, data generation method, and data generation program
US11176650B2 (en) Data generation apparatus, data generation method, and data generation program
CN109683360B (en) Liquid crystal panel defect detection method and device
CN110688925B (en) Cascade target identification method and system based on deep learning
CN111325713A (en) Wood defect detection method, system and storage medium based on neural network
JP2019109563A (en) Data generation device, data generation method, and data generation program
CN114549997B (en) X-ray image defect detection method and device based on regional feature extraction
JP2017049974A (en) Discriminator generator, quality determine method, and program
CN112766110A (en) Training method of object defect recognition model, object defect recognition method and device
TWI743837B (en) Training data increment method, electronic apparatus and computer-readable medium
CN111178446A (en) Target classification model optimization method and device based on neural network
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
CN116843650A (en) SMT welding defect detection method and system integrating AOI detection and deep learning
CN111814852A (en) Image detection method, image detection device, electronic equipment and computer-readable storage medium
CN113592859B (en) Deep learning-based classification method for defects of display panel
CN114429445A (en) PCB defect detection and identification method based on MAIRNet
Choi et al. Deep learning based defect inspection using the intersection over minimum between search and abnormal regions
Park et al. Advanced cover glass defect detection and classification based on multi-DNN model
CN111105399A (en) Switch surface defect detection method and system
CN113962980A (en) Glass container flaw detection method and system based on improved YOLOV5X
CN117689646A (en) High-precision defect detection method, system and medium for positive and negative sample fusion
CN112836724A (en) Object defect recognition model training method and device, electronic equipment and storage medium
CN112927222B (en) Method for realizing multi-type photovoltaic array hot spot detection based on hybrid improved Faster R-CNN
CN113034432A (en) Product defect detection method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant