CN118154603A - Display screen defect detection method and system based on cascading multilayer feature fusion network - Google Patents

Info

Publication number: CN118154603A
Application number: CN202410578270.4A
Authority: CN (China)
Prior art keywords: module, feature, network, convolution, feature fusion
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN118154603B
Inventors: 周鸣乐, 万金, 李刚, 李敏, 韩德隆, 李旺, 冯正乾, 赵世龙
Current and original assignee: Shandong Computer Science Center National Super Computing Center in Jinan
Application filed by Shandong Computer Science Center National Super Computing Center in Jinan
Priority to CN202410578270.4A; publication of CN118154603A; application granted, publication of CN118154603B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and provides a display screen defect detection method and system based on a cascading multilayer feature fusion network. The liquid crystal display defect detection model comprises a residual feature extraction network for extracting image features, a cascading multilayer feature fusion network for fusing shallow fine-grained information and deep semantic information in the image, and a target recognition network for determining defect type, position and confidence information. The designed residual feature extraction module uses a depthwise convolution module and a pointwise convolution module to effectively capture fine-grained features in the image, reduce the number of model parameters and improve detection speed. A feature enhancement module designed in the feature extraction network considers both the detail features and the overall structure of liquid crystal display defects, so that more important and salient defect features can be extracted and the accuracy of the model in detecting different types of defects is improved.

Description

Display screen defect detection method and system based on cascading multilayer feature fusion network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a display screen defect detection method and system based on a cascading multilayer feature fusion network.
Background
In the production of liquid crystal displays, surface defect detection is a critical quality control step and is essential to ensuring that the final product meets quality standards. Owing to factors such as the manufacturing environment and equipment, various types of defects, such as cracks, damage, scratches and spots, may arise in a liquid crystal display; these defects not only affect the performance and appearance of the product but also reduce its market competitiveness.
The inventors have found that the differences between defect types in a liquid crystal display are large, while the small differences within similar defects are difficult to identify effectively. The scale of liquid crystal display defect targets varies widely, and existing target detectors cannot effectively fuse shallow detail information with deep semantic information, so they perform poorly on defects of different scales. Liquid crystal display products contain a large number of complex, tiny targets whose semantic information is weak, making them difficult for a general-purpose target detector to identify accurately. To meet actual industrial production, inspection of liquid crystal display defects must be completed within a prescribed time limit, and neither the detection speed nor the detection accuracy of general-purpose target detectors meets actual production requirements.
Disclosure of Invention
To solve the above problems, the invention provides a display screen defect detection method and system based on a cascading multilayer feature fusion network. A feature enhancement module is designed in the residual feature extraction network that considers both the detail features and the overall structure of liquid crystal display defects, so that more important and salient defect features can be extracted and the accuracy of the model in detecting different types of defects is improved. Fine-grained features extracted by shallower layers of the network are introduced into the cascading multilayer feature fusion network to enrich the extraction of detail information of complex small targets and better capture representations of targets at different scales.
In order to achieve the above object, the present invention is realized by the following technical scheme:
in a first aspect, the present invention provides a display screen defect detection method based on a cascading multilayer feature fusion network, including:
Acquiring an image of a liquid crystal display screen;
obtaining a display screen defect detection result according to the liquid crystal display screen image and a preset liquid crystal display screen defect detection model;
The liquid crystal display defect detection model comprises a residual feature extraction network for extracting image features, a cascading multilayer feature fusion network for fusing shallow fine-grained information and deep semantic information in the image, and a target recognition network for determining defect type, position and confidence information. In the residual feature extraction network, a depthwise convolution module and a pointwise convolution module are used to capture fine-grained features in the image while reducing the number of model parameters, and both the detail features and the overall structure of liquid crystal display defects are considered during feature extraction. The cascading multilayer feature fusion network takes the outputs of different modules of the residual feature extraction network as simultaneous inputs and introduces the fine-grained features extracted by the shallower layers of the network.
In a second aspect, the present invention further provides a display screen defect detection system based on a cascaded multi-layer feature fusion network, including:
A data acquisition module configured to: acquiring an image of a liquid crystal display screen;
a detection module configured to: obtaining a display screen defect detection result according to the liquid crystal display screen image and a preset liquid crystal display screen defect detection model;
The liquid crystal display defect detection model comprises a residual feature extraction network for extracting image features, a cascading multilayer feature fusion network for fusing shallow fine-grained information and deep semantic information in the image, and a target recognition network for determining defect type, position and confidence information. In the residual feature extraction network, a depthwise convolution module and a pointwise convolution module are used to capture fine-grained features in the image while reducing the number of model parameters, and both the detail features and the overall structure of liquid crystal display defects are considered during feature extraction. The cascading multilayer feature fusion network takes the outputs of different modules of the residual feature extraction network as simultaneous inputs and introduces the fine-grained features extracted by the shallower layers of the network.
In a third aspect, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the steps of the method for detecting a display screen defect based on the cascaded multi-layer feature fusion network according to the first aspect are implemented when the processor executes the program.
In a fourth aspect, the present invention also provides a computer program product, which comprises a computer program, the computer program implementing the steps of the display screen defect detection method based on the cascaded multi-layer feature fusion network according to the first aspect when being executed by a processor.
Compared with the prior art, the invention has the beneficial effects that:
1. The liquid crystal display defect detection model comprises a residual feature extraction network for extracting image features, a cascading multilayer feature fusion network for fusing shallow fine-grained information and deep semantic information in the image, and a target recognition network for determining defect type, position and confidence information. The designed residual feature extraction module uses a depthwise convolution module and a pointwise convolution module to effectively capture fine-grained features in the image, reduce the number of model parameters and improve detection speed. A feature enhancement module designed in the feature extraction network considers both the detail features and the overall structure of liquid crystal display defects, so that more important and salient defect features can be extracted and the accuracy of the model in detecting different types of defects is improved. In the cascading multilayer feature fusion network, fine-grained features extracted by shallower layers are introduced to enrich the extraction of detail information of complex small targets and better capture representations of targets at different scales.
2. The invention fully extracts the local and global feature information of the liquid crystal display by designing a residual feature extraction network. A residual feature extraction module is designed that uses a depthwise convolution module and a pointwise convolution module to effectively capture fine-grained features in the image while reducing the number of model parameters and improving detection speed. A feature enhancement module is designed that considers both detail features and the overall structure, extracting more important and salient defect features and improving the accuracy of the model in detecting different types of defects. A cascading multilayer feature fusion network is designed that fully fuses shallow fine-grained information with deep semantic information, ensuring efficient fusion of features of different scales within the network and enhancing the detection of defects with large scale variation; fine-grained features extracted by shallower layers are also introduced to enrich the extraction of detail information of complex small targets. The target recognition network is designed to comprise a multi-branch feature fusion module and a detection head, strengthening the relation between multi-scale semantic information by reducing the semantic differences between features of different scales and helping the model better capture representations of targets at different scales. A mixed attention module is designed to highlight the features in the image most relevant to liquid crystal display defect detection and to suppress unimportant information, enhancing the representation capability of the features.
3. The invention designs a multi-scale perception loss function that balances the loss values of defects of different sizes in the liquid crystal display.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments and are incorporated in and constitute a part of this specification, illustrate and explain the embodiments and together with the description serve to explain the embodiments.
FIG. 1 is a flow chart of the method of embodiment 1 of the present invention;
FIG. 2 is a residual feature extraction network according to embodiment 1 of the present invention;
FIG. 3 is a convolution module according to embodiment 1 of the present invention;
FIG. 4 is a residual feature extraction module of embodiment 1 of the present invention;
FIG. 5 is a feature enhancement module of embodiment 1 of the present invention;
FIG. 6 is a high efficiency self-attention module of embodiment 1 of the present invention;
FIG. 7 is a hybrid attention module of embodiment 1 of the present invention;
FIG. 8 is a cascaded multi-layer feature fusion network of embodiment 1 of the present invention;
Fig. 9 is a multi-branch feature fusion module according to embodiment 1 of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
Surface defect detection: surface defect detection is a key technology in the field of industrial quality control, which aims to identify and locate defects on the surface of objects using computer and vision techniques.
Feature extraction network: in the target detection task, the feature extraction network extracts a feature representation of the target from the input image. It learns the features of different areas in the image, thereby providing location information. As the network level deepens, the feature extraction network can capture higher-level semantic information, which is important for understanding the category and shape of objects in the image and helps improve the accuracy of target detection.
Multiscale feature fusion network: the multi-scale feature fusion network aims to effectively capture information of different scales and improve the detection performance of a model on a target. By integrating feature information from different levels and scales, detection performance of targets of different scales is improved, and modeling of target context information is enhanced.
Embodiment 1:
In the production of liquid crystal displays, surface defect detection is a critical quality control step and is essential to ensuring that the final product meets quality standards. Owing to factors such as the manufacturing environment and equipment, various types of defects, such as cracks, damage, scratches and spots, may arise in a liquid crystal display; these defects not only affect the performance and appearance of the product but also reduce its market competitiveness. A single liquid crystal display image may simultaneously contain several different defect types, and the defects may be similar in appearance. In addition, liquid crystal display defects have varied shapes and large variations in size, so a general-purpose target detector has difficulty accurately identifying and locating defect targets. Moreover, in actual production the detection speed is also critical: defect detection must be completed within a prescribed time limit. Designing a detection method capable of rapidly identifying and distinguishing different types of defects is therefore of great significance for improving production efficiency and product quality.
With the rapid progress of automation technology, intelligent mechanical equipment has gradually replaced traditional manual operation, greatly advancing the automation level of manufacturing. Nevertheless, existing machine vision systems still face challenges in robustness, mainly due to their excessive reliance on manually designed features. Recent advances in deep learning have produced many advanced target detection models, including the single-stage YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector) models and the two-stage Faster R-CNN model. These methods not only improve detection accuracy but also enhance the adaptability of machines to various production scenes, providing new impetus for the automation of manufacturing.
However, existing defect detection methods still suffer from the following problems: the differences between defect types in a liquid crystal display are large, and tiny differences within similar defects are difficult to identify effectively; the scale of liquid crystal display defect targets varies widely, and existing target detectors cannot effectively fuse shallow detail information with deep semantic information, so they perform poorly on defects of different scales; liquid crystal display products contain a large number of complex, tiny targets whose semantic information is weak, making them difficult for a general-purpose target detector to identify accurately; and, to meet actual industrial production, liquid crystal display defect inspection must be completed within a prescribed time limit, while neither the detection speed nor the detection accuracy of general-purpose target detectors meets actual production requirements.
In summary, when detecting liquid crystal display defects, existing target detection models cannot accurately identify and locate multiple defect targets of different types or non-salient defect targets; existing target detectors cannot fully fuse shallow detail information with deep semantic information, so detection of multi-scale defect targets is poor; liquid crystal displays contain many small-target defects, and existing detectors have low accuracy on complex small-size targets; and target detection models suffer from an imbalance between detection accuracy and inference speed.
To address these problems, this embodiment provides a display screen defect detection method based on a cascading multilayer feature fusion network. A residual feature extraction network is designed to fully extract multilayer features rich in both local and global image information, improving detection performance on different defect types and non-salient defect targets. A residual feature extraction module is designed that uses a depthwise convolution module and a pointwise convolution module to effectively capture fine-grained features in the image, reduce the number of model parameters and improve detection speed. A cascading multilayer feature fusion network is designed to fully fuse shallow fine-grained information with deep semantic information, improving detection accuracy for multi-scale defect targets. A mixed attention module is proposed that uses both spatial detail and contextual semantic information to highlight the features of the image most relevant to liquid crystal display defect detection while suppressing unimportant information, enhancing the representation capability of the features. A target recognition network is designed that reduces the semantic differences between features of different scales to strengthen the relation between multi-scale semantic information, helping the model better capture representations of targets at different scales. As shown in FIG. 1, the method of this embodiment comprises the following steps:
S1, acquiring liquid crystal display images, randomly generating masks of different sizes for the acquired images to increase model robustness; labeling the processed images, constructing a data set from the annotation files and the corresponding image files, and dividing the data set into a training set, a test set and a validation set;
S2, constructing a residual feature extraction network for detecting surface defects of the liquid crystal display;
s3, constructing a cascade multilayer feature fusion network for detecting surface defects of the liquid crystal display;
S4, building a target identification network for detecting surface defects of the liquid crystal display screen;
S5, connecting a residual feature extraction network, a cascading multilayer feature fusion network and a target identification network to form a liquid crystal display surface defect detection model, and training the defect detection model by using a training set;
And S6, packaging and deploying the trained liquid crystal display surface defect detection model, and detecting the position and the type of the liquid crystal display defects.
In step S1, optionally, an image acquisition device such as a camera is used to acquire liquid crystal display images. After the images are collected, an image file is constructed, and masks of different sizes are randomly generated for each collected image. The preprocessed images are labeled with the image annotation tool labelme to construct annotation files, each image file corresponding to one annotation file. The labeled defect categories are six: cracks, damage, scratches, water drops, spots and burrs. The annotation files contain the defect position and defect type information of the liquid crystal display. A liquid crystal display data set is constructed from the annotation files and the corresponding image files, and the data set is divided into a training set, a test set and a validation set in an 8:1:1 ratio.
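A minimal sketch of the 8:1:1 data set split described above; the sample-list handling and the fixed seed are assumptions, not part of the original disclosure.

```python
# Divide a list of (image file, annotation file) pairs into
# training, test and validation sets in an 8:1:1 ratio.
import random

def split_dataset(samples, seed=0):
    rng = random.Random(seed)                    # fixed seed assumed for reproducibility
    samples = samples[:]
    rng.shuffle(samples)
    n = len(samples)
    n_train, n_test = int(0.8 * n), int(0.1 * n)
    return (samples[:n_train],                   # training set (80%)
            samples[n_train:n_train + n_test],   # test set (10%)
            samples[n_train + n_test:])          # validation set (remainder)
```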
In some embodiments, step S1 further includes randomly generating masks of different sizes. For each input image x of height H and width W, a rectangular coordinate system is established with the center point of x as the origin, and a pixel point j is taken in the i-th quadrant of x, i ∈ {1, 2, 3, 4}. Using pixel j as the top-left vertex of the randomly generated mask matrix, a square mask N is generated whose side length is set according to H and W. An element-wise matrix multiplication of the generated mask N with the image x yields the masked image x′, which increases the robustness of the model.
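A minimal sketch of the random square-mask augmentation described above. The quadrant convention and, in particular, the mask side length are assumptions: the original side-length formula is not recoverable here, so s = min(H, W) // 4 is used as a placeholder.

```python
import numpy as np

def random_square_mask(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """x: image array of shape (H, W, C); returns the masked image x'."""
    H, W = x.shape[:2]
    s = min(H, W) // 4                                  # assumed mask side length
    i = rng.integers(1, 5)                              # quadrant i in {1, 2, 3, 4}
    # sample the top-left vertex j of the square mask N inside quadrant i
    y0 = rng.integers(0, H // 2) + (0 if i in (1, 2) else H // 2)
    x0 = rng.integers(0, W // 2) + (W // 2 if i in (1, 4) else 0)
    mask = np.ones_like(x)
    mask[y0:y0 + s, x0:x0 + s, ...] = 0                 # square mask N (slicing clips at borders)
    return x * mask                                     # element-wise product of N and x
```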
In step S2, optionally, the residual feature extraction network for detecting the surface defect of the liquid crystal display screen includes a convolution module, a first residual feature extraction module, a second residual feature extraction module, a third residual feature extraction module, a fourth residual feature extraction module and a feature enhancement module.
The masked image x′ is input into the residual feature extraction network. x′ first passes through the convolution module, obtaining feature map C1; C1 is input to the first residual feature extraction module, which outputs feature map C2; C2 is input to the second residual feature extraction module, which outputs feature map C3; C3 is input to the third residual feature extraction module, which outputs feature map C4; C4 is input to the fourth residual feature extraction module, which outputs feature map C5; and C5 is input to the feature enhancement module, which outputs feature map C6.
Optionally, the convolution module includes a convolution layer with a 3×3 kernel, batch normalization and the SiLU activation function.
Optionally, the first residual feature extraction module includes three convolution modules with 1×1 kernels, three depthwise convolution modules with 3×3 kernels, one pointwise convolution module with a 1×1 kernel and one convolution module with a 3×3 kernel, and establishes a residual edge to reuse the input feature. Specifically, the input feature map X first passes through a convolution module with a 3×3 kernel and a stride of 2 to reduce the feature map size, outputting feature map X0. X0 is divided evenly along the channel dimension into feature maps X1 and X2. X1 passes through a convolution module with a 1×1 kernel that transforms the channel dimension to half of the output channels and, through a batch normalization layer and the SiLU activation function, balances the feature scales to alleviate gradient vanishing, outputting feature map A1. A1 passes through a depthwise convolution module with a 3×3 kernel, outputting feature map A2, and A2 passes through a convolution module with a 1×1 kernel that extracts fine-grained features, outputting feature map A3, to which a residual edge is added to prevent gradient vanishing. The output feature maps of the left and right paths, F_L and F_R, are then spliced by the Concat function, outputting feature map D. D is input into a depthwise convolution module with a 3×3 kernel, outputting feature map E, and E is input into a pointwise convolution module with a 1×1 kernel that reduces the channel dimension, giving the module output. The residual feature extraction module may be represented by the following formulas:

F_L = X1 ⊕ Conv1×1(DWConv3×3(SiLU(BN(Conv1×1(X1)))))
F_R = X2
F_out = PWConv1×1(DWConv3×3(Concat(F_L, F_R)))

where X denotes the input feature map of the residual feature extraction module; Conv3×3 denotes a convolution with a 3×3 kernel; Conv1×1 denotes a convolution with a 1×1 kernel; DWConv3×3 denotes a depthwise convolution with a 3×3 kernel; PWConv1×1 denotes a pointwise convolution with a 1×1 kernel; Concat denotes the splicing operation; BN denotes the batch normalization layer; SiLU denotes the activation function; F_L denotes the output feature map of the left half of the residual feature extraction module; F_R denotes the output feature map of the right half; and ⊕ denotes element-wise addition.
Optionally, the first residual feature extraction module, the second residual feature extraction module, the third residual feature extraction module and the fourth residual feature extraction module adopt the same structure.
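A minimal PyTorch sketch of the convolution module (Conv-BN-SiLU) and of the residual feature extraction module as reconstructed from the description above; the channel bookkeeping and the identity right-hand path are assumptions, since the original only gives the overall data flow.

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """Convolution layer + batch normalization + SiLU activation."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResidualFeatureExtraction(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.down = ConvModule(c_in, c_out, k=3, s=2)       # 3x3, stride 2: halves H and W
        c = c_out // 2                                      # channels per split half
        self.pw1 = ConvModule(c, c // 2, k=1)               # 1x1: transform channel dim
        self.dw = nn.Conv2d(c // 2, c // 2, 3, 1, 1, groups=c // 2)   # 3x3 depthwise
        self.pw2 = nn.Conv2d(c // 2, c, 1)                  # 1x1: fine-grained features
        self.dw_out = nn.Conv2d(c_out, c_out, 3, 1, 1, groups=c_out)  # 3x3 depthwise
        self.pw_out = nn.Conv2d(c_out, c_out, 1)            # 1x1 pointwise: reduce channels
    def forward(self, x):
        x = self.down(x)
        x1, x2 = x.chunk(2, dim=1)                          # split along channel dimension
        f_left = x1 + self.pw2(self.dw(self.pw1(x1)))       # residual edge reuses x1
        out = torch.cat([f_left, x2], dim=1)                # Concat of left/right paths
        return self.pw_out(self.dw_out(out))

# Chaining one ConvModule and four ResidualFeatureExtraction modules mirrors
# the backbone C1..C5 described above (channel widths are placeholders).
```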
In this embodiment, the feature enhancement module includes three convolution modules with 1×1 kernels, an efficient self-attention module and a mixed attention module. Feature map C5 is input into the feature enhancement module. The left branch uses a convolution module with a 1×1 kernel as a residual edge to prevent gradient vanishing, outputting feature map P1. The right branch is input into a convolution module with a 1×1 kernel, outputting feature map P2; P2 is input into the local global attention module, which outputs feature map P3, the correlation between defect targets being enhanced by the continual learning of local global attention. The output feature maps of the two branches, P1 and P3, are spliced by the Concat function, outputting feature map P4. P4 is input into the mixed attention module, which outputs feature map C6; the mixed attention module is used to highlight the features in the image most relevant to liquid crystal display defect detection and to suppress unimportant feature information, improving the defect detection capability of the model.
Optionally, the efficient self-attention module comprises a local global attention module and an enhanced feed-forward network. Specifically, the input feature map is passed through a batch normalization layer, outputting feature map G0. G0 is input to the left branch of the local global attention module, where window self-attention processing obtains the local features of the image, outputting local feature map G_l. The right branch is processed with an adaptive patch sampling operation to obtain global features: G0 is input into a global max pooling layer, outputting feature map G1; G1 is input to the global self-attention layer, which obtains a global window and provides a fixed number of tokens, outputting feature map G2; and G2 is input into an upsampling layer, outputting global feature map G_g. Finally, the local feature map G_l and the global feature map G_g are multiplied element by element, outputting feature map G3, and G3 is residually connected with the input feature map to give the output of the local global attention module.
The local global attention module may be represented by the following formula:

F′ = W-MHSA(F) ⊗ Upsample(G-MHSA(MaxPool(F)))

where W-MHSA denotes window self-attention; G-MHSA denotes global self-attention; Upsample denotes upsampling; MaxPool denotes max pooling; F denotes the input feature map; ⊗ denotes element-wise multiplication; and F′ denotes the feature map output after local global self-attention. Attention denotes self-attention and can be expressed by the following formula:

Attention(Q, K, V) = Softmax(QK^T / √d + B) V

where Q denotes the query vector; K denotes the queried vector; V denotes the value obtained by the query; √d denotes the scale factor; B denotes a trainable relative position offset; and T denotes transposition.
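A short sketch of the self-attention computation in the formula above, with the trainable relative position offset B added to the scaled scores; the (batch, tokens, dim) tensor layout is an assumption.

```python
import torch

def attention(q, k, v, bias):
    # q, k, v: (batch, n_tokens, d); bias B: (n_tokens, n_tokens), trainable
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5 + bias   # QK^T / sqrt(d) + B
    return torch.softmax(scores, dim=-1) @ v             # Softmax(...) V
```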
Optionally, the output feature map of the local global attention module is input into the enhanced feed-forward network. The enhanced feed-forward network uses parallel paths to obtain more feature information from nearby spatial pixel positions so as to extract fine-grained features, and introduces a residual edge to prevent gradient vanishing and gradient explosion. Specifically, the input feature map X is fed to the right branch, which passes through a convolution module with a 1×1 kernel and a depthwise convolution module with a 3×3 kernel, outputting feature map R. X is also fed to the left branch, which passes through a convolution module with a 1×1 kernel, a depthwise convolution module with a 3×3 kernel and the SiLU activation function, outputting feature map L. The output feature maps of the two branches, L and R, are then multiplied element-wise, outputting feature map M. Finally, M is input into a 1×1 convolution module that reduces the number of channels, and the result is added element-wise to X through the residual edge. The enhanced feed-forward network may be represented by the following formula:

F = X ⊕ Conv1×1( SiLU(DWConv3×3(Conv1×1(X))) ⊗ DWConv3×3(Conv1×1(X)) )

where X denotes the input feature map; F denotes the output feature map; ⊕ denotes element-wise addition; ⊗ denotes element-wise multiplication; SiLU denotes the activation function; DWConv3×3 denotes a depthwise convolution with a 3×3 kernel; and Conv1×1 denotes a convolution with a 1×1 kernel.
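A minimal sketch of the enhanced feed-forward network as reconstructed above: two parallel Conv1×1 → DWConv3×3 paths, one gated by SiLU, multiplied element-wise, reduced by a 1×1 convolution, with a residual edge. The hidden width is an assumption.

```python
import torch.nn as nn

class EnhancedFFN(nn.Module):
    def __init__(self, c, hidden=None):
        super().__init__()
        h = hidden or 2 * c                              # expansion ratio assumed
        def path():
            return nn.Sequential(
                nn.Conv2d(c, h, 1),                      # 1x1 convolution
                nn.Conv2d(h, h, 3, 1, 1, groups=h),      # 3x3 depthwise convolution
            )
        self.left = nn.Sequential(path(), nn.SiLU())     # gated path
        self.right = path()
        self.out = nn.Conv2d(h, c, 1)                    # 1x1: reduce channel number
    def forward(self, x):
        return x + self.out(self.left(x) * self.right(x))   # residual edge
```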
In the present embodiment, feature map P4 is input into the mixed attention module, which comprises a channel attention module and a spatial attention module in parallel. Specifically, the channel attention module includes a convolution module with a 1×1 kernel, average pooling, max pooling and a Sigmoid activation function. Feature map P4 is input into the channel attention module. First, max pooling and average pooling operations are used to obtain rich feature representations: P4 is input to global max pooling, outputting feature map U1, and to global average pooling, outputting feature map U2. The pooled output feature maps U1 and U2 are spliced by the Concat function, outputting feature map U3. U3 is input into a convolution module with a 1×1 kernel that reduces the number of channels, outputting feature map U4, from which a weight matrix is obtained through the Sigmoid activation function. The weight matrix is multiplied element-wise with the original input feature map P4, outputting feature map F_c. The channel attention module may be represented by the following formula:

F_c = σ(Conv1×1(Concat(MaxPool(F), AvgPool(F)))) ⊗ F

where σ denotes the Sigmoid activation function; Concat denotes the splicing operation; MaxPool denotes max pooling; AvgPool denotes average pooling; F denotes the input feature map; ⊗ denotes element-wise multiplication; and Conv1×1 denotes a convolution with a 1×1 kernel.
Optionally, feature map P4 is also input into the spatial attention module. P4 first passes through a convolution module with a 1×1 kernel, outputting feature map V1; V1 is input into a first convolution module with a 3×3 kernel, outputting feature map V2; V2 is input into a second convolution module with a 3×3 kernel, outputting feature map V3; and V3 is multiplied element-wise with the input feature map P4, outputting feature map F_s. The spatial attention module gathers the pixels within the target, enhances local features and improves the feature characterization capability, so that target objects of different depths are predicted better. The spatial attention module may be represented by the following formula:

F_s = Conv3×3(Conv3×3(Conv1×1(F))) ⊗ F

where Conv1×1 denotes a 1×1 convolution module, Conv3×3 denotes a 3×3 convolution module, ⊗ denotes element-wise multiplication, and F denotes the input feature map of the spatial attention module.
Optionally, the output feature map F_c of the channel attention module and the output feature map F_s of the spatial attention module are spliced by the Concat function, and the spliced feature map is input into a convolution module with a 1×1 kernel, outputting the feature map C6 of the feature enhancement module:

C6 = Conv1×1(Concat(F_c, F_s))

where Conv1×1 denotes a 1×1 convolution module and Concat denotes the splicing operation.
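A minimal sketch of the mixed attention module per the formulas above: parallel channel and spatial attention branches whose outputs are spliced and fused by a 1×1 convolution. The channel counts are assumptions.

```python
import torch
import torch.nn as nn

class MixedAttention(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.ca_conv = nn.Conv2d(2 * c, c, 1)     # channel branch: reduce channels
        self.sa = nn.Sequential(                  # spatial branch: 1x1 then two 3x3
            nn.Conv2d(c, c, 1),
            nn.Conv2d(c, c, 3, 1, 1),
            nn.Conv2d(c, c, 3, 1, 1),
        )
        self.fuse = nn.Conv2d(2 * c, c, 1)        # 1x1 fusion after Concat
    def forward(self, f):
        # channel attention: global max/avg pooling -> Concat -> 1x1 -> Sigmoid
        mx = torch.amax(f, dim=(2, 3), keepdim=True)
        av = torch.mean(f, dim=(2, 3), keepdim=True)
        w = torch.sigmoid(self.ca_conv(torch.cat([mx, av], dim=1)))
        f_c = f * w                               # weight matrix times input, per element
        f_s = f * self.sa(f)                      # spatial attention output
        return self.fuse(torch.cat([f_c, f_s], dim=1))
```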
In step S3, optionally, the feature maps output at different depths by the residual feature extraction network are input into the cascading multilayer feature fusion network, which comprises a first cross feature fusion unit, a second cross feature fusion unit, a third cross feature fusion unit, a fourth cross feature fusion unit, a fifth cross feature fusion unit, a sixth cross feature fusion unit, a seventh cross feature fusion unit, an eighth cross feature fusion unit, a ninth cross feature fusion unit, a tenth cross feature fusion unit and an eleventh cross feature fusion unit.
The first cross feature fusion unit comprises a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; its input feature map (identified in FIG. 8) is input to the first cross feature fusion unit, which outputs feature map m1.

The second cross feature fusion unit comprises a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; its input feature map is input to the second cross feature fusion unit, which outputs feature map m2.

The third cross feature fusion unit comprises a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; its input feature map is input to the third cross feature fusion unit, which outputs feature map m3.

The fourth cross feature fusion unit comprises a convolution layer and the C3 module of the YOLOv5 network; its input feature map is input to the fourth cross feature fusion unit, which outputs feature map m4.

The fifth cross feature fusion unit comprises a residual feature extraction module and the C3 module of the YOLOv5 network; the two corresponding feature maps (see FIG. 8) are spliced by the Concat function, and the spliced feature map is input to the fifth cross feature fusion unit, which outputs feature map m5.

The sixth cross feature fusion unit comprises a residual feature extraction module and the C3 module of the YOLOv5 network; the three corresponding feature maps are spliced by the Concat function, and the spliced feature map is input to the sixth cross feature fusion unit, which outputs feature map m6.

The seventh cross feature fusion unit comprises a residual feature extraction module and the C3 module of the YOLOv5 network; the three corresponding feature maps are spliced by the Concat function, and the spliced feature map is input to the seventh cross feature fusion unit, which outputs feature map m7.

The eighth cross feature fusion unit comprises a convolution layer and the C3 module of the YOLOv5 network; the two corresponding feature maps are spliced by the Concat function, and the spliced feature map is input to the eighth cross feature fusion unit, which outputs feature map m8.

The ninth cross feature fusion unit comprises a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; the two corresponding feature maps are spliced by the Concat function, and the spliced feature map is input to the ninth cross feature fusion unit, which outputs feature map f1.

The tenth cross feature fusion unit comprises a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; the two corresponding feature maps and feature map f1 are spliced by the Concat function, and the spliced feature map is input to the tenth cross feature fusion unit, which outputs feature map f2.

The eleventh cross feature fusion unit comprises a convolution layer and the C3 module of the YOLOv5 network; the two corresponding feature maps and feature map f2 are spliced by the Concat function, and the spliced feature map is input to the eleventh cross feature fusion unit, which outputs feature map f3 (a structural sketch of one such unit follows below).
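A hedged sketch of one upsampling cross feature fusion unit (convolution layer + upsampling layer + C3-style block). The exact wiring between the eleven units follows FIG. 8 and is not reproduced here; C3Like is a simplified stand-in for the YOLOv5 C3 module, not its exact implementation.

```python
import torch
import torch.nn as nn

class C3Like(nn.Module):
    """Simplified CSP-style block standing in for the YOLOv5 C3 module."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c = c_out // 2
        self.cv1 = nn.Conv2d(c_in, c, 1)
        self.cv2 = nn.Conv2d(c_in, c, 1)
        self.m = nn.Sequential(nn.Conv2d(c, c, 1), nn.SiLU(),
                               nn.Conv2d(c, c, 3, 1, 1), nn.SiLU())
        self.cv3 = nn.Conv2d(2 * c, c_out, 1)
    def forward(self, x):
        return self.cv3(torch.cat([self.m(self.cv1(x)), self.cv2(x)], dim=1))

class UpFusionUnit(nn.Module):
    """Convolution layer + 2x upsampling + C3-style block (units 1-3, 9, 10)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.c3 = C3Like(c_out, c_out)
    def forward(self, x):
        return self.c3(self.up(self.conv(x)))
```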
In step S4, optionally, the target recognition network for liquid crystal display surface defect detection comprises a multi-branch feature fusion module and the detection head of the YOLOv5 network. The detection head of the YOLOv5 network comprises a convolution module with a 1×1 kernel and a Sigmoid activation function.
Optionally, in the multi-branch feature fusion module, feature map f2 is upsampled two-fold and feature map f1 is upsampled four-fold, and feature maps f3, Up×2(f2) and Up×4(f1) are spliced by the Concat function, outputting feature map F1. Feature map f3 is downsampled two-fold and feature map f1 is upsampled two-fold, and Down×2(f3), f2 and Up×2(f1) are spliced by the Concat function, outputting feature map F2. Feature map f3 is downsampled four-fold and feature map f2 is downsampled two-fold, and Down×4(f3), Down×2(f2) and f1 are spliced by the Concat function, outputting feature map F3. F1, F2 and F3 each pass through a 1×1 convolution module that reduces the number of channels, outputting feature maps F1′, F2′ and F3′. Next, F3′ is upsampled four-fold by nearest-neighbor interpolation and F2′ is upsampled two-fold by nearest-neighbor interpolation, so that they retain the same feature size as F1′, and F1′, Up×2(F2′) and Up×4(F3′) are spliced by the Concat function, outputting feature map f. Finally, feature map f is input into a 1×1 pointwise convolution module, then a 3×3 depthwise convolution module, then a 1×1 pointwise convolution module, and a connection with feature map f is established for element-wise addition, outputting feature map F. The multi-branch feature fusion module may be represented by the following formulas:
First, the feature maps f1, f2 and f3 are adjusted to uniform sizes and spliced:

F1 = Concat(f3, Up×2(f2), Up×4(f1))
F2 = Concat(Down×2(f3), f2, Up×2(f1))
F3 = Concat(Down×4(f3), Down×2(f2), f1)

where Concat denotes the splicing operation; Down×4 denotes four-fold downsampling; Down×2 denotes two-fold downsampling; Up×4 denotes four-fold upsampling; and Up×2 denotes two-fold upsampling.

Next, feature maps F2 and F3 are adjusted to the same size as feature map F1, and the three feature maps of equal size are spliced and fused:

f = Concat(Conv1×1(F1), Up×2(Conv1×1(F2)), Up×4(Conv1×1(F3)))

where Conv1×1 denotes a 1×1 convolution and Concat denotes the splicing operation.

Finally, refined features are extracted with the depthwise convolution module and the pointwise convolution module:

F = f ⊕ PWConv1×1(DWConv3×3(PWConv1×1(f)))

where PWConv1×1 denotes a 1×1 pointwise convolution, DWConv3×3 denotes a 3×3 depthwise convolution, and ⊕ denotes element-wise addition.
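A minimal sketch of the scale alignment in the multi-branch feature fusion module per the formulas above. Nearest-neighbor interpolation is stated for the upsampling; the use of max pooling for downsampling and the channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def up(x, s):    # nearest-neighbor upsampling by factor s
    return F.interpolate(x, scale_factor=s, mode="nearest")

def down(x, s):  # downsampling by factor s (max pooling assumed)
    return F.max_pool2d(x, kernel_size=s, stride=s)

class MultiBranchFusion(nn.Module):
    def __init__(self, c1, c2, c3, c):
        super().__init__()
        cat_c = c1 + c2 + c3                      # channels of each spliced group
        self.r1 = nn.Conv2d(cat_c, c, 1)          # 1x1: reduce channels of F1
        self.r2 = nn.Conv2d(cat_c, c, 1)          # 1x1: reduce channels of F2
        self.r3 = nn.Conv2d(cat_c, c, 1)          # 1x1: reduce channels of F3
        self.pw1 = nn.Conv2d(3 * c, 3 * c, 1)                     # pointwise
        self.dw = nn.Conv2d(3 * c, 3 * c, 3, 1, 1, groups=3 * c)  # depthwise
        self.pw2 = nn.Conv2d(3 * c, 3 * c, 1)                     # pointwise
    def forward(self, f1, f2, f3):
        # f3 has the largest spatial size, f1 the smallest (see the text above)
        g1 = self.r1(torch.cat([f3, up(f2, 2), up(f1, 4)], dim=1))
        g2 = self.r2(torch.cat([down(f3, 2), f2, up(f1, 2)], dim=1))
        g3 = self.r3(torch.cat([down(f3, 4), down(f2, 2), f1], dim=1))
        f = torch.cat([g1, up(g2, 2), up(g3, 4)], dim=1)   # align to f3's size
        return f + self.pw2(self.dw(self.pw1(f)))          # residual edge
```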
In some embodiments, step S4 further includes inputting the output feature map F of the multi-branch feature fusion module into the detection head of the YOLOv5 network in the target recognition network to obtain the final feature image p, which contains the defect type, defect position and confidence information of the liquid crystal display.
In step S5, optionally, the residual feature extraction network, the cascade multi-layer feature fusion network, and the target recognition network are sequentially connected to form a liquid crystal display defect detection model, and the training set of the divided data set is put into the liquid crystal display defect detection model for training.
In some embodiments, step S5 further includes: SGD is selected as the optimizer for training, the input image size is initialized to 640×640, training runs for 300 rounds, and 32 images are trained per batch.
In some embodiments, step S5 further comprises a multi-scale perception loss L_size used during training to balance the loss values of defects of different sizes in the liquid crystal display, which can be expressed by the following formula:

L_size = Q · (1 − IoU + ρ²(A, B)/c² + (w_g − W)²/C_w² + (h_g − H)²/C_h²)

where IoU is the ratio of the intersection of the areas of the predicted frame and the real frame to the union of those areas; ρ(A, B) denotes the Euclidean distance between the center points of the predicted frame and the real frame, A and B being the center points of the predicted frame and the real frame, respectively; c denotes the diagonal length of the smallest bounding box enclosing the real and predicted frames; w_g and h_g are the width and height of the real frame; W and H are the width and height of the predicted frame; C_w and C_h are the width and height of the smallest rectangle enclosing the predicted and real frames; Q is a penalty term that explicitly perceives defect objects of different scales and is used to increase the network's attention to objects of different scales, computed from the area x of the defect target, the area S of the image region where the defect target is located, the maximum real-frame area S_max, the minimum real-frame area S_min and a constant A > 0 that adjusts the curvature of the function.
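A hedged sketch of the multi-scale perception loss reconstructed above. The regression term follows the stated definitions (IoU, center distance ρ, enclosing-box diagonal c, width/height terms); the penalty Q is not fully recoverable from the text, so q_penalty below is a labeled placeholder that only uses the stated ingredients (defect area x, extreme real-frame areas S_max and S_min, constant A > 0).

```python
import torch

def multiscale_loss(pred, gt, s_max, s_min, A=1.0, eps=1e-7):
    # pred, gt: (N, 4) boxes as (x1, y1, x2, y2)
    ix1 = torch.max(pred[:, 0], gt[:, 0]); iy1 = torch.max(pred[:, 1], gt[:, 1])
    ix2 = torch.min(pred[:, 2], gt[:, 2]); iy2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    W, H = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wg, hg = gt[:, 2] - gt[:, 0], gt[:, 3] - gt[:, 1]
    iou = inter / (W * H + wg * hg - inter + eps)
    # smallest enclosing box of the predicted and real frames
    cw = torch.max(pred[:, 2], gt[:, 2]) - torch.min(pred[:, 0], gt[:, 0])
    ch = torch.max(pred[:, 3], gt[:, 3]) - torch.min(pred[:, 1], gt[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps                      # squared diagonal length c^2
    rho2 = ((pred[:, 0] + pred[:, 2] - gt[:, 0] - gt[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - gt[:, 1] - gt[:, 3]) ** 2) / 4
    reg = (1 - iou + rho2 / c2
           + (wg - W) ** 2 / (cw ** 2 + eps)
           + (hg - H) ** 2 / (ch ** 2 + eps))
    x = wg * hg                                       # area of the defect target
    return q_penalty(x, s_max, s_min, A) * reg

def q_penalty(x, s_max, s_min, A):
    # Placeholder for Q: any smooth, monotone weighting of the normalised
    # defect area with curvature controlled by A fits the description.
    t = (x - s_min) / (s_max - s_min + 1e-7)
    return (2.0 - t.clamp(0, 1)) ** A                 # assumed form: smaller defects weigh more
```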
Optionally, step S5 further comprises calculating the total loss as L_total = L_size + L_cls, where L_cls is the cross-entropy loss function.
In step S6, optionally, an inference script for the liquid crystal display defect detection model is written, the weight file with the best result from training is loaded into the inference script, the inference script is integrated into an application program, and the application program is finally deployed on a local server.
Embodiment 2:
This embodiment provides a display screen defect detection system based on a cascading multilayer feature fusion network, comprising:
A data acquisition module configured to: acquiring an image of a liquid crystal display screen;
a detection module configured to: obtaining a display screen defect detection result according to the liquid crystal display screen image and a preset liquid crystal display screen defect detection model;
The liquid crystal display defect detection model comprises a residual feature extraction network for extracting image features, a cascading multilayer feature fusion network for fusing shallow fine-grained information and deep semantic information in the image, and a target recognition network for determining defect type, position and confidence information. In the residual feature extraction network, a depthwise convolution module and a pointwise convolution module are used to capture fine-grained features in the image while reducing the number of model parameters, and both the detail features and the overall structure of liquid crystal display defects are considered during feature extraction. The cascading multilayer feature fusion network takes the outputs of different modules of the residual feature extraction network as simultaneous inputs and introduces the fine-grained features extracted by the shallower layers of the network.
The operation method of the system is the same as the display screen defect detection method based on the cascade multilayer feature fusion network in embodiment 1, and will not be described here again.
Embodiment 3:
The present embodiment provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the steps of the display screen defect detection method based on the cascaded multi-layer feature fusion network described in embodiment 1 are implemented when the processor executes the program.
Embodiment 4:
The present embodiment provides a computer program product, which includes a computer program, where the steps of the display screen defect detection method based on the cascaded multi-layer feature fusion network described in embodiment 1 are implemented when the computer program is executed by a processor.
The above description is only a preferred embodiment of the invention and is not intended to limit it; various modifications and variations can be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the invention shall be included in its scope of protection.

Claims (10)

1. The display screen defect detection method based on the cascade multilayer feature fusion network is characterized by comprising the following steps of:
Acquiring an image of a liquid crystal display screen;
obtaining a display screen defect detection result according to the liquid crystal display screen image and a preset liquid crystal display screen defect detection model;
The liquid crystal display defect detection model comprises a residual feature extraction network for extracting image features, a cascading multilayer feature fusion network for fusing shallow fine-grained information and deep semantic information in the image, and a target recognition network for determining defect type, position and confidence information; in the residual feature extraction network, a depthwise convolution module and a pointwise convolution module are used to capture fine-grained features in the image while reducing the number of model parameters, and both the detail features and the overall structure of liquid crystal display defects are considered during feature extraction; and the cascading multilayer feature fusion network takes the outputs of different modules of the residual feature extraction network as simultaneous inputs and introduces the fine-grained features extracted by the shallower layers of the network.
2. The display screen defect detection method based on the cascading multilayer feature fusion network according to claim 1, wherein the residual feature extraction network comprises a convolution module, a first residual feature extraction module, a second residual feature extraction module, a third residual feature extraction module, a fourth residual feature extraction module and a feature enhancement module;
The convolution module comprises a convolution layer with a convolution kernel size of 3 multiplied by 3, batch normalization and Silu activation functions; the first residual feature extraction module, the second residual feature extraction module, the third residual feature extraction module and the fourth residual feature extraction module comprise three convolution modules with convolution kernel sizes of 1×1, three depth convolution modules with convolution kernel sizes of 3×3, a point-by-point convolution module with convolution kernel sizes of 1×1 and a convolution module with convolution kernel sizes of 3×3; the feature enhancement module comprises three convolution modules with convolution kernel sizes of 1×1, a high-efficiency self-attention module and a mixed-attention module.
3. The display screen defect detection method based on the cascading multilayer feature fusion network according to claim 2, wherein, in the residual feature extraction module, a convolution module with a 3×3 kernel and a stride of 2 reduces the feature map size, and the size-reduced feature map is divided evenly along the channel dimension; the divided feature maps respectively pass through a convolution module with a 1×1 kernel that transforms the channel dimension to half of the output channels, through a batch normalization layer and the SiLU activation function that balance the feature scales to alleviate gradient vanishing, and through a depthwise convolution module with a 3×3 kernel and a convolution module with a 1×1 kernel that extract fine-grained features; the output feature maps of the two paths are spliced by the Concat function and then input into a depthwise convolution module with a 3×3 kernel and a pointwise convolution module with a 1×1 kernel, so that the channel dimension is reduced;
in the feature enhancement module, one branch uses a convolution module with a 1×1 kernel as a residual edge to prevent gradient vanishing; the other branch is input into a convolution module with a 1×1 kernel, and the resulting feature map is input into the local global attention module, the correlation between defect targets being enhanced by the continual learning of local global attention; the output feature maps of the two branches are then spliced by the Concat function and input into the mixed attention module, which is used to highlight the features in the image most relevant to liquid crystal display defect detection while suppressing unimportant feature information.
4. The display screen defect detection method based on the cascading multilayer feature fusion network according to claim 2, wherein the efficient self-attention module comprises a local global attention module and an enhanced feed-forward network; in the local global attention module, the input is passed through a batch normalization layer, and the left branch undergoes window self-attention processing to obtain the local features of the image; the right branch is processed with an adaptive patch sampling operation to obtain global features, specifically, the feature map is input to a global max pooling layer, the pooled feature map is input to a global self-attention layer that obtains a global window and provides a fixed number of tokens, and the result is input into an upsampling layer; the resulting local and global feature maps are multiplied element by element;
the mixed attention module comprises a channel attention module and a spatial attention module; the channel attention module comprises a convolution module with a 1×1 kernel, average pooling, max pooling and a Sigmoid activation function; the spatial attention module comprises a convolution module with a 1×1 kernel and two convolution modules with 3×3 kernels.
5. The display screen defect detection method based on the cascading multilayer feature fusion network according to claim 1, wherein the cascading multilayer feature fusion network comprises a first cross feature fusion unit, a second cross feature fusion unit, a third cross feature fusion unit, a fourth cross feature fusion unit, a fifth cross feature fusion unit, a sixth cross feature fusion unit, a seventh cross feature fusion unit, an eighth cross feature fusion unit, a ninth cross feature fusion unit, a tenth cross feature fusion unit and an eleventh cross feature fusion unit; the first, second and third cross feature fusion units each comprise a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; the fourth cross feature fusion unit comprises a convolution layer and the C3 module of the YOLOv5 network; the fifth, sixth and seventh cross feature fusion units each comprise a residual feature extraction module and the C3 module of the YOLOv5 network; the eighth cross feature fusion unit comprises a convolution layer and the C3 module of the YOLOv5 network; the ninth and tenth cross feature fusion units each comprise a convolution layer, an upsampling layer and the C3 module of the YOLOv5 network; and the eleventh cross feature fusion unit comprises a convolution layer and the C3 module of the YOLOv5 network.
6. The display screen defect detection method based on the cascading multi-layer feature fusion network according to claim 1, wherein the target recognition network comprises a multi-branch feature fusion module and a detection head of the YOLOV network; the multi-branch feature fusion module comprises three convolution modules with 1×1 convolution kernels, an up-sampling module, a point-by-point convolution module with a 1×1 convolution kernel and a depth convolution module with a 3×3 convolution kernel; the detection head of the YOLOV network comprises a convolution module with a 1×1 convolution kernel and a Sigmoid activation function.
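A minimal sketch of the multi-branch feature fusion module; the three-scale input layout and the summation fusion rule are assumptions not stated in the claim:

```python
import torch.nn as nn

class MultiBranchFeatureFusion(nn.Module):
    # Three 1x1 convs align channel widths of three input scales; the two
    # coarser maps are up-sampled to the finest resolution, the branches are
    # summed, then refined by a 1x1 pointwise and a 3x3 depthwise conv.
    def __init__(self, chs, out_ch):
        super().__init__()
        self.align = nn.ModuleList(nn.Conv2d(c, out_ch, kernel_size=1) for c in chs)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.pointwise = nn.Conv2d(out_ch, out_ch, kernel_size=1)
        self.depthwise = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                                   padding=1, groups=out_ch)

    def forward(self, p3, p4, p5):
        a3 = self.align[0](p3)                    # finest scale, kept as-is
        a4 = self.up(self.align[1](p4))           # one 2x up-sample
        a5 = self.up(self.up(self.align[2](p5)))  # two 2x up-samples
        return self.depthwise(self.pointwise(a3 + a4 + a5))
```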
7. The display screen defect detection method based on the cascading multi-layer feature fusion network according to claim 1, wherein the multi-scale perception loss L in the training process is expressed as:

L = Q·(1 − IoU + ρ²(A,B)/c² + ρ²(W,W_gt)/C_w² + ρ²(H,H_gt)/C_h²);

wherein IoU is the ratio of the intersection of the areas of the predicted frame and the real frame to the union of those areas; ρ(A,B) represents the Euclidean distance between the center points of the predicted frame and the real frame, A and B being the center points of the predicted frame and the real frame respectively; c represents the diagonal length of the smallest bounding box enclosing the real frame and the predicted frame; W_gt and H_gt are the width and height of the real frame; W and H are the width and height of the predicted frame; C_w and C_h are respectively the width and height of the minimum enclosing rectangle of the predicted frame and the real frame; Q is a penalty term that perceives defect objects of markedly different scales, used to increase the network's attention to objects of different scales; Q is defined as a function of S_max, the maximum value of the real frame area, S_min, the minimum value of the real frame area, S, the area of the image region where the defect target is located, and x, the area of the defect target, where a is a constant with a > 0 that adjusts the curvature of the function.
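By way of illustration, a hedged PyTorch sketch of this loss: the bracketed term follows the EIoU-style decomposition implied by the symbol definitions above, and since the exact closed form of Q is not recoverable from the text, Q is instantiated here as a hypothetical exponential of the defect area x:

```python
import torch

def multiscale_perception_loss(pred, gt, s_max, s_min, s_img, a=1.0):
    """pred, gt: (N, 4) boxes as (x1, y1, x2, y2). Returns per-box loss."""
    eps = 1e-9
    # IoU term.
    ix1 = torch.maximum(pred[:, 0], gt[:, 0])
    iy1 = torch.maximum(pred[:, 1], gt[:, 1])
    ix2 = torch.minimum(pred[:, 2], gt[:, 2])
    iy2 = torch.minimum(pred[:, 3], gt[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    iou = inter / (area_p + area_g - inter + eps)

    # Enclosing-rectangle width C_w and height C_h; c^2 = C_w^2 + C_h^2.
    cw = torch.maximum(pred[:, 2], gt[:, 2]) - torch.minimum(pred[:, 0], gt[:, 0])
    ch = torch.maximum(pred[:, 3], gt[:, 3]) - torch.minimum(pred[:, 1], gt[:, 1])

    # rho^2(A, B) / c^2: normalized center-point distance.
    dx = (pred[:, 0] + pred[:, 2] - gt[:, 0] - gt[:, 2]) / 2
    dy = (pred[:, 1] + pred[:, 3] - gt[:, 1] - gt[:, 3]) / 2
    center = (dx**2 + dy**2) / (cw**2 + ch**2 + eps)

    # Width/height penalties over C_w^2 and C_h^2.
    w_pen = ((pred[:, 2] - pred[:, 0]) - (gt[:, 2] - gt[:, 0]))**2 / (cw**2 + eps)
    h_pen = ((pred[:, 3] - pred[:, 1]) - (gt[:, 3] - gt[:, 1]))**2 / (ch**2 + eps)

    # Hypothetical scale penalty Q: weights small defects more heavily,
    # using the real-frame area range and the defect area x = area_g.
    q = 1 + (s_max - s_min) / s_img * torch.exp(-area_g / a)

    return q * (1 - iou + center + w_pen + h_pen)
```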
8. A display screen defect detection system based on a cascading multi-layer feature fusion network, characterized by comprising:
a data acquisition module configured to: acquire a liquid crystal display screen image;
a detection module configured to: obtain a display screen defect detection result according to the liquid crystal display screen image and a preset liquid crystal display screen defect detection model;
wherein the liquid crystal display screen defect detection model comprises a residual feature extraction network for extracting image features, a cascading multi-layer feature fusion network for fusing shallow fine-granularity information and deep semantic information in an image, and a target recognition network for determining defect type, position and confidence information; in the residual feature extraction network, a depth convolution module and a point-by-point convolution module are used to capture fine-granularity features in the image while reducing the number of model parameters, and both the detail features and the overall structure of liquid crystal display screen defects are taken into account when features are extracted; the cascading multi-layer feature fusion network simultaneously takes the outputs of different modules in the residual feature extraction network as inputs, thereby introducing the fine-granularity features extracted by the shallow network.
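By way of illustration, a minimal skeleton of the claimed module split (class and method names are hypothetical):

```python
class DataAcquisitionModule:
    """Acquires liquid crystal display screen images, e.g. from a line camera."""

    def acquire(self):
        raise NotImplementedError  # hook up to the actual image source

class DetectionModule:
    """Runs a preset LCD defect detection model over an acquired image."""

    def __init__(self, model):
        self.model = model  # residual extraction + cascaded fusion + recognition

    def detect(self, image):
        # Returns defect classes, positions and confidence information.
        return self.model(image)
```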
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the display screen defect detection method based on the cascading multi-layer feature fusion network according to any one of claims 1-7.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the display screen defect detection method based on the cascading multi-layer feature fusion network according to any one of claims 1-7.
CN202410578270.4A 2024-05-11 2024-05-11 Display screen defect detection method and system based on cascading multilayer feature fusion network Active CN118154603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410578270.4A CN118154603B (en) 2024-05-11 2024-05-11 Display screen defect detection method and system based on cascading multilayer feature fusion network


Publications (2)

Publication Number Publication Date
CN118154603A true CN118154603A (en) 2024-06-07
CN118154603B CN118154603B (en) 2024-07-23

Family

ID=91287206



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230298152A1 (en) * 2022-03-16 2023-09-21 Nanjing University Of Aeronautics And Astronautics Method for analyzing minor defect based on progressive segmentation network
CN116630626A * 2023-06-05 2023-08-22 Jilin Agricultural Science and Technology University Connected double-attention multi-scale fusion semantic segmentation network
CN117132584A * 2023-09-22 2023-11-28 Shandong Computer Science Center (National Supercomputing Center in Jinan) Liquid crystal display screen flaw detection method and device based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAI, Y. et al.: "F-GAN: A fusion algorithm for surface defect detection based on generative adversarial network", 2023 8th International Conference on Intelligent Computing and Signal Processing, 31 December 2023 (2023-12-31) *
CHEN, Kun et al.: "Application of improved Faster RCNN in surface defect detection of aluminum profiles", Journal of China Jiliang University, no. 02, 15 June 2020 (2020-06-15) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant