CN117593244A - Film product defect detection method based on improved attention mechanism - Google Patents
- Publication number
- CN117593244A CN117593244A CN202311318982.4A CN202311318982A CN117593244A CN 117593244 A CN117593244 A CN 117593244A CN 202311318982 A CN202311318982 A CN 202311318982A CN 117593244 A CN117593244 A CN 117593244A
- Authority
- CN
- China
- Prior art keywords
- film product
- defects
- attention mechanism
- image
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of defect detection and provides a film product defect detection method based on an improved attention mechanism. Step 1 specifically comprises: 1) acquiring film product images with an industrial camera, and shooting film product defect images in the corresponding state by adjusting the height and focal length of the camera; 2) extracting pictures from the video shot by the industrial camera at a suitable frame interval and storing them in a computer, then preprocessing the acquired images to improve the effect of subsequent processing. Step 2 specifically comprises: marking the position and size of the enhanced film product defect images with the labelimg image annotation tool. Existing film product defect detection methods mainly depend on manual visual inspection or traditional image processing algorithms; by avoiding the subjectivity and fatigue of manual visual inspection and the low detection accuracy of traditional image processing algorithms under complex backgrounds, the technical scheme brings a better detection effect to film product inspection.
Description
Technical Field
The invention relates to the technical field of defect detection, in particular to a film product defect detection method based on an improved attention mechanism.
Background
Film products are widely used in many fields such as electronic devices, optical devices, etc.; in the course of manufacturing film products, the presence of defects may lead to a decrease in product quality, and thus an efficient and accurate method for detecting defects in film products is required.
By searching, the existing granted patent CN109829893B discloses a defect target detection method based on an attention mechanism; that method increases the weight of the defect region through the attention mechanism, thereby improving defect detection precision. Its classification and regression method for industrial product surface defects can be applied to other surface defect detection frameworks to improve detection precision, and has strong universality.
However, the above detection method is not aimed at film products and its detection effect on them is poor. Existing film product defect detection methods mainly depend on manual visual inspection or traditional image processing algorithms, which have limitations such as the subjectivity and fatigue of manual visual inspection and the low detection accuracy of traditional image processing algorithms under complex backgrounds. For this reason, we propose a film product defect detection method based on an improved attention mechanism.
Disclosure of Invention
The invention provides a film product defect detection method based on an improved attention mechanism, which addresses the problem that existing film product defect detection methods in the related art mainly depend on manual visual inspection or traditional image processing algorithms; these methods have limitations such as the subjectivity and fatigue of manual visual inspection and the low detection accuracy of traditional image processing algorithms under complex backgrounds.
The technical scheme of the invention is as follows:
a film product defect detection method based on an improved attention mechanism, comprising the steps of:
step 1: collecting and processing data;
step 2: marking data;
step 3: detecting product defects;
step 4: calculating model improvement;
step 5: analyzing and generating a report;
the step 3 specifically comprises the following steps: 1) Performing target detection on the image by utilizing a YOLOv7 target detection algorithm, and identifying a defect target in the film product;
inputting the preprocessed data into a YOLOv7 network; extracting a characteristic representation of the input data through a plurality of convolution layers to capture key information in the film product image; performing YOLOv7 model training;
binary Cross Entropy is used, i.e. a two-class cross entropy loss; the following formula is given:
in the above formula, y is a real label, and p is a predicted value.
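Purely as an illustrative sketch (not part of the claimed scheme), the two-class cross entropy loss named above can be computed as follows; the function name and the clipping epsilon are assumptions added for numerical safety:

```python
import math

def binary_cross_entropy(y, p, eps=1e-7):
    """Two-class cross entropy: y is the real label (0 or 1), p the predicted value."""
    p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct prediction yields a small loss, a confident wrong one a large loss.
loss_good = binary_cross_entropy(1, 0.9)
loss_bad = binary_cross_entropy(1, 0.1)
```

During training this per-prediction loss would be averaged over all predictions in a batch.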
As a further technical scheme of the present invention, the step 1 specifically includes:
1) Acquiring a film product image by using an industrial camera, and shooting a film product defect image in a corresponding state by adjusting the height and focal length of the industrial camera;
2) Extracting pictures from the video shot by the industrial camera at a suitable frame interval and storing them in a computer; preprocessing the acquired images to improve the effect of subsequent processing.
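As an illustrative sketch of the preprocessing described above (rotation, turnover, cropping and saturation change), the NumPy code below applies each operation to an image array; the parameter names and the luminance-based saturation formula are assumptions added here, not details from the patent:

```python
import numpy as np

def augment(img, angle_quarters=1, flip_axis=0, crop=8, saturation=1.3):
    """Illustrative preprocessing: rotate, flip, crop and change color saturation.
    img is an HxWx3 float array in [0, 1]; all parameter names are assumptions."""
    out = np.rot90(img, k=angle_quarters)          # rotate in 90-degree steps
    out = np.flip(out, axis=flip_axis)             # vertical/horizontal turnover
    out = out[crop:-crop, crop:-crop, :]           # central crop
    gray = out.mean(axis=2, keepdims=True)         # per-pixel luminance proxy
    out = np.clip(gray + saturation * (out - gray), 0.0, 1.0)  # scale saturation
    return out

img = np.random.default_rng(0).random((64, 64, 3))
aug = augment(img)
```

Each augmented variant would be stored alongside the original picture to enlarge the training data.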
As a further technical scheme of the present invention, step 2 specifically includes: and marking the positions and the sizes of the enhanced film product defect images by using a labelimg image marking tool, storing tag information in txt files with the same name as the pictures, and dividing all the pictures and marked tags into a training set, a verification set and a test set according to a proper proportion after marking work of all the pictures is completed.
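The division of pictures and labels into training, verification and test sets can be sketched as follows; the 7:2:1 ratio follows the scheme above, while the function name and fixed seed are illustrative assumptions:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=0):
    """Divide picture/label pairs into training, verification and test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle before splitting
    n = len(items)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_dataset([f"img_{i:04d}.txt" for i in range(100)])
```

With 100 annotated pictures and a 7:2:1 ratio this yields 70, 20 and 10 files respectively.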
As a further technical solution of the present invention, step 4 specifically includes:
YOLOv7 model improvement step: improving the YOLOv7 model in the step 3; introducing an SE attention mechanism, and weighting and enhancing the characteristics of the convolution layer through compression, excitation and scaling stages to enhance the attention capability of a defect region;
step 4-1, squeeze stage: carrying out global average pooling operation on the output feature map of the convolution layer to obtain a global description vector of the channel;
the formula is: F_s = (1/(H·W))·Σ_{i=1}^{H} Σ_{j=1}^{W} F_ij;
mapping the compressed feature vector into a feature representation with higher dimension through a full connection layer;
the formula is: Z = σ(W_1·F_s + b_1);
Step 4-2, excitation stage: performing activation function processing on the output of the full connection layer, and introducing nonlinearity;
the formula is: A = σ(W_2·Z + b_2);
Mapping the activated feature vector into a weight vector with the same dimension as the input feature map;
the formula is: S = σ(W_3·A + b_3);
Step 4-3, scale stage: multiplying the weight vector of the channel with the input feature map element by element, and scaling the features of each channel;
the formula is: F_sc = S·F;
in the above formulas, F represents the output feature map of the convolution layer, H and W represent the height and width of the feature map respectively, F_ij represents the element in the i-th row and j-th column of the feature map, F_s represents the compressed feature vector, Z represents the output of the fully connected layer, A represents the activated feature vector, S represents the weight vector, and F_sc represents the feature map scaled by the SE attention mechanism.
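The squeeze, excitation and scale stages above can be sketched with NumPy as follows; this is a minimal illustration of the standard Squeeze-and-Excitation structure with two fully connected layers and randomly initialized stand-in weights, not the patent's exact configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(F, W1, b1, W2, b2):
    """Squeeze-and-Excitation over a CxHxW feature map F (weights are stand-ins).
    Squeeze: global average pooling -> F_s; Excitation: two FC layers -> weights S;
    Scale: each channel of F is multiplied by its channel weight."""
    F_s = F.mean(axis=(1, 2))                # squeeze: global description vector, shape (C,)
    Z = np.maximum(0.0, W1 @ F_s + b1)       # first FC + ReLU (channel reduction)
    S = sigmoid(W2 @ Z + b2)                 # second FC + sigmoid -> weights in (0, 1)
    return S[:, None, None] * F              # scale: element-wise channel re-weighting

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # r is the assumed reduction ratio
F = rng.standard_normal((C, H, W))
F_sc = se_block(F, rng.standard_normal((C // r, C)), np.zeros(C // r),
                rng.standard_normal((C, C // r)), np.zeros(C))
```

Because the sigmoid keeps every channel weight strictly between 0 and 1, the scaled map F_sc never amplifies a channel, only suppresses the less informative ones.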
As a further technical solution of the present invention, step 5 specifically includes: mapping the features to a detection layer by utilizing a convolution and full connection layer, and generating information such as the position, the size, the confidence score, the category probability and the like of a prediction boundary box; performing non-maximum suppression on the detected boundary frames to remove redundant and overlapped boundary frames, and reserving the boundary frame with the highest confidence; and classifying and positioning defects according to the detected bounding box, and determining the types and positions of the defects in the film product.
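The non-maximum suppression described above can be sketched as follows; the IoU threshold of 0.5 and all names are illustrative assumptions, not values from the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, thr=0.5):
    """Keep the highest-confidence box and drop boxes overlapping it above thr."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thr]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # -> [0, 2]
```

Here the second box overlaps the first heavily (IoU ≈ 0.68) and is suppressed, while the distant third box is retained.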
As a further technical scheme of the invention, the industrial camera adopted in step 1 is a CCD camera, pictures are extracted from the video at a suitable frame interval and stored frame by frame, and the preprocessing of the acquired images consists of rotating, flipping, cropping and changing the color saturation of the original pictures.
As a further technical scheme of the invention, in step 2 the pictures and their labels are divided into the training set, the verification set and the test set at a ratio of 7:2:1 or 5:3:2.
As a further technical scheme of the invention, in step 3 the YOLOv7 network is built on a computer; the network automatically extracts features from the input data through a plurality of convolution layers, and the most suitable hyperparameters are selected through multiple YOLOv7 training experiments to find the optimal number of training rounds.
As a further technical scheme of the invention, in step 5 the film product defects are classified into design defects, raw material defects and manufacturing defects, and defect positioning determines the specific position of the section of the film product where the defect lies.
As a further technical scheme of the invention, in step 5, through multiple experiments, the most suitable hyperparameters are selected, the optimal number of training rounds is found, and finally a final detection report is generated by the computer.
The working principle and the beneficial effects of the invention are as follows:
1. The invention improves the accuracy and efficiency of film product defect detection by introducing an improved attention mechanism and combining image processing with deep learning technology; the method automatically focuses on potential defect regions, has strong universality and applicability, is suitable for detecting defects of different types of film products, has high detection precision, reduces the workload of management personnel, and avoids the subjectivity and fatigue of manual inspection.
Drawings
The invention will be described in further detail with reference to the drawings and the detailed description.
FIG. 1 is a flow schematic of the method of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, this embodiment proposes a film product defect detection method based on an improved attention mechanism.
In summary, a film product defect detection method based on an improved attention mechanism comprises the steps of:
the step 1 specifically comprises the following steps:
1) Acquiring a film product image by using an industrial camera, and shooting a film product defect image in a corresponding state by adjusting the height and focal length of the industrial camera;
2) Extracting pictures from the video shot by the industrial camera according to proper frames and storing the pictures in a computer; preprocessing the acquired image to improve the effect of subsequent processing.
The step 2 specifically comprises the following steps: and marking the positions and the sizes of the enhanced film product defect images by using a labelimg image marking tool, storing tag information in txt files with the same name as the pictures, and dividing all the pictures and marked tags into a training set, a verification set and a test set according to a proper proportion after marking work of all the pictures is completed.
The step 3 specifically comprises the following steps: 1) Performing target detection on the image by utilizing a YOLOv7 target detection algorithm, and identifying a defect target in the film product;
inputting the preprocessed data into a YOLOv7 network; extracting a characteristic representation of the input data through a plurality of convolution layers to capture key information in the film product image; performing YOLOv7 model training;
Binary Cross Entropy is used, i.e. a two-class cross entropy loss; the formula is as follows:
L = -[y·log(p) + (1 - y)·log(1 - p)];
in the above formula, y is the real label and p is the predicted value.
The industrial camera adopted in step 1 is a CCD camera; pictures are extracted from the video at a suitable frame interval and stored frame by frame, and the preprocessing of the acquired images consists of rotating, flipping, cropping and changing the color saturation of the original pictures. In step 2, the pictures and their labels are divided into the training set, verification set and test set at a ratio of 7:2:1 or 5:3:2;
a method for object detection of an image based on a YOLOv7 network is provided, whereby a defective object in a film product can be identified.
Example 2
As shown in fig. 1, on the basis of embodiment 1, it is further proposed that step 4 specifically includes:
YOLOv7 model improvement step: improving the YOLOv7 model in the step 3; introducing an SE attention mechanism, and weighting and enhancing the characteristics of the convolution layer through compression, excitation and scaling stages to enhance the attention capability of a defect region;
step 4-1, squeeze stage: carrying out global average pooling operation on the output feature map of the convolution layer to obtain a global description vector of the channel;
the formula is: F_s = (1/(H·W))·Σ_{i=1}^{H} Σ_{j=1}^{W} F_ij;
mapping the compressed feature vector into a feature representation with higher dimension through a full connection layer;
the formula is: Z = σ(W_1·F_s + b_1);
Step 4-2, excitation stage: performing activation function processing on the output of the full connection layer, and introducing nonlinearity;
the formula is: A = σ(W_2·Z + b_2);
Mapping the activated feature vector into a weight vector with the same dimension as the input feature map;
the formula is: S = σ(W_3·A + b_3);
Step 4-3, scale stage: multiplying the weight vector of the channel with the input feature map element by element, and scaling the features of each channel;
the formula is: F_sc = S·F;
in the above formulas, F represents the output feature map of the convolution layer, H and W represent the height and width of the feature map respectively, F_ij represents the element in the i-th row and j-th column of the feature map, F_s represents the compressed feature vector, Z represents the output of the fully connected layer, A represents the activated feature vector, S represents the weight vector, and F_sc represents the feature map scaled by the SE attention mechanism.
The model improvement method based on the YOLOv7 network makes the finally calculated values more accurate, automatically focuses on potential defect regions on the film product, and effectively improves the universality and applicability of the method.
Example 3
As shown in fig. 1, on the basis of any one of the embodiments 1 or 2, it is further proposed that step 5 specifically includes: mapping the features to a detection layer by utilizing a convolution and full connection layer, and generating information such as the position, the size, the confidence score, the category probability and the like of a prediction boundary box; performing non-maximum suppression on the detected boundary frames to remove redundant and overlapped boundary frames, and reserving the boundary frame with the highest confidence; and classifying and positioning defects according to the detected bounding box, and determining the types and positions of the defects in the film product.
In step 5, film product defects are classified into design defects, raw material defects and manufacturing defects, and defect positioning determines the specific position of the section of the film product where the defect lies; through multiple experiments, the most suitable hyperparameters are selected, the optimal number of training rounds is found, and finally a final detection report is generated by the computer.
Selecting the most suitable hyperparameters through multiple tests makes the detection report finally generated by the computer more accurate.
It should be noted that LabelImg is a graphical image annotation tool written in Python, with Qt as its graphical interface; annotations are saved as XML files in the PASCAL VOC format, and the YOLO and CreateML formats are also supported. It is an existing, mature image annotation tool.
The YOLO algorithm is the most typical representative of one-stage target detection algorithms; it identifies and locates objects with a deep neural network, runs fast enough for real-time systems, and is an existing, mature algorithm.
The SE attention mechanism, whose full English name is Squeeze-and-Excitation, is an attention-based module that learns the importance of each channel of a feature map and weights the channels accordingly, so that the model focuses more on the important information; it is an existing, mature mechanism.
In conclusion, the invention adds an improved attention mechanism on the basis of the YOLOv7 model and combines image processing with deep learning technology, thereby improving the accuracy and efficiency of film product defect detection; the method automatically focuses on potential defect regions, has strong universality and applicability, is suitable for detecting defects of different types of film products through accurate calculation and image capture and analysis, has high detection precision, can accurately classify and locate defects in the film product production process, reduces the workload of management personnel, and avoids false detection caused by the subjectivity and fatigue of manual inspection.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (10)
1. A method for detecting defects in a film product based on an improved attention mechanism, comprising the steps of:
step 1: collecting and processing data;
step 2: marking data;
step 3: detecting product defects;
step 4: calculating model improvement;
step 5: analyzing and generating a report;
the step 3 specifically comprises the following steps: 1) Performing target detection on the image by utilizing a YOLOv7 target detection algorithm, and identifying a defect target in the film product;
inputting the preprocessed data into a YOLOv7 network; extracting a characteristic representation of the input data through a plurality of convolution layers to capture key information in the film product image; performing YOLOv7 model training;
Binary Cross Entropy is used, i.e. a two-class cross entropy loss; the formula is as follows:
L = -[y·log(p) + (1 - y)·log(1 - p)];
in the above formula, y is the real label and p is the predicted value.
2. The method for detecting defects in a film product based on an improved attention mechanism according to claim 1, wherein said step 1 specifically comprises:
1) Acquiring a film product image by using an industrial camera, and shooting a film product defect image in a corresponding state by adjusting the height and focal length of the industrial camera;
2) Extracting pictures from the video shot by the industrial camera according to proper frames and storing the pictures in a computer; preprocessing the acquired image to improve the effect of subsequent processing.
3. The method for detecting defects in a film product based on an improved attention mechanism according to claim 2, wherein step 2 specifically comprises: and marking the positions and the sizes of the enhanced film product defect images by using a labelimg image marking tool, storing tag information in txt files with the same name as the pictures, and dividing all the pictures and marked tags into a training set, a verification set and a test set according to a proper proportion after marking work of all the pictures is completed.
4. A film product defect detection method based on an improved attention mechanism as in claim 3 wherein step 4 specifically comprises:
YOLOv7 model improvement step: improving the YOLOv7 model in the step 3; introducing an SE attention mechanism, and weighting and enhancing the characteristics of the convolution layer through compression, excitation and scaling stages to enhance the attention capability of a defect region;
step 4-1, squeeze stage: carrying out global average pooling operation on the output feature map of the convolution layer to obtain a global description vector of the channel;
the formula is: F_s = (1/(H·W))·Σ_{i=1}^{H} Σ_{j=1}^{W} F_ij;
mapping the compressed feature vector into a feature representation with higher dimension through a full connection layer;
the formula is: Z = σ(W_1·F_s + b_1);
Step 4-2, excitation stage: performing activation function processing on the output of the full connection layer, and introducing nonlinearity;
the formula is: A = σ(W_2·Z + b_2);
Mapping the activated feature vector into a weight vector with the same dimension as the input feature map;
the formula is: S = σ(W_3·A + b_3);
Step 4-3, scale stage: multiplying the weight vector of the channel with the input feature map element by element, and scaling the features of each channel;
the formula is: f (F) sc =S·F;
In the above formulas, F represents the output feature map of the convolution layer, H and W represent the height and width of the feature map respectively, F_ij represents the element in the i-th row and j-th column of the feature map, F_s represents the compressed feature vector, Z represents the output of the fully connected layer, A represents the activated feature vector, S represents the weight vector, and F_sc represents the feature map scaled by the SE attention mechanism.
5. The method for detecting defects in a film product based on an improved attention mechanism according to claim 1, wherein the step 5 specifically comprises: mapping the features to a detection layer by utilizing a convolution and full connection layer, and generating information such as the position, the size, the confidence score, the category probability and the like of a prediction boundary box; performing non-maximum suppression on the detected boundary frames to remove redundant and overlapped boundary frames, and reserving the boundary frame with the highest confidence; and classifying and positioning defects according to the detected bounding box, and determining the types and positions of the defects in the film product.
6. The method for detecting defects of film products based on an improved attention mechanism according to claim 2, wherein the industrial camera adopted in step 1 is a CCD camera, pictures are extracted from the video at a suitable frame interval and stored frame by frame, and the preprocessing of the acquired images consists of rotating, flipping, cropping and changing the color saturation of the original pictures.
7. A film product defect detection method based on an improved attention mechanism according to claim 3, wherein in step 2 the pictures and their labels are divided into the training set, the validation set and the test set at a ratio of 7:2:1 or 5:3:2.
8. The method for detecting defects of film products based on an improved attention mechanism according to claim 1, wherein in step 3 the YOLOv7 network is built on a computer; the network automatically extracts features from the input data through a plurality of convolution layers, and through multiple YOLOv7 training experiments the most suitable hyperparameters are selected to find the optimal number of training rounds.
9. The method for detecting defects of a film product based on an improved attention mechanism according to claim 5, wherein in step 5 the film product defects are classified into design defects, raw material defects and manufacturing defects, and defect positioning determines the specific position of the section of the film product where the defect lies.
10. The method for detecting defects of a film product based on an improved attention mechanism according to claim 9, wherein in step 5, through multiple experiments, the most suitable hyperparameters are selected, the optimal number of training rounds is found, and finally a final detection report is generated by the computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311318982.4A CN117593244A (en) | 2023-10-12 | 2023-10-12 | Film product defect detection method based on improved attention mechanism |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117593244A true CN117593244A (en) | 2024-02-23 |
Family
ID=89910422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311318982.4A Pending CN117593244A (en) | 2023-10-12 | 2023-10-12 | Film product defect detection method based on improved attention mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117593244A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117935174A (en) * | 2024-03-22 | 2024-04-26 | 浙江佑威新材料股份有限公司 | Intelligent management system and method for vacuum bag film production line |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113963147B (en) | Key information extraction method and system based on semantic segmentation | |
CN111680556A (en) | Method, device and equipment for identifying vehicle type at traffic gate and storage medium | |
CN112819748B (en) | Training method and device for strip steel surface defect recognition model | |
CN112232371A (en) | American license plate recognition method based on YOLOv3 and text recognition | |
CN112614105B (en) | Depth network-based 3D point cloud welding spot defect detection method | |
CN113936195B (en) | Sensitive image recognition model training method and device and electronic equipment | |
CN117593244A (en) | Film product defect detection method based on improved attention mechanism | |
CN113222913B (en) | Circuit board defect detection positioning method, device and storage medium | |
CN112381175A (en) | Circuit board identification and analysis method based on image processing | |
CN115294089A (en) | Steel surface defect detection method based on improved YOLOv5 | |
CN114972880A (en) | Label identification method and device, electronic equipment and storage medium | |
CN110689447A (en) | Real-time detection method for social software user published content based on deep learning | |
CN114694130A (en) | Method and device for detecting telegraph poles and pole numbers along railway based on deep learning | |
CN112883926B (en) | Identification method and device for form medical images | |
CN112418207B (en) | Weak supervision character detection method based on self-attention distillation | |
CN116958052A (en) | Printed circuit board defect detection method based on YOLO and attention mechanism | |
CN116030050A (en) | On-line detection and segmentation method for surface defects of fan based on unmanned aerial vehicle and deep learning | |
Chowdhury et al. | A Hybrid Information Extraction Approach using Transfer Learning on Richly-Structured Documents. | |
Das et al. | Object Detection on Scene Images: A Novel Approach | |
CN117593514B (en) | Image target detection method and system based on deep principal component analysis assistance | |
CN112115949B (en) | Optical character recognition method for tobacco certificate and order | |
CN114332007B (en) | Industrial defect detection and identification method based on transducer | |
CN116824271B (en) | SMT chip defect detection system and method based on tri-modal vector space alignment | |
CN118095314A (en) | Magnetizing tag detection method based on deep learning | |
CN116580364A (en) | Multi-target detection and tracking method based on improved YOLO V5-s |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |