CN111507966B - Composite material pore detection method based on UNET depth network - Google Patents
Composite material pore detection method based on UNET depth network
- Publication number
- Publication number: CN111507966B; Application number: CN202010304692.4A
- Authority
- CN
- China
- Prior art keywords
- pore
- pixel
- composite material
- unet
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0006—Industrial image inspection using a design-rule based approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
Abstract
The invention discloses a composite material pore detection method based on a UNET depth network, relating to the field of deep networks. The method processes a slice image with a UNET depth network to obtain a pixel pore prediction picture, merges the pore pixel points in the picture via m-adjacency into a number of pore blocks, performs a grid calculation on each pore block to extract its minimum circumscribed rectangle as a pore region of the inspected composite material, and then calculates the porosity of the composite material. The method can distinguish pores from non-pore features, such as foreign matter and scratches, that traditional methods struggle to separate, improving recognition accuracy and avoiding false detections and missed detections. In addition, an adaptive grid algorithm dynamically partitions the pore blocks during porosity calculation, so that the computed porosity closely matches the values calculated by field personnel, achieving both calculation accuracy and agreement with manual practice.
Description
Technical Field
The invention relates to the field of deep networks, and in particular to a composite material pore detection method based on a UNET depth network.
Background
Air is often entrained during the lay-up of composite materials. At present, the composite material is usually imaged by microscopic slicing; the image is binarized with a traditional computer-vision toolkit such as OpenCV, the background is separated by morphological operations to segment suspected regions, and pore regions are finally framed by threshold filtering. The area ratio of pores in the composite material can then be judged: the proportion of pores in the current region is calculated to decide whether the lay-up is qualified. However, this traditional approach has a problem: scratches and foreign matter look similar in the image and, after binarization, resemble normal pores at the pixel level, causing false recognition and low recognition accuracy.
Disclosure of Invention
Aiming at the above problems and technical requirements, the inventor proposes a composite material pore detection method based on a UNET depth network. The technical scheme of the invention is as follows:
A composite material pore detection method based on a UNET depth network, the method comprising:
acquiring a slice image of the composite material;
processing the slice image with a UNET depth network to obtain a pixel pore prediction picture, wherein the pixel pore prediction picture comprises a prediction result for each pixel point in the slice image, the prediction result being either a pore pixel point or a non-pore pixel point;
merging pore pixel points in the pixel pore prediction picture via m-adjacency to obtain a plurality of pore blocks;
and performing a grid calculation on each pore block to extract the minimum circumscribed rectangle of the pore block as a pore region of the inspected composite material, and calculating the porosity of the composite material.
In a further technical scheme, performing the grid calculation on each pore block to extract its minimum circumscribed rectangle comprises: starting from the pore pixel points, dividing standard grid cells row by row from the top left of the m-adjacency-merged pixel pore prediction picture, such that the pore area occupied within a single grid cell is maximized, and calculating separately for multiple pore regions that fall within the same grid cell, thereby obtaining the minimum circumscribed rectangle of each pore block.
In a further technical scheme, merging pore pixel points in the pixel pore prediction picture via m-adjacency comprises: merging each pore pixel point with the pore pixel points among the four adjacent pixels of its 米-shaped (eight-direction) neighborhood.
In a further technical scheme, the UNET depth network adopts a symmetrical structure with skip connections between layers of the same level. The slice image passes through four downsampling encoder layers in sequence to obtain 1024-dimensional feature vectors, where each downsampling encoder layer comprises two convolution layers and a max pooling layer is arranged between every two downsampling encoder layers. The feature vectors are then upsampled four times through four upsampling decoder layers to obtain the pixel pore prediction picture, where each upsampling decoder layer comprises two convolution layers and a deconvolution layer is arranged between every two upsampling decoder layers. The convolution kernels of the convolution layers are all 3×3 with ReLU as the activation function, the pooling kernels of the max pooling layers are 2×2, and the kernels of the deconvolution layers are 2×2.
The beneficial technical effects of the invention are as follows:
the application discloses a composite material pore detection method based on a UNET depth network, which utilizes the identification pores of the UNET depth network, and can distinguish the pores from non-pore parts which are difficult to separate in the traditional methods such as foreign matters/scratches, so as to improve the identification precision and avoid false detection and omission detection; in addition, according to the national standard grid method for identifying the apertures of the site staff, the national standard 336 standard is met through the self-grinding self-adaptive grid algorithm in aperture ratio calculation, the dynamic self-adaptive grid division is carried out on the aperture blocks, the division of staff is reduced, the aperture ratio calculated value of the site staff is more closely, the calculation accuracy and the manual approach are achieved, and the production requirement is met. In addition, the staff can actively modify the recognition result so as to modify the calculation result and feed back the recognition result to the depth network for active learning, and the method can adapt to pictures under different resolutions and adapt to picture changes caused by equipment replacement, so that the work of secondary segmentation by the staff is reduced, only 3s are needed under the current prediction of one picture GPU, and the manual calculation work is greatly reduced.
Drawings
FIG. 1 is a flow chart of the composite material pore detection method disclosed herein.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings.
The application discloses a composite material pore detection method based on a UNET depth network. Referring to the flow chart shown in FIG. 1, the method comprises the following steps:
Step 1: obtain a slice image of the composite material.
Step 2: process the slice image with a UNET depth network to obtain a pixel pore prediction picture, which contains a prediction result for each pixel point in the slice image: either a pore pixel point or a non-pore pixel point. Specifically, a deep-learning semantic segmentation approach is used: the pore regions in the slice image are annotated to generate pixel-level class labels for the whole image, the neural network extracts features of different dimensions layer by layer and classifies each pixel, and the network predicts a label for each pixel point, i.e., whether it belongs to a pore pixel point or a non-pore pixel point.
Because the image semantics here are relatively simple and the structure is fixed, both high-level semantic information and low-level features are important, so the semantic segmentation model adopted by the method is a UNET depth network. The UNET depth network adopts a symmetrical structure with skip connections between layers of the same level. The slice image passes through four downsampling encoder layers in sequence, undergoing four downsampling steps, to obtain 1024-dimensional feature vectors; each downsampling encoder layer comprises two convolution layers, and a max pooling layer is arranged between every two downsampling encoder layers. The feature vectors are then upsampled four times through four upsampling decoder layers to obtain the pixel pore prediction picture; each upsampling decoder layer comprises two convolution layers, and a deconvolution layer is arranged between every two upsampling decoder layers. The convolution kernels of the convolution layers are all 3×3 with ReLU as the activation function, the pooling kernels of the max pooling layers are 2×2, and the kernels of the deconvolution layers are 2×2. The network uses skip connections at the same level instead of directly supervising and back-propagating the loss only on the highest-level semantic features, so the finally recovered feature map fuses more low-level features and features of different scales. Low-level features of different patterns can therefore be extracted more accurately in ambiguous cases involving foreign matter and scratches (for example, scratches and normal pores have different edge gray values), enabling more accurate prediction and distinguishing pores well from non-pore structures, such as foreign matter and scratches, that are difficult to separate with traditional methods.
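The downsampling and upsampling arithmetic described above can be sketched with simple shape bookkeeping. This is a minimal sketch, assuming "same"-padded 3×3 convolutions (so only the 2×2 pooling and 2×2 deconvolution change the spatial size) and the conventional UNET channel widths of 64 to 512; neither assumption is stated explicitly in the patent text, but they yield the described 1024-dimensional bottleneck:

```python
# Shape bookkeeping for the UNET variant described above: four downsampling
# encoder layers (two 3x3 "same" convolutions each, with 2x2 max pooling
# between them), a 1024-channel bottleneck, and four upsampling decoder
# layers with 2x2 deconvolutions. Channel widths are assumptions.

def unet_shapes(h, w):
    """Trace (stage, channels, height, width) through the network."""
    channels = [64, 128, 256, 512]  # assumed encoder widths
    trace = []
    for ch in channels:
        trace.append(("down-conv", ch, h, w))  # two 3x3 convs, 'same' padding
        h, w = h // 2, w // 2                  # 2x2 max pooling halves H and W
    trace.append(("bottleneck", 1024, h, w))   # 1024-dimensional feature vectors
    for ch in reversed(channels):
        h, w = h * 2, w * 2                    # 2x2 deconvolution doubles H and W
        trace.append(("up-conv", ch, h, w))    # skip connection concatenated here
    return trace

trace = unet_shapes(256, 256)
print(trace[4])   # bottleneck entry: ('bottleneck', 1024, 16, 16)
print(trace[-1])  # output restored to the input resolution
```

For a 256×256 slice image, four halvings give a 16×16 bottleneck and four doublings restore the original resolution, matching the symmetric structure the patent describes.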
Step 3: merge the pore pixel points in the pixel pore prediction picture via m-adjacency to obtain a plurality of pore blocks; that is, each pore pixel point is merged into the same pixel block with the pore pixel points among the four adjacent pixels of its 米-shaped neighborhood.
Step 4: perform a grid calculation on each pore block to extract the minimum circumscribed rectangle of the pore block as a pore region of the inspected composite material, and calculate the porosity of the composite material. That is, starting from the pore pixel points, standard grid cells are divided row by row from the top left of the m-adjacency-merged pixel pore prediction picture, such that the pore area occupied within a single grid cell is maximized; multiple pore regions falling within the same grid cell are calculated separately, yielding the minimum circumscribed rectangle of each pore block.
A traditional grid algorithm divides the picture into grids from left to right and top to bottom and then performs statistics in the manner prescribed by the national standard, whereas in practice a worker recalculates the porosity using, for each pore, the standard grid covering the minimum range occupied by that pore, so as to reduce the deviation of the calculation result introduced by the calculation standard.
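The geometric core of step 4 can be sketched as follows. This is a hedged sketch only: it computes the minimum circumscribed (axis-aligned bounding) rectangle of each pore block and a simple porosity figure as pore pixels over total pixels; the patent's self-developed adaptive grid placement, which maximizes the pore area per grid cell, is not reproduced here, and both function names are illustrative:

```python
# Sketch of step 4: minimum circumscribed rectangle per pore block, plus
# a naive porosity estimate (pore pixels / total pixels). The adaptive
# grid algorithm of the patent is NOT reproduced here.

def bounding_rect(block):
    """block: set of (row, col) pore pixels -> (top, left, bottom, right)."""
    rows = [p[0] for p in block]
    cols = [p[1] for p in block]
    return min(rows), min(cols), max(rows), max(cols)

def porosity(blocks, height, width):
    """Fraction of image pixels predicted as pore."""
    pore_pixels = sum(len(b) for b in blocks)
    return pore_pixels / (height * width)

blocks = [{(0, 0), (0, 1), (1, 1)}, {(1, 3), (2, 3)}]
print(bounding_rect(blocks[0]))          # (0, 0, 1, 1)
print(round(porosity(blocks, 4, 4), 4))  # 5 pore pixels in a 4x4 image
```

In the patent's method the rectangle areas feed the adaptive grid statistics rather than a raw pixel ratio, so the `porosity` helper above should be read only as a placeholder for that calculation.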
What has been described above is only a preferred embodiment of the present application, and the present invention is not limited to the above examples. It is to be understood that other modifications and variations which may be directly derived or contemplated by those skilled in the art without departing from the spirit and concepts of the present invention are deemed to be included within the scope of the present invention.
Claims (3)
1. A composite material pore detection method based on a UNET depth network, the method comprising:
acquiring a slice image of the composite material;
processing the slice image with a UNET depth network to obtain a pixel pore prediction picture, wherein the pixel pore prediction picture comprises a prediction result for each pixel point in the slice image, the prediction result being either a pore pixel point or a non-pore pixel point;
merging pore pixel points in the pixel pore prediction picture via m-adjacency to obtain a plurality of pore blocks;
performing a grid calculation on each pore block to extract the minimum circumscribed rectangle of the pore block as a pore region of the inspected composite material, and calculating the porosity of the composite material; wherein performing the grid calculation on each pore block to extract its minimum circumscribed rectangle comprises: starting from the pore pixel points, dividing standard grid cells row by row from the top left of the m-adjacency-merged pixel pore prediction picture, such that the pore area occupied within a single grid cell is maximized, and calculating separately for multiple pore regions that fall within the same grid cell, thereby obtaining the minimum circumscribed rectangle of each pore block.
2. The method of claim 1, wherein merging pore pixel points in the pixel pore prediction picture via m-adjacency comprises: merging each pore pixel point with the pore pixel points among the four adjacent pixels of its 米-shaped neighborhood.
3. The method according to claim 1 or 2, wherein the UNET depth network adopts a symmetrical structure with skip connections between layers of the same level; the slice image passes through four downsampling encoder layers in sequence, undergoing four downsampling steps, to obtain 1024-dimensional feature vectors, each downsampling encoder layer comprising two convolution layers, with a max pooling layer arranged between every two downsampling encoder layers; the feature vectors are upsampled four times through four upsampling decoder layers to obtain the pixel pore prediction picture, each upsampling decoder layer comprising two convolution layers, with a deconvolution layer arranged between every two upsampling decoder layers; the convolution kernels of the convolution layers are all 3×3 with ReLU as the activation function, the pooling kernels of the max pooling layers are 2×2, and the kernels of the deconvolution layers are 2×2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010304692.4A CN111507966B (en) | 2020-04-17 | 2020-04-17 | Composite material pore detection method based on UNET depth network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111507966A CN111507966A (en) | 2020-08-07 |
CN111507966B true CN111507966B (en) | 2024-02-06 |
Family
ID=71864093
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010304692.4A Active CN111507966B (en) | 2020-04-17 | 2020-04-17 | Composite material pore detection method based on UNET depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111507966B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116883400B (en) * | 2023-09-07 | 2023-11-21 | 山东大学 | Powder spreading porosity prediction method and system in laser selective melting process |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103761529A (en) * | 2013-12-31 | 2014-04-30 | 北京大学 | Open fire detection method and system based on multicolor models and rectangular features |
CN105894514A (en) * | 2016-04-06 | 2016-08-24 | 广东工业大学 | Printed matter defect detection method and system based on GPU parallel operation |
CN109859230A (en) * | 2018-12-26 | 2019-06-07 | 北京理工大学 | A kind of dividing method for porous media Micro-CT scanning image |
CN110110661A (en) * | 2019-05-07 | 2019-08-09 | 西南石油大学 | A kind of rock image porosity type recognition methods based on unet segmentation |
Non-Patent Citations (1)
Title |
---|
李寿涛 (Li Shoutao); 陈浩 (Chen Hao); 陈思 (Chen Si); 李敬 (Li Jing). "图像ROI选择及其应用研究" [Image ROI selection and its application research]. CT理论与应用研究 [CT Theory and Applications], No. 06, Part 4 of the text. * |
Also Published As
Publication number | Publication date |
---|---|
CN111507966A (en) | 2020-08-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |