CN113313668A - Subway tunnel surface disease feature extraction method - Google Patents


Info

Publication number
CN113313668A
Authority
CN
China
Prior art keywords
feature
fusion
network
feature map
features
Prior art date
Legal status
Granted
Application number
CN202110420628.7A
Other languages
Chinese (zh)
Other versions
CN113313668B (en)
Inventor
Wang Baoxian
Yang Yufei
Du Yanliang
Ren Weixin
Xu Fei
Wang Junfang
Zhao Yangping
Zhang Ying
Current Assignee
Shenzhen University
Shijiazhuang Tiedao University
Original Assignee
Shenzhen University
Shijiazhuang Tiedao University
Priority date
Filing date
Publication date
Application filed by Shenzhen University, Shijiazhuang Tiedao University filed Critical Shenzhen University
Priority to CN202110420628.7A
Publication of CN113313668A
Application granted
Publication of CN113313668B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30132 Masonry; Concrete

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting surface disease features of a subway tunnel. The method comprises the following steps: constructing a deep learning model for feature extraction, where the model comprises a backbone network, a first branch network and a second branch network; the backbone network takes an original image as input and extracts a plurality of original feature maps from it; the first and second branch networks are organized as pyramid structures, the first branch network performing top-down forward feature fusion on the features extracted by the backbone network and the second branch network performing bottom-up reverse feature fusion on them; and training the deep learning model with the labeled regions of real images as the target and a set loss function as the constraint, so that the model can be used to detect and identify subway tunnel surface diseases. The invention improves the ability of a deep learning network to extract disease features and thereby improves disease detection quality.

Description

Subway tunnel surface disease feature extraction method
Technical Field
The invention relates to the technical field of image-based detection and identification of tunnel lining surface diseases, and in particular to a subway tunnel surface disease feature extraction method.
Background
Detection and identification of defects such as water leakage and cracks on the subway tunnel surface is a key part of routine subway tunnel inspection. Because manual inspection suffers from strong subjectivity and low efficiency, machine-vision-based detection and identification of tunnel surface diseases has become an industry trend in recent years. Machine-vision methods fall mainly into traditional image processing and deep learning. Traditional image processing methods include threshold segmentation, edge detection, morphological analysis and the like; although their computational complexity is low and their hardware requirements are modest, they struggle to overcome the low contrast, uneven illumination and severe background noise pollution typical of subway tunnel surface disease images.
Compared with traditional image processing, deep learning methods use multi-layer neural networks to mine multi-level image features from massive image data and continuously distill them into a network model, then complete classification, localization and segmentation of input image data by training that model. Deep learning methods show excellent generalization capability and robustness, and in recent years have been widely applied to tunnel lining surface disease image detection and identification. For example, patent application CN201910348834.4 discloses a deep-learning-based subway shield tunnel disease detection method, which detects diseases in collected shield tunnel images with a cocknet deep learning model, solving to some extent the interference of environmental factors with damage identification. Patent application CN201810843204.X discloses an apparatus and method for detecting apparent diseases of tunnel structures; the developed apparatus photographs the subway tunnel surface, and a region-proposal-based fully convolutional network (R-FCN) completes disease detection and identification on the captured images.
By virtue of the strong feature extraction and pattern classification capabilities of deep learning networks, these methods achieve more accurate results than traditional image processing, but when applied to actual inspection of subway tunnel surface diseases they still face the following problems: 1) in complex tunnel environments, target diseases and background interference (segment joints, bolt holes, pipelines and the like) look similar and are difficult to distinguish; 2) blob-like diseases such as water leakage have indistinct boundaries and are easily missed during detection, so detection accuracy is insufficient.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a subway tunnel surface disease feature extraction method.
According to a first aspect of the invention, a subway tunnel surface disease feature extraction method is provided. The method comprises the following steps:
constructing a deep learning model for feature extraction, wherein the model comprises a backbone network, a first branch network and a second branch network; the backbone network takes an original image as input and extracts a plurality of original feature maps from it; the first and second branch networks are organized as pyramid structures, the first branch network performing top-down forward feature fusion on the features extracted by the backbone network and the second branch network performing bottom-up reverse feature fusion on them;
and training the deep learning model with the labeled regions of real images as the target and the set loss function as the constraint, for use in detecting and identifying subway tunnel surface diseases.
According to a second aspect of the invention, a subway tunnel surface disease detection method is provided. The method comprises: collecting an image of the subway tunnel surface to be inspected, inputting it into the trained deep learning model obtained by the above method for feature extraction, and then identifying subway tunnel surface diseases based on the extracted features.
Compared with the prior art, the invention provides a novel subway tunnel surface disease feature extraction method based on a bidirectional pyramid with inter-layer feature reinforcement learning, which significantly improves the detection and identification performance of deep learning networks on tunnel surface diseases.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram of bi-directional pyramid feature reinforcement learning, according to one embodiment of the present invention;
FIG. 2 is a flowchart of a subway tunnel surface disease feature extraction method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of two inter-layer feature reinforcement learning models, according to one embodiment of the invention;
FIG. 4 is a network architecture diagram of a bidirectional pyramid network, according to one embodiment of the present invention;
FIG. 5 is two basic block diagrams of a residual network according to one embodiment of the present invention;
FIG. 6 is a schematic diagram of a feature space weight computation network according to one embodiment of the present invention;
FIG. 7 is a schematic diagram of a feature channel weight calculation network according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
To strengthen the mining and use of the salient feature information of tunnel surface disease regions, the invention constructs a deep learning model framework for feature extraction. The framework comprises a backbone network for extracting base features and a bidirectional pyramid structure that performs both top-down forward feature fusion and bottom-up reverse feature fusion.
For example, referring to fig. 1, the deep residual network ResNet-101 serves as the basic feature extraction framework, and a bottom-up feature pyramid is supplemented alongside the traditional top-down feature pyramid, realizing bidirectional cross fusion of deep semantic feature maps and shallow spatial feature maps.
In addition, during the feature fusion of the bidirectional pyramid, a shallow-feature-map spatial weight calculation module is constructed to address the large spatial scale and cluttered spatial content of shallow feature maps, improving the reinforcement learning of the salient features of disease regions in shallow feature maps; and a feature channel weight calculation module is constructed to address the small spatial scale and channel-wise semantic aggregation of deep feature maps, improving the attention paid to target feature channels in deep feature maps.
Specifically, referring to fig. 2, the method for extracting the surface disease characteristics of the subway tunnel includes the following steps.
Step S110, extracting the original features of the image.
Referring to the 'original feature extraction' part of fig. 4, this embodiment uses the deep residual network ResNet-101 as the basic feature extraction module. ResNet-101 is composed of two basic blocks, Conv Block and Identity Block, alternately connected in series for a total of 101 layers; the structures of the two basic blocks are shown in fig. 5, where Conv2D, BatchNorm and ReLU denote convolution, batch normalization and the ReLU activation function, respectively. The ResNet-101 backbone feature extraction network generates 4 original feature layers C2, C3, C4 and C5, where C2 is the bottom-layer feature and C5 the top-layer feature. Taking an input image of size 1024 × 1024 × 3 as an example, the scales of C2, C3, C4 and C5 are 256 × 256 × 256, 128 × 128 × 512, 64 × 64 × 1024 and 32 × 32 × 2048, respectively.
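For orientation, the following is a minimal PyTorch sketch (not the patent's code) of pulling the C2-C5 maps out of a ResNet-101 backbone; it assumes a recent torchvision where IntermediateLayerGetter and the weights= argument are available. Note that PyTorch reports shapes as N × C × H × W, whereas the sizes above are written H × W × C.

```python
import torch
from torchvision.models import resnet101
from torchvision.models._utils import IntermediateLayerGetter

# layer1..layer4 of ResNet-101 yield C2..C5 (strides 4, 8, 16, 32).
backbone = resnet101(weights=None)
extractor = IntermediateLayerGetter(
    backbone,
    return_layers={"layer1": "C2", "layer2": "C3", "layer3": "C4", "layer4": "C5"})

x = torch.randn(1, 3, 1024, 1024)   # one 1024 x 1024 RGB tunnel-surface image
feats = extractor(x)
for name, f in feats.items():
    print(name, tuple(f.shape))
# C2 (1, 256, 256, 256)   C3 (1, 512, 128, 128)
# C4 (1, 1024, 64, 64)    C5 (1, 2048, 32, 32)
```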
Step S120, forward pyramid feature fusion.
After the original feature maps C2-C5 are obtained, as shown in the 'channel dimension reduction' and 'forward feature fusion' parts of fig. 4, the channel count of each of the 4 feature maps is first reduced to 256, and top-down feature fusion is then performed, specifically including:
Step S121: upsample C5 so that its size matches the channel-reduced C4 feature map (64 × 64 × 256), and input the two into the inter-layer feature enhancement fusion module (Enhance Fusion Block, EF-Block) to obtain P4;
Step S122: further upsample P4 so that its size matches the channel-reduced C3 feature map (128 × 128 × 256), and input the two into an EF-Block to obtain P3;
Step S123: further upsample P3 so that its size matches the channel-reduced C2 feature map (256 × 256 × 256), and input the two into an EF-Block to obtain P2.
Finally, P2, P3 and P4 generated by feature fusion are output as the predicted feature maps of forward feature fusion, as the sketch below illustrates.
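A hedged sketch of this top-down flow, reusing the extractor output from the previous sketch: ef_block is only a stand-in for the EF-Block of step S140 (reduced here to element-wise addition so the data flow runs on its own), and the 1 × 1 convolutions perform the channel reduction to 256.

```python
import torch.nn as nn
import torch.nn.functional as F

# 1x1 convolutions for channel dimension reduction to 256.
reduce = nn.ModuleDict({
    "C2": nn.Conv2d(256, 256, 1), "C3": nn.Conv2d(512, 256, 1),
    "C4": nn.Conv2d(1024, 256, 1), "C5": nn.Conv2d(2048, 256, 1)})

def ef_block(a, b):
    # Placeholder for the EF-Block of step S140; plain addition here.
    return a + b

def forward_fusion(feats):
    c2, c3, c4, c5 = (reduce[k](feats[k]) for k in ("C2", "C3", "C4", "C5"))
    p4 = ef_block(F.interpolate(c5, size=c4.shape[-2:]), c4)  # step S121
    p3 = ef_block(F.interpolate(p4, size=c3.shape[-2:]), c3)  # step S122
    p2 = ef_block(F.interpolate(p3, size=c2.shape[-2:]), c2)  # step S123
    return p2, p3, p4
```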
Step S130, reverse pyramid feature fusion.
With the channel count of each original feature map likewise reduced to 256, bottom-up feature fusion is performed, as shown in the 'reverse feature fusion' part of fig. 4, specifically including:
Step S131: downsample C2 (also called the first bottom-layer feature map) so that its size matches the channel-reduced C3 feature map (also called the second bottom-layer feature map, 128 × 128 × 256), and fuse the two with inter-layer feature enhancement to obtain R2;
Step S132: further downsample R2 so that its size matches the channel-reduced C4 feature map (64 × 64 × 256), and fuse the two with inter-layer feature enhancement to obtain R3;
Step S133: further downsample R3 so that its size matches the channel-reduced C5 feature map (32 × 32 × 256), and fuse the two with inter-layer feature enhancement to obtain R4.
Finally, R2, R3 and R4 generated by feature fusion are output as the predicted feature maps of reverse feature fusion, as the sketch below illustrates.
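The bottom-up path mirrors the previous sketch with downsampling in place of upsampling; reduce and ef_block are the same placeholders as above.

```python
def reverse_fusion(feats):
    c2, c3, c4, c5 = (reduce[k](feats[k]) for k in ("C2", "C3", "C4", "C5"))
    r2 = ef_block(F.interpolate(c2, size=c3.shape[-2:]), c3)  # step S131
    r3 = ef_block(F.interpolate(r2, size=c4.shape[-2:]), c4)  # step S132
    r4 = ef_block(F.interpolate(r3, size=c5.shape[-2:]), c5)  # step S133
    return r2, r3, r4
```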
Step S140, feature fusion based on inter-layer feature enhancement.
In the bidirectional pyramid inter-layer feature fusion of steps S120 and S130, different attention mechanisms are introduced into the fusion module EF-Block according to the dimensionality of the features.
Specifically, the generation of the 4 feature maps P2, P3, R2 and R3 is treated as low-dimensional feature fusion; considering that these low-dimensional feature maps have larger spatial scale and fewer channels, a spatial attention mechanism is introduced for their fusion.
The spatial attention mechanism is shown in fig. 6. Through iterative interactive training of 3 residual network units, the positions in the input feature map that benefit segmentation of the disease target are located in feature space, and a spatial feature weight coefficient is quantized and output, enhancing the useful feature information in feature space. This process can be expressed as:

F_sa = F · Res{Res[Res(F, w1), w2], w3}    (1)

where F is the input feature, Res(·) denotes the residual-block convolution, and {w1, w2, w3} are the network parameters of the three residual blocks. Based on this scheme, referring to fig. 3(a), after spatial feature weights are computed for the two input feature maps of the low-dimensional fusion module, the size of one feature map is transformed to match the other, and the two are finally added to complete low-dimensional feature fusion.
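As a concrete reading of equation (1), the sketch below chains three residual units and collapses their output to a single spatial weight map that multiplies the input feature; the unit widths, the 1 × 1 projection and the sigmoid are assumptions, not details fixed by the patent.

```python
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    """One residual unit Res(., w_i): two 3x3 conv-BN stages plus a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class SpatialAttention(nn.Module):
    def __init__(self, ch=256):
        super().__init__()
        self.res = nn.Sequential(ResUnit(ch), ResUnit(ch), ResUnit(ch))  # w1, w2, w3
        self.to_weight = nn.Conv2d(ch, 1, 1)  # quantize to one spatial weight map
    def forward(self, f):
        w = torch.sigmoid(self.to_weight(self.res(f)))
        return f * w  # F_sa = F . Res{Res[Res(F, w1), w2], w3}
```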
The generation of the two feature maps P4 and R4 is treated as high-dimensional feature fusion; considering that high-dimensional features carry rich semantic information across many channels, a channel attention mechanism is introduced for their fusion, as shown in fig. 7.
In fig. 7, the feature-channel weight calculation first applies global pooling to the input features, then learns the relationships among the channels through a fully connected operation to obtain a different weight for each channel, and finally multiplies the channel weights with the original input features to produce a channel-enhanced output. This process can be expressed as:

F_se = F · fc[gp(F)]    (2)

where F is the input feature, gp(·) is a global pooling layer, and fc(·) is a fully connected layer. Based on this scheme, as shown in fig. 3(b), after channel weights are computed for the two input feature maps of the high-dimensional fusion module, the size of one feature map is transformed to match the other, and the two are finally added to complete high-dimensional feature fusion.
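Equation (2) matches the familiar squeeze-and-excitation pattern; a minimal sketch follows, where the reduction ratio of the fully connected pair is an assumption.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch=256, reduction=16):
        super().__init__()
        self.gp = nn.AdaptiveAvgPool2d(1)      # global pooling gp(.)
        self.fc = nn.Sequential(               # fully connected fc(.)
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())
    def forward(self, f):
        b, c, _, _ = f.shape
        w = self.fc(self.gp(f).view(b, c)).view(b, c, 1, 1)
        return f * w                           # F_se = F . fc[gp(F)]
```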
Step S150, constructing the feature-map error loss function.
The predicted feature maps P2, P3 and P4 generated by top-down fusion in the bidirectional pyramid, together with the predicted feature maps R2, R3 and R4 generated by bottom-up fusion, serve as the reference for the feature-map loss calculation. Each predicted feature map is enlarged to the size of the original image, and the error loss of each predicted feature map is established with a cross-entropy function:

loss = −∑ [ y·log(C) + (1 − y)·log(1 − C) ]    (3)

where y is the manually annotated binary image of the input image and C is the predicted feature map generated by the deep learning network. From equation (3), the error loss function over the whole set of predicted feature maps is:

L = loss(P2) + loss(P3) + loss(P4) + loss(R2) + loss(R3) + loss(R4)    (4)
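A sketch of equations (3)-(4) under stated assumptions: each 256-channel predicted map is squeezed to one channel by an assumed 1 × 1 head, enlarged to the label size, and scored with binary cross-entropy against the annotated mask (the logits variant is used here for numerical stability); the six per-map losses are summed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

head = nn.Conv2d(256, 1, 1)  # assumed per-map prediction head

def total_loss(pred_maps, y):
    """pred_maps: [P2, P3, P4, R2, R3, R4]; y: binary mask, shape (B, 1, H, W)."""
    loss = 0.0
    for m in pred_maps:
        c = F.interpolate(head(m), size=y.shape[-2:],
                          mode="bilinear", align_corners=False)  # enlarge to original size
        loss = loss + F.binary_cross_entropy_with_logits(c, y)   # equation (3)
    return loss                                                  # equation (4)
```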
based on the error loss function, through continuous iterative learning, the tunnel surface disease deep learning detection and identification model obtained through final training can be applied to new tunnel surface disease image detection and identification.
It should be understood that although the above description focuses on feature extraction, a tunnel surface disease detection and identification model can be obtained from the extracted features via regression or classification prediction; identifiable tunnel diseases include, but are not limited to, deformation intrusion, cracks, water leakage, slab staggering, spalling, collapse, substrate mud pumping, settlement, floor heave, voids behind the lining, and the like.
In summary, the method is based on a bidirectional pyramid structure and uses inter-layer feature reinforcement learning to obtain multiple feature maps related to target diseases; the ground-truth annotation image is resized, via downsampling, to the size of each feature map; a deep learning error loss function is established with a cross-entropy function to measure the error between the pixel predictions in each feature map and the ground-truth annotation image, and this error is back-propagated to update the network parameters of every module in every layer. Through continuous iterative learning, the finally trained tunnel surface disease detection and identification model can be applied to new tunnel surface disease images.
Compared with the prior art, the invention has at least the following advantages:
1) Most existing research insufficiently mines the salient features of subway tunnel disease regions, leading to missed detections or false alarms. The invention fully considers the salient characteristics of surface diseases and uses visual attention to strengthen the feature extraction network's learning of the salient features of subway tunnel disease regions, achieving accurate extraction of disease features.
2) The invention extracts subway tunnel disease features with a feature pyramid network. Existing feature pyramid structures adopt only top-down inter-layer feature fusion, and such one-way information flow cannot ensure that a deep learning network effectively perceives the salient features of disease regions under complex tunnel background interference. In contrast, the invention supplements the traditional feature pyramid structure with a bottom-up branch; this bidirectional structure enables two-way information flow and markedly improves disease detection quality while leaving running time almost unchanged.
3) Traditional pyramid networks usually fuse inter-layer features by simple addition, which easily introduces background noise into the deep learning network. Addressing the large scale and few channels of shallow feature maps on the one hand, and the rich semantic information and many channels of deep feature maps on the other, the invention establishes a low-dimensional and a high-dimensional feature fusion module respectively; this differentiated inter-layer feature enhancement learning mechanism greatly improves the network's ability to extract disease features and better suppresses background noise.
It should be noted that, without departing from the spirit and scope of the present invention, those skilled in the art may make appropriate changes or modifications to the above embodiments; for example, network models other than a deep residual network may serve as the basic feature extraction module, and the invention does not limit the number of extracted feature map layers, the convolution kernel size, the feature map dimensions, and the like.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, Python, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A subway tunnel surface disease feature extraction method comprises the following steps:
constructing a deep learning model for feature extraction, wherein the model comprises a backbone network, a first branch network and a second branch network; the backbone network takes an original image as input and extracts a plurality of original feature maps from it; the first and second branch networks are organized as pyramid structures, the first branch network performing top-down forward feature fusion on the features extracted by the backbone network and the second branch network performing bottom-up reverse feature fusion on them;
and training the deep learning model with the labeled regions of real images as the target and a set loss function as the constraint, for use in detecting and identifying subway tunnel surface diseases.
2. The method of claim 1, wherein the first branch network first reduces the channel counts of the original feature maps extracted by the backbone network to the same number and then performs top-down feature fusion, comprising:
upsampling the top-layer feature so that its size matches the channel-reduced second-top feature map, and inputting the two into a fusion module based on inter-layer feature enhancement to obtain a fused feature;
upsampling the fused feature so that its size matches the channel-reduced feature map below the second-top one, inputting the two into a fusion module based on inter-layer feature enhancement to obtain a fused feature, and so on; the features generated by feature fusion are output as the predicted feature maps of forward feature fusion.
3. The method of claim 1, wherein the second branch network performs the following steps:
downsampling the first bottom-layer feature map extracted by the backbone network so that its size matches the channel-reduced second bottom-layer feature map, and fusing the two with inter-layer feature enhancement to obtain a fused feature;
downsampling the fused feature so that its size matches the channel-reduced third bottom-layer feature map, fusing the two with inter-layer feature enhancement to obtain a fused feature, and so on; the features generated by feature fusion are output as the predicted feature maps of reverse feature fusion.
4. The method according to claim 2 or 3, wherein, during feature fusion, a spatial attention mechanism is used for fusion determined to be low-dimensional: the feature-space positions in the input feature map that benefit segmentation of the disease target are obtained, and spatial feature weight coefficients are quantized and output.
5. The method according to claim 4, wherein the feature-space positions that benefit segmentation of the disease target are obtained through iterative interactive training of a plurality of residual networks.
6. The method according to claim 2 or 3, wherein, during feature fusion, a channel attention mechanism is used for fusion determined to be high-dimensional: the input features are first globally pooled, the relationships among channels are learned through a fully connected operation to obtain a different weight for each channel, and the channel weights are finally multiplied with the original input features to obtain a channel-enhanced output.
7. The method of claim 1, wherein the loss function is the sum of the error losses of all predicted feature maps of the deep learning model.
8. A subway tunnel surface disease detection method comprises the following steps:
acquiring an image of the subway tunnel surface to be inspected, inputting it into the trained deep learning model obtained by the method of any one of claims 1 to 7 for feature extraction, and identifying subway tunnel surface diseases based on the extracted features.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
10. A computer device comprising a memory and a processor, on which memory a computer program is stored which is executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the processor executes the program.
CN202110420628.7A 2021-04-19 2021-04-19 Subway tunnel surface disease feature extraction method Active CN113313668B (en)

Priority Applications (1)

Application Number: CN202110420628.7A; Priority date: 2021-04-19; Filing date: 2021-04-19; Title: Subway tunnel surface disease feature extraction method (granted as CN113313668B)

Applications Claiming Priority (1)

Application Number: CN202110420628.7A; Priority date: 2021-04-19; Filing date: 2021-04-19; Title: Subway tunnel surface disease feature extraction method (granted as CN113313668B)

Publications (2)

Publication Number: CN113313668A; Publication Date: 2021-08-27
Publication Number: CN113313668B; Publication Date: 2022-09-27

Family

ID=77372256

Family Applications (1)

Application Number: CN202110420628.7A; Status: Active; Granted publication: CN113313668B; Priority date: 2021-04-19; Filing date: 2021-04-19; Title: Subway tunnel surface disease feature extraction method

Country Status (1)

Country Link
CN (1) CN113313668B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109029381A * 2018-10-19 2018-12-18 Shijiazhuang Tiedao University Tunnel crack detection method, system and terminal device
CN111523410A * 2020-04-09 2020-08-11 Harbin Institute of Technology Video saliency target detection method based on attention mechanism
CN111524117A * 2020-04-20 2020-08-11 Nanjing University of Aeronautics and Astronautics Tunnel surface defect detection method based on feature pyramid network
CN111696077A * 2020-05-11 2020-09-22 Zhejiang University Robot Research Center, Yuyao Wafer defect detection method based on WaferDet network
CN111967396A * 2020-08-18 2020-11-20 Shanghai Eye Control Technology Co., Ltd. Processing method, device and equipment for obstacle detection, and storage medium
CN112232231A * 2020-10-20 2021-01-15 Citycloud Technology (China) Co., Ltd. Pedestrian attribute identification method, system, computer device and storage medium
CN112232232A * 2020-10-20 2021-01-15 Citycloud Technology (China) Co., Ltd. Target detection method
CN112395951A * 2020-10-23 2021-02-23 China University of Geosciences (Wuhan) Domain-adaptive traffic target detection and identification method for complex scenes
CN112633061A * 2020-11-18 2021-04-09 Huaiyin Institute of Technology Lightweight FIRE-DET flame detection method and system
CN112434713A * 2020-12-02 2021-03-02 Ctrip Computer Technology (Shanghai) Co., Ltd. Image feature extraction method and device, electronic equipment and storage medium
CN112560732A * 2020-12-22 2021-03-26 Zhongshan Institute, University of Electronic Science and Technology of China Multi-scale feature extraction network and feature extraction method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MINGXING TAN et al.: "EfficientDet: Scalable and Efficient Object Detection", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114370828A * 2021-12-28 2022-04-19 China Railway Design Corp. Shield tunnel diameter convergence and radial slab staggering detection method based on laser scanning
CN114549947A * 2022-01-24 2022-05-27 Beijing Baidu Netcom Science and Technology Co., Ltd. Model training method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113313668B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
Chun et al. Automatic detection method of cracks from concrete surface imagery using two‐step light gradient boosting machine
Yang et al. Automatic pixel‐level crack detection and measurement using fully convolutional network
Li et al. Seismic fault detection using an encoder–decoder convolutional neural network with a small training set
Hoang et al. Metaheuristic optimized edge detection for recognition of concrete wall cracks: a comparative study on the performances of roberts, prewitt, canny, and sobel algorithms
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
EP3815043A1 (en) Systems and methods for depth estimation via affinity learned with convolutional spatial propagation networks
Li et al. A robust instance segmentation framework for underground sewer defect detection
Pally et al. Application of image processing and convolutional neural networks for flood image classification and semantic segmentation
CN111582175A (en) High-resolution remote sensing image semantic segmentation method sharing multi-scale countermeasure characteristics
CN113313668B (en) Subway tunnel surface disease feature extraction method
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN113744153B (en) Double-branch image restoration forgery detection method, system, equipment and storage medium
Qian et al. Learning pairwise inter-plane relations for piecewise planar reconstruction
Song et al. Pixel-level crack detection in images using SegNet
CN114724155A (en) Scene text detection method, system and equipment based on deep convolutional neural network
CN113313669B (en) Method for enhancing semantic features of top layer of surface defect image of subway tunnel
Yan et al. CycleADC-Net: A crack segmentation method based on multi-scale feature fusion
CN114781499B (en) Method for constructing ViT model-based intensive prediction task adapter
Zuo et al. A remote sensing image semantic segmentation method by combining deformable convolution with conditional random fields
Li et al. Attention‐guided multiscale neural network for defect detection in sewer pipelines
CN115035172A (en) Depth estimation method and system based on confidence degree grading and inter-stage fusion enhancement
Fu et al. Histogram‐based cost aggregation strategy with joint bilateral filtering for stereo matching
CN117455868A (en) SAR image change detection method based on significant fusion difference map and deep learning
Sariturk et al. Comparison of residual and dense neural network approaches for building extraction from high-resolution aerial images
CN117197470A (en) Polyp segmentation method, device and medium based on colonoscope image

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant