CN115619687B - Tunnel lining void radar signal identification method, equipment and storage medium - Google Patents

Tunnel lining void radar signal identification method, equipment and storage medium

Info

Publication number
CN115619687B
CN115619687B (application CN202211638082.3A)
Authority
CN
China
Prior art keywords
data
void
model
image
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211638082.3A
Other languages
Chinese (zh)
Other versions
CN115619687A (en)
Inventor
宋恒
耿天宝
张宜声
程维国
王东杰
路景海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Shuzhi Construction Research Institute Co ltd
Original Assignee
Anhui Shuzhi Construction Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Shuzhi Construction Research Institute Co ltd
Priority to CN202211638082.3A
Publication of CN115619687A
Application granted
Publication of CN115619687B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a tunnel lining void radar signal identification method, equipment and a storage medium, belonging to the field of radar detection. Aiming at the prior-art problems that three-dimensional ground penetrating radar data are difficult to collect in a tunnel because the radar's use is restricted, that the data are therefore scarce, and that models trained on such data easily overfit, the invention provides a tunnel lining void radar signal identification method, equipment and a storage medium: a batch of model files is randomly generated by simulation and each model is simulated in batches, the single-model simulation flow being cycled over every survey line, to generate a simulated void three-dimensional data set; the direct wave is removed from the B-scan images of the void three-dimensional data set and data enhancement is performed; the model is trained to obtain a trained model; and an image to be identified is input into the trained model to obtain the identified image. The model has a small parameter count, which alleviates overfitting on the small-scale tunnel void data.

Description

Tunnel lining void radar signal identification method, equipment and storage medium
Technical Field
The invention relates to the field of radar detection, in particular to a tunnel lining void radar signal identification method, equipment and storage medium.
Background
Ground Penetrating Radar (GPR) is an emerging nondestructive testing technology characterized by accurate positioning, high speed, flexible use and high detection precision, and is widely applied to the detection of unknown shallow underground objects in fields such as urban infrastructure inspection and road and bridge inspection. The ground penetrating radar transmitter continuously emits electromagnetic wave signals into the ground, each received trace forming an A-Scan; the receiver assembles many A-Scan signals into a B-Scan image, in which an underground target typically appears as a hyperbola. Ground penetrating radar detection is therefore essentially the detection of the hyperbolic morphological features of each target in the B-Scan image. Current automatic detection and identification of targets in tunnel structures relies mainly on artificial intelligence, performing model training and inference on large amounts of B-scan data based on two-dimensional features.
Long-term field tunnel surveys show that, unlike other targets, a void appears as a flat spatial structure with a relatively complex multi-solution character: the cross-sectional views corresponding to different survey-line positions over the same void often differ greatly. If the model is trained only on B-scan data swept at a random position over the void, its generalization is poor, changing the sweep position causes detection to fail, and the final recognition performance falls short of expectations. Given this multi-solution character of the tunnel void structure, automatic void detection should consider three-dimensional C-scan data, which carry more information. However, most three-dimensional ground penetrating radars currently on the market are contact radars that are heavy and must be pushed and pulled manually, so real data at the arch and crown positions in a tunnel cannot be collected on a large scale.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the prior-art problems that C-scan data are scarce and difficult to collect because the use of three-dimensional ground penetrating radar in tunnels is restricted, and that algorithm models consequently tend to overfit, the invention provides a tunnel lining void radar signal identification method, equipment and a storage medium, which make data collection simple, use few model parameters, and alleviate overfitting of the model on the small-scale tunnel void data.
2. Technical proposal
The aim of the invention is achieved by the following technical scheme.
A tunnel lining void radar signal identification method comprises the following steps:
collecting three-dimensional real data of void;
randomly generating a batch of model files through modeling and simulation, and simulating each model in batches, the single-model simulation flow being cycled over every survey line and the results fused, to generate a simulated void three-dimensional data set;
preprocessing the three-dimensional data set: removing the direct wave from its B-scan images and performing data enhancement;
labeling the enhanced void three-dimensional data set;
training the model to obtain a trained model;
and inputting an image to be identified into the trained model to obtain the identified image.
Furthermore, the data acquisition adopts an array ground penetrating radar.
Furthermore, the collected data and the data generated through simulation are fused to obtain C-Scan images along the survey-line direction.
Furthermore, the direct wave is removed from the B-Scan images by means of mean filtering.
Further, the specific formula of the mean filtering is:

$$\hat{x}_j = x_j - \frac{1}{N}\sum_{k=1}^{N} x_k$$

wherein $x_j$ is the j-th A-Scan echo data before processing, $\hat{x}_j$ is the j-th A-Scan echo data after processing, and N is the number of A-Scan traces in the B-Scan image.
Further, the training model is an FCN network combining 3-dimensional convolution and 2.5-dimensional convolution, and the loss function is

$$L = \alpha \cdot L_{BCE}(X, Y) + \beta \cdot \left(1 - \frac{2\sum_i x_i y_i}{\sum_i x_i + \sum_i y_i}\right)$$

wherein $L_{BCE}$ denotes the binary cross entropy loss, X represents the estimated value, Y represents the target value, $x_i$ represents the estimated value of a pixel in X, $y_i$ represents the target value of the corresponding pixel in Y, and α and β are coefficients;
the evaluation index adapted_rand_error is calculated as:

$$\text{adapted\_rand\_error} = 1 - \frac{2pr}{p + r}$$

p: the adapted Rand precision, the number of pixel pairs with the same label in the test label image and the real image, divided by the total number of pixel pairs in the test image;
r: the adapted Rand recall, the number of pixel pairs with the same label in the test label image and the real image, divided by the total number of pixel pairs in the real image.
A virtual device comprising an application program and an operating system, wherein the application program executes the identification method according to any one of the above.
A data processing apparatus, the data processing apparatus comprising: a memory, a processor, and a data processing program stored on the memory and executable on the processor, wherein the data processing program, when executed by the processor, implements the steps of the method according to any one of the above.
Preferably, a data processing device includes, but is not limited to, a smart phone, a tablet computer or a portable computer.
A storage medium comprising a stored application program, wherein the application program executes the identification method according to any one of the above.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
By combining 3-dimensional convolution with 2.5-dimensional convolution, a corresponding identification model is constructed that fully fits and learns the multi-solution void, reduces the parameter count, and alleviates overfitting on small-scale void data. The method is suitable not only for void detection but also serves as an important reference for identifying other targets with spatially multi-solution structures.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional void identification algorithm of the present invention;
FIG. 2 is a flow chart of a batch simulation of the present invention;
FIG. 3 is a diagram showing the comparative effect before and after filtering;
fig. 4 is a three-dimensional void identification effect diagram.
Detailed Description
The invention will now be described in detail with reference to the drawings and the accompanying specific examples.
Examples
Aiming at the problems that C-scan data are scarce and hard to acquire because the use of three-dimensional ground penetrating radar in tunnels is restricted, simulation software is used, in combination with the tunnel design parameters, to perform forward modeling of a real tunnel and generate C-scan data; after data enhancement, the data volume basically meets the requirements of three-dimensional image segmentation. In the field of three-dimensional image segmentation, three-dimensional convolution gives the best performance, but its computational cost is large, and for thick-layer data with poor inter-layer continuity the performance advantage of a three-dimensional model is not obvious and overfitting often occurs. Adjacent feature slices can instead be treated directly as multi-channel input, similar to the RGB three channels of a natural image; to distinguish it from the method above, this approach is called 2.5-dimensional convolution.
The overall framework of the three-dimensional void identification method is shown in FIG. 1. The model structure is divided into three parts: downsampling, multi-channel processing and upsampling. The downsampling part is the two layers on the left of the architecture diagram; each layer uses three-dimensional convolution kernels of size 3x3x3 with stride 1, the feature-map edges are padded so the feature-map size is unchanged, the activation function is PReLU, and batch normalization is added to accelerate convergence. The layers differ only in the number of convolution kernels, so the feature map gradually thickens. The downward arrow is a max-pooling step that halves the image size. The multi-channel processing part is the lower two layers in the figure: the high-dimensional feature map is treated as the multiple channels of one image. Assuming the feature map has 128 channels, 64 convolutions of size 3x3x128 are applied with the rest of the processing unchanged, yielding a feature map of the same size with half as many channels, and so on. The upsampling part is the two layers on the right of the figure, processed successively by upsampling, concatenation and three-dimensional convolution. Upsampling enlarges the image by deconvolution with stride (2, 2, 2). Feature maps of the same size at the same level are then concatenated, which improves the model's prediction precision. The three-dimensional convolution here is the same as in the downsampling part. The final output feature map is restored to the input size.
Unlike mainstream FCN networks, to address the multi-solution nature of the void structure and the large computational cost of 3-dimensional convolution, this scheme combines 3-dimensional and 2.5-dimensional convolution. The input image is first converted into high-dimensional (thick-layer) features by 2 layers of 3-dimensional convolution; these features are then treated as multiple channels and processed with the equivalent 2-dimensional convolution, fusing them into a feature map whose channel dimension after each fusion operation is only half that of the layer above; the subsequent 3-dimensional and 2.5-dimensional convolutions are used alternately. The backbone of the model is the classical encoder-decoder structure of 3D-UNet. The model can learn the spatial relations between adjacent B-scan images to ensure full fitting of the multi-solution void, has fewer parameters, and alleviates overfitting on the small-scale tunnel void data.
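The patent publishes no source code; the following PyTorch sketch (the framework is our choice, not stated in the source) is one plausible reading of the architecture just described. The channel counts, input patch size and exact placement of the skip connections are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Conv3dBlock(nn.Module):
    """3x3x3 conv, stride 1, same-size padding, BatchNorm + PReLU, per the description."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.PReLU(),
        )

    def forward(self, x):
        return self.block(x)

class Conv25dBlock(nn.Module):
    """'2.5-D' convolution: each depth slice of the high-dimensional feature map is
    treated as one multi-channel 2-D image (e.g. 128 channels -> 64), halving channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv2d = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.PReLU(),
        )

    def forward(self, x):                       # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)   # fold slices into batch
        x = self.conv2d(x)
        return x.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)

class VoidFCN(nn.Module):
    """Encoder-decoder (3D-UNet-like) alternating 3-D and 2.5-D convolution."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = Conv3dBlock(1, 64), Conv3dBlock(64, 128)
        self.pool = nn.MaxPool3d(2)                        # halves the image size
        self.mid1, self.mid2 = Conv25dBlock(128, 64), Conv25dBlock(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 128, kernel_size=2, stride=2)
        self.dec1 = Conv3dBlock(256, 128)                  # 128 skip + 128 upsampled
        self.up2 = nn.ConvTranspose3d(128, 64, kernel_size=2, stride=2)
        self.dec2 = Conv3dBlock(128, 64)                   # 64 skip + 64 upsampled
        self.head = nn.Conv3d(64, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # downsampling part
        e2 = self.enc2(self.pool(e1))
        m = self.mid2(self.mid1(self.pool(e2)))            # multi-channel (2.5-D) part
        d1 = self.dec1(torch.cat([self.up1(m), e2], 1))    # upsampling + skip fusion
        d2 = self.dec2(torch.cat([self.up2(d1), e1], 1))
        return torch.sigmoid(self.head(d2))                # restored to the input size

# e.g. a (1, 1, 32, 64, 64) C-Scan patch -> (1, 1, 32, 64, 64) void-probability map
print(VoidFCN()(torch.randn(1, 1, 32, 64, 64)).shape)
```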
The specific void identification steps are as follows:
acquisition of a data set.
In the data acquisition stage, real three-dimensional void data are first collected with an array ground penetrating radar. Considering that the number of collected samples is insufficient, simulated void three-dimensional data are generated in batches under random conditions by means of modeling and software simulation; the simulation may use GprMax3.0 software, though other suitable software is also possible.
In the modeling process, a three-dimensional all-zero array of size (400, 400, 400) is created; an irregular geometric body is built randomly by means of a rand function and the element values at the corresponding positions in the array are assigned 2; a waterproof board is built and placed under the void, with the values at its positions assigned 3; the data are finally saved and the medium materials configured, so that the medium type at each position corresponds, through the value at that position in the array, to a line index of the medium-material document.
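As a concrete illustration of this modeling step, the following Python sketch builds one such (400, 400, 400) array. The ellipsoidal void shape, the random value ranges and the file name are assumptions, since the source specifies only the array size and the value assignments 2 and 3.

```python
import numpy as np

rng = np.random.default_rng()

# (400, 400, 400) all-zero array; voxel values index rows of the medium-material file:
# 0 = background medium, 2 = irregular void body, 3 = waterproof board beneath it.
model = np.zeros((400, 400, 400), dtype=np.int8)

# Hypothetical irregular void: a random ellipsoid built with rand-style draws.
cx, cy, cz = rng.integers(120, 280, size=3)            # random centre
ax_, ay, az = rng.integers(20, 60, size=3)             # random semi-axes
x, y, z = np.ogrid[:400, :400, :400]
void = ((x - cx) / ax_) ** 2 + ((y - cy) / ay) ** 2 + ((z - cz) / az) ** 2 <= 1.0
model[void] = 2

# Waterproof board: a thin horizontal slab placed directly under the void.
z_bottom = int(np.argwhere(void)[:, 2].max())
model[cx - 80:cx + 80, cy - 80:cy + 80, z_bottom + 1:z_bottom + 4] = 3

np.save("model_0001.npy", model)
```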
Software simulation performs 3D modeling from an input file containing the model design parameters. The 3D forward model design covers the model volume, grid size, medium thickness, electromagnetic parameters, the shape, size and position of the underground target body, the positions and moving step of the transmitting and receiving antennas, and the wave-source type and frequency. To make the simulated echo data match the data structure acquired by the real array ground penetrating radar as closely as possible, while accounting for the simulation run time and the requirements of model stability and numerical dispersion, the simulation space is set to 0.8 m x 0.8 m x 0.8 m, the grid size to dx = dy = dz = 0.002 m, the time window to 40 ns, and the antenna frequency to 400 MHz, with a Ricker wave as the excitation source of the model. A PML boundary condition is set within 10 cm of each boundary of the model's three-dimensional space to eliminate boundary reflection, and the moving step is 2 cm, close to the actual working state of a vehicle-mounted array ground penetrating radar. For the media, the modeling file and the medium-material file are read directly; the pipeline direction is longitudinal; each survey line has 75 measurement points, with 30 survey lines in total spaced 2 cm apart, corresponding to the number of array antenna channels and the antenna spacing in actual acquisition.
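The following sketch writes an input-file header for one survey line from the parameters above. The command names follow gprMax 3 input syntax, but the antenna coordinates, waveform identifier and file name are illustrative assumptions and should be checked against the gprMax documentation.

```python
# Writes a gprMax-style input header matching the forward-model parameters above.
def write_line_input(path: str, tx_y: float) -> None:
    lines = [
        "#domain: 0.8 0.8 0.8",                # 0.8 m x 0.8 m x 0.8 m simulation space
        "#dx_dy_dz: 0.002 0.002 0.002",        # dx = dy = dz = 0.002 m
        "#time_window: 40e-9",                 # 40 ns time window
        "#pml_cells: 50",                      # 10 cm / 2 mm = 50 PML cells per face
        "#waveform: ricker 1 400e6 src",       # 400 MHz Ricker excitation
        f"#hertzian_dipole: z 0.10 {tx_y:.2f} 0.78 src",  # transmitter (illustrative)
        f"#rx: 0.14 {tx_y:.2f} 0.78",          # receiver at a fixed Tx-Rx offset
        "#src_steps: 0.02 0 0",                # 2 cm move step per trace
        "#rx_steps: 0.02 0 0",
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_line_input("model_0001_line00.in", tx_y=0.10)
```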
B-Scan images along the survey-line direction are obtained by fusing the simulation output files; as shown in figure 3, stacking multiple survey-line B-Scan images yields a three-dimensional C-Scan image. In data form, the 30 survey-line B-Scan images are combined into one three-dimensional matrix and stored as a file.
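A minimal sketch of this fusion step, assuming one merged HDF5 output file per survey line; the dataset path 'rxs/rx1/Ez' follows gprMax's documented output layout, while the file names are assumptions.

```python
import h5py
import numpy as np

# Stack the 30 per-line B-Scans into one C-Scan volume.
b_scans = []
for line in range(30):
    with h5py.File(f"model_0001_line{line:02d}_merged.out", "r") as f:
        b_scans.append(np.array(f["rxs/rx1/Ez"]))   # (samples, traces) for one line

c_scan = np.stack(b_scans, axis=0)   # (30, samples, traces): 30 lines, 2 cm apart
np.save("model_0001_cscan.npy", c_scan)
```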
Data is generated in batches.
The flow chart of batch data generation is shown in fig. 2. Since 3D modeling simulation takes a lot of time and the steps are highly repetitive, batch generation of simulation data is implemented with Python scripts, though any other means capable of batch generation would do. A batch of model files is first generated randomly, and each model is then simulated in batch. The single-model simulation flow loops over every survey line, simulating and generating a B-scan image per line; after the simulation finishes, all survey lines are saved in the same file. Each model is simulated in turn until all model files are finished, finally producing the simulated void three-dimensional data set.
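A minimal batch-driver sketch of this flow. `python -m gprMax file.in -n 75` is the documented gprMax command-line form for running 75 traces per line; the directory layout and pre-written per-line input files are assumptions.

```python
import subprocess
from pathlib import Path

# For each randomly generated model, simulate all 30 survey lines, 75 traces each.
for model_dir in sorted(Path("models").iterdir()):
    for line in range(30):                           # 30 survey lines per model
        line_in = model_dir / f"line{line:02d}.in"   # pre-written per-line input file
        subprocess.run(
            ["python", "-m", "gprMax", str(line_in), "-n", "75"],
            check=True,
        )
```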
Preprocessing of the data set.
In this stage the direct wave is removed from the B-scan images. Because the direct wave is strong relative to the useful signal and easily masks it beyond recognition, it is removed by mean filtering. As shown in fig. 3, the vertically arranged group (a) on the left shows the images before filtering and the vertically arranged group (b) on the right shows them after filtering.
Assuming that the ground penetrating radar B-Scan image consists of N A-Scan echo traces, each containing M sampling points, i.e., each B-Scan image is an M x N matrix, the mathematical expression of mean filtering is:

$$\hat{x}_j = x_j - \frac{1}{N}\sum_{k=1}^{N} x_k, \qquad j = 1, \ldots, N$$

where $x_j$ is the j-th A-Scan echo data before processing and $\hat{x}_j$ is the j-th A-Scan echo data after processing; the subtraction acts per sampling point, i.e., the mean of each row of the matrix is removed. The direct wave usually appears in the image as a horizontal straight line of strong energy, and by subtracting the mean of all data in each row, mean filtering effectively weakens such transverse energy signals. As shown in fig. 3, the left side is the original image and the right side the filtered image: the direct wave is filtered out, the void becomes more conspicuous, and the transverse wave at the top of the image is essentially removed.
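Expressed in code, the mean filtering above reduces to a one-line NumPy operation over the B-Scan matrix:

```python
import numpy as np

def remove_direct_wave(b_scan: np.ndarray) -> np.ndarray:
    """Mean filtering: subtract from every A-Scan the mean of all N A-Scans.

    b_scan has shape (M, N): M sampling points per trace, N A-Scan traces,
    so the mean is taken across traces for each sampling row, which removes
    the strong horizontal (direct-wave) energy.
    """
    return b_scan - b_scan.mean(axis=1, keepdims=True)
```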
To further expand the data volume and its diversity, data enhancement is also applied on top of the simulation data, mainly by translation, rotation and scaling; a sketch of these modes follows the table below. The composition of the final data set is shown in the following table:
[Table: composition of the final data set (table image not reproducible from the source)]
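A sketch of the three enhancement modes on one C-Scan volume, using SciPy; the shift, angle and scale ranges are assumptions, as the source names only the modes.

```python
import numpy as np
from scipy import ndimage

def augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Translation / rotation / scaling of one C-Scan volume; ranges are assumed."""
    out = ndimage.shift(volume, rng.integers(-10, 11, size=3), order=1, mode="nearest")
    out = ndimage.rotate(out, rng.uniform(-10, 10), axes=(1, 2), reshape=False,
                         order=1, mode="nearest")
    out = ndimage.zoom(out, rng.uniform(0.9, 1.1), order=1)   # changes the shape:
    # crop or pad back to the original size before adding the sample to the data set.
    return out

aug = augment(np.load("model_0001_cscan.npy"), np.random.default_rng(0))
```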
and labeling the void data.
For labeling the void data, this embodiment uses 3D-Slicer software, which is widely applied to medical image labeling; other suitable labeling software or programs may also be used.
And (5) model training.
The specific parameter configuration of the training phase is shown in the following table:
[Table: training-phase parameter configuration (table image not reproducible from the source)]
The loss function is set as a linear combination of the binary cross entropy loss (Binary Cross Entropy Loss, BCE) and the Dice loss function, calculated as shown in the formula:

$$L = \alpha \cdot L_{BCE}(X, Y) + \beta \cdot \left(1 - \frac{2\sum_i x_i y_i}{\sum_i x_i + \sum_i y_i}\right)$$

where X represents the estimated value (output), Y represents the target value (target), $x_i$ represents the estimated value of a pixel in X, and $y_i$ represents the target value of the corresponding pixel in Y. The coefficients α and β are set as required. The Dice coefficient is a widely used metric in computer vision, but the plain Dice term is very unfavorable for segmenting small objects, so the sizes of α and β need to be adjusted according to the foreground size; because the void targets here are of moderate size, α = 0.1 and β = 0.9.
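A sketch of this combined loss in PyTorch (the framework is our assumption); the smoothing term `eps` is a common numerical safeguard not stated in the source:

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """L = alpha * BCE + beta * Dice, with alpha = 0.1 and beta = 0.9 as in the text."""
    def __init__(self, alpha: float = 0.1, beta: float = 0.9, eps: float = 1e-6):
        super().__init__()
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.bce = nn.BCELoss()

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: predicted per-voxel probabilities in [0, 1]; y: binary target mask.
        dice = 1.0 - (2.0 * (x * y).sum() + self.eps) / (x.sum() + y.sum() + self.eps)
        return self.alpha * self.bce(x, y) + self.beta * dice

loss = BCEDiceLoss()(torch.rand(2, 1, 16, 32, 32),
                     torch.randint(0, 2, (2, 1, 16, 32, 32)).float())
```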
The evaluation index adapted_rand_error is calculated as:

$$\text{adapted\_rand\_error} = 1 - \frac{2pr}{p + r}$$

p: the adapted Rand precision, the number of pixel pairs having the same label in the test label image and the real image, divided by the total number of pixel pairs in the test image.
r: the adapted Rand recall, the number of pixel pairs having the same label in the test label image and the real image, divided by the total number of pixel pairs in the real image.
All 6776 groups of three-dimensional void data were randomly divided into training and test sets at a ratio of 4:1, with the final index values shown in the following table. The specific recognition effect is shown in fig. 4: the vertically arranged group (a) on the left shows the images before recognition, and the vertically arranged group (b) on the right shows them after recognition.
[Table: final evaluation-index values on the test set (table image not reproducible from the source)]
In this scheme, because the three-dimensional void data set constructed here is small, the dimensionality of the model's mid-level features is kept low; in practical use, the dimensionality of the mid-level high-dimensional features can be adjusted dynamically to match data sets of different scales, alleviating overfitting or underfitting. Depending on the sample distribution, the loss function and evaluation index may optionally be changed. Optional loss functions include the binary cross entropy loss (BCE Loss), the pixel-wise cross entropy loss (Cross Entropy Loss), the weighted cross entropy loss (Weighted Cross Entropy Loss) and the generalised Dice loss (Generalised Dice Loss); optional evaluation indexes include Mean IoU, the Dice coefficient and Average Precision.
The method combines 3-dimensional and 2.5-dimensional convolution, treating mid-level high-dimensional features as multiple channels to be processed by the equivalent 2-dimensional convolution and fused into a feature map. It can fully fit and learn the multi-solution void, has fewer parameters, and alleviates overfitting on small-scale void data. The method is suitable not only for void detection but also offers an important reference for identifying other targets with spatially multi-solution structures. It also provides an advance reference for using data from non-contact three-dimensional ground penetrating radar; once that technology matures, the method can be applied directly to its detection and analysis.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, virtual system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The corresponding virtual device may include an application program and an operating system, where the application program executes the identification method described above. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The memory is an example of a computer-readable medium. Computer-readable media, including both volatile and non-volatile, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element. The terms first, second, etc. are used to denote names, not any particular order. The foregoing describes the invention and its embodiments schematically; the description is not limiting, and the invention may be implemented in other specific forms without departing from its spirit or essential characteristics. The drawings depict only embodiments of the invention, so the actual construction is not limited to them, and any reference number in the claims is not intended to limit the claims. Therefore, structural manners and embodiments similar to this technical scheme, designed without creative effort by one of ordinary skill in the art informed by this disclosure and without departing from the gist of the present invention, are all considered to be within the protection scope of this patent.

Claims (9)

1. A tunnel lining void radar signal identification method comprises the following steps:
collecting three-dimensional real data of void;
randomly generating a batch of model files by simulation and simulating each model in batches, the single-model simulation flow being cycled over every survey line and the results fused, to generate a simulated void three-dimensional data set;
removing the direct wave from the B-scan images of the void three-dimensional data set, and performing data enhancement;
labeling the enhanced void three-dimensional data set;
training the model to obtain a trained model; the main structure of the model is the classical encoder-decoder structure of 3D-UNet; the training model is an FCN network combining 3-dimensional convolution and 2.5-dimensional convolution; the input image is first converted into high-dimensional features through 2 layers of 3-dimensional convolution, which are then treated as multiple channels and processed by the equivalent 2-dimensional convolution; the multi-channel processing part has a 2-layer structure and fuses the channels into a feature map; after each fusion operation the channel dimension of the feature map is only half that of the layer above, and the subsequent 3-dimensional and 2.5-dimensional convolutions are used alternately;
and inputting an image to be identified into the trained model to obtain the identified image.
2. The method for identifying the tunnel lining void radar signals according to claim 1, wherein the data acquisition adopts an array ground penetrating radar.
3. The method for identifying the tunnel lining void radar signal according to claim 1 or 2, wherein the acquired data are fused with the data generated through simulation to obtain C-Scan images along the survey-line direction.
4. A tunnel lining void radar signal identification method according to claim 3, wherein the direct wave of the B-Scan image is removed by means of mean filtering.
5. The method for identifying the tunnel lining void radar signal according to claim 4, wherein the specific formula of the mean filtering is:

$$\hat{x}_j = x_j - \frac{1}{N}\sum_{k=1}^{N} x_k$$

wherein $x_j$ is the j-th A-Scan echo data before processing, $\hat{x}_j$ is the j-th A-Scan echo data after processing, and N is the number of A-Scan traces in the B-Scan image.
6. A tunnel lining void radar signal identification method according to claim 4, wherein the loss function of the training model is

$$L = \alpha \cdot L_{BCE}(X, Y) + \beta \cdot \left(1 - \frac{2\sum_i x_i y_i}{\sum_i x_i + \sum_i y_i}\right)$$

wherein $L_{BCE}$ denotes the binary cross entropy loss, X represents the estimated value, Y represents the target value, $x_i$ represents the estimated value of a pixel in X, $y_i$ represents the target value of the corresponding pixel in Y, and α and β are coefficients;
the evaluation index adapted_rand_error is calculated as:

$$\text{adapted\_rand\_error} = 1 - \frac{2pr}{p + r}$$

p: the adapted Rand precision, the number of pixel pairs having the same label in the test label image and the real image, divided by the total number of pixel pairs in the test image;
r: the adapted Rand recall, the number of pixel pairs having the same label in the test label image and the real image, divided by the total number of pixel pairs in the real image.
7. A data processing apparatus, characterized in that the data processing apparatus comprises: memory, a processor and a data processing program stored on the memory and executable on the processor, which data processing program, when executed by the processor, implements the identification method according to any one of claims 1 to 6.
8. A data processing device according to claim 7, wherein the data processing device comprises, but is not limited to, a smart phone, a tablet computer or a portable computer.
9. A storage medium comprising a stored application program, wherein the application program performs the identification method of any one of claims 1 to 6.
CN202211638082.3A 2022-12-20 2022-12-20 Tunnel lining void radar signal identification method, equipment and storage medium Active CN115619687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211638082.3A CN115619687B (en) 2022-12-20 2022-12-20 Tunnel lining void radar signal identification method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211638082.3A CN115619687B (en) 2022-12-20 2022-12-20 Tunnel lining void radar signal identification method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115619687A CN115619687A (en) 2023-01-17
CN115619687B true CN115619687B (en) 2023-05-09

Family

ID=84881057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211638082.3A Active CN115619687B (en) 2022-12-20 2022-12-20 Tunnel lining void radar signal identification method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115619687B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3893200A1 (en) * 2019-10-23 2021-10-13 GE Precision Healthcare LLC Method, system and computer readable medium for automatic segmentation of a 3d medical image

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107624193A (en) * 2015-04-29 2018-01-23 西门子公司 The method and system of semantic segmentation in laparoscope and endoscope 2D/2.5D view data
CN112085736B (en) * 2020-09-04 2024-02-02 厦门大学 Kidney tumor segmentation method based on mixed-dimension convolution
CN112232392B (en) * 2020-09-29 2022-03-22 深圳安德空间技术有限公司 Data interpretation and identification method for three-dimensional ground penetrating radar
US20220114699A1 (en) * 2020-10-09 2022-04-14 The Regents Of The University Of California Spatiotemporal resolution enhancement of biomedical images
CN112462346B (en) * 2020-11-26 2023-04-28 西安交通大学 Ground penetrating radar subgrade disease target detection method based on convolutional neural network
CN112700429B (en) * 2021-01-08 2022-08-26 中国民航大学 Airport pavement underground structure disease automatic detection method based on deep learning
CN114066883A (en) * 2021-12-20 2022-02-18 昆明理工大学 Liver tumor segmentation method based on feature selection and residual fusion
CN114548278A (en) * 2022-02-22 2022-05-27 西安建筑科技大学 In-service tunnel lining structure defect identification method and system based on deep learning
CN115239812A (en) * 2022-07-18 2022-10-25 湖南大学 Underground collapse hidden danger real-time detection method and system based on 3D task parallel network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3893200A1 (en) * 2019-10-23 2021-10-13 GE Precision Healthcare LLC Method, system and computer readable medium for automatic segmentation of a 3d medical image

Also Published As

Publication number Publication date
CN115619687A (en) 2023-01-17

Similar Documents

Publication Publication Date Title
CN110807788B (en) Medical image processing method, medical image processing device, electronic equipment and computer storage medium
CN110287932B (en) Road blocking information extraction method based on deep learning image semantic segmentation
CN111275714B (en) Prostate MR image segmentation method based on attention mechanism 3D convolutional neural network
Qiu et al. Application of an improved YOLOv5 algorithm in real-time detection of foreign objects by ground penetrating radar
Su et al. Deep convolutional neural network–based pixel-wise landslide inventory mapping
Zhang et al. Polarimetric SAR terrain classification using 3D convolutional neural network
Beucher et al. Interpretation of convolutional neural networks for acid sulfate soil classification
CN113095409A (en) Hyperspectral image classification method based on attention mechanism and weight sharing
Du et al. PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage
CN115471448A (en) Artificial intelligence-based thymus tumor histopathology typing method and device
CN115937697A (en) Remote sensing image change detection method
CN107292039B (en) UUV bank patrolling profile construction method based on wavelet clustering
Hou et al. A pointer meter reading recognition method based on YOLOX and semantic segmentation technology
Loebel et al. Extracting glacier calving fronts by deep learning: The benefit of multispectral, topographic, and textural input features
CN111681204A (en) CT rib fracture focus relation modeling method and device based on graph neural network
Cromwell et al. Lidar cloud detection with fully convolutional networks
Zhang et al. Application of deep generative networks for SAR/ISAR: a review
CN115619687B (en) Tunnel lining void radar signal identification method, equipment and storage medium
Liu et al. Advances in automatic identification of road subsurface distress using ground penetrating radar: State of the art and future trends
Berger et al. Automated ice-bottom tracking of 2D and 3D ice radar imagery using Viterbi and TRW-S
CN113392705A (en) Method for identifying pipeline leakage target in desert area based on convolutional neural network
Lguensat et al. Convolutional neural networks for the segmentation of oceanic eddies from altimetric maps
Qian et al. Deep Learning-Augmented Stand-off Radar Scheme for Rapidly Detecting Tree Defects
Sunandini et al. Significance of Atrous Spatial Pyramid Pooling (ASPP) in Deeplabv3+ for Water Body Segmentation
Li Road extraction from remote sensing images using parallel softplus networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant