CN116256701A - Ground penetrating radar mutual interference wave suppression method and system based on deep learning - Google Patents
- Publication number: CN116256701A
- Application number: CN202310549759.4A
- Authority: CN (China)
- Prior art keywords: mutual interference, detection, network, interference wave, layer
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01S7/36 — Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures (G—Physics; G01S—radio direction-finding; radar)
- G01S13/885 — Radar or analogous systems specially adapted for ground probing
- Y02T10/40 — Engine management systems (Y02T—climate change mitigation technologies related to transportation)
Abstract
The invention provides a deep-learning-based method and system for suppressing ground penetrating radar mutual interference waves, wherein the method comprises the following steps: constructing a simulated detection scene and an initial mutual interference wave suppression network; generating a network data set through simulated detection in the simulated detection scene; dividing the network data set into a training data set and a verification data set; training the initial mutual interference wave suppression network with the training data set to obtain a trained basic mutual interference wave suppression network; verifying the basic mutual interference wave suppression network with the verification data set and determining its optimal model parameters; adjusting the parameters of the basic network according to the optimal model parameters to obtain the final mutual interference wave suppression network; and acquiring a ground penetrating radar image containing mutual interference and inputting it into the mutual interference wave suppression network to obtain the image after mutual interference wave suppression. The invention can suppress mutual interference waves when the ground penetrating radar detects scenes containing multiple targets.
Description
Technical Field
The invention belongs to the technical field of ground penetrating radar data processing, and in particular relates to a deep-learning-based method and system for suppressing ground penetrating radar mutual interference waves.
Background
Ground penetrating radar (Ground Penetrating Radar, GPR for short) is a common nondestructive detection method that locates, images, and identifies underground targets by acquiring their scattered echoes. It offers high efficiency, strong interference resistance, and strong penetrating capability, and is widely applied in civil engineering, geological exploration, military, and other fields. The high-frequency electromagnetic waves emitted by the GPR are reflected when they encounter underground media with different electromagnetic characteristics, and the position, structure, physical characteristics, and other information of an underground target can be deduced from the waveform, intensity, arrival time, and other parameters of the echo received by the receiving antenna, so that detection of underground objects can be regarded as detection of GPR signals. However, when multiple targets exist in the detection scene, electromagnetic waves undergo multiple scattering between the targets, generating mutual interference electromagnetic waves between pairs of targets. These mutual interference waves couple with the real echo signals, so that false-target mutual interference signals appear between the echo signals of the real targets in the GPR B-scan image, increasing the difficulty of underground target analysis and feature extraction.
Mutual interference waves are one type of GPR interference. In applications such as underground pipeline, cavity, and mine detection, multi-target GPR B-scan images are judged and interpreted by analysts with rich experience. However, owing to the randomness of the detection-scene medium, the diversity of detection target types, and the uncertainty of detection target distribution, the forms of mutual interference waves between targets vary widely, which makes the GPR B-scan interpretation process time-consuming and error-prone. GPR interference wave suppression has therefore received extensive attention from scholars at home and abroad. In the past few years, processing methods based on subspace techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Singular Value Decomposition (SVD), have been used for GPR interference wave suppression and are applicable to mutual interference wave suppression. These methods separate the target echo and the mutual interference wave on the basis of the difference in their signal strengths; when the signal strengths are similar, however, the two cannot be separated well, and part of the target echo information that should be preserved may be lost.
In addition, techniques based on low-rank sparse representation and morphological component analysis have also been used for GPR interference wave suppression. However, the performance of these methods depends largely on the selection of parameters or dictionaries, and the appropriate parameters or dictionaries differ with the interference wave morphology in different scenes. Moreover, these methods are weak at identifying the non-uniform mutual interference waves caused by electromagnetic wave scattering among multiple underground targets in GPR B-scan images, and they require considerable time to process input images, so they are not suitable for application scenarios with high real-time requirements. On this basis, improving the detection capability of GPR when the subsurface contains multiple target scenes is an urgent problem.
Disclosure of Invention
The invention provides a ground penetrating radar cross-interference wave suppression method and system based on deep learning, which are used for solving the problem that GPR detection is easy to be interfered by cross-interference waves when facing a plurality of underground target scenes.
In a first aspect, the present invention provides a method for suppressing mutual interference of ground penetrating radar based on deep learning, the method comprising the following steps:
constructing a simulated detection scene and an initial mutual interference wave suppression network;
generating a network data set based on the simulated detection scene and through simulated detection;
dividing the network data set into a training data set and a verification data set;
training the initial mutual interference wave suppression network by using the training data set to obtain a trained basic mutual interference wave suppression network;
verifying the basic mutual interference wave suppression network through the verification data set, and determining optimal model parameters of the basic mutual interference wave suppression network;
adjusting parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain a mutual interference wave suppression network;
and obtaining a ground penetrating radar image with mutual interference, and inputting the ground penetrating radar image into the mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression.
Optionally, the generating the network data set based on the simulated detection scene and through the simulated detection includes the following steps:
based on the empty background of the simulated detection scene, acquiring an empty background detection image through simulation detection;
randomly setting two detection targets in the simulated detection scene, and obtaining a first detection image set through the simulated detection;
generating a double-target detection data set by combining the first detection image set and the empty background detection image, wherein the double-target detection data set comprises double-target mutual interference wave detection images;
randomly setting three detection targets in the simulated detection scene, and obtaining a second detection image set through the simulated detection;
generating a three-target detection data set by combining the second detection image set and the empty background detection image, wherein the three-target detection data set comprises three-target mutual interference wave detection images;
and summarizing the dual-target detection data set and the three-target detection data set to obtain a network data set.
Optionally, the initial mutual interference wave suppression network includes a trunk feature extraction module, a residual dense module, a self-attention mechanism control module, a feature layer up-sampling module, and a convolution prediction network, where the residual dense module and the self-attention mechanism control module form the skip connection part of the initial mutual interference wave suppression network.
Optionally, the training the initial mutual interference suppression network by using the training data set, and obtaining the trained basic mutual interference suppression network includes the following steps:
initializing the initial mutual interference wave suppression network, and configuring random initial network parameter weights for the initial mutual interference wave suppression network;
reducing loss errors of the trunk feature extraction module, the residual dense module, the self-attention mechanism control module, the feature layer up-sampling module, and the convolution prediction network by using a preset optimizer;
and inputting the training data set into the initial mutual interference wave suppression network for periodic batch training, obtaining a trained basic mutual interference wave suppression network.
Optionally, the loss function of the basic mutual interference wave suppression network is calculated as

$$L = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( Y_{i,j} - \hat{Y}_{i,j} \right)^{2}$$

where $L$ is the loss function, $M$ and $N$ are the numbers of rows and columns of the two-dimensional image, $Y$ is the detection image of the training data set that does not contain mutual interference waves, $\hat{Y}$ is the output result of the convolution prediction network, and $Y_{i,j}$ and $\hat{Y}_{i,j}$ represent the pixel values of the corresponding pixel points in the images.
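The pixel-wise loss between the interference-free label image and the network output can be sketched as follows. A mean-squared-error form is assumed here (the published formula is garbled in extraction), and `mse_loss` is an illustrative name, not from the patent:

```python
import numpy as np

def mse_loss(y_clean, y_pred):
    """Pixel-wise mean squared error between the interference-free
    label image (M x N) and the network prediction (M x N)."""
    y_clean = np.asarray(y_clean, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    m, n = y_clean.shape
    return float(np.sum((y_clean - y_pred) ** 2) / (m * n))
```

During training this loss would be computed between the label image D3 (or D8) and the convolution prediction network's output for the input D0 (or D4).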
Optionally, the trunk feature extraction module includes five feature extraction layers, each consisting of two consecutive convolution layers. The numbers of convolution kernels in the convolution layers of the first through fifth feature extraction layers are 64, 128, 256, 512, and 1024, respectively, and the output of each of the first four feature extraction layers serves as the input of the next feature extraction layer.
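The channel and resolution bookkeeping of the five-layer backbone can be traced as follows. The 2×2 down-sampling between layers is an assumption in the style of U-Net encoders; the text specifies only the kernel counts and same-resolution paired convolutions:

```python
def backbone_shapes(h=256, w=256):
    """Trace feature-map shapes (C, H, W) through the five
    feature-extraction layers with 64..1024 kernels."""
    channels = [64, 128, 256, 512, 1024]
    shapes = []
    for i, c in enumerate(channels):
        shapes.append((c, h, w))       # two same-padding convs keep H x W
        if i < len(channels) - 1:      # assumed 2x2 down-sampling between layers
            h, w = h // 2, w // 2
    return shapes
```

For a 256×256 B-scan input this yields (64, 256, 256) at the first layer down to (1024, 16, 16) at the fifth.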
Optionally, the residual dense module includes four residual dense blocks, each of which includes three densely connected layers and a local feature fusion layer. The residual dense block is calculated as

$$F_{d} = \sigma\left( W_{d} * [F_{0}, F_{1}, \ldots, F_{d-1}] \right), \quad d = 1, 2, 3,$$
$$F_{LF} = H_{1\times1}\left( [F_{0}, F_{1}, F_{2}, F_{3}] \right),$$
$$F_{out} = F_{0} + F_{LF},$$

where $F_{0}$ is the input feature of the residual dense block, $F_{d}$ is the output result of the $d$-th densely connected layer, $\sigma(\cdot)$ represents a nonlinear transformation, $[\cdot]$ denotes concatenation on the channel dimension, $H_{1\times1}(\cdot)$ represents a $1 \times 1$ convolution operation, $F_{LF}$ is the output result of the local feature fusion layer, and the final output $F_{out}$ of the residual dense block is the result of adding $F_{0}$ and $F_{LF}$ on the channel.
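A minimal NumPy sketch of one residual dense block: three densely connected layers, 1×1 local feature fusion, and the residual addition. The 1×1 channel-mixing products stand in for the dense layers' convolutions (whose kernel size the text does not specify), and all weights here are random stand-ins:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution as channel mixing: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))

def residual_dense_block(f0, weights, fuse_w):
    """Three dense layers (each sees the concat of all previous outputs),
    1x1 local feature fusion, then a residual addition with the input."""
    feats = [f0]
    for w in weights:                                  # three dense layers
        x = np.concatenate(feats, axis=0)              # dense connectivity
        feats.append(np.maximum(conv1x1(x, w), 0.0))   # ReLU nonlinearity
    fused = conv1x1(np.concatenate(feats, axis=0), fuse_w)  # local feature fusion
    return f0 + fused                                  # residual connection

rng = np.random.default_rng(0)
f0 = rng.normal(size=(4, 8, 8))                        # input with C = 4 channels
weights = [rng.normal(size=(4, 4 * (d + 1))) for d in range(3)]
fuse_w = rng.normal(size=(4, 16))                      # fuses [F0..F3] back to C = 4
out = residual_dense_block(f0, weights, fuse_w)
```

The residual addition requires the fusion layer to restore the input channel count, which is why `fuse_w` maps the 16 concatenated channels back to 4.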
Optionally, the self-attention mechanism control module includes a plurality of self-attention mechanism control blocks, each calculated as

$$\alpha_{i} = \mathrm{Sigmoid}\left( \psi\left( \mathrm{ReLU}\left( W_{x} * x_{i} + W_{g} * g_{i} + b_{g} \right) \right) + b_{\psi} \right),$$
$$\hat{x}_{i} = \alpha_{i} \cdot x_{i},$$

where $x_{i}$ is the output feature layer of the $i$-th residual dense block, $g_{i}$ is the result obtained after the feature layer up-sampling module up-samples the feature layer output by the trunk feature extraction module, $W_{x}$, $W_{g}$, and $\psi$ are all convolution operations, the numbers of convolution kernels used by $W_{x}$ and $W_{g}$ are one half of the number of input feature channels, the number of convolution kernels used by $\psi$ is 1, $b_{g}$ and $b_{\psi}$ are the corresponding convolution bias terms, $\mathrm{ReLU}(\cdot)$ is the ReLU function, $\mathrm{Sigmoid}(\cdot)$ is the Sigmoid function, the three convolutions all use a kernel size of $1 \times 1$, a stride of 1, and a padding of 0, $\hat{x}_{i}$ is the final output result of the self-attention mechanism control block, and $\alpha_{i}$ is the attention coefficient.
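The control block resembles the additive attention gate used in Attention U-Net; a NumPy sketch under that assumption, with the 1×1 convolutions realised as channel-mixing matrix products and random stand-in weights:

```python
import numpy as np

def attention_gate(x, g, wx, wg, psi, bg=0.0, bpsi=0.0):
    """Additive attention gate sketch: x is the skip feature (C, H, W)
    from a residual dense block, g the upsampled gating feature (C, H, W).
    All three convolutions are 1x1 with stride 1 and padding 0."""
    def conv1x1(t, w):
        return np.tensordot(w, t, axes=([1], [0]))
    q = np.maximum(conv1x1(x, wx) + conv1x1(g, wg) + bg, 0.0)   # ReLU
    alpha = 1.0 / (1.0 + np.exp(-(conv1x1(q, psi) + bpsi)))     # Sigmoid
    return alpha * x, alpha                                     # gated skip, coefficient

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16, 16))
g = rng.normal(size=(8, 16, 16))
wx = rng.normal(size=(4, 8))    # half the input channels, per the patent
wg = rng.normal(size=(4, 8))
psi = rng.normal(size=(1, 4))   # a single kernel producing the attention map
gated, alpha = attention_gate(x, g, wx, wg, psi)
```

The single-channel attention map `alpha` broadcasts over all channels of `x`, so low-attention regions of the skip feature are suppressed before the decoder consumes them.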
Optionally, the feature layer up-sampling module includes four feature sampling layers, each of which includes an up-sampling layer, a feature connection layer and three convolution layers.
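The resolution and channel bookkeeping of one feature sampling layer can be sketched as follows. Nearest-neighbour 2× up-sampling is an assumption, and the layer's three convolution layers are omitted here, so only the up-sample and feature-connection steps are shown:

```python
import numpy as np

def upsample_block(deep, skip):
    """One feature-sampling layer sketch: 2x up-sample the deeper
    feature map, then concatenate with the (gated) skip feature
    on the channel axis."""
    up = deep.repeat(2, axis=1).repeat(2, axis=2)    # (C, 2H, 2W)
    return np.concatenate([skip, up], axis=0)        # feature connection layer

deep = np.zeros((1024, 16, 16))   # bottom of the backbone
skip = np.zeros((512, 32, 32))    # gated skip feature at the next scale
merged = upsample_block(deep, skip)
```

The subsequent convolution layers would then reduce the 1536 merged channels back down before the next feature sampling layer.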
In a second aspect, the present invention also provides a deep learning-based ground penetrating radar cross interference suppression system, including a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described in the first aspect when executing the computer program.
The beneficial effects of the invention are as follows:
the invention constructs a mutual interference wave suppression network through the following steps: constructing a simulated detection scene and an initial mutual interference wave suppression network; generating a network data set through simulated detection in the simulated detection scene; dividing the network data set into a training data set and a verification data set; training the initial mutual interference wave suppression network with the training data set to obtain a trained basic mutual interference wave suppression network; verifying the basic mutual interference wave suppression network with the verification data set and determining its optimal model parameters; and adjusting the parameters of the basic network according to the optimal model parameters to obtain the mutual interference wave suppression network. Once the mutual interference wave suppression network is obtained, a ground penetrating radar image affected by multi-target mutual interference can be acquired and input into the network; through the network's suppression of the multi-target mutual interference waves, the ground penetrating radar image after mutual interference wave suppression is finally obtained, which improves the detection capability of GPR for subsurface scenes containing multiple targets.
Drawings
Fig. 1 is a schematic flow chart of a method for suppressing mutual interference waves of ground penetrating radars based on deep learning.
Fig. 2 is a schematic diagram of the structure of an initial mutual interference suppression network.
Fig. 3 is a schematic structural diagram of a trunk feature extraction module.
Fig. 4 is a schematic diagram of the structure of the residual dense block.
Fig. 5 is a schematic diagram of the structure of the self-attention mechanism control block.
Fig. 6 is a schematic diagram of the architecture of a feature layer upsampling module and a convolutional prediction network.
Fig. 7 is a schematic diagram of test results of a cross-interference suppression network on a test dataset.
Fig. 7 (a) is a cross-talk wave-containing detection image of two detection targets of the same attribute in a uniform medium.
Fig. 7 (b) is an image of fig. 7 (a) after cross-interference suppression via a cross-interference suppression network.
Fig. 7 (c) is a cross-interference wave-containing detection image of three detection targets of the same attribute in a uniform medium.
Fig. 7 (d) is an image of fig. 7 (c) after cross-interference suppression via a cross-interference suppression network.
Fig. 8 is a schematic diagram of test results of the cross-interference suppression network in the applicability test.
Fig. 8 (a) is a cross-talk wave-containing detection image of two different attribute detection targets in a uniform medium.
Fig. 8 (b) is an image of fig. 8 (a) after cross-interference suppression via a cross-interference suppression network.
Fig. 8 (c) is a cross-talk wave-containing detection image of three different attribute detection targets in a uniform medium.
Fig. 8 (d) is an image of fig. 8 (c) after cross-interference suppression via a cross-interference suppression network.
Fig. 8 (e) is a cross-interference wave-containing detection image of four detection targets of the same attribute in a uniform medium.
Fig. 8 (f) is an image of fig. 8 (e) after cross-interference suppression via a cross-interference suppression network.
Fig. 9 is a schematic diagram of a test result of the mutual interference suppression network in the noise immunity test.
Fig. 9 (a) is a cross-talk wave-containing detection image of two detection targets of the same attribute in a uniform medium.
Fig. 9 (b) is an image of fig. 9 (a) at a signal-to-noise ratio SNR of 0 dB.
Fig. 9 (c) is an image of fig. 9 (a) at a signal-to-noise ratio SNR of-10 dB.
Fig. 9 (d) is an image of fig. 9 (a) at a signal-to-noise ratio SNR of-20 dB.
Fig. 9 (e) is an image of fig. 9 (b) after cross-interference suppression via a cross-interference suppression network.
Fig. 9 (f) is an image of fig. 9 (c) after cross-interference suppression via a cross-interference suppression network.
Fig. 9 (g) is an image of fig. 9 (d) after cross-interference suppression via a cross-interference suppression network.
Detailed Description
The invention discloses a ground penetrating radar mutual interference wave suppression method based on deep learning.
Referring to fig. 1, in one embodiment, the method for suppressing the mutual interference of the ground penetrating radar based on the deep learning specifically includes the following steps:
S101, constructing a simulation detection scene and an initial mutual interference wave suppression network.
The GPR simulation modeling software gprMax is used to construct the simulated detection scene, a two-dimensional underground area of 1.44 m × 0.32 m. For diversity of the data sets, the GPR antenna is simulated with a theoretical Hertzian dipole source fed with Ricker waveforms whose center frequencies fc are 1.00 GHz, 1.25 GHz, and 1.50 GHz, respectively. Two kinds of underground media are used: the first is a uniform underground medium with a relative dielectric constant of 6 to 12; the second is a soil-mixture model closer to measured data, composed of sand and clay, in which the sand fraction varies between 20% and 80%, the upper limit of the soil water content varies between 0.1% and 25%, and the mixture comprises 50 soils with different water contents.
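As an illustration of the scene construction above, a minimal gprMax input file for one such simulated detection might look as follows. Only the 1.44 m × 0.32 m domain, the Ricker-fed Hertzian dipole, and the dielectric range come from the text; all coordinates, the time window, the discretisation, and the antenna step are illustrative assumptions (lines not beginning with # are treated as comments by gprMax):

```
#domain: 1.44 0.32 0.002
#dx_dy_dz: 0.002 0.002 0.002
#time_window: 12e-9
uniform background medium, relative permittivity chosen from [6, 12]
#material: 9 0 1 0 soil
#box: 0 0 0 1.44 0.24 0.002 soil
Ricker waveform, fc = 1.00 GHz
#waveform: ricker 1 1.0e9 src_wave
#hertzian_dipole: z 0.05 0.26 0 src_wave
#rx: 0.09 0.26 0
move the transmit/receive pair along the surface between traces
#src_steps: 0.005 0 0
#rx_steps: 0.005 0 0
one buried circular (2-D cylindrical) metal target, radius 2 cm, 10 cm deep
#cylinder: 0.50 0.14 0 0.50 0.14 0.002 0.02 pec
```

Running such a file once per trace position (or with gprMax's multi-trace mode) produces the B-scan images used below.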
S102, generating a network data set based on the simulated detection scene through the simulated detection.
Wherein, based on the simulated detection scene constructed in step S101, the gprMax software is used to perform detection simulation and calculation, thereby generating a network data set.
S103, dividing the network data set into a training data set and a verification data set.
The network data set may be divided, according to a certain proportion, into a training data set for training the model network and a verification data set for verifying it; the number of training samples is greater than the number of verification samples. In this embodiment, the ratio of the training data set to the verification data set may be 8:2.
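A minimal sketch of the 8:2 split described above (`split_dataset` and the fixed seed are illustrative names, not from the text):

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle the (input, label) pairs and split them into training
    and verification sets at the given ratio."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# e.g. N = 100 dual-target sets, each an (input D0, label D3) pair
pairs = [(f"D0_{k}", f"D3_{k}") for k in range(100)]
train, val = split_dataset(pairs)
```

A fixed shuffle seed keeps the split reproducible across training runs.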
S104, training the initial mutual interference wave suppression network by using the training data set to obtain a trained basic mutual interference wave suppression network.
The initial mutual interference wave suppression network is initialized, and then the training data set is input into the initial mutual interference wave suppression network for training, so that a trained basic mutual interference wave suppression network is obtained.
S105, verifying the basic mutual interference wave suppression network through the verification data set, and determining the optimal model parameters of the basic mutual interference wave suppression network.
And inputting the verification data set into the basic mutual interference wave suppression network, and adjusting the weight parameters of the basic mutual interference wave suppression network according to the loss value returned by the network so as to determine the optimal model parameters of the basic mutual interference wave suppression network.
S106, adjusting parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain the mutual interference wave suppression network.
S107, acquiring a ground penetrating radar image with mutual interference, and inputting the ground penetrating radar image into a mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression.
The implementation principle of the embodiment is as follows:
the network data set is divided into a training data set and a verification data set by constructing a simulated probing scene to simulate and generate the network data set. And simultaneously constructing an initial mutual interference wave suppression network, and training the initial mutual interference wave suppression network by using a training data set to obtain a basic mutual interference wave suppression network. And verifying the basic mutual interference wave suppression network through the verification data set, so that the basic mutual interference wave suppression network is subjected to parameter adjustment, and the final mutual interference wave suppression network is obtained. The ground penetrating radar image is input into a mutual interference wave suppression network, and the ground penetrating radar image after mutual interference wave suppression can be finally obtained through the suppression of the mutual interference wave suppression network on the multi-target mutual interference wave, so that the detection capability of the GPR on the underground containing a plurality of target scenes is improved.
In one embodiment, the step S102 of generating a network data set based on the simulated probing scene and through the simulated probing specifically includes the following steps:
based on the empty background of the simulated detection scene, acquiring an empty background detection image through simulation detection;
randomly setting two detection targets in a simulated detection scene, and obtaining a first detection image set through simulated detection;
generating a double-target detection data set by combining the first detection image set and the empty background detection image, wherein the double-target detection data set comprises double-target mutual interference detection images;
randomly setting three detection targets in a simulated detection scene, and obtaining a second detection image set through simulated detection;
generating a three-target detection data set by combining the second detection image set and the empty background detection image, wherein the three-target detection data set comprises three-target mutual interference detection images;
and summarizing the double-target detection data set and the three-target detection data set to obtain a network data set.
In this embodiment, a circular target with a radius of 1cm to 5cm is used as a detection target, the detection target is made of metal or plastic, the distance between the detection targets is set to 0.12m to 0.40m, and the detection target is placed at 9cm to 23cm underground in a simulated detection scene.
Before the simulated detection, dielectric parameters of the underground background medium and configuration parameters of the transmitting-receiving antenna are set. In the simulation detection process, a detection target is not arranged in the simulation detection scene, so that the simulation detection scene is in an empty background, and a GPR B-scan image of the simulation detection scene in the empty background, namely an empty background detection image, is calculated through the gprMax software simulation and is recorded as an image A.
In the same background medium, two detection targets are randomly placed in a simulated detection scene, the two detection targets are respectively a detection target 1 and a detection target 2, the two detection targets are kept contactless and are not exposed out of the ground, the embedded positions and the material parameters of the two detection targets are set, other simulation parameters are kept unchanged, and a GPR B-scan image of the two detection targets in the simulated detection scene is obtained through simulation calculation and is recorded as an image B1.
In the above detection scene, detection target 1 and detection target 2 are removed in turn so that only one detection target remains at a time; other simulation parameters are kept unchanged, and the GPR B-scan images with only one detection target in the simulated detection scene are obtained through simulation calculation and recorded as image C1 and image C2, respectively. Image C1 is the GPR B-scan image in which only detection target 1 is retained, and image C2 is the GPR B-scan image in which only detection target 2 is retained. Images A, B1, C1, and C2 have the same size. Image B1, image C1, and image C2 constitute the first detection image set.
Let image D1 = image C1 - image A; image D1 then represents the B-scan image of detection target 1 with the background wave removed. Let image D2 = image C2 - image A; image D2 represents the B-scan image of detection target 2 with the background wave removed. Let image D3 = image D1 + image D2; image D3 represents the superposition of the background-removed B-scans of detection target 1 and detection target 2. Let image D0 = image B1 - image A; image D0 represents the background-removed B-scan when detection target 1 and detection target 2 exist simultaneously, and image D0 contains the mutual interference waves of detection target 1 and detection target 2.
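The background-removal arithmetic above can be sketched directly in NumPy. The arrays here are random stand-ins for gprMax outputs; in the real pipeline A, B1, C1, and C2 are simulated B-scans:

```python
import numpy as np

rng = np.random.default_rng(2)
A  = rng.normal(size=(256, 256))       # empty-background B-scan
C1 = A + rng.normal(size=(256, 256))   # target 1 alone (stand-in data)
C2 = A + rng.normal(size=(256, 256))   # target 2 alone (stand-in data)
# dual-target scan: both echoes plus a small mutual interference term
B1 = (C1 - A) + (C2 - A) + A + 0.1 * rng.normal(size=(256, 256))

D1 = C1 - A        # target 1, background removed
D2 = C2 - A        # target 2, background removed
D3 = D1 + D2       # label: superposition without mutual interference
D0 = B1 - A        # input: still contains the mutual interference
residual = D0 - D3 # the component the network must learn to suppress
```

The pair (D0, D3) is exactly one training sample: D0 is the network input and D3 its label.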
The above is the generation process of a single dual-target detection data set (D0_1, D3_1). When the initial mutual interference wave suppression network is subsequently trained, D0 is the input and the corresponding label output is D3. Deep learning training of the initial mutual interference wave suppression network requires a large number of data sets, so the above simulated detection process can be repeated, with the dielectric parameters of the background medium, the configuration parameters of the transmitting-receiving antenna, and the relevant parameters of the two detection targets randomly regenerated on each repetition. Repeating the simulated detection process N times yields N dual-target detection data sets, recorded as (D0_1, D3_1), (D0_2, D3_2), …, (D0_N, D3_N).
In order to increase the diversity of the network data sets, a three-target detection data set containing three detection targets is also constructed, and the three-target detection data set generation method comprises the following steps:
In the same background medium used for the dual-target simulated detection, three detection targets are randomly placed: detection target 3, detection target 4, and detection target 5. The three detection targets are kept contactless and are not exposed above the ground; their embedded positions and material parameters are set, other simulation parameters are kept unchanged, and the GPR B-scan image of the three underground detection targets is obtained through simulation calculation and recorded as image B2.
In the above detection scene, two of the detection targets are removed in turn so that only one remains at a time; other simulation parameters are kept unchanged, and the GPR B-scan images with only one underground detection target are obtained through simulation calculation and recorded as image C3, image C4, and image C5, respectively. Image C3 retains only detection target 3, image C4 only detection target 4, and image C5 only detection target 5. Images A, B2, C3, C4, and C5 have the same size. Image B2, image C3, image C4, and image C5 constitute the second detection image set.
Let image D4 = image B2 − image A; image D4 then represents the B-scan image with the background wave removed when detection target 3, detection target 4, and detection target 5 coexist, and it contains the mutual interference waves among the three targets. Let image D5 = image C3 − image A; image D5 then represents the background-removed B-scan image of detection target 3 alone. Likewise, let image D6 = image C4 − image A and image D7 = image C5 − image A for detection target 4 and detection target 5 respectively. Finally, let image D8 = image D5 + image D6 + image D7; image D8 then represents the superposition of the individual background-removed B-scans of detection target 3, detection target 4, and detection target 5.
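The label construction above is plain per-pixel image arithmetic. A minimal numpy sketch (the array contents are random stand-ins; real B-scans come from gprMax simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the simulated B-scans (256 traces x 256 time samples).
# In practice these come from gprMax: A = empty background, B2 = three
# targets together, C3/C4/C5 = each target alone.
A  = rng.normal(size=(256, 256))
B2 = A + rng.normal(size=(256, 256))
C3 = A + rng.normal(size=(256, 256))
C4 = A + rng.normal(size=(256, 256))
C5 = A + rng.normal(size=(256, 256))

D4 = B2 - A          # three targets, background removed (contains cross-talk)
D5 = C3 - A          # target 3 alone, background removed
D6 = C4 - A          # target 4 alone, background removed
D7 = C5 - A          # target 5 alone, background removed
D8 = D5 + D6 + D7    # superposition of the three single-target responses

# (D4, D8) is one training pair: D4 is the network input, D8 the label.
pair = (D4.astype(np.float32), D8.astype(np.float32))
```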
The above is the generation process of a single three-target detection data set (D4, D8). When the initial mutual interference wave suppression network is subsequently trained, D4 is the input and D8 is the corresponding label output. Deep learning training of the initial mutual interference wave suppression network requires a large number of data sets, so the above simulated detection process can be repeated; each time it is repeated, the dielectric parameters of the background medium, the configuration parameters of the transmitting and receiving antennas, and the relevant parameters of the three detection targets are randomly generated. The above procedure is repeated N times to obtain N three-target detection data sets, denoted (D4, D8)_1, (D4, D8)_2, …, (D4, D8)_N.
The N dual-target detection data sets (D0, D3)_1, (D0, D3)_2, …, (D0, D3)_N and the N three-target detection data sets (D4, D8)_1, (D4, D8)_2, …, (D4, D8)_N together constitute the network data set. To facilitate network training, the GPR B-scan images in the data set are 256×256 in size, i.e., each image contains 256 traces of A-scan data, and each A-scan trace has 256 time sampling points.
In one embodiment, referring to fig. 2, the initial mutual interference wave suppression network includes a trunk feature extraction module Encoder, a residual dense module RDBs, a self-attention mechanism control module AGs, a feature layer up-sampling module Decoder, and a convolution prediction network, where the residual dense module RDBs and the self-attention mechanism control module AGs form the skip connection part of the initial mutual interference wave suppression network.
The trunk feature extraction module Encoder extracts the picture features; after passing through it, five effective feature layers of the input picture are obtained. The residual dense module RDBs suppresses the mutual interference wave information that has not been completely eliminated, especially in the features from the lower layers; it receives the effective feature layers of different scales generated by the Encoder and outputs the results to the self-attention mechanism control module AGs. The module AGs handles the focusing of multi-form mutual interference waves in different scenes, extracting the regions of interest in the picture and applying intensive mutual interference wave suppression to those specific regions; after the focusing task is completed, the feature layers are connected to the feature layer up-sampling module Decoder. The Decoder receives the output feature layers of the module AGs and the feature layer output by the last layer of the Encoder and up-samples them; finally, the convolution prediction network generates the final result image.
In one embodiment, step S104, in which the initial mutual interference wave suppression network is trained using the training data set to obtain the trained basic mutual interference wave suppression network, specifically includes the following steps:
initializing an initial mutual interference wave suppression network, and configuring random initial network parameter weights for the initial mutual interference wave suppression network;
reducing the loss errors of the trunk feature extraction module, the residual dense module, the self-attention mechanism control module, the feature layer up-sampling module, and the convolution prediction network by using a preset optimizer;
and inputting the training data set into the initial inter-interference wave suppression network to perform periodic batch training, and obtaining the trained basic inter-interference wave suppression network.
In this embodiment, the preset optimizer is an RMSProp optimizer; its weight attenuation coefficient is set to a preset value, its momentum term parameter is set to 0.9, and the learning rate of the initial mutual interference wave suppression network is set to a preset value. The weight parameters of the initial mutual interference wave suppression network are randomly initialized, and the training data set is fed into the network in batches for training, with the batch size set to 8. Each batch of the data set contains k GPR B-scan images containing mutual interference waves and the k corresponding same-size GPR B-scan images without mutual interference waves. The RMSProp optimizer is used to reduce the loss errors of the trunk feature extraction module Encoder, the residual dense module RDBs, the self-attention mechanism control module AGs, the feature layer up-sampling module Decoder, and the convolution prediction network, training the initial mutual interference wave suppression network. The batch training is repeated until all images in the training data set have been fed into the initial mutual interference wave suppression network, which completes one training epoch.
This epoch training is repeated 150 times, until the loss value of the initial mutual interference wave suppression network no longer decreases and the output of its loss function stabilizes; training of the initial mutual interference wave suppression network is then complete, and the network weight parameters at that point are saved.
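The RMSProp update rule used by the optimizer divides each gradient component by a running root-mean-square of past gradients. As an illustration only (a toy quadratic problem, not the patent's network; the learning rate, smoothing constant, and step count here are illustrative), the rule can be sketched as:

```python
import numpy as np

def rmsprop_step(w, grad, state, lr=1e-3, rho=0.9, eps=1e-8, weight_decay=0.0):
    """One RMSProp update: divide the gradient by a running RMS of past
    gradients; weight_decay adds L2 shrinkage to the gradient."""
    grad = grad + weight_decay * w
    state = rho * state + (1.0 - rho) * grad ** 2
    w = w - lr * grad / (np.sqrt(state) + eps)
    return w, state

# Minimize f(w) = ||w - t||^2 for a toy target t.
t = np.array([1.0, -2.0, 3.0])
w = np.zeros(3)
s = np.zeros(3)
for _ in range(2000):
    g = 2.0 * (w - t)
    w, s = rmsprop_step(w, g, s, lr=0.01)

print(np.round(w, 2))  # converges toward t
```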
In one embodiment, the loss function of the basic mutual interference wave suppression network is the pixel-wise mean squared error:

Loss = (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} (Y(i, j) − Ŷ(i, j))^2

where Loss is the loss function, M and N are the numbers of rows and columns of the two-dimensional image, Y is a detection image in the training data set that does not contain mutual interference waves, Ŷ is the output result of the convolution prediction network, and i and j index the pixel points in the image, with Y(i, j) and Ŷ(i, j) the corresponding pixel values.
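A pixel-wise loss of this kind, sketched here under the assumption that the per-pixel differences between the label image and the network output are squared and averaged, can be computed as:

```python
import numpy as np

def pixel_mse(Y, Y_hat):
    """Mean squared error over all M*N pixels of a 2-D image pair."""
    Y = np.asarray(Y, dtype=np.float64)
    Y_hat = np.asarray(Y_hat, dtype=np.float64)
    M, N = Y.shape
    return np.sum((Y - Y_hat) ** 2) / (M * N)

label = np.zeros((256, 256))       # interference-free label image
pred = np.full((256, 256), 0.5)    # stand-in network output
print(pixel_mse(label, pred))      # 0.25
```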
In one embodiment, referring to fig. 3, the trunk feature extraction module Encoder is used to extract the features of the picture; after passing through it, five feature layers of different scales of the input picture are obtained. The Encoder is divided into five feature extraction layers in total. The first feature extraction layer consists of two consecutive convolution layers, each with 64 convolution kernels; after each convolution the data are normalized and then activated with the ReLU function, and the output of the first layer of the Encoder is the feature FE1. The second to fifth feature extraction layers of the Encoder use a similar structure: each performs a maximum pooling operation on the output of the previous feature extraction layer, followed by two consecutive convolution layers, with the data normalized and ReLU-activated after each convolution.
The two consecutive convolution layers within each of the second to fifth feature extraction layers use the same number of convolution kernels: 128 in the second layer, 256 in the third, 512 in the fourth, and 1024 in the fifth. The output feature maps of the second to fifth feature extraction layers of the Encoder are FE2, FE3, FE4, and FE5 respectively. All convolution layers in the Encoder have a stride of 1, a padding size of 1, and a 3×3 kernel, and the window size of the maximum pooling layer is 3. The output of the Encoder is the features FE1, FE2, FE3, FE4, and FE5.
In one embodiment, the residual dense module RDBs is used to suppress the mutual interference wave information that has not been completely eliminated, especially in the low-level features. The residual dense module RDBs receives the feature layers FE1, FE2, FE3, and FE4 of different scales generated by the trunk feature extraction module Encoder and outputs the results to the self-attention mechanism control module AGs.
The residual dense module RDBs consists of four residual dense blocks RDB. Referring to fig. 4, each residual dense block RDB comprises three dense connection layers and one local feature fusion layer. The input of each dense connection layer is the concatenation, in the channel dimension, of the output feature maps of all preceding dense connection layers; each dense connection layer is implemented by a 3×3 convolution and a ReLU function, with both the convolution stride and the padding size equal to 1, and with the number of convolution kernels set to one third of the number of input feature map channels of the residual dense block RDB, a quantity also called the growth rate of the residual dense module. The local feature fusion layer uses a 1×1 convolution, with stride 1 and padding 0, to adaptively fuse the outputs of the dense connection layers with the input feature map, yielding a feature fusion map. Residual learning is introduced after the local feature fusion layer: the input feature map and the feature fusion map are combined by channel-wise addition to obtain the final result of the residual dense block RDB.
In this embodiment, the calculation formulas of the residual dense block RDB are:

F_d = H_d([F_in, F_1, …, F_(d−1)]), d = 1, 2, 3
F_LF = Conv_1×1([F_in, F_1, F_2, F_3])
F_out = F_in + F_LF

where F_in is the input feature of the residual dense block, F_d is the output result of the d-th dense connection layer, H_d represents a nonlinear transformation comprising a 3×3 convolution operation and a ReLU function, [·] denotes concatenation in the channel dimension, F_LF is the output result of the local feature fusion layer, Conv_1×1 represents a 1×1 convolution operation, and the final output F_out of the residual dense block is the result of adding F_in and F_LF on the channels.
The output results FE1, FE2, FE3, and FE4 of the first four layers of the trunk feature extraction module Encoder serve not only as the input of the next layer within the Encoder itself but also as the input of the residual dense module RDBs. The four feature layers FE1, FE2, FE3, and FE4 are input into the four residual dense blocks RDB of the residual dense module RDBs respectively, and the outputs of the residual dense module RDBs are the features FR1, FR2, FR3, and FR4. These feature layers serve as part of the input of the self-attention mechanism control module AGs.
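The channel bookkeeping of a residual dense block can be checked with a short sketch: with growth rate g = C // 3, the concatenation entering the 1×1 fusion carries C + 3g channels, the fusion maps back to C, and the residual addition keeps the output at C channels (pure arithmetic, no actual convolutions):

```python
def rdb_channels(c_in):
    """Track channel counts through one residual dense block:
    three dense layers with growth rate c_in // 3, then a 1x1 fusion
    back to c_in channels, then a channel-wise residual addition."""
    growth = c_in // 3
    concat = c_in
    for _ in range(3):       # each dense layer sees all previous feature maps
        concat += growth     # and appends `growth` new channels
    fused = c_in             # 1x1 local feature fusion maps back to c_in
    out = fused              # residual addition keeps c_in channels
    return {"growth": growth, "concat": concat, "out": out}

# Encoder features FE1..FE4 have 64, 128, 256, 512 channels.
for c in (64, 128, 256, 512):
    print(c, rdb_channels(c))
```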
In one embodiment, referring to fig. 5, the self-attention mechanism control module AGs is configured to handle the focusing of multi-form mutual interference waves in different scenes, extract the region of interest in the picture, apply intensive mutual interference wave suppression to that specific region, and, after the focusing task is completed, pass the feature layers to the feature layer up-sampling module Decoder. The self-attention mechanism control module comprises several self-attention mechanism control blocks. Each self-attention mechanism control block AG of the module AGs has two inputs: one is the output feature layer FR_i of the residual dense module RDBs, where i = 1, 2, 3, 4; the other is the result g_i of the feature layer up-sampling module Decoder up-sampling the feature layer output at the corresponding level.
The self-attention mechanism control block AG computes the attention coefficient α_i of the output feature layer of the residual dense block RDB by collecting the region features shared by its two input feature layers. The attention coefficient α_i weights the output feature layer FR_i of the residual dense block RDB, and the final output of the self-attention mechanism control block AG is the product of FR_i and α_i. The final outputs of the self-attention mechanism control blocks AG at the different levels of the module AGs are the features FA1, FA2, FA3, and FA4.
In this embodiment, the calculation formulas of the self-attention mechanism control block AG are:

α_i = σ_2(ψ(σ_1(W_x · x_i + W_g · g_i + b_g)) + b_ψ)
x̂_i = α_i · x_i

where x_i is the output feature layer of the i-th residual dense block, g_i is the result of the feature layer up-sampling module up-sampling the feature layer output by the trunk feature extraction module, and W_x, W_g, and ψ are all convolution operations: W_x and W_g use a number of convolution kernels equal to one half of the input features, and ψ uses 1 convolution kernel. b_g and b_ψ are the corresponding convolution bias terms, σ_1 is the ReLU function, and σ_2 is the Sigmoid function. In the parameter setting of these operations, the convolution kernel size is 1×1, the stride is 1, and the padding size is 0. x̂_i is the final output result of the self-attention mechanism control block, and α_i is the attention coefficient.
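On a (C, H, W) feature map, a 1×1 convolution is just a matrix product over the channel axis, so the attention gate described above can be sketched in a few lines of numpy (the weights are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(w, b, x):
    """1x1 convolution on a (C, H, W) tensor: a matrix product over channels."""
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

def attention_gate(x, g, C):
    """x: RDB output feature, g: up-sampled decoder feature, both (C, H, W).
    Intermediate channels are C // 2; psi maps down to a single channel."""
    Ci = C // 2
    Wx = rng.normal(size=(Ci, C))
    Wg = rng.normal(size=(Ci, C))
    psi = rng.normal(size=(1, Ci))
    bg, bpsi = np.zeros(Ci), np.zeros(1)
    q = np.maximum(conv1x1(Wx, np.zeros(Ci), x) + conv1x1(Wg, bg, g), 0.0)  # ReLU
    z = conv1x1(psi, bpsi, q)
    alpha = 1.0 / (1.0 + np.exp(-z))   # Sigmoid -> attention coefficients
    return alpha * x, alpha            # alpha broadcasts over the C channels

C, H, W = 64, 16, 16
x = rng.normal(size=(C, H, W))
g = rng.normal(size=(C, H, W))
x_hat, alpha = attention_gate(x, g, C)
print(x_hat.shape, alpha.shape)  # (64, 16, 16) (1, 16, 16)
```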
In one embodiment, referring to fig. 6, the feature layer up-sampling module Decoder comprises four feature sampling layers of similar structure, each comprising an up-sampling layer, a feature connection layer, and three convolution layers. The Decoder receives the output feature layers of the self-attention mechanism control module AGs and the feature layer output by the last layer of the trunk feature extraction module Encoder, and up-samples them. The first feature sampling layer of the Decoder first receives the feature layer FE5 output by the last feature extraction layer of the Encoder and applies up-sampling and one convolution layer; this convolution layer comprises convolution, data normalization, and ReLU activation operations, with a 3×3 kernel, stride 1, and padding 1, and the number of convolution kernels is one half of the channels of FE5, i.e., 512. The output result g_4 is then fed, together with the output feature FR4 of the residual dense module RDBs, into the self-attention mechanism control block AG to obtain the feature FA4. FA4 and g_4 are then concatenated, and the concatenated result is passed through two consecutive convolution layers to obtain the final result FD4 of the first feature sampling layer of the Decoder. In these two consecutive convolution layers, the data are normalized and ReLU-activated after each convolution; the kernels are 3×3, the strides 1, the paddings 1, and the number of kernels is one half of the channels of FE5, i.e., 512.
The final output FD4 of the first feature sampling layer of the Decoder serves as the input of the next feature sampling layer, and the above operation flow of up-sampling, convolution layer, feeding into the attention mechanism control block AG, concatenation, and two consecutive convolution layers is repeated. The Decoder has four feature sampling layers in total. The input parameters of the first feature sampling layer are the output feature layer FE5 of the Encoder and the output feature FR4 of the residual dense module RDBs; its output result is FD4, and the convolution operations in this stage use 512 kernels. The input parameters of the second feature sampling layer are the output result FD4 of the first feature sampling layer and the output feature FR3 of the residual dense module RDBs; its output result is FD3, and the convolution operations in this stage use 256 kernels.
The input parameters of the third feature sampling layer are the output result FD3 of the second feature sampling layer and the output feature FR2 of the residual dense module RDBs; its output result is FD2, and the convolution operations in this stage use 128 kernels. The input parameters of the fourth feature sampling layer are the output result FD2 of the third feature sampling layer and the output feature FR1 of the residual dense module RDBs; its output result is FD1, and the convolution operations in this stage use 64 kernels. All convolutions in the Decoder have a 3×3 kernel, stride 1, and padding 1, and the outputs of the Decoder are FD1, FD2, FD3, and FD4.
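Assuming each maximum pooling halves the spatial size (so the deepest encoder feature is 16×16 with 1024 channels for a 256×256 input) and each Decoder up-sampling doubles it, both common U-Net conventions rather than details stated here, the shape progression can be checked as:

```python
def decoder_shapes(input_hw=256):
    """Walk the decoder stages: spatial size doubles per stage while the
    channel count follows the stated kernel counts 512, 256, 128, 64."""
    hw = input_hw // 2 ** 4      # deepest encoder feature after four poolings
    channels = 1024              # deepest encoder channel count
    stages = []
    for kernels in (512, 256, 128, 64):
        hw *= 2                  # up-sampling layer
        channels = kernels       # two 3x3 convolutions with `kernels` filters
        stages.append((hw, hw, channels))
    return stages

print(decoder_shapes())
# [(32, 32, 512), (64, 64, 256), (128, 128, 128), (256, 256, 64)]
```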
In another embodiment, the network data set may be computed with the gprMax simulation software; it includes 1000 GPR B-scan images containing mutual interference waves and the corresponding 1000 GPR B-scan images without mutual interference waves. Of the 1000 network data groups, 800 are used as the training data set of the initial mutual interference wave suppression network, 100 as the validation data set of the basic mutual interference wave suppression network, and 100 as the test data set. To facilitate network training, the length and width of the GPR B-scan images containing mutual interference waves are unified to a size of 256×256, i.e., each image has 256 traces of A-scan data, and each A-scan trace has 256 time sampling points.
In this embodiment, an RMSProp optimizer is used; its weight attenuation coefficient is set to a preset value, its momentum term parameter is set to 0.9, and the learning rate of the initial mutual interference wave suppression network is set to a preset value. The weight parameters of the initial mutual interference wave suppression network are randomly initialized, the 800 training data groups are fed into the network for training with the batch size set to 8, and after 150 epochs of training in total the loss value of the initial mutual interference wave suppression network no longer decreases; the network weight parameters at that point are saved.
The 100 validation data groups are fed into the trained basic mutual interference wave suppression network, and the weight parameters of the basic mutual interference wave suppression network are adjusted according to the loss values returned by the network so as to determine the optimal model parameters. The parameters of the basic mutual interference wave suppression network are then adjusted based on the optimal model parameters to obtain the mutual interference wave suppression network. The 100 test data groups containing mutual interference waves are fed into the mutual interference wave suppression network to test the training effect of the network; a diagram of the test results is shown in fig. 7.
Fig. 7 (a) is a mutual-interference-wave-containing detection image of two detection targets in a uniform medium, both made of PVC, and fig. 7 (b) is the image of fig. 7 (a) after mutual interference wave suppression by the mutual interference wave suppression network. Fig. 7 (c) is a mutual-interference-wave-containing detection image of three detection targets in a uniform medium, all made of metal, and fig. 7 (d) is the image of fig. 7 (c) after mutual interference wave suppression by the mutual interference wave suppression network. The mutual interference wave suppression effect of the network is good. Fig. 7 (c) is an extreme case in the test data set: even when the echo hyperbola of the rightmost detection target is superimposed on the echo hyperbolas of the two middle detection targets, the network still exhibits a good suppression effect, and the target echo hyperbolas disturbed by the mutual interference waves can be restored to some extent after suppression.
In one embodiment, some data not in the network data set may be generated with the gprMax simulation software and input into the mutual interference wave suppression network trained on the network data set for testing; the test results are shown in fig. 8. Fig. 8 (a) is a mutual-interference-wave-containing detection image of two detection targets in a uniform medium, a circular cavity and a PVC cylinder respectively, and fig. 8 (b) is the image of fig. 8 (a) after mutual interference wave suppression by the network. Fig. 8 (c) is a mutual-interference-wave-containing detection image of three detection targets in a uniform medium, two metal cylinders and a circular cavity respectively, and fig. 8 (d) is the image of fig. 8 (c) after mutual interference wave suppression by the network.
Fig. 8 (e) is a mutual-interference-wave-containing detection image of four detection targets in a uniform medium, all metal cylinders, and fig. 8 (f) is the image of fig. 8 (e) after mutual interference wave suppression by the network. According to the test results, the mutual interference wave suppression network has good applicability: it maintains a good suppression effect for target scenes containing cavities, for combinations of targets of different materials, and for four-target scenes, none of which appear in the training data set.
In one embodiment, to measure the noise immunity of the mutual interference wave suppression network, test data are constructed by adding Gaussian white noise of different signal-to-noise ratios (SNR) to the GPR B-scan images in the test data set, where the SNR value represents the ratio of the signal intensity of the original image to the intensity of the added Gaussian random white noise. The images with added random noise are input into the mutual interference wave suppression network trained on the network data set for testing; the test results are shown in fig. 9.
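Adding Gaussian white noise at a prescribed SNR amounts to scaling the noise so that the signal-to-noise power ratio equals 10^(SNR/10). A numpy sketch of this test-set perturbation (the function name is illustrative):

```python
import numpy as np

def add_awgn(image, snr_db, rng=None):
    """Add Gaussian white noise scaled so that
    10*log10(signal_power / noise_power) == snr_db."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / 10.0 ** (snr_db / 10.0)
    noise = rng.normal(scale=np.sqrt(noise_power), size=image.shape)
    return image + noise

rng = np.random.default_rng(42)
bscan = rng.normal(size=(256, 256))          # stand-in for a GPR B-scan
noisy = add_awgn(bscan, snr_db=-10.0, rng=rng)

measured = 10 * np.log10(np.mean(bscan ** 2) / np.mean((noisy - bscan) ** 2))
print(round(measured, 1))  # close to -10.0
```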
Fig. 9 (a) is a mutual-interference-wave-containing detection image of two detection targets in a uniform medium, both metal cylinders. Fig. 9 (b) is the image of fig. 9 (a) with an SNR of 0 dB, and fig. 9 (c) is the image of fig. 9 (b) after mutual interference wave suppression by the network. Fig. 9 (d) is the image of fig. 9 (a) with an SNR of −10 dB, and fig. 9 (e) is the image of fig. 9 (d) after suppression. Fig. 9 (f) is the image of fig. 9 (a) with an SNR of −20 dB, and fig. 9 (g) is the image of fig. 9 (f) after suppression. The mutual interference wave suppression network thus has good noise immunity: although no noise was added to the training data set, it still suppresses the mutual interference waves well in noisy images.
In summary, the deep-learning-based ground penetrating radar mutual interference wave suppression method of the present application can suppress the mutual interference waves among multiple targets in a GPR B-scan image, and the mutual interference wave suppression network has strong robustness, coping with mutual interference waves of different forms in different scenes. Even when the data types in the training data set are limited, the network can still effectively suppress the mutual interference waves in multi-target scenes that are not in the training data set, and it also has a good suppression effect on noisy GPR B-scan images.
The invention also discloses a deep-learning-based ground penetrating radar mutual interference wave suppression system, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the deep-learning-based ground penetrating radar mutual interference wave suppression method shown in fig. 1.
The implementation principle of the embodiment is as follows:
Through the above scheme, a network data set can be generated by constructing a simulated detection scene and running simulations, and divided into a training data set and a validation data set. An initial mutual interference wave suppression network is constructed at the same time and trained with the training data set to obtain the basic mutual interference wave suppression network. The basic mutual interference wave suppression network is verified with the validation data set so that its parameters are adjusted, yielding the final mutual interference wave suppression network. A ground penetrating radar image is input into the mutual interference wave suppression network, and through the network's suppression of the multi-target mutual interference waves the ground penetrating radar image after mutual interference wave suppression is finally obtained, improving the detection capability of GPR for underground scenes containing multiple targets.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to imply that the scope of the present application is limited to such examples; the technical features of the above embodiments, or of different embodiments, may also be combined under the idea of the present application, the steps may be implemented in any order, and many other variations of the different aspects of one or more embodiments of the present application exist as described above, which are not provided in detail for the sake of brevity.
One or more embodiments herein are intended to embrace all such alternatives, modifications, and variations that fall within the broad scope of the present application. Therefore, any omissions, modifications, equivalents, improvements, and the like made within the spirit and principles of one or more embodiments of the present application are intended to be included within the scope of the present application.
Claims (9)
1. The method for suppressing the mutual interference waves of the ground penetrating radar based on the deep learning is characterized by comprising the following steps:
the method comprises the steps of constructing a simulated detection scene and an initial mutual interference wave suppression network, wherein the initial mutual interference wave suppression network comprises a trunk feature extraction module, a residual dense module, a self-attention mechanism control module, a feature layer up-sampling module and a convolution prediction network, and the residual dense module and the self-attention mechanism control module form a skip connection part of the initial mutual interference wave suppression network;
Generating a network data set based on the simulated detection scene and through simulated detection;
dividing the network data set into a training data set and a verification data set;
training the initial mutual interference wave suppression network by using the training data set to obtain a trained basic mutual interference wave suppression network;
verifying the basic mutual interference wave suppression network through the verification data set, and determining optimal model parameters of the basic mutual interference wave suppression network;
adjusting parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain a mutual interference wave suppression network;
and obtaining a ground penetrating radar image with mutual interference, and inputting the ground penetrating radar image into the mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression.
2. The method for suppressing the cross interference of the ground penetrating radar based on the deep learning according to claim 1, wherein the generating the network data set based on the simulated detection scene and through the simulated detection comprises the following steps:
based on the empty background of the simulated detection scene, acquiring an empty background detection image through simulation detection;
randomly setting two detection targets in the simulated detection scene, and obtaining a first detection image set through the simulated detection;
Generating a double-target detection data set by combining the first detection image set and the empty background detection image, wherein the double-target detection data set comprises double-target mutual interference wave detection images;
randomly setting three detection targets in the simulated detection scene, and obtaining a second detection image set through the simulated detection;
generating a three-target detection data set by combining the second detection image set and the empty background detection image, wherein the three-target detection data set comprises three-target mutual interference wave detection images;
and summarizing the dual-target detection data set and the three-target detection data set to obtain a network data set.
3. The method for deep learning-based ground penetrating radar cross-interference suppression according to claim 1, wherein the training the initial cross-interference suppression network using the training data set to obtain a trained basic cross-interference suppression network comprises the following steps:
initializing the initial mutual interference wave suppression network, and configuring random initial network parameter weights for the initial mutual interference wave suppression network;
reducing loss errors of the trunk feature extraction module, the residual dense module, the self-attention mechanism control module, the feature layer up-sampling module and the convolution prediction network by using a preset optimizer;
And inputting the training data set into the initial mutual interference wave suppression network to perform periodic batch training, and obtaining a trained basic mutual interference wave suppression network.
4. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 3, wherein the loss function of the basic mutual interference wave suppression network is calculated as follows:
L = (1/(M×N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} (y(i,j) − ŷ(i,j))²

where L is the loss function, M and N are the numbers of rows and columns of the two-dimensional image, y is a detection image of the training data set that does not contain mutual interference waves, ŷ is the output result of the convolution prediction network, and y(i,j) and ŷ(i,j) are the pixel values of the corresponding pixel points in the two images.
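The claim-4 loss is a per-pixel mean squared error over the M×N image; a direct numpy transcription (function name is illustrative):

```python
import numpy as np

def suppression_loss(label, pred):
    # MSE loss of claim 4: `label` is a training image that contains no
    # mutual interference waves, `pred` is the convolution prediction
    # network output; m and n are the image's row and column counts.
    m, n = label.shape
    return float(np.sum((label - pred) ** 2) / (m * n))
```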
5. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein the trunk feature extraction module comprises five feature extraction layers, each feature extraction layer comprises two successive convolution layers, the numbers of convolution kernels of the first to fifth feature extraction layers are 64, 128, 256, 512 and 1024 respectively, and the output of each of the first four feature extraction layers serves as the input of the next feature extraction layer.
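The channel plan of claim 5 can be written down explicitly; the assumption of a single-channel B-scan input (`in_channels=1`) is mine, not stated in the claim:

```python
# Kernel counts of the five feature extraction layers named in claim 5.
BACKBONE_CHANNELS = [64, 128, 256, 512, 1024]

def backbone_plan(in_channels=1):
    # Returns the (input, output) channel widths of each feature extraction
    # layer: the output of each of the first four layers feeds the next,
    # so layer i's output width becomes layer i+1's input width.
    widths = [in_channels] + BACKBONE_CHANNELS
    return list(zip(widths[:-1], widths[1:]))
```

For example, `backbone_plan()` yields five (in, out) pairs ending in (512, 1024), matching the doubling progression of the claim.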
6. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein the residual dense module comprises four residual dense blocks, each residual dense block comprises three dense connection layers and one local feature fusion layer, and the residual dense block is calculated as follows:
F_d = H_d([F_in, F_1, …, F_{d−1}])
F_LF = Conv_{1×1}([F_in, F_1, F_2, F_3])
F_out = F_in + F_LF

where F_in is the input feature of the residual dense block, F_d is the output result of the d-th dense connection layer, H_d denotes a nonlinear transformation, F_LF is the output result of the local feature fusion layer, Conv_{1×1} denotes a 1×1 convolution operation, and F_out, the final output of the residual dense block, is the result of adding F_in and F_LF on the channel.
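A numpy sketch of the claim-6 block under standard residual-dense-network assumptions: each dense connection layer sees the concatenation of all earlier outputs, and 1×1 kernels stand in for whatever kernel size the dense layers actually use (the claim does not specify it):

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 convolution on a (C, H, W) tensor is a per-pixel linear mix
    # of channels; w has shape (C_out, C_in).
    return np.einsum('oc,chw->ohw', w, x)

def residual_dense_block(f_in, dense_weights, fuse_weight):
    # Sketch of one residual dense block of claim 6; weights and the ReLU
    # choice for H_d are illustrative assumptions.
    feats = [f_in]
    for w in dense_weights:                           # three dense connection layers
        x = np.concatenate(feats, axis=0)             # dense connectivity
        feats.append(np.maximum(conv1x1(x, w), 0.0))  # nonlinear transformation H_d
    f_lf = conv1x1(np.concatenate(feats, axis=0), fuse_weight)  # local feature fusion
    return f_in + f_lf                                # channel-wise residual addition
```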
7. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 6, wherein the self-attention mechanism control module comprises a plurality of self-attention mechanism control blocks, and the self-attention mechanism control block is calculated as follows:
q_att = σ₁(W_x^T · x_i + W_g^T · g_i + b_g)
α_i = σ₂(ψ^T · q_att + b_ψ)
x̂_i = α_i · x_i

where x_i is the output feature layer of the i-th residual dense block, g_i is the result of the feature layer upsampling module upsampling the feature layer of the trunk feature extraction module, W_x, W_g and ψ are all convolution operations, W_x and W_g each use a number of convolution kernels equal to one half of the number of input feature channels, ψ uses a single convolution kernel, b_g and b_ψ are the corresponding convolution bias terms, σ₁ is the ReLU function, σ₂ is the Sigmoid function, Θ_att denotes the parameter setting of the calculation (convolution kernel size 1×1, stride 1, padding 0), x̂_i is the final output result of the self-attention mechanism control block, and α_i is the attention coefficient.
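Read as an Attention-U-Net-style gate, which the claim-7 formula resembles, one self-attention mechanism control block can be sketched in numpy; function and argument names are illustrative, and all convolutions are 1×1 as the claim specifies:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x_i, g_i, w_x, w_g, w_psi, b_g, b_psi):
    # x_i: (C, H, W) feature layer of the i-th residual dense block;
    # g_i: (C, H, W) upsampled trunk feature. The gate computes an
    # attention coefficient map alpha and rescales x_i with it.
    q = np.maximum(np.einsum('oc,chw->ohw', w_x, x_i)
                   + np.einsum('oc,chw->ohw', w_g, g_i) + b_g, 0.0)  # ReLU (sigma_1)
    alpha = sigmoid(np.einsum('oc,chw->ohw', w_psi, q) + b_psi)      # attention coefficient
    return x_i * alpha                                               # gated output x_hat_i
```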
8. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein the feature layer upsampling module comprises four feature sampling layers, and each feature sampling layer comprises an upsampling layer, a feature connection layer and three convolution layers.
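One claim-8 feature sampling layer can be sketched as upsampling plus a feature connection (channel concatenation); the 2× factor and nearest-neighbour mode are assumptions not stated in the claim, and the layer's three convolutions are omitted for brevity:

```python
import numpy as np

def feature_sampling_layer(decoder_feat, skip_feat):
    # decoder_feat: (C, H, W) decoder feature; skip_feat: (C', 2H, 2W)
    # encoder skip feature. Nearest-neighbour 2x upsampling is an
    # illustrative choice.
    up = decoder_feat.repeat(2, axis=1).repeat(2, axis=2)   # upsampling layer
    return np.concatenate([up, skip_feat], axis=0)          # feature connection layer
```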
9. A deep learning-based ground penetrating radar mutual interference wave suppression system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 8 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310549759.4A CN116256701B (en) | 2023-05-16 | 2023-05-16 | Ground penetrating radar mutual interference wave suppression method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116256701A true CN116256701A (en) | 2023-06-13 |
CN116256701B CN116256701B (en) | 2023-08-01 |
Family
ID=86686554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310549759.4A Active CN116256701B (en) | 2023-05-16 | 2023-05-16 | Ground penetrating radar mutual interference wave suppression method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116256701B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830331A (en) * | 2018-06-22 | 2018-11-16 | Xi'an Jiaotong University | Ground penetrating radar object detection method based on a fully convolutional network
KR102309343B1 (en) * | 2020-04-01 | 2021-10-06 | Sejong University Industry-Academia Cooperation Foundation | Frequency-wavenumber analysis method and apparatus through deep learning-based super resolution ground penetrating radar image generation
WO2022063727A1 (en) * | 2020-09-25 | 2022-03-31 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Device and method for creating training data for two-dimensional scans of a ground-penetrating radar system |
CN114331890A (en) * | 2021-12-27 | 2022-04-12 | Central South University | Ground penetrating radar B-scan image feature enhancement method and system based on deep learning
CN114758230A (en) * | 2022-04-06 | 2022-07-15 | Guilin University of Electronic Technology | Underground target body classification and identification method based on attention mechanism
CN114966600A (en) * | 2022-07-29 | 2022-08-30 | Central South University | Clutter suppression method and system for B-scan image of ground penetrating radar
CN115291210A (en) * | 2022-07-26 | 2022-11-04 | Harbin Institute of Technology | Three-dimensional image pipeline identification method of 3D-CNN ground penetrating radar combined with attention mechanism
CN115345790A (en) * | 2022-08-02 | 2022-11-15 | Shanghai Institute of Technology | Ground penetrating radar image enhancement method based on window self-attention neural network
Non-Patent Citations (2)
Title |
---|
FEIFEI HOU et al.: "Deep Learning-Based Subsurface Target Detection From GPR Scans", IEEE SENSORS JOURNAL, vol. 21, no. 6, XP011839460, DOI: 10.1109/JSEN.2021.3050262 *
HOU Feifei et al.: "A survey of target detection algorithms for ground penetrating radar B-scan images", Journal of Electronics & Information Technology, vol. 42, no. 1 *
Also Published As
Publication number | Publication date |
---|---|
CN116256701B (en) | 2023-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108229404B (en) | Radar echo signal target identification method based on deep learning | |
Buscombe | Shallow water benthic imaging and substrate characterization using recreational-grade sidescan-sonar | |
CN106772365A (en) | A kind of multipath based on Bayes's compressed sensing utilizes through-wall radar imaging method | |
US8193967B2 (en) | Method and system for forming very low noise imagery using pixel classification | |
Bell et al. | Simulation and analysis of synthetic sidescan sonar images | |
CN111639746B (en) | GNSS-R sea surface wind speed inversion method and system based on CNN neural network | |
CN112766221B (en) | Ship direction and position multitasking-based SAR image ship target detection method | |
CN111722199A (en) | Radar signal detection method based on convolutional neural network | |
CN115291210B (en) | 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism | |
CN111445515B (en) | Underground cylinder target radius estimation method and system based on feature fusion network | |
CN114966600A (en) | Clutter suppression method and system for B-scan image of ground penetrating radar | |
CN114758230A (en) | Underground target body classification and identification method based on attention mechanism | |
Barkataki et al. | Classification of soil types from GPR B scans using deep learning techniques | |
Almaimani | Classifying GPR images using convolutional neural networks | |
CN116256701B (en) | Ground penetrating radar mutual interference wave suppression method and system based on deep learning | |
Del Rio Vera et al. | Automatic target recognition in synthetic aperture sonar images based on geometrical feature extraction | |
CN111931570B (en) | Through-wall imaging radar human body target detection method based on full convolution network | |
Qian et al. | Deep Learning-Augmented Stand-off Radar Scheme for Rapidly Detecting Tree Defects | |
Guo et al. | Research on tunnel lining image target recognition method based on YOLOv3 | |
CN116106833B (en) | Deep learning-based processing method and system for restraining surface layer steel bar echo | |
Stienessen et al. | Comparison of model types for prediction of seafloor trawlability in the Gulf of Alaska by using multibeam sonar data | |
CN117148306A (en) | Root diameter prediction and positioning method based on ground penetrating radar | |
CN115496917B (en) | Multi-target detection method and device in GPR B-Scan image | |
Busson et al. | Seismic shot gather noise localization using a multi-scale feature-fusion-based neural network | |
CN116609759B (en) | Method and system for enhancing and identifying airborne laser sounding seabed weak echo |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||