CN116256701A - Ground penetrating radar mutual interference wave suppression method and system based on deep learning - Google Patents

Ground penetrating radar mutual interference wave suppression method and system based on deep learning

Info

Publication number: CN116256701A
Application number: CN202310549759.4A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN116256701B
Prior art keywords: mutual interference, detection, network, interference wave, layer
Inventors: 雷文太, 檀鑫, 郑智钦, 马亚楼, 庞泽邦
Original and current assignee: Central South University
Events: application filed by Central South University; priority to CN202310549759.4A; publication of CN116256701A; application granted; publication of CN116256701B
Legal status: Granted; Active

Classifications

    • G01S 7/36 — Means for anti-jamming, e.g. ECCM (electronic counter-counter measures), in radar systems (G01S 13/00)
    • G01S 13/885 — Radar or analogous systems specially adapted for ground probing
    • Y02T 10/40 — Engine management systems (climate change mitigation technologies related to transportation)


Abstract

The invention provides a method and a system for suppressing ground penetrating radar mutual interference waves based on deep learning. The method comprises the following steps: constructing a simulated detection scene and an initial mutual interference wave suppression network; generating a network data set from the simulated detection scene through simulated detection; dividing the network data set into a training data set and a verification data set; training the initial mutual interference wave suppression network with the training data set to obtain a trained basic mutual interference wave suppression network; verifying the basic mutual interference wave suppression network with the verification data set and determining its optimal model parameters; adjusting the parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain the mutual interference wave suppression network; and acquiring a ground penetrating radar image with mutual interference and inputting it into the mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression. The invention can suppress mutual interference waves when the ground penetrating radar detects scenes containing multiple targets.

Description

Ground penetrating radar mutual interference wave suppression method and system based on deep learning
Technical Field
The invention belongs to the technical field of ground penetrating radar data processing, and particularly relates to a ground penetrating radar mutual interference wave suppression method and system based on deep learning.
Background
Ground penetrating radar (Ground Penetrating Radar, GPR for short) is a common nondestructive detection method that locates, images, and identifies targets by acquiring the scattered echoes of underground objects. It is efficient, has strong anti-interference and penetrating capability, and is widely applied in civil engineering, geological exploration, the military, and other fields. The high-frequency electromagnetic wave emitted by the GPR is reflected when it encounters underground media with different electromagnetic characteristics, and the position, structure, physical characteristics, and other properties of an underground target can be inferred from the waveform, intensity, and arrival time, among other parameters, of the echo received by the receiving antenna; detecting underground objects can therefore be regarded as analyzing GPR signals. However, when multiple targets exist in the detection scene, electromagnetic waves are scattered repeatedly between the targets, producing mutual interference waves between pairs of targets. These mutual interference waves couple with the real echo signals, so that false-target signatures appear between the echoes of the real targets in the GPR B-scan image, increasing the difficulty of underground target analysis and feature extraction.
Mutual interference waves are one kind of GPR interference wave. When GPR is used for detecting underground pipelines, cavities, mines, and the like, multi-target GPR B-scan images are judged and interpreted by experienced analysts. However, because of the randomness of the detection scene medium, the diversity of detection target types, and the uncertainty of detection target distribution, the forms of mutual interference waves between targets vary widely, making GPR B-scan interpretation time-consuming and error-prone. GPR interference wave suppression has therefore received wide attention from researchers. In the past few years, processing methods based on subspace techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Singular Value Decomposition (SVD), have been used for GPR interference wave suppression and are applicable to mutual interference wave suppression. These methods separate the target echo and the mutual interference wave based on their difference in signal strength; when the signal strengths are similar, however, the two cannot be separated well, and part of the target echo information that should be preserved may be lost.
In addition, techniques based on low-rank sparse representation and morphological component analysis have also been used for GPR interference wave suppression. However, the performance of these methods depends largely on the choice of parameters or dictionaries, and the appropriate parameters or dictionaries differ with the interference wave morphology of each scene. These methods are also weak at identifying the non-uniform mutual interference waves caused by electromagnetic scattering among multiple underground targets in GPR B-scan images, and they require considerable time to process input images, so they are not suitable for application scenes with high real-time requirements. Improving the detection capability of GPR when the subsurface contains multiple targets is therefore an urgent problem.
Disclosure of Invention
The invention provides a deep-learning-based ground penetrating radar mutual interference wave suppression method and system, which solve the problem that GPR detection is easily disturbed by mutual interference waves in underground scenes containing multiple targets.
In a first aspect, the present invention provides a method for suppressing mutual interference of ground penetrating radar based on deep learning, the method comprising the following steps:
constructing a simulated detection scene and an initial mutual interference wave suppression network;
generating a network data set based on the simulated detection scene and through simulated detection;
dividing the network data set into a training data set and a verification data set;
training the initial mutual interference wave suppression network by using the training data set to obtain a trained basic mutual interference wave suppression network;
verifying the basic mutual interference wave suppression network through the verification data set, and determining optimal model parameters of the basic mutual interference wave suppression network;
adjusting parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain a mutual interference wave suppression network;
and obtaining a ground penetrating radar image with mutual interference, and inputting the ground penetrating radar image into the mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression.
Optionally, the generating the network data set based on the simulated detection scene and through the simulated detection includes the following steps:
based on the empty background of the simulated detection scene, acquiring an empty background detection image through simulated detection;
randomly setting two detection targets in the simulated detection scene, and obtaining a first detection image set through the simulated detection;
generating a double-target detection data set by combining the first detection image set and the empty background detection image, wherein the double-target detection data set comprises double-target mutual interference wave detection images;
randomly setting three detection targets in the simulated detection scene, and obtaining a second detection image set through the simulated detection;
generating a three-target detection data set by combining the second detection image set and the empty background detection image, wherein the three-target detection data set comprises three-target mutual interference wave detection images;
and summarizing the dual-target detection data set and the three-target detection data set to obtain a network data set.
Optionally, the initial mutual interference wave suppression network includes a trunk feature extraction module, a residual dense module, a self-attention mechanism control module, a feature layer up-sampling module, and a convolution prediction network, where the residual dense module and the self-attention mechanism control module form the skip connection part of the initial mutual interference wave suppression network.
Optionally, training the initial mutual interference wave suppression network by using the training data set to obtain the trained basic mutual interference wave suppression network includes the following steps:
initializing the initial mutual interference wave suppression network, and configuring random initial network parameter weights for the initial mutual interference wave suppression network;
reducing the loss errors of the trunk feature extraction module, the residual dense module, the self-attention mechanism control module, the feature layer up-sampling module, and the convolution prediction network by using a preset optimizer;
and inputting the training data set into the initial mutual interference wave suppression network for periodic batch training, obtaining a trained basic mutual interference wave suppression network.
Optionally, the loss function of the basic mutual interference wave suppression network (an equation image in the original filing) is defined over the following quantities: $\mathcal{L}$ is the loss function; $M$ and $N$ are the numbers of rows and columns of the two-dimensional image; $Y$ is a detection image of the training data set that does not contain mutual interference waves; $\hat{Y}$ is the output result of the convolution prediction network; and $Y_{i,j}$ and $\hat{Y}_{i,j}$ are the pixel values at pixel $(i,j)$ of the respective images.
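The functional form of the loss is not recoverable from the text alone. A pixel-wise mean squared error over the $M \times N$ image is one reading consistent with the variable definitions above; as a hedged sketch:

$$\mathcal{L}(Y, \hat{Y}) = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl(Y_{i,j} - \hat{Y}_{i,j}\bigr)^{2}$$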
Optionally, the trunk feature extraction module includes five feature extraction layers, and each feature extraction layer includes two consecutive convolution layers. The numbers of convolution kernels in the convolution layers of the first through fifth feature extraction layers are 64, 128, 256, 512, and 1024, respectively. The output of each of the first four feature extraction layers serves as the input of the next feature extraction layer.
Optionally, the residual dense module includes four residual dense blocks, and each residual dense block includes three dense connection layers and a local feature fusion layer. Written in LaTeX in place of the original equation images, the residual dense block computes

$$F_d = H_d\bigl([F_{in}, F_1, \dots, F_{d-1}]\bigr), \qquad F_{LFF} = \mathrm{Conv}_{1 \times 1}\bigl([F_{in}, F_1, F_2, F_3]\bigr), \qquad F_{out} = F_{in} + F_{LFF}$$

where $F_{in}$ is the input feature of the residual dense block; $F_d$ ($d = 1, 2, 3$) is the output of the $d$-th dense connection layer; $H_d$ represents a nonlinear transformation; $[\cdot]$ denotes concatenation in the channel dimension; $F_{LFF}$ is the output of the local feature fusion layer; $\mathrm{Conv}_{1 \times 1}$ represents a $1 \times 1$ convolution; and the final output $F_{out}$ of the residual dense block is the channel-wise addition of $F_{in}$ and $F_{LFF}$.
Optionally, the self-attention mechanism control module includes a plurality of self-attention mechanism control blocks. Written in LaTeX in place of the original equation images, each self-attention mechanism control block computes

$$\alpha_i = \sigma_2\Bigl(\psi\bigl(\sigma_1(W_r R_i + W_u U_i + b_1)\bigr) + b_2\Bigr), \qquad A_i = \alpha_i \cdot R_i$$

where $R_i$ is the output feature layer of the $i$-th residual dense block; $U_i$ is the result of the feature layer up-sampling module up-sampling the corresponding feature layer of the trunk feature extraction module; $W_r$, $W_u$, and $\psi$ are convolution operations, $W_r$ and $W_u$ using a number of convolution kernels equal to one half of the input features and $\psi$ using a single convolution kernel; $b_1$ and $b_2$ are the corresponding convolution bias terms; $\sigma_1$ is the ReLU function and $\sigma_2$ the Sigmoid function; all of these convolutions use a $1 \times 1$ kernel, stride 1, and padding 0; $\alpha_i$ is the attention coefficient; and $A_i$ is the final output of the self-attention mechanism control block.
Optionally, the feature layer up-sampling module includes four feature sampling layers, each of which includes an up-sampling layer, a feature connection layer and three convolution layers.
In a second aspect, the present invention also provides a deep-learning-based ground penetrating radar mutual interference wave suppression system, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method described in the first aspect when executing the computer program.
The beneficial effects of the invention are as follows:
the invention constructs a mutual interference wave suppression network through the following steps: constructing a simulated detection scene and an initial mutual interference wave suppression network; generating a network data set from the simulated detection scene through simulated detection; dividing the network data set into a training data set and a verification data set; training the initial mutual interference wave suppression network with the training data set to obtain a trained basic mutual interference wave suppression network; verifying the basic mutual interference wave suppression network with the verification data set and determining its optimal model parameters; and adjusting the parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain the mutual interference wave suppression network. After the mutual interference wave suppression network is obtained, a ground penetrating radar image disturbed by multi-target mutual interference can be acquired and input into the network; through the network's suppression of the multi-target mutual interference waves, the ground penetrating radar image after mutual interference wave suppression is finally obtained, improving the detection capability of GPR in underground scenes containing multiple targets.
Drawings
Fig. 1 is a schematic flow chart of a method for suppressing mutual interference waves of ground penetrating radars based on deep learning.
Fig. 2 is a schematic diagram of the structure of an initial mutual interference wave suppression network.
Fig. 3 is a schematic structural diagram of a trunk feature extraction module.
Fig. 4 is a schematic diagram of the structure of the residual dense block.
Fig. 5 is a schematic diagram of the structure of the self-attention mechanism control block.
Fig. 6 is a schematic diagram of the architecture of a feature layer upsampling module and a convolutional prediction network.
Fig. 7 is a schematic diagram of the test results of the mutual interference wave suppression network on the test data set.
Fig. 7 (a) is a detection image, containing mutual interference waves, of two detection targets of the same attribute in a uniform medium.
Fig. 7 (b) is the image of fig. 7 (a) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 7 (c) is a detection image, containing mutual interference waves, of three detection targets of the same attribute in a uniform medium.
Fig. 7 (d) is the image of fig. 7 (c) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 8 is a schematic diagram of the test results of the mutual interference wave suppression network in the applicability test.
Fig. 8 (a) is a detection image, containing mutual interference waves, of two detection targets of different attributes in a uniform medium.
Fig. 8 (b) is the image of fig. 8 (a) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 8 (c) is a detection image, containing mutual interference waves, of three detection targets of different attributes in a uniform medium.
Fig. 8 (d) is the image of fig. 8 (c) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 8 (e) is a detection image, containing mutual interference waves, of four detection targets of the same attribute in a uniform medium.
Fig. 8 (f) is the image of fig. 8 (e) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 9 is a schematic diagram of the test results of the mutual interference wave suppression network in the noise immunity test.
Fig. 9 (a) is a detection image, containing mutual interference waves, of two detection targets of the same attribute in a uniform medium.
Fig. 9 (b) is the image of fig. 9 (a) at a signal-to-noise ratio (SNR) of 0 dB.
Fig. 9 (c) is the image of fig. 9 (a) at an SNR of -10 dB.
Fig. 9 (d) is the image of fig. 9 (a) at an SNR of -20 dB.
Fig. 9 (e) is the image of fig. 9 (b) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 9 (f) is the image of fig. 9 (c) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 9 (g) is the image of fig. 9 (d) after mutual interference wave suppression by the mutual interference wave suppression network.
Detailed Description
The invention discloses a ground penetrating radar mutual interference wave suppression method based on deep learning.
Referring to fig. 1, in one embodiment, the deep-learning-based ground penetrating radar mutual interference wave suppression method specifically includes the following steps:
S101, constructing a simulation detection scene and an initial mutual interference wave suppression network.
The GPR simulation modeling software gprMax is used to construct the simulated detection scene, a two-dimensional underground area of 1.44 m × 0.32 m. For diversity of the data sets, the GPR antenna is simulated using theoretical Hertzian dipole sources fed with Ricker waveforms having center frequencies fc of 1.00 GHz, 1.25 GHz, and 1.50 GHz, respectively. Two kinds of underground media are used: the first is a uniform underground medium with a relative dielectric constant of 6 to 12; the second is a soil mixing model closer to measured data, composed of sand and clay, with the sand fraction varied between 20% and 80% and the upper limit of the soil water content varied between 0.1% and 25%, the model being formed by mixing 50 soils with different water contents.
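As an illustration of how such a scene can be described to gprMax, the sketch below writes a minimal 2-D input file with one Ricker-fed Hertzian dipole and two buried cylinders. The command names follow the gprMax 3.x documentation; the cell size, time window, antenna positions, and target layout are assumptions, not values from the patent.

```python
# Hypothetical sketch: emit a gprMax input file for one dual-target 2-D scene.
# Every numeric value not stated in the patent (cell size, time window,
# antenna offsets, target positions) is assumed for illustration only.
scene = """#title: dual-target GPR scene (sketch)
#domain: 1.44 0.32 0.002
#dx_dy_dz: 0.002 0.002 0.002
#time_window: 12e-9

#material: 6 0 1 0 half_space

#waveform: ricker 1 1.0e9 my_ricker
#hertzian_dipole: z 0.04 0.30 0 my_ricker
#rx: 0.08 0.30 0
#src_steps: 0.005 0 0
#rx_steps: 0.005 0 0

#box: 0 0 0 1.44 0.28 0.002 half_space
#cylinder: 0.50 0.15 0 0.50 0.15 0.002 0.03 pec
#cylinder: 0.90 0.12 0 0.90 0.12 0.002 0.02 pec
"""

with open("dual_target_scene.in", "w") as f:
    f.write(scene)
# A B-scan is then obtained by running gprMax over this file once per trace,
# with the source and receiver stepped by #src_steps / #rx_steps per run.
```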
S102, generating a network data set based on the simulated detection scene through the simulated detection.
Based on the simulated detection scene constructed in step S101, the gprMax software is used to perform detection simulation and calculation, thereby generating the network data set.
S103, dividing the network data set into a training data set and a verification data set.
The network data set may be divided, according to a certain proportion, into a training data set for training the model network and a verification data set for verifying it, with the number of training samples greater than the number of verification samples. In this embodiment, the ratio of the training data set to the verification data set may be 8:2.
S104, training the initial mutual interference wave suppression network by using the training data set to obtain a trained basic mutual interference wave suppression network.
The initial mutual interference wave suppression network is initialized, and then the training data set is input into the initial mutual interference wave suppression network for training, so that a trained basic mutual interference wave suppression network is obtained.
S105, verifying the basic mutual interference wave suppression network through the verification data set, and determining the optimal model parameters of the basic mutual interference wave suppression network.
And inputting the verification data set into the basic mutual interference wave suppression network, and adjusting the weight parameters of the basic mutual interference wave suppression network according to the loss value returned by the network so as to determine the optimal model parameters of the basic mutual interference wave suppression network.
S106, adjusting parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain the mutual interference wave suppression network.
S107, acquiring a ground penetrating radar image with mutual interference, and inputting the ground penetrating radar image into a mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression.
The implementation principle of the embodiment is as follows:
a simulated detection scene is constructed, and the network data set is generated through simulation; the network data set is divided into a training data set and a verification data set. Meanwhile, an initial mutual interference wave suppression network is constructed and trained with the training data set to obtain the basic mutual interference wave suppression network. The basic mutual interference wave suppression network is verified with the verification data set, and its parameters are adjusted to obtain the final mutual interference wave suppression network. A ground penetrating radar image is input into the mutual interference wave suppression network; through the network's suppression of the multi-target mutual interference waves, the image after mutual interference wave suppression is finally obtained, improving the detection capability of GPR in underground scenes containing multiple targets.
In one embodiment, the step S102 of generating a network data set based on the simulated detection scene and through simulated detection specifically includes the following steps:
based on the empty background of the simulated detection scene, acquiring an empty background detection image through simulated detection;
randomly setting two detection targets in a simulated detection scene, and obtaining a first detection image set through simulated detection;
generating a double-target detection data set by combining the first detection image set and the empty background detection image, wherein the double-target detection data set comprises double-target mutual interference detection images;
randomly setting three detection targets in the simulated detection scene, and obtaining a second detection image set through simulated detection;
generating a three-target detection data set by combining the second detection image set and the empty background detection image, wherein the three-target detection data set comprises three-target mutual interference detection images;
and summarizing the double-target detection data set and the three-target detection data set to obtain a network data set.
In this embodiment, circular targets with radii of 1 cm to 5 cm are used as detection targets. The targets are made of metal or plastic, the distance between detection targets is set to 0.12 m to 0.40 m, and the targets are buried 9 cm to 23 cm underground in the simulated detection scene.
Before the simulated detection, the dielectric parameters of the underground background medium and the configuration parameters of the transmitting and receiving antennas are set. First, no detection target is placed in the simulated detection scene, so the scene is an empty background; the GPR B-scan image of the empty-background scene, i.e., the empty background detection image, is computed through gprMax simulation and recorded as image A.
In the same background medium, two detection targets, detection target 1 and detection target 2, are randomly placed in the simulated detection scene. The two targets do not touch each other and are fully buried. Their burial positions and material parameters are set, the other simulation parameters are kept unchanged, and the GPR B-scan image of the scene with both targets is obtained through simulation and recorded as image B1.
In the above detection scene, detection target 1 and detection target 2 are removed in turn, keeping only one target at a time with the other simulation parameters unchanged, and the GPR B-scan images with only one detection target in the scene are obtained through simulation and recorded as image C1 and image C2: image C1 is the GPR B-scan image with only detection target 1 retained, and image C2 is the GPR B-scan image with only detection target 2 retained. Images A, B1, C1, and C2 have the same size. Images B1, C1, and C2 constitute the first detection image set.
Let image D1 = image C1 - image A; D1 is then the B-scan image of detection target 1 with the background wave removed. Let image D2 = image C2 - image A; D2 is the B-scan image of detection target 2 with the background wave removed. Let image D3 = image D1 + image D2; D3 is the superposition of the background-removed B-scans of detection targets 1 and 2. Let image D0 = image B1 - image A; D0 is the background-removed B-scan when detection targets 1 and 2 are present simultaneously, and it contains the mutual interference waves of the two targets.
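A minimal NumPy sketch of this pair construction, assuming the four B-scans have already been loaded as equal-sized arrays (the zero arrays below are stand-ins for actual gprMax output):

```python
import numpy as np

# Illustrative stand-ins: in practice these come from gprMax output.
# Shapes are (time samples, traces), per the 256 x 256 B-scans used later.
A  = np.zeros((256, 256))   # empty-background B-scan
B1 = np.zeros((256, 256))   # B-scan with both targets (contains interference)
C1 = np.zeros((256, 256))   # B-scan with only target 1
C2 = np.zeros((256, 256))   # B-scan with only target 2

D1 = C1 - A        # target 1 echo, background removed
D2 = C2 - A        # target 2 echo, background removed
D3 = D1 + D2       # label: interference-free superposition of both echoes
D0 = B1 - A        # input: both targets, mutual interference included

# (D0, D3) is one dual-target training pair: the network learns D0 -> D3.
```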
The above is the generation process of a single dual-target detection data pair $(D0, D3)_1$. When the initial mutual interference wave suppression network is subsequently trained, D0 is the input and D3 is the corresponding label output. Deep-learning training of the initial mutual interference wave suppression network requires a large number of data sets, so the above simulated detection process is repeated; on each repetition, the dielectric parameters of the background medium, the configuration parameters of the transmitting and receiving antennas, and the parameters of the two detection targets are randomly generated. Repeating the process N times yields N dual-target detection data pairs, recorded as $(D0, D3)_1, (D0, D3)_2, \dots, (D0, D3)_N$.
In order to increase the diversity of the network data sets, a three-target detection data set containing three detection targets is also constructed, and the three-target detection data set generation method comprises the following steps:
in the same background medium as the dual-target simulated detection, three detection targets, detection target 3, detection target 4, and detection target 5, are randomly placed. The three targets do not touch each other and are fully buried; their burial positions and material parameters are set, the other simulation parameters are kept unchanged, and the GPR B-scan image of the three buried targets is obtained through simulation and recorded as image B2.
In the above detection scene, two of the detection targets are removed in turn, keeping only one target at a time with the other simulation parameters unchanged, and the GPR B-scan images with only one buried detection target are obtained through simulation and recorded as image C3, image C4, and image C5: image C3 retains only detection target 3, image C4 only detection target 4, and image C5 only detection target 5. Images A, B2, C3, C4, and C5 have the same size. Images B2, C3, C4, and C5 constitute the second detection image set.
Let image D5 = image C3 - image A; D5 is then the B-scan image of detection target 3 with the background wave removed. Let image D6 = image C4 - image A; D6 is the B-scan image of detection target 4 with the background wave removed. Let image D7 = image C5 - image A; D7 is the B-scan image of detection target 5 with the background wave removed. Let image D8 = image D5 + image D6 + image D7; D8 is the superposition of the background-removed B-scans of detection targets 3, 4, and 5. Let image D4 = image B2 - image A; D4 is the background-removed B-scan when detection targets 3, 4, and 5 coexist, and it contains the mutual interference waves among the three targets.
The above is the generation process of a single three-target detection data pair $(D4, D8)_1$. When the initial mutual interference wave suppression network is subsequently trained, D4 is the input and D8 is the corresponding label output. Deep-learning training requires a large number of data sets, so the above simulated detection process is repeated; on each repetition, the dielectric parameters of the background medium, the configuration parameters of the transmitting and receiving antennas, and the parameters of the three detection targets are randomly generated. Repeating the process N times yields N three-target detection data pairs, recorded as $(D4, D8)_1, (D4, D8)_2, \dots, (D4, D8)_N$.
The N dual-target detection data pairs $(D0, D3)_1, (D0, D3)_2, \dots, (D0, D3)_N$ and the N three-target detection data pairs $(D4, D8)_1, (D4, D8)_2, \dots, (D4, D8)_N$ together constitute the network data set. To facilitate network training, the GPR B-scan images in the data set are 256 × 256 in size, i.e., each image has 256 A-scan traces, and each A-scan trace has 256 time sampling points.
In one embodiment, referring to fig. 2, the initial mutual interference wave suppression network includes a trunk feature extraction module Encoder, a residual dense module RDBs, a self-attention mechanism control module AGs, a feature layer up-sampling module Decoder, and a convolution prediction network, where the residual dense module RDBs and the self-attention mechanism control module AGs form the skip connection part of the initial mutual interference wave suppression network.
The trunk feature extraction module Encoder extracts the picture features; five effective feature layers of the input picture are obtained after it. The residual dense module RDBs suppresses mutual interference information that has not been completely eliminated, especially in features from the lower layers; it receives the effective feature layers of different scales generated by the Encoder and outputs its results to the self-attention mechanism control module AGs. The self-attention mechanism control module AGs handles the focusing of the polymorphic mutual interference waves of different scenes: it extracts the regions of interest in the picture and performs concentrated mutual interference wave suppression on those specific regions, and after completing the focusing task it passes the feature layers to the feature layer up-sampling module Decoder. The Decoder receives the output feature layers of the AGs and the feature layer output by the last layer of the Encoder and up-samples them; finally, the convolution prediction network classifies the pictures to generate the final result image.
In one embodiment, step S104 trains the initial mutual interference wave suppression network using the training data set; obtaining the trained basic mutual interference wave suppression network specifically includes the following steps:
initializing an initial mutual interference wave suppression network, and configuring random initial network parameter weights for the initial mutual interference wave suppression network;
reducing the loss errors of the trunk feature extraction module, the residual dense module, the self-attention mechanism control module, the feature layer up-sampling module, and the convolution prediction network by using a preset optimizer;
and inputting the training data set into the initial mutual interference wave suppression network for periodic batch training, obtaining the trained basic mutual interference wave suppression network.
In this embodiment, the preset optimizer is an RMSProp optimizer. The weight attenuation coefficient of the RMSProp optimizer and the learning rate of the initial mutual interference wave suppression network are set to fixed values (given as equation images in the original filing), and the momentum parameter is set to 0.9. The weight parameters of the initial mutual interference wave suppression network are randomly initialized, the training data set is passed into the network in batches for training, and the batch size of the network training is set to 8. Each batch contains k GPR B-scan images with mutual interference waves and the k corresponding GPR B-scan images of the same size without mutual interference waves; the RMSProp optimizer is used to reduce the loss errors of the trunk feature extraction module Encoder, the residual dense module RDBs, the self-attention mechanism control module AGs, the feature layer up-sampling module Decoder, and the convolution prediction network, and the initial mutual interference wave suppression network is trained. The batch training is repeated until all images in the training data set have been passed into the initial mutual interference wave suppression network, completing one epoch of training.
The epoch training is repeated 150 times, until the loss value of the initial mutual interference wave suppression network no longer decreases and the output value of its loss function becomes stable; training of the initial mutual interference wave suppression network is then complete, and the network weight parameters at that point are saved.
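A minimal PyTorch sketch of this training procedure; the suppression network is replaced by a trivial stand-in so the sketch runs on its own, and the learning rate and weight-decay values are assumptions (the originals are equation images in the filing):

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins so the sketch runs on its own: random (input, label) B-scan pairs
# and a trivial conv layer in place of the suppression network described above.
d0 = torch.randn(32, 1, 256, 256)   # interfered inputs
d3 = torch.randn(32, 1, 256, 256)   # clean labels (no mutual interference)
net = nn.Conv2d(1, 1, kernel_size=3, padding=1)

loader = DataLoader(TensorDataset(d0, d3), batch_size=8, shuffle=True)

# Patent: RMSProp with momentum 0.9, batch size 8, 150 epochs. The learning
# rate and weight decay below are assumed values, not the patent's.
opt = optim.RMSprop(net.parameters(), lr=1e-4, momentum=0.9, weight_decay=1e-8)
loss_fn = nn.MSELoss()              # assumed pixel-wise loss

for epoch in range(150):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
```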
In one embodiment, the loss function of the basic mutual interference wave suppression network is the formula described above (an equation image in the original filing), in which $\mathcal{L}$ is the loss function; $M$ and $N$ are the numbers of rows and columns of the two-dimensional image; $Y$ is a detection image of the training data set that does not contain mutual interference waves; $\hat{Y}$ is the output result of the convolution prediction network; and $Y_{i,j}$ and $\hat{Y}_{i,j}$ are the pixel values at pixel $(i,j)$.
In one embodiment, referring to fig. 3, the trunk feature extraction module Encoder extracts the features of a picture; five feature layers of different scales of the input picture are obtained after it. The Encoder is divided into five feature extraction layers. The first feature extraction layer is two consecutive convolution layers with 64 convolution kernels each; after each convolution the data are normalized and then activated with the ReLU function, and the output of the first layer of the Encoder is the feature FE1. The second to fifth feature extraction layers of the Encoder use a similar structure: each performs a maximum pooling operation on the output of the previous feature extraction layer and then applies two consecutive convolution layers, with the data normalized and activated with the ReLU function after each convolution.

The two consecutive convolution layers within each of the second to fifth feature extraction layers use the same number of convolution kernels: 128 in the second layer, 256 in the third, 512 in the fourth, and 1024 in the fifth. The output feature maps of the second to fifth feature extraction layers of the Encoder are written here as FE2, FE3, FE4, and FE5 (these symbols, like the others below, replace equation images in the original filing). All convolution layers in the Encoder use stride 1, padding 1, and 3 × 3 kernels, the window size of the maximum pooling layer is 3, and the output of the Encoder is the features FE1, FE2, FE3, FE4, FE5.
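A PyTorch sketch of an encoder with these layer widths; BatchNorm is assumed for the "data normalization" step, single-channel input is assumed, and the pooling stride and padding are assumptions chosen so each stage halves the resolution:

```python
import torch
from torch import nn

def double_conv(c_in, c_out):
    # two consecutive 3x3 convolutions, each followed by normalization and ReLU
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=1, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, stride=1, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        widths = [64, 128, 256, 512, 1024]
        self.blocks = nn.ModuleList(
            [double_conv(1, widths[0])] +
            [double_conv(widths[i - 1], widths[i]) for i in range(1, 5)]
        )
        # The text gives a pooling window of 3; stride 2 and padding 1 are
        # assumed so that each stage halves the spatial resolution.
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        feats = []
        for i, block in enumerate(self.blocks):
            if i > 0:
                x = self.pool(x)
            x = block(x)
            feats.append(x)      # FE1 ... FE5
        return feats
```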
In one embodiment, the residual dense module RDBs suppresses mutual interference information that has not been completely eliminated, especially in low-level features. The RDBs receives the feature layers of different scales generated by the trunk feature extraction module Encoder (FE1, FE2, FE3, FE4) and then outputs its results to the self-attention mechanism control module AGs.
The residual dense module RDBs is composed of four residual dense blocks RDB. Referring to fig. 4, each residual dense block RDB includes three dense connection layers and a local feature fusion layer. The input of each dense connection layer is the concatenation, in the channel dimension, of the block input and the output feature maps of all preceding dense connection layers. Each dense connection layer is realized by a 3 × 3 convolution and a ReLU function, with both the convolution stride and the padding equal to 1; the number of convolution kernels is set to one third of the number of input feature map channels of the residual dense block RDB, which is also called the growth rate of the residual dense module. The local feature fusion layer uses a 1 × 1 convolution (stride 1, padding 0) to adaptively fuse the outputs of the dense connection layers with the block input, yielding a feature fusion map. Residual learning is introduced after the local feature fusion layer: the input feature map and the feature fusion map are combined by channel-wise addition to obtain the final result of the residual dense block RDB.
In this embodiment, written in LaTeX in place of the original equation images, the residual dense block RDB computes:

$$F_d = H_d\bigl([F_{in}, F_1, \dots, F_{d-1}]\bigr), \qquad F_{LFF} = \mathrm{Conv}_{1 \times 1}\bigl([F_{in}, F_1, F_2, F_3]\bigr), \qquad F_{out} = F_{in} + F_{LFF}$$

where $F_{in}$ is the input feature of the residual dense block; $F_d$ ($d = 1, 2, 3$) is the output of the $d$-th dense connection layer; $H_d$ represents a nonlinear transformation comprising a 3 × 3 convolution operation and a ReLU function; $[\cdot]$ denotes concatenation in the channel dimension; $F_{LFF}$ is the output of the local feature fusion layer; $\mathrm{Conv}_{1 \times 1}$ represents a 1 × 1 convolution; and the final output $F_{out}$ of the residual dense block is the channel-wise addition of $F_{in}$ and $F_{LFF}$.
The output results FE1, FE2, FE3, and FE4 of the first four feature extraction layers of the trunk feature extraction module Encoder serve not only as the inputs of the next layers within the Encoder itself but also as the inputs of the residual dense module RDBs. FE1, FE2, FE3, and FE4 are input into the four residual dense blocks RDB of the module respectively, and the outputs of the RDBs are written here as the features R1, R2, R3, and R4. These feature layers serve as part of the input of the self-attention mechanism control module AGs.
In one embodiment, referring to fig. 5, the self-attention mechanism control module AGs handles the focusing of the polymorphic mutual interference waves of different scenes: it extracts the regions of interest in the picture, performs concentrated mutual interference wave suppression on those specific regions, and after completing the focusing task inputs the feature layers to the feature layer up-sampling module Decoder. The module includes a plurality of self-attention mechanism control blocks AG, each of which has two inputs: one is an output feature layer $R_i$ of the residual dense module RDBs, where $i \in \{1, 2, 3, 4\}$; the other is the result $U_i$ of the Decoder up-sampling the feature layer output by the corresponding layer of the trunk feature extraction module Encoder. The self-attention mechanism control block AG computes the attention coefficient $\alpha_i$ of the output feature layer $R_i$ of the residual dense block RDB by collecting the region features that the two input feature layers have in common; $\alpha_i$ weights $R_i$, and the final output of the AG is the product of $R_i$ and $\alpha_i$. The final outputs of the AG blocks at the different levels of the module are written here as the features A1, A2, A3, and A4.
In the present embodiment, written in LaTeX in place of the original equation images, the self-attention mechanism control block AG computes:

$$\alpha_i = \sigma_2\Bigl(\psi\bigl(\sigma_1(W_r R_i + W_u U_i + b_1)\bigr) + b_2\Bigr), \qquad A_i = \alpha_i \cdot R_i$$

where $R_i$ is the output feature layer of the $i$-th residual dense block; $U_i$ is the result of the feature layer up-sampling module up-sampling the corresponding feature layer of the trunk feature extraction module; $W_r$, $W_u$, and $\psi$ are convolution operations, $W_r$ and $W_u$ using a number of convolution kernels equal to one half of the input features and $\psi$ using a single convolution kernel; $b_1$ and $b_2$ are the corresponding convolution bias terms; $\sigma_1$ is the ReLU function and $\sigma_2$ the Sigmoid function; all of these convolutions use a 1 × 1 kernel, stride 1, and padding 0; $\alpha_i$ is the attention coefficient; and $A_i$ is the final output of the self-attention mechanism control block.
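A PyTorch sketch of one self-attention mechanism control block implementing the formula above; it assumes the gated feature R and the up-sampled feature U carry the same number of channels, which is consistent with the channel counts given elsewhere in the text:

```python
import torch
from torch import nn

class AttentionGate(nn.Module):
    """Attention gate over a skip connection: r is the residual-dense-block
    output, u the up-sampled decoder feature at the same scale. All convs
    here are 1x1 with stride 1 and padding 0, as stated in the text."""
    def __init__(self, channels):
        super().__init__()
        inter = channels // 2                      # half the input channels
        self.w_r = nn.Conv2d(channels, inter, 1)   # applied to r (bias = b term)
        self.w_u = nn.Conv2d(channels, inter, 1)   # applied to u
        self.psi = nn.Conv2d(inter, 1, 1)          # single-kernel conv
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, r, u):
        # alpha: per-pixel attention coefficient in (0, 1)
        alpha = self.sigmoid(self.psi(self.relu(self.w_r(r) + self.w_u(u))))
        return r * alpha                           # gated skip feature A_i
```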
In one embodiment, referring to fig. 6, the feature layer up-sampling module Decoder includes four feature sampling layers of similar structure, each comprising an up-sampling layer, a feature connection layer, and three convolution layers. The Decoder receives the output feature layers of the self-attention mechanism control module AGs and the feature layer output by the last layer of the trunk feature extraction module Encoder, and up-samples them. The first feature sampling layer of the Decoder first receives the feature layer FE5 output by the last feature extraction layer of the Encoder and applies up-sampling and one convolution layer; the convolution layer comprises convolution, data normalization, and ReLU activation, with a 3 × 3 kernel, stride 1, and padding 1, and the number of convolution kernels is one half that of FE5, i.e., 512. The resulting feature, written here as U4, is passed together with the output feature R4 of the residual dense module RDBs into the self-attention mechanism control block AG to obtain the feature A4. U4 and A4 are then concatenated, and the concatenation is passed into two consecutive convolution layers to obtain the final result P1 of the first feature sampling layer of the Decoder. In the two consecutive convolution layers, the data are normalized and activated with the ReLU function after each convolution; the kernels are 3 × 3, the strides 1, the paddings 1, and the number of convolution kernels is 512.
The final output P1 of the first feature sampling layer of the Decoder serves as the input of the next feature sampling layer, and the operational flow of up-sampling, convolution layer, attention mechanism control block AG, concatenation, and two consecutive convolution layers described above is repeated. The Decoder has four feature sampling layers in total. The inputs of the first feature sampling layer are the Encoder output feature FE5 and the RDBs output feature R4; its output is P1, and the convolutions in this layer use 512 kernels. The inputs of the second feature sampling layer are the first layer's output P1 and the RDBs output feature R3; its output is P2, and the convolutions use 256 kernels. The inputs of the third feature sampling layer are the second layer's output P2 and the RDBs output feature R2; its output is P3, and the convolutions use 128 kernels. The inputs of the fourth feature sampling layer are the third layer's output P3 and the RDBs output feature R1; its output is P4, and the convolutions use 64 kernels. All convolutions in the Decoder use 3 × 3 kernels with stride 1 and padding 1, and the outputs of the Decoder are P1, P2, P3, P4 (like the other symbols above, these names stand in for equation images in the original filing).
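A PyTorch sketch of one feature sampling layer of the Decoder, reusing the AttentionGate sketch above; the bilinear up-sampling mode is an assumption, as the text does not state the up-sampling method:

```python
import torch
from torch import nn

class DecoderLayer(nn.Module):
    """One feature sampling layer: up-sample the deeper feature, halve its
    channels with a 3x3 conv, gate the skip feature with the attention gate
    (see the AttentionGate sketch above), concatenate, then two 3x3 convs."""
    def __init__(self, c_in):              # c_in: channels of the deeper feature
        super().__init__()
        c_out = c_in // 2
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.pre = nn.Sequential(          # conv + normalization + ReLU
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.gate = AttentionGate(c_out)   # from the sketch above
        self.post = nn.Sequential(         # two consecutive conv layers
            nn.Conv2d(2 * c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    def forward(self, deep, skip):
        u = self.pre(self.up(deep))        # up-sampled feature, channels halved
        a = self.gate(skip, u)             # attention-gated skip feature
        return self.post(torch.cat([u, a], dim=1))
```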
In another embodiment, the network data set is computed with the gprMax simulation software and includes 1000 GPR B-scan images with mutual interference waves and 1000 GPR B-scan images without mutual interference waves. Of the 1000 sets of network data, 800 are used as the training data set of the initial mutual interference wave suppression network, 100 as the verification data set of the basic mutual interference wave suppression network, and 100 as the test data set. To facilitate network training, the GPR B-scan images containing mutual interference waves are unified in length and width to 256 × 256, i.e., each image has 256 A-scan traces, and each A-scan trace has 256 time sampling points.
In this embodiment, an RMSProp optimizer is used; its weight attenuation coefficient and the learning rate of the initial mutual interference wave suppression network are set to fixed values (given as equation images in the original filing), and the momentum parameter is set to 0.9. The weight parameters of the initial mutual interference wave suppression network are randomly initialized, the 800 training data sets are passed into the network for training with a batch size of 8, and after 150 epochs of training in total the loss value of the network no longer decreases; the network weight parameters at that point are saved.
The 100 verification data sets are passed into the trained basic mutual interference wave suppression network, and the weight parameters of the basic network are adjusted according to the loss values returned by the network to determine the optimal model parameters. The parameters of the basic mutual interference wave suppression network are adjusted based on the optimal model parameters to obtain the mutual interference wave suppression network. The 100 test data sets containing mutual interference waves are then passed into the mutual interference wave suppression network to test the training effect of the network; the test results are shown in fig. 7.
Fig. 7 (a) is a detection image containing mutual interference waves of two detection targets in a uniform medium, both made of PVC, and fig. 7 (b) is the image of fig. 7 (a) after mutual interference wave suppression by the mutual interference wave suppression network. Fig. 7 (c) is a detection image containing mutual interference waves of three detection targets in a uniform medium, all made of metal, and fig. 7 (d) is the image of fig. 7 (c) after mutual interference wave suppression by the network. The network's mutual interference wave suppression effect is good. Fig. 7 (c) is an extreme case in the test data set: even when the echo hyperbola of the rightmost detection target is superimposed on the echo hyperbolas of the two middle targets, the network still exhibits a good suppression effect, and the target echo hyperbolas disturbed by the mutual interference waves are restored to some extent after suppression.
In one embodiment, data not contained in the network data set may be generated with the gprMax simulation software and input to the mutual interference wave suppression network trained on the network data set for testing; the test results are shown in fig. 8. Fig. 8 (a) is a detection image, containing mutual interference waves, of two detection targets in a uniform medium, wherein the two detection targets in fig. 8 (a) are a circular cavity and a PVC cylinder; fig. 8 (b) is the image of fig. 8 (a) after mutual interference wave suppression by the mutual interference wave suppression network. Fig. 8 (c) is a detection image, containing mutual interference waves, of three detection targets in a uniform medium; in fig. 8 (c) the three detection targets are two metal cylinders and a circular cavity, and fig. 8 (d) is the image of fig. 8 (c) after mutual interference wave suppression by the mutual interference wave suppression network.
Fig. 8 (e) is a detection image, containing mutual interference waves, of four detection targets in a uniform medium, wherein all four detection targets in fig. 8 (e) are metal cylinders; fig. 8 (f) is the image of fig. 8 (e) after mutual interference wave suppression by the mutual interference wave suppression network. The test results show that the mutual interference wave suppression network has good applicability: it retains a good mutual interference wave suppression effect for detection scenes with cavity targets, for combinations of targets of different materials, and for the four-target scene, none of which appear in the training data set.
In one embodiment, to measure the noise immunity of the mutual interference wave suppression network, Gaussian random white noise is added to the GPR B-scan images in the test data set at different signal-to-noise ratios (SNR), where the SNR value represents the ratio of the signal intensity of the original image to the intensity of the added Gaussian random white noise. The images with added random noise are input to the mutual interference wave suppression network trained on the network data set for testing; the test results are shown in fig. 9.
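A small sketch of this noise-addition step, assuming the SNR is defined as a power ratio in decibels; the function name add_white_noise is illustrative:

import numpy as np

def add_white_noise(image: np.ndarray, snr_db: float) -> np.ndarray:
    # Add zero-mean Gaussian white noise so that the ratio of signal
    # power to noise power equals the requested SNR in dB.
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), image.shape)
    return image + noise

# e.g. the -10 dB test case of fig. 9 (d):
# noisy_bscan = add_white_noise(bscan, -10.0)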
Fig. 9 (a) is a detection image, containing mutual interference waves, of two detection targets in a uniform medium, and both detection targets in fig. 9 (a) are metal cylinders. Fig. 9 (b) is the image of fig. 9 (a) at an SNR of 0 dB, and fig. 9 (c) is the image of fig. 9 (b) after mutual interference wave suppression by the mutual interference wave suppression network. Fig. 9 (d) is the image of fig. 9 (a) at an SNR of -10 dB, and fig. 9 (e) is the image of fig. 9 (d) after mutual interference wave suppression. Fig. 9 (f) is the image of fig. 9 (a) at an SNR of -20 dB, and fig. 9 (g) is the image of fig. 9 (f) after mutual interference wave suppression. The mutual interference wave suppression network thus has good noise immunity: although no noise was added to the training data set, it still has a good mutual interference wave suppression effect on noisy images.
In summary, the deep learning-based ground penetrating radar mutual interference wave suppression method can suppress the mutual interference waves among multiple targets in a GPR B-scan image, and the mutual interference wave suppression network is robust enough to cope with mutual interference waves of different forms in different scenes. Even when the data types in the training data set are limited, the mutual interference wave suppression network can still suppress well the mutual interference waves in multi-target scenes that are not in the training data set, and it also has a good mutual interference wave suppression effect on noisy GPR B-scan images.
The invention also discloses a deep learning-based ground penetrating radar mutual interference wave suppression system, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the deep learning-based ground penetrating radar mutual interference wave suppression method shown in fig. 1.
The implementation principle of this embodiment is as follows:
a simulated detection scene is constructed and simulated to generate the network data set, which is divided into a training data set and a validation data set. An initial mutual interference wave suppression network is constructed at the same time and trained with the training data set to obtain a basic mutual interference wave suppression network. The basic mutual interference wave suppression network is verified with the validation data set, and its parameters are adjusted accordingly to obtain the final mutual interference wave suppression network. A ground penetrating radar image is input into the mutual interference wave suppression network, and after the multi-target mutual interference waves are suppressed by the network, the ground penetrating radar image after mutual interference wave suppression is obtained, which improves the detection capability of GPR for underground scenes containing multiple targets.
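Under the same stand-in assumptions as the earlier sketches (hypothetical weight file, one-layer stand-in network), the end-to-end use of the trained network reduces to a single forward pass:

import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in for the suppression network
model.load_state_dict(torch.load("basic_suppression_net.pth"))  # hypothetical file name
model.eval()

bscan = torch.randn(1, 1, 256, 256)  # GPR B-scan containing mutual interference waves
with torch.no_grad():
    suppressed = model(bscan)        # B-scan after mutual interference wave suppression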
Those of ordinary skill in the art will appreciate that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the present application is limited to such examples. Under the idea of the present application, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of one or more embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
One or more embodiments herein are intended to embrace all such alternatives, modifications and variations as fall within their broad scope. Any omissions, modifications, equivalents, improvements and the like made within the spirit and principles of one or more embodiments of the present application are therefore intended to be included within the scope of protection of the present application.

Claims (9)

1. A deep learning-based ground penetrating radar mutual interference wave suppression method, characterized by comprising the following steps:
constructing a simulated detection scene and an initial mutual interference wave suppression network, wherein the initial mutual interference wave suppression network comprises a trunk feature extraction module, a residual dense module, a self-attention mechanism control module, a feature layer up-sampling module and a convolution prediction network, and the residual dense module and the self-attention mechanism control module form the skip connection part of the initial mutual interference wave suppression network;
generating a network data set based on the simulated detection scene and through simulated detection;
dividing the network data set into a training data set and a verification data set;
training the initial mutual interference wave suppression network by using the training data set to obtain a trained basic mutual interference wave suppression network;
verifying the basic mutual interference wave suppression network through the verification data set, and determining optimal model parameters of the basic mutual interference wave suppression network;
adjusting parameters of the basic mutual interference wave suppression network according to the optimal model parameters to obtain a mutual interference wave suppression network;
and obtaining a ground penetrating radar image containing mutual interference waves, and inputting the ground penetrating radar image into the mutual interference wave suppression network to obtain the ground penetrating radar image after mutual interference wave suppression.
2. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein generating the network data set based on the simulated detection scene and through simulated detection comprises the following steps:
based on the empty background of the simulated detection scene, acquiring an empty background detection image through simulated detection;
randomly setting two detection targets in the simulated detection scene, and obtaining a first detection image set through the simulated detection;
generating a double-target detection data set by combining the first detection image set and the empty background detection image, wherein the double-target detection data set comprises double-target mutual interference wave detection images;
randomly setting three detection targets in the simulated detection scene, and obtaining a second detection image set through simulated detection;
generating a three-target detection data set by combining the second detection image set and the empty background detection image, wherein the three-target detection data set comprises three-target mutual interference wave detection images;
and summarizing the dual-target detection data set and the three-target detection data set to obtain a network data set.
3. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein training the initial mutual interference wave suppression network by using the training data set to obtain the trained basic mutual interference wave suppression network comprises the following steps:
initializing the initial mutual interference wave suppression network, and configuring random initial network parameter weights for the initial mutual interference wave suppression network;
reducing the loss errors of the trunk feature extraction module, the residual dense module, the self-attention mechanism control module, the feature layer up-sampling module and the convolution prediction network by using a preset optimizer;
and inputting the training data set into the initial mutual interference wave suppression network for periodic batch training to obtain the trained basic mutual interference wave suppression network.
4. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 3, wherein the loss function calculation formula of the basic mutual interference wave suppression network is:

$$L_{loss} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(Y(i,j)-\hat{Y}(i,j)\big)^{2}$$

where $L_{loss}$ is the loss function, $M$ and $N$ are the numbers of rows and columns of the two-dimensional image, $Y$ is a detection image of the training data set that does not contain mutual interference waves, $\hat{Y}$ is the output result of the convolution prediction network, and $Y(i,j)$ and $\hat{Y}(i,j)$ are the pixel values of the corresponding pixel points in the two images.
5. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein the trunk feature extraction module comprises five feature extraction layers, each feature extraction layer comprising two consecutive convolution layers; the number of convolution kernels in the first feature extraction layer of the trunk feature extraction module is 64, that in the second feature extraction layer is 128, that in the third is 256, that in the fourth is 512 and that in the fifth is 1024, and the output result of each of the first four feature extraction layers of the trunk feature extraction module serves as the input of the next feature extraction layer.
6. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein the residual dense module comprises four residual dense blocks, each residual dense block comprises three dense connection layers and one local feature fusion layer, and the calculation formula of the residual dense block is:

$$F_{d}=H_{d}\big([F_{in},F_{1},\ldots,F_{d-1}]\big),\qquad F_{LFF}=C_{1\times 1}\big([F_{in},F_{1},F_{2},F_{3}]\big),\qquad F_{out}=F_{in}+F_{LFF}$$

where $F_{in}$ is the input feature of the residual dense block, $F_{d}$ is the output result of the $d$-th dense connection layer, $H_{d}$ represents a nonlinear transformation, $F_{LFF}$ is the output result of the local feature fusion layer, $C_{1\times 1}$ represents a $1\times 1$ convolution operation, and $F_{out}$ is the final output of the residual dense block, obtained by adding $F_{in}$ and $F_{LFF}$ on the channel.
7. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 6, wherein the self-attention mechanism control module comprises a plurality of self-attention mechanism control blocks, and the calculation formula of the self-attention mechanism control block is:

$$\alpha_{i}=\sigma_{2}\Big(\psi\big(\sigma_{1}(W_{x}x_{i}+W_{g}g_{i}+b_{g})\big)+b_{\psi}\Big),\qquad \hat{x}_{i}=\alpha_{i}\cdot x_{i}$$

where $x_{i}$ is the output feature layer of the $i$-th residual dense block, $g_{i}$ is the result obtained after the feature layer up-sampling module up-samples the feature layer output by the trunk feature extraction module, $W_{x}$, $W_{g}$ and $\psi$ are all convolution operations, with $W_{x}$ and $W_{g}$ using a number of convolution kernels equal to one half of the number of input feature channels and $\psi$ using 1 convolution kernel, $b_{g}$ and $b_{\psi}$ are the bias terms of the corresponding convolutions, $\sigma_{1}$ is the ReLU function, $\sigma_{2}$ is the Sigmoid function, the convolutions in the calculation of $\alpha_{i}$ use a kernel size of $1\times 1$, a stride of 1 and a padding of 0, $\hat{x}_{i}$ is the final output result of the self-attention mechanism control block, and $\alpha_{i}$ is the attention coefficient.
8. The deep learning-based ground penetrating radar mutual interference wave suppression method according to claim 1, wherein the feature layer up-sampling module comprises four feature sampling layers, each feature sampling layer comprising an up-sampling layer, a feature connection layer and three convolution layers.
9. A deep learning-based ground penetrating radar mutual interference wave suppression system, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 8 when executing the computer program.
CN202310549759.4A 2023-05-16 2023-05-16 Ground penetrating radar mutual interference wave suppression method and system based on deep learning Active CN116256701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310549759.4A CN116256701B (en) 2023-05-16 2023-05-16 Ground penetrating radar mutual interference wave suppression method and system based on deep learning


Publications (2)

Publication Number Publication Date
CN116256701A true CN116256701A (en) 2023-06-13
CN116256701B CN116256701B (en) 2023-08-01


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830331A (en) * 2018-06-22 2018-11-16 西安交通大学 A kind of Ground Penetrating Radar object detection method based on full convolutional network
KR102309343B1 (en) * 2020-04-01 2021-10-06 세종대학교산학협력단 Frequency-wavenumber analysis method and apparatus through deep learning-based super resolution ground penetrating radar image generation
WO2022063727A1 (en) * 2020-09-25 2022-03-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device and method for creating training data for two-dimensional scans of a ground-penetrating radar system
CN114331890A (en) * 2021-12-27 2022-04-12 中南大学 Ground penetrating radar B-scan image feature enhancement method and system based on deep learning
CN114758230A (en) * 2022-04-06 2022-07-15 桂林电子科技大学 Underground target body classification and identification method based on attention mechanism
CN114966600A (en) * 2022-07-29 2022-08-30 中南大学 Clutter suppression method and system for B-scan image of ground penetrating radar
CN115291210A (en) * 2022-07-26 2022-11-04 哈尔滨工业大学 Three-dimensional image pipeline identification method of 3D-CNN ground penetrating radar combined with attention mechanism
CN115345790A (en) * 2022-08-02 2022-11-15 上海应用技术大学 Ground penetrating radar image enhancement method based on window self-attention neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FEIFEI HOU et al.: "Deep Learning-Based Subsurface Target Detection From GPR Scans", IEEE Sensors Journal, vol. 21, no. 6, XP011839460, DOI: 10.1109/JSEN.2021.3050262 *
HOU Feifei et al.: "A survey of target detection algorithms for ground penetrating radar B-scan images", Journal of Electronics & Information Technology, vol. 42, no. 1



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant