WO2023284698A1 - A multi-target constant false alarm rate detection method based on a deep neural network - Google Patents
A multi-target constant false alarm rate detection method based on a deep neural network
- Publication number
- WO2023284698A1 WO2023284698A1 PCT/CN2022/105025 CN2022105025W WO2023284698A1 WO 2023284698 A1 WO2023284698 A1 WO 2023284698A1 CN 2022105025 W CN2022105025 W CN 2022105025W WO 2023284698 A1 WO2023284698 A1 WO 2023284698A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- neural network
- false alarm
- deep neural
- radar
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/50—Systems of measurement based on relative movement of target
- G01S13/52—Discriminating between fixed and moving objects or between objects moving at different speeds
- G01S13/536—Discriminating between fixed and moving objects or between objects moving at different speeds using transmission of continuous unmodulated waves, amplitude-, frequency-, or phase-modulated waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/66—Radar-tracking systems; Analogous systems
- G01S13/72—Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
- G01S13/723—Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar by using numerical data
- G01S13/726—Multiple target tracking
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/35—Details of non-pulse systems
- G01S7/352—Receivers
- G01S7/354—Extracting wanted echo-signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/50—Systems of measurement based on relative movement of target
- G01S13/52—Discriminating between fixed and moving objects or between objects moving at different speeds
- G01S13/522—Discriminating between fixed and moving objects or between objects moving at different speeds using transmissions of interrupted pulse modulated waves
- G01S13/524—Discriminating between fixed and moving objects or between objects moving at different speeds using transmissions of interrupted pulse modulated waves based upon the phase or frequency shift resulting from movement of objects, with reference to the transmitted signals, e.g. coherent MTi
- G01S13/5246—Discriminating between fixed and moving objects or between objects moving at different speeds using transmissions of interrupted pulse modulated waves based upon the phase or frequency shift resulting from movement of objects, with reference to the transmitted signals, e.g. coherent MTi post processors for coherent MTI discriminators, e.g. residue cancellers, CFAR after Doppler filters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
- G01S2013/9314—Parking operations
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Definitions
- The invention belongs to the technical field of frequency modulated continuous wave (FMCW) radar multi-target constant false alarm rate (CFAR) detection, and specifically relates to a multi-target CFAR detection method based on a deep neural network.
- Multi-object detection is very challenging, especially in scenes with densely distributed objects.
- In traditional CFAR detection methods, the detection threshold is determined from a pre-estimated background level.
- Interfering targets, however, inevitably bias the background level estimate and degrade detection performance.
- The present invention proposes a multi-target constant false alarm rate detection method based on a deep neural network, which recasts target detection as a radar peak sequence classification problem solved by a deep neural network detector, improving detection performance without relying on background level estimation.
- A deep neural network detector trained on a simulation data set built with data augmentation techniques generalizes well and can be deployed in real scenes. During false alarm regulation, an approximate maximum likelihood estimator based on a Taylor series expansion yields better computational performance.
- A multi-target constant false alarm rate detection method based on a deep neural network comprises the following steps:
- S1: Using data augmentation, establish a simulation data set of radar intermediate frequency (IF) signals with dynamic signal-to-clutter ratio and dynamic target count, where n is the number of training samples; K_n is the distance sequence length of the n-th sample; L_K is the ground-truth label, with targets marked 1 and clutter marked 0; R_K = [r_1, r_2, ..., r_K] is the peak distance sequence; and I_K = [i_1, i_2, ..., i_K] is the peak intensity sequence corresponding to R_K. The peak sequence P_K = (R_K, I_K) is obtained by first Fourier-transforming the radar IF signal and taking the modulus to obtain the radar frequency intensity measurement X, then picking the peaks of X;
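For illustration, the peak-sequence extraction described above (FFT, modulus, peak picking) can be sketched in Python. The bin-to-distance scale factor and the simple local-maximum peak finder are assumptions for the sketch; the true mapping depends on the FMCW chirp parameters, which the text does not give.

```python
import numpy as np

def local_peaks(x):
    """Indices where x is a strict local maximum (a simple stand-in
    for a library peak finder)."""
    return np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1

def peak_sequence(if_signal, n_fft=None, bin_to_range=1.0):
    """Sketch of the step above: FFT the IF signal, take the modulus to
    obtain the frequency intensity measurement X, then pick the peaks
    of X to form (R_K, I_K).  `bin_to_range` is a hypothetical scale
    factor standing in for the chirp-dependent bin-to-distance mapping."""
    n_fft = n_fft or len(if_signal)
    X = np.abs(np.fft.rfft(if_signal, n=n_fft))   # frequency intensity measurement X
    idx = local_peaks(X)                          # peak locations (FFT bins)
    return idx * bin_to_range, X[idx]             # R_K, I_K
```

For a two-tone test IF signal, the two strongest entries of I_K fall at the tone bins.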
- S2: Construct a deep neural network detector capable of classifying the peak sequence P_K, and train it on the simulation data set to obtain a trained deep neural network detector;
- S5: Design an approximate maximum likelihood estimator based on a Taylor series expansion and determine the approximate maximum likelihood estimate of the scale parameter σ; compute the false alarm adjustment threshold T_fa from the specified false alarm rate P_FA and this estimate; remove targets below T_fa from the detection result Y, and output the constant false alarm rate detection result.
- The enhancement of the simulation data set in S1 is performed as follows:
- (1) Give the radar IF signal a dynamic signal-to-clutter ratio: multiply the echo signal of the k-th target by its corresponding echo power, where P_c is the clutter power and SCR_k is the dynamic signal-to-clutter ratio set for the k-th target;
- (2) Give the radar IF signal a dynamic target count by generating the distance sequence with additive random sampling, where r_k is the distance of the k-th target; τ_k is the sampling distance interval; D_W is the sampling distance window, consistent with the radar ranging range; μ is the scaling factor; s is the distance change factor, obeying the Gaussian distribution N(0, s²); and m is the target count, set as a random number so that training samples have a dynamic number of targets.
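The additive random sampling in step (2) can be sketched as follows. Since the source gives the expression for τ_k only as an unreproduced equation, the interval construction below (a base spacing μ·D_W/m perturbed by N(0, s²) jitter, floored at the range resolution) is one plausible reading, not the patent's exact formula.

```python
import numpy as np

def random_distances(d_window, mu, s, m_max, d_res, rng):
    """Hedged sketch of additive random sampling: draw a random target
    count m, then build target distances as cumulative sums of perturbed
    intervals, keeping adjacent points at least d_res apart and inside
    the sampling distance window d_window."""
    m = int(rng.integers(1, m_max + 1))      # dynamic target count
    base = mu * d_window / m                 # assumed base interval
    taus = base + rng.normal(0.0, s, m)      # additive Gaussian perturbation
    taus = np.maximum(taus, d_res)           # spacing >= radar range resolution
    r = np.cumsum(taus)                      # candidate distances r_1..r_m
    return r[r <= d_window]                  # stay inside the ranging window
```

Flooring the intervals at `d_res` reflects the text's requirement that adjacent sampling points never fall closer than the radar distance resolution.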
- the deep neural network detector adopts a fully connected neural network.
- ⁇ is the truncation depth.
- the false alarm adjustment threshold T_fa is:
- The multi-target constant false alarm rate detection method based on the deep neural network of the present invention focuses on FMCW radar multi-target detection. By using a new detection algorithm, it achieves target detection without relying on a detection threshold derived from a pre-estimated environmental background level, and comprehensively and effectively overcomes the multi-target masking effect.
- Figure 1 is a schematic flow chart of a multi-target constant false alarm rate detection method based on a deep neural network.
- Fig. 2 is a schematic diagram of a multi-target scene in a preferred embodiment of the present invention.
- FIG. 3 compares the performance of the inventive method with existing CFAR detection methods, where (a) is the original radar image; (b)-(h) are the imaging results of the VI-CFAR, ICVI-CFAR, OS-CFAR, ICOS-CFAR, OR-CFAR, ICOR-CFAR, and SACM-CFAR algorithms, respectively; and (i) is the imaging result of the method of the present invention.
- The multi-target constant false alarm rate detection method based on the deep neural network trains a deep-neural-network-based pre-detector on a simulation data set built with data augmentation techniques, and classifies radar signal peaks to distinguish targets from clutter.
- This method uses a deep neural network detector to complete target detection in a multi-target scene, which can effectively solve the problem of detection performance degradation caused by multi-target occlusion effects.
- the false alarm adjustment threshold is determined by the approximate maximum likelihood estimator based on Taylor series, so that the detection results can reach a constant false alarm rate.
- The multi-target constant false alarm rate detection method based on the deep neural network of the present invention specifically comprises the following steps:
- S1: Using data augmentation, establish a simulation data set of radar intermediate frequency (IF) signals with dynamic signal-to-clutter ratio and dynamic target count, where n is the number of training samples; K_n is the distance sequence length of the n-th sample; L_K is the ground-truth label, with targets marked 1 and clutter marked 0; R_K = [r_1, r_2, ..., r_K] is the peak distance sequence; and I_K = [i_1, i_2, ..., i_K] is the peak intensity sequence corresponding to R_K. The peak sequence P_K = (R_K, I_K) is obtained by first Fourier-transforming the radar IF signal and taking the modulus to obtain the radar frequency intensity measurement X, then picking the peaks of X.
- The enhancement of the simulation data set in S1 is carried out as follows:
- (1) The data set is enhanced by setting a dynamic signal-to-clutter ratio (SCR) for each sample; the dynamic SCR corresponding to the echo signal is defined in terms of the clutter power P_c, and setting SCR_k as a random number gives each sample a dynamic SCR;
- (2) The data set is enhanced by setting a dynamic target count for each sample, generating the distance sequence by additive random sampling, where r_k is the distance of the k-th target and τ_k is the sampling distance interval, expressed in terms of the sampling distance window D_W (consistent with the radar ranging range), the scaling factor μ, and the distance change factor s, which obeys the Gaussian distribution N(0, s²). By properly choosing μ and s, the additive random sampling method avoids spacings between adjacent sampling points smaller than the radar range resolution. m is the target count; setting it as a random number gives each sample a dynamic number of targets.
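The dynamic-SCR augmentation in step (1) can be sketched as follows. The uniform 0–20 dB draw for SCR_k is a hypothetical choice for illustration; the text only says the SCR is set as a random number.

```python
import numpy as np

def apply_dynamic_scr(echo, clutter_power, rng):
    """Scale an echo so its power equals P_c * 10^(SCR_k / 10), with a
    per-sample random SCR_k, as in the dynamic-SCR augmentation above.
    The 0-20 dB range is an assumed, illustrative choice."""
    scr_db = rng.uniform(0.0, 20.0)                     # dynamic SCR_k in dB
    target_power = clutter_power * 10.0 ** (scr_db / 10.0)
    power = np.mean(np.abs(echo) ** 2)                  # current echo power
    return echo * np.sqrt(target_power / power), scr_db
```

After scaling, the echo's mean power matches the power implied by the drawn SCR_k relative to the clutter power.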
- S2 Construct a deep neural network detector capable of classifying the peak sequence P K , and use the simulation data set to train it to obtain a trained deep neural network detector.
- A neural network architecture can be expressed as a parametric compound nonlinear function, where P_K is the peak sequence [R_K, I_K] and n_l(·; w_l) denotes the l-th network layer; this function maps the input data P_K to the output Z_K.
- The hidden layers and the output layer use fully connected layers, which can be expressed as:
- the weight matrix Φ_l has dimension (M, N); the bias vector b_l has length M; h(·) is the activation function.
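A minimal numpy sketch of such a fully connected network is given below. The layer widths, weights, and choice of ReLU/sigmoid activations are illustrative assumptions; the text specifies only that the hidden and output layers are fully connected.

```python
import numpy as np

def dense(x, phi, b, h):
    """One fully connected layer z = h(phi @ x + b): phi has shape
    (M, N), b has length M, h is the activation function, as above."""
    return h(phi @ x + b)

def relu(v):
    return np.maximum(v, 0.0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def detector_forward(p_k, layers):
    """Compose the layers n_l(.; w_l) to map the peak sequence P_K to
    the output Z_K, per the compound-function description above.
    `layers` is a list of (phi, b, h) tuples with assumed sizes."""
    z = p_k
    for phi, b, h in layers:
        z = dense(z, phi, b, h)
    return z
```

With a sigmoid output layer, each entry of Z_K lies in (0, 1) and can be read as a per-peak target probability.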
- S5: Design an approximate maximum likelihood estimator based on a Taylor series expansion and determine the approximate maximum likelihood estimate of the scale parameter σ; compute the false alarm adjustment threshold T_fa from the specified false alarm rate P_FA and this estimate; remove targets below T_fa from the detection result Y, and output the constant false alarm rate detection result.
- ⁇ is the truncation depth
- ⁇ is the scale parameter to be estimated
- g'(x) is the first derivative of the function g(x).
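The pieces of the estimator that the text does give can be written out directly. The central-difference derivative below is a numerical stand-in for the closed-form g′ presumably used in the patent, and the linearization is the first-order Taylor expansion g(ξ) ≈ g(a) + g′(a)(ξ − a) described above.

```python
import math

def g(x):
    """g(x) = x*exp(-x^2/2) / [1 - exp(-x^2/2)], as defined in the text."""
    e = math.exp(-x * x / 2.0)
    return x * e / (1.0 - e)

def g_prime(x, eps=1e-6):
    """First derivative of g by central difference (numerical stand-in
    for the closed-form derivative)."""
    return (g(x + eps) - g(x - eps)) / (2.0 * eps)

def g_linear(xi, a):
    """First-order Taylor expansion of g about the point a,
    g(xi) ~= g(a) + g'(a)(xi - a), used to linearize the
    likelihood equation."""
    return g(a) + g_prime(a) * (xi - a)

def b_star(N, alpha, a):
    """b* = N * alpha * [g'(a)*a - g(a)], from the text."""
    return N * alpha * (g_prime(a) * a - g(a))
```

Near the expansion point the linearized g tracks the exact g closely, which is what makes the approximate maximum likelihood equation solvable in closed form.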
- the false alarm rate P_FA is:
- In the real road scene, the vehicles are densely distributed between an open road and dense trees, which means target detection in this scene must operate in a multi-target environment as well as a clutter edge environment.
- a high-resolution millimeter-wave radar with a working frequency band of 76-81 GHz is used as a target detection sensor, and the radar system applies the multi-target constant false alarm rate detection method based on a deep neural network of the present invention.
- A data-augmented simulation data set is established, containing 50,000 frames in total, split evenly into 10 parts, of which 8 serve as the training set and 2 as the validation set. A fully connected neural network is then used as the deep neural network detector and trained on the simulation data set, with backpropagation performed by the Adam optimizer, a learning rate of 0.01, and a batch size of 150. The trained detector is deployed for detection and outputs the detection result Y.
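The training setup can be illustrated with a scaled-down sketch. The toy features and labels, the single logistic unit, and the plain SGD step are stand-ins for the patent's fully connected network and Adam optimizer; only the 8:2 split, the learning rate 0.01, and the batch size 150 follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the simulation set (the patent uses 50,000 frames).
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # 1 = target, 0 = clutter

n_train = int(0.8 * len(X))                   # 8 of 10 parts for training
X_tr, y_tr = X[:n_train], y[:n_train]
X_va, y_va = X[n_train:], y[n_train:]         # 2 of 10 parts for validation

w, b = np.zeros(2), 0.0
lr, batch = 0.01, 150                         # hyper-parameters from the text
for epoch in range(200):
    order = rng.permutation(n_train)
    for start in range(0, n_train, batch):
        sl = order[start:start + batch]
        p = 1.0 / (1.0 + np.exp(-(X_tr[sl] @ w + b)))    # predicted P(target)
        w -= lr * X_tr[sl].T @ (p - y_tr[sl]) / len(sl)  # SGD step (Adam in the patent)
        b -= lr * np.mean(p - y_tr[sl])

val_acc = np.mean(((X_va @ w + b) > 0) == (y_va > 0.5))
```

On this separable toy problem the held-out accuracy ends up high, which is only meant to show the shape of the train/validate loop, not the patent's reported performance.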
- FIG. 3 compares the detection results of the various detection methods in the test scene of FIG. 2. The rectangular boxes in FIG. 3(a) mark the targets to be detected within the radar detection range, and panels (b)-(i) show the detection results of each algorithm, with missed detections marked by circles.
- The results show that the method of the present invention outperforms the existing CFAR detection methods and detects all targets completely. This shows that the deep-neural-network-based CFAR method of the present invention effectively overcomes the multi-target masking effect and performs better in dense target scenes.
Abstract
A multi-target constant false alarm rate (CFAR) detection method based on a deep neural network. The method trains a deep-neural-network-based pre-detector on a simulation data set built with data augmentation techniques, classifying radar signal peaks to distinguish targets from clutter. The targets found by the pre-detector are removed from the original background samples to form reduced samples. From these reduced samples, an approximate maximum likelihood estimator based on a Taylor series expansion estimates the background level and yields a false alarm adjustment threshold; pre-detection results below this threshold are removed, and the final detection result is output. The method does not rely on a pre-estimated background level to detect targets, and maintains superior detection performance even in scenes with very dense targets.
Description
The invention belongs to the technical field of frequency modulated continuous wave (FMCW) radar multi-target constant false alarm rate (CFAR) detection, and specifically relates to a multi-target CFAR detection method based on a deep neural network.
Multi-target detection is highly challenging, especially in scenes with densely distributed targets. In traditional CFAR detection methods, the detection threshold is determined from a pre-estimated background level. Interfering targets, however, inevitably bias the background level estimate and degrade detection performance.
Summary of the Invention
To address the degraded detection performance of traditional CFAR methods in multi-target scenes, the present invention proposes a multi-target CFAR detection method based on a deep neural network, which recasts target detection as a radar peak sequence classification problem solved by a deep neural network detector, improving detection performance without relying on background level estimation. A deep neural network detector trained on a simulation data set built with data augmentation techniques generalizes well and can be deployed in real scenes. During false alarm regulation, an approximate maximum likelihood estimator based on a Taylor series expansion yields better computational performance.
The object of the invention is achieved by the following technical solution:
A multi-target constant false alarm rate detection method based on a deep neural network comprises the following steps:
S1: Using data augmentation, establish a simulation data set of radar intermediate frequency (IF) signals with dynamic signal-to-clutter ratio and dynamic target count, where n is the number of training samples; K_n is the distance sequence length of the n-th sample; L_K is the ground-truth label, with targets marked 1 and clutter marked 0; R_K = [r_1, r_2, ..., r_K] is the peak distance sequence; and I_K = [i_1, i_2, ..., i_K] is the peak intensity sequence corresponding to R_K. The peak sequence P_K = (R_K, I_K) is obtained by first Fourier-transforming the radar IF signal and taking the modulus to obtain the radar frequency intensity measurement X, then picking the peaks of X;
S2: Construct a deep neural network detector capable of classifying the peak sequence P_K, and train it on the simulation data set to obtain a trained deep neural network detector;
S3: Pick the peaks of the radar frequency intensity measurement X to be detected, feed the resulting peak sequence P_K into the trained deep neural network detector, and output the target detection result Y;
S5: Design an approximate maximum likelihood estimator based on a Taylor series expansion and determine the approximate maximum likelihood estimate of the scale parameter σ; compute the false alarm adjustment threshold T_fa from the specified false alarm rate P_FA and this estimate; remove targets below T_fa from the detection result Y, and output the constant false alarm rate detection result.
Further, the enhancement of the simulation data set in S1 is performed as follows:
(1) Give the radar IF signal a dynamic signal-to-clutter ratio: multiply the echo signal of the k-th target in the radar IF signal corresponding to the simulation data set by its corresponding echo power, where P_c is the clutter power and SCR_k is the dynamic signal-to-clutter ratio set for the k-th target;
(2) Give the radar IF signal a dynamic target count by generating the distance sequence with additive random sampling, where r_k is the distance of the k-th target and τ_k is the sampling distance interval, expressed as:
where D_W is the sampling distance window, consistent with the radar ranging range; μ is the scaling factor; s is the distance change factor, obeying the Gaussian distribution N(0, s²); and m is the target count, set as a random number so that training samples have a dynamic number of targets.
Further, the deep neural network detector adopts a fully connected neural network.
Further, the approximate maximum likelihood estimator based on a Taylor series expansion in S5 is:
where
b* = Nα[g′(a)a − g(a)]
g(a) = a·exp(−a²/2) / [1 − exp(−a²/2)]
a = α/2
and α is the truncation depth.
Further, the false alarm adjustment threshold T_fa is:
The beneficial effects of the invention are as follows:
The multi-target constant false alarm rate detection method based on the deep neural network of the present invention focuses on FMCW radar multi-target detection. By using a new detection algorithm, it achieves target detection without relying on a detection threshold derived from a pre-estimated environmental background level, and comprehensively and effectively overcomes the multi-target masking effect.
FIG. 1 is a schematic flow chart of the multi-target constant false alarm rate detection method based on a deep neural network.
FIG. 2 is a schematic diagram of a multi-target scene in a preferred embodiment of the present invention.
FIG. 3 compares the performance of the inventive method with existing CFAR detection methods, where (a) is the original radar image; (b)-(h) are the imaging results of the VI-CFAR, ICVI-CFAR, OS-CFAR, ICOS-CFAR, OR-CFAR, ICOR-CFAR, and SACM-CFAR algorithms, respectively; and (i) is the imaging result of the method of the present invention.
The present invention is described in detail below with reference to the drawings and a preferred embodiment, to make its objects and effects clearer. It should be understood that the specific embodiment described here only explains the invention and does not limit it.
The multi-target CFAR detection method based on a deep neural network provided by the invention trains a deep-neural-network-based pre-detector on a simulation data set built with data augmentation techniques, classifying radar signal peaks to distinguish targets from clutter. The method uses the deep neural network detector to complete target detection in multi-target scenes, effectively resolving the detection performance degradation caused by the multi-target masking effect. An approximate maximum likelihood estimator based on a Taylor series expansion then determines the false alarm adjustment threshold, so that the detection results achieve a constant false alarm rate.
The multi-target constant false alarm rate detection method based on the deep neural network of the present invention specifically comprises the following steps:
S1: Using data augmentation, establish a simulation data set of radar intermediate frequency (IF) signals with dynamic signal-to-clutter ratio and dynamic target count, where n is the number of training samples; K_n is the distance sequence length of the n-th sample; L_K is the ground-truth label, with targets marked 1 and clutter marked 0; R_K = [r_1, r_2, ..., r_K] is the peak distance sequence; and I_K = [i_1, i_2, ..., i_K] is the peak intensity sequence corresponding to R_K. The peak sequence P_K = (R_K, I_K) is obtained by first Fourier-transforming the radar IF signal and taking the modulus to obtain the radar frequency intensity measurement X, then picking the peaks of X.
The enhancement of the simulation data set in S1 is carried out as follows:
(1) The data set is enhanced by setting a dynamic signal-to-clutter ratio (SCR) for each sample; the dynamic SCR corresponding to the echo signal is defined as:
(2) The data set is enhanced by setting a dynamic target count for each sample, generating the distance sequence by additive random sampling, where r_k is the distance of the k-th target and τ_k is the sampling distance interval, which can be expressed as:
where D_W is the sampling distance window, consistent with the radar ranging range; μ is the scaling factor; and s is the distance change factor, obeying the Gaussian distribution N(0, s²). By properly choosing μ and s, the additive random sampling method avoids spacings between adjacent sampling points smaller than the radar range resolution. m is the target count; setting it as a random number gives each sample a dynamic number of targets.
S2: Construct a deep neural network detector capable of classifying the peak sequence P_K, and train it on the simulation data set to obtain a trained deep neural network detector.
S3: Pick the peaks of the radar frequency intensity measurement X to be detected, feed the resulting peak sequence P_K into the trained deep neural network detector, and output the target detection result Y.
Here, P_K is the peak sequence [R_K, I_K], and n_l(·; w_l) denotes each network layer; the function maps the input data P_K to the output Z_K. The hidden layers and the output layer use fully connected layers, which can be expressed as:
where the weight matrix Φ_l has dimension (M, N), the bias vector b_l has length M, and h(·) is the activation function.
When the deep neural network detector is deployed for detection, its output Z_K can be expressed as a probability mass function:
S5: Design an approximate maximum likelihood estimator based on a Taylor series expansion and determine the approximate maximum likelihood estimate of the scale parameter σ; compute the false alarm adjustment threshold T_fa from the specified false alarm rate P_FA and this estimate; remove targets below T_fa from the detection result Y, and output the constant false alarm rate detection result.
The approximate maximum likelihood estimate based on the Taylor series expansion and the false alarm adjustment threshold T_fa are computed as follows:
where α is the truncation depth and σ is the scale parameter to be estimated.
Let ξ = α/σ and g(x) = x·exp(−x²/2)/[1 − exp(−x²/2)]; the above formula can then be rewritten as:
where g(ξ) can be Taylor-expanded about a point a, discarding higher-order terms, to give the approximation:
g(ξ) ≈ g(a) + g′(a)(ξ − a)    (11)
where g′(x) is the first derivative of the function g(x).
Using this expression for g(ξ), the derivative equation of the log-likelihood function can be further rewritten as:
The solution of the above equation is equivalent to:
where
b* = Nα[g′(a)a − g(a)]
The false alarm rate P_FA is:
As shown in FIG. 2, in the real road scene the vehicles are densely distributed between an open road and dense trees, which means target detection in this scene must operate in a multi-target environment as well as a clutter edge environment. A high-resolution millimeter-wave radar operating in the 76-81 GHz band serves as the target detection sensor, and the radar system applies the multi-target constant false alarm rate detection method based on a deep neural network of the present invention.
In this embodiment, a data-augmented simulation data set containing 50,000 frames in total is established and split evenly into 10 parts, of which 8 are used as the training set and 2 as the validation set. A fully connected neural network is then adopted as the deep neural network detector and trained on the simulation data set, with backpropagation performed by the Adam optimizer, a learning rate of 0.01, and a batch size of 150. The trained detector is deployed for detection and outputs the detection result Y. Finally, the detection result Y is removed from the original samples to obtain reduced samples, the scale parameter σ is estimated by the approximate maximum likelihood estimator, the false alarm adjustment threshold T_fa is determined, targets below T_fa are removed, and the constant false alarm rate detection result is output.
FIG. 3 compares the detection results of the various detection methods in the test scene of FIG. 2. The rectangular boxes in FIG. 3(a) mark the targets to be detected within the radar detection range, and panels (b)-(i) show the detection results of each algorithm, with missed detections marked by circles. The results show that the method of the present invention outperforms the existing CFAR detection methods and detects all targets completely. This demonstrates that the deep-neural-network-based CFAR method of the present invention effectively overcomes the multi-target masking effect and performs well in dense target scenes.
Those of ordinary skill in the art will understand that the above is only a preferred embodiment of the invention and is not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiment, those skilled in the art may still modify the technical solutions described in the foregoing embodiment, or substitute equivalents for some of their technical features. Any modification, equivalent substitution, or the like made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (5)
- A multi-target constant false alarm rate detection method based on a deep neural network, characterized in that the method comprises the following steps: S1: Using data augmentation, establish a simulation data set of radar intermediate frequency (IF) signals with dynamic signal-to-clutter ratio and dynamic target count, where n is the number of training samples; K_n is the distance sequence length of the n-th sample; L_K is the ground-truth label, with targets marked 1 and clutter marked 0; R_K = [r_1, r_2, ..., r_K] is the peak distance sequence; and I_K = [i_1, i_2, ..., i_K] is the peak intensity sequence corresponding to R_K; the peak sequence P_K = (R_K, I_K) is obtained by first Fourier-transforming the radar IF signal and taking the modulus to obtain the radar frequency intensity measurement X, then picking the peaks of X; S2: Construct a deep neural network detector capable of classifying the peak sequence P_K, and train it on the simulation data set to obtain a trained deep neural network detector; S3: Pick the peaks of the radar frequency intensity measurement X to be detected, feed the resulting peak sequence P_K into the trained deep neural network detector, and output the target detection result Y; S5: Design an approximate maximum likelihood estimator based on a Taylor series expansion and determine the approximate maximum likelihood estimate of the scale parameter σ; compute the false alarm adjustment threshold T_fa from the specified false alarm rate P_FA and this estimate; remove targets below T_fa from the detection result Y, and output the constant false alarm rate detection result.
- The multi-target constant false alarm rate detection method based on a deep neural network according to claim 1, characterized in that the enhancement of the simulation data set in S1 is performed as follows: (1) give the radar IF signal a dynamic signal-to-clutter ratio: multiply the echo signal of the k-th target in the radar IF signal corresponding to the simulation data set by its corresponding echo power, where P_c is the clutter power and SCR_k is the dynamic signal-to-clutter ratio set for the k-th target; (2) give the radar IF signal a dynamic target count by generating the distance sequence with additive random sampling, where r_k is the distance of the k-th target and τ_k is the sampling distance interval, expressed as: where D_W is the sampling distance window, consistent with the radar ranging range; μ is the scaling factor; s is the distance change factor, obeying the Gaussian distribution N(0, s²); and m is the target count, set as a random number so that training samples have a dynamic number of targets.
- The multi-target constant false alarm rate detection method based on a deep neural network according to claim 1, characterized in that the deep neural network detector adopts a fully connected neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/451,818 US12044799B2 (en) | 2021-07-14 | 2023-08-17 | Deep neural network (DNN)-based multi-target constant false alarm rate (CFAR) detection methods |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110793105.7A CN113534120B (zh) | 2021-07-14 | 2021-07-14 | 一种基于深度神经网络的多目标恒虚警率检测方法 |
CN202110793105.7 | 2021-07-14 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/451,818 Continuation-In-Part US12044799B2 (en) | 2021-07-14 | 2023-08-17 | Deep neural network (DNN)-based multi-target constant false alarm rate (CFAR) detection methods |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023284698A1 true WO2023284698A1 (zh) | 2023-01-19 |
Family
ID=78127812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/105025 WO2023284698A1 (zh) | 2021-07-14 | 2022-07-12 | 一种基于深度神经网络的多目标恒虚警率检测方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113534120B (zh) |
WO (1) | WO2023284698A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115840226A (zh) * | 2023-02-27 | 2023-03-24 | 中国科学院空天信息创新研究院 | 一种方位向多通道ScanSAR快速目标检测方法 |
CN117452390A (zh) * | 2023-12-25 | 2024-01-26 | 厦门大学 | 一种ddma-mimo雷达速度估计方法 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113534120B (zh) * | 2021-07-14 | 2023-06-30 | 浙江大学 | 一种基于深度神经网络的多目标恒虚警率检测方法 |
CN115494472B (zh) * | 2022-11-16 | 2023-03-10 | 中南民族大学 | 一种基于增强雷达波信号的定位方法、毫米波雷达、装置 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104732243A (zh) * | 2015-04-09 | 2015-06-24 | 西安电子科技大学 | 基于cnn的sar目标识别方法 |
CN106228124A (zh) * | 2016-07-17 | 2016-12-14 | 西安电子科技大学 | 基于卷积神经网络的sar图像目标检测方法 |
CN107239740A (zh) * | 2017-05-05 | 2017-10-10 | 电子科技大学 | 一种多源特征融合的sar图像自动目标识别方法 |
CN108921029A (zh) * | 2018-06-04 | 2018-11-30 | 浙江大学 | 一种融合残差卷积神经网络和pca降维的sar自动目标识别方法 |
CN108921030A (zh) * | 2018-06-04 | 2018-11-30 | 浙江大学 | 一种快速学习的sar自动目标识别方法 |
CN109188388A (zh) * | 2018-09-03 | 2019-01-11 | 中国科学院声学研究所 | 一种对抗多目标干扰的恒虚警检测方法 |
CN111562569A (zh) * | 2020-04-21 | 2020-08-21 | 哈尔滨工业大学 | 基于加权群稀疏约束的Weibull背景下多目标恒虚警检测方法 |
CN112163450A (zh) * | 2020-08-24 | 2021-01-01 | 中国海洋大学 | 基于s3d学习算法的高频地波雷达船只目标检测方法 |
CN112684428A (zh) * | 2021-01-15 | 2021-04-20 | 浙江大学 | 一种基于信号代理的多目标恒虚警率检测方法 |
CN113534120A (zh) * | 2021-07-14 | 2021-10-22 | 浙江大学 | 一种基于深度神经网络的多目标恒虚警率检测方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10838057B2 (en) * | 2016-10-14 | 2020-11-17 | Lockheed Martin Corporation | Radar system and method for determining a rotational state of a moving object |
CN110378204B (zh) * | 2019-06-06 | 2021-03-26 | Southeast University | Multi-target classification method based on vehicle-mounted millimeter-wave radar |
KR102060286B1 (ko) * | 2019-10-29 | 2019-12-27 | Code42 Co., Ltd. | Method for determining a radar object detection threshold using image information, and radar object information generation apparatus using the same |
CN111722199B (zh) * | 2020-08-10 | 2023-06-20 | Shanghai Aerospace Electronic Communication Equipment Research Institute | Radar signal detection method based on convolutional neural network |
CN113033083B (zh) * | 2021-03-10 | 2022-06-17 | Zhejiang University | Direction-of-arrival estimation method based on density-peak-clustering radial basis function neural network |
2021
- 2021-07-14 CN CN202110793105.7A patent/CN113534120B/zh active Active
2022
- 2022-07-12 WO PCT/CN2022/105025 patent/WO2023284698A1/zh active Application Filing
Non-Patent Citations (1)
Title |
---|
CAO ZHIHUI; FANG WENWEI; SONG YUYING; HE LAI; SONG CHUNYI; XU ZHIWEI: "DNN-Based Peak Sequence Classification CFAR Detection Algorithm for High-Resolution FMCW Radar", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 60, 24 September 2021 (2021-09-24), USA, pages 1 - 15, XP011899561, ISSN: 0196-2892, DOI: 10.1109/TGRS.2021.3113302 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115840226A (zh) * | 2023-02-27 | 2023-03-24 | Aerospace Information Research Institute, Chinese Academy of Sciences | Fast target detection method for azimuth multi-channel ScanSAR |
CN117452390A (zh) * | 2023-12-25 | 2024-01-26 | Xiamen University | DDMA-MIMO radar velocity estimation method |
CN117452390B (zh) * | 2023-12-25 | 2024-05-03 | Xiamen University | DDMA-MIMO radar velocity estimation method |
Also Published As
Publication number | Publication date |
---|---|
US20240004032A1 (en) | 2024-01-04 |
CN113534120A (zh) | 2021-10-22 |
CN113534120B (zh) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023284698A1 (zh) | Multi-target constant false alarm rate detection method based on deep neural network | |
CN104076355B (zh) | Track-before-detect method for dim small targets in strong clutter environments based on dynamic programming | |
CN104899866B (zh) | Intelligent infrared small target detection method | |
US9188666B2 (en) | System and method for distribution free target detection in a dependent non-Gaussian background | |
Jing et al. | AENN: A generative adversarial neural network for weather radar echo extrapolation | |
CN105425223B (zh) | Detection method for sparse range-spread radar targets in generalized Pareto clutter | |
CN110221266B (zh) | Fast marine radar target detection method based on support vector machine | |
Golbon-Haghighi et al. | Ground clutter detection for weather radar using phase fluctuation index | |
CN110501683B (zh) | Sea-land clutter classification method based on four-dimensional data features | |
Knudde et al. | Indoor tracking of multiple persons with a 77 GHz MIMO FMCW radar | |
CN106569193A (zh) | Small sea-surface target detection method based on forward-backward gain reference particle filtering | |
Golbon-Haghighi et al. | Detection of ground clutter for dual-polarization weather radar using a novel 3D discriminant function | |
CN108133468A (zh) | Constant false alarm rate ship detection method with adaptive parameter enhancement and wake-assisted detection | |
Yin et al. | Radar target and moving clutter separation based on the low-rank matrix optimization | |
Tian et al. | Performance evaluation of deception against synthetic aperture radar based on multifeature fusion | |
Fan et al. | Multifractal correlation analysis of autoregressive spectrum-based feature learning for target detection within sea clutter | |
Sinha et al. | Estimation of Doppler profile using multiparameter cost function method | |
CN108196238B (zh) | Clutter map detection method based on adaptive matched filtering in a Gaussian background | |
Ngo et al. | A sensitivity analysis approach for evaluating a radar simulation for virtual testing of autonomous driving functions | |
Wen et al. | Modeling of correlated complex sea clutter using unsupervised phase retrieval | |
CN112946653A (zh) | Dual-polarization weather radar signal recovery method, ***, and storage medium | |
CN111707999A (zh) | Small floating sea-surface target detection method combining multiple features and ensemble learning | |
Słota | Decomposition techniques for full-waveform airborne laser scanning data | |
CN106778870B (zh) | SAR image ship target detection method based on RPCA | |
Fan et al. | A deceptive jamming template synthesis method for SAR using generative adversarial nets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22841334; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 22841334; Country of ref document: EP; Kind code of ref document: A1 |