CN114818828A - Training method of radar interference perception model and radar interference signal identification method - Google Patents

Training method of radar interference perception model and radar interference signal identification method Download PDF

Info

Publication number
CN114818828A
Authority
CN
China
Prior art keywords
interference
signal
spectrum
training
narrow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210547511.XA
Other languages
Chinese (zh)
Other versions
CN114818828B (en)
Inventor
宫健
王欢
郎彬
丁学科
李欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210547511.XA priority Critical patent/CN114818828B/en
Publication of CN114818828A publication Critical patent/CN114818828A/en
Application granted granted Critical
Publication of CN114818828B publication Critical patent/CN114818828B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application provides a training method of a radar interference perception model and a radar interference signal identification method, wherein the training method of the radar interference perception model comprises the following steps: acquiring training sample data; the training sample data comprises a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a frequency spectrum dispersion interference signal; performing time-frequency analysis processing on the training sample data to generate a time-frequency distribution image; and inputting the time-frequency distribution image into an initial target detection model for training to generate a radar interference perception model. In the embodiment of the application, a target detection idea in the field of computer vision is introduced to establish a target detection model, and only four single interference signals need to be provided during training of the target detection model. And the trained target detection model can identify all interference signal types in the image, so that the radar interference of four single interference signals, six pairwise composite signals and four triple composite signals can be flexibly sensed.

Description

Training method of radar interference perception model and radar interference signal identification method
Technical Field
The application relates to the field of data processing, in particular to a training method of a radar interference perception model and a radar interference signal identification method.
Background
Radar is important equipment with capabilities such as detecting, locating, tracking, and guiding against enemy targets, and its anti-interference capability against various electromagnetic interference threats guarantees that it performs normally in combat under today's complex and changeable electromagnetic environment. Because most existing anti-interference techniques are directed at specific types of interference, selecting the best anti-interference countermeasure first requires an efficient radar interference sensing technique for guidance and coordination. Research on radar interference sensing technology therefore has important military application value and research significance.
However, existing research on radar interference perception technology needs to acquire all radar interference situations, i.e., a data set covering every interference situation must be constructed. For example, when four common interference signals are studied, the pairwise combinations of the four signals (six composite signals) must also be obtained; if combinations of three interference signals are considered, four more composite signals appear, so ten additional composite-interference data sets must be established. Generating simulation data thus requires cumbersome data-set construction steps and a large workload, while for measured data it is extremely difficult to collect interference data from non-cooperative targets in a complex battlefield environment.
Therefore, the existing research on radar interference perception has the problems of large workload, high difficulty and the like in data set construction.
Disclosure of Invention
The embodiment of the application aims to provide a training method of a radar interference perception model and a radar interference signal identification method, so as to solve the problems of large workload, high difficulty and the like of data set construction in the research on radar interference perception.
The invention is realized by the following steps:
in a first aspect, an embodiment of the present application provides a method for training a radar interference perception model, including: acquiring training sample data; the training sample data comprises a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a frequency spectrum dispersion interference signal; performing time-frequency analysis processing on the training sample data to generate a time-frequency distribution image; and inputting the time-frequency distribution image into an initial target detection model for training to generate a radar interference perception model.
In the embodiment of the application, only four single signals, namely a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a spectrum dispersion interference signal, are acquired as training sample data, then the training sample data is subjected to time-frequency analysis processing, and a time-frequency distribution image is generated and then input into a target detection model for training. In other words, in the embodiment of the present application, a "target detection" idea in the computer vision field is introduced to establish a target detection model, and only four kinds of single interference signals need to be provided during training of the target detection model. And the trained target detection model can identify all interference signal types in the image, so that the radar interference of four single interference signals, six pairwise composite signals and four triple composite signals can be flexibly sensed.
With reference to the technical solution provided by the first aspect, in some possible implementation manners, the inputting the time-frequency distribution image into an initial target detection model for training includes: acquiring an image marked by the time-frequency distribution image by a user; and inputting the marked time-frequency distribution image into the initial target detection model for supervised training.
In the embodiment of the application, the time-frequency distribution image marked by the user is input into the initial target detection model for supervised training, so that the target detection model can effectively learn the characteristics of the time-frequency distribution image corresponding to each signal.
With reference to the technical solution provided by the first aspect, in some possible implementation manners, the initial target detection model is a YOLO v5s model.
In the embodiment of the application, the YOLO v5s model is used as the target detection model, so that the detection performance of the target detection model can be improved, the YOLO v5s model has less parameter quantity compared with other target detection models, and the complexity of network model training can be reduced.
With reference to the technical solution provided by the first aspect, in some possible implementation manners, the eighth layer of the YOLO v5s model is a grouped convolutional layer.
In the embodiment of the present application, replacing the standard convolutional layer at the eighth layer of the YOLO v5s model with a grouped convolutional layer can significantly reduce the parameter count and computation of the eighth layer, and thus reduce the parameter count and computation of the YOLO v5s model as a whole.
With reference to the technical solution provided by the first aspect, in some possible implementations, the seventh, tenth, and twenty-fourth layers of the YOLO v5s model are Ghost convolutional layers.
In the embodiment of the present application, replacing the C3 layers at the seventh, tenth, and twenty-fourth layers of the YOLO v5s model with Ghost convolutional layers can significantly reduce the parameter count and computation of each of those layers, and thus reduce the parameter count and computation of the YOLO v5s model as a whole.
In a second aspect, an embodiment of the present application provides a radar interference signal identification method, including: acquiring an interference signal to be detected; performing time-frequency analysis processing on the interference signal to be detected to generate a time-frequency distribution image to be detected; and inputting the time-frequency distribution image to be detected into the radar interference perception model generated by the training method of the radar interference perception model according to any one of claims 1 to 5, obtaining the identification result of the interference signal to be detected; wherein the identification result is one of: a comb spectrum interference signal; an intermittent sampling interference signal; a narrow-band aiming interference signal; a spectrum dispersion interference signal; a comb spectrum interference and intermittent sampling interference composite signal; a comb spectrum interference and narrow-band aiming interference composite signal; a comb spectrum interference and spectrum dispersion interference composite signal; an intermittent sampling interference and narrow-band aiming interference composite signal; an intermittent sampling interference and spectrum dispersion interference composite signal; a narrow-band aiming interference and spectrum dispersion interference composite signal; a comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal; a comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal; a comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal; or an intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal.
In a third aspect, an embodiment of the present application provides a training apparatus for a radar interference perception model, including: the first acquisition module is used for acquiring training sample data; the training sample data comprises a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a frequency spectrum dispersion interference signal; the first processing module is used for performing time-frequency analysis processing on the training sample data to generate a time-frequency distribution image; and the training module is used for inputting the time-frequency distribution image into an initial target detection model for training to generate a radar interference perception model.
In a fourth aspect, an embodiment of the present application provides a radar interference signal identification apparatus, including: the second acquisition module is used for acquiring the interference signal to be detected; the second processing module is used for performing time-frequency analysis processing on the interference signal to be detected to generate a time-frequency distribution image to be detected; and the identification module is configured to input the to-be-detected time-frequency distribution image into a radar interference perception model generated by the training method for a radar interference perception model provided in the embodiment of the first aspect, so as to obtain an identification result of the to-be-detected interference signal; wherein the identification result is one of: a comb spectrum interference signal; an intermittent sampling interference signal; a narrow-band aiming interference signal; a spectrum dispersion interference signal; a comb spectrum interference and intermittent sampling interference composite signal; a comb spectrum interference and narrow-band aiming interference composite signal; a comb spectrum interference and spectrum dispersion interference composite signal; an intermittent sampling interference and narrow-band aiming interference composite signal; an intermittent sampling interference and spectrum dispersion interference composite signal; a narrow-band aiming interference and spectrum dispersion interference composite signal; a comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal; a comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal; a comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal; or an intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, the processor and the memory being connected; the memory is used for storing a program; the processor is configured to call the program stored in the memory to execute the method provided by the embodiments of the first aspect and/or the method provided by the embodiments of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, performs the method according to the first aspect, and/or performs the method according to the second aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating steps of a method for training a radar interference perception model according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a time-frequency distribution image corresponding to each of four interference signals provided in this embodiment.
Fig. 4 is a comparison diagram of the original eighth-layer Conv layer in the YOLO v5s model and the grouped convolution layer that replaces it, according to the embodiment of the present application.
Fig. 5 is a schematic diagram of an operation process of the Ghost convolutional layer according to the embodiment of the present application.
Fig. 6 is a schematic structural diagram of a YOLO v5s model according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a confusion matrix of test results of the YOLO v5s model according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a sensing result of a triple interference composite signal according to an embodiment of the present disclosure.
Fig. 9 is a schematic diagram of a sensing result of a dual interference composite signal according to an embodiment of the present disclosure.
Fig. 10 is a flowchart illustrating steps of a method for identifying a radar interference signal according to an embodiment of the present disclosure.
Fig. 11 is a block diagram of a training apparatus for a radar interference perception model according to an embodiment of the present disclosure.
Fig. 12 is a block diagram of a radar jamming signal identifying apparatus according to an embodiment of the present application.
Icon: 100-an electronic device; 110-a processor; 120-a memory; 200-training means of a radar interference perception model; 210-a first obtaining module; 220-a first processing module; 230-a training module; 300-radar interference signal identification means; 310-a second obtaining module; 320-a second processing module; 330-identification module.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, a schematic block diagram of an electronic device 100 applying a training method for a radar interference sensing model and/or a radar interference signal identification method according to an embodiment of the present disclosure is provided. In the embodiment of the present application, the electronic Device 100 may be a terminal or a server, and the terminal may be, but is not limited to, a Personal Computer (PC), a smart phone, a tablet Computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), and the like. The server may be, but is not limited to, a web server, a database server, a cloud server, or a server assembly composed of a plurality of sub-servers, etc. Of course, the above-mentioned devices are only used to facilitate understanding of the embodiments of the present application, and should not be taken as limiting the embodiments.
Structurally, electronic device 100 may include a processor 110 and a memory 120.
The processor 110 and the memory 120 are electrically connected, directly or indirectly, to enable data transmission or interaction; for example, these components may be electrically connected to each other via one or more communication buses or signal lines. The training apparatus for the radar interference perception model includes at least one software module that may be stored in the memory 120 in the form of software or firmware, or embedded in the operating system (OS) of the electronic device 100. The processor 110 is configured to execute executable modules stored in the memory 120, such as the software functional modules and computer programs included in the training apparatus for the radar interference perception model, so as to implement the training method for the radar interference perception model. Likewise, the radar interference signal identification apparatus includes at least one software module that may be stored in the memory 120 in the form of software or firmware, or embedded in the operating system of the electronic device 100. The processor 110 is configured to execute executable modules stored in the memory 120, such as the software functional modules and computer programs included in the radar interference signal identification apparatus, so as to implement the radar interference signal identification method.
The processor 110 may execute the computer program upon receiving the execution instruction.
The processor 110 may be an integrated circuit chip having signal processing capabilities. The Processor 110 may also be a general-purpose Processor, for example, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a discrete gate or transistor logic device, or a discrete hardware component, which may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. Further, a general purpose processor may be a microprocessor or any conventional processor or the like.
The Memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), and an electrically Erasable Programmable Read-Only Memory (EEPROM). The memory 120 is used for storing a program, and the processor 110 executes the program after receiving the execution instruction.
It should be noted that the structure shown in fig. 1 is only an illustration, and the electronic device 100 provided in the embodiment of the present application may also have fewer or more components than those shown in fig. 1, or have a different configuration than that shown in fig. 1. Further, the components shown in fig. 1 may be implemented by software, hardware, or a combination thereof.
According to the training method of the radar interference perception model and the radar interference signal identification method, a target detection idea in the field of computer vision is introduced to establish the target detection model, and only four single interference signals need to be provided during training of the target detection model. And the trained target detection model can identify all interference signal types in the image, so that the radar interference of four single interference signals, six pairwise composite signals and four triple composite signals can be flexibly sensed. To facilitate understanding of the present solution, first, a training process of the radar interference perception model is described.
Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a method for training a radar interference perception model according to an embodiment of the present application, where the method is applied to the electronic device 100 shown in fig. 1. It should be noted that, the training method for the radar interference perception model provided in the embodiment of the present application is not limited by the sequence shown in fig. 2 and the following steps, and the method includes: step S101-step S103.
Step S101: and acquiring training sample data.
The training sample data includes a comb spectrum interference signal (CSJ), an intermittent sampling interference signal (ISRJ), a narrow-band aiming frequency interference signal (NAJ), and a spectrum dispersion interference signal (SMSP).
In one embodiment, a chirp signal may be used as the radar transmit signal, and the training sample data may be generated from mathematical models of suppression-type and deception-type interference. It should be noted that these mathematical models of suppression and deception interference can be found in the prior art and are not described in detail in this application.
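As a hedged sketch of such simulation, a chirp transmit signal and two of the interference types might be generated as follows. All parameter values and the simplified jamming models below are illustrative assumptions, not the patent's actual simulation settings:

```python
import numpy as np

def lfm_chirp(fs=100e6, duration=10e-6, bandwidth=20e6):
    """Linear frequency-modulated (chirp) radar transmit signal.
    Parameter values are illustrative, not the patent's settings."""
    n = round(fs * duration)
    t = np.arange(n) / fs
    k = bandwidth / duration                    # chirp rate (Hz/s)
    return t, np.exp(1j * np.pi * k * t ** 2)

def smsp_jamming(sig, n_copies=4):
    """Smeared-spectrum (SMSP) style jamming: a time-compressed copy of
    the chirp repeated n_copies times (simplified textbook model)."""
    compressed = sig[::n_copies]                # crude time compression
    return np.tile(compressed, n_copies)[: len(sig)]

def isrj_jamming(sig, n_slices=5, duty=0.5):
    """Intermittent-sampling style jamming: only slices of the pulse are
    sampled and retransmitted; here modeled as gating the signal."""
    gate = np.zeros(len(sig))
    slice_len = len(sig) // n_slices
    for i in range(n_slices):
        start = i * slice_len
        gate[start : start + int(slice_len * duty)] = 1.0
    return sig * gate
```

Repeating such generation with varied parameters would yield the groups of simulated samples described below.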
That is, the training sample data may be derived from simulation data. The main simulation parameter settings of the interference signals are given in Table 1.
Table 1
[Table 1 is rendered as an image in the original document: main simulation parameters of the interference signals.]
Of course, the training sample data may also be derived from measured data or obtained through other simulation experiments, and the present application is not limited thereto.
In one embodiment, 125 groups of samples are generated by simulation for each of the four interference signals, 500 groups in total, which are then divided into a training set (80%) and a validation set (20%). The training set, i.e., 80% of the training sample data, is used for model training; the remaining 20% forms the validation set, which is used after training to verify the model's training level (i.e., its detection error) and to select the best model.
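The 80%/20% split described above can be sketched as follows (the random seed and the tuple representation of a sample group are arbitrary assumptions):

```python
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle and split samples into training and validation subsets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 125 simulated groups for each of the four interference types, 500 total
samples = [(sig_type, idx)
           for sig_type in ("CSJ", "ISRJ", "NAJ", "SMSP")
           for idx in range(125)]
train_set, val_set = split_dataset(samples)   # 400 / 100 groups
```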
Step S102: and performing time-frequency analysis processing on the training sample data to generate a time-frequency distribution image.
After the training sample data is obtained, a Short-Time Fourier Transform (STFT) is applied to perform time-frequency analysis on the training sample data, yielding a time-frequency distribution image that reflects how the instantaneous frequency changes over time.
The short-time Fourier transform is expressed as:

STFT(t, f) = ∫ j(τ) g*(τ − t) e^(−j2πfτ) dτ    (1)

where, in formula (1), j(τ) denotes the interference signal, g(t) denotes the window function, and g*(t) denotes the complex conjugate of the window function.
Since the short-time Fourier transform is well known in the art, it is not described further here.
Referring to fig. 3, fig. 3 shows time-frequency distribution images corresponding to four interference signals, wherein the first is a time-frequency distribution image corresponding to a comb spectrum interference signal (CSJ), the second is a time-frequency distribution image corresponding to an intermittent sampling Interference Signal (ISRJ), the third is a time-frequency distribution image corresponding to a narrowband frequency-aiming interference signal (NAJ), the fourth is a time-frequency distribution image corresponding to a spectrum dispersion interference signal (SMSP), an abscissa in each image represents time (unit: second), and an ordinate in each image represents frequency (unit: hertz).
In an embodiment, the sizes of the time-frequency distribution images may be unified, for example, the sizes of the time-frequency distribution images are uniformly scaled to 224 × 224 pixels, and of course, the specific sizes may also be determined according to actual requirements, which is not limited in this application.
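The time-frequency analysis of step S102 can be sketched with SciPy's STFT. The window type, segment length, overlap, and signal parameters below are illustrative assumptions, not values specified in the patent:

```python
import numpy as np
from scipy.signal import stft

fs = 100e6                                  # assumed sampling rate
n = round(fs * 10e-6)                       # assumed 10-microsecond pulse
t = np.arange(n) / fs
sig = np.exp(1j * np.pi * (20e6 / 10e-6) * t ** 2)   # chirp stand-in

# STFT as in equation (1); hann window and 3/4 overlap are illustrative.
f, tau, Z = stft(sig, fs=fs, window="hann", nperseg=128, noverlap=96,
                 return_onesided=False)
tf_image = np.abs(Z)                        # magnitude time-frequency map
```

In practice `tf_image` would then be rendered (e.g. with matplotlib) and rescaled to the unified 224 × 224 pixel size before annotation.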
Step S103: and inputting the time-frequency distribution image into an initial target detection model for training to generate a radar interference perception model.
And finally, constructing an initial target detection model, training by taking the time-frequency distribution image as the input of the target detection model so as to enable the model to perform feature extraction on the interference signal in a time-frequency domain, and finally training to generate a radar interference perception model for perceiving the radar interference signal.
In an embodiment, a supervised learning manner is adopted to train the target detection model, that is, the step S103 may specifically include: acquiring an image marked by a user on a time-frequency distribution image; and inputting the marked time-frequency distribution image into an initial target detection model for supervised training.
In the embodiment of the application, the time-frequency distribution image marked by the user is input into the initial target detection model for supervised training, so that the target detection model can effectively learn the characteristics of the time-frequency distribution image corresponding to each signal.
A radar interference perception model trained in this way learns to detect the characteristic signatures of the interference signals in the time-frequency distribution image. Therefore, when one time-frequency distribution image contains the characteristics of several different interference signals at the same time, the radar interference perception model can detect all of them simultaneously. Thus, in the embodiment of the application, a target detection idea from the field of computer vision is introduced to establish the target detection model, and only four single interference signals need to be provided during its training. The trained target detection model (i.e., the radar interference perception model) can identify all interference signal types in an image, thereby flexibly perceiving radar interference consisting of four single interference signals, six pairwise composite signals, and four triple composite signals.
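Supervised training of a YOLO-style detector uses per-image annotation files. A common convention (assumed here; the patent does not specify the annotation format) is one line per labeled box, with class id and normalized center/size coordinates:

```python
def yolo_label_line(class_id, box, img_w, img_h):
    """Encode one bounding box as a YOLO-format annotation line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x0, y0, x1, y1 = box
    xc = (x0 + x1) / 2 / img_w
    yc = (y0 + y1) / 2 / img_h
    w = (x1 - x0) / img_w
    h = (y1 - y0) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical class mapping for the four single interference signatures
CLASSES = {"CSJ": 0, "ISRJ": 1, "NAJ": 2, "SMSP": 3}

# e.g. an SMSP signature filling the left half of a 224 x 224 image
line = yolo_label_line(CLASSES["SMSP"], (0, 0, 112, 224), 224, 224)
```

A composite-signal image would simply carry several such lines, one per interference signature present, which is what lets a model trained on single-signal images recognize composites.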
The trained radar interference perception model aims to detect both single interference signals and composite signals, so both can be used as test samples in the test stage. The test samples comprise: a comb spectrum interference signal (CSJ); an intermittent sampling interference signal (ISRJ); a narrowband aiming interference signal (NAJ); a spectrum dispersion interference signal (SMSP); the six pairwise composite signals CSJ+ISRJ, CSJ+NAJ, CSJ+SMSP, ISRJ+NAJ, ISRJ+SMSP, and NAJ+SMSP; and the four triple composite signals CSJ+ISRJ+NAJ, CSJ+ISRJ+SMSP, CSJ+NAJ+SMSP, and ISRJ+NAJ+SMSP.
In one embodiment, the test samples are generated at JNR values from 0 dB to 16 dB in 2 dB steps. Under these 9 JNR conditions, 5 groups of samples are generated for each of the 14 interference signal types, giving a total of 9 × 14 × 5 = 630 groups of test samples. Let P_jam be the interference signal power and P_noise the power of the noise added to the interference signal; JNR is defined as follows:

JNR = 10·lg(P_jam / P_noise)
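As an illustrative sketch (not part of the patent text), the JNR of a generated sample and the size of the resulting test set can be computed as:

```python
import math

def jnr_db(p_jam: float, p_noise: float) -> float:
    """Jamming-to-noise ratio in dB: JNR = 10 * lg(P_jam / P_noise)."""
    return 10.0 * math.log10(p_jam / p_noise)

# 9 JNR conditions (0 dB to 16 dB in 2 dB steps) x 14 interference types
# x 5 groups per type = 630 groups of test samples.
jnr_grid = list(range(0, 17, 2))
total_groups = len(jnr_grid) * 14 * 5
```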
The test samples are used to evaluate the generalization ability and effectiveness of the finally selected radar interference perception model.
In the embodiment of the present application, a YOLO series model is used as the initial target detection model. Specifically, the YOLO v5s model of the YOLO series may be adopted as the target detection model. YOLO v5s provides good detection performance while having fewer parameters than other target detection models, so the complexity of network model training can be reduced.
Please refer to Table 2, which lists the layer names of the YOLO v5s model and the parameter counts when training with the YOLO v5s model.
Table 2

[Table 2: layer names and parameter counts of the YOLO v5s model — reproduced as an image in the original publication]
Since the layer hierarchy of the above-mentioned YOLO v5s model is well known in the art, it is not described here in detail.
As can be seen from Table 2, the total parameter count when training with the YOLO v5s model is 7071633, and the Conv layer at layer 8 together with the C3 layers at layers 7, 10 and 24 accounts for 58.98% of the total network parameters. Therefore, in the embodiment of the application, these four layers of the YOLO v5s model are given a lightweight improvement.
As one lightweight embodiment, the eighth layer of the YOLO v5s model is provided as a grouped convolutional layer.
Referring to fig. 4, fig. 4 compares the original eighth-layer Conv layer of the YOLO v5s model with the grouped convolutional layer that replaces it.
The Conv layer is a standard convolutional layer; standard convolution is the most basic operation in a CNN (Convolutional Neural Network) and the foundation on which it is built.
Assume the input feature map has size C_in × H_in × W_in, where C_in, H_in and W_in are the number of channels, the height and the width of the input feature map, respectively. Passing it through a group of convolution kernels of size K × K yields an output feature map of size C_out × H_out × W_out. For standard convolution, the parameter count of this computation is (K × K × C_in) × C_out. For grouped convolution, let the number of groups be G and divide the feature map evenly along the channel dimension, i.e. every C_in/G channels form a group; correspondingly, the channel number of the output feature map of each group becomes C_out/G. In this case, the parameter count of each group is (K × K × C_in/G) × (C_out/G), and the total over all G groups is (K × K × C_in) × C_out/G, i.e. 1/G of the standard convolution. The value of G must be chosen so that it evenly divides both C_in and C_out.
In the operation shown in FIG. 4, C_in = 8 and G = 2, so the parameter count of the grouped convolution is 1/2 that of the standard convolution. Of course, in other embodiments, G may also be set to 4 as needed, which is not limited in this application.
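The parameter-count comparison above can be checked with a small sketch (illustrative only; the exact channel counts of the figure are an assumption, here C_in = C_out = 8):

```python
def conv_params(c_in: int, c_out: int, k: int, groups: int = 1) -> int:
    """Weight count of a K x K convolution layer; groups=1 is standard convolution."""
    # G must evenly divide both the input and the output channel counts.
    assert c_in % groups == 0 and c_out % groups == 0
    # Each of the G groups maps C_in/G input channels to C_out/G output channels.
    return groups * (k * k * (c_in // groups)) * (c_out // groups)

standard = conv_params(8, 8, 3)            # (3*3*8)*8 = 576 weights
grouped2 = conv_params(8, 8, 3, groups=2)  # 1/2 of standard, as with G = 2
grouped4 = conv_params(8, 8, 3, groups=4)  # 1/4 of standard, as with G = 4
```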
It can be seen that, in the embodiment of the present application, replacing the standard convolutional layer at the eighth layer of the YOLO v5s model with a grouped convolutional layer significantly reduces the parameter count and computation of that layer, and thus of the YOLO v5s model as a whole.
As another lightweight embodiment, the seventh, tenth and twenty-fourth layers of the YOLO v5s model are provided as Ghost convolutional layers.
It should be noted that a conventional deep neural network produces a large number of redundant feature maps through its convolution operations, and these feature maps are highly similar to one another. Producing them by convolution involves a large number of network parameters and consumes considerable computing resources. However, this redundant information supports a comprehensive understanding of the input data and is crucial to model accuracy, so the redundant feature maps cannot simply be removed. To obtain them at lower computational cost, Ghost convolutional layers are used as the seventh, tenth and twenty-fourth layers of the YOLO v5s model.
Ghost convolution is a staged convolution module; its operation is shown in fig. 5. Assume the convolution kernel size is K × K and the final output feature map size is C_out × H_out × W_out. For an input feature map of size C_in × H_in × W_in, a standard convolution is first applied to obtain an intrinsic feature map of C_out/s channels; the parameter count of this step is (K × K × C_in) × C_out/s. These feature maps then undergo a linear transformation to obtain the redundant feature maps, where Φ denotes the linear transformation operation, implemented as channel-by-channel convolution (which can be regarded as grouped convolution with the number of groups equal to the number of channels); the parameter count of this step is K × K × C_out × (s − 1)/s. The redundant feature maps obtained by the linear transformation are called the "Ghost" of the preceding feature maps, and the final output feature map is formed by combining the two sets of feature maps. The parameter count of the whole process is (K × K × C_in) × C_out/s + K × K × C_out × (s − 1)/s, about 1/s of the standard convolution parameter count.
It should be noted that s can be set according to the actual situation. For example, if an eight-channel feature map is reduced to four channels by the primary convolution, then s = 2, which is the case shown in fig. 5.
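Under the formulas above, the Ghost-layer parameter count can be sketched as follows (illustrative channel counts, not taken from the patent):

```python
def ghost_conv_params(c_in: int, c_out: int, k: int, s: int) -> int:
    """Parameter count of a Ghost convolution (biases ignored).

    Primary convolution produces C_out/s intrinsic maps: (K*K*C_in) * C_out/s.
    Channel-by-channel linear transform generates s-1 ghosts per intrinsic
    map: K*K * (C_out/s) * (s-1) = K*K * C_out * (s-1)/s.
    """
    assert c_out % s == 0
    primary = (k * k * c_in) * (c_out // s)
    cheap = k * k * (c_out // s) * (s - 1)
    return primary + cheap

standard = 3 * 3 * 64 * 128               # K=3, C_in=64, C_out=128: 73728 weights
ghost = ghost_conv_params(64, 128, 3, 2)  # roughly half of standard for s = 2
```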
It can be seen that, in the embodiment of the present application, replacing the C3 layers at the seventh, tenth and twenty-fourth layers of the YOLO v5s model with Ghost convolutional layers significantly reduces the parameter count and computation of each of these layers, and thus of the YOLO v5s model as a whole.
Referring to fig. 6, fig. 6 is a schematic structural diagram of the YOLO v5s model after the lightweight improvement. The input data are the time-frequency analysis images of the various interferences. Time-frequency-domain feature information is first extracted by the feature extraction backbone network (layers one to ten), feature information of different scales is then fused by the feature fusion network (layers ten to twenty-five), prediction is carried out on feature maps at three scales, and the interference perception results are output.
The training configuration and experimental verification of the target detection model are described below.
The main hyper-parameters in the network training process are set as follows: the training period (epochs) is 1000, the batch size (batch size) is 64, a Stochastic Gradient Descent (SGD) method is selected as an optimizer, the momentum factor is set to be 0.937, the weight attenuation coefficient is 0.0005, the initial learning rate is 0.01, the learning rate is gradually adjusted to be 1/5 in the model training process, and the model is ensured to be gradually converged.
Evaluation indexes are as follows:
The perception performance of the network is evaluated with the mean average precision (mAP) index commonly used in target detection tasks, computed under two intersection-over-union (IOU) threshold settings: mAP@0.5, the mAP at an IOU threshold of 0.5, and mAP@0.5:0.95, the average of the mAPs at the 10 IOU thresholds from 0.5 to 0.95 in steps of 0.05.
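As a sketch of the IOU quantity on which both indexes rest (axis-aligned boxes given as (x1, y1, x2, y2); illustrative code, not from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

# mAP@0.5 evaluates at the single threshold 0.5; mAP@0.5:0.95 averages the
# mAPs over the 10 thresholds 0.5, 0.55, ..., 0.95.
thresholds = [0.5 + 0.05 * i for i in range(10)]
```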
In the experiment, the result of perceiving a given radar interference falls into 4 cases: true positive (TP), a correct perception whose result is a positive sample; false positive (FP), an incorrect perception in which a negative sample is perceived as positive; true negative (TN), a correct perception whose result is a negative sample; and false negative (FN), an incorrect perception in which a positive sample is perceived as negative. Whether a perception result counts as a positive or negative sample is determined by whether its IOU exceeds the threshold. Recall and precision are defined in equation (3) and equation (4), respectively.
Recall = TP / (TP + FN)    (3)

Precision = TP / (TP + FP)    (4)
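Equations (3) and (4) translate directly into code (illustrative sketch):

```python
def recall(tp: int, fn: int) -> float:
    """Equation (3): share of actually present interference that is detected."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Equation (4): share of detections that are correct."""
    return tp / (tp + fp)

# e.g. 9 correct detections, 1 false alarm, 3 missed interferences:
p = precision(9, 1)   # 0.9
r = recall(9, 3)      # 0.75
```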
The mAP is defined by equation (5) and equation (6), where AP is the average precision of a given class, P is precision, R is recall, P(R) denotes the precision-recall curve, and C is the number of interference classes; the larger the mAP, the stronger the perception performance.
AP = ∫₀¹ P(R) dR    (5)

mAP = (1/C) · Σᵢ₌₁^C APᵢ    (6)
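Equations (5) and (6) can be sketched numerically; the trapezoidal integration over sampled (recall, precision) points is an assumption about how P(R) is discretized, not the patent's own procedure:

```python
def average_precision(pr_points):
    """Equation (5): area under the precision-recall curve P(R),
    approximated by trapezoidal integration over (recall, precision) samples."""
    pts = sorted(pr_points)
    ap, prev_r, prev_p = 0.0, 0.0, pts[0][1]
    for r, p in pts:
        ap += (r - prev_r) * (p + prev_p) / 2.0
        prev_r, prev_p = r, p
    return ap

def mean_average_precision(aps):
    """Equation (6): mAP is the mean AP over the C interference classes."""
    return sum(aps) / len(aps)

# A detector that keeps precision 1.0 up to recall 1.0 scores AP = 1.0.
perfect = average_precision([(0.5, 1.0), (1.0, 1.0)])
```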
For model size, the evaluation index is the number of parameters (Params); computational complexity is evaluated by the number of floating point operations (FLOPs).
Ablation test:
The original YOLO v5s and the improved algorithm were compared in ablation experiments to evaluate the improvement each module brings to the original YOLO v5s; the parameter counts, computation amounts, mAP@0.5 and mAP@0.5:0.95 are listed in Table 3 and Table 4, respectively.
It can be seen that after the improvement with the Ghost module, the network parameter count and computation are clearly reduced while the perception performance improves. Grouped convolution further reduces the parameter count and computation, but when G = 4 the perception performance degrades. Comparing the ablation results, the final improved structure therefore uses the Ghost module together with grouped convolution with G = 2; the improved algorithm still delivers a clear gain in perception precision while the parameter count is reduced by 24.4%.
Table 3

[Table 3: ablation results, parameter counts and computation amounts — reproduced as an image in the original publication]

Table 4

[Table 4: ablation results, mAP@0.5 and mAP@0.5:0.95 — reproduced as an image in the original publication]
Comparison test:
The improved network (i.e., the target detection model with the improved structure) is compared in perception performance against several typical target detection networks, such as YOLO v5s, YOLO v3 and SSD.
Under both the mAP@0.5 and the mAP@0.5:0.95 evaluation indexes, the perception performance of YOLO v5s and of the improved network of the embodiment of the application is clearly superior to that of the classical YOLO v3 and SSD networks. For mAP@0.5, YOLO v5s is very close to the improved network, differing by at most 1.2%. For the stricter mAP@0.5:0.95 index, the perception performance of the improved network remains better than that of YOLO v5s under the different JNRs, improving by up to 4.1% at a JNR of 16 dB.
When the JNR is as low as 0 dB, the confusion matrix of the test results of the improved network of the embodiment of the application is shown in fig. 7, where the horizontal axis is the true interference class label and the vertical axis is the interference class label predicted by the improved network; the Background label refers to the background of the image. Each value is the Recall, i.e. the ratio of the number of correctly perceived interference samples of a given type to the number of such samples actually present. The confusion matrix shows that for the CSJ, ISRJ and SMSP interferences in all the composite types, the Recall of the improved network reaches 0.94 or more. At such a low JNR the background noise is close in character to NAJ, which is an important reason why interference perception is difficult at low JNR and why the perception results become confused; even so, the Recall of NAJ still reaches 0.89. It can be seen that the improved network maintains high interference perception performance at a JNR of 0 dB.
Demonstration of the perception results:
A perception experiment on the 10 types of composite interference was conducted with the network model provided by the embodiment of the application, with composite interference samples randomly selected under different JNR conditions. The output perception result displays an interference perception frame that boxes the distribution area of the interference signal in the time-frequency image and labels the type of the interference signal. The perception results for the four triple composite interferences and the six pairwise composite interferences are shown in fig. 8 and fig. 9, respectively.
Fig. 8(a) shows the perception result of the comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal (CSJ + ISRJ + SMSP); fig. 8(b) that of the comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal (CSJ + ISRJ + NAJ); fig. 8(c) that of the intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal (ISRJ + NAJ + SMSP); and fig. 8(d) that of the comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal (CSJ + NAJ + SMSP).
Fig. 9(a) shows the intermittent sampling interference and spectrum dispersion interference composite signal (ISRJ + SMSP); fig. 9(b) the comb spectrum interference and narrow-band aiming interference composite signal (CSJ + NAJ); fig. 9(c) the narrow-band aiming interference and spectrum dispersion interference composite signal (NAJ + SMSP); fig. 9(d) the comb spectrum interference and intermittent sampling interference composite signal (CSJ + ISRJ); fig. 9(e) the comb spectrum interference and spectrum dispersion interference composite signal (CSJ + SMSP); and fig. 9(f) the intermittent sampling interference and narrow-band aiming interference composite signal (ISRJ + NAJ).
The perception results show that the interference perception frames accurately box the interference signals with no missed detections, and interference signals that are partially occluded are also perceived accurately.
Referring to fig. 10, an application process of the radar interference perception model obtained by training in the foregoing embodiment is described below, where fig. 10 is a diagram of a radar interference signal identification method provided in an embodiment of the present application, and the method includes: step S201 to step S203.
Step S201: acquiring the interference signal to be detected.

Step S202: performing time-frequency analysis processing on the interference signal to be detected to generate a time-frequency distribution image to be detected.

Step S203: inputting the time-frequency distribution image to be detected into the radar interference perception model to obtain the identification result of the interference signal to be detected.
Wherein the identification result is one of: a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming interference signal, a spectrum dispersion interference signal, a comb spectrum interference and intermittent sampling interference composite signal, a comb spectrum interference and narrow-band aiming interference composite signal, a comb spectrum interference and spectrum dispersion interference composite signal, an intermittent sampling interference and narrow-band aiming interference composite signal, an intermittent sampling interference and spectrum dispersion interference composite signal, a narrow-band aiming interference and spectrum dispersion interference composite signal, a comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal, a comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal, a comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal, or an intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal.
That is, after the final radar interference perception model is generated through training, it can be used for perception detection of interference signals to be detected.
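Steps S201 to S203 can be sketched as a pipeline (illustrative: a plain short-time Fourier magnitude with a Hanning window and assumed window parameters stands in for the time-frequency analysis, and `model(...)` stands for the trained perception network):

```python
import numpy as np

def time_frequency_image(signal, win=128, hop=32):
    """Step S202: magnitude short-time Fourier transform (freq x time)."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

# Step S201: acquire the signal to be detected (a synthetic tone here).
t = np.arange(4096) / 4096.0
x = np.sin(2 * np.pi * 400 * t)

tf_image = time_frequency_image(x)
# Step S203 would render tf_image as an image and feed it to the trained
# radar interference perception model: result = model(tf_image)
```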
Referring to fig. 11, based on the same inventive concept, an embodiment of the present application further provides a training apparatus 200 for a radar interference perception model, the apparatus including:
a first obtaining module 210, configured to obtain training sample data; the training sample data comprises a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a frequency spectrum dispersion interference signal.
The first processing module 220 is configured to perform time-frequency analysis processing on the training sample data to generate a time-frequency distribution image.
The training module 230 is configured to input the time-frequency distribution image into an initial target detection model for training, so as to generate the radar interference perception model.
Optionally, the training module 230 is specifically configured to obtain an image obtained by marking the time-frequency distribution image by a user; and inputting the marked time-frequency distribution image into the initial target detection model for supervised training.
Optionally, the initial target detection model is a YOLO v5s model.
Optionally, the eighth layer of the YOLO v5s model is a grouped convolutional layer.
Optionally, the seventh, tenth, and twenty-fourth layers of the YOLO v5s model are Ghost convolutional layers.
Referring to fig. 12, based on the same inventive concept, an embodiment of the present application provides a radar interference signal identification apparatus 300, including:
the second obtaining module 310 is configured to obtain an interference signal to be detected.
The second processing module 320 is configured to perform time-frequency analysis processing on the interference signal to be detected, and generate a time-frequency distribution image to be detected.
An identification module 330, configured to input the time-frequency distribution image to be detected into the radar interference perception model generated by the training method of the radar interference perception model according to any one of claims 1 to 5, to obtain the identification result of the interference signal to be detected; wherein the identification result is one of: a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming interference signal, a spectrum dispersion interference signal, a comb spectrum interference and intermittent sampling interference composite signal, a comb spectrum interference and narrow-band aiming interference composite signal, a comb spectrum interference and spectrum dispersion interference composite signal, an intermittent sampling interference and narrow-band aiming interference composite signal, an intermittent sampling interference and spectrum dispersion interference composite signal, a narrow-band aiming interference and spectrum dispersion interference composite signal, a comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal, a comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal, a comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal, or an intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal.
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Based on the same inventive concept, embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the computer program performs the methods provided in the above embodiments.
The storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be electrical, mechanical or in other forms.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A method for training a radar interference perception model is characterized by comprising the following steps:
acquiring training sample data; the training sample data comprises a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a frequency spectrum dispersion interference signal;
performing time-frequency analysis processing on the training sample data to generate a time-frequency distribution image;
and inputting the time-frequency distribution image into an initial target detection model for training to generate a radar interference perception model.
2. The method of claim 1, wherein the inputting the time-frequency distribution image into an initial object detection model for training comprises:
acquiring an image marked by the time-frequency distribution image by a user;
and inputting the marked time-frequency distribution image into the initial target detection model for supervised training.
3. The method of claim 1, wherein the initial target detection model is a YOLO v5s model.
4. The method of claim 3, wherein the eighth layer of the YOLO v5s model is a grouped convolutional layer.
5. The method of claim 3, wherein the seventh, tenth, and twenty-fourth layers of the YOLO v5s model are Ghost convolutional layers.
6. A radar interference signal identification method is characterized by comprising the following steps:
acquiring an interference signal to be detected;
performing time-frequency analysis processing on the interference signal to be detected to generate a time-frequency distribution image to be detected;
inputting the time-frequency distribution image to be detected into the radar interference perception model generated by the training method of the radar interference perception model according to any one of claims 1 to 5, and obtaining the identification result of the interference signal to be detected; wherein the identification result is one of: a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming interference signal, a spectrum dispersion interference signal, a comb spectrum interference and intermittent sampling interference composite signal, a comb spectrum interference and narrow-band aiming interference composite signal, a comb spectrum interference and spectrum dispersion interference composite signal, an intermittent sampling interference and narrow-band aiming interference composite signal, an intermittent sampling interference and spectrum dispersion interference composite signal, a narrow-band aiming interference and spectrum dispersion interference composite signal, a comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal, a comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal, a comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal, or an intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal.
7. A training device for a radar interference perception model is characterized by comprising:
the first acquisition module is used for acquiring training sample data; the training sample data comprises a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming frequency interference signal and a frequency spectrum dispersion interference signal;
the first processing module is used for performing time-frequency analysis processing on the training sample data to generate a time-frequency distribution image;
and the training module is used for inputting the time-frequency distribution image into an initial target detection model for training to generate a radar interference perception model.
8. A radar interference signal identifying apparatus, comprising:
the second acquisition module is used for acquiring the interference signal to be detected;
the second processing module is used for performing time-frequency analysis processing on the interference signal to be detected to generate a time-frequency distribution image to be detected;
the identification module is used for inputting the time-frequency distribution image to be detected into the radar interference perception model generated by the training method of the radar interference perception model according to any one of claims 1 to 5 to obtain the identification result of the interference signal to be detected; wherein the identification result is one of: a comb spectrum interference signal, an intermittent sampling interference signal, a narrow-band aiming interference signal, a spectrum dispersion interference signal, a comb spectrum interference and intermittent sampling interference composite signal, a comb spectrum interference and narrow-band aiming interference composite signal, a comb spectrum interference and spectrum dispersion interference composite signal, an intermittent sampling interference and narrow-band aiming interference composite signal, an intermittent sampling interference and spectrum dispersion interference composite signal, a narrow-band aiming interference and spectrum dispersion interference composite signal, a comb spectrum interference, intermittent sampling interference and narrow-band aiming interference triple composite signal, a comb spectrum interference, intermittent sampling interference and spectrum dispersion interference triple composite signal, a comb spectrum interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal, or an intermittent sampling interference, narrow-band aiming interference and spectrum dispersion interference triple composite signal.
9. An electronic device, comprising: a processor and a memory, the processor and the memory connected;
the memory is used for storing programs;
the processor is configured to run a program stored in the memory, to perform the method of any of claims 1-5, and/or to perform the method of claim 6.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a computer, performs the method of any one of claims 1-5 and/or performs the method of claim 6.
CN202210547511.XA 2022-05-18 2022-05-18 Training method of radar interference perception model and radar interference signal identification method Active CN114818828B (en)

Publications (2)

Publication Number Publication Date
CN114818828A true CN114818828A (en) 2022-07-29
CN114818828B CN114818828B (en) 2024-04-05


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115840875A (en) * 2022-11-10 2023-03-24 北京擎天信安科技有限公司 Millimeter wave radar abnormal signal detection method and system based on analog transducer
CN117452367A (en) * 2023-12-21 2024-01-26 西安电子科技大学 SAR load radiation signal extraction method and device based on broadband imaging radar

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019161076A1 (en) * 2018-02-19 2019-08-22 Digital Global Systems, Inc. Systems, methods, and devices for unmanned vehicle detection and threat management
US20200143279A1 (en) * 2018-11-06 2020-05-07 DeepSig Inc. Radio frequency band segmentation, signal detection and labelling using machine learning
CN111541511A (en) * 2020-04-20 2020-08-14 中国人民解放军海军工程大学 Communication interference signal identification method based on target detection in complex electromagnetic environment
CN112904282A (en) * 2021-01-20 2021-06-04 北京理工大学 Radar interference signal identification method based on PWVD and convolutional neural network
CN113469073A (en) * 2021-07-06 2021-10-01 西安电子科技大学 SAR image ship detection method and system based on lightweight deep learning
CN113486898A (en) * 2021-07-08 2021-10-08 西安电子科技大学 Radar signal RD image interference identification method and system based on improved ShuffleNet
CN114266299A (en) * 2021-12-16 2022-04-01 京沪高速铁路股份有限公司 Method and system for detecting defects of steel structure of railway bridge based on unmanned aerial vehicle operation
CN114429156A (en) * 2022-01-21 2022-05-03 西安电子科技大学 Radar interference multi-domain feature countermeasure learning and detection identification method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
QINZHE LV et al.: "Radar Deception Jamming Recognition Based on Weighted Ensemble CNN With Transfer Learning", pages 1-11 *
XIONGXIN ZOU et al.: "A lightweight model based on YOLOv5 for helmet wearing detection", pages 1-6 *
CHENG Libo: "Research on a deep-learning-based landslide disaster detection model" *
WANG Shukun et al.: "Research on an improved lightweight YOLOv5 insulator defect detection algorithm", pages 456-461 *
LANG Bin et al.: "A small-sample data-driven lightweight perception network for radar compound jamming", pages 1-13 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115840875A (en) * 2022-11-10 2023-03-24 北京擎天信安科技有限公司 Millimeter wave radar abnormal signal detection method and system based on analog transducer
CN117452367A (en) * 2023-12-21 2024-01-26 西安电子科技大学 SAR load radiation signal extraction method and device based on broadband imaging radar
CN117452367B (en) * 2023-12-21 2024-03-26 西安电子科技大学 SAR load radiation signal extraction method and device based on broadband imaging radar

Also Published As

Publication number Publication date
CN114818828B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN114818828B (en) Training method of radar interference perception model and radar interference signal identification method
EP2814218B1 (en) Detecting anomalies in work practice data by combining multiple domains of information
CN108833458B (en) Application recommendation method, device, medium and equipment
CN111191601B (en) Method, device, server and storage medium for identifying peer users
CN109817339B (en) Patient grouping method and device based on big data
CN103370722B (en) The system and method that actual volatility is predicted by small echo and nonlinear kinetics
US11373760B2 (en) False detection rate control with null-hypothesis
CN110969200A (en) Image target detection model training method and device based on consistency negative sample
CN112596964A (en) Disk failure prediction method and device
CN110287703B (en) Method and device for detecting vehicle safety risk
CN111597399A (en) Computer data processing system and method based on data fusion
CN111161789B (en) Analysis method and device for key areas of model prediction
CN113296992A (en) Method, device, equipment and storage medium for determining abnormal reason
CN117332324A (en) Pipeline leakage detection method and device, electronic equipment and storage medium
US11341394B2 (en) Diagnosis of neural network
CN116720946A (en) Credit risk prediction method, device and storage medium based on recurrent neural network
US9122705B1 (en) Scoring hash functions
CN113890833B (en) Network coverage prediction method, device, equipment and storage medium
CN115225359A (en) Honeypot data tracing method and device, computer equipment and storage medium
Jung et al. Spatial cluster detection for ordinal outcome data
CN114218574A (en) Data detection method and device, electronic equipment and storage medium
CN109783745B (en) Method, device and computer equipment for personalized typesetting of pages
CN112613224A (en) Training method, detection method, device and equipment of bridge structure detection model
Hopkins et al. Gap analysis and the geographical distribution of parasites
CN112308099A (en) Sample feature importance determination method, and classification model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant