CN115542318B - Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method - Google Patents

Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method

Info

Publication number
CN115542318B
CN115542318B (application CN202211244116.0A)
Authority
CN
China
Prior art keywords
knowledge, unmanned aerial vehicle, radar, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211244116.0A
Other languages
Chinese (zh)
Other versions
CN115542318A (en)
Inventor
张小飞
吴启晖
李建峰
晋本周
徐帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202211244116.0A priority Critical patent/CN115542318B/en
Publication of CN115542318A publication Critical patent/CN115542318A/en
Application granted granted Critical
Publication of CN115542318B publication Critical patent/CN115542318B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S7/41 Using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/411 Identification of targets based on measurements of radar reflectivity
    • G06F16/367 Ontology
    • G06F40/30 Semantic analysis
    • G06N3/08 Neural networks; learning methods
    • G06N5/022 Knowledge engineering; knowledge acquisition
    • G06N5/04 Inference or reasoning models


Abstract

The invention discloses an air-ground combined multi-domain detection system and method for unmanned aerial vehicle group targets, comprising a networking radar and distributed spectrum passive monitoring equipment configured at a ground monitoring station, together with mobile multispectral sensing equipment and mobile spectrum monitoring equipment arranged on unmanned aerial vehicles. The networking radar, the distributed spectrum passive monitoring equipment, the mobile multispectral sensing equipment and the mobile spectrum monitoring equipment cooperatively perform multi-domain detection to realize comprehensive situation identification and presentation of cluster targets; multi-source heterogeneous radar, spectrum and multispectral information is fused by a joint convolutional self-encoding deep learning network; and a knowledge graph of the group target is constructed and reasoned over, covering the processes of source data acquisition, information extraction, knowledge fusion, knowledge processing and knowledge base updating. The invention combines the advantages of the different sensing modes across dimensions such as distance, direction and granularity, and breaks through the limitation of single-domain detection through cascading, collaboration and fusion.

Description

Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle reconnaissance, and particularly relates to an air-ground combined multi-domain detection system and method for an unmanned aerial vehicle group target.
Background
In recent years, with the development of related technologies, unmanned aerial vehicles have been widely applied in aerial photography, agriculture, plant protection, disaster relief, epidemic monitoring, surveying and mapping, news reporting, electric power inspection and other fields. In the military field, detection, autonomous countermeasure and strike technologies based on unmanned aerial vehicle group targets have partly entered the application stage. In sharp contrast to the widespread use of unmanned aerial vehicles, however, the management and countering of unmanned aerial vehicles lags behind, especially in the detection of unmanned aerial vehicle group targets.
The existing detection technology for unmanned aerial vehicles mainly has two defects: first, detection based on single-domain information lacks structural and holistic awareness, so detection, tracking and recognition accuracy is low; second, it lacks the capability to recognize the intent of the unmanned aerial vehicle group as a whole.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects of the prior art, an air-ground combined multi-domain detection system for unmanned aerial vehicle group targets, which comprehensively applies the information obtained by radar, spectrum monitoring equipment and multispectral equipment and offers high-precision tracking and intention recognition for unmanned aerial vehicle group targets.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
the air-ground combined multi-domain detection system for the unmanned aerial vehicle group target comprises a networking radar, a distributed spectrum passive monitoring device, a mobile multi-spectrum sensing device and a mobile spectrum monitoring device;
the networking radar and the distributed spectrum passive monitoring equipment are configured on a ground monitoring station;
the mobile multispectral sensing device and the mobile spectrum monitoring device are arranged on the unmanned aerial vehicle for approaching reconnaissance;
the networking radar is used for cooperating with the distributed spectrum passive monitoring equipment to track and detect targets of unmanned aerial vehicle groups in the detection range of any radar equipment in the networking radar and identify movement intention;
the mobile multispectral sensing device and the mobile spectrum monitoring device are used for performing approaching reconnaissance on an unmanned aerial vehicle group target along with the unmanned aerial vehicle under the guidance of the networking radar;
the system coordinates and processes multi-domain information based on radar information, spectrum information and multispectral information returned by the radar, the mobile multispectral sensing equipment and the mobile spectrum monitoring equipment, so as to realize tracking monitoring and intention recognition of the unmanned aerial vehicle group target.
In order to optimize the technical scheme, the specific measures adopted further comprise:
the networking radar comprises a plurality of radar devices for actively transmitting electromagnetic waves, wherein each radar device is responsible for detecting a part of fixed airspace; the radar equipment actively radiates electromagnetic waves, has a system maximum detection distance threshold value, and is used for detecting targets of unmanned aerial vehicles of an enemy cluster at first, and when a certain radar detects the unmanned aerial vehicle of the enemy cluster, other radars keep detecting a responsible airspace so as to keep the detection capability of a plurality of unmanned aerial vehicle clusters; meanwhile, the distributed spectrum passive monitoring equipment is guided to monitor communication signals and command signals of the enemy unmanned aerial vehicle cluster, and high-precision tracking of the enemy unmanned aerial vehicle cluster is kept together with the networking radar.
The distributed spectrum passive monitoring equipment is provided with an antenna array, and each array element is an omnidirectional passive receiving array element;
in normal operation, the distributed spectrum passive monitoring equipment opens only a single antenna channel and performs a 360-degree search of the protected airspace; after detecting communication signals of an unmanned aerial vehicle group target, or after receiving guidance from the networking radar, it opens all channel antennas to jointly receive the target's communication signals, then performs direction-of-arrival estimation on those signals to obtain the pitch-angle and azimuth-angle information of the unmanned aerial vehicle group, and cooperates with the networking radar to keep tracking the enemy unmanned aerial vehicles.
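The direction-of-arrival estimation step above can be sketched with a standard subspace method such as MUSIC; the patent does not name a specific algorithm, so this numpy example (a half-wavelength uniform linear array with a single simulated emitter) is purely illustrative:

```python
import numpy as np

def music_doa(snapshots, n_sources, n_grid=361):
    """Estimate directions of arrival with the MUSIC algorithm.

    snapshots: (n_elements, n_snapshots) complex array of received samples
    for a half-wavelength-spaced uniform linear array.
    Returns estimated angles in degrees, sorted ascending.
    """
    n_elements = snapshots.shape[0]
    # Sample covariance of the array output.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Noise subspace: eigenvectors of the smallest eigenvalues (eigh is ascending).
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : n_elements - n_sources]
    grid = np.linspace(-90.0, 90.0, n_grid)
    spectrum = np.empty(n_grid)
    for i, theta in enumerate(grid):
        # Steering vector for a half-wavelength ULA.
        a = np.exp(1j * np.pi * np.arange(n_elements) * np.sin(np.deg2rad(theta)))
        # MUSIC pseudo-spectrum: peaks where a(theta) is orthogonal to the noise subspace.
        spectrum[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return np.sort(grid[np.argsort(spectrum)[-n_sources:]])

# Simulate a single emitter at 20 degrees on an 8-element ULA.
rng = np.random.default_rng(0)
n_elem, n_snap, true_angle = 8, 200, 20.0
a = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(np.deg2rad(true_angle)))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap)))
x = np.outer(a, s) + noise
est = music_doa(x, n_sources=1)
```

The pitch and azimuth pair described in the text would require a planar array and a 2-D search; this 1-D sketch only conveys the principle.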
The air-ground combined multi-domain detection method for the unmanned aerial vehicle group target comprises the following steps:
step 1, the networking radar, the distributed spectrum passive monitoring equipment, the mobile multispectral sensing equipment and the mobile spectrum monitoring equipment cooperatively perform multi-domain detection, realizing comprehensive situation identification and presentation of cluster targets;
step 2, adopting a joint convolution self-coding deep learning network to fuse the multi-source heterogeneous radar, the frequency spectrum and the multi-spectrum information obtained in the step 1;
step 3: and constructing and knowledge reasoning the knowledge graph of the group target, wherein the knowledge graph comprises the processes of source data acquisition, information extraction, knowledge fusion, knowledge processing and knowledge base updating.
The step 1 includes:
the networking radar receives the target echo, while the distributed spectrum equipment receives the target's radar, active-jamming, communication and signaling emissions; azimuth collaborative detection of the cluster target is then carried out through information and data collaboration;
the networking radar, based on the echoes of different targets, and the mobile multispectral sensing equipment, based on fusion of the targets' multiband spectral signals, cooperatively identify the target forms at coarse and fine granularity;
the distributed spectrum passive monitoring equipment performs deep analysis of the cluster target's radar signals and communication signaling, the mobile multispectral sensing equipment performs fine recognition of the target form, and together they carry out behavior analysis and threat assessment of the target; finally, comprehensive situation identification and presentation of the cluster target is formed based on cooperative processing among the devices.
In the step 2, a multi-task loss function suitable for image fusion is first designed, and a joint convolutional self-encoding network is trained simultaneously on radar, infrared and polarized images; the images to be fused are then input into the trained network, fusion rules are designed according to the redundant and complementary characteristics of the images, fusion is realized at the feature layer, and the fused image is obtained after decoding and reconstruction.
The joint convolution self-coding deep learning network comprises a feature learning stage and a fusion stage;
in the feature learning stage, the input heterogeneous-source images pass through a coding layer to obtain their public features and private features; the coding layer is divided into one public branch and several private branches, with weights shared in the public branch and not shared across the private branches; the private branches distinguish different images through learned features, the learned private features represent the complementary relations among radar, infrared and polarized images, and the public branch learns public features representing their redundant relations;
the radar, infrared and polarized images to be fused are input into a joint convolution self-coding deep learning network, and the public characteristics and the private characteristics of the original image are respectively output through the last layer of the coding layer.
In the fusion stage, a weighted fusion strategy is selected and adopted according to different feature forms, the features of the public features of the two original images at the same position are subjected to weighted fusion, and the fused public features and private features are subjected to a decoding layer and then are combined to obtain a fused result.
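The weighted fusion of public features at the same position can be illustrated with a simple activity-based rule; the encoder and decoder of the network are omitted here, and the L1-activity weighting is an assumption standing in for whatever rule the patent's embodiment actually uses:

```python
import numpy as np

def weighted_fuse(feat_a, feat_b, eps=1e-8):
    """Fuse two public-feature maps position-wise.

    Each feature map has shape (channels, H, W). The per-position weight is
    the relative L1 activity of each map, so the more salient source image
    dominates at every spatial location.
    """
    act_a = np.abs(feat_a).sum(axis=0, keepdims=True)  # (1, H, W) activity map
    act_b = np.abs(feat_b).sum(axis=0, keepdims=True)
    w_a = act_a / (act_a + act_b + eps)
    return w_a * feat_a + (1.0 - w_a) * feat_b

# Toy features: the radar map is salient in the upper half, the infrared
# map in the lower half, so each half of the fused map follows one source.
radar_feat = np.zeros((4, 8, 8)); radar_feat[:, :4, :] = 2.0
ir_feat = np.zeros((4, 8, 8)); ir_feat[:, 4:, :] = 1.0
fused = weighted_fuse(radar_feat, ir_feat)
```

In the described pipeline this fused map, together with the private features, would then pass through the decoding layer to reconstruct the fused image.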
The source data in the step 3 include data, expert knowledge, and environment and task conditions;
the data comprise target batch numbers, radar working mode information, and radiation source batch numbers and working frequency point information;
expert knowledge includes combat effort and attribute knowledge;
the environment and task conditions comprise combat tasks and environment information;
the information extraction is respectively carried out from three layers of data, expert knowledge, environment and task, and the information extraction is carried out through data mining and deep learning and is divided into entity extraction, relation extraction and attribute extraction;
entity extraction refers to identifying and extracting related entities from data;
the relation extraction refers to extracting semantic link relation among entities, and is divided into supervised learning extraction and remote supervised learning extraction;
attribute extraction refers to extracting characteristics and properties of related entities in data.
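As a toy illustration of entity, relation and attribute extraction from structured source data (the report format and all field names here are hypothetical, not taken from the patent):

```python
# One toy sensor report in an assumed "field: value" format (hypothetical).
report = "target_id: T-017; type: quadrotor; freq: 2.44GHz; escorts: T-018"

def extract(report):
    """Split one report into entities, attributes and relations (toy rules).

    Real extraction would use data mining or deep learning as described;
    this rule-based version only shows the three output categories.
    """
    fields = dict(part.split(": ") for part in report.split("; "))
    entities = [fields["target_id"]]                               # entity extraction
    attributes = {fields["target_id"]: {"type": fields["type"],    # attribute extraction
                                        "freq": fields["freq"]}}
    relations = [(fields["target_id"], "escorts", fields["escorts"])]  # relation extraction
    return entities, attributes, relations

ents, attrs, rels = extract(report)
```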
The knowledge information fusion comprises knowledge fusion of a concept layer and a data layer;
knowledge fusion at the concept layer comprises ontology alignment, i.e. the process of determining mapping relations among ontology elements such as concepts, relations and attributes;
knowledge fusion at the data layer includes coreference resolution and entity alignment.
The knowledge processing comprises ontology construction, knowledge reasoning and quality assessment;
the ontology construction provides the semantic basis for entity communication in the knowledge graph, presented as a network structure composed of points, lines and planes;
the knowledge reasoning is to acquire new knowledge or conclusion through semantic analysis of the triples, and comprises rational reasoning and judgment reasoning;
the quality evaluation refers to evaluating the generated knowledge data and importing the data meeting the standard into a knowledge graph;
the updating of the knowledge base comprises updating of the concept layer and updating of the data layer; the updating modes include incremental updating and comprehensive updating, where incremental updating adds new knowledge to the existing knowledge graph, and comprehensive updating starts from zero and takes all post-update data as input;
the updating of the concept layer means that new concepts are obtained after the data is newly added, and the new concepts are automatically added into the concept layer of the knowledge base;
updating the data layer refers to adding or updating entity, relationship and attribute values in consideration of reliability of data sources and consistency of data.
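The two update modes above can be sketched with a minimal triple store holding (head, relation, tail) facts; the class and the example triples are illustrative only:

```python
class TripleStore:
    """Minimal knowledge-base data layer holding (head, relation, tail) triples."""

    def __init__(self, triples=()):
        self.triples = set(triples)

    def incremental_update(self, new_triples):
        # Incremental updating: merge new knowledge into the existing graph.
        self.triples |= set(new_triples)

    @classmethod
    def comprehensive_update(cls, all_triples):
        # Comprehensive updating: start from zero with all post-update data as input.
        return cls(all_triples)

base = TripleStore([("UAV-1", "member_of", "swarm-A")])
base.incremental_update([("swarm-A", "intent", "reconnaissance")])
rebuilt = TripleStore.comprehensive_update([("UAV-1", "member_of", "swarm-A")])
```

A production data layer would additionally check source reliability and data consistency before adding or updating entities, relations and attribute values, as the text notes.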
The knowledge graph constructed in the step 3 includes knowledge contents that: target/radiation source knowledge, target group knowledge, model knowledge, and evaluation and decision knowledge;
the knowledge of the target/radiation source mainly comes from expert knowledge and knowledge extracted from input data of the active and passive sensors, and comprises knowledge of batch numbers, attributes, related trails and motion trail of the target and the radiation source;
the target group knowledge is derived from expert knowledge and mainly comprises a cooperative relationship, task division and combat intention;
the knowledge extraction and processing are developed based on Skelet method, remote supervision method, supervised learning algorithm, unsupervised learning algorithm and label propagation algorithm.
Step 3, introducing reinforcement learning into the knowledge graph to realize relationship reasoning, and establishing a knowledge graph relationship reasoning model based on reinforcement learning for graph construction under the condition of lack of specific scene data;
in the knowledge graph relation reasoning model based on reinforcement learning, the task goal of relation reasoning is to find a reliable prediction path between an entity pair; the path-search problem is cast as a sequential decision problem solved by a reinforcement-learning agent. The knowledge graph environment is modeled as a Markov decision process: at each step the agent obtains a state and a reward from its interaction with the environment, and through a policy network selects the most likely relation to extend the reasoning path.
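A toy sketch of this path-search formulation: a seeded random rollout policy stands in for the trained policy network, and Monte-Carlo search keeps the shortest rewarded relation path. The graph, entities and relation names are invented for illustration:

```python
import random

# Toy knowledge graph: entity -> list of (relation, next_entity) edges.
GRAPH = {
    "UAV-1": [("member_of", "swarm-A"), ("emits", "sig-X")],
    "swarm-A": [("commanded_by", "station-B"), ("intent", "recon")],
    "sig-X": [("decoded_at", "station-B")],
    "station-B": [],
    "recon": [],
}

def rollout(start, target, rng, max_steps=4):
    """One episode: the agent walks relations; reward 1 if it reaches target."""
    state, path = start, []
    for _ in range(max_steps):
        edges = GRAPH.get(state, [])
        if not edges:
            break
        relation, state = rng.choice(edges)  # random policy stands in for a policy net
        path.append(relation)
        if state == target:
            return 1.0, path  # terminal reward on reaching the tail entity
    return 0.0, path

def best_path(start, target, episodes=50, seed=0):
    """Monte-Carlo search: keep the shortest rewarded path over many rollouts."""
    rng = random.Random(seed)
    best = None
    for _ in range(episodes):
        reward, path = rollout(start, target, rng)
        if reward > 0 and (best is None or len(path) < len(best)):
            best = path
    return best

path = best_path("UAV-1", "station-B")
```

In the actual model the per-step relation choice would come from a trained policy network rather than uniform sampling, which is what makes reasoning possible when scene-specific data is scarce.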
The invention has the following beneficial effects:
the networking radar, the distributed spectrum passive monitoring equipment, the mobile multispectral sensing equipment and the mobile spectrum monitoring equipment cooperatively perform multi-domain cooperative detection to realize comprehensive situation identification and presentation of cluster targets; the multi-source heterogeneous radar, spectrum and multi-spectrum information are fused by adopting a joint convolution self-coding deep learning network; and constructing and knowledge reasoning the knowledge graph of the group target, wherein the knowledge graph comprises the processes of source data acquisition, information extraction, knowledge fusion, knowledge processing and knowledge base updating. The advantages of different sensing modes such as radar, spectrum monitoring equipment, multispectral equipment information and the like in different distances, different directions, different granularity and other dimensions are combined, and the limitation of single-domain detection is broken through various modes such as cascading, cooperation, fusion and the like. In addition, the invention has intention recognition capability facing the unmanned aerial vehicle group target.
Drawings
FIG. 1 is a view of a system use scenario of the present invention;
FIG. 2 is a schematic diagram of the range of a radar, a mobile multispectral sensing device, and a mobile spectrum monitoring device;
FIG. 3 is a system architecture of the present invention;
FIG. 4 is a schematic diagram showing the association of the devices and the extracted information of the system of the present invention;
FIG. 5 is a schematic diagram of the cooperative detection of devices of the system of the present invention;
FIG. 6 is a schematic diagram of a joint convolution self-coding deep learning network fusion of the present invention;
FIG. 7 is a schematic diagram of knowledge graph construction and knowledge reasoning according to the present invention.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The air-ground combined multi-domain detection system for unmanned aerial vehicle group targets is used in the scenario shown in fig. 1. The detection target is an enemy unmanned aerial vehicle group, i.e. a swarm formed by a plurality of unmanned aerial vehicles, which generally has group control capability and partial autonomous control capability. The system comprises a networking radar, distributed spectrum passive monitoring equipment, mobile multispectral sensing equipment and mobile spectrum monitoring equipment;
the networking radar and the distributed spectrum passive monitoring equipment are configured on a ground monitoring station;
the mobile multispectral sensing device and the mobile spectrum monitoring device are arranged on the unmanned aerial vehicle for approaching reconnaissance;
the networking radar is used for cooperating with the distributed spectrum passive monitoring equipment to track and detect targets of unmanned aerial vehicle groups in the detection range of any radar equipment in the networking radar and identify movement intention;
the mobile multispectral sensing device and the mobile spectrum monitoring device are used for performing approaching reconnaissance on an unmanned aerial vehicle group target along with the unmanned aerial vehicle under the guidance of the networking radar;
the system coordinates and processes multi-domain information based on radar information, spectrum information and multispectral information returned by the radar, the mobile multispectral sensing equipment and the mobile spectrum monitoring equipment, so as to realize tracking monitoring and intention recognition of the unmanned aerial vehicle group target.
The system comprises a networking radar and distributed spectrum passive monitoring equipment at the ground end, together with mobile multispectral sensing equipment and mobile spectrum monitoring equipment configured on the unmanned aerial vehicle. The radar equipment detects by actively emitting electromagnetic waves and has the longest detection range; after the enemy unmanned aerial vehicle cluster is detected, the ground distributed spectrum passive monitoring equipment is guided to monitor the activity situation of the cluster target, and simultaneously the friendly unmanned aerial vehicle group carrying the mobile multispectral sensing equipment and the mobile spectrum monitoring equipment is guided to perform approaching reconnaissance on the enemy unmanned aerial vehicle group target;
according to the system, radar echo information received by the radar, information of the communication link and the command link of the target of the enemy unmanned aerial vehicle monitored by the passive frequency spectrum monitoring equipment and multiband spectrum information are cooperatively processed, so that the target of the enemy unmanned aerial vehicle is tracked with high precision and intention recognition is performed.
In the working process of the multi-domain collaborative detection system for the unmanned aerial vehicle group target shown in fig. 1, when the networking radar detects the unmanned aerial vehicle group target, the frequency spectrum monitoring equipment is guided to monitor, and meanwhile, the unmanned aerial vehicle group is guided to approach reconnaissance.
The networking radar consists of a plurality of radars capable of actively transmitting electromagnetic waves, and each radar is responsible for detecting a part of fixed airspace.
The radar actively radiates electromagnetic waves and has the furthest detection distance, so that targets of the enemy cluster unmanned aerial vehicle can be discovered firstly.
When a certain radar discovers the unmanned aerial vehicle clusters, the rest radars keep detecting the responsible airspace so as to keep the detecting capability of a plurality of unmanned aerial vehicle clusters.
Meanwhile, the ground-end spectrum monitoring equipment, namely the distributed spectrum passive monitoring equipment, is guided to monitor the communication signals and command signals of the enemy unmanned aerial vehicle cluster, and together with the networking radar it keeps high-precision tracking of the cluster.
The ground-end frequency spectrum monitoring equipment, namely the distributed frequency spectrum passive monitoring equipment, is provided with an antenna array, and each array element is an omni-directional passive receiving array element. The equipment only opens a single antenna channel at ordinary times, searches and detects a protection airspace by 360 degrees, and starts all channel antennas after detecting communication signals of unmanned aerial vehicle group targets or after receiving guidance from a networking radar, so as to jointly receive communication signals of the unmanned aerial vehicle group targets, and then carries out direction of arrival estimation on the communication signals to obtain pitch angle and azimuth angle information of the unmanned aerial vehicle group; and cooperates with networking radar to keep track of enemy unmanned aerial vehicle.
After the networking radar detects the target, the unmanned aerial vehicle group equipped with the mobile multispectral sensing device and the mobile spectrum monitoring device also needs to be guided to perform approaching reconnaissance on the enemy unmanned aerial vehicle group target.
When the distance between the target and the equipment is smaller than 4.5 km, the radar equipment tracks and identifies the target; when the distance is smaller than 2 km, the mobile multispectral sensing equipment and the radar work cooperatively to track and identify the target; when the distance is smaller than 500 m, the mobile spectrum monitoring equipment and the radar work cooperatively to track and identify the target. Specifically:
as shown in fig. 2, the distance threshold at which the radar first finds a target is 5 km; within 4.5 km, the networking radar can perform stable high-precision tracking and movement-intention recognition of the enemy unmanned aerial vehicle group target, and at this point it can reliably guide the unmanned aerial vehicle group equipped with the mobile multispectral sensing device and the mobile spectrum monitoring device to perform approaching reconnaissance on the enemy unmanned aerial vehicle group target.
When the unmanned aerial vehicle group closes to a distance of 2 km, the airborne spectrum sensing equipment, namely the mobile spectrum monitoring equipment, can estimate the direction of arrival from the intercepted communication signals sent by the unmanned aerial vehicle group target and feed the estimate back to the networking radar for cooperative tracking. Meanwhile, the unmanned aerial vehicle group equipped with the mobile multispectral sensing device and the mobile spectrum monitoring device is guided further toward the enemy unmanned aerial vehicle group.
When the unmanned aerial vehicle group advances to a distance of 500 m, the airborne multiband spectral device, namely the mobile multispectral sensing device, starts to sense the enemy unmanned aerial vehicle group target: it captures multispectral video through composite sampling, fuses the multispectral video data, reconstructs the spectral signal, and then compares it against a spectrum database with a machine learning algorithm to identify the intention characteristics of the unmanned aerial vehicle group. The processed features are meanwhile uploaded to the networking radar to realize cooperative processing of multi-dimensional, multi-domain information.
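The range-gated cascade in this embodiment can be summarized as a simple lookup. The thresholds follow the step-by-step description above (radar inside 4.5 km, the spectrum monitor joining at 2 km, the multispectral sensor joining at 500 m); the sensor identifiers are hypothetical labels, not names from the patent:

```python
def active_sensors(distance_km):
    """Sensors engaged at a given target range in the described cascade."""
    sensors = []
    if distance_km < 4.5:
        # Networking radar performs stable high-precision tracking inside 4.5 km.
        sensors.append("networking_radar")
    if distance_km <= 2.0:
        # Airborne spectrum monitor intercepts communications from 2 km.
        sensors.append("mobile_spectrum_monitor")
    if distance_km <= 0.5:
        # Airborne multispectral sensor images the swarm from 500 m.
        sensors.append("mobile_multispectral")
    return sensors
```

For example, at 3 km only the networking radar is tracking, while at 300 m all three modes contribute to the cooperative multi-domain processing.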
Fig. 3 is a schematic diagram of the system of the present invention, showing the mechanism of action of networking radar detection, distributed spectrum monitoring, multispectral detection and the synergistic relationship between them.
The networking radar is based on an active phased array system, simultaneously receives a plurality of beams, then realizes tracking of a dense maneuvering target through a refined dense target detection technology and a false alarm suppression technology, and simultaneously compares the target with targets in a strategy library and a false target template library so as to recognize the target intention of the enemy unmanned aerial vehicle group while suppressing the false target.
The distributed spectrum monitoring device does not actively emit electromagnetic waves, but adopts a broadband receiver and an omni-directional antenna to monitor broadband spectrum. And then, positioning the target of the enemy unmanned aerial vehicle group by using a high-resolution positioning algorithm aiming at the dense target, so as to realize high-precision tracking. Meanwhile, according to the spectrum situation and electromagnetic characteristic difference of the unmanned aerial vehicle group target in different intentions, the intentions of the unmanned aerial vehicle group target are identified.
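The patent does not name the high-resolution localization algorithm the passive monitoring equipment uses. As an illustrative stand-in, a subspace method such as MUSIC can estimate directions of arrival from the array snapshots; the following is a minimal sketch for a uniform linear array, assuming half-wavelength element spacing.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """MUSIC pseudo-spectrum over a grid of angles (degrees).

    X: (n_elements, n_snapshots) complex snapshot matrix from the array;
    d: element spacing in wavelengths (assumed half-wavelength here)."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, : M - n_sources]            # noise-subspace eigenvectors
    p = np.empty(grid.size)
    for i, th in enumerate(np.deg2rad(grid)):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))  # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)    # spectrum peak at DOA
    return grid, p
```

Peaks of the returned pseudo-spectrum indicate the estimated arrival directions, which can then be fed back to the networking radar for cooperative tracking.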
And the mobile multispectral sensing equipment shoots spectral videos of a plurality of wave bands, and then utilizes a spectrum composite sampling and signal reconstruction technology to realize multispectral data fusion. And meanwhile, the convolutional neural network is trained in advance based on the spectrum database, so that the hidden target is monitored and tracked.
Fig. 4 is a schematic diagram showing association between a device and extracted information in a multi-domain cognitive collaborative detection architecture, wherein a radar acquires imaging and position information, a spectrum device acquires target communication spectrum and position information, and a multi-spectrum device acquires form and quantity information.
The air-ground combined multi-domain detection method for the unmanned aerial vehicle group target, which is realized by the system, comprises the following steps:
step 1, carrying out multi-domain collaborative detection on networking radar, distributed spectrum passive monitoring equipment, mobile multispectral sensing equipment and mobile spectrum monitoring equipment in a collaborative manner, so as to realize comprehensive situation identification and presentation of cluster targets;
the invention combines the wide-area detection advantage of the radar, the hidden advantage of the spectrum sensing equipment and the acquisition capability of the multispectral sensing local airspace refined information, and breaks through the key technologies of cooperative target detection, cooperative tracking and cooperative identification.
The radar is based on an adaptive processing architecture, and a radar detection and identification database is constructed by adopting networking radar, a digital active phased array system and a receiving simultaneous multi-beam technology. And a new signal processing and data processing architecture is adopted, so that the dense weak target detection capability is improved.
The frequency spectrum sensing equipment adopts low-cost wide-area coverage distributed frequency spectrum passive monitoring equipment and aerial unmanned aerial vehicle-mounted mobile frequency spectrum monitoring equipment, and through wide-area distributed sensing and unmanned aerial vehicle-mounted dynamic sensing, the frequency spectrum sensing equipment is fused with ground terminal data to construct an unmanned aerial vehicle image transmission signal and control signal identification database, so that the detection, positioning and identification capability of the cluster unmanned aerial vehicle radiation source signals is improved.
The mobile multispectral sensing equipment adopts a lightweight multispectral camera, is provided with a plurality of independent imagers, is respectively matched with a special optical filter, can enable each imager to receive a spectrum in an accurate wavelength range, forms two distributed equipment in the air, performs wide-area, accurate and multiband reproduction on a near-field unmanned aerial vehicle group target, and completes efficient group target identification by combining the intelligent fusion and identification technology of the terminal.
The cooperative detection process according to the detection distance is shown in fig. 5.
The networking radar receives target echoes, and the distributed spectrum equipment receives the target's radar echoes, active-detection radar signals, active jamming, and communication and signaling signals; azimuth cooperative detection of the cluster target is then carried out through information and data collaboration;
the networking radar is based on echoes of different targets, and the mobile multispectral sensing equipment is based on multiband spectral signal fusion of the targets to cooperatively identify the forms of the targets with coarse granularity and fine granularity;
the distributed spectrum passive monitoring equipment performs deep analysis on the cluster target radar signals and the communication signaling, the mobile multispectral sensing equipment performs fine recognition on the target form, the distributed spectrum passive monitoring equipment performs behavior analysis and threat assessment on the target in a cooperative mode, and finally, comprehensive situation recognition and presentation of the cluster target are formed based on cooperative processing of the equipment.
Step 2, adopting a joint convolution self-coding deep learning network to fuse the multi-source heterogeneous radar, the frequency spectrum and the multi-spectrum information obtained in the step 1;
compared with a single sensing system, the radar and multispectral heterogeneous information fusion can enhance the survivability of the system, improve the reliability and robustness of the whole system, enhance the credibility of data and improve the final recognition accuracy of group targets.
The invention adopts the joint convolution self-coding deep learning network as shown in fig. 6 to fuse radar, frequency spectrum and multispectral information.
Firstly, a multi-task loss function suitable for image fusion is designed, and a joint convolution self-coding network is used for simultaneously training radar, infrared and polarized images.
And inputting the images to be fused on the trained network, designing a fusion rule according to the characteristics of the redundant characteristics and the complementary characteristics of the images to be fused, realizing fusion of the characteristic layers, and obtaining a fused image after decoding and reconstructing.
The whole network consists of a feature learning stage and a fusion stage.
In the feature learning stage, firstly, an input heterologous image passes through a coding layer to obtain public features and private features of the input heterologous image, and the coding layer of a self-coding network is divided into a public branch and a plurality of private branches.
For common branches, the weights are shared;
for private branches, weights are not shared.
Because of weight sharing, the public branches are forced to learn features common to the plurality of input images, while the private branches learn features that distinguish the different images. The learned private features represent the complementary relationships among the radar, infrared and polarized images, and the public branches tend to learn public features representing the redundant relationship, thereby achieving the purpose of joint learning across the network.
The training process teaches the joint convolution self-coding network to reconstruct the original image. The input data and output data of the network are the same: the coding layer compresses the input data, and the decoding layer reconstructs the compressed data to produce the final output.
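The shared/private branch structure and the reconstruction objective described above can be sketched as follows. This is an illustrative simplification: fully connected layers stand in for the convolutional layers of the actual network, the training loop is omitted, and all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 64, 16                                    # flattened input size, code size (assumed)
W_shared = rng.standard_normal((H, D)) * 0.1     # common branch: one weight set, shared
W_private = {m: rng.standard_normal((H, D)) * 0.1
             for m in ("radar", "infrared", "polarized")}   # one private branch per modality
W_dec = rng.standard_normal((D, 2 * H)) * 0.1    # decoder over [common, private] code

def encode(x, modality):
    common = np.tanh(W_shared @ x)               # redundant (shared) features
    private = np.tanh(W_private[modality] @ x)   # complementary (modality-specific) features
    return common, private

def decode(common, private):
    return W_dec @ np.concatenate([common, private])

def reconstruction_loss(images):
    """Joint multi-task loss: sum of per-modality reconstruction errors."""
    return sum(np.mean((decode(*encode(x, m)) - x) ** 2)
               for m, x in images.items())
```

Minimizing `reconstruction_loss` over all three modalities simultaneously is what forces the shared weights toward redundant features and the private weights toward complementary ones.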
In the fusion stage, radar, infrared and polarized images to be fused are input into a joint convolution self-coding network, and public features and private features of an original image are respectively output through the last layer of the coding layer.
According to the different feature forms, complementary features should be retained as losslessly as possible during fusion: a weighted fusion strategy would attenuate them, whereas a take-maximum fusion strategy retains, without loss, the complementary information at the same position in the original images and achieves a better fusion effect, so the take-maximum strategy is adopted for complementary features. Redundant features, by inspection of their morphology, generally reflect the overall appearance and structural information of the original images rather than complementary information; simply taking the maximum would cause the fused feature values to overflow and mask important features, degrading the fusion effect, so a weighted fusion strategy is adopted, weighting and combining the common features of the two original images at the same position.
The fused public features and private features are combined after passing through a decoding layer, and the fused result can be directly obtained.
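The two fusion rules above (weighted averaging for common/redundant features, element-wise take-maximum for private/complementary features) can be sketched as a minimal feature-level fusion step; the function name and the weight parameter are assumptions.

```python
import numpy as np

def fuse_features(common_a, common_b, private_feats, w=0.5):
    """Fuse encoder outputs per the rules in the text.

    common_a, common_b: common-feature maps of the two source images;
    private_feats: list of private-feature maps, one per modality."""
    fused_common = w * common_a + (1 - w) * common_b     # weighted fusion: redundant features
    fused_private = np.maximum.reduce(private_feats)     # take-maximum: lossless complements
    return fused_common, fused_private
```

The fused common and private features are then passed through the decoding layer to obtain the fused image, as described above.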
And 3, constructing and reasoning knowledge graphs of the group targets.
The construction process of the knowledge graph mainly comprises the processes of source data acquisition, information extraction, knowledge fusion, knowledge processing and the like. The source data mainly comprises data, expert knowledge and environment and task conditions. The data mainly comprises information such as a target batch number and a radar working mode input by an active detection sensor, and information such as a radiation source batch number and a working frequency point input by a passive detection sensor; expert knowledge includes combat level knowledge such as combat effort and attributes; the characteristics of the environment, the task and the like comprise combat tasks, environment information and the like.
The information extraction of the knowledge graph mainly comprises entity extraction, relation extraction and attribute extraction. Entity extraction refers to the identification and extraction of relevant entities from data, which are the most fundamental and critical parts of information extraction. The relation extraction is used for solving the problem of semantic links among entities and is divided into supervised learning extraction and remote supervised learning extraction. Attribute extraction refers to extracting characteristics and properties of related entities in data. The information extraction is mainly carried out from three layers of data, expert knowledge, environment and tasks respectively, and the information extraction is carried out through technologies such as data mining, deep learning and the like.
Knowledge information fusion is embodied in both a concept layer and a data layer. Knowledge fusion of concept layers is mainly expressed as ontology alignment, which refers to a process of determining mapping relations among ontologies such as concepts, relations, attributes and the like. Knowledge fusion of the knowledge data layer is mainly manifested by coreference resolution and entity alignment. The plurality of knowledge bases are combined into a novel knowledge base which is more uniform and dense through ontology alignment, entity alignment and the like.
Through knowledge extraction and knowledge fusion, entities and ontologies are identified and extracted from the information sources, then disambiguated and unified. The resulting associated data are a basic expression of objective facts, but do not yet constitute the knowledge system required by a knowledge graph.
Knowledge processing mainly comprises ontology construction, knowledge reasoning and quality assessment.
The ontology is the semantic basis for entity communication in the knowledge graph, and is mainly represented as a net-like structure composed of points, lines and planes.
The knowledge reasoning is to acquire new knowledge or conclusion through semantic analysis of the triples, and comprises rational reasoning and judgment reasoning.
The quality evaluation evaluates the generated knowledge data and imports the data meeting the standard into a knowledge graph.
The updating of the knowledge base comprises updating of the concept layer and updating of the data layer. The update modes comprise incremental updating and comprehensive updating: incremental updating adds new knowledge to the existing knowledge graph, while comprehensive updating starts from zero and takes all the updated data as input. Updating of the concept layer means that new concepts obtained after data are added must be automatically inserted into the concept layer of the knowledge base. Updating of the data layer refers to adding or updating entity, relation and attribute values, and must consider factors such as the reliability of the data source and the consistency of the data (whether contradictions or redundancy exist).
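The data-layer update rules above (incremental insertion with redundancy and consistency checks) can be sketched with a minimal triple store. The class, relation and entity names are illustrative assumptions, not part of the patent's implementation.

```python
class KnowledgeBase:
    """Minimal triple store illustrating incremental data-layer updates."""

    def __init__(self):
        self.triples = set()     # (head, relation, tail) facts - the data layer
        self.concepts = set()    # concept layer, grown as new entities appear

    def add(self, head, relation, tail, functional=False):
        """Incrementally add one triple.

        Redundant triples are skipped; for a functional (single-valued)
        relation, a second, different value is rejected as a contradiction."""
        if (head, relation, tail) in self.triples:
            return False                                   # redundancy check
        if functional and any(h == head and r == relation
                              for h, r, _ in self.triples):
            raise ValueError("contradictory value for functional relation")
        self.triples.add((head, relation, tail))
        self.concepts.update({head, tail})                 # concept-layer update
        return True
```

A comprehensive update would instead rebuild the store from scratch with all post-update data as input, as the text notes.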
The knowledge content constructed by the knowledge graph mainly comprises target/radiation source knowledge, target group knowledge, model knowledge and evaluation and decision knowledge. The knowledge of the target/radiation source mainly comes from expert knowledge and knowledge extracted from data input by the active and passive sensors, and the data information mainly comprises knowledge of batch numbers, attributes, related trails, motion tracks and the like of the target and the radiation source. The target group knowledge mainly originates from expert knowledge and mainly comprises cooperative relationships, task division, combat intention and the like.
The knowledge extraction and processing methods are mainly developed based on the Skelet method, remote supervision, supervised learning algorithms, unsupervised learning algorithms, label propagation algorithms and the like; for software development, Neo4j is used to construct and store the knowledge graph, TensorFlow serves as the implementation framework for deep learning, and Python and C/C++ are used as the basic programming languages for engineering implementation.
For graph construction under conditions where data for certain specific scenarios are lacking, the invention introduces reinforcement learning into the knowledge graph to realize relation reasoning, providing a multi-hop relation reasoning framework based on reinforcement learning. The reinforcement-learning-based knowledge graph relation reasoning model is shown in fig. 7. The task goal of relation reasoning is to search for reliable prediction paths between entity pairs; by formulating path search as a sequential decision problem solved by a reinforcement learning agent, sequence decisions can be made. The knowledge graph environment is modeled as a Markov decision process: at each step the agent obtains a state and a reward from its interaction with the environment, and learns, through a policy network, to select the most likely relation with which to extend the reasoning path.
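As an illustrative sketch of the reinforcement-learning formulation above, the following models path search over a toy knowledge graph as an MDP with a terminal reward; a tabular value estimate stands in for the policy network. The graph contents, entity names and hyper-parameters are all assumptions made for the sketch.

```python
import random

# Toy knowledge graph: entity -> list of (relation, next_entity) edges (assumed).
GRAPH = {
    "uav_1": [("member_of", "swarm_A"), ("emits", "signal_x")],
    "swarm_A": [("has_intent", "recon"), ("located_in", "zone_3")],
    "signal_x": [("type", "telemetry")],
}

def rollout(start, target, policy, max_hops=3):
    """One episode: walk the graph; reward 1.0 iff the target entity is reached."""
    path, node = [], start
    for _ in range(max_hops):
        edges = GRAPH.get(node, [])
        if not edges:
            break
        rel, nxt = policy(node, edges)
        path.append((node, rel, nxt))
        node = nxt
        if node == target:
            return path, 1.0
    return path, 0.0

def train(start, target, episodes=200, eps=0.2, seed=0):
    """Epsilon-greedy tabular learning over (entity, edge) choices."""
    rng = random.Random(seed)
    Q = {}
    def policy(node, edges):
        if rng.random() < eps:                                 # explore
            return rng.choice(edges)
        return max(edges, key=lambda e: Q.get((node, e), 0.0)) # exploit
    for _ in range(episodes):
        path, reward = rollout(start, target, policy)
        for node, rel, nxt in path:                            # terminal-reward update
            key = (node, (rel, nxt))
            Q[key] = Q.get(key, 0.0) + 0.1 * (reward - Q.get(key, 0.0))
    return Q
```

After training, the edge values learned from `uav_1` favor the multi-hop path `uav_1 → swarm_A → recon`, i.e. the agent has learned which relation to select to extend a reliable reasoning path.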
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the invention without departing from the principles thereof are intended to be within the scope of the invention as set forth in the following claims.

Claims (8)

1. Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system comprises networking radar, distributed spectrum passive monitoring equipment, mobile multi-spectrum sensing equipment and mobile spectrum monitoring equipment, and is characterized in that:
the networking radar and the distributed spectrum passive monitoring equipment are configured on a ground monitoring station;
the mobile multispectral sensing device and the mobile spectrum monitoring device are arranged on the unmanned plane for approaching reconnaissance;
the networking radar is used for cooperating with the distributed spectrum passive monitoring equipment to track and detect targets of unmanned aerial vehicle groups in the detection range of any radar equipment in the networking radar and identify movement intention;
the mobile multispectral sensing device and the mobile spectrum monitoring device are used for performing approaching reconnaissance on an unmanned aerial vehicle group target along with the unmanned aerial vehicle under the guidance of the networking radar;
the system coordinates and processes multi-domain information based on radar information, spectrum information and multispectral information returned by radar, mobile multispectral sensing equipment and mobile spectrum monitoring equipment, so as to realize tracking monitoring and intention recognition of unmanned aerial vehicle group targets;
the distributed spectrum passive monitoring equipment is provided with an antenna array, and each array element is an omnidirectional passive receiving array element;
the distributed spectrum passive monitoring equipment only opens a single antenna channel at ordinary times, searches and detects a protection airspace by 360 degrees, and opens all channel antennas after detecting communication signals of unmanned aerial vehicle group targets or after receiving guidance from a networking radar, so as to jointly receive communication signals of the unmanned aerial vehicle group targets, and then carries out direction of arrival estimation on the communication signals to obtain pitch angle and azimuth angle information of the unmanned aerial vehicle group; and cooperates with networking radar to keep track of enemy unmanned aerial vehicle;
the air-ground combined multi-domain detection method for the unmanned aerial vehicle group target, which is realized according to the system, comprises the following steps:
step 1, carrying out multi-domain collaborative detection on networking radar, distributed spectrum passive monitoring equipment, mobile multispectral sensing equipment and mobile spectrum monitoring equipment in a collaborative manner, so as to realize comprehensive situation identification and presentation of cluster targets;
step 2, adopting a joint convolution self-coding deep learning network to fuse the multi-source heterogeneous radar, the frequency spectrum and the multi-spectrum information obtained in the step 1;
step 3: and constructing and knowledge reasoning the knowledge graph of the group target, wherein the knowledge graph comprises the processes of source data acquisition, information extraction, knowledge fusion, knowledge processing and knowledge base updating.
2. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 1, wherein:
the networking radar comprises a plurality of radar devices which actively emit electromagnetic waves, and each radar device is responsible for detecting a part of fixed airspace; the radar equipment actively radiates electromagnetic waves, has a system maximum detection distance threshold value, and is used for detecting targets of unmanned aerial vehicles of an enemy cluster at first, and when a certain radar detects the unmanned aerial vehicle of the enemy cluster, other radars keep detecting a responsible airspace so as to keep the detection capability of a plurality of unmanned aerial vehicle clusters; meanwhile, the distributed spectrum passive monitoring equipment is guided to monitor communication signals and command signals of the enemy unmanned aerial vehicle cluster, and high-precision tracking of the enemy unmanned aerial vehicle cluster is kept together with the networking radar.
3. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 1, wherein:
the step 1 comprises the following steps:
the networking radar receives a target echo, the distributed spectrum equipment receives a target radar echo, an active detection radar, active interference, communication and signaling, and azimuth collaborative detection of a cluster target is carried out through information and data collaboration;
the networking radar is based on echoes of different targets, and the mobile multispectral sensing equipment is based on multiband spectral signal fusion of the targets to cooperatively identify the forms of the targets with coarse granularity and fine granularity;
the distributed spectrum passive monitoring equipment performs deep analysis on the cluster target radar signals and the communication signaling, the mobile multispectral sensing equipment performs fine recognition on the target form, the distributed spectrum passive monitoring equipment performs behavior analysis and threat assessment on the target in a cooperative mode, and finally, comprehensive situation recognition and presentation of the cluster target are formed based on cooperative processing of the equipment.
4. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 1, wherein:
in the step 2, firstly, a multi-task loss function suitable for image fusion is designed, and a joint convolution self-coding network is used for simultaneously training radar, infrared and polarized images; and then inputting the images to be fused on the trained network, designing a fusion rule according to the characteristics of the redundancy characteristics and the complementary characteristics of the images to be fused, realizing fusion of the characteristic layers, and obtaining a fused image after decoding and reconstructing.
5. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 4, wherein:
the joint convolution self-coding deep learning network comprises a feature learning stage and a fusion stage;
in the feature learning stage, the input heterologous image is subjected to a coding layer to obtain public features and private features of the heterologous image, the coding layer is divided into a public branch and a plurality of private branches, weights are shared for the public branches and are not shared for the private branches; the private branches distinguish different images through learning features, the learned private features represent complementary relations among radar, infrared and polarized images, and the public branches are used for learning public features representing redundant relations;
the radar, infrared and polarized images to be fused are input into a joint convolution self-coding deep learning network, and the public characteristics and the private characteristics of the original image are respectively output through the last layer of the coding layer;
in the fusion stage, a weighted fusion strategy is selected and adopted according to different feature forms, the features of the public features of the two original images at the same position are subjected to weighted fusion, and the fused public features and private features are subjected to a decoding layer and then are combined to obtain a fused result.
6. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 1, wherein:
step 3, the source data comprise data, expert knowledge and environment and task conditions;
the data comprise target batch numbers, radar working mode information, and radiation source batch numbers and working frequency point information;
expert knowledge includes combat effort and attribute knowledge;
the environment and task conditions comprise combat tasks and environment information;
the information extraction is respectively carried out from three layers of data, expert knowledge, environment and task, and the information extraction is carried out through data mining and deep learning and is divided into entity extraction, relation extraction and attribute extraction;
entity extraction refers to identifying and extracting related entities from data;
the relation extraction refers to extracting semantic link relation among entities, and is divided into supervised learning extraction and remote supervised learning extraction;
extracting attributes, namely extracting the characteristics and properties of related entities in the data;
the knowledge fusion comprises knowledge fusion of a concept layer and a data layer;
knowledge fusion of concept layers comprises ontology alignment, which means a process of determining mapping relations among ontologies such as concepts, relations, attributes and the like;
knowledge fusion of the knowledge data layer includes coreference resolution and entity alignment;
the knowledge processing comprises ontology construction, knowledge reasoning and quality assessment;
the ontology construction refers to the semantic basis of entity communication in the knowledge graph, presented as a net-like structure composed of points, lines and planes;
the knowledge reasoning is to acquire new knowledge or conclusion through semantic analysis of the triples, and comprises rational reasoning and judgment reasoning;
the quality evaluation refers to evaluating the generated knowledge data and importing the data meeting the standard into a knowledge graph;
the updating of the knowledge base comprises updating of a concept layer and updating of a data layer, wherein the update modes comprise incremental updating and comprehensive updating, incremental updating representing adding new knowledge to the existing knowledge graph, and comprehensive updating representing starting from zero and taking all updated data as input;
the updating of the concept layer means that new concepts are obtained after the data is newly added, and the new concepts are automatically added into the concept layer of the knowledge base;
updating the data layer refers to adding or updating entity, relationship and attribute values in consideration of reliability of data sources and consistency of data.
7. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 1, wherein:
the knowledge graph constructed in the step 3 comprises the following knowledge contents: target/radiation source knowledge, target group knowledge, model knowledge, and evaluation and decision knowledge;
the knowledge of the target/radiation source mainly comes from expert knowledge and knowledge extracted from input data of the active and passive sensors, and comprises knowledge of batch numbers, attributes, related trails and motion trail of the target and the radiation source;
the target group knowledge is derived from expert knowledge and mainly comprises a cooperative relationship, task division and combat intention;
the knowledge extraction and processing are developed based on Skelet method, remote supervision method, supervised learning algorithm, unsupervised learning algorithm and label propagation algorithm.
8. The unmanned aerial vehicle group target-oriented air-ground joint multi-domain detection system according to claim 1, wherein:
step 3, introducing reinforcement learning into the knowledge graph to realize relationship reasoning, and establishing a knowledge graph relationship reasoning model based on reinforcement learning for graph construction under the condition of lack of specific scene data;
in the knowledge graph relation reasoning model based on reinforcement learning, the task goal of relation reasoning is to search a reliable prediction path between entity pairs, and sequence decision is carried out in the process of searching a path problem through the reinforcement learning agent; the knowledge graph environment is modeled by a Markov decision process, and in each step, the agent obtains states and rewards in the interaction process with the environment, and the agent selects a most likely relation to expand the reasoning path through a strategy network.
CN202211244116.0A 2022-10-12 2022-10-12 Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method Active CN115542318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211244116.0A CN115542318B (en) 2022-10-12 2022-10-12 Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method


Publications (2)

Publication Number Publication Date
CN115542318A CN115542318A (en) 2022-12-30
CN115542318B true CN115542318B (en) 2024-01-09

Family

ID=84733483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211244116.0A Active CN115542318B (en) 2022-10-12 2022-10-12 Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method

Country Status (1)

Country Link
CN (1) CN115542318B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116541472A (en) * 2023-03-22 2023-08-04 麦博(上海)健康科技有限公司 Knowledge graph construction method in medical field
CN116299424B (en) * 2023-05-10 2023-07-18 武汉能钠智能装备技术股份有限公司四川省成都市分公司 Unmanned aerial vehicle identification system and method
CN116359836B (en) * 2023-05-31 2023-08-15 成都金支点科技有限公司 Unmanned aerial vehicle target tracking method and system based on super-resolution direction finding
CN116718198B (en) * 2023-08-10 2023-11-03 湖南璟德科技有限公司 Unmanned aerial vehicle cluster path planning method and system based on time sequence knowledge graph
CN116893413B (en) * 2023-09-11 2023-12-01 中国电子科技集团公司信息科学研究院 Distributed real-aperture airborne early warning radar detection system and method
CN116893414B (en) * 2023-09-11 2023-12-05 中国电子科技集团公司信息科学研究院 Unmanned aerial vehicle cluster-mounted radar detection system and method
CN117236448B (en) * 2023-11-15 2024-02-09 中国人民解放军空军预警学院 Radar intention reasoning and model training method based on time sequence knowledge graph

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7773204B1 (en) * 2006-07-20 2010-08-10 United States Of America As Represented By The Secretary Of The Navy Apparatus and method for spatial encoding of a search space
CN107561593A (en) * 2017-05-12 2018-01-09 Guangzhou Liantong Holdings Co., Ltd. Composite detection device for small unmanned aircraft
CN109981192A (en) * 2019-04-08 2019-07-05 Nanjing University of Aeronautics and Astronautics Spectrum monitoring system and method for unauthorized ("black-flying") unmanned aerial vehicles in an airspace
CN110097528A (en) * 2019-04-11 2019-08-06 Jiangnan University Image fusion method based on a joint convolutional autoencoder network
CN110596698A (en) * 2019-07-25 2019-12-20 Yao Bichen Active-passive integrated unmanned aerial vehicle detection and identification technology
CN110992298A (en) * 2019-12-02 2020-04-10 Shenzhen Weiteshi Technology Co., Ltd. Genetic algorithm-based radiation source target identification and information analysis method
CN111159249A (en) * 2019-12-13 2020-05-15 Shenzhen Weiteshi Technology Co., Ltd. Target identification method, device and system based on knowledge graph, and storage medium
WO2020180844A1 (en) * 2019-03-05 2020-09-10 The Procter & Gamble Company Wireless measurement of human product interaction
CN112394382A (en) * 2020-10-14 2021-02-23 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences Low-slow-small target tracking device and method resistant to long-duration occlusion
CN112797846A (en) * 2020-12-22 2021-05-14 709th Research Institute of China Shipbuilding Industry Corporation Unmanned aerial vehicle prevention and control method and system
CN113156417A (en) * 2020-12-11 2021-07-23 Xi'an Tianhe Defense Technology Co., Ltd. Anti-unmanned aerial vehicle detection system and method, and radar equipment
CN114219870A (en) * 2021-12-08 2022-03-22 Nanjing University of Aeronautics and Astronautics Auxiliary image generation method and system for super-resolution reconstruction of unmanned aerial vehicle hyperspectral images
CN114489148A (en) * 2021-12-30 2022-05-13 China Academy of Aerospace Systems Science and Engineering Anti-unmanned aerial vehicle system based on intelligent detection and electronic countermeasures
CN114508966A (en) * 2021-11-17 2022-05-17 CASIC Microelectronics System Research Institute Co., Ltd. Ground-air combined multi-level interception accompanying defense system
CN114729804A (en) * 2019-08-30 2022-07-08 Teledyne FLIR Commercial Systems, Inc. Multispectral imaging system and method for navigation
CN115019204A (en) * 2022-06-07 2022-09-06 Beijing Gengtu Technology Co., Ltd. Knowledge graph battlefield target identification method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113892127A (en) * 2019-05-17 2022-01-04 Magic Leap, Inc. Method and apparatus for corner detection using a neural network and a corner detector


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on the Architecture of the "Bee-Armor Integrated" Combat System; Fan Yanping et al.; Ship Electronic Engineering (No. 5); 10-14+59 *
Design and Application of a Practical Decision-Making and Commanding Platform for Emergency Rescue; Xiaohui Huang et al.; 2021 IEEE 12th International Conference on Software Engineering and Service Science (ICSESS); 105-108 *
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark; Brian K. S. Isaac-Medina et al.; 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW); 1223-1232 *
Comparative Technical Analysis of Detection and Countermeasure Systems for "Low-Slow-Small" Unmanned Aerial Vehicles; Liu Zhuo et al.; China New Technologies and Products (No. 6); 24-26 *

Also Published As

Publication number Publication date
CN115542318A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN115542318B (en) Unmanned aerial vehicle group target-oriented air-ground combined multi-domain detection system and method
CN110175186B (en) Intelligent ship environment threat target sensing system and method
CN112797846B (en) Unmanned aerial vehicle prevention and control method and system
Hall et al. Mathematical techniques in multisensor data fusion
CN110084094B (en) Unmanned aerial vehicle target identification and classification method based on deep learning
Llinas et al. An introduction to multi-sensor data fusion
US20220094710A1 (en) Detection of cyber attacks targeting avionics systems
Sengupta et al. A DNN-LSTM based target tracking approach using mmWave radar and camera sensor fusion
US9448304B2 (en) Ground moving target indicator (GMTI) radar that converts radar tracks to directed graphs (DG), and creates weighted DGs aligned with and superimposed on digital maps
CN103076605A (en) Secondary surveillance radar track extraction method for multi-mode polling and S-mode roll-call interrogation
CN113156417B (en) Anti-unmanned aerial vehicle detection system, method and radar equipment
EP3722997B1 (en) Method and apparatus for automatic detection of antenna site conditions
Wu et al. LiDAR-aided mobile blockage prediction in real-world millimeter wave systems
CN112505050A (en) Airport runway foreign matter detection system and method
Najarro et al. Fundamental limitations and state-of-the-art solutions for target node localization in WSNs: a review
CN116972694A (en) Countering method and system for unmanned aerial vehicle cluster attacks
Akter et al. An explainable multi-task learning approach for RF-based UAV surveillance systems
Huang et al. STIF: A spatial-temporal integrated framework for end-to-end micro-UAV trajectory tracking and prediction with 4D MIMO radar
WO2023166146A1 (en) Use of one or more observation satellites for target identification
US20230143374A1 (en) Feature extraction, labelling, and object feature map
Jdidi et al. Unsupervised Disentanglement for Post-Identification of GNSS Interference in the Wild
Petrov et al. Identification of radar signals using neural network classifier with low-discrepancy optimisation
CN114609597A (en) Detection-jamming integrated radar waveform design method for unmanned aerial vehicle cluster detection fusion
Ghosh et al. A hybrid CNN-transformer architecture for semantic segmentation of radar sounder data
Mareï et al. The regionalization of maritime networks: Evidence from a comparative analysis of maritime basins

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant