CN110969637B - Multi-threat target reconstruction and situation awareness method based on a generative adversarial network - Google Patents

Multi-threat target reconstruction and situation awareness method based on a generative adversarial network

Info

Publication number
CN110969637B
Authority
CN
China
Prior art keywords
threat
target
layer
image
reconstruction
Prior art date
Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Active
Application number
CN201911210172.0A
Other languages
Chinese (zh)
Other versions
CN110969637A (en)
Inventor
Xia Chunqiu (夏春秋)
Current Assignee (listed assignees may be inaccurate; Google has not performed a legal analysis)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd
Priority to CN201911210172.0A
Publication of CN110969637A
Application granted
Publication of CN110969637B

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 2207/10048 — Image acquisition modality: infrared image
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20221 — Image combination: image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

The invention discloses a multi-threat-target reconstruction and situation awareness method based on a generative adversarial network (GAN). The method collects the navigation states of multiple threat targets in a monitored area, stamps them with timestamps, and forms time-synchronized raw scene point-cloud data, an infrared image, and a visible-light image; fuses the infrared and visible-light images, and reconstructs the three-dimensional scene and targets by fusing the images with the point-cloud data; uses multiple generative adversarial networks to output simulated tracks and reconstructed targets, obtaining the simulated track segment corresponding to each threat target; and updates the friendly targets' monitoring search areas according to threat-degree-aware variable weights, outputs the threat degree of each threat target with an annealing algorithm, and realizes fire allocation. By obtaining the targets' fused spectral information together with their spatial pose and position, the invention improves robustness, avoids misrecognition caused by occlusion, and tracks, monitors, and jams targets continuously and accurately; fire is allocated to threat targets according to distance-threat-degree variable weights within the friendly monitoring range, and the optimal solution is searched for to improve the intelligent grouping of large-scale clusters.

Description

Multi-threat target reconstruction and situation awareness method based on a generative adversarial network
Technical Field
The invention relates to the fields of artificial intelligence, multi-sensor measurement, and multi-threat-target situation awareness, and in particular to a multi-threat-target reconstruction and situation awareness method based on a generative adversarial network.
Background
Battlefield target threat-degree assessment provides a highly credible basis for weapon-system fire allocation, effectively shortens the time a commander needs for situation awareness and strategy formulation, and improves combat efficiency and quality. Weapon-system fire allocation is the process of assigning a certain number of friendly equipment units of given types to each enemy equipment unit, after comprehensively considering factors such as the combat tasks being executed, the situation, and the performance of both sides' combat equipment.
The current mainstream algorithms, such as the ant colony and artificial bee colony algorithms, suffer from a lack of early pheromone, a tendency to fall into local optima, and slow convergence on large-scale problems, so they cannot be applied directly to air-combat fire allocation. Multi-attribute decision methods quantize several qualitative or quantitative target attributes that affect the threat degree and then combine a weight vector with a combination rule to compute a comprehensive evaluation value for each target. However, target threat-degree assessment and fire allocation form a dynamic, multi-variable, multi-constraint combinatorial optimization problem with adversarial, active, and uncertain characteristics, which is difficult to solve with traditional methods.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a multi-threat-target reconstruction and situation awareness method based on a generative adversarial network: the navigation states of multiple threat targets in the monitored area are collected and timestamped to form time-synchronized raw scene point-cloud data, an infrared image, and a visible-light image; the infrared and visible-light images are fused, and three-dimensional scene and target reconstruction is realized by fusing the images with the point-cloud data; multiple generative adversarial networks output simulated tracks and reconstructed targets, yielding the simulated track segment corresponding to each threat target; and the friendly targets' monitoring search areas are updated according to threat-degree-aware variable weights, the threat degree of each threat target is output with an annealing algorithm, and fire allocation is realized.
By obtaining the targets' fused spectral information together with their spatial pose and position information, the invention improves robustness, avoids misrecognition caused by occlusion, and tracks, monitors, and jams targets continuously and accurately; fire is allocated to threat targets according to distance-threat-degree variable weights within the friendly monitoring range, and the optimal solution is searched for to improve the intelligent grouping of large-scale clusters.
To achieve the above object, the invention provides a multi-threat-target reconstruction and situation awareness method based on a generative adversarial network, which mainly includes:
collecting the navigation states of multiple threat targets in the monitored area, stamping timestamps, and forming time-synchronized raw scene point-cloud data, an infrared image, and a visible-light image;
fusing the infrared image and the visible-light image, and realizing three-dimensional scene and target reconstruction from the images and the point-cloud data;
constructing generative adversarial networks, outputting simulated tracks and simulated reconstructed targets, and obtaining the simulated track segment corresponding to each threat target;
updating the friendly targets' monitoring search areas according to threat-degree-aware variable weights, outputting the threat degree of each threat target through an annealing algorithm, and allocating fire to the targets.
Fusing the infrared image and the visible-light image specifically comprises:
defogging the extracted infrared and visible-light images, filtering small noise points with image binarization, and extracting the contour regions of multiple edges with an adaptive edge algorithm to obtain the maximum contour of each target; the targets include enemy targets and the surrounding scene;
detecting the feature points of the infrared and visible-light images and the contour points of the targets within them, and fitting when a contour region is larger than a preset threshold, to obtain the preprocessed infrared and visible-light images;
performing a K-level NSCT (nonsubsampled contourlet transform) decomposition on the preprocessed infrared and visible-light images, and constructing the average-gradient and Q-factor matrices to obtain the low-frequency and high-frequency subband coefficients;
processing the image high-frequency information: performing PCNN (pulse-coupled neural network) processing on the corresponding high-frequency subband coefficients, taking the subband coefficients as the PCNN's external input excitation, and computing the highest decomposition level K and the remaining K−1 levels separately;
processing the image low-frequency information: for the low-frequency subband coefficients, giving higher weight during fusion to pixels in image regions with high energy, normalizing the variance, comparing it with a preset variance threshold, and fusing according to the corresponding rule;
performing the inverse NSCT, reconstructing the fused low-frequency coefficient and each high-frequency subband coefficient to obtain the fused image, taking the central coordinate of the two images as the target's position at that moment, and then mapping the target center of the visible-light image into the infrared image to obtain the visible-infrared image, further fusing the azimuth and angle of the target region.
Realizing the three-dimensional scene and target reconstruction specifically comprises:
creating a three-dimensional voxel grid over the input point-cloud data, searching the coordinate values of all points, finding the maxima X_max, Y_max, Z_max and minima X_min, Y_min, Z_min along the three coordinate axes, and determining the side length L of the bounding cube grid; if L is greater than a preset side length L_0, the cube is divided into several voxel grids along the X, Y, Z directions;
presetting a point count N_0, comparing in turn the number n of points inside each voxel grid with this preset point-count threshold, and deleting any voxel grid whose point count is smaller than the preset value;
comparing again the side length L_i of each small cube grid with the preset side length L_0: if L_i > L_0, it is subdivided further into several small cubes; if L_i ≤ L_0, the points in the voxel grid are traversed and the other points in the grid are approximately replaced by its center-of-gravity point, computed from
d_i = √((x_i − x_c)² + (y_i − y_c)² + (z_i − z_c)²), 0 ≤ i ≤ n,
where d_i is the distance from point (x_i, y_i, z_i) to the region center (x_c, y_c, z_c) of the voxel grid; the point attaining the minimum of d_i is taken as the center of gravity;
d_max = max{d_j}, 0 ≤ j ≤ n − 1,
where d_j is the distance from point (x_j, y_j, z_j) to the center of gravity (x_0, y_0, z_0) of the voxel grid's region, d_max is the maximum of these distances, and the corresponding point is the farthest point sought;
the center-of-gravity points (x_0, y_0, z_0) are preserved, erroneous point pairs are removed with RANSAC, and all voxel grids are processed to obtain the filtered point-cloud data; a threshold τ is set, and if τ ≤ d_max the points satisfying the d_j criterion are kept along with the center-of-gravity point, otherwise only the center-of-gravity point is kept; the retained points are the center-of-gravity points and the points whose distance is smaller than the maximum distance value.
Further, the average curvature of the point cloud is calculated from the retained points, and the voxel with the minimum average curvature is taken as the seed voxel for region growing to form supervoxels; accurate extraction of the target contour feature points and localization of the feature regions are realized by estimating the average curvature of the supervoxels' extrinsic curved geometric features.
Constructing the generative adversarial networks specifically comprises two generator networks, which obtain several pieces of simulated track data corresponding to the simulated reconstructed targets. The first generator network takes the point-cloud supervoxels and the visible-infrared image as input; the GAN is trained until the generator produces simulated target data with the same distribution as the real target data, and the simulated reconstructed target is output; the simulated reconstructed surrounding scene is output likewise. The second generator network takes the real target track data as input; the GAN is trained until the generator produces simulated track data with the same distribution as the real track data, after which the GAN's generator is used to produce several groups of simulated track data.
Further, the first generator network outputs the reconstructed target and the three-dimensional scene through 3 convolutional layers, 4 dilated convolutional layers, 3 deconvolution layers, and a final convolutional layer; the generator performs the point-cloud/image fusion operation and registration training.
The kernel sizes of the 3 convolutional layers are 7×7, 5×5 and 3×3, with stride 2 and 64, 128 and 256 feature maps respectively; the 4 dilated convolutions have 3×3 kernels, dilation factors 2, 4, 8 and 16, stride 1, and 256 feature maps each; the 3 deconvolution layers have 3×3 kernels, stride 2, and 128, 64 and 32 feature maps respectively, and the feature maps are upsampled through them; the final convolutional layer has a 3×3 kernel, stride 1, and 3 feature maps. A BN layer and a LeakyReLU layer follow each convolutional layer's output, and the final convolutional layer's output is activated with a Tanh function.
Further, the second generator network outputs the virtual target model through 3 convolutional layers, 6 residual layers, 3 deconvolution layers, and a final convolutional layer.
The kernel sizes of the 3 convolutional layers are 7×7, 5×5 and 3×3, with 64, 128 and 256 feature maps respectively; each of the 6 residual layers comprises two convolutional layers and a residual connection, with 3×3 kernels and 256 feature maps; the 3 deconvolution layers have 3×3 kernels and 256, 128 and 64 feature maps respectively; the final convolutional layer has a 3×3 kernel, stride 2, and 3 feature maps. Each convolutional layer of the second generator network is likewise followed by a BN layer and a LeakyReLU activation layer, and the last layer uses a Tanh activation function.
Updating the friendly targets' monitoring search areas according to the threat-degree-aware variable weights specifically comprises: since changes in the track states of enemy targets within the friendly monitoring area cause threat degrees of different levels, a threat assessment model is established with the threat targets' track positions as the basic factor and based on target type, strike capability, defensive capability, information reliability, and logistics support; the change in the threat-degree weight values is used to dynamically set the monitoring search range for enemy targets, providing a decision basis for target selection and determining the center of gravity of the engagement.
Further, the threat-degree-aware variable weight is calculated as follows:
let there be N friendly targets in the monitoring area, with n ∈ {1, 2, …, N} denoting the n-th friendly target; M enemy targets, with m ∈ {1, 2, …, M} denoting the m-th threat target; and, since different threat targets have different threat indicators, K indicators, with k ∈ {1, 2, …, K} denoting the k-th threat indicator;
according to the positions of the threat targets and the threat-degree evaluation, the state weight values of the threat-degree indicators are constructed:
[Equation (1): the state weight value w_k(X) — given only as an image in the source]
where w_k denotes the state weight value of threat-degree indicator k; w_k(X) the state weight value of indicator k for the friendly target and the threat target at the corresponding position; w_mk the weight value of the k-th indicator of the m-th target; and X_mk the k-th threat indicator of threat target m within the search range of friendly target n;
X̄_m = (1/K) · Σ_{k=1}^{K} X_mk
is the mean of the K threat indicators of the m-th threat target; σ is the variable-weight factor, with value range [−0.5, 0.5]; δ_m is the threat weight corresponding to m, whose value is related to the track of target m;
where X denotes the positional relation between the friendly targets and the threat targets, represented by an N×M matrix:
[Equation: the matrix X — given only as an image in the source]
and g_k(X) denotes the state-change weight of threat indicator k between the friendly target and the threat target:
[Equation: g_k(X) — given only as an image in the source]
the search step size is set according to the threat degree:
X_nm = X_n(m−1) + (rd − 0.5rd) · H_step   (6)
[Equation: the adaptive step-size adjustment factor H_step — given only as an image in the source]
where rd is the diameter of the friendly target's random monitoring area (0.5rd being the radius), H_step is the adaptive step-size adjustment factor, and w_min and w_max are the minimum and maximum threat values of threat targets within the friendly target's monitoring area;
[Equation: the current optimal solution — given only as an image in the source]
i.e. threat target m_0, whose threat degree is greatest within the monitored area.
Further, the annealing algorithm takes the monitoring range of any friendly target as its unit and, from the d threat targets found in the monitoring area and the tracks along which they entered, computes the corresponding position functions as the initial population;
selecting by threat degree: the threat values of the threat targets in two adjacent friendly monitoring areas are taken and the two fitnesses f(m) are computed; crossover is applied with crossover probability pc = 0.7, then mutation with mutation probability pm = 0.02, yielding a new population, and ΔE = f(m) − f(m_0) is computed; the threat value of a threat target is computed from its threat weights;
executing an acceptance decision: for the corresponding position of a threat target m newly entering the monitored area, if ΔE < 0 the new fire allocation for m is accepted; otherwise it is accepted with probability P = exp(−ΔE/T_k), where the temperature T_k is the current temperature;
when the model is accepted, m_0 = m is set; ΔE takes the threat target with the greatest threat as the objective function; whether the convergence condition is met is judged, and if so the optimal solution is output, which allocates fire to the targets according to the recognized threat-target positions, tracks, and threat-degree values; T decreases geometrically, and the procedure ends when T < 0.0001.
The method obtains the targets' fused spectral information and spatial pose and position information, realizing precise localization and simultaneous real-time three-dimensional point-cloud imaging; by modifying the classical GAN structure, two generator networks are constructed to output simulated targets and simulated tracks, improving robustness, avoiding misrecognition caused by occlusion, and achieving continuous, accurate tracking, monitoring, and jamming of targets; a threat-degree variable weight is designed with the priority order of distances to targets as the basic constraint, multi-target threat values are computed from it to realize fire allocation to threat targets, and the search for the optimal solution improves the intelligent grouping of large-scale clusters.
Drawings
FIG. 1 is a flow chart of the multi-threat-target reconstruction and situation awareness method based on a generative adversarial network according to the present invention.
FIG. 2 is a flow chart of the visible-light and infrared video image processing of the method.
FIG. 3 is a flow chart of acquiring the simulated multi-threat targets and their corresponding simulated track segments in the method.
FIG. 4 is a fire-allocation effect diagram of the method.
Detailed Description
It should be noted that, where no conflict arises, the embodiments of the present application and the features therein may be combined with one another; the invention is further described in detail below with reference to the drawings and specific embodiments.
FIG. 1 is a flow chart of the multi-threat-target reconstruction and situation awareness method based on a generative adversarial network, which mainly comprises the following steps:
step 1, collecting the navigation states of multiple threat targets in the monitored area, stamping timestamps, and forming time-synchronized raw scene point-cloud data, an infrared image, and a visible-light image;
step 2, fusing the infrared image and the visible-light image, and realizing three-dimensional scene and target reconstruction from the images and the point-cloud data;
step 3, constructing generative adversarial networks, outputting simulated tracks and simulated reconstructed targets, and obtaining the simulated track segment corresponding to each threat target;
step 4, updating the friendly targets' monitoring search areas according to threat-degree-aware variable weights, outputting the threat degree of each threat target through an annealing algorithm, and allocating fire to the targets.
The navigation states of multiple threat targets in the monitored area are acquired: the raw scene point-cloud data with a lidar sensor, the infrared image with an infrared sensor, and the visible-light image with a visible-light camera; the coordinate systems are unified in advance to realize coordinate calibration of the targets, and the navigation states of the multiple threat targets are timestamped to time-synchronize the multiple targets.
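As an illustration of this time-synchronization step, the sketch below pairs each lidar sweep with the nearest-in-time infrared and visible-light frames; the function name, tuple layout, and the 20 ms tolerance are assumptions for illustration, not values from the patent.

```python
import bisect

def synchronize_frames(lidar, infrared, visible, tol=0.02):
    """Pair each lidar sweep with the nearest-in-time IR and visible frames.

    Each argument is a time-sorted list of (timestamp_seconds, data) tuples;
    a triple is kept only when both image frames lie within `tol` seconds
    of the lidar timestamp.
    """
    ir_times = [t for t, _ in infrared]
    vis_times = [t for t, _ in visible]

    def nearest(times, frames, t):
        i = bisect.bisect_left(times, t)
        best = None
        for j in (i - 1, i):                 # candidates on either side of t
            if 0 <= j < len(times):
                if best is None or abs(times[j] - t) < abs(times[best] - t):
                    best = j
        if best is not None and abs(times[best] - t) <= tol:
            return frames[best][1]
        return None

    synced = []
    for t, cloud in lidar:
        ir = nearest(ir_times, infrared, t)
        vis = nearest(vis_times, visible, t)
        if ir is not None and vis is not None:
            synced.append((t, cloud, ir, vis))  # one time-synchronized record
    return synced
```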
The infrared image and the visible-light image are fused: because infrared and visible light each have their own limitations and advantages in different environments, the infrared and visible-light images of the same time frame are fused. FIG. 2 is the processing flow chart of the visible-light and infrared video images of the method, mainly showing the fusion processing of the infrared and visible-light images, specifically:
firstly, defogging the extracted infrared and visible-light images, filtering small noise points with image binarization, and extracting the contour regions of multiple edges with an adaptive edge algorithm to obtain the maximum contour of each target; the targets include enemy targets and the surrounding scene;
detecting the feature points of the infrared and visible-light images and the contour points of the targets within them, and fitting when a contour region is larger than a preset threshold, to obtain the preprocessed infrared and visible-light images;
performing a K-level NSCT decomposition on the preprocessed infrared and visible-light images, and constructing the average-gradient and Q-factor matrices to obtain the low-frequency and high-frequency subband coefficients;
processing the image high-frequency information: performing PCNN processing on the corresponding high-frequency subband coefficients, taking the subband coefficients as the PCNN's external input excitation, and computing the highest decomposition level K and the remaining K−1 levels separately;
the method specifically comprises the following steps:
performing PCNN processing on the corresponding high-frequency subband coefficients, taking the subband coefficients as the PCNN's external input excitation, and adaptively computing the PCNN link strength β:
[Equation: the adaptive link strength β — given only as an image in the source]
computed from the energy of an M×N region of the high-frequency coefficient matrix centered on (x, y) and from the decomposition coefficient at (x, y) of the image's K-level NSCT transform;
to highlight the target detail of the source images in the fused image, the fusion coefficient at the highest decomposition level K is determined by taking the maximum absolute value, and the corresponding fusion rule can be expressed as:
[Equation: maximum-absolute-value selection of the level-K coefficients — given only as an image in the source]
where I_1 and I_2 are the high-frequency subband decomposition coefficients of image A and image B;
the remaining K − 1 levels below the highest level K are taken as the PCNN's neuron inputs; the firing counts of each pixel in every sub-image of infrared image A and visible-light image B are computed separately, the high-frequency fusion coefficient is determined from the firing counts, and fusion follows the rule:
[Equation: firing-count-based fusion rule — given only as an image in the source]
where T_1 and T_2 are the firing counts of the PCNN pulse outputs for I_1 and I_2 respectively, W_1 and W_2 are the high-frequency subband coefficient weights of infrared image A and visible-light image B, and thresh is a threshold;
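Since the firing-count fusion rule itself appears only as an image in the source, the minimal sketch below is an assumed reading of the surrounding text: take the coefficient whose neuron fired clearly more often, and blend with the weights W_1, W_2 when the counts are close; the weights and threshold are illustrative.

```python
import numpy as np

def fuse_highfreq_subband(I1, I2, T1, T2, W1=0.6, W2=0.4, thresh=5):
    """Fuse one high-frequency subband by PCNN firing counts.

    I1, I2: subband coefficients of infrared image A and visible image B.
    T1, T2: per-pixel PCNN firing counts for the same subband.
    """
    take_a = (T1 - T2) > thresh          # A's neuron fired clearly more
    take_b = (T2 - T1) > thresh          # B's neuron fired clearly more
    blended = W1 * I1 + W2 * I2          # counts close: weighted blend
    return np.where(take_a, I1, np.where(take_b, I2, blended))
```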
processing the image low-frequency information: for the low-frequency subband coefficients, giving higher weight during fusion to pixels in image regions with high energy, normalizing the variance, comparing it with a preset variance threshold, and fusing according to the corresponding rule;
the method specifically comprises the following steps:
processing the image low-frequency information specifically comprises: first, the pixel saliency is computed, expressed as:
[Equation: pixel saliency — given only as an image in the source]
where I_S(i, j) is the pixel value of the image, U_S the mean of the image pixels, S = ir, vis indexes the infrared and visible-light pictures, and U_R the regional mean value;
then, during fusion, higher weight is given to pixels in the image region with higher energy; w_ir and w_vis denote the weights of the infrared and visible-light images, F_L(x, y) the fused low-frequency component, E_vis the energy of the visible-light region, and E_ir the energy of the infrared region;
when a pixel lies in the target area, the pixel energy in the infrared image is relatively concentrated, so the infrared region has high energy and the corresponding visible-light region relatively low energy; the visible-light image is therefore given a small weight, set below 0.3:
[Equation: the visible-light weight w_vis — given only as an image in the source]
F_L(x, y) = w_vis × vis_L(x, y) + (1 − w_vis) × ir_L(x, y)   (8)
conversely, where the pixel energy of the visible-light image is concentrated and its region energy large while the infrared region energy is relatively small, the infrared image is given a small weight, set below 0.3:
[Equation: the infrared weight w_ir — given only as an image in the source]
F_L(x, y) = w_ir × ir_L(x, y) + (1 − w_ir) × vis_L(x, y)   (10)
finally, the local variance is normalized using
[Equation: normalized local variance — given only as an image in the source]
where Q_vis is the visible-light image region variance and Q_ir the infrared image region variance;
when the normalized local-variance difference is large, i.e. G(i, j) > T (T being a preset variance threshold), the two image regions differ strongly, and the region with the larger variance is selected:
[Equation: larger-variance selection — given only as an image in the source]
when the difference in normalized local variance is relatively small, i.e. G(i, j) < T:
[Equation: the fused coefficient C_F(x, y) — given only as an image in the source]
where C_F(x, y) is the fused low-frequency coefficient; PCNN processing is then applied to the low-frequency subband coefficients, using 4 times the coefficient value as the PCNN's external input; T is a preset threshold with value between 0.3 and 0.4.
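A sketch of the low-frequency rule as described: the lower-energy modality receives a small weight below 0.3, and the normalized variance difference is compared against a threshold T in [0.3, 0.4]. The windowed energy/variance estimates and the normalization formula are assumptions where the source shows only equation images.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq(ir_L, vis_L, win=5, w_small=0.25, T=0.35):
    """Fuse NSCT low-frequency subbands by region energy and variance."""
    # Local mean, energy and variance over a win x win neighbourhood.
    mu_ir, mu_vis = uniform_filter(ir_L, win), uniform_filter(vis_L, win)
    e_ir, e_vis = uniform_filter(ir_L**2, win), uniform_filter(vis_L**2, win)
    q_ir, q_vis = e_ir - mu_ir**2, e_vis - mu_vis**2

    # Normalised local-variance difference G(i, j) (assumed normalisation).
    G = np.abs(q_vis - q_ir) / (np.maximum(q_vis, q_ir) + 1e-12)

    # Energy rule: the lower-energy modality receives the small weight.
    w_vis = np.where(e_ir >= e_vis, w_small, 1.0 - w_small)
    energy_fused = w_vis * vis_L + (1.0 - w_vis) * ir_L

    # Variance rule: where the regions differ strongly, keep the busier one.
    var_pick = np.where(q_vis > q_ir, vis_L, ir_L)
    return np.where(G > T, var_pick, energy_fused)
```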
The inverse NSCT is then performed, and the fused low-frequency coefficient and each high-frequency subband coefficient are reconstructed to obtain the fused image; the central coordinate of the two images is taken as the target's position at that moment, and the target center of the visible-light image is then mapped into the infrared image to obtain the visible-infrared image, further fusing the azimuth and angle of the target region.
The implementation of three-dimensional scene and target reconstruction specifically comprises:
creating a three-dimensional voxel grid over the input point-cloud data, searching the coordinate values of all points, finding the maxima X_max, Y_max, Z_max and minima X_min, Y_min, Z_min along the three coordinate axes, and determining the side length L of the bounding cube grid; if L is greater than a preset side length L_0, the cube is divided into several voxel grids along the X, Y, Z directions;
presetting a point count N_0, comparing in turn the number n of points inside each voxel grid with this preset point-count threshold, and deleting any voxel grid whose point count is smaller than the preset value;
comparing again the side length L_i of each small cube grid with the preset side length L_0: if L_i > L_0, it is subdivided further into several small cubes; if L_i ≤ L_0, the points in the voxel grid are traversed and the other points in the grid are approximately replaced by its center-of-gravity point, computed from
d_i = √((x_i − x_c)² + (y_i − y_c)² + (z_i − z_c)²), 0 ≤ i ≤ n,
where d_i is the distance from point (x_i, y_i, z_i) to the region center (x_c, y_c, z_c) of the voxel grid; the point attaining the minimum of d_i is taken as the center of gravity;
d_max = max{d_j}, 0 ≤ j ≤ n − 1,
where d_j is the distance from point (x_j, y_j, z_j) to the center of gravity (x_0, y_0, z_0) of the voxel grid's region, d_max is the maximum of these distances, and the corresponding point is the farthest point sought;
the center-of-gravity points (x_0, y_0, z_0) are preserved, erroneous point pairs are removed with RANSAC, and all voxel grids are processed to obtain the filtered point-cloud data; a threshold τ is set, and if τ ≤ d_max the points satisfying the d_j criterion are kept along with the center-of-gravity point, otherwise only the center-of-gravity point is kept; the retained points are the center-of-gravity points and the points whose distance is smaller than the maximum distance value.
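A single-level Python sketch of the voxel-grid filtering described above (the recursive subdivision is collapsed into one pass for brevity); `min_points` plays the role of the preset point-count threshold, and the retained point per voxel is the one nearest the region center, i.e. the center-of-gravity point.

```python
import numpy as np

def voxel_grid_filter(points, L0, min_points):
    """Downsample an (N, 3) point cloud on a voxel grid of side L0.

    Voxels holding fewer than `min_points` points are deleted as noise;
    each surviving voxel is represented by the point with minimum distance
    d_i to the voxel's region center.
    """
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    n_cells = np.maximum(np.ceil((hi - lo) / L0).astype(int), 1)

    # Assign every point to a voxel index, then to a single scalar key.
    idx = np.minimum(((points - lo) / L0).astype(int), n_cells - 1)
    keys = idx[:, 0] * n_cells[1] * n_cells[2] + idx[:, 1] * n_cells[2] + idx[:, 2]

    kept = []
    for key in np.unique(keys):
        mask = keys == key
        cell = points[mask]
        if len(cell) < min_points:
            continue                          # sparse voxel: delete
        centre = lo + (idx[mask][0] + 0.5) * L0   # region center of the voxel
        d = np.linalg.norm(cell - centre, axis=1)
        kept.append(cell[np.argmin(d)])       # minimum d_i -> center-of-gravity point
    return np.array(kept)
```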
From the retained points, the average curvature of the point cloud is calculated, and the voxel with the minimum average curvature is taken as the seed voxel for region growing to form supervoxels; accurate extraction of the target contour feature points and localization of the feature regions are realized by estimating the average curvature of the supervoxels' extrinsic curved geometric features.
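A sketch of curvature-seeded region growing, assuming per-point mean curvatures have already been estimated; the neighbourhood radius and curvature tolerance are illustrative values, not from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_supervoxels(points, curvature, radius=0.5, curv_tol=0.05):
    """Label points into supervoxels by curvature-seeded region growing.

    Seeds are taken in order of increasing mean curvature; a neighbour
    joins the current supervoxel when its curvature lies within
    `curv_tol` of the seed's curvature.
    """
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    next_label = 0
    for seed in np.argsort(curvature):        # smallest curvature first
        if labels[seed] != -1:
            continue                          # already absorbed by a region
        labels[seed] = next_label
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            for q in tree.query_ball_point(points[p], radius):
                if labels[q] == -1 and abs(curvature[q] - curvature[seed]) < curv_tol:
                    labels[q] = next_label
                    frontier.append(q)
        next_label += 1
    return labels
```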
FIG. 3 is the flow chart of acquiring the simulated multi-threat targets and their corresponding simulated track segments, mainly showing the two generator networks separately processing the point-cloud data, the fused infrared-visible image, and the tracks, and outputting, through the discriminators, the simulated multi-threat targets and the simulated track segment corresponding to each.
The first generator network of the GAN takes the point-cloud supervoxels and the visible-infrared image as input, performs the point-cloud/image fusion operation and registration training; the GAN is trained until the generator produces simulated target data with the same distribution as the real target data, and the simulated reconstructed target is output; the simulated reconstructed surrounding scene is output likewise. The kernel sizes of the 3 convolutional layers are 7×7, 5×5 and 3×3, with stride 2 and 64, 128 and 256 feature maps respectively; the 4 dilated convolutions have 3×3 kernels, dilation factors 2, 4, 8 and 16, stride 1, and 256 feature maps each; the 3 deconvolution layers have 3×3 kernels, stride 2, and 128, 64 and 32 feature maps respectively, and the feature maps are upsampled through them; the final convolutional layer has a 3×3 kernel, stride 1, and 3 feature maps. A BN layer and a LeakyReLU layer follow each convolutional layer's output, and the final convolutional layer's output is activated with a Tanh function.
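A PyTorch sketch of a generator with this layer plan (3 convolutions, 4 dilated convolutions, 3 deconvolutions, a final convolution, BN + LeakyReLU throughout, Tanh output); the input channel count for the packed image-plus-point-feature tensor is an assumption, since the patent does not state how the two modalities are combined into one input.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, k, stride=1, dilation=1):
    pad = dilation * (k // 2)  # 'same'-style padding for odd kernels
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, stride=stride, padding=pad, dilation=dilation),
        nn.BatchNorm2d(cout),
        nn.LeakyReLU(0.2))

class SceneGenerator(nn.Module):
    """3 conv -> 4 dilated conv (factors 2, 4, 8, 16) -> 3 deconv -> final conv."""

    def __init__(self, in_ch=4):  # fused image + point features: channels assumed
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64, 7, stride=2),
            conv_block(64, 128, 5, stride=2),
            conv_block(128, 256, 3, stride=2),
            conv_block(256, 256, 3, dilation=2),
            conv_block(256, 256, 3, dilation=4),
            conv_block(256, 256, 3, dilation=8),
            conv_block(256, 256, 3, dilation=16),
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 3, 3, stride=1, padding=1),
            nn.Tanh())  # final 3-channel output activated by Tanh

    def forward(self, x):
        return self.net(x)
```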
The second generator network of the GAN takes the real target track data as input; the GAN is trained until the generator produces simulated track data with the same distribution as the real track data, after which the GAN's generator is used to produce several groups of simulated track data, generated through 3 convolutional layers, 6 residual layers, 3 deconvolution layers, and a final convolutional layer.
The kernel sizes of the 3 convolutional layers are 7×7, 5×5 and 3×3, with 64, 128 and 256 feature maps respectively; each of the 6 residual layers comprises two convolutional layers and a residual connection, with 3×3 kernels and 256 feature maps; the 3 deconvolution layers have 3×3 kernels and 256, 128 and 64 feature maps respectively; the final convolutional layer has a 3×3 kernel, stride 2, and 3 feature maps. Each convolutional layer of the second generator network is likewise followed by a BN layer and a LeakyReLU activation layer, and the last layer uses a Tanh activation function.
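A matching PyTorch sketch of the second generator (3 convolutions, 6 residual layers, 3 deconvolutions, final convolution). Track data is assumed to be packed as a 3-channel 2-D map, the encoder strides are assumed to be 2 (the text gives only kernel sizes and map counts), and the final layer uses stride 1 here to keep the output at full resolution, whereas the text states stride 2.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a residual connection, 256 feature maps."""

    def __init__(self, ch=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

class TrackGenerator(nn.Module):
    """3 conv -> 6 residual -> 3 deconv -> final conv, per the layer plan above."""

    def __init__(self, in_ch=3):  # track data assumed packed as a 3-channel map
        super().__init__()

        def conv(cin, cout, k):
            return nn.Sequential(
                nn.Conv2d(cin, cout, k, stride=2, padding=k // 2),
                nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

        def deconv(cin, cout):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 3, stride=2, padding=1, output_padding=1),
                nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

        self.net = nn.Sequential(
            conv(in_ch, 64, 7), conv(64, 128, 5), conv(128, 256, 3),
            *[ResidualBlock(256) for _ in range(6)],
            deconv(256, 256), deconv(256, 128), deconv(128, 64),
            nn.Conv2d(64, 3, 3, stride=1, padding=1),  # stride 1 assumed (text: 2)
            nn.Tanh())

    def forward(self, x):
        return self.net(x)
```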
The friendly targets' monitoring search areas are updated according to the threat-degree-aware variable weights: since changes in the track states of enemy targets within the friendly monitoring area cause threat degrees of different levels, a threat assessment model is established with the threat targets' track positions as the basic factor and based on target type, strike capability, defensive capability, information reliability, and logistics support; the change in the threat-degree weight values is used to dynamically set the monitoring search range for enemy targets, providing a decision basis for target selection and determining the center of gravity of the engagement.
Further, the method for calculating the threat-degree variable weight: let there be N friendly targets in the monitoring area, with n ∈ {1, 2, …, N} denoting the n-th friendly target; M enemy targets, with m ∈ {1, 2, …, M} denoting the m-th threat target; and, since different threat targets have different threat indicators, K indicators, with k ∈ {1, 2, …, K} denoting the k-th threat indicator;
according to the positions of the threat targets and the threat-degree evaluation, the state weight values of the threat-degree indicators are constructed:
[Equation (1): the state weight value w_k(X) — given only as an image in the source]
where w_k denotes the state weight value of threat-degree indicator k; w_k(X) the state weight value of indicator k for the friendly target and the threat target at the corresponding position; w_mk the weight value of the k-th indicator of the m-th target; and X_mk the k-th threat indicator of threat target m within the search range of friendly target n;
X̄_m = (1/K) · Σ_{k=1}^{K} X_mk
is the mean of the K threat indicators of the m-th threat target; σ is the variable-weight factor, with value range [−0.5, 0.5]; δ_m is the threat weight corresponding to m, whose value is related to the track of target m;
where X denotes the positional relation between the friendly targets and the threat targets, represented by an N×M matrix:
[Equation: the matrix X — given only as an image in the source]
and g_k(X) denotes the state-change weight of threat indicator k between the friendly target and the threat target:
[Equation: g_k(X) — given only as an image in the source]
the search step size is set according to the threat degree:
X_nm = X_n(m−1) + (rd − 0.5rd) · H_step   (6)
[Equation: the adaptive step-size adjustment factor H_step — given only as an image in the source]
where rd is the diameter of the friendly target's random monitoring area (0.5rd being the radius), H_step is the adaptive step-size adjustment factor, and w_min and w_max are the minimum and maximum threat values of threat targets within the friendly target's monitoring area;
[Equation: the current optimal solution — given only as an image in the source]
i.e. threat target m_0, whose threat degree is greatest within the monitored area.
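Since equations (1)–(5) appear only as images in the source, the sketch below implements a generic variable-weight scheme consistent with the stated quantities (base weights w_k, per-target indicator means, variable-weight factor σ ∈ [−0.5, 0.5], track-dependent threat weights δ_m); the exact functional form is an assumption.

```python
import numpy as np

def variable_weights(X, w0, delta, sigma=0.3):
    """Variable-weight adjustment of the K threat-indicator weights.

    X:     (M, K) matrix of threat indicators (assumed positive/normalised).
    w0:    (K,) constant base weights of the indicators.
    delta: (M,) track-dependent threat weights of the M targets.
    Indicators above their target's mean are amplified for sigma > 0 and
    damped for sigma < 0, with sigma restricted to [-0.5, 0.5].
    """
    X = np.asarray(X, dtype=float)
    w0 = np.asarray(w0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    mean_m = X.mean(axis=1, keepdims=True)            # per-target indicator mean
    state = w0 * (X / (mean_m + 1e-12)) ** sigma      # assumed variable-weight form
    state = state * delta[:, None]                    # track-dependent scaling
    return state / state.sum(axis=1, keepdims=True)   # renormalise per target

def threat_values(X, w0, delta, sigma=0.3):
    """Composite threat value of each target as the weighted indicator sum."""
    return (variable_weights(X, w0, delta, sigma) * np.asarray(X, float)).sum(axis=1)
```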
FIG. 4 is the fire-allocation effect diagram of the method. The upper panel shows part of the friendly monitoring target area; the circles are enemy targets, and 12 threat targets have entered the monitoring area. The lower panel is a map displaying the fire allocation against the enemy's states, showing the deployment of 11 friendly targets against the 12 threat targets, with the coordinate values being relative distances in a two-dimensional display. First, a key priority detection area is established: the length and width of the monitoring area are input and the monitoring area is generated; fire deployment is then generated from the sensor data and the number and tracks of the threat targets, the deployment scheme is optimized, the targets' deployment node coordinates are stored in a database, and the final fire deployment scheme is displayed.
Finding the optimal allocation with the annealing algorithm mainly comprises the following steps:
taking the monitoring range of any friendly target as the unit and, from the d threat targets found in the monitoring area and the tracks along which they entered, computing the corresponding position functions as the initial population;
selecting by threat degree: the threat values of the threat targets in two adjacent friendly monitoring areas are taken and the two fitnesses f(m) are computed; crossover is applied with crossover probability pc = 0.7, then mutation with mutation probability pm = 0.01, yielding a new population, and ΔE = f(m) − f(m_0) is computed; the threat value of a threat target is computed from its threat weights;
executing an acceptance decision: for the corresponding position of a threat target m newly entering the monitored area, if ΔE < 0 the new fire allocation for m is accepted; otherwise it is accepted with probability P = exp(−ΔE/T_k), where the temperature T_k is the current temperature;
when the model is accepted, m_0 = m is set; ΔE takes the threat target with the greatest threat as the objective function; whether the convergence condition is met is judged, and if so the optimal solution is output, which allocates fire to the targets according to the recognized threat-target positions, tracks, and threat-degree values; T decreases geometrically, and the procedure ends when T < 0.0001.
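A minimal sketch of the acceptance/cooling loop described above; the crossover and mutation steps that produce new proposals are abstracted into the `candidates` pool, and the loop follows the stated rules (accept when ΔE < 0, otherwise with probability exp(−ΔE/T_k), geometric cooling, stop at T < 0.0001).

```python
import math
import random

def anneal_fire_allocation(candidates, f, T0=1.0, alpha=0.95, T_min=1e-4):
    """Acceptance/cooling loop for fire allocation.

    candidates: sequence of allocation proposals (new-population members).
    f:          objective function; the loop minimises f, so pass the
                negated threat value to favour the highest-threat target.
    """
    m0 = random.choice(candidates)
    best = m0
    T = T0
    while T >= T_min:                      # stop once T < 0.0001
        m = random.choice(candidates)      # proposal drawn from the population
        dE = f(m) - f(m0)                  # delta E = f(m) - f(m0)
        if dE < 0 or random.random() < math.exp(-dE / T):
            m0 = m                         # the new allocation is accepted
            if f(m0) < f(best):
                best = m0
        T *= alpha                         # geometric cooling
    return best
```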
It will be understood by those skilled in the art that the present invention is not limited to the details of the foregoing embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or scope of the invention. Further, various modifications and variations of the present invention may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations should also be considered as being within the scope of the invention. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (9)

1. A multi-threat-target reconstruction and situation awareness method based on a generative adversarial network, characterized by mainly comprising:
collecting the navigation states of multiple threat targets in the friendly targets' monitoring area, stamping timestamps, and forming time-synchronized raw scene point-cloud data, an infrared image, and a visible-light image;
fusing the infrared image and the visible-light image, and realizing three-dimensional scene and target reconstruction from the images and the point-cloud data;
constructing generative adversarial networks, outputting simulated tracks and simulated reconstructed targets, and obtaining the simulated track segment corresponding to each threat target;
updating the friendly targets' monitoring search areas according to threat-degree-aware variable weights, outputting the threat degree of each threat target through an annealing algorithm, and allocating fire to the targets;
according to the positions of the threat targets and the threat-degree evaluation, the state weight values of the threat-degree indicators are constructed:
[Equation: the state weight value w_k(X) — given only as an image in the source]
where w_k denotes the state weight value of threat-degree indicator k; w_k(X) the state weight value of indicator k for the friendly target and the threat target at the corresponding position; w_mk the weight value of the k-th indicator of the m-th target; and X_mk the k-th threat indicator of threat target m within the search range of friendly target n;
X̄_m = (1/K) · Σ_{k=1}^{K} X_mk
is the mean of the K threat indicators of the m-th threat target; σ is the variable-weight factor, with value range [−0.5, 0.5]; δ_m is the threat weight corresponding to m, whose value is related to the track of target m;
where X denotes the positional relation between the friendly targets and the threat targets, represented by an N×M matrix:
[Equation: the matrix X — given only as an image in the source]
and g_k(X) denotes the state-change weight of threat indicator k between the friendly target and the threat target:
[Equation: g_k(X) — given only as an image in the source]
2. The multi-threat-target reconstruction and situation awareness method based on a generative adversarial network of claim 1, wherein fusing the infrared image and the visible-light image specifically comprises:
defogging the extracted infrared and visible-light images, filtering small noise points with image binarization, and extracting the contour regions of multiple edges with an adaptive edge algorithm to obtain the maximum contour of each target;
the targets include enemy targets and the surrounding scene;
detecting the feature points of the infrared and visible-light images and the contour points of the targets within them, and fitting when a contour region is larger than a preset threshold, to obtain the preprocessed infrared and visible-light images;
performing a K-level NSCT decomposition on the preprocessed infrared and visible-light images, and constructing the average-gradient and Q-factor matrices to obtain the low-frequency and high-frequency subband coefficients;
processing the image high-frequency information: performing PCNN processing on the corresponding high-frequency subband coefficients, taking the subband coefficients as the PCNN's external input excitation, and computing the highest decomposition level K and the remaining K−1 levels separately;
processing the image low-frequency information: for the low-frequency subband coefficients, giving higher weight during fusion to pixels in image regions with high energy, normalizing the variance, comparing it with a preset variance threshold, and fusing according to the corresponding rule;
performing the inverse NSCT, reconstructing the fused low-frequency coefficient and each high-frequency subband coefficient to obtain the fused image, taking the central coordinate of the two images as the target's position at that moment, and then mapping the target center of the visible-light image into the infrared image to obtain the visible-infrared image, further fusing the azimuth and angle of the target region.
3. The multi-threat-target reconstruction and situation awareness method based on a generative adversarial network of claim 1, wherein realizing the three-dimensional scene and target reconstruction specifically comprises:
creating a three-dimensional voxel grid over the input point-cloud data, searching the coordinate values of all points, finding the maxima X_max, Y_max, Z_max and minima X_min, Y_min, Z_min along the three coordinate axes, and determining the side length L of the bounding cube grid; if L is greater than a preset side length L_0, the cube is divided into several voxel grids along the X, Y, Z directions;
presetting a point count N_0, comparing in turn the number n of points inside each voxel grid with this preset point-count threshold, and deleting any voxel grid whose point count is smaller than the preset value;
comparing again the side length L_i of each small cube grid with the preset side length L_0: if L_i > L_0, it is subdivided further into several small cubes; if L_i ≤ L_0, the points in the voxel grid are traversed and the other points in the grid are approximately replaced by its center-of-gravity point, computed from
d_i = √((x_i − x_c)² + (y_i − y_c)² + (z_i − z_c)²), 0 ≤ i ≤ n,
where d_i is the distance from point (x_i, y_i, z_i) to the region center (x_c, y_c, z_c) of the voxel grid; the point attaining the minimum of d_i is taken as the center of gravity;
d_max = max{d_j}, 0 ≤ j ≤ n − 1,
where d_j is the distance from point (x_j, y_j, z_j) to the center of gravity (x_0, y_0, z_0) of the voxel grid's region, d_max is the maximum of these distances, and the corresponding point is the farthest point sought;
the center-of-gravity points (x_0, y_0, z_0) are preserved, erroneous point pairs are removed with RANSAC, and all voxel grids are processed to obtain the filtered point-cloud data; a threshold τ is set, and if τ ≤ d_max the points satisfying the d_j criterion are kept along with the center-of-gravity point, otherwise only the center-of-gravity point is kept; the retained points are the center-of-gravity points and the points whose distance is smaller than the maximum distance value.
4. The multi-threat-target reconstruction and situation awareness method based on a generative adversarial network of claim 3, wherein the average curvature of the point cloud is calculated from the retained points, and the voxel with the minimum average curvature is taken as the seed voxel for region growing to form supervoxels;
accurate extraction of the target contour feature points and localization of the feature regions are realized by estimating the average curvature of the supervoxels' extrinsic curved geometric features.
5. The multi-threat-target reconstruction and situation awareness method based on a generative adversarial network of claim 1, wherein constructing the generative adversarial networks specifically comprises: two generator networks, which obtain several pieces of simulated track data corresponding to the simulated reconstructed targets;
the first generator network takes the point-cloud supervoxels and the visible-infrared image as input; the GAN is trained until the generator produces simulated target data with the same distribution as the real target data, and the simulated reconstructed target is output;
the simulated reconstructed surrounding scene is output likewise;
the second generator network takes the real target track data as input; the GAN is trained until the generator produces simulated track data with the same distribution as the real track data, after which the GAN's generator is used to produce several groups of simulated track data.
6. The multi-threat-target reconstruction and situation awareness method based on a generative adversarial network of claim 5, wherein the first generator network outputs the reconstructed target and the three-dimensional scene through 3 convolutional layers, 4 dilated convolutional layers, 3 deconvolution layers, and a final convolutional layer; the generator performs the point-cloud/image fusion operation and registration training;
the kernel sizes of the 3 convolutional layers are 7×7, 5×5 and 3×3, with stride 2 and 64, 128 and 256 feature maps respectively;
the 4 dilated convolutions have 3×3 kernels, dilation factors 2, 4, 8 and 16, stride 1, and 256 feature maps each;
the 3 deconvolution layers have 3×3 kernels, stride 2, and 128, 64 and 32 feature maps respectively, and the feature maps are upsampled through them;
the final convolutional layer has a 3×3 kernel, stride 1, and 3 feature maps;
a BN layer and a LeakyReLU layer follow each convolutional layer's output, and the final convolutional layer's output is activated with a Tanh function.
7. The multi-threat target reconstruction and situation awareness method based on the generation countermeasure network of claim 5, wherein the second generator network generates a plurality of groups of simulated track data through 3 convolution layers, 6 residual layers, 3 deconvolution layers and a final convolution layer;
the convolution kernel sizes of the 3 convolution layers are 7×7, 5×5 and 3×3 respectively, and the numbers of feature maps are 64, 128 and 256 respectively;
each of the 6 residual layers comprises two convolution layers and a residual connection, with a 3×3 convolution kernel and 256 feature maps;
the convolution kernel size of the 3 deconvolution layers is 3×3, and the numbers of feature maps are 256, 128 and 64 respectively;
the convolution kernel of the final convolution layer is 3×3, the stride is 2, and the number of feature maps is 3;
each convolution layer of the second generator network is likewise followed by a BN layer and an LReLU activation layer, and the last layer is activated by a Tanh function.
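The residual layers of the second generator might look as follows; this is a sketch only, with the normalization/activation ordering and padding (which the claim does not fix) assumed:

    import torch.nn as nn

    class ResidualLayer(nn.Module):
        """One of the 6 residual layers: two 3x3 convs (256 maps) + skip connection."""
        def __init__(self, channels=256):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(channels, channels, 3, 1, 1),
                nn.BatchNorm2d(channels), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(channels, channels, 3, 1, 1),
                nn.BatchNorm2d(channels))

        def forward(self, x):
            return x + self.block(x)         # residual connection

    # The trunk of the second generator then stacks 6 of these between the
    # 3 downsampling convolutions and the 3 deconvolutions of claim 7:
    # trunk = nn.Sequential(*[ResidualLayer(256) for _ in range(6)])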
8. The multi-threat target reconstruction and situation awareness method based on the generation countermeasure network according to claim 1, wherein updating the friendly target monitoring search area according to threat-degree-aware variable weights specifically comprises: changes in the state of enemy target tracks in the monitored area give rise to different degrees of threat; taking the position of the threatening target track as the basic factor, a threat assessment model is established from the target type, strike capability, defense capability, intelligence reliability and logistic support; the monitoring search range for enemy targets is set dynamically according to changes in the threat-degree weight values, providing a decision basis for target selection and determining the center of gravity of the engagement.
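To make the variable-weight idea concrete, here is an illustrative sketch; the factor names mirror the claim, but the weight values, the multiplicative role of the position factor, and the linear radius-update rule are assumptions for the example, not the patent's model:

    # Illustrative base weights for the claim's five assessment factors.
    BASE_WEIGHTS = {"target_type": 0.25, "strike": 0.30, "defense": 0.15,
                    "intel_reliability": 0.20, "support": 0.10}

    def threat_degree(factors, position_factor):
        """Weighted threat score; the track position acts as the basic factor
        scaling the other contributions (one possible reading of claim 8)."""
        score = sum(BASE_WEIGHTS[k] * factors[k] for k in BASE_WEIGHTS)
        return position_factor * score

    def update_search_radius(base_radius, threat, k=0.5):
        """Dynamically widen the friendly monitoring search range as the
        perceived threat degree grows (assumed linear rule)."""
        return base_radius * (1.0 + k * threat)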
9. The multi-threat target reconstruction and situation awareness method based on the generation countermeasure network of claim 1, wherein the annealing algorithm mainly comprises: taking the monitoring range of any friendly target as a unit, and, from the d threat targets found in the monitoring area and the tracks of the threat targets entering the range, calculating the corresponding position function with the threat targets as the initial population;
executing the acceptance decision: for the corresponding position of a threat target m newly entering the monitoring area, if ΔE < 0, the new fire distribution m is accepted; if ΔE ≥ 0, the new model m is accepted with probability P = exp(-ΔE/T_k), where T_k is the current temperature;
when the model is accepted, m0 = m is set; ΔE is evaluated with the threat target of greatest threat as the objective function; whether the convergence condition is met is judged, and if so the optimal solution is output, which allocates firepower to the targets according to the recognized threat target positions, tracks and threat degree values; T decreases geometrically, and the procedure ends when T < 0.0001.
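A minimal sketch of the Metropolis acceptance step and geometric cooling described here; the energy function, the neighbour move, and the cooling rate alpha are illustrative assumptions:

    import math
    import random

    def anneal(initial, energy, neighbour, t0=1.0, alpha=0.95, t_min=1e-4):
        """Simulated annealing: accept improving moves outright, worse moves
        with probability exp(-dE / T_k); T shrinks geometrically until < 1e-4."""
        m0, e0, t = initial, energy(initial), t0
        while t >= t_min:
            m = neighbour(m0)                # candidate fire distribution
            d_e = energy(m) - e0
            if d_e < 0 or random.random() < math.exp(-d_e / t):
                m0, e0 = m, e0 + d_e         # accept: set m0 = m
            t *= alpha                       # geometric cooling
        return m0                            # approximate optimal allocation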
CN201911210172.0A 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network Active CN110969637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911210172.0A CN110969637B (en) 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911210172.0A CN110969637B (en) 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN110969637A CN110969637A (en) 2020-04-07
CN110969637B true CN110969637B (en) 2023-05-02

Family

ID=70032473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911210172.0A Active CN110969637B (en) 2019-12-02 2019-12-02 Multi-threat target reconstruction and situation awareness method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN110969637B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113671981A (en) * 2020-05-14 2021-11-19 北京理工大学 Remote laser guidance aircraft control system and control method thereof
CN111722220B (en) * 2020-06-08 2022-08-26 北京理工大学 Rocket target identification system based on parallel heterogeneous sensor
CN111899353A (en) * 2020-08-11 2020-11-06 长春工业大学 Three-dimensional scanning point cloud hole filling method based on generation countermeasure network
CN112365582B (en) * 2020-11-17 2022-08-16 电子科技大学 Countermeasure point cloud generation method, storage medium and terminal
CN112801403A (en) * 2021-02-10 2021-05-14 武汉科技大学 Method and system for predicting potential threat degree of aerial target based on SSA-BP
CN112884802B (en) * 2021-02-24 2023-05-12 电子科技大学 Attack resistance method based on generation
CN112990363A (en) * 2021-04-21 2021-06-18 中国人民解放军国防科技大学 Battlefield electromagnetic situation sensing and utilizing method
CN113192182A (en) * 2021-04-29 2021-07-30 山东产研信息与人工智能融合研究院有限公司 Multi-sensor-based live-action reconstruction method and system
CN114722407B (en) * 2022-03-03 2024-05-24 中国人民解放军战略支援部队信息工程大学 Image protection method based on endogenic type countermeasure sample

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7876258B2 (en) * 2006-03-13 2011-01-25 The Boeing Company Aircraft collision sense and avoidance system and method
CN105654232A (en) * 2015-12-24 2016-06-08 大连陆海科技股份有限公司 Coastal monitoring and defense decision-making system based on multi-dimensional space fusion and method thereof
CN107832885B (en) * 2017-11-02 2022-02-11 南京航空航天大学 Ship formation fire power distribution method based on self-adaptive migration strategy BBO algorithm
CN108564129B (en) * 2018-04-24 2020-09-08 电子科技大学 Trajectory data classification method based on generation countermeasure network
CN110415342B (en) * 2019-08-02 2023-04-18 深圳市唯特视科技有限公司 Three-dimensional point cloud reconstruction device and method based on multi-fusion sensor
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
S. Akbari et al. "A new framework of a decision support system for air to air combat tasks". 2000 IEEE International Conference on Systems, Man and Cybernetics. 2002, Vol. 3, full text. *
Yao Yueting et al. "Air defense target assignment based on an improved genetic algorithm". Computing Technology and Automation. 2010, Vol. 29, No. 14, full text. *
Song Xiagan et al. "Application of an improved simulated annealing genetic algorithm in cooperative air combat". Journal of Harbin Engineering University. 2017, Vol. 38, No. 11, full text. *

Also Published As

Publication number Publication date
CN110969637A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110969637B (en) Multi-threat target reconstruction and situation awareness method based on generation countermeasure network
CN107945265B (en) Real-time dense monocular SLAM method and system based on on-line study depth prediction network
CN111797716A (en) Single target tracking method based on Siamese network
JP5487298B2 (en) 3D image generation
CN112001958B (en) Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
CN106940704A (en) A kind of localization method and device based on grating map
CN109029363A (en) A kind of target ranging method based on deep learning
CN107560592A (en) A kind of precision ranging method for optronic tracker linkage target
CN111998862B (en) BNN-based dense binocular SLAM method
CN109323697B (en) Method for rapidly converging particles during starting of indoor robot at any point
CN113674335B (en) Depth imaging method, electronic device and storage medium
CN116258817A (en) Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction
CN112614163B (en) Target tracking method and system integrating Bayesian track reasoning
CN113139602A (en) 3D target detection method and system based on monocular camera and laser radar fusion
CN112053391A (en) Monitoring and early warning method and system based on dynamic three-dimensional model and storage medium
CN110796691A (en) Heterogeneous image registration method based on shape context and HOG characteristics
CN108876861B (en) Stereo matching method for extraterrestrial celestial body patrolling device
CN116645392A (en) Space target relative pose iterative estimation method and system based on key point weight
CN117214904A (en) Intelligent fish identification monitoring method and system based on multi-sensor data
CN111260687A (en) Aerial video target tracking method based on semantic perception network and related filtering
CN117710583A (en) Space-to-ground image three-dimensional reconstruction method, system and equipment based on nerve radiation field
CN108830890A (en) A method of scene geometric information being estimated from single image using production confrontation network
CN109377447B (en) Contourlet transformation image fusion method based on rhododendron search algorithm
CN116129292A (en) Infrared vehicle target detection method and system based on few sample augmentation
CN115588133A (en) Visual SLAM method suitable for dynamic environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant