CN113343863B - Fusion characterization network model training method, fingerprint characterization method and equipment thereof


Info

Publication number: CN113343863B
Application number: CN202110655987.0A
Authority: CN (China)
Prior art keywords: fusion, fingerprint, feature, training sample, characteristic
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113343863A
Inventors: 刘雯, 邓中亮, 陈宏
Current Assignee: Beijing University of Posts and Telecommunications
Original Assignee: Beijing University of Posts and Telecommunications
Application filed by Beijing University of Posts and Telecommunications

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods


Abstract

The invention provides a fusion characterization network model training method, a fingerprint characterization method and equipment thereof. The training method comprises the following steps: extracting features from channel state information data with a multilayer perceptron network to obtain a channel state information feature map; extracting features from the image data of each azimuth with convolutional neural networks sharing the same weights to obtain a feature map of the image of each azimuth; fusing the feature maps of the images of all azimuths of the same training sample to obtain a multi-azimuth feature map; splicing the channel state information feature map and the multi-azimuth feature map to construct a fusion representation; constructing a fusion fingerprint library from the fusion representations of the channel state information and the images, and performing parameter optimization with a set measure index to obtain the fusion characterization network model, wherein the set measure index is used for measuring the distances between feature fingerprints in the fusion fingerprint library. With this scheme, feature discrimination can be improved and positioning accuracy increased.

Description

Fusion characterization network model training method, fingerprint characterization method and equipment thereof
Technical Field
The invention relates to the technical field of positioning, in particular to a fusion characterization network model training method, a fingerprint characterization method and equipment thereof.
Background
With the continuing informatization and intelligentization of society, navigation, positioning and related information account for an ever larger share of daily life, and location-based services are widely applied in many fields.
Various positioning methods have been proposed for complex indoor environments. According to the signal source, they can be divided into positioning technologies based on radio signals such as Wireless Fidelity (Wi-Fi), Bluetooth and ultra-wideband, and positioning technologies based on non-radio signals such as infrared, ultrasonic, visual and inertial signals.
Among the many positioning sources, the two signal sources of Wi-Fi and vision have attracted attention for their rich positioning information, low hardware cost and other advantages. Wi-Fi features include Received Signal Strength (RSS) and Channel State Information (CSI); CSI can provide more detailed subcarrier information, is more stable over time, and can achieve a better positioning effect. Publicly reported CSI-based positioning accuracy reaches the meter level, but in complex indoor environments the problems of insufficient discrimination and difficulty in determining a unique position remain.
Fingerprint positioning algorithms mainly associate different positions in an indoor environment with certain fingerprint characteristics, so that the positioning problem is treated as a pattern recognition problem of fingerprint matching. Any feature that is unique to a location can be used as a location fingerprint; common location fingerprints include the multipath structure, received signal strength, channel state information, image features, and the like. Fingerprint positioning usually includes two stages: offline library construction and online matching. The offline stage mainly establishes the relation between each position and its fingerprint: reference points for signal acquisition are first determined in the scene to be positioned, fingerprint data are collected at each position point with the relevant equipment, the fingerprint data are recorded, and the reference point coordinates are labeled. In the online stage, the fingerprint features measured by the device to be positioned are compared and matched against the established database to estimate the device's position. Commonly used matching algorithms include K-nearest-neighbor matching, weighted K-nearest-neighbor matching, neural networks, and the like.
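To make the online matching stage concrete, the following is a minimal NumPy sketch of weighted K-nearest-neighbor matching against a pre-built fingerprint library; the library contents, the value of K and the inverse-distance weighting are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the online matching stage with weighted K-nearest neighbors.
import numpy as np

def wknn_locate(query_fp, library_fps, library_coords, k=3, eps=1e-6):
    """Estimate a position by weighted K-nearest-neighbor matching.

    query_fp      : (D,)   fingerprint measured by the device to be positioned
    library_fps   : (R, D) fingerprints recorded at R reference points (offline stage)
    library_coords: (R, 2) coordinates of the reference points
    """
    dists = np.linalg.norm(library_fps - query_fp, axis=1)   # Euclidean distances to the library
    nearest = np.argsort(dists)[:k]                          # indices of the K closest fingerprints
    weights = 1.0 / (dists[nearest] + eps)                   # closer fingerprints weigh more
    weights /= weights.sum()
    return weights @ library_coords[nearest]                 # weighted average of coordinates

# Toy usage: three reference points, one query fingerprint.
lib = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
coords = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])
print(wknn_locate(np.array([0.9, 0.9]), lib, coords, k=2))
```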
Fingerprint-based positioning algorithms offer relatively high positioning accuracy and low hardware requirements, and data acquisition can be completed without adding extra equipment. Positioning technology based on visual images is widely used in navigation and positioning because no equipment needs to be deployed in advance and the signal acquisition cost is low. Indoor positioning by image matching has the advantages of stable features and low noise influence, but the feature dimensionality is generally high, the feature matching process is complex and resource-intensive, and real-time performance and low cost are difficult to achieve.
The fingerprint features of a single positioning source are easily disturbed by the environment and are unstable. Because of its limited information, a single positioning source also generally suffers from low feature precision and insufficient completeness. In contrast, fused fingerprint features built from multi-sensor data can provide position information from different angles, further enhancing the accuracy and stability of the fingerprint. Data fusion of multiple positioning sensors offers better fault tolerance, complementarity, real-time performance and economy. Multi-source fusion positioning is becoming the trend for high-precision indoor positioning.
The data fusion algorithm is the key to utilizing multi-sensor information. The existing multisource fusion positioning method at present can be mainly divided into decision-level fusion and feature-level fusion: the decision-level fusion is a method for performing high-level decision based on the judgment results of a plurality of sensors and according to a certain rule; the feature level fusion is to perform feature information extraction on the sensor raw data respectively and then fuse the extracted feature information to form richer and more stable features, so that the features are used for final decision making.
Compared with decision-level fusion, the positioning process based on feature-level fusion is simpler, which further improves the real-time performance and usability of the system. However, most heterogeneous feature fusion algorithms in existing fusion positioning systems remain at the level of feature combination or screening, and the problem of insufficient fingerprint feature discrimination is difficult to resolve fundamentally.
Disclosure of Invention
In view of the above, the present invention provides a method for training a fusion characterization network model, a method for characterizing fingerprints, and a device thereof, so as to solve one or more of the defects in the prior art.
In order to achieve the purpose, the invention is realized by adopting the following scheme:
according to an aspect of the embodiments of the present invention, there is provided a method for training a fusion characterization network model, including:
acquiring a training sample set, wherein each training sample comprises channel state information data corresponding to the same environment position and image data of a plurality of directions;
carrying out feature extraction on the channel state information data in the training samples by utilizing a multilayer perceptron network to obtain a channel state information feature map corresponding to the corresponding training samples;
respectively extracting the features of the image data of each direction in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each direction corresponding to the corresponding training sample;
fusing the feature maps of the images of all the directions corresponding to the same training sample to obtain a multi-direction feature map corresponding to the corresponding training sample;
splicing the channel state information characteristic diagram and the multi-azimuth characteristic diagram corresponding to the same training sample by using the characteristic fusion layer, and then constructing fusion representation of the channel state information and the image corresponding to the corresponding training sample;
constructing a fusion fingerprint library by using the feature fingerprints corresponding to the fusion representations of the channel state information and the images of each training sample, and, based on the fusion fingerprint library, performing parameter optimization on a network model comprising the multilayer perceptron network, the convolutional neural network and the feature fusion layer by using a set measure index, so that feature fingerprints at the same environmental position are close to each other and feature fingerprints at different environmental positions are far from each other, thereby obtaining the trained network model as the fusion characterization network model; the set measure index is used for measuring the distances between the feature fingerprints in the fusion fingerprint library.
In some embodiments, the channel state information data in the training samples is channel state amplitude data.
In some embodiments, the fusing the feature maps of the images of all the orientations corresponding to the same training sample to obtain a multi-orientation feature map corresponding to the corresponding training sample includes:
and obtaining the multi-azimuth characteristic diagram corresponding to the corresponding training sample by superposing and fusing the characteristic diagrams of the images of all azimuths corresponding to the same training sample.
In some embodiments, the performing feature extraction on the image data of each orientation in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each orientation corresponding to the corresponding training sample includes:
and respectively carrying out one-dimensional key feature extraction on the image data of each direction in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each direction corresponding to the corresponding training sample.
In some embodiments, the convolutional neural network comprises: a convolutional layer, a pooling layer, and a tiling layer;
respectively carrying out one-dimensional key feature extraction on the image data of each azimuth in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each azimuth corresponding to the corresponding training sample, wherein the feature map comprises the following steps:
performing convolution operation on the image data of each direction in the training sample by using the convolution layer to obtain a multi-dimensional feature map;
performing maximum pooling operation on the multi-dimensional feature map by using a pooling layer to obtain a simplified and dimension-reduced feature map;
and sequentially carrying out nonlinear transformation on the simplified and dimension-reduced characteristic diagram by using a ReLu activation function, randomly discarding part of neuron nodes by using a Dropout strategy, and carrying out tiling and spreading by using a tiling layer to obtain the characteristic diagram of the image of each direction corresponding to the corresponding training sample.
In some embodiments, constructing a fused fingerprint library by using the channel state information corresponding to each training sample and the feature fingerprint corresponding to the fused representation of the image includes:
dividing channel state information corresponding to each training sample and characteristic fingerprints corresponding to the fusion characterization of the images according to triples, thereby forming a fusion fingerprint library; each triple comprises a characteristic fingerprint anchor sample, a characteristic fingerprint positive sample with the same position as the characteristic fingerprint anchor sample and a characteristic fingerprint negative sample with the different position from the characteristic fingerprint anchor sample;
based on a fusion fingerprint library and by utilizing set measure indexes, carrying out parameter optimization on a network model comprising the multilayer perceptron network, the convolutional neural network and the feature fusion layer, and comprising the following steps:
and taking the inverse number of the set measure index as a minimized objective function, and performing parameter optimization on a network model comprising the multilayer perceptron network, the convolutional neural network and the feature fusion layer by using an Adam algorithm and based on a fusion fingerprint library.
In some embodiments, the objective function is represented as:
L = −D = Σ_{i=1}^{N} ( ||v_i^a − v_i^p||₂ − ||v_i^a − v_i^n||₂ + α )

wherein L represents the value of the objective function, D represents the set measure index, N is the total number of triples, v_i^a, v_i^p and v_i^n respectively represent the feature fingerprint anchor sample, the feature fingerprint positive sample and the feature fingerprint negative sample of the ith triple, and α represents an adjustable parameter.
According to another aspect of the embodiments of the present invention, there is also provided a method for characterizing a fused feature fingerprint, including:
acquiring channel state information data at a set environment position and image data of a plurality of azimuths;
and processing the channel state information data and the image data of a plurality of directions by using the fusion characterization network model obtained by training in the method of any embodiment to obtain the characteristic fingerprint at the set environment position.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any of the above embodiments when executing the computer program.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method of any of the above embodiments.
The fusion characterization network model training method, the fused feature fingerprint characterization method, the electronic device and the computer-readable storage medium described above achieve deep fusion of two kinds of heterogeneous positioning data, channel state information and multi-azimuth images. Because the two kinds of heterogeneous data are to some extent complementary, the positioning information is enriched and the positioning accuracy can be improved. Moreover, convolutional networks with the same weights are used to extract features from the images of different azimuths, so that the extracted azimuth features are positionally related; fusing the different azimuth features together removes the influence of azimuth differences and makes the information richer. In addition, the network model parameters are optimized with the measure index, which improves feature discrimination and makes the positioning result more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts. In the drawings:
FIG. 1 is a schematic flow chart diagram of a method for training a converged characterization network model according to an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating a network model for fusion characterization of CSI and images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a framework structure for optimizing parameters of a fusion characterization model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
In a complex indoor environment a single positioning source suffers from low feature precision and insufficient completeness, and current fusion algorithms find it difficult to change the feature discrimination. The inventors observed that CSI (channel state information) and image positioning are to some extent complementary, and that fusing the data can enrich the positioning information. Multi-source fusion positioning can effectively alleviate the problems of low feature precision and insufficient completeness of a single positioning source in a complex environment.
Moreover, most heterogeneous feature fusion algorithms in existing fusion positioning systems remain at the level of feature combination or screening, and a unified fusion characterization method for heterogeneous features is still lacking, so the problem of insufficient fingerprint feature discrimination is difficult to resolve fundamentally; introducing a measure index into the fusion characterization can further improve fingerprint discrimination.
To address the problem of high-discrimination fusion characterization of the heterogeneous data formed by channel state information and multi-azimuth images, the invention provides a fusion characterization network model training method that performs feature-level deep fusion of CSI and image heterogeneous positioning data and trains the network parameters with a measure index as the optimization target, thereby obtaining a high-discrimination fusion characterization model.
Fig. 1 is a schematic flow chart of a method for training a converged characterization network model according to an embodiment of the present invention, and as shown in fig. 1, the method for training a converged characterization network model according to the embodiment may include the following steps S110 to S160.
Specific embodiments of steps S110 to S160 will be described in detail below.
Step S110: and acquiring a training sample set, wherein each training sample comprises channel state information data corresponding to the same environment position and image data of a plurality of orientations.
In step S110, the environment location may be, for example, the location of one location point of the indoor environment. Image data of a plurality of orientations can be obtained by taking images from different orientations for the same position point. The number of orientations may be determined as desired, such as 2, 3, 4, 5, etc. The channel state information data may be, for example, data of a WiFi signal.
The channel state information data in the training samples are channel state amplitude data, specifically raw channel state amplitude data (CSI data). In other embodiments, the channel state information data may be channel state phase data, which is generally obtained by correcting the raw channel state phase data.
Step S120: and performing feature extraction on the channel state information data in the training samples by using the multi-layer perceptron network to obtain a channel state information feature map corresponding to the corresponding training sample.
In step S120, the number of layers of the multi-layer perceptron network may be three, for example. The channel state information data can be in a vector form, and the multi-layer perceptron network can well extract the characteristics of the vector data.
Step S130: and respectively carrying out feature extraction on the image data of each direction in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each direction corresponding to the corresponding training sample.
In step S130, the convolutional neural networks with the same weight may refer to that when the images in each orientation are subjected to feature extraction by using the convolutional neural network (CNN network), the weights in the convolutional neural networks used are the same, so that the feature maps of the images in different orientations can be associated in position. The activation function may be a ReLu activation function, and the ReLu activation function may be used to perform a nonlinear transformation.
In specific implementation, step S130, namely, performing feature extraction on the image data of each orientation in the training sample by using the convolutional neural networks with the same weight, to obtain a feature map of the image of each orientation corresponding to the corresponding training sample, specifically may include the steps of: s131, performing one-dimensional key feature extraction on the image data of each direction in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each direction corresponding to the corresponding training sample.
In this embodiment, by extracting the one-dimensional key features, the orientation difference at the feature extraction stage can be eliminated.
In a further embodiment, in the step S130, the convolutional neural network may include: convolutional layers, pooling layers, and tiling layers. In specific implementation, step S131, that is, performing one-dimensional key feature extraction on the image data of each orientation in the training sample by using the convolutional neural networks with the same weight, to obtain a feature map of the image of each orientation corresponding to the corresponding training sample, may specifically include the steps of: S1311, performing a convolution operation on the image data of each orientation in the training sample by using the convolutional layer to obtain a multi-dimensional feature map; S1312, performing a maximum pooling operation on the multi-dimensional feature map by using a pooling layer to obtain a simplified and dimension-reduced feature map; and S1313, sequentially performing a nonlinear transformation on the simplified and dimension-reduced feature map by using a ReLU activation function, randomly discarding part of the neuron nodes by using a Dropout strategy, and tiling and unfolding by using a tiling layer to obtain the feature map of the image of each orientation corresponding to the corresponding training sample.
The activation function can be mainly used for carrying out nonlinear transformation in the dimension reduction process, and the complexity in the transformation fitting process is enhanced. The simplification and dimensionality reduction process comprises linear transformation and nonlinear transformation, the linear transformation and the nonlinear transformation can jointly complete the conversion of a data domain, and a result obtained after the transformation can be called a feature map. The Dropout strategy is used for processing intermediate nodes (hidden nodes) in the neural network, and random discarding can increase sample diversity and improve model robustness in the training process. The 'tiling expansion' belongs to one layer in the network, and the operation of the layer is mainly to expand a multidimensional characteristic diagram into a one-dimensional vector according to a rule and then connect the next layer.
In this embodiment, the feature map after dimension reduction is simplified, the receptive field is expanded while over-fitting is prevented. The calculation result contains more random structures through a Dropout strategy. One-dimensional features can be obtained by tiling the unfolding.
Step S140: and fusing the feature maps of the images of all the directions corresponding to the same training sample to obtain a multi-direction feature map corresponding to the corresponding training sample.
In step S140, a new feature vector with more information can be obtained by fusing the features of each direction.
In specific implementation, the adopted fusion strategy can be superposition or splicing.
For example, in the step S140, that is, the feature maps of the images in all the orientations corresponding to the same training sample are fused to obtain the multi-orientation feature map corresponding to the corresponding training sample, specifically, the method may include the steps of: and S141, overlapping and fusing the feature maps of the images of all the directions corresponding to the same training sample to obtain a multi-direction feature map corresponding to the corresponding training sample.
In this example, the features extracted from the images with different orientations can be overlaid element by an overlay strategy to realize disordered fusion. The information can be overlapped in an overlapping mode, and the information amount of each dimension can be increased under the condition that the characteristic dimension is not changed. Thus, the unordered fusion of the multi-azimuth features can be completed, and meanwhile, the calculation amount is reduced.
In other embodiments, the fusion may be performed by using a splicing method, and specifically, the splicing may be performed in an azimuth order.
Step S150: and splicing the channel state information characteristic diagram and the multi-azimuth characteristic diagram corresponding to the same training sample by using the characteristic fusion layer to construct fusion representation of the channel state information and the image corresponding to the corresponding training sample.
In step S150, the feature fusion layer may include a sensing layer, for example, may be a basic sensing layer.
Step S160: constructing a fusion fingerprint database by using channel state information corresponding to each training sample and feature fingerprints corresponding to fusion characterization of images, and performing parameter optimization on a network model comprising the multilayer perceptron network, the convolutional neural network and the feature fusion layer by using set measure indexes based on the fusion fingerprint database so as to enable the distance between the feature fingerprints at the same environmental position to be close and the distance between the feature fingerprints at different environmental positions to be far, thereby obtaining a trained network model as a fusion characterization network model; and setting a measure index for measuring the distance between the characteristic fingerprints in the fused fingerprint library.
In specific implementation, in the step S160, constructing a fused fingerprint library by using the channel state information corresponding to each training sample and the feature fingerprint corresponding to the fused representation of the image may specifically include the steps of: s161, dividing the channel state information corresponding to each training sample and the characteristic fingerprint corresponding to the fusion representation of the image according to the triples, thereby forming a fusion fingerprint library; each triple comprises a characteristic fingerprint anchor sample, a characteristic fingerprint positive sample with the same position as the characteristic fingerprint anchor sample, and a characteristic fingerprint negative sample with the position different from the characteristic fingerprint anchor sample.
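As an illustration of step S161, the following is a minimal Python sketch of partitioning feature fingerprints into (anchor, positive, negative) triples by position label; the data layout and the random sampling strategy are assumptions, not part of the patent.

```python
# Minimal sketch of dividing feature fingerprints into (anchor, positive, negative) triples.
# Fingerprints and position labels are illustrative assumptions.
import random

def build_triplets(fingerprints, positions, n_triplets):
    """fingerprints: list of feature vectors; positions: matching list of position labels."""
    by_pos = {}
    for fp, pos in zip(fingerprints, positions):
        by_pos.setdefault(pos, []).append(fp)
    triplets = []
    all_pos = list(by_pos)
    for _ in range(n_triplets):
        pos = random.choice([p for p in all_pos if len(by_pos[p]) >= 2])
        anchor, positive = random.sample(by_pos[pos], 2)          # same environment position
        neg_pos = random.choice([p for p in all_pos if p != pos])
        negative = random.choice(by_pos[neg_pos])                 # different environment position
        triplets.append((anchor, positive, negative))
    return triplets
```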
In the step S160, performing parameter optimization on the network model including the multi-layered perceptron network, the convolutional neural network and the feature fusion layer by using a set measure index based on the fusion fingerprint library, specifically, the method may include the steps of: and S162, taking the opposite number of the set measure indexes as a minimized target function, and performing parameter optimization on a network model comprising the multilayer perceptron network, the convolutional neural network and the feature fusion layer by utilizing an Adam algorithm and based on a fusion fingerprint library.
For example, the objective function may be expressed as:
L = −D = Σ_{i=1}^{N} ( ||v_i^a − v_i^p||₂ − ||v_i^a − v_i^n||₂ + α )

wherein L represents the value of the objective function, D represents the set measure index, N is the total number of triples, v_i^a, v_i^p and v_i^n respectively represent the feature fingerprint anchor sample, the feature fingerprint positive sample and the feature fingerprint negative sample of the ith triple, and α represents an adjustable parameter.
In addition, the embodiment of the invention also provides a fusion characteristic fingerprint characterization method, which comprises the following steps:
s210: acquiring channel state information data at a set environment position and image data of a plurality of azimuths;
s220: the fusion characterization network model obtained by training by using the fusion characterization network model training method of any embodiment of the invention processes the channel state information data and the image data of a plurality of directions to obtain the characteristic fingerprint of the set environment position.
One environment location point can correspond to one or more fingerprint characteristics, and for one environment, a fingerprint library can be formed by a plurality of different location points and corresponding characteristic fingerprints thereof for positioning.
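For illustration only, the following sketch shows how such a fingerprint library could be built and queried with a trained fusion characterization model; the `model` callable and the data shapes are assumptions, not part of the patent.

```python
# Hedged sketch of steps S210-S220 in use: characterize each reference point with a trained
# fusion characterization model to build the library, then characterize and match a query.
# `model` is an assumed callable returning a feature fingerprint as a NumPy array.
import numpy as np

def build_library(model, reference_data):
    """reference_data: list of (coords, csi, images) tuples collected at reference points."""
    coords, fingerprints = [], []
    for xy, csi, images in reference_data:
        coords.append(xy)
        fingerprints.append(model(csi, images))          # S220: fused feature fingerprint
    return np.array(coords), np.array(fingerprints)

def locate(model, csi, images, coords, fingerprints):
    query = model(csi, images)                           # fingerprint at the position to estimate
    i = np.argmin(np.linalg.norm(fingerprints - query, axis=1))
    return coords[i]                                     # nearest-neighbor position estimate
```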
In addition, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for training the fused representation network model according to any of the above embodiments or the method for characterizing the fused feature fingerprint according to any of the above embodiments when executing the program.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the fusion characterization network model training method according to any of the above embodiments or the fusion feature fingerprint characterization method according to any of the above embodiments.
According to the embodiment of the invention, heterogeneous CSI data and multi-azimuth image data are mapped to the fusion characterization domain through the neural network, and the parameter optimization of the characterization network is carried out through maximizing the discrimination of the fusion fingerprint, so that the final fusion characterization network is obtained, and the method can be used for the construction and matching positioning of the high-discrimination fusion fingerprint of the CSI-image. The CSI and the image positioning have certain complementarity, and the positioning information can be enriched by fusing data. The fusion of the channel state information and the multi-azimuth image represents the deep fusion of two heterogeneous positioning data of a network, so that the fingerprint discrimination can be enhanced, and the positioning accuracy can be improved.
The above method is described below with reference to a specific example, however, it should be noted that the specific example is only for better describing the present application and is not to be construed as limiting the present application.
Aiming at the problems of low feature precision and insufficient completeness of a single positioning source in a complex indoor environment and difficulty in changing feature discrimination of a current fusion algorithm, the fusion characterization network for channel state information and multi-azimuth images of the embodiment enhances fingerprint discrimination and improves positioning accuracy by realizing deep fusion of two heterogeneous positioning data.
The fused fingerprint representation method of the embodiment is realized based on a fused representation network model. The network model adopts a shared Convolutional Neural Network (CNN) and a superposition strategy to obtain multi-azimuth image characteristics, adopts a multilayer perceptron to extract CSI amplitude characteristics, reduces the dimension of the combined characteristics to obtain final fusion characterization, and completes parameter optimization in the whole network mainly by maximizing the measure indexes of a fusion characterization fingerprint library.
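The following is a hedged PyTorch-style sketch of the architecture just described: a weight-shared CNN per azimuth image, element-wise (Add) fusion of the azimuth features, a three-layer perceptron for the CSI amplitude vector, and a perception layer mapping the concatenated features to the fused representation. All layer sizes, channel counts and dimensions are assumptions and not the patent's reference implementation.

```python
# Hedged PyTorch sketch of the fusion characterization network described above.
import torch
import torch.nn as nn

class SharedCNN(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.MaxPool2d(2),                             # max pooling: simplify and reduce dimension
            nn.ReLU(),                                   # nonlinear transformation
            nn.Dropout(0.3),                             # randomly discard part of the nodes
            nn.Flatten(),                                # tiling layer: unfold to a 1-D vector
            nn.LazyLinear(feat_dim),
        )

    def forward(self, img):                              # img: (B, 1, M, N)
        return self.features(img)

class FusionCharacterizationNet(nn.Module):
    def __init__(self, n_subcarriers=30, img_feat=128, csi_feat=128, fused_dim=64):
        super().__init__()
        self.shared_cnn = SharedCNN(img_feat)            # one CNN, reused for every azimuth
        self.csi_mlp = nn.Sequential(                    # three-layer perceptron for CSI amplitude
            nn.Linear(n_subcarriers, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, csi_feat), nn.ReLU(),
        )
        self.fusion = nn.Sequential(                     # feature fusion (perception) layer
            nn.Linear(csi_feat + img_feat, fused_dim), nn.ReLU(),
        )

    def forward(self, csi, images):                      # csi: (B, K), images: list of (B, 1, M, N)
        y_csi = self.csi_mlp(csi)
        y_ima = torch.stack([self.shared_cnn(im) for im in images]).sum(dim=0)  # Add fusion
        return self.fusion(torch.cat([y_csi, y_ima], dim=1))                    # fused fingerprint

# Toy forward pass with four azimuths.
net = FusionCharacterizationNet()
csi = torch.randn(2, 30)
imgs = [torch.randn(2, 1, 32, 32) for _ in range(4)]
print(net(csi, imgs).shape)   # torch.Size([2, 64])
```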
Fig. 2 is a schematic diagram of a fusion characterization network model of CSI and images according to an embodiment of the present invention. Referring to fig. 2, the two inputs of the network model are the CSI raw amplitude data and the multi-azimuth raw image data (indoor scene images collected by a camera from different azimuths at a given position point, without any other pre-processing), taking four azimuths as an example. The data structure of the raw amplitude of a single CSI packet can be expressed as:
CSI = [CSI_1, …, CSI_i, …, CSI_K] (1)

wherein K is the total number of subcarriers in the channel and CSI_i represents the amplitude of subcarrier i.
The data structure of the multi-azimuth image may be represented as:

Image = [Image_1, …, Image_i, …, Image_m], Image_i ∈ R^(M×N) (2)

wherein Image_i represents the image in the ith azimuth, m is the number of azimuths, and M and N represent the number of pixel rows and columns, respectively, of a single image.
The CSI amplitude and the multi-azimuth image are two heterogeneous positioning data, and in order to obtain the fusion representation of the positioning data, the network needs to extract and process the characteristics of the two data respectively to realize the dimensionality reduction and the isomorphism of the two data.
To extract the key features of the CSI amplitude vector, a classical multilayer perceptron is adopted to process the amplitude vector. Each layer completes a linear transformation through weights and biases and a nonlinear transformation through an activation function; increasing the number of layers strengthens the network's ability to fit complex nonlinearities, and three layers are generally sufficient for most requirements. The recurrence from the layer l−1 neurons to the layer l neurons in the perceptron can be expressed as:
z_i^(l) = Σ_j w_{ij}^(l) · a_j^(l−1) + b_i^(l) (3)

a_i^(l) = φ( z_i^(l) ) (4)

wherein a_i^(l) is the value of the ith neuron of layer l, a_j^(l−1) is the value of the jth neuron of layer l−1, w_{ij}^(l) is the weight of the connection between the jth neuron of layer l−1 and the ith neuron of layer l, b_i^(l) is the bias corresponding to the ith neuron of layer l, and the activation function φ is the ReLU function, whose expression is:
ReLU(x) = max(0, x) (5)

wherein x is the input value of a neuron.
The final output characteristics of the perceptron are:
y_CSI = [ y_1, …, y_i, … ] = [ a_1^(L), …, a_i^(L), … ] (6)

wherein L is the total number of layers of the multilayer perceptron and y_i is the value of the ith node of the output CSI feature y_CSI.
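A minimal NumPy sketch of the recurrence in formulas (3)-(6), with random weights standing in for trained parameters and the layer sizes chosen arbitrarily:

```python
# Minimal NumPy sketch of formulas (3)-(6): per-layer linear transform, ReLU activation,
# and the final CSI feature y_CSI. Weights and layer sizes are illustrative assumptions.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                 # formula (5)

def mlp_forward(csi, weights, biases):
    a = csi                                   # layer-0 values are the CSI amplitudes
    for W, b in zip(weights, biases):
        z = W @ a + b                         # formula (3): weighted sum plus bias
        a = relu(z)                           # formula (4): nonlinear transformation
    return a                                  # formula (6): output feature y_CSI

rng = np.random.default_rng(0)
sizes = [30, 128, 128, 64]                    # K subcarriers -> three perceptron layers
Ws = [rng.standard_normal((o, i)) * 0.1 for i, o in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(o) for o in sizes[1:]]
print(mlp_forward(rng.standard_normal(30), Ws, bs).shape)   # (64,)
```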
In order to realize the key feature extraction of the high-dimensional multi-azimuth image vector, a shared CNN model is firstly designed to perform one-dimensional key feature extraction on each azimuth image so as to extract the key features related to the position in each azimuth image and eliminate the azimuth difference in the feature extraction stage; and then, overlapping and fusing the key features of all the directions to obtain a new feature vector with richer information so as to complete disorder fusion of the multi-direction features and reduce the calculated amount.
The shared CNN network includes convolution, pooling, Dropout and flatten (tiling) layers. Convolution is mainly adopted to extract features with rotation and translation invariance from the input image Image_n of azimuth n; the two-dimensional convolution formula is:
s(i, j) = Σ_{k=1}^{n_in} ( X_k ∗ W_k )(i, j) + b (7)

where X and W represent the overall input and weights respectively, n_in is the total number of channels of the input data, X_k is the input matrix of the kth channel, W_k is the sub-convolution-kernel matrix corresponding to the kth channel, s(i, j) is the element of the output matrix corresponding to convolution kernel W at row i and column j, and b is the bias.
A multi-dimensional feature map is obtained after the convolution operation. Maximum pooling is adopted to simplify the feature map and reduce its dimension, enlarging the receptive field and preventing overfitting, and a ReLU activation function completes the nonlinear transformation. Then, to enhance network robustness, a Dropout strategy randomly discards part of the neuron nodes to increase randomness, so that the computation result contains more random structure. Finally, to obtain one-dimensional key features of the image, the resulting feature map is tiled and unfolded to give the initial feature h_n of the azimuth-n image, according to the formula:
h_n = Shared_model(Image_n) (8)

where Shared_model(·) denotes the shared CNN model.
Feature extraction based on the shared CNN model is a process of data compression and unified, position-related feature extraction applied separately, with models of identical weights, to the multi-azimuth image data of the same position point. The sharing allows effective image features to be extracted for representing position features even though azimuth information is discarded, eliminates azimuth differences at the feature extraction stage, and strengthens the positional correlation of the features of each azimuth image.
After the shared CNN model extracts the feature of each azimuth, the network needs to adopt a fusion strategy to integrate the information of the multi-channel azimuth feature maps. Feature map fusion strategies fall into two types: Concat concatenation and Add superposition. The concatenation mode is commonly used to combine features or fuse output-layer information and mainly merges the channel dimension, i.e. the number of feature dimensions increases but the amount of information per dimension is unchanged. The superposition mode mainly superimposes information and increases the amount of information per dimension without changing the feature dimensionality. For the feature map information of one-dimensional images, Concat concatenation leads to a higher data dimension, more redundant information, and a strong dependence on the azimuth splicing order, so this embodiment adopts the Add superposition strategy to superimpose the features extracted from images of different azimuths element by element and realize an order-free fusion. The feature map h of the multi-azimuth image can be represented as:
h = [ h_1; h_2; …; h_m ], h_i = [ h_{i,1}, h_{i,2}, …, h_{i,n} ] (9)

wherein m is the number of azimuths, n is the feature dimension of a single-azimuth image, h_i is the feature map of azimuth i, h_{i,j} is the element of azimuth i on feature dimension j, i ranges over the integers from 1 to m, and j over the integers from 1 to n.
The fusion feature y_IMA based on superimposed information can be expressed as:

y_IMA = [ r_1, r_2, …, r_i, …, r_n ] (10)

wherein

r_i = h_{1,i} + h_{2,i} + … + h_{m,i} = Σ_{k=1}^{m} h_{k,i} (11)
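The difference between the Concat and Add strategies can be made concrete with a short sketch; the number of azimuths and the feature dimension are illustrative assumptions.

```python
# Short sketch contrasting the Concat and Add strategies for m azimuth feature maps
# of dimension n, as in formulas (9)-(11).
import numpy as np

m, n = 4, 128                                  # m azimuths, n-dimensional features
h = np.random.randn(m, n)                      # h[i, j]: element j of the azimuth-i feature

concat = h.reshape(-1)                         # Concat: dimension grows to m*n
y_ima = h.sum(axis=0)                          # Add: r_i = sum over azimuths, dimension stays n
print(concat.shape, y_ima.shape)               # (512,) (128,)
```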
in order to obtain fusion representation of two heterogeneous data characteristics, a basic sensing layer is adopted as a characteristic fusion layer, fitting of a fusion representation mapping process is realized by using a full connection and an activation function after splicing CSI amplitude characteristics and multi-azimuth image characteristics, linear transformation of a characteristic domain is realized by endowing different dimensions of the two characteristics with respective weight parameters, and nonlinear transformation of the characteristic domain is realized by the activation function, so that a final heterogeneous characteristic fusion representation domain is constructed, wherein the calculation formula is as follows:
v_m^k = φ( Σ_{i=1}^{N_CSI} w_{m,i}^{CSI} · y_i^{CSI,k} + Σ_{j=1}^{N_IMA} w_{m,j}^{IMA} · y_j^{IMA,k} + c_m ), m = 1, …, M (12)

wherein v_m^k is the mth element of the heterogeneous-feature-space fusion vector at the kth fingerprint point, M is the dimension of the heterogeneous feature space vector, y_i^{CSI,k} and y_j^{IMA,k} are the ith dimension of the channel state information feature vector and the jth dimension of the multi-azimuth image feature vector of the kth fingerprint point, N_CSI and N_IMA are the dimensions of the two feature vectors, φ is the activation function, and the weights w and the biases c are adjustable parameters.
In order to implement high-resolution fusion fingerprint characterization, the present embodiment implements parameter optimization of a fusion characterization network model according to a measure index of a heterogeneous fusion feature space.
High-quality fused fingerprint features should satisfy the following condition in the characterization domain: the distance between feature fingerprints of the same position point is small, and the distance between feature fingerprints of different position points is large. Following this principle, this embodiment uses the classical Euclidean distance as the basic distance measure between fusion characterizations and defines the measure index of the whole fused fingerprint library on this basis. Fig. 3 is a schematic diagram of the parameter optimization framework of the fusion characterization model in an embodiment of the present invention. Referring to fig. 3, all fused fingerprints are divided into triplets, each triplet containing an anchor sample v^a, a positive sample v^p at the same position as the anchor sample, and a negative sample v^n at a different position from the anchor sample. To reduce the distance between fingerprints of the same position point and increase the distance between fingerprints of different position points, the measure index D of the fused fingerprint library can be expressed as:

D = Σ_{i=1}^{N} ( ||v_i^a − v_i^n||₂ − ||v_i^a − v_i^p||₂ − α ) (13)

wherein v_i^a, v_i^p and v_i^n are respectively the anchor-sample, positive-sample and negative-sample position point fingerprints in the ith triplet, α is a minimum threshold for distinguishing the negative-pair distance from the positive-pair distance, and N is the total number of triplet samples formed by the anchor samples and the positive-negative sample pairs.
The measure index of the heterogeneous feature fusion characterization domain theoretically gives a quantitative description of the feature discrimination of the fingerprints in the fusion characterization domain and thus guides the optimization of the fusion characterization parameters. The parameters of the fusion characterization model comprise the weights and thresholds in the multilayer perceptron, the shared CNN and the feature fusion layer; each parameter represents the contribution of the information of one dimension of the feature vector to the fusion characterization domain, and the optimization process selects suitable parameters so that feature information with high discrimination makes a relatively large contribution to the fusion characterization. The heterogeneous feature space measure index is proportional to the degree of difference between fingerprints at different positions in the fingerprint library and inversely proportional to the degree of difference between fingerprints at the same position, so a high-discrimination fingerprint library is constructed by maximizing the measure index of the fingerprint library. Accordingly, the negative of the measure index D is chosen as the minimized objective function L:

L = −D = Σ_{i=1}^{N} ( ||v_i^a − v_i^p||₂ − ||v_i^a − v_i^n||₂ + α ) (14)
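A minimal NumPy sketch of the measure index D and the minimized objective L = −D over a batch of triplets, assuming plain Euclidean distances and an illustrative margin value:

```python
# Hedged NumPy sketch of formulas (13)-(14) for a set of (anchor, positive, negative) triples.
# The exact distance form and the margin value are assumptions.
import numpy as np

def measure_index(anchors, positives, negatives, alpha=0.2):
    d_ap = np.linalg.norm(anchors - positives, axis=1)   # same-position pair distances
    d_an = np.linalg.norm(anchors - negatives, axis=1)   # different-position pair distances
    return np.sum(d_an - d_ap - alpha)                   # larger D = better-separated fingerprints

def objective(anchors, positives, negatives, alpha=0.2):
    return -measure_index(anchors, positives, negatives, alpha)   # minimized during training

rng = np.random.default_rng(1)
a, p, n = (rng.standard_normal((8, 64)) for _ in range(3))
print(objective(a, p, n))
```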
the optimization algorithm adopts an Adam algorithm, and the updating formula is as follows:
w_t = w_{t−1} − α · m̂_t / ( √v̂_t + ε ) (15)

wherein t is the update index, w_t is the weight parameter after the tth update, w_{t−1} is the weight parameter after the (t−1)th update, α and ε are settable parameters, m̂_t is the corrected m_t, and v̂_t is the corrected v_t.
m̂_t = m_t / ( 1 − β_1^t ), v̂_t = v_t / ( 1 − β_2^t )

wherein β_1 and β_2 are constants that control the exponential decay, m_t is the exponential moving average of the gradient after the tth update (obtained from the first moment of the gradient), and v_t is the exponential moving average of the squared gradient after the tth update (obtained from the second moment of the gradient). The update formulas of m_t and v_t are as follows:
m_t = β_1 · m_{t−1} + ( 1 − β_1 ) · g_t (16)

v_t = β_2 · v_{t−1} + ( 1 − β_2 ) · g_t² (17)

wherein g_t is the gradient at the tth update.
The parameters in the above equations may be set by default as: α = 0.001, β_1 = 0.9, β_2 = 0.999, ε = 10^−8.
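For illustration, the following is a NumPy sketch of one Adam update following formulas (15)-(17), applied to a toy objective; the objective and the parameter shape are assumptions.

```python
# Hedged NumPy sketch of one Adam update, formulas (15)-(17), using the default parameters above.
import numpy as np

def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g               # formula (16): moving average of the gradient
    v = beta2 * v + (1 - beta2) * g**2            # formula (17): moving average of the squared gradient
    m_hat = m / (1 - beta1**t)                    # bias-corrected first moment
    v_hat = v / (1 - beta2**t)                    # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # formula (15): parameter update
    return w, m, v

w = np.array([1.0, -2.0])
m = v = np.zeros_like(w)
for t in range(1, 101):                           # minimize ||w||^2 as a toy objective
    g = 2 * w
    w, m, v = adam_step(w, g, m, v, t)
print(w)
```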
In the fusion characterization network of CSI and images, a shared CNN and a superposition strategy complete the feature vector extraction of the multi-azimuth images, a multilayer perceptron completes the CSI amplitude feature extraction, the two features are then spliced and converted into the fusion characterization, and the network parameters are trained with the measure index of the fused fingerprints as the optimization target, giving the final high-discrimination fusion characterization network model. The fusion characterization network of channel state information and multi-azimuth images realizes fused fingerprint characterization of heterogeneous positioning data, effectively enriches the fingerprint information, enhances fingerprint discrimination, and improves positioning accuracy.
In the description herein, reference to the description of the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," "an example," "a particular example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the various embodiments is provided to schematically illustrate the practice of the invention, and the sequence of steps is not limited and can be suitably adjusted as desired.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A fusion characterization network model training method is characterized by comprising the following steps:
acquiring a training sample set, wherein each training sample comprises channel state information data corresponding to the same environment position and image data of a plurality of directions;
carrying out feature extraction on the channel state information data in the training samples by utilizing a multilayer perceptron network to obtain a channel state information feature map corresponding to the corresponding training samples;
respectively extracting the features of the image data of each direction in the training sample by using the convolutional neural networks with the same weight to obtain a feature map of the image of each direction corresponding to the corresponding training sample;
fusing the feature maps of the images of all the directions corresponding to the same training sample to obtain a multi-direction feature map corresponding to the corresponding training sample;
splicing the channel state information characteristic diagram and the multi-azimuth characteristic diagram corresponding to the same training sample by using the characteristic fusion layer, and then constructing fusion representation of the channel state information and the image corresponding to the corresponding training sample;
constructing a fusion fingerprint database by using channel state information corresponding to each training sample and feature fingerprints corresponding to fusion characterization of images, and performing parameter optimization on a network model comprising the multilayer perceptron network, the convolutional neural network and the feature fusion layer by using set measure indexes based on the fusion fingerprint database so as to enable the distance between the feature fingerprints at the same environmental position to be close and the distance between the feature fingerprints at different environmental positions to be far, thereby obtaining a trained network model as a fusion characterization network model; setting a measure index for measuring the distance between the characteristic fingerprints in the fused fingerprint library;
wherein constructing the fusion fingerprint library by using the feature fingerprints corresponding to the fusion characterizations of the channel state information and the images comprises:
dividing the feature fingerprints corresponding to the fusion characterizations of the channel state information and the images of the training samples into triplets, thereby forming the fusion fingerprint library; wherein each triplet comprises a feature fingerprint anchor sample, a feature fingerprint positive sample at the same position as the anchor sample, and a feature fingerprint negative sample at a position different from that of the anchor sample.
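Claim 1 wires three components together: an MLP branch for the channel state information, a weight-shared CNN applied to the image of each orientation, and a feature fusion layer that concatenates the two feature maps into a fusion characterization. The sketch below is a minimal PyTorch reading of that wiring; the class name, layer sizes, embedding dimension, and the choice of summation for the multi-orientation fusion are illustrative assumptions, not specifics disclosed in the patent.

```python
# Minimal sketch of the claim-1 architecture; all names and sizes are assumed.
import torch
import torch.nn as nn

class FusionCharacterizationNet(nn.Module):
    def __init__(self, csi_dim=256, img_channels=3, embed_dim=128):
        super().__init__()
        # Multilayer perceptron branch for the channel state information data.
        self.mlp = nn.Sequential(
            nn.Linear(csi_dim, 512), nn.ReLU(),
            nn.Linear(512, embed_dim), nn.ReLU(),
        )
        # A single CNN instance: reusing it for every orientation gives the
        # shared weights required by claim 1.
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=3, padding=1),
            nn.MaxPool2d(2), nn.ReLU(), nn.Dropout(0.3),
            nn.Flatten(), nn.LazyLinear(embed_dim), nn.ReLU(),
        )
        # Feature fusion layer over the concatenated CSI and image features.
        self.fusion = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, csi, images):
        # csi: (batch, csi_dim); images: (batch, n_orientations, C, H, W)
        csi_feat = self.mlp(csi)
        orient_feats = [self.cnn(images[:, k]) for k in range(images.shape[1])]
        multi_orient = torch.stack(orient_feats, dim=0).sum(dim=0)  # superpose
        fused = torch.cat([csi_feat, multi_orient], dim=1)          # concatenate
        return self.fusion(fused)  # feature fingerprint of the sample
```

The output vector plays the role of the feature fingerprint that is later grouped into triplets to form the fusion fingerprint library.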
2. The method of training a fusion characterization network model of claim 1, wherein the channel state information data in the training samples is channel state information amplitude data.
3. The method for training the fusion characterization network model according to claim 1, wherein the fusing the feature maps of the images of all orientations corresponding to the same training sample to obtain the multi-orientation feature map corresponding to the corresponding training sample comprises:
obtaining the multi-orientation feature map corresponding to the corresponding training sample by superposing and fusing the feature maps of the images of all orientations corresponding to the same training sample.
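Read literally, the superposition in claim 3 amounts to element-wise addition of the per-orientation feature maps, which share a shape because the CNN weights are shared. The fragment below shows that step in isolation; the tensor shapes and the choice of summation (rather than, say, stacking along a new channel axis) are assumptions.

```python
# Superposing and fusing per-orientation feature maps (claim 3); shapes assumed.
import torch

per_orientation = [torch.randn(8, 16, 32, 32) for _ in range(4)]         # 4 orientations
multi_orientation_map = torch.stack(per_orientation, dim=0).sum(dim=0)   # (8, 16, 32, 32)
```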
4. The method for training the fusion characterization network model according to claim 1, wherein performing feature extraction on the image data of each orientation in the training sample by using the convolutional neural networks with shared weights to obtain a feature map of the image of each orientation corresponding to the corresponding training sample comprises:
respectively performing one-dimensional key feature extraction on the image data of each orientation in the training sample by using the convolutional neural networks with shared weights, to obtain the feature map of the image of each orientation corresponding to the corresponding training sample.
5. The method of fusion characterization network model training according to claim 4, wherein the convolutional neural network comprises: a convolutional layer, a pooling layer, and a flattening layer;
wherein respectively performing one-dimensional key feature extraction on the image data of each orientation in the training sample by using the convolutional neural networks with shared weights to obtain the feature map of the image of each orientation corresponding to the corresponding training sample comprises:
performing a convolution operation on the image data of each orientation in the training sample by using the convolutional layer to obtain a multi-dimensional feature map;
performing a max pooling operation on the multi-dimensional feature map by using the pooling layer to obtain a simplified and dimension-reduced feature map;
sequentially performing a nonlinear transformation on the simplified and dimension-reduced feature map by using a ReLU activation function, randomly discarding part of the neuron nodes by using a Dropout strategy, and flattening the result by using the flattening layer, to obtain the feature map of the image of each orientation corresponding to the corresponding training sample.
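Claim 5 fixes the order of operations inside the shared image branch: convolution, max pooling, ReLU, Dropout, flattening. The snippet below annotates that exact ordering step by step; the channel count, kernel size, dropout rate and input resolution are illustrative assumptions, and the branch corresponds to the `self.cnn` block in the sketch after claim 1.

```python
# Per-orientation image branch in the order recited by claim 5; sizes assumed.
import torch
import torch.nn as nn

image_branch = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution -> multi-dimensional feature map
    nn.MaxPool2d(kernel_size=2),                 # max pooling -> simplified, dimension-reduced map
    nn.ReLU(),                                   # nonlinear transformation (ReLU activation)
    nn.Dropout(p=0.3),                           # randomly discard part of the neuron nodes
    nn.Flatten(),                                # flattening layer -> one-dimensional key features
)

x = torch.randn(8, 3, 64, 64)                    # a batch of single-orientation images (assumed size)
features = image_branch(x)                       # shape: (8, 16 * 32 * 32)
```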
6. The fusion characterization network model training method of claim 1,
wherein performing parameter optimization on the network model comprising the multilayer perceptron network, the convolutional neural networks and the feature fusion layer based on the fusion fingerprint library and by using the set measure index comprises:
taking the negative of the set measure index as the objective function to be minimized, and performing parameter optimization on the network model comprising the multilayer perceptron network, the convolutional neural networks and the feature fusion layer based on the fusion fingerprint library by using the Adam algorithm.
7. The fusion characterization network model training method of claim 6,
the objective function is represented as:
[Equation image FDA0003864885230000021: expression for the objective function L; not reproduced in the text.]
wherein L represents the value of the objective function, D represents the set measure index, N is the total number of triplets, v_i^a, v_i^p and v_i^n respectively represent a feature fingerprint anchor sample, a feature fingerprint positive sample and a feature fingerprint negative sample, and α represents an adjustable parameter.
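Because the claim-7 equation survives only as an image reference, its exact form is not visible here. Given the variable definitions above and the triplet construction of claim 1, one plausible reading is a margin-based triplet objective in which the measure index D is a distance between fingerprints and α is the margin; the sketch below implements that reading and should be taken as an assumption rather than the patented formula.

```python
# A hedged reading of the claim-6/7 objective: pull anchor/positive fingerprints
# together and push anchor/negative fingerprints apart, with margin alpha.
import torch

def triplet_objective(v_a, v_p, v_n, alpha=0.2):
    # Euclidean distance as the measure index D (an assumption).
    d_pos = torch.norm(v_a - v_p, dim=1)  # same environmental position: should be small
    d_neg = torch.norm(v_a - v_n, dim=1)  # different positions: should be large
    return torch.clamp(d_pos - d_neg + alpha, min=0).mean()

# Claim 6: optimize the whole network with Adam over the fusion fingerprint library,
# e.g. (model construction and data loading omitted, names hypothetical):
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
#   loss = triplet_objective(model(csi_a, img_a), model(csi_p, img_p), model(csi_n, img_n))
#   loss.backward(); optimizer.step()
```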
8. A method for fused feature fingerprint characterization, comprising:
acquiring channel state information data and image data of a plurality of orientations at a set environmental position;
processing the channel state information data and the image data of the plurality of orientations by using the fusion characterization network model obtained by training according to the method of any one of claims 1 to 7, to obtain a feature fingerprint at the set environmental position.
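Claim 8 is the online phase: a single CSI measurement plus multi-orientation images are pushed through the trained model to produce the fingerprint for one position. A short usage sketch, assuming the hypothetical FusionCharacterizationNet class from the sketch after claim 1 and illustrative input shapes:

```python
# Online fingerprint characterization (claim 8); relies on the assumed
# FusionCharacterizationNet defined in the sketch after claim 1.
import torch

model = FusionCharacterizationNet()       # in practice, trained weights would be loaded
model.eval()

csi = torch.randn(1, 256)                 # CSI measured at the set environmental position
images = torch.randn(1, 4, 3, 64, 64)     # images captured in 4 orientations (assumed)

with torch.no_grad():
    fingerprint = model(csi, images)      # feature fingerprint for that position
```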
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 8 are implemented when the processor executes the program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202110655987.0A 2021-06-11 2021-06-11 Fusion characterization network model training method, fingerprint characterization method and equipment thereof Active CN113343863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110655987.0A CN113343863B (en) 2021-06-11 2021-06-11 Fusion characterization network model training method, fingerprint characterization method and equipment thereof

Publications (2)

Publication Number Publication Date
CN113343863A CN113343863A (en) 2021-09-03
CN113343863B true CN113343863B (en) 2023-01-03

Family

ID=77476739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110655987.0A Active CN113343863B (en) 2021-06-11 2021-06-11 Fusion characterization network model training method, fingerprint characterization method and equipment thereof

Country Status (1)

Country Link
CN (1) CN113343863B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268918B (en) * 2021-11-12 2022-10-18 北京航空航天大学 Indoor CSI fingerprint positioning method for rapid off-line library building
CN114240843A (en) * 2021-11-18 2022-03-25 支付宝(杭州)信息技术有限公司 Image detection method and device and electronic equipment
CN115131619B (en) * 2022-08-26 2022-11-22 北京江河惠远科技有限公司 Extra-high voltage part sorting method and system based on point cloud and image fusion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN110933628A (en) * 2019-11-26 2020-03-27 西安电子科技大学 Fingerprint indoor positioning method based on twin network
CN112040400A (en) * 2020-08-25 2020-12-04 西安交通大学 Single-site indoor fingerprint positioning method based on MIMO-CSI, storage medium and equipment
CN112712557A (en) * 2020-12-17 2021-04-27 上海交通大学 Super-resolution CIR indoor fingerprint positioning method based on convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
C-Map: Hyper-Resolution Adaptive Preprocessing System for CSI Amplitude-Based Fingerprint Localization; Wen Liu et al.; IEEE Access; 2019-09-16; full text *
Multi-Scene Doppler Power Spectrum Modeling; Wen Liu et al.; IEEE Access; 2021-01-14; full text *

Similar Documents

Publication Publication Date Title
CN113343863B (en) Fusion characterization network model training method, fingerprint characterization method and equipment thereof
CN112818903B (en) Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN111882580B (en) Video multi-target tracking method and system
Zhao et al. Discriminative feature learning for unsupervised change detection in heterogeneous images based on a coupled neural network
Haeusser et al. Associative domain adaptation
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
CN103810699B (en) SAR (synthetic aperture radar) image change detection method based on non-supervision depth nerve network
CN111199214B (en) Residual network multispectral image ground object classification method
AU2018209336B2 (en) Determining the location of a mobile device
CN110298404A (en) A kind of method for tracking target based on triple twin Hash e-learnings
CN107451619A (en) A kind of small target detecting method that confrontation network is generated based on perception
CN105825235A (en) Image identification method based on deep learning of multiple characteristic graphs
CN111105439B (en) Synchronous positioning and mapping method using residual attention mechanism network
US7593566B2 (en) Data recognition device
CN100517387C (en) Multi likeness measure image registration method
CN114693983B (en) Training method and cross-domain target detection method based on image-instance alignment network
CN113191213A (en) High-resolution remote sensing image newly-added building detection method
CN110276746A (en) A kind of robustness method for detecting change of remote sensing image
CN117372877A (en) Star map identification method and device based on neural network and related medium
CN115393404A (en) Double-light image registration method, device and equipment and storage medium
CN110349176B (en) Target tracking method and system based on triple convolutional network and perceptual interference learning
CN111340011A (en) Self-adaptive time sequence shift neural network time sequence behavior identification method and system
CN113487530B (en) Infrared and visible light fusion imaging method based on deep learning
CN117253161A (en) Remote sensing image depth recognition method based on feature correction and multistage countermeasure defense
CN116630637A (en) optical-SAR image joint interpretation method based on multi-modal contrast learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant