CN115797163B - Target data cross-domain inversion augmentation method based on remote sensing image - Google Patents
- Publication number: CN115797163B (application CN202310101406.8A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
- Classification: Image Processing (AREA); Image Analysis (AREA)
Abstract
The invention provides a target-data cross-domain inversion augmentation method based on remote sensing images, comprising the following steps: step 1, multi-domain conversion of image data based on a cycle-consistent generative adversarial network; step 2, multi-domain data augmentation based on contrastive learning; and step 3, image migration synthesis to obtain a multi-domain augmented dataset. Taking a generative adversarial network as its framework, the method introduces a multi-domain image-conversion method based on a cycle-consistent generative adversarial network and a multi-domain data augmentation method based on contrastive learning, transfers visible-light remote sensing images into infrared and SAR images, and uses the synthesized dataset as the matching reference images of an unmanned aircraft, thereby realizing the aircraft's navigation and positioning task with multi-domain images from its multi-source sensors. The method performs well and improves the accuracy of the positioning matching algorithm.
Description
Technical Field
The invention belongs to the technical field of image dataset preparation and relates to target data, in particular to a target-data cross-domain inversion augmentation method based on remote sensing images.
Background
In recent years, unmanned patrol aircraft have developed rapidly and are gradually being applied in many fields, such as military reconnaissance and strike, surveying and exploration, fire rescue, and power-line patrol. Realizing intelligent visual navigation and positioning with the multi-source image sensors carried by these aircraft has become a current research hotspot.
With the progress of technology, the resolution of optical remote sensing images continues to improve. Information about a remote target and its surrounding environment can be acquired from optical remote sensing images, enabling tasks such as navigation, positioning, reconnaissance, and strike for unmanned patrol aircraft.
As artificial intelligence technology matures, intelligent scene matching has become an important approach to navigation and positioning. An intelligent matching model based on deep learning is obtained by training on a dataset and mining the differential information in the data, so a large number of multi-domain heterogeneous images are needed as data support in the early stage of model training, and the quality of the dataset directly affects the capability of the resulting model. Most research focuses mainly on the algorithm model itself but ignores the large amount of data that intelligent algorithms require as the driver of better performance.
Because image samples in domains other than visible light are scarce, navigation and positioning with multi-source imaging sensors is a difficult task; developing a target-data cross-domain inversion augmentation method based on remote sensing images is therefore a task of great practical significance and considerable difficulty.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a target-data cross-domain inversion augmentation method based on remote sensing images, addressing the technical problem that the positioning accuracy of the various imaging technologies used in the navigation and positioning tasks of unmanned patrol aircraft still needs improvement.
In order to solve the technical problems, the invention adopts the following technical scheme:
a target data cross-domain inversion augmentation method based on remote sensing images comprises the following steps:
step 1, multi-domain conversion of image data based on a cycle-consistent generative adversarial network:
step 101, image generation based on the cycle-consistent generative adversarial network.
Step 102, discrimination of images generated by the cycle-consistent generative adversarial network.
Step 103, designing a total loss function between the generated image and the ground truth.
Step 2, multi-domain data augmentation based on contrastive learning.
Step 3, image migration synthesis is carried out to obtain the multi-domain augmented dataset:
step 301, a set of unpaired visible-light remote sensing/infrared image datasets and a set of unpaired visible-light remote sensing/SAR image datasets are given, together with a visible-light remote sensing image dataset to be converted, which is used as the verification set;
step 302, the two sets of data given in step 301 are trained by the multi-domain image-conversion method of step 1 based on the cycle-consistent generative adversarial network, and model inference converts the visible-light remote sensing images into a corresponding infrared image dataset and SAR image dataset;
step 303, the two sets of data given in step 301 are trained by the contrastive-learning-based multi-domain data augmentation method of step 2, and model inference converts the visible-light remote sensing images into a corresponding infrared image dataset and SAR image dataset;
step 304, the datasets obtained in steps 302 and 303 are fused into a fused infrared image dataset and a fused SAR image dataset, thereby forming the multi-domain augmented dataset.
Step 4, similarity calculation and matching test:
The multi-domain augmented dataset obtained in step 3 is taken as the reference image, and image similarity is calculated with the PSNR and LPIPS algorithms.
The multi-domain augmented dataset obtained in step 3 is taken as the reference image, and a matching test is carried out with the ORB and LoFTR algorithms.
Compared with the prior art, the invention has the following technical effects:
(I) Taking a generative adversarial network as its framework, the method introduces a multi-domain image-conversion method based on a cycle-consistent generative adversarial network and a multi-domain data augmentation method based on contrastive learning, transfers visible-light remote sensing images into infrared and SAR images, and uses the synthesized dataset as the matching reference images of an unmanned aircraft, realizing its navigation and positioning task with multi-domain images from multi-source sensors; the method performs well and improves the accuracy of the positioning matching algorithm.
(II) Neither the cycle-consistent generative adversarial network nor the contrastive-learning-based augmentation requires a training dataset of paired images, which greatly reduces the difficulty of data preparation before training and improves image-conversion efficiency.
(III) The method converts single-domain images into multiple domains, reducing the limitation of a single sensor in the visual navigation of unmanned aircraft; navigation and positioning with multi-domain images from multi-source sensors effectively improves the positioning accuracy of the aircraft.
(IV) The method has been validated with extensive data generation and experimental comparison. Compared against both traditional and existing intelligent matching algorithms, the multi-domain dataset generated by the method raises the probability of a successful image match, confirming its effectiveness.
(V) By augmenting the dataset, the method prevents the over-fitting problem that frequently occurs in deep-learning training, improves the precision and generalization of the model, enriches the variety of heterogeneous datasets, and realizes visual navigation and positioning with multi-domain images.
Drawings
FIG. 1 is a schematic diagram of a loop generation countermeasure network architecture.
Fig. 2 is a framework diagram of the contrastive-learning generator.
Fig. 3 (a) and fig. 3 (b) are schematic diagrams of the conversion effect of the visible remote sensing image/infrared image.
Fig. 4 (a) and fig. 4 (b) are schematic diagrams of the conversion effect of the visible light remote sensing image/SAR image.
Fig. 5 is a schematic diagram of the matching result of the original visible remote sensing image/the original infrared image.
Fig. 6 is a schematic diagram of the converted infrared image/original infrared image matching result.
Fig. 7 is a schematic diagram of the original visible light remote sensing image/original SAR matching result.
Fig. 8 is a schematic diagram of the post-conversion SAR/original SAR matching results.
The following examples illustrate the invention in further detail.
Detailed Description
All the devices and algorithms in the present invention are known in the art, unless otherwise specified.
In the present invention, "/" means "and" for example, "visible light remote sensing image/SAR image" means a visible light remote sensing image and SAR image.
SAR stands for Synthetic Aperture Radar.
The invention discloses a target-data cross-domain inversion augmentation method based on remote sensing images, providing a data augmentation method that expands from a single domain to multiple domains in order to solve deep-learning-based multi-source scene-matching navigation and positioning. By augmenting the dataset, the method prevents the over-fitting problem that frequently occurs in deep-learning training and improves the precision and generalization of the model.
Considering the multiple imaging technologies involved in the navigation and positioning tasks of unmanned patrol aircraft, the target-data cross-domain inversion augmentation method based on remote sensing images is designed to meet the navigation and positioning requirements, enriching the variety of heterogeneous datasets and realizing visual navigation and positioning with multi-domain images.
The following specific embodiments of the present invention are provided, and it should be noted that the present invention is not limited to the following specific embodiments, and all equivalent changes made on the basis of the technical solutions of the present application fall within the protection scope of the present invention.
Examples:
the embodiment provides a target data cross-domain inversion augmentation method based on remote sensing images, which comprises the following steps:
step 1, multi-domain conversion of image data based on a cycle-consistent generative adversarial network:
The architecture of the cycle-consistent generative adversarial network is shown in fig. 1. It comprises three parts: feature extraction (i.e. encoding), image-domain conversion, and image reconstruction (i.e. decoding).
Step 101, image generation based on the cycle-consistent generative adversarial network:
This step aims at learning the mapping between two domains A and B from the given training samples. It involves two generator mappings: a generator G first converts samples from domain A to domain B, and a generator F then converts samples from domain B back to domain A.
In step 10101, an initial convolution is applied to the original image; the image size is unchanged, but the number of feature channels is increased from 3 to 64.
In step 10102, two convolution layers extract abstract features of the input image, converting its dimensions from 256×256×64 to 64×64×256.
In step 10103, several residual modules convert the extracted features from the A domain to the B domain.
In step 10104, two transposed-convolution (deconvolution) layers decode the features, completing the image conversion from the A domain to the B domain.
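The encoder/decoder dimensions described in steps 10101-10104 can be traced with a small sketch. This is illustrative only: the layer strides and the residual-block count (nine here) are assumptions rather than values from the patent, which fixes only the 3→64 channel expansion and the 256×256×64 → 64×64×256 conversion.

```python
# Sketch (not the patented code): trace tensor shapes through the generator
# described in steps 10101-10104, assuming a 256x256 RGB input.

def conv_shape(h, w, c_out, stride):
    """Spatial size after a stride-s convolution with 'same'-style padding."""
    return h // stride, w // stride, c_out

def deconv_shape(h, w, c_out, stride):
    """Spatial size after a stride-s transposed convolution (upsampling)."""
    return h * stride, w * stride, c_out

shape = (256, 256, 3)                   # original image
shape = conv_shape(*shape[:2], 64, 1)   # step 10101: size unchanged, 3 -> 64 channels
shape = conv_shape(*shape[:2], 128, 2)  # step 10102: first downsampling convolution
shape = conv_shape(*shape[:2], 256, 2)  # step 10102: second downsampling convolution
assert shape == (64, 64, 256)           # matches 256x256x64 -> 64x64x256 in the text

# Step 10103: residual modules keep the shape while converting A-domain
# features to B-domain features (spatial and channel sizes unchanged).
for _ in range(9):                      # nine blocks is a common choice; count assumed
    pass

# Step 10104: two transposed convolutions decode back to image resolution.
shape = deconv_shape(*shape[:2], 128, 2)
shape = deconv_shape(*shape[:2], 64, 2)
print(shape)                            # (256, 256, 64) before a final conv to 3 channels
```

The shape arithmetic confirms that two stride-2 convolutions and two stride-2 transposed convolutions are sufficient to realize the stated 256→64→256 spatial round trip.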
Step 102, discrimination of images generated by the cycle-consistent generative adversarial network:
The discriminator is a classifier built from four convolution layers: the convolution layers expand the feature map of the input image from 3 to 512 channels, and the confidence of the image is then judged through a fully connected layer and an average pooling layer.
Step 103, designing a total loss function between the generated image and the ground truth:
Step 10301, the adversarial loss function L_{GAN}(G, D_B, A, B) is applied to the mapping function G: A \to B and its corresponding discriminator D_B; likewise, the adversarial loss function L_{GAN}(F, D_A, B, A) is applied to the mapping function F: B \to A and its corresponding discriminator D_A:

L_{GAN}(G, D_B, A, B) = \mathbb{E}_{b \sim P_{data}(b)}[\log D_B(b)] + \mathbb{E}_{a \sim P_{data}(a)}[\log(1 - D_B(G(a)))]

wherein:
A represents the A domain;
B represents the B domain;
D_A represents the discriminator corresponding to the A domain, and D_B that of the B domain;
a represents an image from domain A;
b represents a ground-truth image from domain B;
P_{data}(\cdot) represents the probability density of the data.
In this step, G tries to generate images G(a) similar to the B domain, while D_B distinguishes the converted image samples G(a) from the real samples b. The analogous adversarial loss L_{GAN}(F, D_A, B, A) is introduced for the mapping function F and its corresponding discriminator D_A.
Step 10302, for each image a from domain A, a cycle-consistency loss function is used to process the image: the cycle should restore a to the original image, i.e. a \to G(a) \to F(G(a)) \approx a, and likewise for each image b from domain B:

L_{cyc}(G, F) = \mathbb{E}_{a \sim P_{data}(a)}[\lVert F(G(a)) - a \rVert_1] + \mathbb{E}_{b \sim P_{data}(b)}[\lVert G(F(b)) - b \rVert_1]

wherein \lVert \cdot \rVert_1 denotes the norm.
The total loss function is

L(G, F, D_A, D_B) = L_{GAN}(G, D_B, A, B) + L_{GAN}(F, D_A, B, A) + \lambda L_{cyc}(G, F)

where, in this embodiment, \lambda controls the relative importance of the two objectives.
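The losses above follow the standard cycle-consistent GAN form. A minimal numerical sketch, with toy invertible "generators" standing in for the convolutional networks of steps 101-102 and NumPy in place of a deep-learning framework, shows that a perfect cycle drives the cycle-consistency term to zero:

```python
# Sketch (not the patented implementation) of the step-103 losses.
import numpy as np

def adversarial_loss(d_real, d_fake):
    # L_GAN = E[log D(b)] + E[log(1 - D(G(a)))], discriminator outputs in (0, 1)
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def cycle_loss(a, b, G, F):
    # L_cyc = E[||F(G(a)) - a||_1] + E[||G(F(b)) - b||_1]
    return float(np.mean(np.abs(F(G(a)) - a)) + np.mean(np.abs(G(F(b)) - b)))

G = lambda x: x + 1.0           # toy A -> B mapping
F = lambda x: x - 1.0           # toy B -> A mapping; F exactly undoes G

rng = np.random.default_rng(0)
a = rng.random((4, 8, 8, 3))    # batch of A-domain "images"
b = rng.random((4, 8, 8, 3))    # batch of B-domain "images"

total_cycle = cycle_loss(a, b, G, F)
print(total_cycle)              # ~0: a perfect cycle restores the original image
```

In training, minimizing the weighted sum of the adversarial and cycle terms pushes the real generators toward exactly this invertible behavior.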
Step 2, multi-domain data augmentation based on contrastive learning:
Image generation is based on an encoder and a decoder: the generator maps the input domain to the output domain, given unpaired datasets of images of size H×W×C from the two domains;
wherein:
H represents the height of the image;
W represents the width of the image;
C represents the number of channels of the image;
A represents the unpaired dataset corresponding to the input domain;
B represents the unpaired dataset corresponding to the output domain;
a represents data in dataset A;
b represents data in dataset B.
The generator is divided into two parts, an encoder and a decoder, which together generate the output image. In this embodiment, the framework of the generator is shown in fig. 2. The encoder obtains a high-dimensional feature vector, and iterative training with the total contrastive loss function realizes multi-domain data augmentation.
The total contrastive loss function is:

L_{total}(G, D, M) = L_{GAN}(G, D, A, B) + \lambda_A L_{contrast}(G, M, A) + \lambda_B L_{contrast}(G, M, B)
wherein:
G represents the generator;
D represents the discriminator;
A represents the unpaired dataset corresponding to the input domain;
B represents the unpaired dataset corresponding to the output domain;
M represents a multi-layer perceptron network.
In this embodiment, when the two weights of the contrastive terms are both set to 1, the method can be regarded as a lightweight CycleGAN network during joint training.
In this embodiment, the contrastive loss function, the mutual-information-maximization loss function, and the external loss function are all calculated using methods commonly known in the art.
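The mutual-information-maximization term is typically computed with an InfoNCE-style contrastive objective: a query feature should match the positive feature from the corresponding location against negatives from other locations. The following sketch is an assumption about that computation; the temperature value and the toy 2-D features are illustrative, not taken from the patent:

```python
# InfoNCE-style contrastive loss sketch: lower loss when query and positive align.
import math

def info_nce(query, positive, negatives, tau=0.07):
    """-log( exp(q.pos/tau) / (exp(q.pos/tau) + sum_k exp(q.neg_k/tau)) )"""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    logits = [dot(query, positive) / tau] + [dot(query, n) / tau for n in negatives]
    m = max(logits)                               # stabilise the softmax
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

q   = [1.0, 0.0]
pos = [1.0, 0.0]                                  # perfectly aligned positive
neg = [[0.0, 1.0], [-1.0, 0.0]]                   # orthogonal / opposite negatives
loss_good = info_nce(q, pos, neg)
loss_bad  = info_nce(q, [0.0, 1.0], [[1.0, 0.0], [-1.0, 0.0]])
print(loss_good < loss_bad)                       # True: aligned pairs score lower loss
```

During training this pressure makes corresponding patches of the input and converted images share features, which is what preserves image content across the domain change.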
Step 3, image migration synthesis is carried out to obtain a multi-domain augmentation data set:
step 301, a set of unpaired visible-light remote sensing/infrared image datasets and a set of unpaired visible-light remote sensing/SAR image datasets are given, together with a visible-light remote sensing image dataset to be converted, which is used as the verification set;
step 302, the two sets of data given in step 301 are trained by the multi-domain image-conversion method of step 1 based on the cycle-consistent generative adversarial network, and model inference converts the visible-light remote sensing images into a corresponding infrared image dataset and SAR image dataset;
step 303, the two sets of data given in step 301 are trained by the contrastive-learning-based multi-domain data augmentation method of step 2, and model inference converts the visible-light remote sensing images into a corresponding infrared image dataset and SAR image dataset;
step 304, the datasets obtained in steps 302 and 303 are fused into a fused infrared image dataset and a fused SAR image dataset, thereby forming the multi-domain augmented dataset.
In this embodiment, the visible light remote sensing image conversion effect is as shown in fig. 3 (a), 3 (b), 4 (a) and 4 (b).
Step 4, similarity calculation and matching test:
The multi-domain augmented dataset obtained in step 3 is taken as the reference image, and image similarity is calculated with the PSNR (peak signal-to-noise ratio) algorithm and the LPIPS (learned perceptual image patch similarity) algorithm.
In this step, the generation quality of the visible/infrared and visible/SAR conversions is evaluated by similarity: the larger the PSNR and the smaller the LPIPS, the higher the image similarity.
In this example, the evaluation results are shown in tables 1 and 2.
TABLE 1 comparison of visible remote sensing image/Infrared image conversion Effect
TABLE 2 comparison of visible remote sensing image/SAR image conversion effect
The multi-domain augmented dataset obtained in step 3 is taken as the reference image, and a matching test is carried out with the ORB (oriented FAST and rotated BRIEF) algorithm and the LoFTR (local feature matching) algorithm.
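ORB describes keypoints with 256-bit binary strings compared by Hamming distance; a brute-force matcher with a Lowe-style ratio test, as commonly paired with ORB, can be sketched as follows. The toy 8-bit descriptors and the 0.75 ratio are illustrative assumptions, not values from the patent:

```python
# Brute-force Hamming matcher with ratio test (the core of ORB-style matching).

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs where descriptor i of image a matches descriptor j of image b."""
    matches = []
    for i, da in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: hamming(da, desc_b[j]))
        best, second = order[0], order[1]
        # Accept only if clearly better than the runner-up (ratio test).
        if hamming(da, desc_b[best]) < ratio * hamming(da, desc_b[second]):
            matches.append((i, best))
    return matches

ref   = [0b10110010, 0b01001101, 0b11110000]   # "reference image" descriptors
query = [0b10110011, 0b11110001]               # near-duplicates of ref[0] and ref[2]
print(match(query, ref))                       # [(0, 0), (1, 2)]
```

The ratio test is what suppresses the mismatches discussed in the simulation section: an ambiguous descriptor whose two best candidates are similar is discarded rather than matched.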
In this embodiment, the test results are shown in fig. 5, 6, 7 and 8.
Simulation example:
the effect of the invention is further illustrated by the following simulations:
1. simulation conditions:
To verify the effectiveness of the invention, multi-domain augmentation was carried out on several groups of datasets, obtaining the corresponding infrared and SAR image results. Experimental environment: a notebook computer running Ubuntu 18.04 with a 2.9 GHz Intel Xeon E5-2667 processor.
2. Simulation experiment:
A large amount of data was generated with the invention and compared experimentally. Compared against both traditional and existing intelligent matching algorithms, the multi-domain dataset generated by the method improves the accuracy of image matching and benefits the navigation and positioning of unmanned aerial vehicles.
Fig. 5 shows the matching result between the original visible-light remote sensing image and the original infrared image. Fig. 6 shows the matching result between the converted infrared image and the original infrared image. Fig. 7 shows the matching result between the original visible-light remote sensing image and the original SAR image. Fig. 8 shows the matching result between the converted SAR image and the original SAR image. These figures show that the multi-domain data augmentation method alleviates problems such as mismatching and inaccurate navigation positioning, realizes navigation and positioning with multi-domain images from multi-source sensors, and effectively improves the positioning accuracy of the aircraft.
Comparative example 1:
This comparative example shows a target-data cross-domain inversion augmentation method whose steps are essentially identical to the example, except for step one. Specifically, in this comparative example:
step one, loss function adjustment:
the loss function of the algorithm is the binary cross-entropy loss function; that is, training uses a loss function combining binary cross-entropy with the Sigmoid activation function.
Comparative example 2:
This comparative example shows a target-data cross-domain inversion augmentation method whose steps are essentially identical to the example, except for step one. Specifically, in this comparative example:
step one, loss function adjustment:
the loss function of the algorithm is the Smooth L1 loss function, i.e. training uses a loss function that applies a quadratic around zero so that it is smoother there.
Comparing the example with comparative examples 1 and 2 shows that the method converges faster and is more stable; figs. 3 (a), 3 (b), 4 (a) and 4 (b) show the conversion effects after model training, and the matching experiments show that the loss function used by the invention yields better conversion results.
Comparative example 3:
This comparative example provides a target-data cross-domain inversion augmentation method that uses a StyleGAN model to carry out cross-domain inversion on visible-light remote sensing images.
Comparative example 4:
This comparative example provides a target-data cross-domain inversion augmentation method that uses a Pix2Pix model to carry out cross-domain inversion on visible-light remote sensing images.
Comparing the example with comparative examples 3 and 4 shows that the infrared and SAR images generated by the method have better mode consistency, preserve the details of the image content, and are closer to real infrared and SAR images, whereas comparative examples 3 and 4 exhibit partial distortion.
Claims (2)
1. The target data cross-domain inversion augmentation method based on the remote sensing image is characterized by comprising the following steps of:
step 1, multi-domain conversion of image data based on a cycle-consistent generative adversarial network:
step 101, image generation based on the cycle-consistent generative adversarial network;
step 101 includes the steps of:
step 10101, performing an initial convolution on the original image, wherein the image size is unchanged but the number of feature channels is increased from 3 to 64;
step 10102, extracting abstract features of the input image with two convolution layers, converting its dimensions from 256×256×64 to 64×64×256;
step 10103, using several residual modules to convert the extracted features from the A domain to the B domain;
step 10104, finally decoding with two transposed-convolution layers to realize the image conversion from the A domain to the B domain;
step 102, discrimination of images generated by the cycle-consistent generative adversarial network;
step 102 includes the following steps:
the discriminator for image discrimination is a classifier based on four convolution layers; the convolution layers expand the feature map of the input image from 3 to 512 channels, and the confidence of the image is then judged through a fully connected layer and an average pooling layer;
step 103, designing a total loss function between the generated image and the true value;
step 103 includes the following steps:
step 10301, applying the B-domain adversarial loss function L_{GAN}(G, D_B, A, B) to the mapping function G: A \to B of the B-domain generator and the corresponding discriminator D_B; applying the A-domain adversarial loss function L_{GAN}(F, D_A, B, A) to the mapping function F: B \to A of the A-domain generator and the corresponding discriminator D_A;
wherein:
A represents the A domain;
B represents the B domain;
D_A represents the discriminator corresponding to the A domain;
step 10302, for each image a from domain A, using the cycle-consistency loss function to process the image: the cycle should restore a to the original image, i.e. a \to G(a) \to F(G(a)) \approx a;
Step 2, multi-domain data augmentation based on contrastive learning;
step 2 comprises the following steps:
image generation is based on an encoder and a decoder: the generator maps the input domain to the output domain, given unpaired datasets of images of size H×W×C from the two domains;
wherein:
H represents the height of the image;
W represents the width of the image;
C represents the number of channels of the image;
A represents the unpaired dataset corresponding to the input domain;
B represents the unpaired dataset corresponding to the output domain;
a represents data in dataset A;
b represents data in dataset B;
the generator is divided into two parts, an encoder and a decoder, which together generate the output image; the encoder obtains a high-dimensional feature vector, and iterative training with the total contrastive loss function realizes multi-domain data augmentation;
the total contrast loss function is:
wherein:
G represents the generator;
D represents the discriminator;
A represents the unpaired dataset corresponding to the input domain;
B represents the unpaired dataset corresponding to the output domain;
M represents a multi-layer perceptron network;
step 3, image migration synthesis is carried out to obtain a multi-domain augmentation data set:
step 301, a set of unpaired visible-light remote sensing/infrared image datasets and a set of unpaired visible-light remote sensing/SAR image datasets are given, together with a visible-light remote sensing image dataset to be converted, which is used as the verification set;
step 302, the two sets of data given in step 301 are trained by the multi-domain image-conversion method of step 1 based on the cycle-consistent generative adversarial network, and model inference converts the visible-light remote sensing images into a corresponding infrared image dataset and SAR image dataset;
step 303, the two sets of data given in step 301 are trained by the contrastive-learning-based multi-domain data augmentation method of step 2, and model inference converts the visible-light remote sensing images into a corresponding infrared image dataset and SAR image dataset.
2. The method of claim 1, further comprising step 4, similarity calculation and matching test:
taking the multi-domain augmented dataset obtained in step 3 as the reference image, calculating image similarity with the PSNR and LPIPS algorithms;
taking the multi-domain augmented dataset obtained in step 3 as the reference image, carrying out a matching test with the ORB and LoFTR algorithms.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310101406.8A CN115797163B (en) | 2023-02-13 | 2023-02-13 | Target data cross-domain inversion augmentation method based on remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115797163A CN115797163A (en) | 2023-03-14 |
CN115797163B true CN115797163B (en) | 2023-04-28 |
Family
ID=85430897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310101406.8A Active CN115797163B (en) | 2023-02-13 | 2023-02-13 | Target data cross-domain inversion augmentation method based on remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797163B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283444A (en) * | 2021-03-30 | 2021-08-20 | 电子科技大学 | Heterogeneous image migration method based on generation countermeasure network |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112396110B (en) * | 2020-11-20 | 2024-02-02 | 南京大学 | Method for generating augmented image of countermeasure cascade network |
CN113298056A (en) * | 2021-07-27 | 2021-08-24 | 自然资源部国土卫星遥感应用中心 | Multi-mode remote sensing image change detection method, model generation method and terminal equipment |
CN115310515A (en) * | 2022-07-06 | 2022-11-08 | 山东科技大学 | Fault-labeled seismic data sample set amplification method based on generation countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN115797163A (en) | 2023-03-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||