CN112528808A - Celestial body surface obstacle identification method and device - Google Patents

Celestial body surface obstacle identification method and device

Info

Publication number
CN112528808A
CN112528808A (application CN202011404160.4A)
Authority
CN
China
Prior art keywords
image
fused
neural network
convolutional neural
sample set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011404160.4A
Other languages
Chinese (zh)
Inventor
李海超
邱林伟
李志�
黄龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Space Technology CAST
Original Assignee
China Academy of Space Technology CAST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Space Technology CAST filed Critical China Academy of Space Technology CAST
Priority to CN202011404160.4A priority Critical patent/CN112528808A/en
Publication of CN112528808A publication Critical patent/CN112528808A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of deep space exploration and provides a method and a device for identifying obstacles on the surface of a celestial body. The method comprises the following steps: labeling obstacles in an original image set obtained by a deep space exploration rover to obtain a labeled sample set; performing feature fusion on each image in the labeled sample set to obtain a fused labeled sample set; constructing a convolutional neural network and inputting the fused labeled sample set into it to obtain the training model with the minimum loss function; and performing feature fusion on an image to be recognized obtained by the rover and inputting the fused image into the training model to obtain a segmentation recognition result for the obstacles. The invention overcomes the problem of insufficient training data for extraterrestrial rovers, detects obstacles on an extraterrestrial body from a single image, and achieves high accuracy.

Description

Celestial body surface obstacle identification method and device
Technical Field
The invention relates to the technical field of deep space exploration, in particular to a method and a device for recognizing obstacles on the surface of a celestial body.
Background
Rover exploration of the surfaces of extraterrestrial bodies is an important component of future deep space exploration. Because extraterrestrial bodies are usually far from the Earth, communication delays are large and exploration tasks cannot be completed by ground remote control alone, so a rover must have a considerable degree of autonomy. A rover moves through an unknown environment on the surface of an extraterrestrial body with great uncertainty; it must autonomously perceive its surroundings and identify obstacles (such as rocks and pits) that endanger its motion, so that a safe driving route can be planned and the surface exploration task completed smoothly.
At present, the lunar and Mars rovers that have been successfully launched and landed mainly detect and identify obstacles using stereoscopic vision sensors. The Spirit and Opportunity Mars rovers, which landed on January 3 and January 25, 2004, respectively, operated mainly in teleoperated and semi-autonomous control modes and used stereoscopic vision to build three-dimensional maps for obstacle detection and navigation; the Curiosity Mars rover, which landed on the Martian surface in August 2012, also relies on stereoscopic vision as its main technology for obstacle avoidance, path planning, navigation and positioning; the Yutu ("Jade Rabbit") rover carried by Chang'e-3 in December 2013 used a teleoperated working mode and stereoscopic vision to reconstruct the unknown lunar surface environment in three dimensions, and performed local autonomous obstacle avoidance based on stereoscopic vision.
However, on the one hand, because the baseline length available on a rover is limited, traditional three-dimensional reconstruction has low precision, obstacle detection accuracy is poor, and the effective obstacle detection range provided by current stereo cameras is generally only about ten meters, so the reconstruction accuracy problem at medium and long distances may cause obstacle avoidance and path planning to fail. On the other hand, obtaining a dense three-dimensional reconstruction requires matching all image pixels, so the stereo matching computation is heavy, and the large disparity search range needed during matching further increases the amount of computation.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a celestial body surface obstacle identification method and device, so as to solve the problems of low precision and heavy computation in prior-art obstacle identification methods for extraterrestrial rovers.
In a first aspect of the embodiments of the present invention, a method for identifying an obstacle on a surface of a celestial body is provided, including:
carrying out obstacle labeling on an original image set obtained by the deep space exploration rover to obtain a labeled sample set;
performing feature fusion on each image in the labeled sample set to obtain a fused labeled sample set;
constructing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network, and obtaining a training model with the minimum loss function;
and performing feature fusion on the image to be recognized obtained by the deep space exploration rover, and inputting the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
Optionally, the method for performing the feature fusion on each image to be fused includes:
calculating a circular LBP feature map of the image to be fused by sampling pixels in a circular neighborhood of a first size to obtain a first-window LBP feature map of the corresponding image;
calculating a circular LBP feature map of the image to be fused by sampling pixels in a circular neighborhood of a second size to obtain a second-window LBP feature map of the corresponding image;
and taking the image to be fused, the first-window LBP feature map and the second-window LBP feature map as the respective channel maps of an RGB image to obtain the feature-fused image of the image to be fused.
Optionally, the constructing a convolutional neural network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function, includes:
and constructing a convolutional neural network based on a DeepLabv3+ network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with the minimum loss function.
Optionally, the constructing a convolutional neural network based on the DeepLabv3+ network, and inputting the fused labeled sample set into the convolutional neural network to obtain the training model with the minimum loss function, includes:
constructing a backbone network based on the DeepLabv3+ network, and inputting each fused image in the fused labeled sample set into the backbone network to obtain a first-layer feature and a second-layer feature corresponding to the fused image; wherein the backbone network is a deep residual network or a lightweight network;
performing convolution and pooling on each first-layer feature by atrous spatial pyramid pooling to obtain a first feature map corresponding to each first-layer feature;
upsampling each first feature map by a residual upsampling conversion method to obtain a corresponding second feature map, and splicing each second feature map with the corresponding second-layer feature to obtain a spliced feature map;
and upsampling the spliced feature map to obtain a segmentation result of the training sample, and determining the training model with the minimum loss function according to the segmentation result.
Optionally, the upsampling of each first feature map by the residual upsampling conversion method to obtain the corresponding second feature map includes sequentially computing

Q_j = C_q(f̂^l_j), K_i = C_k(f^h_i), V_i = C_v(f^h_i)

S_{i,j} = F_{sim}(Q_j, K_i) = -||Q_j - K_i||^2

W_{i,j} = F_w(S_{i,j})

R_j = F_{mul}(W_{i,j}, V_i)

f^l_j = f̂^l_j + R_j

to obtain the corresponding second feature map f^l_j; wherein C_q, C_k and C_v each denote a convolutional layer, f̂^l denotes the feature map obtained by upsampling the first feature map f^h by a preset multiple, f̂^l_j denotes the j-th feature position of f̂^l, f^h_i denotes the i-th feature position of the first feature map f^h, F_w denotes the sigmoid function, and F_{mul} denotes point-wise multiplication.
Optionally, the constructing a convolutional neural network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function, includes:
and constructing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network, calculating a loss function of the convolutional neural network, and training the convolutional neural network by using a gradient descent method to obtain a training model with the minimum loss function.
Optionally, the loss function is a smoothed IOU loss function, where the IOU (given as a formula image in the original filing) measures the intersection over union between the predicted segmentation and the label map, and smooth denotes that the one-hot encoding P_m of the labels in the segmentation task is smoothed according to a second formula image; wherein M represents the total number of classes, m represents one of those classes, y represents the label class, and ε is a preset hyper-parameter.
In a second aspect of the embodiments of the present invention, there is provided a celestial body surface obstacle recognition device including:
the image labeling module is used for performing obstacle labeling on an original image set obtained by the deep space exploration rover to obtain a labeled sample set;
the characteristic fusion module is used for carrying out characteristic fusion on each image in the labeled sample set to obtain the fused labeled sample set;
the model establishing module is used for establishing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network and obtaining a training model with the minimum loss function;
and the segmentation recognition module is used for performing feature fusion on the image to be recognized obtained by the deep space exploration rover, and inputting the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
In a third aspect of the embodiments of the present invention, there is provided a celestial surface obstacle recognition device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the celestial surface obstacle identification method provided in the first aspect of the embodiments.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the celestial surface obstacle identification method provided in the first aspect of the embodiments.
Compared with the prior art, the method and the device for identifying the obstacle on the surface of the celestial body have the beneficial effects that:
the invention firstly labels barriers on an original image set obtained by the deep space exploration rover, then performs characteristic fusion to obtain a fused labeled sample set, then constructs a convolutional neural network, inputs the fused labeled sample set into the convolutional neural network to obtain a training model with the minimum loss function, performs characteristic fusion on an image to be identified obtained by the deep space exploration rover, the fused images to be recognized are input into the training model to obtain the segmentation recognition result of the barrier, the problem of insufficient training data volume of the extraterrestrial celestial object patroller is solved, the barrier of the extraterrestrial celestial object can be detected through a single image, the segmentation recognition precision of the barrier is high, the recognition speed is high, and the obstacle dangerous to the patrol movement of the celestial body patrol device can be accurately segmented, and the method is suitable for the segmentation and identification of the obstacles on the surfaces of various celestial bodies.
Drawings
Fig. 1 is a schematic flow chart of an implementation of a method for identifying an obstacle on a surface of a celestial body according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a process for implementing feature fusion according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a specific implementation flow of step S103 in FIG. 1;
FIG. 4 is a schematic flow chart of another implementation of a method for recognizing an obstacle on a surface of a celestial body according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a convolutional neural network provided in an embodiment of the present invention;
fig. 6 is a schematic flow chart of implementing residual upsampling conversion according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a device for recognizing an obstacle on the surface of a celestial body according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another celestial body surface obstacle recognition device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, a schematic flow chart of an implementation of an embodiment of the method for identifying an obstacle on a surface of a celestial body according to the present embodiment is described in detail as follows:
and S101, carrying out obstacle labeling on an original image set obtained by the deep space exploration rover to obtain a labeled sample set.
Because the baseline length available on a rover is limited, traditional three-dimensional reconstruction has low precision, obstacle detection accuracy is poor, and the effective detection range of current stereo cameras is generally only about ten meters, so the reconstruction accuracy problem at medium and long distances may cause obstacle avoidance and path planning to fail. Furthermore, obtaining a dense three-dimensional reconstruction requires matching all image pixels, so the stereo matching computation is heavy, and the large disparity search range needed during matching increases it further. In addition, the amount of image data available from extraterrestrial rovers is small, which limits the accuracy of current matching algorithms. Therefore, this embodiment adopts a deep-learning-based method for recognizing obstacles on the surface of a celestial body; it can be used to segment rocks and craters around a rover on the surface of a deep-space body, and can also be used to segment obstacles for field robots.
In recent years, deep learning has made rapid progress in the field of computer vision and has achieved great success, in some tasks even exceeding the performance of the human eye. Unlike traditional recognition algorithms in computer vision, deep learning does not require manual feature construction and screening, and performs excellently on object detection, segmentation and classification problems.
Specifically, the criteria for labeling the images in this embodiment may be: (1) for close-range views, label all obstacles that threaten the rover's motion, i.e., obstacles beyond the rover's designed obstacle-crossing capability; (2) for distant views, label only large obstacles. Further, this embodiment may also divide the labeled sample set into training samples, validation samples and test samples, for example by randomly splitting it into a training set, a validation set and a test set at a ratio of 6:2:2, as sketched below.
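A minimal sketch of such a random 6:2:2 split; the directory layout, file extension and helper name are assumptions for illustration, not part of the patent:

```python
import random
from pathlib import Path

def split_dataset(image_dir, seed=0):
    """Randomly split labeled images into train/val/test sets at a 6:2:2 ratio."""
    paths = sorted(Path(image_dir).glob("*.png"))  # assumed image format
    random.Random(seed).shuffle(paths)
    n_train, n_val = int(0.6 * len(paths)), int(0.2 * len(paths))
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```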
Optionally, the embodiment also cleans the image data, deletes dirty data, and improves the accuracy of segmentation and recognition. It should be understood that the present embodiment does not limit the method of data cleansing.
And S102, performing feature fusion on each image in the labeled sample set to obtain the fused labeled sample set.
For example, in this embodiment, feature fusion may be performed on an original image obtained by the deep space exploration rover to obtain a feature-fused labeled sample set, or feature fusion may be performed on each image in the labeled sample set to obtain the fused labeled sample set, and then the fused labeled sample is divided into a training sample, a verification sample, and a test sample, as shown in fig. 4.
Further, since the sample set image is generally a gray image, in order to improve the segmentation recognition effect, all images input to the network model need to undergo a feature fusion step, that is, the original image of a single channel is fused with different LBP feature maps obtained through an LBP operator. The LBP operator has the advantages of rotation invariance, gray scale invariance and the like, is a texture description algorithm for describing local features of the relation between a central pixel point and adjacent pixel points of an image, and is applied to multiple fields of face, expression, terrain recognition and the like.
Optionally, referring to fig. 2, a specific implementation process of the method for performing the feature fusion on each image to be fused in step S102 may include:
step S201, a circular LBP characteristic diagram of the image to be fused is calculated through a pixel sampling method of a circular neighborhood of a first area, and a first window LBP characteristic diagram of a corresponding image is obtained.
Step S202, a circular LBP characteristic diagram of the image to be fused is calculated through a pixel sampling method of a circular neighborhood of a second area, and a second window LBP characteristic diagram of a corresponding image is obtained.
Step S203, the image to be fused, the first window LBP characteristic graph and the second window LBP characteristic graph are respectively used as each channel graph of the RGB channel, and the image after the characteristic fusion of the image to be fused is obtained.
Exemplarily, a pixel sampling mode of a circular neighborhood is adopted, the radius of a circle is set to be 8, a circular LBP characteristic diagram of an original image is calculated and is called as a small-window LBP characteristic diagram (a first-window LBP characteristic diagram), and the small-window LBP characteristic is favorable for segmenting and identifying small obstacles such as rocks; then, a pixel sampling mode of a circular neighborhood is adopted, the radius of the circle is set to be 16, a circular LBP characteristic diagram of the original image is calculated and is called as a large-window LBP characteristic diagram (a second window LBP characteristic diagram), and the LBP characteristic of the large window is favorable for segmentation and identification of larger obstacles such as meteorite craters and the like; and finally, respectively taking the original image, the small-window LBP characteristic graph and the large-window LBP characteristic graph as each channel graph of the RGB channel to obtain a characteristic fusion image.
Optionally, in this embodiment, the fused image may be resampled to an image (RGB three channels) with a size of 513 × 513 pixels, so as to meet the training hardware limitation and the recognition effect limitation.
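As a concrete illustration of this fusion step, the following sketch uses scikit-image's circular LBP operator; the radii 8 and 16 and the 513 × 513 resampling follow the description above, while the number of sampling points per circle and the per-channel normalization are assumptions made for illustration:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import resize

def fuse_lbp_features(gray, out_size=(513, 513)):
    """Fuse a single-channel image with its small- and large-window circular LBP
    maps into one 3-channel (RGB-like) image."""
    gray = gray.astype(np.float32)
    lbp_small = local_binary_pattern(gray, P=8, R=8)    # small window: rocks
    lbp_large = local_binary_pattern(gray, P=8, R=16)   # large window: craters
    fused = np.stack([gray, lbp_small, lbp_large], axis=-1)
    # Per-channel normalization to [0, 1] before resampling (illustrative choice).
    fused = (fused - fused.min(axis=(0, 1))) / (np.ptp(fused, axis=(0, 1)) + 1e-8)
    return resize(fused, out_size + (3,), order=1, anti_aliasing=True)
```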
And S103, constructing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network, and obtaining a training model with the minimum loss function.
Optionally, the constructing the convolutional neural network in this embodiment may specifically include: and constructing a convolutional neural network based on a DeepLabv3+ network, inputting the fused labeled sample set into the convolutional neural network to obtain a training model with the minimum loss function, namely improving the traditional DeepLabv3+ network to obtain the constructed convolutional neural network of the embodiment.
In an embodiment, referring to fig. 3, the specific implementation process of constructing a convolutional neural network based on a deplab v3+ network and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function may include:
step S301, constructing a backbone network (backbone) based on a DeepLabv3+ network, and inputting each fused image in the fused labeled sample set into the backbone network to obtain a first layer feature and a second layer feature corresponding to the fused images; wherein the backbone network is a deep residual error network or a lightweight network.
Specifically, the backbone network of this embodiment may take two forms. One is a deep residual network, used as the basic version of the model: its obstacle segmentation and recognition precision is high, it can accurately segment obstacles that endanger the rover's motion, and it is suitable for segmenting and recognizing obstacles on the surfaces of various celestial bodies. The other is a lightweight network, used as the fast version of the model: it greatly increases recognition speed at the cost of some recognition precision, while still meeting the accuracy requirements of basic tasks.
Step S302, performing convolution and pooling on each first-layer feature by atrous spatial pyramid pooling (ASPP) to obtain a first feature map corresponding to each first-layer feature.
Step S303, performing upsampling on each first feature map by a residual upsampling conversion method to obtain a corresponding second feature map, and splicing each second feature map and the corresponding second-layer feature to obtain a spliced feature map.
And S304, performing up-sampling on the spliced feature map to obtain a segmentation result of the training sample, and determining a training model with the minimum loss function according to the segmentation result.
Illustratively, a feature-fused image is input into the backbone network; after feature extraction, the backbone outputs deep-level features (the first-layer features) at one sixteenth of the original resolution, and also outputs low-level features (the second-layer features) at one quarter of the original resolution from an intermediate stage. The output features are then processed by atrous spatial pyramid pooling (ASPP). By using convolution and pooling operations with different dilation rates, ASPP realizes a variety of effective receptive fields, obtains multi-resolution features, mines multi-scale context information, and finally yields encoded high-dimensional features.
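A compact PyTorch sketch of such an ASPP block is given below; the dilation rates (1, 6, 12, 18) are the usual DeepLabv3+ defaults and are an assumption rather than values stated in this document, while the 320-channel input and 256-channel output follow the example sizes given later in the description:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions plus
    image-level pooling, concatenated and projected back to out_ch channels."""
    def __init__(self, in_ch=320, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch,
                      kernel_size=1 if r == 1 else 3,
                      padding=0 if r == 1 else r,
                      dilation=r, bias=False)
            for r in rates
        ])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch,
                                 kernel_size=1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```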
Further, in this embodiment a residual upsampling transform (RUT) is used to upsample the feature map output by the ASPP by a preset multiple, for example 4 times, as shown in FIG. 5, and the result is spliced with the low-level features to obtain a spliced feature map; that is, this embodiment redesigns the upsampling stage of DeepLabv3+ as a residual upsampling transform that upsamples the ASPP output by the preset multiple and concatenates it with the low-level features. The spliced feature map is then upsampled again, for example by a factor of 4 using bilinear interpolation, to obtain the final segmentation result.
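Putting the stages together, the following shape-level sketch shows one possible forward pass (backbone, ASPP, residual upsampling, splicing with low-level features, final bilinear upsampling). The backbone and residual-upsampling modules are placeholders for the components described in this document, the shape comments follow the example sizes given later in the description, and the single 3 × 3 classifier convolution is an assumption for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObstacleSegNet(nn.Module):
    """Encoder-decoder in the spirit of the improved DeepLabv3+ described here."""
    def __init__(self, backbone, aspp, rut, low_ch=24, high_ch=256, num_classes=3):
        super().__init__()
        self.backbone, self.aspp, self.rut = backbone, aspp, rut
        self.classifier = nn.Conv2d(high_ch + low_ch, num_classes,
                                    kernel_size=3, padding=1)

    def forward(self, x):                      # x: (B, 3, 513, 513)
        high, low = self.backbone(x)           # high: 1/16 resolution, low: 1/4
        high = self.aspp(high)                 # e.g. (B, 256, 33, 33)
        up = self.rut(high)                    # residual upsampling by 4x
        fused = torch.cat([up, low], dim=1)    # splice with low-level features
        logits = self.classifier(fused)        # (B, num_classes, 129, 129)
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)
```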
Optionally, the upsampling each first feature map by using a residual upsampling conversion method to obtain a corresponding second feature map includes:
sequentially computing

Q_j = C_q(f̂^l_j), K_i = C_k(f^h_i), V_i = C_v(f^h_i)

S_{i,j} = F_{sim}(Q_j, K_i) = -||Q_j - K_i||^2

W_{i,j} = F_w(S_{i,j})

R_j = F_{mul}(W_{i,j}, V_i)

f^l_j = f̂^l_j + R_j

to obtain the corresponding second feature map f^l_j; wherein C_q, C_k and C_v each denote a convolutional layer, f̂^l denotes the feature map obtained by upsampling the first feature map f^h by a preset multiple, f̂^l_j denotes the j-th feature position of f̂^l, f^h_i denotes the i-th feature position of the first feature map f^h, F_w denotes the sigmoid function, and F_{mul} denotes point-wise multiplication.
Specifically, as shown in FIG. 6, the low-dimensional high-resolution feature map f̂^l is obtained from the high-dimensional low-resolution feature map f^h by 4x interpolated upsampling, and the query Q_j = C_q(f̂^l_j), the key K_i = C_k(f^h_i) and the value V_i = C_v(f^h_i) are computed; wherein l denotes the low-dimensional high-resolution feature map, h denotes the high-dimensional low-resolution feature map, and C_q, C_k and C_v each denote a convolutional layer whose parameters may differ from one another (hence the different symbols). f̂^l_j is the j-th feature position of the feature map f̂^l and f^h_i is the i-th feature position of the feature map f^h.
From the above Q_j, K_i and V_i, the similarity S_{i,j} = F_{sim}(Q_j, K_i) = -||Q_j - K_i||^2 is calculated, and the weight W_{i,j} = F_w(S_{i,j}) is then obtained from the similarity, where F_w is the sigmoid function and F_{sim} denotes the similarity function; finally, channel-by-channel multiplication (the number of channels in this embodiment may be 256) yields the residual output R_j = F_{mul}(W_{i,j}, V_i), where F_{mul} denotes point-wise multiplication, and the final output feature is f^l_j = f̂^l_j + R_j.
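A PyTorch sketch of this residual upsampling transform is given below. The way the weighted values are aggregated over the positions i of the low-resolution map (a sum, as in standard attention) and the reduced key dimension are assumptions made for illustration; the block-wise computation described in the next paragraph is omitted here for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualUpsampling(nn.Module):
    """Residual upsampling transform (RUT): attention-style 4x upsampling in
    which the upsampled map queries the original low-resolution map."""
    def __init__(self, channels=256, key_dim=64, scale=4):
        super().__init__()
        self.scale = scale
        self.c_q = nn.Conv2d(channels, key_dim, 1)    # C_q
        self.c_k = nn.Conv2d(channels, key_dim, 1)    # C_k
        self.c_v = nn.Conv2d(channels, channels, 1)   # C_v

    def forward(self, f_h):                           # f_h: (B, C, h, w)
        f_up = F.interpolate(f_h, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)  # upsampled map
        B, C, H, W = f_up.shape
        q = self.c_q(f_up).flatten(2)                 # (B, d, H*W)   queries Q_j
        k = self.c_k(f_h).flatten(2)                  # (B, d, h*w)   keys K_i
        v = self.c_v(f_h).flatten(2)                  # (B, C, h*w)   values V_i
        # S_{i,j} = -||Q_j - K_i||^2, expanded as 2*K_i.Q_j - |Q_j|^2 - |K_i|^2
        s = (2 * torch.einsum("bdi,bdj->bij", k, q)
             - (q * q).sum(dim=1, keepdim=True)       # broadcasts over i
             - (k * k).sum(dim=1).unsqueeze(-1))      # broadcasts over j
        w_ij = torch.sigmoid(s)                       # W_{i,j} = F_w(S_{i,j})
        r = torch.einsum("bij,bci->bcj", w_ij, v)     # R_j: weighted sum of V_i
        return f_up + r.view(B, C, H, W)              # residual output added back
```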
In the convolutional neural network constructed in this way, block-wise computation is used instead of computing over the whole feature map at once in order to reduce the amount of computation, while retaining context information as much as possible, so the method achieves both high recognition precision and high speed.
In another embodiment, the specific implementation process of constructing the convolutional neural network and inputting the fused labeled sample set into the convolutional neural network to obtain the training model with the minimum loss function includes:
and constructing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network, calculating a loss function of the convolutional neural network, and training the convolutional neural network by using a gradient descent method to obtain a training model with the minimum loss function.
Optionally, the loss function adopted in this embodiment is a smoothed IOU loss function, where the IOU (given as a formula image in the original filing) measures the intersection over union between the predicted segmentation and the label map. To suppress overfitting, this embodiment introduces the label smoothing commonly used in classification tasks, i.e., the one-hot encoding P_m of the labels in the segmentation task is smoothed (according to a second formula image) using the total number of classes M, the class index m, the label class y and a small preset hyper-parameter ε. Illustratively, when the backbone network is a deep residual network, ε = 10^-6; when the backbone network is a lightweight network, ε = 10^-5.
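As a concrete sketch of such a loss, the following assumes the standard differentiable (soft) IoU and the standard label-smoothing form, since the exact expressions are only given as formula images above; the class count of 3 matches the example later in the description:

```python
import torch
import torch.nn.functional as F

def smooth_iou_loss(logits, target, eps=1e-6, num_classes=3):
    """Soft IoU loss over label-smoothed one-hot targets (assumed standard forms).

    logits: (B, M, H, W) raw network outputs; target: (B, H, W) integer labels.
    """
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    smoothed = one_hot * (1.0 - eps) + eps / num_classes   # smoothed P_m
    inter = (probs * smoothed).sum(dim=(0, 2, 3))
    union = (probs + smoothed - probs * smoothed).sum(dim=(0, 2, 3))
    iou = inter / union.clamp_min(1e-12)
    return 1.0 - iou.mean()        # minimized by gradient descent during training
```

During training (step (5) of the example below), a loss of this kind would be minimized with a gradient-descent optimizer and the checkpoint with the highest validation mIOU kept.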
And S104, performing feature fusion on the image to be recognized obtained by the deep space exploration rover, and inputting the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
For example, the test sample after feature fusion may be input into a trained convolutional network, and the test sample set after feature fusion is tested by using the trained convolutional network, so as to obtain a segmentation recognition result of the obstacle.
In this embodiment, obstacles on the surface of a celestial body are recognized by deep learning, and the whole segmentation and recognition process after the network model is trained is fully automatic, requires no manual intervention, and is suitable for recognizing obstacles on the surfaces of various celestial bodies. Taking advantage of the rotation invariance and gray-scale invariance of the LBP operator, the original image is fused with its LBP feature maps and the fused image is used as the input of the whole network, which improves the accuracy of obstacle recognition and segmentation. In addition, this embodiment provides two versions of the backbone network: the basic-version backbone ResNet offers high segmentation and recognition precision and accuracy together with a reasonable recognition speed, while the fast-version backbone MobileNet offers a high recognition speed while maintaining good precision and accuracy. Moreover, a pre-trained model is used, which avoids random initialization of the model parameters at the start of training, facilitates targeted training for various extraterrestrial environments, and gives the network model good generalization. Finally, the model is end-to-end, so the early labeling and training are simple, easy to understand and convenient to deploy, which reduces engineering complexity and makes the method suitable for deep space exploration rover applications.
For example, in this embodiment, images captured by the Yutu ("Jade Rabbit") lunar rover carried by Chang'e-3 may be used as samples, with three segmentation classes: background, rock and crater. The specific process is as follows:
(1) Acquire the image data taken by the Chang'e-3 lander and the Yutu rover, and clean the data (for example, delete duplicate images and images in which nothing is discernible to the naked eye). After data cleaning, the number of images changed from 541 to 334.
(2) The obtained sample set is randomly divided into a training set, a verification set and a test set according to the ratio of 6:2: 2. Thus, 200 training set samples, 67 verification set samples and 67 test set samples were obtained.
(3) Compute the small-window and large-window LBP feature maps of all images, and fuse each original image with its small-window and large-window LBP feature maps as the three RGB channels to obtain a feature-fusion image. For convenience, all images input to the network are scaled to 513 × 513 pixels (three channels). To avoid overfitting, all training images undergo scaling, center cropping, random Gaussian noise, normalization and similar operations, and the labels are processed accordingly for computing the loss (labels only need the geometric transforms, without Gaussian noise or normalization); a minimal sketch of this preprocessing is given below.
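A minimal preprocessing sketch for step (3); the scaling factor, crop strategy, noise level and normalization statistics are illustrative assumptions, and only the geometric transforms are applied to the label map:

```python
import numpy as np
from skimage.transform import resize

def center_crop(arr, size):
    th, tw = size
    top = max((arr.shape[0] - th) // 2, 0)
    left = max((arr.shape[1] - tw) // 2, 0)
    return arr[top:top + th, left:left + tw]

def preprocess_pair(fused_img, label, out_size=(513, 513), scale=1.1, sigma=0.01):
    """Scaling, center cropping, Gaussian noise and normalization for the image;
    the label receives only the geometric transforms (nearest-neighbour)."""
    big = (int(out_size[0] * scale), int(out_size[1] * scale))
    img = resize(fused_img, big + (fused_img.shape[-1],), order=1,
                 anti_aliasing=True)
    lbl = resize(label, big, order=0, preserve_range=True, anti_aliasing=False)
    img, lbl = center_crop(img, out_size), center_crop(lbl, out_size)
    img = img + np.random.normal(0.0, sigma, img.shape)    # random Gaussian noise
    img = (img - img.mean()) / (img.std() + 1e-8)          # normalization
    return img.astype(np.float32), lbl.astype(np.int64)
```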
(4) Constructing a model:
1) In the feature extraction stage, MobileNet is used as the backbone for the fast segmentation and recognition scenario, and ResNet is used as the backbone in the basic version. An intermediate 24 × 129 × 129 feature map is output for subsequent splicing, which also compensates for the information lost in the high-dimensional features. The final output of feature extraction is a 320 × 33 × 33 high-dimensional feature map.
2) After the atrous spatial pyramid pooling (ASPP) module, the feature map becomes 256 × 33 × 33.
3) After the residual upsampling transform, splicing with the low-level feature map output by the feature extraction module, and a convolutional layer, the feature map becomes 3 × 129 × 129. Specifically, F_sim(Q_j, K_i) = -||Q_j - K_i||^2, F_w is the sigmoid function, and F_mul is point-wise multiplication.
4) Finally, a result map of size 3 × 513 × 513 is obtained after bilinear interpolation, where 3 corresponds to the three classes.
(5) Input the training samples into the constructed neural network, calculate the smoothed IOU loss, train the network with a gradient descent method, and store the training model that achieves the highest mIOU on the validation set.
(6) And inputting the test sample into the trained network model, and testing the test set sample by using the trained network model to obtain the segmentation recognition result of the obstacles such as the rock, the meteor crater and the like. Table 1 shows the comparison index and the identification speed of DeepLabv3+ and the method of the invention, wherein the backbone network adopts mobilenet and resnet respectively, wherein f/s represents the number of frames identified per second, and the video card used is NVIDIA 2080 TI.
TABLE 1 comparison of the conventional DeepLabv3+ identification method with the method of the invention
(The numerical results of Table 1 are provided as an image in the original filing.)
In the celestial body surface obstacle identification method described above, obstacles are labeled in an original image set obtained by the deep space exploration rover and feature fusion is performed to obtain a fused labeled sample set; a convolutional neural network is then constructed and the fused labeled sample set is input into it to obtain the training model with the minimum loss function; feature fusion is performed on an image to be recognized obtained by the rover and the fused image is input into the training model to obtain the segmentation recognition result for the obstacles. The method overcomes the problem of insufficient training data for extraterrestrial rovers, detects obstacles on an extraterrestrial body from a single image with high segmentation precision and high recognition speed, accurately segments obstacles that endanger the rover's motion, and is suitable for segmenting and recognizing obstacles on the surfaces of various celestial bodies.
It should be understood by those skilled in the art that the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Corresponding to the celestial body surface obstacle identification method described in the above embodiments, this embodiment provides a celestial body surface obstacle recognition device. Specifically, FIG. 7 is a schematic structural diagram of the celestial body surface obstacle recognition device of this embodiment. For convenience of explanation, only the portions related to this embodiment are shown.
The celestial body surface obstacle recognition device mainly includes: an image annotation module 110, a feature fusion module 120, a model building module 130, and a segmentation identification module 140.
The image labeling module 110 is configured to perform obstacle labeling on an original image set obtained by the deep space exploration rover, so as to obtain a labeled sample set.
The feature fusion module 120 is configured to perform feature fusion on each image in the labeled sample set to obtain the fused labeled sample set.
The model building module 130 is configured to build a convolutional neural network, and input the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function.
The segmentation recognition module 140 is configured to perform feature fusion on the image to be recognized obtained by the deep space exploration rover, and input the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
The celestial body surface obstacle recognition device described above labels obstacles in the deep space image set, performs feature fusion to obtain a fused labeled sample set, constructs a convolutional neural network, and inputs the fused labeled sample set into the network to obtain the training model with the minimum loss function; it then performs feature fusion on the image to be recognized and inputs the fused image into the training model to obtain the segmentation recognition result for the obstacles. The device overcomes the problem of insufficient training data for celestial body rovers, detects obstacles on a celestial body from a single image with high segmentation precision and high recognition speed, accurately segments obstacles that endanger the rover's motion, and is suitable for segmenting and recognizing obstacles on the surfaces of various celestial bodies.
The embodiment also provides a schematic diagram of a celestial surface obstacle recognition device 100. As shown in fig. 8, the celestial body surface obstacle recognition device 100 of this embodiment includes: a processor 150, a memory 160 and a computer program 161, such as a program for a method of identification of obstacles on a surface of a celestial object, stored in said memory 160 and operable on said processor 150.
The processor 150, when executing the computer program 161 on the memory 160, implements the steps of the above-described embodiment of the method for recognizing an obstacle on the surface of a celestial body, such as steps S101 to S104 shown in FIG. 1. Alternatively, the processor 150, when executing the computer program 161, implements the functions of the modules/units in the device embodiments described above, for example the functions of the modules 110 to 140 shown in FIG. 7.
Illustratively, the computer program 161 may be partitioned into one or more modules/units that are stored in the memory 160 and executed by the processor 150 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution process of the computer program 161 in the celestial surface obstacle recognition device 100. For example, the computer program 161 may be divided into the image labeling module 110, the feature fusion module 120, the model building module 130, and the segmentation identification module 140, and each module has the following specific functions:
the image labeling module 110 is configured to perform obstacle labeling on an original image set obtained by the deep space exploration rover, so as to obtain a labeled sample set.
The feature fusion module 120 is configured to perform feature fusion on each image in the labeled sample set to obtain the fused labeled sample set.
The model building module 130 is configured to build a convolutional neural network, and input the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function.
The segmentation recognition module 140 is configured to perform feature fusion on the image to be recognized obtained by the deep space exploration rover, and input the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
The celestial surface obstacle identification device 100 may include, but is not limited to, a processor 150, a memory 160. Those skilled in the art will appreciate that fig. 8 is merely an example of the celestial surface obstacle recognition device 100, and does not constitute a limitation of the celestial surface obstacle recognition device 100, and may include more or less components than those shown, or combine some components, or different components, for example, the celestial surface obstacle recognition device 100 may further include an input/output device, a network access device, a bus, etc.
The Processor 150 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 160 may be an internal storage unit of the celestial surface obstacle recognition device 100, such as a hard disk or a memory of the celestial surface obstacle recognition device 100. The memory 160 may also be an external storage device of the celestial surface obstacle recognition device 100, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like, provided on the celestial surface obstacle recognition device 100. Further, the memory 160 may also include both an internal storage unit and an external storage device of the celestial surface obstacle recognition device 100. The memory 160 is used to store the computer program and other programs and data required by the celestial surface obstacle recognition device 100. The memory 160 may also be used to temporarily store data that has been output or is to be output.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the division into the foregoing functional units and modules is merely illustrative; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the device may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain other components which may be suitably increased or decreased as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media which may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for recognizing obstacles on the surface of a celestial body is characterized by comprising the following steps:
carrying out obstacle labeling on an original image set obtained by the deep space exploration rover to obtain a labeled sample set;
performing feature fusion on each image in the labeled sample set to obtain a fused labeled sample set;
constructing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network, and obtaining a training model with the minimum loss function;
and performing feature fusion on the image to be recognized obtained by the deep space exploration rover, and inputting the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
2. The celestial surface obstacle recognition method of claim 1, wherein the feature fusion method for each image to be fused comprises:
calculating a circular LBP feature map of the image to be fused by sampling pixels in a circular neighborhood of a first size to obtain a first-window LBP feature map of the corresponding image;
calculating a circular LBP feature map of the image to be fused by sampling pixels in a circular neighborhood of a second size to obtain a second-window LBP feature map of the corresponding image;
and taking the image to be fused, the first-window LBP feature map and the second-window LBP feature map as the respective channel maps of an RGB image to obtain the feature-fused image of the image to be fused.
3. The method for recognizing the obstacle on the surface of the celestial body according to claim 1, wherein the constructing a convolutional neural network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function comprises:
and constructing a convolutional neural network based on a DeepLabv3+ network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with the minimum loss function.
4. The celestial surface obstacle recognition method of claim 3, wherein the constructing a convolutional neural network based on a DeepLabv3+ network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function comprises:
constructing a backbone network based on a DeepLabv3+ network, and inputting each fused image in the fused labeled sample set into the backbone network to obtain a first-layer feature and a second-layer feature corresponding to the fused image; wherein the backbone network is a deep residual network or a lightweight network;
performing convolution and pooling on each first-layer feature by atrous spatial pyramid pooling to obtain a first feature map corresponding to each first-layer feature;
performing upsampling on each first feature map by a residual upsampling conversion method to obtain a corresponding second feature map, and splicing each second feature map with the corresponding second-layer features to obtain a spliced feature map;
and performing up-sampling on the spliced characteristic diagram to obtain a segmentation result of the training sample, and determining a training model with the minimum loss function according to the segmentation result.
5. The celestial surface obstacle identification method of claim 4, wherein the upsampling each first feature map by a residual upsampling transformation method to obtain a corresponding second feature map comprises:
sequentially computing

Q_j = C_q(f̂^l_j), K_i = C_k(f^h_i), V_i = C_v(f^h_i)

S_{i,j} = F_{sim}(Q_j, K_i) = -||Q_j - K_i||^2

W_{i,j} = F_w(S_{i,j})

R_j = F_{mul}(W_{i,j}, V_i)

f^l_j = f̂^l_j + R_j

to obtain the corresponding second feature map f^l_j; wherein C_q, C_k and C_v each denote a convolutional layer, f̂^l denotes the feature map obtained by upsampling the first feature map f^h by a preset multiple, f̂^l_j denotes the j-th feature position of f̂^l, f^h_i denotes the i-th feature position of the first feature map f^h, F_w denotes the sigmoid function, and F_{mul} denotes point-wise multiplication.
6. The method for recognizing the obstacle on the surface of the celestial body according to claim 1, wherein the constructing a convolutional neural network, and inputting the fused labeled sample set into the convolutional neural network to obtain a training model with a minimum loss function comprises:
and constructing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network, calculating a loss function of the convolutional neural network, and training the convolutional neural network by using a gradient descent method to obtain a training model with the minimum loss function.
7. The celestial surface obstacle recognition method of claim 6, wherein the loss function is a smoothed IOU loss function, where the IOU (given as a formula image in the original filing) measures the intersection over union between the predicted segmentation and the label map, and smooth denotes that the one-hot encoding P_m of the labels in the segmentation task is smoothed according to a second formula image, wherein M represents the total number of classes, m represents one of those classes, y represents the label class, and ε is a preset hyper-parameter.
8. A celestial body surface obstacle recognition device, comprising:
the image labeling module is used for performing obstacle labeling on an original image set obtained by the deep space exploration rover to obtain a labeled sample set;
the feature fusion module is used for performing feature fusion on each image in the labeled sample set to obtain the fused labeled sample set;
the model establishing module is used for establishing a convolutional neural network, inputting the fused labeled sample set into the convolutional neural network and obtaining a training model with the minimum loss function;
and the segmentation recognition module is used for performing feature fusion on the image to be recognized obtained by the deep space exploration rover, and inputting the fused image to be recognized into the training model to obtain a segmentation recognition result of the obstacle.
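An illustrative inference path for the device of claim 8, assuming the trained model of claim 6 and a hypothetical fuse_features callable standing in for the feature fusion module:

import torch

def recognize_obstacles(model, fuse_features, raw_image, device='cuda'):
    # Fuse features of the image to be recognized, then feed the fused image to
    # the trained model to obtain the per-pixel obstacle segmentation result.
    model.eval()
    fused = fuse_features(raw_image)                  # feature fusion module
    with torch.no_grad():
        logits = model(fused.unsqueeze(0).to(device)) # segmentation recognition module
    return logits.argmax(dim=1).squeeze(0).cpu()      # per-pixel obstacle classes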
9. A celestial body surface obstacle recognition device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the celestial body surface obstacle recognition method of any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the celestial body surface obstacle recognition method of any one of claims 1-7.
CN202011404160.4A 2020-12-02 2020-12-02 Celestial body surface obstacle identification method and device Pending CN112528808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011404160.4A CN112528808A (en) 2020-12-02 2020-12-02 Celestial body surface obstacle identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011404160.4A CN112528808A (en) 2020-12-02 2020-12-02 Celestial body surface obstacle identification method and device

Publications (1)

Publication Number Publication Date
CN112528808A true CN112528808A (en) 2021-03-19

Family

ID=74997615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011404160.4A Pending CN112528808A (en) 2020-12-02 2020-12-02 Celestial body surface obstacle identification method and device

Country Status (1)

Country Link
CN (1) CN112528808A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050141617A1 (en) * 2003-12-27 2005-06-30 Samsung Electronics Co., Ltd. Residue image down/up sampling method and apparatus and image encoding/decoding method and apparatus using residue sampling
CN103035013A (en) * 2013-01-08 2013-04-10 东北师范大学 Accurate moving shadow detection method based on multi-feature fusion
CN107065571A (en) * 2017-06-06 2017-08-18 上海航天控制技术研究所 A kind of objects outside Earth soft landing Guidance and control method based on machine learning algorithm
US20200160065A1 (en) * 2018-08-10 2020-05-21 Naver Corporation Method for training a convolutional recurrent neural network and for semantic segmentation of inputted video using the trained convolutional recurrent neural network
US20200202533A1 (en) * 2018-12-24 2020-06-25 Adobe Inc. Identifying target objects using scale-diverse segmentation neural networks
CN111080659A (en) * 2019-12-19 2020-04-28 哈尔滨工业大学 Environmental semantic perception method based on visual information
CN111797836A (en) * 2020-06-18 2020-10-20 中国空间技术研究院 Extraterrestrial celestial body patrolling device obstacle segmentation method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU PENG ET AL.: "Mars terrain segmentation algorithm based on improved DeepLab-v3+", AEROSPACE CONTROL AND APPLICATION, 30 April 2023 (2023-04-30) *
XING YAN ET AL.: "Autonomous local obstacle avoidance planning for lunar surface rover exploration", CONTROL THEORY & APPLICATIONS, 31 December 2019 (2019-12-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378390A (en) * 2021-06-15 2021-09-10 浙江大学 Extraterrestrial star traffic analysis method and extraterrestrial star traffic analysis system based on deep learning
CN113378390B (en) * 2021-06-15 2022-06-24 浙江大学 Method and system for analyzing trafficability of extraterrestrial ephemeris based on deep learning

Similar Documents

Publication Publication Date Title
CN111507927B (en) Method and device for integrating images and point cloud images in neural network
Audebert et al. Joint learning from earth observation and openstreetmap data to get faster better semantic maps
CN112529015A (en) Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
Zhou et al. Embedded control gate fusion and attention residual learning for RGB–thermal urban scene parsing
Marcu et al. A multi-stage multi-task neural network for aerial scene interpretation and geolocalization
Chen et al. 3D photogrammetry point cloud segmentation using a model ensembling framework
CN114049356B (en) Method, device and system for detecting structure apparent crack
CN116797787B (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
CN113012177A (en) Three-dimensional point cloud segmentation method based on geometric feature extraction and edge perception coding
EP4174792A1 (en) Method for scene understanding and semantic analysis of objects
CN110619299A (en) Object recognition SLAM method and device based on grid
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
CN110569387B (en) Radar-image cross-modal retrieval method based on depth hash algorithm
Chiang et al. Training deep learning models for geographic feature recognition from historical maps
CN112528808A (en) Celestial body surface obstacle identification method and device
Jia et al. Self-supervised depth estimation leveraging global perception and geometric smoothness
CN113592015A (en) Method and device for positioning and training feature matching network
Qayyum et al. Deep convolutional neural network processing of aerial stereo imagery to monitor vulnerable zones near power lines
CN115546649B (en) Single-view remote sensing image height estimation and semantic segmentation multi-task prediction method
Moghalles et al. Weakly supervised building semantic segmentation via superpixel‐CRF with initial deep seeds guiding
Endo et al. High definition map aided object detection for autonomous driving in urban areas
Ahmed et al. Classification of semantic segmentation using fully convolutional networks based unmanned aerial vehicle application
Chougule et al. AGD-Net: Attention-Guided Dense Inception U-Net for Single-Image Dehazing
CN114596474A (en) Monocular depth estimation method fusing multi-mode information
De Giacomo et al. Guided sonar-to-satellite translation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination