CN114596503A - Road extraction method based on remote sensing satellite image - Google Patents

Road extraction method based on remote sensing satellite image

Info

Publication number
CN114596503A
Authority
CN
China
Prior art keywords
road
feature
features
semantic
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210208048.6A
Other languages
Chinese (zh)
Inventor
殷玮伶
王立君
戚金清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202210208048.6A priority Critical patent/CN114596503A/en
Publication of CN114596503A publication Critical patent/CN114596503A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/2113 Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer-based road extraction and provides a road extraction method based on remote sensing satellite images. The proposed network consists of a feature encoder, an iterative feature enhancement sub-network and a multi-task decoder. Each unit of the iterative feature enhancement sub-network comprises a semantic-guided feature enhancement module, a direction-aware feature aggregation module and two side branches; the semantic-guided feature enhancement module comprises feature selection and feature fusion; the direction-aware feature aggregation module comprises a multi-branch residual structure in which each branch contains a direction-aware deformable convolution and a ReLU unit. With the proposed semantic-guided feature enhancement module, direction-aware feature aggregation module and direction-aware deformable convolution, the road direction is better exploited to automatically align the convolution kernel receptive field with the road area. The method addresses the connectivity problem of automatic road extraction from remote sensing satellite images, and its output is both accurate and efficient.

Description

Road extraction method based on remote sensing satellite image
Technical Field
The invention relates to the field of road extraction within computer vision, and in particular to a road extraction method based on remote sensing satellite images.
Background
The image semantic segmentation task aims to help a computer understand the object categories and positions present in a real scene, identifying the content of an image and its corresponding locations according to user-defined object categories. The task of remote sensing satellite image road extraction based on semantic segmentation aims to predict a binary road segmentation map and locate road positions in the remote sensing satellite image. In recent years, with the great success of deep neural network models in many computer vision tasks such as image classification, object detection and semantic segmentation, deep learning based methods have markedly improved the quality of remote sensing satellite image road extraction on several large public datasets.
However, recent research has found that remote sensing satellite image road extraction poses more technical challenges than generic image semantic segmentation. In remote sensing satellite images, road areas are often occluded by cloud cover or by high-rise buildings and their shadows; road appearance is strongly affected by objective factors such as weather, illumination intensity and shooting angle; and road areas share highly similar texture features with adjacent non-road areas. Consequently, directly applying image semantic segmentation techniques to the road extraction task usually yields fragmented road networks that fall far short of industrial application requirements. Researchers have therefore attempted to obtain better road extraction results by improving the neural network structure. Abhishek Chaurasia et al. proposed LinkNet, a network architecture that enhances information transfer by increasing the flow of information between encoder and decoder and feeding spatial information directly from the encoder to the decoder. Building on this, Lichen Zhou et al. placed a dilated convolution module after the encoder, containing dilated (atrous) convolutions in both cascaded and parallel arrangements; because each path has a different receptive field, the network can capture multi-scale features and obtain better detection results. Yao Wei et al. used road boundaries as prior information to reduce deviation along the road edge direction. Anil Batra et al. proposed StackHourglass, a multi-task learning method for remote sensing image road extraction that uses the road direction as supervision, providing initial evidence that road direction information benefits the road extraction task. These methods improve road consistency to some extent, but they do not make full use of road direction information to assist road extraction. The construction of road topology always follows the guidance of the road direction, and exploiting the road direction appropriately can promote more accurate road extraction results; the road direction is intrinsically correlated with the road extraction task.
Based on an investigation and analysis of the existing remote sensing image road extraction method StackHourglass, the invention provides a road extraction method based on remote sensing satellite images, designed around the appearance and geometric characteristics of road elements in such images.
Disclosure of Invention
The invention aims to overcome the insufficient use of road direction information in the existing StackHourglass method and to provide a novel method that fully exploits road direction information to assist road extraction. Given a single RGB image, the proposed method outputs a binary mask indicating road pixels, and it is applicable to various remote sensing satellite images.
The invention relates to a road extraction method based on remote sensing satellite images, realized with a convolutional neural network. The road direction is used as prior knowledge to reason about road areas more accurately, especially in occluded and noisy regions, thereby producing more complete and consistent road extraction results. Meanwhile, the road extraction task is used in turn to assist road direction prediction. The network proposed by the invention consists of three components: a feature encoder, an iterative feature enhancement sub-network, and a multi-task decoder.
The technical scheme of the invention is as follows: a remote sensing satellite image road extraction method (IterNet) comprising a feature encoder, an iterative feature enhancement sub-network and a multi-task decoder; the specific extraction steps are as follows:
Step 1: convert the remote sensing satellite image into an RGB image and input it into the feature encoder for feature extraction; the encoder outputs a feature map downsampled by a factor of 4 relative to the original input image, from which road semantic features and road direction features are extracted;
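For illustration, the stride-4 feature extraction of Step 1 can be sketched in PyTorch with a truncated ResNet50 (the encoder named later in this description); the class name and the truncation point are assumptions for illustration only:

```python
import torch.nn as nn
from torchvision.models import resnet50

class FeatureEncoder(nn.Module):
    """Illustrative stride-4 encoder: a ResNet50 kept up to layer1, so an
    H x W RGB input yields a 256-channel feature map of size H/4 x W/4."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)  # pretrained weights optional
        self.stem = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu,  # stride 2
            backbone.maxpool,                             # stride 4
            backbone.layer1,                              # stride stays 4
        )

    def forward(self, x):        # x: (B, 3, H, W)
        return self.stem(x)      # (B, 256, H/4, W/4)
```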
Step 2: the features extracted in Step 1 are fed into the iterative feature enhancement sub-network, which comprises a plurality of feature enhancement units; each unit comprises a semantic-guided feature enhancement module (SGFE), a direction-aware feature aggregation module (OAFA) and two side branches;
the road semantic features and road direction features are input to the semantic-guided feature enhancement module to obtain enhanced road direction features; the enhanced road direction features are split into two paths: one is used for feature fusion, and the other is fed into a side branch, composed of two convolutional layers, to obtain a preliminary road direction prediction;
the semantic-guided feature enhancement module comprises a feature selection part and a feature fusion part; the feature selection part comprises a convolutional layer, an activation function layer and an hourglass-shaped sub-network;
in the feature selection part, the input road semantic features and road direction features are concatenated, and the concatenated features are fed through the convolutional layer and the activation function layer to obtain a single-channel feature confidence map:
B = Sigmoid(Ψ_1(cat(F_e, F_o), Θ_1))    (1)
where B is the single-channel feature confidence map; cat(·,·) denotes concatenation along the channel dimension; Ψ_1(·, Θ_1) denotes a convolutional layer with parameters Θ_1; F_e denotes the road semantic features; and F_o denotes the road direction features;
the generated single-channel feature confidence map selects features from the road semantic features and the road direction features, and the selected features are input into the hourglass-shaped sub-network; the hourglass-shaped sub-network first down-samples and then up-samples the selected features, which enlarges the receptive field and captures global context more effectively:
F_a = Ψ_2((1 − B)·F_e + B·F_o, Θ_2)    (2)
where Ψ_2(·, Θ_2) denotes a convolutional layer with parameters Θ_2, and F_a denotes the selected features;
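Equations (1)-(2) admit a compact PyTorch sketch (the kernel sizes of Ψ_1 and Ψ_2 below are assumptions; the hourglass-shaped sub-network that follows is shown separately in the fusion sketch after equation (3)):

```python
import torch
import torch.nn as nn

class FeatureSelection(nn.Module):
    """Sketch of SGFE feature selection, equations (1)-(2): a conv + Sigmoid
    yields a single-channel confidence map B that softly routes between
    semantic features Fe and direction features Fo."""
    def __init__(self, channels):
        super().__init__()
        self.psi1 = nn.Conv2d(2 * channels, 1, kernel_size=1)                # Ψ_1
        self.psi2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # Ψ_2

    def forward(self, Fe, Fo):
        B = torch.sigmoid(self.psi1(torch.cat([Fe, Fo], dim=1)))  # eq (1)
        Fa = self.psi2((1 - B) * Fe + B * Fo)                     # eq (2)
        return Fa
```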
the feature fusion part performs weighted feature fusion of the input road semantic features, the road direction features and the result of the feature selection part using a fusion mask, yielding the enhanced road direction features:
F′_o = (1 + M)·F_o + (1 + M)·F_e    (3)
M = Sigmoid(D(E(F_a)))
where E(·) and D(·) denote the encoding and decoding parts of the hourglass-shaped sub-network, F′_o denotes the enhanced road direction features, and M denotes the fusion mask;
the preliminary road direction prediction and the road semantic features are input into the direction-aware feature aggregation module to generate reinforced road semantic features; the reinforced road semantic features are split into two paths: one is used for feature fusion, and the other is fed into the second side branch to generate a preliminary road segmentation result used to supervise network learning;
the direction-aware feature aggregation module adjusts the shape and orientation of the receptive field according to the preliminary road direction prediction and focuses on the informative regions for road extraction; it comprises a multi-branch residual structure in which each branch contains a direction-aware deformable convolution (OD-Conv) and a ReLU unit; different branches adopt convolution kernels of different sizes and shapes; the features from all branches are combined with the input road semantic features by a weighted sum to output the reinforced road semantic features:
F′_e = F_e + λ·Σ_i Φ_O(F_e, α, θ_i)    (4)
where F_e denotes the input road semantic features and F′_e the reinforced road semantic features; α is the road direction obtained by the side branch from the road direction features; i is the branch index; Φ_O(·, ·, θ_i) denotes the direction-aware deformable convolution of branch i with parameters θ_i; and λ is a hyper-parameter of the residual structure that balances the aggregated features against the original features;
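Equation (4) corresponds to a multi-branch residual block. The sketch below uses the 5 × 1, 1 × 5 and 3 × 3 kernels named later in the description, with plain convolutions standing in for OD-Conv (the direction-aware sampling itself is sketched after equation (8)); the value of λ is an assumption:

```python
import torch
import torch.nn as nn

class OAFA(nn.Module):
    """Sketch of equation (4): parallel branches with differently shaped
    kernels, combined residually with weight lambda. Plain convolutions
    stand in for the direction-aware deformable convolution here."""
    def __init__(self, channels, lam=0.1):  # lam value is an assumption
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, (5, 1), padding=(2, 0)),
            nn.Conv2d(channels, channels, (1, 5), padding=(0, 2)),
            nn.Conv2d(channels, channels, 3, padding=1),
        ])
        self.relu = nn.ReLU(inplace=True)
        self.lam = lam

    def forward(self, Fe, alpha=None):  # alpha: direction map (unused by the stand-ins)
        agg = sum(self.relu(b(Fe)) for b in self.branches)
        return Fe + self.lam * agg      # eq (4)
```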
let w denote the parameters of a conventional 2D convolutional layer, whose output is computed by equation (5):
Y(p) = Σ_{p_n ∈ R} w(p_n)·X(p + p_n)    (5)
where X denotes the input feature map, Y(p) the output value at position p, and R the receptive field of the 2D convolution. Taking a 3 × 3 convolution kernel as an example, the two-dimensional grid R is defined as equation (6):
R = {(−1, −1), (−1, 0), ..., (0, 1), (1, 1)}    (6)
the convolution kernel of the direction-aware deformable convolution automatically adjusts its receptive field according to the preliminary road direction prediction; the output of the direction-aware deformable convolution layer is
Y(p) = Σ_{p_n ∈ R} w(p_n)·X(p + r(p_n, α(p)))    (7)
where X denotes the input feature map; Y(p) the output value at position p; R the receptive field; α(p) the predicted road direction angle at position p; r(p, α) the rotational transformation determined by direction α applied to a coordinate p = [p_x, p_y]; and w(p_n) the weights over the convolution kernel receptive field:
r(p, α) = [p_x·cos α − p_y·sin α, p_x·sin α + p_y·cos α]    (8)
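Equations (7)-(8) can be sketched by rotating the 3 × 3 sampling grid of equation (6) at every pixel and sampling with bilinear interpolation via grid_sample, consistent with the note below that non-integer coordinates are sampled bilinearly. Treating α as a dense map of angles in radians is an assumption (the method actually predicts quantized direction classes):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ODConv2d(nn.Module):
    """Sketch of OD-Conv, equations (7)-(8): a 3x3 kernel whose sampling
    grid is rotated at every pixel by the predicted direction alpha(p)."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)
        # fixed 3x3 offset grid R of eq (6): p_n in {-1,0,1} x {-1,0,1}
        ys, xs = torch.meshgrid(torch.arange(-1., 2.), torch.arange(-1., 2.), indexing='ij')
        self.register_buffer('offsets', torch.stack([xs, ys], dim=-1).view(9, 2))

    def forward(self, X, alpha):  # X: (B,C,H,W); alpha: (B,1,H,W), radians
        B, C, H, W = X.shape
        cos, sin = torch.cos(alpha), torch.sin(alpha)
        gy, gx = torch.meshgrid(torch.arange(H, device=X.device, dtype=X.dtype),
                                torch.arange(W, device=X.device, dtype=X.dtype),
                                indexing='ij')
        out = 0
        for k, (ox, oy) in enumerate(self.offsets):
            # eq (8): rotate offset (ox, oy) by alpha(p) at every position p
            rx = (ox * cos - oy * sin).squeeze(1)          # (B,H,W)
            ry = (ox * sin + oy * cos).squeeze(1)
            sx = (gx + rx) / (W - 1) * 2 - 1               # normalize to [-1,1]
            sy = (gy + ry) / (H - 1) * 2 - 1
            grid = torch.stack([sx, sy], dim=-1)           # (B,H,W,2)
            Xk = F.grid_sample(X, grid, mode='bilinear', align_corners=True)
            # accumulate w(p_n) * X(p + r(p_n, alpha(p)))  -- eq (7)
            i, j = divmod(k, 3)
            out = out + F.conv2d(Xk, self.weight[:, :, i:i+1, j:j+1])
        return out
```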
the enhanced direction features, the reinforced road semantic features, the original road semantic features and the original road direction features are fused, and the fused features are input into the next feature enhancement unit;
Step 3: the output features of the last feature enhancement unit are fed into the multi-task decoder to obtain the road extraction result and the road direction prediction result.
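A two-branch decoder consistent with the Detailed Description (convolution plus up-sampling to restore the stride-4 resolution, with a 1-channel road mask head and a 37-way direction head covering 36 angle bins plus background, per the data-processing section) can be sketched as follows; the channel widths are assumptions:

```python
import torch.nn as nn

def _up_block(cin, cout):
    """Conv + 2x bilinear upsample; two of these undo the stride-4 encoder."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False))

class MultiTaskDecoder(nn.Module):
    """Sketch: two branches restore full resolution and emit a binary road
    segmentation map and a 37-way road direction map (36 bins + background)."""
    def __init__(self, channels):  # channels assumed divisible by 4
        super().__init__()
        self.seg = nn.Sequential(_up_block(channels, channels // 2),
                                 _up_block(channels // 2, channels // 4),
                                 nn.Conv2d(channels // 4, 1, 1))    # road mask logits
        self.dir = nn.Sequential(_up_block(channels, channels // 2),
                                 _up_block(channels // 2, channels // 4),
                                 nn.Conv2d(channels // 4, 37, 1))   # direction logits

    def forward(self, feats):
        return self.seg(feats), self.dir(feats)
```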
The convolution kernels of different sizes and shapes are 5 × 1, 1 × 5 and 3 × 3, respectively.
And when the r (p, alpha) is a non-integer, sampling the input characteristic diagram X by adopting bilinear interpolation.
ResNet50 is selected as the feature encoder.
the invention has the beneficial effects that: the method solves the connectivity problem of the automatic road extraction task in the remote sensing satellite image, and the result output by the method has high accuracy and high efficiency; the method can make full use of prior information contained in the road direction, and assist in improving the result accuracy of the road extraction task; the proposed semantic guidance feature enhancement module can supplement and enhance road direction information by utilizing road semantic information; the proposed OD-Conv can automatically align the convolution kernel receptive field and the road area by using the road direction better.
Drawings
Fig. 1 is an overall network structure of the present invention.
Fig. 2 is a schematic diagram of the semantic-guided feature enhancement (SGFE) module proposed by the present invention.
FIG. 3 is a schematic diagram of the direction-aware deformable convolution according to the present invention. The left side is the receptive field on the input image and the right side is the proposed direction-aware deformable convolution.
Fig. 4 is a schematic diagram of a direction-aware feature aggregation module OAFA according to the present invention.
Fig. 5 is an iterative feature enhancement sub-network proposed by the present invention.
Fig. 6 shows the experimental results of the present invention, which are (a) remote sensing satellite images, (b) road extraction truth values, (c) results of multitask learning only, (d) results of SGFE module, (e) results of OAFA module, and (f) results of the method of the present invention, from left to right, respectively.
Detailed Description
The following further describes a specific embodiment of the present invention with reference to the drawings and technical solutions.
Fig. 1 shows the overall structure of the proposed IterNet, which is composed of a remote sensing satellite image feature encoder, an iterative feature enhancement sub-network and a multi-task decoder. The remote sensing satellite image is converted into an RGB image and fed into the proposed feature encoder for feature extraction, and the extracted remote sensing image features are fed into the iterative feature enhancement sub-network. The sub-network is composed of a plurality of feature enhancement units, each comprising an SGFE module, an OAFA module and two side branches. In each feature enhancement unit, road semantic features and road direction features are first extracted from the input features by convolution operations, feature enhancement is then performed by the proposed SGFE and OAFA modules, and the fused features are output to the next feature enhancement unit.
Fig. 2 shows the proposed SGFE module, which enhances the road direction features using road semantic information; Fig. 4 shows the proposed OAFA module, which adjusts the shape and orientation of the OD-Conv receptive field using the preliminary road direction prediction and automatically focuses on the informative regions for road extraction. Fig. 3 illustrates that the proposed OD-Conv adaptively attends to the informative road region and aggregates information from the road surface and its surroundings, making the road extraction result more accurate. Fig. 5 shows the proposed iterative feature enhancement sub-network: the output of each feature enhancement unit serves as the input of the next, and the first unit takes the output of the feature encoder as its input. The last feature enhancement unit feeds the enhanced features into a two-branch multi-task decoder, which restores the image resolution through a series of convolution and up-sampling operations and outputs the final prediction.
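The chaining of units can be summarized in the following composition sketch (the unit abstraction and the default of 3 units are assumptions; the filing does not state the number of units):

```python
import torch.nn as nn

class IterNet(nn.Module):
    """Sketch of the overall pipeline: encoder -> N feature enhancement
    units chained output-to-input -> multi-task decoder. `make_unit` builds
    any module mapping fused features to fused features (e.g. SGFE + OAFA
    plus side branches)."""
    def __init__(self, encoder, make_unit, decoder, num_units=3):
        super().__init__()
        self.encoder = encoder
        self.units = nn.ModuleList([make_unit() for _ in range(num_units)])
        self.decoder = decoder

    def forward(self, rgb):
        feats = self.encoder(rgb)       # stride-4 features
        for unit in self.units:         # previous unit's output feeds the next
            feats = unit(feats)
        return self.decoder(feats)      # (road mask, road direction)
```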
The method is trained and tested on the DeepGlobe and Massachusetts datasets. The DeepGlobe dataset includes images from three different regions in Thailand, Indonesia and India, covering a total area of 2220 square kilometres; it contains satellite images of urban and suburban areas with corresponding pixel-level ground truth. The ground resolution of the imagery is 50 cm, and each image is 1024 × 1024 pixels. The invention uses 4696 images for training and 1530 for testing. The Massachusetts dataset consists of 1171 aerial images of the state of Massachusetts, each 1500 × 1500 pixels at a spatial resolution of 1 m per pixel, covering urban and suburban areas with corresponding pixel-level ground truth; the invention uses the validation set of this dataset for testing.
The work divides into two subtasks: data processing and network training. For data processing, the invention follows the setup of Anil Batra et al. and generates the ground truth for the road direction prediction task from the road surface ground truth. First, the pixel-level annotations are skeletonized to produce a road centerline ground truth, which is then smoothed with the algorithm proposed by David H. Douglas et al. For each segment of the road centerline, a series of keypoints is generated by interpolation, and the direction angle between every pair of adjacent keypoints is computed. To simplify direction estimation, the directions are clustered into bins of 10 degrees. The skeletonized road direction ground truth is then dilated outward by 20 pixels to produce a pixel-level road direction ground truth for supervising the road direction prediction task. Finally, 36 direction classes describe the road surface direction within road areas, non-road areas are labelled as a background class, and road direction prediction becomes a pixel-level classification problem over 36 angle classes plus 1 background class.
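The angle binning step can be sketched as follows (a minimal illustration; the exact class encoding, e.g. reserving class 0 for background, is an assumption):

```python
import numpy as np

def quantize_direction(p1, p2, bin_size=10):
    """Cluster the angle between two adjacent centerline keypoints into
    360 / bin_size = 36 classes; class 0 is reserved for background."""
    dy, dx = p2[1] - p1[1], p2[0] - p1[0]
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0  # direction in [0, 360)
    return 1 + int(angle // bin_size)               # classes 1..36; 0 = background

# e.g. two keypoints on a segment heading at roughly 42 degrees
print(quantize_direction((0, 0), (10, 9)))  # -> 5
```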
For network training, all experiments were carried out on a server with 16 GB RAM and two NVIDIA 2080Ti GPUs (11 GB), using the PyTorch framework for training and testing. During training, 256 × 256 inputs to IterNet are generated by randomly cropping 512 × 512 images, and data augmentation operations such as random horizontal flipping, mirroring and rotation are applied to avoid overfitting. The whole network is optimized by stochastic gradient descent with momentum 0.9 and batch size 32 for 160 epochs in total. The initial learning rate is 2e-2; it is divided by 10 after epoch 60, set to 5e-4 after epoch 90, and divided by 10 again after epoch 120. At inference, DeepGlobe images are 512 × 512 and Massachusetts images are 320 × 320, and they are fed directly into the network without further cropping.
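Under the stated hyper-parameters, the schedule can be reproduced with a standard LambdaLR; the exact epoch boundaries below are one reading of "after round 60/90/120", and the model is a stand-in:

```python
import torch

def lr_at_epoch(epoch):
    """2e-2 initially, /10 after epoch 60, 5e-4 after epoch 90, /10 after 120."""
    if epoch < 60:
        return 2e-2
    if epoch < 90:
        return 2e-3
    if epoch < 120:
        return 5e-4
    return 5e-5

model = torch.nn.Conv2d(3, 1, 3)  # stand-in for IterNet
optimizer = torch.optim.SGD(model.parameters(), lr=2e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda e: lr_at_epoch(e) / 2e-2)  # scale vs. base LR

for epoch in range(160):  # batch size 32 per the description
    # ... one training epoch over random 256x256 crops ...
    scheduler.step()
```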
The comparison method chosen in this implementation is StackHourglass, which simply uses road direction information as supervision to obtain better road extraction results. For a fair comparison, StackHourglass is run with its published code and recommended parameter settings, using the same pre-trained network and the same test sets. The final experimental results show that the proposed IterNet achieves the best performance under all experimental settings on the IoU, F1, Precision and Recall metrics; higher values of these metrics indicate higher accuracy. The specific experimental results on the DeepGlobe dataset are shown in Table 1 below:
TABLE 1 Experimental results on the DeepGlobe dataset
(The table is reproduced as an image in the original publication; its numerical values are not recoverable from the text.)
The results of the specific experiments on the Massachusetts dataset are shown in table 2 below:
TABLE 2 Experimental results on the Massachusetts dataset
(The table is reproduced as an image in the original publication; its numerical values are not recoverable from the text.)

Claims (3)

1. A road extraction method based on remote sensing satellite images, characterized by comprising a feature encoder, an iterative feature enhancement sub-network and a multi-task decoder; the specific extraction steps are as follows:
step 1: converting the remote sensing satellite image into an RGB image and inputting it into the feature encoder for feature extraction, outputting a feature map downsampled by a factor of 4 relative to the original input image, and acquiring road semantic features and road direction features;
step 2: feeding the features extracted in step 1 into the iterative feature enhancement sub-network, which comprises a plurality of feature enhancement units, each comprising a semantic-guided feature enhancement module, a direction-aware feature aggregation module and two side branches;
inputting the road semantic features and road direction features to the semantic-guided feature enhancement module to obtain enhanced road direction features; the enhanced road direction features are split into two paths: one is used for feature fusion, and the other is fed into a side branch, composed of two convolutional layers, to obtain a preliminary road direction prediction;
the semantic-guided feature enhancement module comprises a feature selection part and a feature fusion part; the feature selection part comprises a convolutional layer, an activation function layer and an hourglass-shaped sub-network;
in the feature selection part, the input road semantic features and road direction features are concatenated, and the concatenated features are fed through the convolutional layer and the activation function layer to obtain a single-channel feature confidence map:
B = Sigmoid(Ψ_1(cat(F_e, F_o), Θ_1))    (1)
where B is the single-channel feature confidence map; cat(·,·) denotes concatenation along the channel dimension; Ψ_1(·, Θ_1) denotes a convolutional layer with parameters Θ_1; F_e denotes the road semantic features; and F_o denotes the road direction features;
the generated single-channel feature confidence map selects features from the road semantic features and road direction features, and the selected features are input into the hourglass-shaped sub-network, which first down-samples and then up-samples them:
F_a = Ψ_2((1 − B)·F_e + B·F_o, Θ_2)    (2)
where Ψ_2(·, Θ_2) denotes a convolutional layer with parameters Θ_2, and F_a denotes the selected features;
the feature fusion part performs weighted feature fusion of the input road semantic features, the road direction features and the result of the feature selection part using a fusion mask, obtaining the enhanced road direction features:
F′_o = (1 + M)·F_o + (1 + M)·F_e    (3)
M = Sigmoid(D(E(F_a)))
where E(·) and D(·) denote the encoding and decoding parts of the hourglass-shaped sub-network, F′_o denotes the enhanced road direction features, and M denotes the fusion mask;
inputting the preliminary road direction prediction and the road semantic features into the direction-aware feature aggregation module to generate reinforced road semantic features; the reinforced road semantic features are split into two paths: one is used for feature fusion, and the other is fed into the second side branch to generate a preliminary road segmentation result used to supervise network learning;
the direction-aware feature aggregation module adjusts the shape and orientation of the receptive field according to the preliminary road direction prediction and focuses on the informative regions for road extraction; it comprises a multi-branch residual structure in which each branch contains a direction-aware deformable convolution and a ReLU unit; different branches adopt convolution kernels of different sizes and shapes; the features from all branches are combined with the input road semantic features by a weighted sum to output the reinforced road semantic features:
F′_e = F_e + λ·Σ_i Φ_O(F_e, α, θ_i)    (4)
where F_e denotes the input road semantic features and F′_e the reinforced road semantic features; α denotes the road direction obtained by the side branch from the road direction features; i denotes the branch index; Φ_O(·, ·, θ_i) denotes the direction-aware deformable convolution of branch i with parameters θ_i; and λ is a hyper-parameter of the residual structure that balances the aggregated features against the original features;
the convolution kernel of the direction-aware deformable convolution automatically adjusts its receptive field according to the preliminary road direction prediction; the output of the direction-aware deformable convolution layer is
Y(p) = Σ_{p_n ∈ R} w(p_n)·X(p + r(p_n, α(p)))
where X denotes the input feature map; Y(p) the output value at position p; R the receptive field; α(p) the predicted road direction angle at position p; r(p, α) the rotational transformation determined by direction α applied to a coordinate p = [p_x, p_y]; and w(p_n) the weights over the convolution kernel receptive field:
r(p, α) = [p_x·cos α − p_y·sin α, p_x·sin α + p_y·cos α]
fusing the enhanced direction features, the enhanced road semantic features, the original road semantic features and the original road direction features, and inputting the fused features into a next feature enhancement unit;
step 3: feeding the output features of the last feature enhancement unit into the multi-task decoder to obtain the road extraction result and the road direction prediction result.
2. The road extraction method based on remote sensing satellite images according to claim 1, wherein the convolution kernels of different sizes and shapes are 5 × 1, 1 × 5 and 3 × 3, respectively.
3. The road extraction method based on remote sensing satellite images according to claim 1 or 2, wherein when r(p, α) yields non-integer coordinates, bilinear interpolation is adopted to sample the input feature map X.
CN202210208048.6A 2022-03-03 2022-03-03 Road extraction method based on remote sensing satellite image Pending CN114596503A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210208048.6A CN114596503A (en) 2022-03-03 2022-03-03 Road extraction method based on remote sensing satellite image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210208048.6A CN114596503A (en) 2022-03-03 2022-03-03 Road extraction method based on remote sensing satellite image

Publications (1)

Publication Number Publication Date
CN114596503A true CN114596503A (en) 2022-06-07

Family

ID=81816336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210208048.6A Pending CN114596503A (en) 2022-03-03 2022-03-03 Road extraction method based on remote sensing satellite image

Country Status (1)

Country Link
CN (1) CN114596503A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116343063A (en) * 2023-05-26 2023-06-27 南京航空航天大学 Road network extraction method, system, equipment and computer readable storage medium
CN116343063B (en) * 2023-05-26 2023-08-11 南京航空航天大学 Road network extraction method, system, equipment and computer readable storage medium
CN116740306A (en) * 2023-08-09 2023-09-12 北京空间飞行器总体设计部 Remote sensing satellite earth observation task planning method and device based on road network guidance
CN116740306B (en) * 2023-08-09 2023-11-07 北京空间飞行器总体设计部 Remote sensing satellite earth observation task planning method and device based on road network guidance


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination