CN114519819A - Remote sensing image target detection method based on global context awareness - Google Patents

Remote sensing image target detection method based on global context awareness

Info

Publication number
CN114519819A
CN114519819A
Authority
CN
China
Prior art keywords
feature
target
features
feature map
candidate region
Prior art date
Legal status
Granted
Application number
CN202210126106.0A
Other languages
Chinese (zh)
Other versions
CN114519819B (en)
Inventor
Zhang Ke
Wu Yulin
Wang Jingyu
Su Yu
Zhang Ye
Li Haoyu
Tan Minghu
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210126106.0A priority Critical patent/CN114519819B/en
Publication of CN114519819A publication Critical patent/CN114519819A/en
Application granted granted Critical
Publication of CN114519819B publication Critical patent/CN114519819B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image target detection method based on global context awareness. The method extracts image features with a deep residual network (ResNet101), further processes them with a feature pyramid network (FPN) and generates candidate regions; after the candidate regions are generated, their features are aligned by feature pooling; a global context extraction module is added at the highest layer of the feature extraction network, and the extracted features are fused with the original features by addition to obtain new features; finally, the new features are classified by fully connected layers to produce target categories and bounding boxes. By exploiting the rich semantic information of high-level features, the method fully extracts the scene information of the image and further enhances the feature representation, which increases the recognition accuracy of dense targets and, to some extent, of other targets as well, thereby improving overall target detection performance in remote sensing images.

Description

Remote sensing image target detection method based on global context awareness
Technical Field
The invention belongs to the technical field of pattern recognition, and particularly relates to a remote sensing image target detection method.
Background
Remote sensing image analysis has long been a research hotspot in computer vision and is widely applied in urban planning, land use management, environmental monitoring and other fields. Target detection is a basic task in computer vision that supports downstream tasks such as event detection, target tracking, human-computer interaction and scene segmentation. Remote sensing images are usually captured from high altitude, and the viewing angle and height vary with the airborne or spaceborne sensor. Compared with natural images, remote sensing images contain richer scene information, more target types and densely arranged targets, which makes remote sensing target detection highly challenging. Although several algorithms have been proposed for remote sensing image target detection, their performance still has room for improvement, so the problem remains one of the hot topics of current research.
A feature-enhanced SSD algorithm has been applied to remote sensing target detection (Acta Photonica Sinica, 2020, 49(01): 154-). That method improves the network's ability to extract small-target features by designing a shallow feature enhancement module, and designs a deep feature enhancement module to replace the deep network in the SSD pyramid feature layers. However, it does not fully exploit the rich scene information in remote sensing images, so the improvement it achieves is limited.
The original FPN detection algorithm performs poorly on dense targets in remote sensing images because the feature pyramid network lacks sufficient scene information. Detecting dense objects relies on scene information: for example, cars appear only in parking lots or on roads, and cars generally appear near other cars. Without this awareness of context information, i.e., the global context, the network has difficulty identifying dense targets.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a remote sensing image target detection method based on global context awareness. The method extracts image features with a deep residual network (ResNet101), further processes them with a feature pyramid network (FPN) and generates candidate regions; after the candidate regions are generated, their features are aligned by feature pooling; a global context extraction module is added at the highest layer of the feature extraction network, and the extracted features are fused with the original features by addition to obtain new features; finally, the new features are classified by fully connected layers to produce target categories and bounding boxes. By exploiting the rich semantic information of high-level features, the method fully extracts the scene information of the image and further enhances the feature representation, which increases the recognition accuracy of dense targets and, to some extent, of other targets as well, thereby improving overall target detection performance in remote sensing images.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
step 1: preprocessing and dividing the data set;
uniformly cutting the annotated images in the standard dataset into a plurality of 1024 × 1024 crops, with adjacent crops overlapping by 10% of the pixels in both width and height, and then randomly dividing the crops into a training set, a validation set and a test set, wherein the three sets have no intersection;
step 2: constructing a target detection deep neural network and training it with gradient descent and back propagation; the network first extracts features with a Res101 residual network, then generates candidate regions with the feature pyramid network FPN, then performs global context perception on the candidate regions, and finally obtains target categories and bounding boxes through feature pooling and fully connected layers, specifically as follows:
step 2-1: initializing Res101 model parameters by using a pre-training model;
step 2-2: inputting the 1024 × 1024 images into the Res101 residual network to extract features, generating six feature maps of different sizes, denoted C1-C6, with scales 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32 and 16 × 16 respectively;
step 2-3: performing global maximum pooling on feature map C6 to obtain scene features containing scene information; applying a 10 × 10 convolution and a 1 × 1 convolution to the scene features to obtain the global features;
step 2-4: taking the feature map C5 as a feature map P5 of a feature pyramid;
upsampling the feature map C5 and adding it to the 1 × 1-convolved feature map C4 to generate feature map P4 of the feature pyramid;
upsampling the feature map C4 and adding it to the 1 × 1-convolved feature map C3 to generate feature map P3 of the feature pyramid;
upsampling the feature map C3 and adding it to the 1 × 1-convolved feature map C2 to generate feature map P2 of the feature pyramid;
step 2-5: feature maps P2, P3, P4, and P5 of the feature pyramid are 256 in size, respectively2、1282、642、322(ii) a Generating anchor points anchorages for each feature image in the feature pyramid by using an area generation network, wherein the aspect ratio corresponding to each anchorage comprises three types, namely 1:2, 1:1 and 2: 1; thus, the feature pyramid generates 15 different anchors;
and generating a target candidate region from each anchor by the following formula:
$$x_1 = x_c - \frac{w}{2},\qquad y_1 = y_c - \frac{h}{2},\qquad x_2 = x_c + \frac{w}{2},\qquad y_2 = y_c + \frac{h}{2}\tag{1}$$
where (x_c, y_c) are the anchor point coordinates, (w, h) are the width and height of the target candidate region, and (x_1, y_1) and (x_2, y_2) are the coordinates of the upper-left and lower-right corners of the target candidate region;
calculating the intersection-over-union (IoU) of each target candidate region with the ground-truth labels: if IoU ≥ 0.7, the target candidate region is set as a positive sample; if IoU < 0.3, it is set as a negative sample; the resulting positive and negative samples serve as the labels for training the target candidate regions;
step 2-6: performing feature pooling on the target candidate regions, where the feature layer from which each target candidate region is pooled is computed with equation (2):
$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{1024}\right)\right\rfloor\tag{2}$$
where 1024 refers to the input image size and k_0 is a reference value;
since the target candidate region is generated from four different feature maps P2, P3, P4 and P5 through anchors, the feature pooling corresponds to 4 different feature layers;
the computed level k is clipped to the 4 available feature layers according to the following rule:
$$k \leftarrow \begin{cases} 2, & k < 2 \\ k, & 2 \le k \le 5 \\ 5, & k > 5 \end{cases}\tag{3}$$
after the target candidate regions in feature maps P2, P3, P4 and P5 are feature-pooled, each target candidate region outputs a 7 × 7 result, i.e., 49 features are extracted;
step 2-7: adding the 49 features obtained in step 2-6 to the global features obtained in step 2-3, and feeding the sum through two successive fully connected layers, whose outputs are the target category and the target bounding box;
and step 3: inputting the remote sensing image to be detected into the trained target detection deep neural network, and outputting the categories and bounding boxes of the targets.
Preferably, k_0 = 4.
The invention has the following beneficial effects:
the method fully extracts scene information of the image by utilizing the characteristic of rich high-level feature semantic information, further enhances feature representation, increases the recognition accuracy of dense targets, and also improves the recognition accuracy of other targets to a certain extent, thereby integrally improving the target detection performance in the remote sensing image.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a diagram of a target detection deep neural network according to the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention discloses a remote sensing image target detection method based on global context perception, designed to improve the accuracy of target recognition in remote sensing images by extracting global context features to enhance the feature representation.
As shown in fig. 1, a remote sensing image target detection method based on global context awareness includes the following steps:
step 1: the DOTA dataset is processed. Because the size of the original data image of the DOTA data set is not fixed and the labeled data of the test set is not disclosed, 1869 images with labels are uniformly cut into 1024 x 1024 images for the convenience of neural network training, and the width and the height of each image are respectively reserved with the overlapping rate of 10% of pixels in order to prevent the target from being lost due to image cutting during cutting. The 19219 images and the labeling information thereof are obtained after processing, and are randomly divided into 11531 images of the training set, 3844 images of the verification set and 3844 images of the test set, so that no intersection exists among the training set, the verification set and the test set in the image sample space.
Step 2: as shown in fig. 2, a target detection deep neural network is constructed and trained with gradient descent and back propagation; the network first extracts features with a Res101 residual network, then generates candidate regions with the feature pyramid network FPN, then performs global context perception on the candidate regions, and finally obtains target categories and bounding boxes through feature pooling and fully connected layers, specifically as follows:
step 2-1: because the neural network has a large number of parameters and is difficult to train from scratch, the Res101 model parameters are initialized from a pre-trained model before training;
step 2-2: training the neural network on the training dataset: inputting the 1024 × 1024 images into the Res101 residual network to extract features generates six feature maps of different sizes, denoted C1-C6, with scales 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32 and 16 × 16 respectively; C2, C3, C4 and C5 are selected to build the pyramid, while C1 is not used because it would occupy too much memory;
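A minimal sketch of steps 2-1 and 2-2 using torchvision's pre-trained ResNet101 follows. It is illustrative, not the patent's implementation; in particular, deriving C6 as a stride-2 max pooling of C5 is an assumption, since the patent does not say how C6 is produced.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet101, ResNet101_Weights

backbone = resnet101(weights=ResNet101_Weights.IMAGENET1K_V1)  # step 2-1: pre-trained init

def extract_features(x: torch.Tensor):
    """Step 2-2 sketch: C1..C6 from a (N, 3, 1024, 1024) input, with spatial
    sizes 512, 256, 128, 64, 32 and 16, matching the scales in the text."""
    c1 = backbone.relu(backbone.bn1(backbone.conv1(x)))   # 512 x 512
    c2 = backbone.layer1(backbone.maxpool(c1))            # 256 x 256
    c3 = backbone.layer2(c2)                              # 128 x 128
    c4 = backbone.layer3(c3)                              # 64 x 64
    c5 = backbone.layer4(c4)                              # 32 x 32
    c6 = F.max_pool2d(c5, kernel_size=2)                  # 16 x 16 (assumed)
    return c1, c2, c3, c4, c5, c6
```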
step 2-3: performing global maximum pooling on feature map C6 to obtain scene features containing scene information; applying a 10 × 10 convolution and a 1 × 1 convolution to the scene features to obtain the global features;
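One reading of step 2-3 that is consistent with step 2-7 is sketched below: a stride-1 max pooling keeps C6 at 16 × 16, the 10 × 10 convolution then maps 16 × 16 to 7 × 7 so the global features can later be added to the 7 × 7 pooled region features, and the 1 × 1 convolution sets the channel count. The pooling parameters and channel widths are assumptions, not figures from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContext(nn.Module):
    """Sketch of the global context extraction module of step 2-3."""
    def __init__(self, in_ch: int = 2048, out_ch: int = 256):
        super().__init__()
        self.conv10 = nn.Conv2d(in_ch, out_ch, kernel_size=10)  # 16x16 -> 7x7
        self.conv1 = nn.Conv2d(out_ch, out_ch, kernel_size=1)

    def forward(self, c6: torch.Tensor) -> torch.Tensor:
        # Max pooling over C6; stride 1 with padding is assumed, preserving 16 x 16.
        scene = F.max_pool2d(c6, kernel_size=3, stride=1, padding=1)
        return self.conv1(self.conv10(scene))  # (N, out_ch, 7, 7) global features
```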
step 2-4: taking the feature map C5 as a feature map P5 of a feature pyramid;
upsampling the feature map C5 and adding it to the 1 × 1-convolved feature map C4 to generate feature map P4 of the feature pyramid;
upsampling the feature map C4 and adding it to the 1 × 1-convolved feature map C3 to generate feature map P3 of the feature pyramid;
upsampling the feature map C3 and adding it to the 1 × 1-convolved feature map C2 to generate feature map P2 of the feature pyramid;
the 1 × 1 convolution ensures that the feature maps being added have the same number of channels;
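A sketch of the top-down pathway of step 2-4 follows. It uses the standard FPN form, upsampling the P-maps and applying 1 × 1 lateral convolutions; note that the text takes C5 itself as P5, whereas here a lateral 1 × 1 convolution is also applied to C5 so that all pyramid levels share one channel width, and the 256-channel width follows common FPN practice rather than a figure from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeaturePyramid(nn.Module):
    """Sketch of step 2-4: build P2..P5 by upsampling and addition."""
    def __init__(self, chans=(256, 512, 1024, 2048), out_ch: int = 256):
        super().__init__()
        # 1x1 lateral convolutions equalize channel counts before addition.
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in chans)

    def forward(self, c2, c3, c4, c5):
        p5 = self.lateral[3](c5)
        p4 = self.lateral[2](c4) + F.interpolate(p5, scale_factor=2)  # nearest upsample
        p3 = self.lateral[1](c3) + F.interpolate(p4, scale_factor=2)
        p2 = self.lateral[0](c2) + F.interpolate(p3, scale_factor=2)
        return p2, p3, p4, p5
```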
step 2-5: feature maps P2, P3, P4 and P5 of the feature pyramid have sizes 256², 128², 64² and 32² respectively; a region proposal network generates anchor points (anchors) on each feature map of the feature pyramid, each anchor taking one of three aspect ratios, 1:2, 1:1 and 2:1; the feature pyramid thus generates 15 different anchors;
and generating a target candidate region from each anchor by the following formula:
$$x_1 = x_c - \frac{w}{2},\qquad y_1 = y_c - \frac{h}{2},\qquad x_2 = x_c + \frac{w}{2},\qquad y_2 = y_c + \frac{h}{2}\tag{1}$$
where (x_c, y_c) are the anchor point coordinates, (w, h) are the width and height of the target candidate region, and (x_1, y_1) and (x_2, y_2) are the coordinates of the upper-left and lower-right corners of the target candidate region;
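Equation (1) amounts to the following center-to-corner conversion; the function name is hypothetical, the arithmetic is a direct transcription.

```python
def anchor_to_box(xc: float, yc: float, w: float, h: float):
    """Equation (1): corners of the candidate region centered at (xc, yc)."""
    x1, y1 = xc - w / 2.0, yc - h / 2.0  # upper-left corner
    x2, y2 = xc + w / 2.0, yc + h / 2.0  # lower-right corner
    return x1, y1, x2, y2
```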
calculating the intersection-over-union (IoU, Intersection over Union) of each target candidate region with the ground-truth labels: if IoU ≥ 0.7, the target candidate region is set as a positive sample; if IoU < 0.3, it is set as a negative sample; the resulting positive and negative samples serve as the labels (one per anchor) for training the target candidate regions;
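The IoU computation and the 0.7/0.3 sampling rule can be sketched as follows; treating candidates with IoU between the two thresholds as ignored is the usual convention, which the patent does not state explicitly.

```python
def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def label_candidate(iou: float) -> int:
    """Step 2-5 sampling rule: IoU >= 0.7 positive, IoU < 0.3 negative."""
    if iou >= 0.7:
        return 1    # positive sample
    if iou < 0.3:
        return 0    # negative sample
    return -1       # in-between: ignored during training (assumed convention)
```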
step 2-6: performing feature pooling on the target candidate regions, where the feature layer from which each target candidate region is pooled is computed with equation (2):
$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{1024}\right)\right\rfloor\tag{2}$$
where 1024 refers to the input image size and k_0 is a reference value, generally taken as 4;
since the target candidate region is generated from four different feature maps P2, P3, P4 and P5 through anchors, the feature pooling corresponds to 4 different feature layers;
the computed level k is clipped to the 4 available feature layers according to the following rule:
$$k \leftarrow \begin{cases} 2, & k < 2 \\ k, & 2 \le k \le 5 \\ 5, & k > 5 \end{cases}\tag{3}$$
after the target candidate regions in feature maps P2, P3, P4 and P5 are feature-pooled, each target candidate region outputs a 7 × 7 result, i.e., 49 features are extracted;
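Equation (2) together with the clipping rule of equation (3) can be written compactly as below; the clamp to P2-P5 reconstructs the value rules following the standard FPN level-assignment convention.

```python
import math

def pooling_level(w: float, h: float, k0: int = 4, img_size: int = 1024) -> int:
    """Equations (2) and (3): pick the pyramid level P_k for a w x h
    candidate region, clipped to the four available layers P2..P5."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / img_size))
    return min(5, max(2, k))
```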
step 2-7: adding the 49 features obtained in step 2-6 to the global features obtained in step 2-3, and feeding the sum through two successive fully connected layers, whose outputs are the target category and the target bounding box;
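A sketch of step 2-7 follows: the 7 × 7 pooled region features and the 7 × 7 global features are fused by addition, flattened, and passed through two fully connected layers that emit class scores and box regressions. The 256-channel and 1024-unit widths are assumptions, and num_classes = 16 assumes the 15 DOTA categories plus background.

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Sketch of step 2-7: fuse global context into each region by addition,
    then classify and regress with two fully connected layers."""
    def __init__(self, ch: int = 256, hidden: int = 1024, num_classes: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(ch * 7 * 7, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.cls = nn.Linear(hidden, num_classes)      # target category
        self.box = nn.Linear(hidden, 4 * num_classes)  # target bounding box

    def forward(self, roi_feats: torch.Tensor, global_feats: torch.Tensor):
        # Both inputs are (N, ch, 7, 7); element-wise fusion, then flatten.
        x = (roi_feats + global_feats).flatten(1)
        x = torch.relu(self.fc2(torch.relu(self.fc1(x))))
        return self.cls(x), self.box(x)
```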
and step 3: inputting the remote sensing image to be detected into the trained target detection deep neural network, and outputting the categories and bounding boxes of the targets.

Claims (2)

1. A remote sensing image target detection method based on global context sensing is characterized by comprising the following steps:
step 1: preprocessing and dividing the data set;
uniformly cutting the annotated images in the standard dataset into a plurality of 1024 × 1024 crops, with adjacent crops overlapping by 10% of the pixels in both width and height, and randomly dividing the crops into a training set, a validation set and a test set, wherein the three sets have no intersection;
step 2: constructing a target detection deep neural network and training it with gradient descent and back propagation; the network first extracts features with a Res101 residual network, then generates candidate regions with the feature pyramid network FPN, then performs global context perception on the candidate regions, and finally obtains target categories and bounding boxes through feature pooling and fully connected layers, specifically as follows:
step 2-1: initializing Res101 model parameters by using a pre-training model;
step 2-2: inputting the 1024 × 1024 images into the Res101 residual network to extract features, generating six feature maps of different sizes, denoted C1-C6, with scales 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32 and 16 × 16 respectively;
step 2-3: performing global maximum pooling on feature map C6 to obtain scene features containing scene information; applying a 10 × 10 convolution and a 1 × 1 convolution to the scene features to obtain the global features;
step 2-4: taking the feature map C5 as a feature map P5 of a feature pyramid;
upsampling the feature map C5 and adding it to the 1 × 1-convolved feature map C4 to generate feature map P4 of the feature pyramid;
upsampling the feature map C4 and adding it to the 1 × 1-convolved feature map C3 to generate feature map P3 of the feature pyramid;
upsampling the feature map C3 and adding it to the 1 × 1-convolved feature map C2 to generate feature map P2 of the feature pyramid;
step 2-5: feature maps P2, P3, P4 and P5 of the feature pyramid have sizes 256², 128², 64² and 32² respectively; a region proposal network generates anchor points (anchors) on each feature map of the feature pyramid, each anchor taking one of three aspect ratios, 1:2, 1:1 and 2:1; the feature pyramid thus generates 15 different anchors;
and generating a target candidate region from each anchor by the following formula:
$$x_1 = x_c - \frac{w}{2},\qquad y_1 = y_c - \frac{h}{2},\qquad x_2 = x_c + \frac{w}{2},\qquad y_2 = y_c + \frac{h}{2}\tag{1}$$
where (x_c, y_c) are the anchor point coordinates, (w, h) are the width and height of the target candidate region, and (x_1, y_1) and (x_2, y_2) are the coordinates of the upper-left and lower-right corners of the target candidate region;
calculating the intersection-over-union (IoU) of each target candidate region with the ground-truth labels: if IoU ≥ 0.7, the target candidate region is set as a positive sample; if IoU < 0.3, it is set as a negative sample; the resulting positive and negative samples serve as the labels for training the target candidate regions;
step 2-6: performing feature pooling on the target candidate regions, where the feature layer from which each target candidate region is pooled is computed with equation (2):
$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{1024}\right)\right\rfloor\tag{2}$$
where 1024 refers to the input image size and k_0 is a reference value;
since the target candidate region is generated from four different feature maps P2, P3, P4 and P5 through anchors, the feature pooling corresponds to 4 different feature layers;
the computed level k is clipped to the 4 available feature layers according to the following rule:
$$k \leftarrow \begin{cases} 2, & k < 2 \\ k, & 2 \le k \le 5 \\ 5, & k > 5 \end{cases}\tag{3}$$
after the target candidate regions in feature maps P2, P3, P4 and P5 are feature-pooled, each target candidate region outputs a 7 × 7 result, i.e., 49 features are extracted;
step 2-7: adding the 49 features obtained in step 2-6 to the global features obtained in step 2-3, and feeding the sum through two successive fully connected layers, whose outputs are the target category and the target bounding box;
and step 3: inputting the remote sensing image to be detected into the trained target detection deep neural network, and outputting the categories and bounding boxes of the targets.
2. The remote sensing image target detection method based on global context awareness according to claim 1, characterized in that k_0 = 4.
CN202210126106.0A 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness Active CN114519819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210126106.0A CN114519819B (en) 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210126106.0A CN114519819B (en) 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness

Publications (2)

Publication Number Publication Date
CN114519819A 2022-05-20
CN114519819B 2024-04-02

Family

ID=81596492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210126106.0A Active CN114519819B (en) 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness

Country Status (1)

Country Link
CN (1) CN114519819B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018214195A1 (en) * 2017-05-25 2018-11-29 中国矿业大学 Remote sensing imaging bridge detection method based on convolutional neural network
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN111368775A (en) * 2020-03-13 2020-07-03 西北工业大学 Complex scene dense target detection method based on local context sensing
CN112070729A (en) * 2020-08-26 2020-12-11 西安交通大学 Anchor-free remote sensing image target detection method and system based on scene enhancement
CN112766409A (en) * 2021-02-01 2021-05-07 西北工业大学 Feature fusion method for remote sensing image target detection
CN113111740A (en) * 2021-03-27 2021-07-13 西北工业大学 Characteristic weaving method for remote sensing image target detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIAO Wenfan; SHEN Li; DAI Yanshuai; CAO Yungang: "Automatic building identification in high-resolution imagery combining dilated convolutional residual networks and pyramid pooling representation", Geography and Geo-Information Science, no. 05, 27 August 2018 (2018-08-27) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937672A (en) * 2022-11-22 2023-04-07 南京林业大学 Remote sensing rotating target detection method based on deep neural network
CN116486077A (en) * 2023-04-04 2023-07-25 中国科学院地理科学与资源研究所 Remote sensing image semantic segmentation model sample set generation method and device
CN116486077B (en) * 2023-04-04 2024-04-30 中国科学院地理科学与资源研究所 Remote sensing image semantic segmentation model sample set generation method and device

Also Published As

Publication number Publication date
CN114519819B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN112200161B (en) Face recognition detection method based on mixed attention mechanism
CN110622213B (en) System and method for depth localization and segmentation using 3D semantic maps
CN110188705B (en) Remote traffic sign detection and identification method suitable for vehicle-mounted system
Matzen et al. Nyc3dcars: A dataset of 3d vehicles in geographic context
CN112183203B (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN112084869B (en) Compact quadrilateral representation-based building target detection method
CN111461039B (en) Landmark identification method based on multi-scale feature fusion
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN113688836A (en) Real-time road image semantic segmentation method and system based on deep learning
CN112287983B (en) Remote sensing image target extraction system and method based on deep learning
CN112560675A (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN112766409A (en) Feature fusion method for remote sensing image target detection
Zang et al. Traffic lane detection using fully convolutional neural network
CN111368775A (en) Complex scene dense target detection method based on local context sensing
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN117152414A (en) Target detection method and system based on scale attention auxiliary learning method
Meng et al. A block object detection method based on feature fusion networks for autonomous vehicles
CN113111740A (en) Characteristic weaving method for remote sensing image target detection
CN112597996A (en) Task-driven natural scene-based traffic sign significance detection method
CN114494893B (en) Remote sensing image feature extraction method based on semantic reuse context feature pyramid
CN112446292B (en) 2D image salient object detection method and system
Li et al. Learning to holistically detect bridges from large-size vhr remote sensing imagery
CN114550016A (en) Unmanned aerial vehicle positioning method and system based on context information perception
Wang et al. Extraction of main urban roads from high resolution satellite images by machine learning
WO2024000728A1 (en) Monocular three-dimensional plane recovery method, device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant