CN112766184B - Remote sensing target detection method based on multi-level feature selection convolutional neural network - Google Patents

Remote sensing target detection method based on multi-level feature selection convolutional neural network

Info

Publication number
CN112766184B
Authority
CN
China
Prior art keywords
neural network
convolutional neural
training
image
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110090408.2A
Other languages
Chinese (zh)
Other versions
CN112766184A (en)
Inventor
蒋晨
郭成昊
夏思宇
罗子娟
李友江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110090408.2A priority Critical patent/CN112766184B/en
Publication of CN112766184A publication Critical patent/CN112766184A/en
Application granted granted Critical
Publication of CN112766184B publication Critical patent/CN112766184B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing target detection method based on a multi-level feature selection convolutional neural network. The method first constructs a convolutional neural network model, sets the structural parameters of the constructed network, and initializes the training parameters; it then preprocesses the training images, converts the label format, and applies data enhancement to the preprocessed images and converted labels; the convolutional neural network model is then trained to obtain the network weights and biases; finally, the test image is input into the trained model to obtain localization and classification results. Because target recognition based on horizontal boxes cannot correctly localize densely arranged rotated objects, and because assigning features purely by target size is too simplistic, the multi-level feature selection convolutional neural network enables recognition and classification of objects in arbitrary orientations and greatly improves accuracy.

Description

Remote sensing target detection method based on multi-level feature selection convolutional neural network
Technical Field
The invention belongs to the field of computer vision remote sensing detection, and relates to a remote sensing target detection method based on a multi-level feature selection convolutional neural network.
Background
After more than thirty years of development, modern remote sensing technology covers spatial information technologies such as Remote Sensing (RS), Geographic Information Systems (GIS), and the Global Positioning System (GPS), and has gradually penetrated many aspects of the national economy, social life, and national security; its level of development and application has become one of the important indicators of comprehensive national strength. To capture as much remote sensing information as possible, remote sensing images generally have high spatial resolution and large size, so accurately extracting valuable target information from large volumes of remote sensing imagery is a time-consuming and labor-intensive task. How to automatically extract valuable target information from massive remote sensing data therefore has important research significance and application value.
To meet the target detection requirements of different application tasks, researchers have designed descriptors that extract local texture, edge, and similar features of targets, and on this basis have built detectors for different targets. However, because remote sensing targets have very complex local feature information and the texture features of some targets are highly similar to the background, target detectors based on traditional low-level feature descriptors are not robust in remote sensing detection tasks, and their detection results are unsatisfactory.
With the introduction of convolutional neural networks, target detection technology has also made great breakthroughs, with especially rapid application in fields such as text detection, aerial imagery, and security. Current mainstream detectors can be divided by output form into horizontal-box and rotated-box detection. A horizontal box is usually denoted (x, y, w, h), where (x, y) are the coordinates of its center point and w and h are its width and height. However, it is difficult for a horizontal box to accurately localize objects arranged in arbitrary directions, especially in aerial images: because the viewpoint in remote sensing images is essentially a high-altitude overhead view, target orientations are uncertain and can appear in any direction. When such objects, for example cars and ships, are densely arranged, a horizontal box is likely to enclose a large amount of background noise, so arbitrary-orientation target detection must output a rotated box to localize the objects better.
With the continued development of convolutional neural networks, deep learning algorithms are iterated and updated and network model structures are continuously adjusted and optimized; in particular, feature extraction and feature selection still leave considerable room for improvement.
Disclosure of Invention
The aim of the invention: in view of the problems and shortcomings of existing methods, the invention provides a remote sensing target detection method based on a multi-level feature selection convolutional neural network. With this method, arbitrary-orientation target detection in remote sensing images can be achieved using a multi-level feature selection convolutional neural network model, effectively improving target localization and classification.
Technical scheme: to achieve the above aim, the invention adopts the following technical scheme. A remote sensing target detection method based on a multi-level feature selection convolutional neural network comprises the following steps:
(1) Building a convolutional neural network model, setting structural parameters of the built convolutional neural network, and initializing training parameters;
(2) Preprocessing a training image and converting a label format, and then performing data enhancement on the preprocessed training image and the converted label format;
(3) Performing convolutional neural network model training by using the data obtained in the step (2) to obtain network weights and biases;
(4) Inputting the test image into the neural network model trained in the step (3) to obtain positioning and classifying results.
In step (1), the structure of the constructed convolutional neural network model is as follows:
the first part is a backbone network module, in which features are extracted from the input image through a backbone network consisting of a bottom-up ResNet50 network and a top-down feature pyramid;
the second part is a path aggregation module, in which a bottom-up path aggregation branch is laterally connected to the backbone network to obtain new feature maps;
the third part is a feature selection module, in which the new feature maps pass through a region proposal network (RPN) to obtain target candidate regions (proposals) of different scales; the candidate regions and the ground truth are mapped into feature maps of different levels, the IoU loss is calculated on each level, and the level with the minimum loss is selected for the subsequent RoIAlign (region feature alignment);
the final output layer uses the proposals to extract features from the feature maps and feeds them to the subsequent fully connected and softmax networks for regression and classification.
In step (1), the network configuration parameter setting and training parameter initializing are as follows:
(3.1) setting the step length of a convolution layer in the backbone network to be 1, setting the step length of a maximum pooling layer to be 2, and setting the number of output channels to be 256;
(3.2) the weight of the convolution layer is initialized to truncated normal distributed noise with a mean value of 0 and a standard deviation of 0.1, and all biases in the network are initialized to be constant 0.1;
(3.3) the training batch size is set to 2, i.e., 2 pictures are fed into the convolutional neural network each time; the total number of training steps is 80000; the initial learning rate is set to 0.0025, and after 56000 iterations the learning rate is adjusted to 2.5×10⁻⁴.
In step (2), the training images are preprocessed and the label format is converted as follows:
(2.1) normalizing the image size to a size of 1024 x 1024;
(2.2) converting the labels corresponding to the images, and uniformly converting the labels into a coco format required by training;
(2.3) converting vertex coordinates in the form (x1, y1, x2, y2, x3, y3, x4, y4) into (x, y, w, h, α1, α2, α3, α4), wherein (x1, y1, x2, y2, x3, y3, x4, y4) are the four vertex coordinates of the quadrilateral annotation, (x, y) are the center point coordinates of its minimum bounding rectangle, (w, h) are the width and height of the minimum bounding rectangle, and (α1, α2, α3, α4) are the offsets of the target on each side of the circumscribed rectangle.
In step (2), the data enhancement method for the training images is as follows: the training images are randomly flipped and rotated for image enhancement.
In step (3), the loss function during model training consists of two parts, a classification loss and a regression loss. The regression loss L_reg itself consists of two parts, a horizontal-box loss and a sliding-vertex loss, i.e. L_reg = L_h + L_r, where L_h is the horizontal-box loss, computed with the smooth L1 function, and L_r is the sliding-vertex loss.
Here α_i = (α1, α2, α3, α4) are the offsets of the target on each side of the circumscribed rectangle, l1 and l2 are the set thresholds used to determine in which direction the target box regresses, and α̂_i denotes the ground-truth value of α_i.
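The exact expression for the sliding-vertex loss is given in the figures of the specification and is not reproduced here; the following is a minimal, non-authoritative sketch that assumes L_r, like L_h, is a smooth L1 penalty on the four sliding-vertex offsets. The function and argument names are illustrative only.

```python
# Hedged sketch only: assumes a smooth-L1 penalty on the sliding-vertex offsets,
# matching the smooth-L1 horizontal-box loss L_h described in the text.
import torch
import torch.nn.functional as F

def regression_loss(pred_hbox, gt_hbox, pred_alpha, gt_alpha):
    """L_reg = L_h + L_r (assumed form).

    pred_hbox, gt_hbox:   (N, 4) horizontal boxes (x, y, w, h)
    pred_alpha, gt_alpha: (N, 4) sliding-vertex offsets (alpha1..alpha4)
    """
    l_h = F.smooth_l1_loss(pred_hbox, gt_hbox)    # horizontal-box loss (smooth L1, per the text)
    l_r = F.smooth_l1_loss(pred_alpha, gt_alpha)  # sliding-vertex loss (assumed smooth L1)
    return l_h + l_r
```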
Beneficial effects: compared with existing remote sensing detection methods, the invention has the following beneficial effects:
1. the invention uses a multi-level feature selection convolutional neural network model for target detection in remote sensing images, can classify and localize dense objects of multiple categories and orientations, and achieves high recognition accuracy with good real-time performance;
2. by randomly flipping and rotating the images, the image enhancement method improves the generalization capability of the convolutional neural network model and improves detection accuracy;
3. the invention optimizes the structure of the conventional network by adding the path aggregation module and the feature selection module and by improving the loss function, achieving better detection and recognition results.
Drawings
FIG. 1 is a flow chart of a remote sensing target detection method based on a multi-level feature selection convolutional neural network of the present invention;
FIG. 2 is a block diagram of a convolutional neural network employed in the present invention;
FIG. 3 is a schematic diagram of a feature selection module according to the present invention;
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
The invention discloses a remote sensing target detection method based on a multi-level feature selection convolutional neural network. A flow chart of the method is shown in figure 1, and the method comprises the following steps:
(1) Building convolutional neural network model
Fig. 2 shows the structure of the convolutional neural network adopted by the invention. For a remote sensing image, the convolutional neural network constructed by the invention is as follows:
the first part is a backbone network module, and the input image is subjected to the extraction of features by a backbone network consisting of a ResNet50 network from bottom to top and a feature pyramid which is transversely connected from top to bottom. Depending on the size of the feature map, it can be divided into 5 stages: stage1, stage2, stage3, stage4 and stage5. Wherein the last layer outputs conv2, conv3, conv4 and conv5 of each of stage2 to stage5 are respectively defined as { C ] 2 ,C 3 ,C 4 ,C 5 The step sizes relative to the original picture are {4,8,16,32} respectively, and the conv1 feature map of stage1 is not used for memory reasons. The nearest neighbor upsampling is used in the feature pyramid structure, so that on one hand, the calculation is simpler, and on the other hand, the number of training parameters can be reduced. The cross-connect is to fuse the up-sampled result with the same size convolution feature map. Specifically { C } 2 ,C 3 ,C 4 ,C 5 Each layer in the sequence is subjected to convolution operation, the number of channels is reduced, all output channels are 256 channels, and then the channels are added and fused with the up-sampled characteristic diagram. The fused feature map also needs to be processed with a 3*3 convolution operation after fusion to eliminate the aliasing effects of upsampling.
The second part is a path aggregation module: a bottom-up path aggregation branch is laterally connected to the backbone network. A shallow, high-resolution feature map N_i is fused with the lower-resolution feature map P_{i+1}, obtained via the lateral connection after a 1×1 convolution, to produce a new feature map N_{i+1}. Specifically, N_i first passes through a 3×3 convolution layer with stride 2 to reduce its spatial resolution; each element of the downsampled feature map is then added element-wise to the corresponding element of P_{i+1} after the 1×1 convolution. The fused feature map then passes through a 3×3 convolution layer with stride 1 to obtain the new feature map N_{i+1}.
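A single bottom-up aggregation step as described above might be sketched as follows; the module and variable names are illustrative assumptions.

```python
import torch.nn as nn

class PathAggregationStep(nn.Module):
    """One bottom-up step: fuse shallow N_i with the laterally connected P_{i+1}."""
    def __init__(self, channels=256):
        super().__init__()
        self.downsample = nn.Conv2d(channels, channels, 3, stride=2, padding=1)  # 3x3, stride 2, halves N_i resolution
        self.lateral = nn.Conv2d(channels, channels, 1)                          # 1x1 convolution on P_{i+1}
        self.fuse = nn.Conv2d(channels, channels, 3, stride=1, padding=1)        # 3x3, stride 1, after fusion

    def forward(self, n_i, p_next):
        # element-wise addition of the downsampled N_i and the 1x1-convolved P_{i+1}, then smoothing
        return self.fuse(self.downsample(n_i) + self.lateral(p_next))
```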
The third part is a feature selection module: the new feature maps pass through a region proposal network (RPN) to obtain target candidate regions (proposals) of different scales, so the problem to be solved is how to assign ROIs of different scales to the corresponding feature levels. The most intuitive assignment strategy is to map each ROI into a feature map of a particular resolution according to its size. For an ROI of scale w, h, the level k it is mapped to is k = ⌊k₀ + log₂(√(wh)/224)⌋, where k₀ is the base level. In this way the feature pyramid network maps a target candidate region into feature maps of different scales according to its size, assigning small candidate regions to shallow feature maps and large candidate regions to deep feature maps. Although simple and effective, this approach still has limitations, and the assignment result may not be optimal. In the invention, the candidate regions (ROIs) and the ground truth are mapped into feature maps of different levels, the IoU loss is computed on each level, and the level with the minimum loss is selected for the subsequent RoIAlign.
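The level-selection idea can be illustrated with the following sketch, which, for one proposal, evaluates every pyramid level and keeps the level whose predicted box has the smallest IoU loss against the matched ground truth; the helper callables roi_align and box_head, and all names, are hypothetical.

```python
import torch

def iou_loss(pred_box, gt_box):
    """1 - IoU between two axis-aligned boxes given as (x1, y1, x2, y2) tensors."""
    ix1, iy1 = torch.max(pred_box[0], gt_box[0]), torch.max(pred_box[1], gt_box[1])
    ix2, iy2 = torch.min(pred_box[2], gt_box[2]), torch.min(pred_box[3], gt_box[3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred_box[2] - pred_box[0]) * (pred_box[3] - pred_box[1])
    area_g = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    return 1.0 - inter / (area_p + area_g - inter + 1e-6)

def select_level(proposal, gt_box, pyramid, roi_align, box_head):
    """Return the index of the pyramid level whose prediction minimizes the IoU loss."""
    losses = []
    for feat in pyramid:                      # map the proposal into every level
        roi_feat = roi_align(feat, proposal)  # RoIAlign on that level
        pred_box = box_head(roi_feat)         # predicted box refinement for the proposal
        losses.append(iou_loss(pred_box, gt_box))
    return int(torch.stack(losses).argmin())  # the level with minimum loss is kept
```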
The final output layer uses the proposals to extract proposal features from the feature maps and feeds them to the subsequent fully connected and softmax networks for regression and classification.
(2) Training data acquisition
Before the remote sensing images are used for model training, the data set is first preprocessed. The data preprocessing method is as follows:
(2.1) normalizing the image size to a size of 1024 x 1024;
(2.2) converting the labels corresponding to the images, and uniformly converting the labels into a coco format required by training;
(2.3) converting vertex coordinates in the form (x1, y1, x2, y2, x3, y3, x4, y4) into the sliding-vertex representation (x, y, w, h, α1, α2, α3, α4), where (x, y, w, h) are the center point coordinates and the width and height of the minimum bounding rectangle, and (α1, α2, α3, α4) are the offsets of the target on each side of the bounding rectangle.
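A possible reading of the conversion in (2.3) is sketched below. The text does not spell out the offset convention; the sketch assumes that, after ordering, vertices 1–4 lie on the top, right, bottom and left sides of the minimum horizontal bounding rectangle and that each α is the vertex's normalized offset along its side.

```python
# Hedged sketch of the (x1..y4) -> (x, y, w, h, alpha1..alpha4) conversion.
# Assumption (not stated in the text): vertex 1 lies on the top side, vertex 2 on the
# right, vertex 3 on the bottom and vertex 4 on the left side of the bounding rectangle.
def to_sliding_vertex(quad):
    x1, y1, x2, y2, x3, y3, x4, y4 = quad
    xs, ys = (x1, x2, x3, x4), (y1, y2, y3, y4)
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    w, h = xmax - xmin, ymax - ymin
    cx, cy = xmin + w / 2.0, ymin + h / 2.0   # center of the minimum bounding rectangle
    a1 = (x1 - xmin) / w                      # offset of vertex 1 along the top side
    a2 = (y2 - ymin) / h                      # offset of vertex 2 along the right side
    a3 = (xmax - x3) / w                      # offset of vertex 3 along the bottom side
    a4 = (ymax - y4) / h                      # offset of vertex 4 along the left side
    return cx, cy, w, h, a1, a2, a3, a4
```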
(3) The network structure parameter setting and training parameter initializing are as follows:
(3.1) the stride of the convolution layers in the backbone network is set to 1, the stride of the max pooling layer is set to 2, and the number of output channels is 256; the multi-scale features {P2, P3, P4, P5} in the network use anchor boxes with scales of {32×32, 64×64, 128×128, 256×256} pixels respectively, and each anchor box uses the aspect ratios {1:1, 1:2, 2:1};
(3.2) the weight of the convolution layer is initialized to truncated normal distributed noise with a mean value of 0 and a standard deviation of 0.1, and all biases in the network are initialized to be constant 0.1;
(3.3) the training batch size is set to 2, i.e., 2 pictures are fed into the convolutional neural network per training step, and the average loss over all samples in the batch is then computed. The total number of training steps is 80000; stochastic gradient descent is selected for model optimization, the initial learning rate is set to 0.0025, and after 56000 iterations the learning rate is adjusted to 2.5×10⁻⁴.
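For concreteness, the hyper-parameters in (3.1)–(3.3) could be collected as in the following sketch; the numeric values come from the text above, while the optimizer and scheduler classes are framework-level assumptions.

```python
import torch

def make_optimizer_and_scheduler(model):
    """SGD with the schedule described above: lr 0.0025, dropped to 2.5e-4 at iteration 56000."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.0025)
    # multiply the learning rate by 0.1 once 56000 iterations have been reached
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[56000], gamma=0.1)
    return optimizer, scheduler

TRAIN_CFG = dict(
    batch_size=2,       # 2 pictures per training step
    total_steps=80000,  # total number of training steps
    image_size=1024,    # images normalized to 1024 x 1024 in step (2.1)
)
```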
(4) Image enhancement
Because remote sensing data contains many categories with unbalanced numbers of samples, and because the multi-directional nature of targets in particular requires more training samples, the training data is randomly flipped and rotated. This produces a larger amount of training data, avoids the overfitting caused by too little training data, and improves the generalization capability of the model.
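A minimal sketch of this augmentation step is given below; restricting rotations to multiples of 90 degrees and the 0.5 flip probability are assumptions, and the corresponding transformation of the annotation boxes is omitted.

```python
import random
from PIL import Image

def augment(image: Image.Image) -> Image.Image:
    """Randomly flip and rotate a training image (the annotated boxes must be
    transformed with the same parameters; that bookkeeping is omitted here)."""
    if random.random() < 0.5:
        image = image.transpose(Image.FLIP_LEFT_RIGHT)  # random horizontal flip
    angle = random.choice([0, 90, 180, 270])            # random rotation (assumed 90-degree steps)
    if angle:
        image = image.rotate(angle, expand=True)
    return image
```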
(5) Model training
The training data obtained in step (2) are input into the network for training, and the resulting network weights and biases are stored.
(6) Predicting classification results
After the preprocessing of step (2), the remote sensing image to be detected is input into the model; the model outputs the probability that the image belongs to each target class and the final coordinates of the targets, and the class with the maximum probability is selected, thereby completing localization and classification.
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limiting the invention itself. Various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. A remote sensing target detection method based on multi-level feature selection convolutional neural network is characterized in that: the method comprises the following steps:
(1) Building a convolutional neural network model, setting structural parameters of the built convolutional neural network, and initializing training parameters; in the step (1), the structure of building the convolutional neural network model is as follows:
the first part is a backbone network module, in which features are extracted from the input image through a backbone network consisting of a bottom-up ResNet50 network and a top-down feature pyramid;
the second part is a path aggregation module, in which a bottom-up path aggregation branch is laterally connected to the backbone network to obtain new feature maps;
the third part is a feature selection module, in which the new feature maps pass through a region proposal network (RPN) to obtain target candidate regions (proposals) of different scales; the candidate regions and the ground truth are mapped into feature maps of different levels, the IoU loss is calculated on each level, and the level with the minimum loss is selected for the subsequent RoIAlign (region feature alignment);
the final output layer uses the proposals to extract features from the feature maps and feeds them to the subsequent fully connected and softmax networks for regression and classification;
(2) Preprocessing a training image and converting a label format, and then performing data enhancement on the preprocessed training image and the converted label format;
(3) Performing convolutional neural network model training by using the data obtained in the step (2) to obtain network weights and biases;
(4) Inputting the test image into the neural network model trained in the step (3) to obtain positioning and classifying results.
2. The method for detecting a remote sensing target based on a multi-level feature selection convolutional neural network according to claim 1, wherein: in step (1), the network structure parameters are set and the training parameters are initialized as follows:
(3.1) setting the step length of a convolution layer in the backbone network to be 1, setting the step length of a maximum pooling layer to be 2, and setting the number of output channels to be 256;
(3.2) the weight of the convolution layer is initialized to truncated normal distributed noise with a mean value of 0 and a standard deviation of 0.1, and all biases in the network are initialized to be constant 0.1;
(3.3) the training batch size is set to 2, i.e., 2 pictures are fed into the convolutional neural network each time; the total number of training steps is 80000; the initial learning rate is set to 0.0025, and after 56000 iterations the learning rate is adjusted to 2.5×10⁻⁴.
3. The method for detecting a remote sensing target based on a multi-level feature selection convolutional neural network according to claim 1, wherein: in step (2), the training images are preprocessed and the label format is converted as follows:
(2.1) normalizing the image size to a size of 1024 x 1024;
(2.2) converting the labels corresponding to the images, and uniformly converting the labels into a coco format required by training;
(2.3) converting vertex coordinates in the form (x1, y1, x2, y2, x3, y3, x4, y4) into (x, y, w, h, α1, α2, α3, α4), wherein (x1, y1, x2, y2, x3, y3, x4, y4) are the four vertex coordinates of the quadrilateral annotation, (x, y) are the center point coordinates of its minimum bounding rectangle, (w, h) are the width and height of the minimum bounding rectangle, and (α1, α2, α3, α4) are the offsets of the target on each side of the circumscribed rectangle.
4. The method for detecting a remote sensing target based on a multi-level feature selection convolutional neural network according to claim 1, wherein: in step (2), the data enhancement method for the training images is as follows: the training images are randomly flipped and rotated for image enhancement.
5. The method for detecting a remote sensing target based on a multi-level feature selection convolutional neural network according to claim 1, wherein: in step (3), the loss function during model training consists of two parts, a classification loss and a regression loss. The regression loss L_reg itself consists of two parts, a horizontal-box loss and a sliding-vertex loss, i.e. L_reg = L_h + L_r, where L_h is the horizontal-box loss, computed with the smooth L1 function, and L_r is the sliding-vertex loss.
Here α_i = (α1, α2, α3, α4) are the offsets of the target on each side of the circumscribed rectangle, l1 and l2 are the set thresholds used to determine in which direction the target box regresses, and α̂_i denotes the ground-truth value of α_i.
CN202110090408.2A 2021-01-22 2021-01-22 Remote sensing target detection method based on multi-level feature selection convolutional neural network Active CN112766184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110090408.2A CN112766184B (en) 2021-01-22 2021-01-22 Remote sensing target detection method based on multi-level feature selection convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110090408.2A CN112766184B (en) 2021-01-22 2021-01-22 Remote sensing target detection method based on multi-level feature selection convolutional neural network

Publications (2)

Publication Number Publication Date
CN112766184A CN112766184A (en) 2021-05-07
CN112766184B true CN112766184B (en) 2024-04-16

Family

ID=75706729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110090408.2A Active CN112766184B (en) 2021-01-22 2021-01-22 Remote sensing target detection method based on multi-level feature selection convolutional neural network

Country Status (1)

Country Link
CN (1) CN112766184B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298169B (en) * 2021-06-02 2024-03-01 浙江工业大学 Rotating target detection method and device based on convolutional neural network
CN113850256A (en) * 2021-09-10 2021-12-28 北京理工大学 Target detection and identification method based on FSAF and fast-slow weight
CN113822935B (en) * 2021-09-14 2024-02-06 南京邮电大学 Multi-image positioning method based on pix2pix
CN113591810B (en) * 2021-09-28 2021-12-07 湖南大学 Vehicle target pose detection method and device based on boundary tight constraint network and storage medium
CN116547713A (en) * 2021-12-03 2023-08-04 宁德时代新能源科技股份有限公司 Method and system for defect detection
CN117636172B (en) * 2023-12-06 2024-06-21 中国科学院长春光学精密机械与物理研究所 Target detection method and system for weak and small target of remote sensing image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016512A (en) * 2020-09-08 2020-12-01 重庆市地理信息和遥感应用中心 Remote sensing image small target detection method based on feedback type multi-scale training

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068171B2 (en) * 2015-11-12 2018-09-04 Conduent Business Services, Llc Multi-layer fusion in a convolutional neural network for image classification

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016512A (en) * 2020-09-08 2020-12-01 重庆市地理信息和遥感应用中心 Remote sensing image small target detection method based on feedback type multi-scale training

Also Published As

Publication number Publication date
CN112766184A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112766184B (en) Remote sensing target detection method based on multi-level feature selection convolutional neural network
US10607362B2 (en) Remote determination of containers in geographical region
Fu et al. Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model
CN110781827B (en) Road edge detection system and method based on laser radar and fan-shaped space division
US20220004762A1 (en) Systems and methods for analyzing remote sensing imagery
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
US9846946B2 (en) Objection recognition in a 3D scene
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
CN108596055B (en) Airport target detection method of high-resolution remote sensing image under complex background
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
US10049492B2 (en) Method and apparatus for rendering facades of objects of interest from three-dimensional point clouds
CN109598241B (en) Satellite image marine ship identification method based on Faster R-CNN
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
CN111179217A (en) Attention mechanism-based remote sensing image multi-scale target detection method
CN103984963B (en) Method for classifying high-resolution remote sensing image scenes
CN113920436A (en) Remote sensing image marine vessel recognition system and method based on improved YOLOv4 algorithm
Yang et al. Classified road detection from satellite images based on perceptual organization
CN113033315A (en) Rare earth mining high-resolution image identification and positioning method
EP3553700A2 (en) Remote determination of containers in geographical region
CN116258953A (en) Remote sensing image target detection method
Devi et al. Change detection techniques–A survey
Ding et al. Building detection in remote sensing image based on improved YOLOv5
CN107038710B (en) It is a kind of using paper as the Vision Tracking of target
CN109785318B (en) Remote sensing image change detection method based on facial line primitive association constraint
He et al. Automatic detection and mapping of solar photovoltaic arrays with deep convolutional neural networks in high resolution satellite images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant