CN114898097B - Image recognition method and system - Google Patents

Image recognition method and system

Info

Publication number
CN114898097B
Authority
CN
China
Prior art keywords
image data
data
mangrove
remote sensing
sentinel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210623004.XA
Other languages
Chinese (zh)
Other versions
CN114898097A
Inventor
田金炎
李小娟
王乐
宫辉力
倪荣光
李子怡
尹程阳
李夏荣
时晨
朱琳
陈蓓蓓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital Normal University
Original Assignee
Capital Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital Normal University
Priority to CN202210623004.XA
Publication of CN114898097A
Application granted
Publication of CN114898097B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/13 Satellite images
    • G06V20/188 Vegetation
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y02A90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image recognition method and system. First, Sentinel-2 data with 10-meter resolution are up-sampled to obtain sub-meter image data, band operations are performed on the sub-meter image data, and the index information and the several band layers obtained from the band operations are fused with the band information of sub-meter-resolution RGB remote sensing image data to obtain second image data. The RGB remote sensing image data are then segmented, non-mangrove growth areas are removed, and the result is binarized to obtain label image data. Next, the RGB remote sensing image data, the second image data and the label image data are tiled to construct an initial sample set for training an image recognition model. Finally, mangrove objects in remote sensing image data to be predicted are predicted with the pre-trained image recognition model. The method and system can rapidly and accurately identify mangroves worldwide and thereby obtain a global mangrove distribution map.

Description

Image recognition method and system
Technical Field
The invention relates to the technical field of remote sensing, in particular to an image recognition method and system.
Background
Remote sensing image recognition is a technique that uses a computer to analyze the spectral and spatial information of ground objects in a remote sensing image and to assign each pixel in the image to the class of the corresponding ground object. Remote sensing image recognition can therefore be used to identify mangroves in a target area and generate a mangrove distribution map. Existing mangrove identification can be divided into large-scale and small-scale approaches according to the identification scale. Small-scale mangrove identification is typically based on SPOT-5 images and uses an SVM (Support Vector Machine) classifier for mangrove type analysis and mapping. Large-scale mangrove mapping is global mangrove mapping based on Landsat and Sentinel-2 images.
In the small-scale scheme, based on SPOT-5 and color-composite images, four typical mangrove remote sensing interpretation keys established from image feature analysis are applied with an SVM classifier to complete classification mapping of mangroves in a study area, and the mapping results are finally verified for accuracy using random sampling points. However, the SVM classifier works at the pixel level, and spatial features are not fully exploited.
In addition, the highest spatial resolution of SPOT-5 images is 2.5 meters, so the classification accuracy is not high enough; SPOT-5 image data are not openly available and are expensive to obtain. Landsat and Sentinel-2 data are open but of low resolution. Moreover, the efficiency and accuracy of mangrove identification with machine learning networks are low.
Disclosure of Invention
Accordingly, the present invention is directed to an image recognition method and system that can rapidly and accurately identify mangrove forests worldwide by using medium-to-high-resolution remote sensing image classification, deep learning and image segmentation techniques, so as to obtain a global mangrove distribution map.
In a first aspect, an embodiment of the present invention provides an image recognition method, where the method includes: acquiring RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide; the resolution of the RGB remote sensing image data is less than 1 meter; the Sentinel-2 data are acquired by the multispectral imager carried on the Sentinel-2 satellite and have a resolution greater than 1 meter, namely 10 meters; performing up-sampling processing on the Sentinel-2 data to obtain first image data; wherein the resolution of the first image data is the same as the resolution of the RGB remote sensing image data; calculating normalized water index information and mangrove index information of the first image data through band operation, and superimposing and fusing the normalized water index information, mangrove index information, near-infrared band information and short-wave infrared band information of the first image data with the red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data; dividing the RGB remote sensing image data according to preset dividing parameters to obtain a plurality of divided image areas of the RGB remote sensing image data; wherein the types of the objects contained in the same divided image area are the same, and the types of the objects contained in different divided image areas are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and smoothness; removing the segmented image areas which do not contain mangrove objects from among the plurality of segmented image areas of the RGB remote sensing image data according to the Sentinel-2 data, and performing binarization processing to obtain tag image data; performing tiling processing on the RGB remote sensing image data, the second image data and the tag image data to obtain an initial sample set containing a plurality of tiles of equal size; and performing mangrove object prediction on remote sensing image data to be predicted according to an image recognition model trained in advance with the initial sample set, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted.
As a possible implementation, the removing, according to the Sentinel-2 data, of the segmented image areas that do not contain mangrove objects from among the plurality of segmented image areas of the RGB remote sensing image data and performing binarization processing to obtain tag image data includes: manually labeling a category label for each segmented image region according to the Sentinel-2 data and the type of the object contained in each segmented image region to obtain first label image data; wherein the types of the objects are divided into mangrove and non-mangrove, the mangrove corresponds to a first type tag, and the non-mangrove corresponds to a second type tag; removing the segmented image regions with the second type tag from the first label image data (retaining those with the first type tag) to obtain second label image data; and performing binarization processing on the second label image data to obtain the tag image data.
As a possible implementation, the acquiring of RGB remote sensing image data and Sentinel-2 data of the mangrove growth areas worldwide includes: acquiring RGB remote sensing image data of the global mangrove growth areas; and acquiring Sentinel-2 data of the global mangrove growth areas.
As a possible implementation, the training of the image recognition model includes: dividing the initial sample set into a training set, a verification set and a test set according to a preset proportional relationship; setting model training parameters, where the model training parameters include training batch, learning rate and number of iterations; performing iterative training on a semantic segmentation model with the training set, and verifying the accuracy of the semantic segmentation model after each round of iterative training with the verification set, where the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are a first preset loss function and a first intersection-over-union (IoU) function; stopping training when the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are stable and the model convergence condition is met, so as to obtain the image recognition model; and testing the trained semantic segmentation model with the test set, where the test indexes of the training set and the test set for the semantic segmentation model are a second preset loss function and a second intersection-over-union (IoU) function.
As a possible implementation, before the upsampling process is performed on the Sentinel-2 data to obtain the first image data, the method further includes: and carrying out cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data.
In a second aspect, an embodiment of the present invention further provides an image recognition system, including: the data acquisition module is used for acquiring RGB remote sensing image data and Sentinel-2 data of a mangrove growth area in the global range; the resolution ratio of the RGB remote sensing image data is smaller than 1 meter; the Sentinel-2 data are data acquired by a multispectral imager with the resolution of more than 1 meter carried by a satellite, and the resolution of the data is 10 meters; the data preprocessing module is used for carrying out up-sampling processing on the Sentinel-2 data to obtain first image data; wherein the resolution of the first image data is the same as the resolution of the RGB remote sensing image data; the information processing module is used for calculating normalized water index information and mangrove index information of the first image data through band operation, and carrying out superposition fusion on the normalized water index information, the mangrove index information, near infrared band information and short wave infrared band information of the first image data and red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data; the data segmentation module is used for segmenting the RGB remote sensing image data according to preset segmentation parameters to obtain a plurality of segmented image areas of the RGB remote sensing image data; wherein, the types of the objects contained in the same divided image area are the same, and the types of the objects contained in different divided area images are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and smoothness; the tag data making module is used for removing the segmented image areas which do not contain mangrove objects in the multiple segmented image areas of the RGB remote sensing image data according to the Sentinel-2 data, and performing binarization processing to obtain tag image data; the initial sample set forming module is used for tiling the RGB remote sensing image data, the second image data and the label image data to obtain an initial sample set containing a plurality of tiles with equal sizes; and the model prediction module is used for predicting the mangrove object of the remote sensing image data to be predicted according to the image recognition model trained by the initial sample set in advance, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted.
As a possible implementation, the tag data making module is further configured to: manually label a category label for each segmented image region according to the Sentinel-2 data and the type of the object contained in each segmented image region to obtain first label image data, where the types of the objects are divided into mangrove and non-mangrove, the mangrove corresponds to a first type tag, and the non-mangrove corresponds to a second type tag; remove the segmented image regions with the second type tag from the first label image data (retaining those with the first type tag) to obtain second label image data; and perform binarization processing on the second label image data to obtain the tag image data.
As a possible implementation, the data acquisition module is further configured to: acquire RGB remote sensing image data of the global mangrove growth areas; and acquire Sentinel-2 data of the global mangrove growth areas.
As a possible implementation, the system further includes: a model training module, configured to divide the initial sample set into a training set, a verification set and a test set according to a preset proportional relationship; set model training parameters, where the model training parameters include training batch, learning rate and number of iterations; perform iterative training on a semantic segmentation model with the training set, and verify the accuracy of the semantic segmentation model after each round of iterative training with the verification set, where the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are a first preset loss function and a first intersection-over-union (IoU) function; when the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are stable, test the trained semantic segmentation model with the test set, where the test indexes of the training set and the test set for the semantic segmentation model are a second preset loss function and a second intersection-over-union (IoU) function; and stop training when the test indexes of the training set and the test set for the semantic segmentation model meet the model convergence condition, so as to obtain the image recognition model.
As a possible implementation, the data preprocessing module is further configured to: and carrying out cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data before carrying out up-sampling processing on the Sentinel-2 data to obtain first image data.
The embodiments of the invention provide an image recognition method and system. First, RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide are acquired, and the Sentinel-2 data are up-sampled to obtain first image data; normalized water index information and mangrove index information of the first image data are calculated through band operation, and the normalized water index information, mangrove index information, near-infrared band information and short-wave infrared band information of the first image data are superimposed and fused with the red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data; the RGB remote sensing image data are divided according to preset dividing parameters to obtain a plurality of divided image areas of the RGB remote sensing image data; the divided image areas of the RGB remote sensing image data that do not contain mangrove objects are removed according to the Sentinel-2 data, and binarization processing is performed to obtain tag image data; the RGB remote sensing image data, the second image data and the tag image data are tiled to obtain an initial sample set containing a plurality of tiles of equal size; and finally, mangrove object prediction is performed on remote sensing image data to be predicted according to an image recognition model trained with the initial sample set, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted. With this technique, sub-meter RGB remote sensing image data and non-sub-meter Sentinel-2 data are used as the initial image data, which ensures relatively high classification accuracy; when the tag image data are produced, the image is segmented in a semi-automatic manner, which shortens the sample production time; and a deep learning model is used to identify mangroves, which reduces the deficiencies of manual mangrove identification and at the same time greatly improves the accuracy and efficiency of mangrove identification, providing a technical guarantee for large-scale, wide-range fine mapping.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image recognition method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another image recognition method according to an embodiment of the present invention;
FIG. 3 is a diagram showing an example of dynamic changes in loss according to an embodiment of the present invention;
FIG. 4 is a diagram showing an example of dynamic change of MIoU in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an image recognition system according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another image recognition system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described in conjunction with the embodiments, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Currently, existing mangrove identification can be classified into large-scale and small-scale approaches according to the identification scale. Small-scale mangrove identification is typically based on SPOT-5 images and uses an SVM (Support Vector Machine) classifier for mangrove type analysis and mapping. Large-scale mangrove mapping is global mangrove mapping based on Landsat and Sentinel-2 images. In the small-scale scheme, based on SPOT-5 and color-composite images, four typical mangrove remote sensing interpretation keys established from image feature analysis are applied with an SVM classifier to complete classification mapping of mangroves in a study area, and the mapping results are finally verified for accuracy using random sampling points. However, the SVM classifier works at the pixel level, and spatial features are not fully exploited. In addition, the highest spatial resolution of SPOT-5 images is 2.5 meters, so the classification accuracy is not high enough; SPOT-5 image data are not openly available and are expensive to obtain. Landsat and Sentinel-2 data are open but of low resolution. Moreover, the efficiency and accuracy of mangrove identification with machine learning networks are low.
In view of this, the image recognition method and system provided by the embodiments of the present invention can quickly and accurately identify mangroves worldwide by using medium-to-high-resolution remote sensing image classification, deep learning and image segmentation techniques, and thereby obtain a global mangrove distribution map.
For the sake of understanding the present embodiment, first, a detailed description will be given of an image recognition method disclosed in the present embodiment, and referring to a schematic flow chart of an image recognition method shown in fig. 1, the method may include the following steps:
Step S102, RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide are obtained; the resolution of the RGB remote sensing image data is less than 1 meter; the Sentinel-2 data are acquired by the multispectral imager carried on the Sentinel-2 satellite and have a resolution greater than 1 meter, namely 10 meters.
The method for acquiring the RGB remote sensing image data and the Sentinel-2 data may specifically be a method of crawling through a network, or may be a method of acquiring the RGB remote sensing image data and the Sentinel-2 data from a resource stored locally in advance, and may specifically be determined by itself according to actual needs, which is not limited thereto. For example, the above RGB remote sensing image data is from Google Earth.
Step S104, carrying out up-sampling processing on Sentinel-2 data to obtain first image data; wherein the resolution of the first image data is the same as the resolution of the RGB remote sensing image data.
Since the resolution of the RGB remote sensing image data is less than 1 meter (i.e., the resolution is sub-meter), and the resolution of the Sentinel-2 data is greater than 1 meter (i.e., the resolution is non-sub-meter), in order to increase the non-sub-meter resolution to sub-meter resolution, the Sentinel-2 data may be up-sampled to obtain first image data with sub-meter resolution, so as to ensure that the resolution of the image data obtained in the subsequent image data processing process is sub-meter.
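For illustration only (not part of the original disclosure), the following sketch shows one way such up-sampling can be done with the rasterio Python library using bilinear interpolation; the file names and the 0.56 m target resolution are assumptions taken from the example embodiment described later.

```python
import rasterio
from rasterio.enums import Resampling

TARGET_RES = 0.56  # assumed sub-meter grid, matching the RGB image

with rasterio.open("sentinel2_10m.tif") as src:
    scale = src.res[0] / TARGET_RES          # e.g. 10 / 0.56 ≈ 17.86
    out_h = int(src.height * scale)
    out_w = int(src.width * scale)
    # Read all bands and interpolate onto the finer grid
    data = src.read(
        out_shape=(src.count, out_h, out_w),
        resampling=Resampling.bilinear,
    )
    # Scale the affine transform so the pixels map to the new grid
    transform = src.transform * src.transform.scale(
        src.width / out_w, src.height / out_h
    )
    profile = src.profile
    profile.update(height=out_h, width=out_w, transform=transform)

with rasterio.open("sentinel2_upsampled.tif", "w", **profile) as dst:
    dst.write(data)
```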
Step S106, calculating normalized water body index information and mangrove index information of the first image data through band operation, and superimposing and fusing the normalized water body index information, mangrove index information, near-infrared band information and short-wave infrared band information of the first image data with the red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data.
The normalized water index information is generally MNDWI, and is used for distinguishing water from non-water; the mangrove index information is typically WFI or the like, and is used to distinguish mangrove from non-mangrove.
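As a minimal numerical sketch (illustrative only): MNDWI in its standard form is (Green − SWIR1) / (Green + SWIR1); the WFI formula used by the embodiment follows the patent's Table 2 (not reproduced in this text), so only MNDWI is shown, and the toy arrays simply stand in for the up-sampled Sentinel-2 bands.

```python
import numpy as np

def mndwi(green: np.ndarray, swir1: np.ndarray) -> np.ndarray:
    """Modified normalized difference water index: (Green - SWIR1) / (Green + SWIR1)."""
    return (green - swir1) / (green + swir1 + 1e-9)  # epsilon avoids division by zero

# Toy reflectance arrays standing in for the up-sampled Sentinel-2 bands
green = np.random.rand(256, 256).astype(np.float32)
swir1 = np.random.rand(256, 256).astype(np.float32)

water_index = mndwi(green, swir1)   # values roughly in [-1, 1]
water_mask = water_index > 0        # positive MNDWI commonly indicates open water
```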
Step S108, dividing the RGB remote sensing image data according to preset dividing parameters to obtain a plurality of divided image areas of the RGB remote sensing image data; wherein the types of the objects contained in the same divided image area are the same, and the types of the objects contained in different divided image areas are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and smoothness.
Generally, the higher the resolution is, the smaller the segmentation scale is, and thus the set value of the segmentation scale needs to be determined according to the resolution of the RGB remote sensing image data. In addition, the set values of the shape factor and the smoothness can be determined by itself according to actual needs, and are not limited thereto.
After the RGB remote sensing image data is segmented, the objects contained in each segmented image area can be vegetation, river, soil or the like, the types of the objects contained in the same segmented image area are the same, and the types of the objects contained in different segmented image areas are different.
Step S110, according to the Sentinel-2 data, removing the segmented image areas which do not contain mangrove objects in the segmented image areas of the RGB remote sensing image data, and performing binarization processing to obtain label image data.
Specifically, since the Sentinel-2 data contain mangrove objects, the segmented image areas of the RGB remote sensing image data that do not contain mangrove objects can be manually removed by referring to the mangrove growth areas in the Sentinel-2 data, and binarization processing can then be performed to obtain the tag image data.
Step S112, the RGB remote sensing image data, the second image data and the label image data are subjected to tiling processing, and an initial sample set containing a plurality of tiles with equal sizes is obtained.
In order to facilitate training of the subsequent deep learning model, the RGB remote sensing image data, the second image data and the label image data may be segmented into a plurality of equal size tiles, and then each tile may be used as an image sample to form the initial sample set.
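A minimal tiling sketch follows (illustrative only): the RGB image, the fused image and the label raster are cut into equal, non-overlapping tiles; the tile size of 256, the 7-band fused stack and the in-memory numpy representation are assumptions.

```python
import numpy as np

def tile_array(img: np.ndarray, tile: int = 256):
    """Cut a (C, H, W) array into non-overlapping (C, tile, tile) patches,
    discarding the ragged border."""
    c, h, w = img.shape
    tiles = []
    for r in range(0, h - tile + 1, tile):
        for col in range(0, w - tile + 1, tile):
            tiles.append(img[:, r:r + tile, col:col + tile])
    return tiles

# Toy stand-ins for the RGB image, the 7-band fused image and the 1-band label raster
rgb = np.zeros((3, 1024, 1024), dtype=np.float32)
fused = np.zeros((7, 1024, 1024), dtype=np.float32)
labels = np.zeros((1, 1024, 1024), dtype=np.uint8)

samples = list(zip(tile_array(rgb), tile_array(fused), tile_array(labels)))
print(len(samples))  # 16 tiles of 256x256 from a 1024x1024 scene
```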
And step S114, according to an image recognition model trained by the initial sample set in advance, predicting the mangrove object of the remote sensing image data to be predicted, and obtaining a mangrove prediction result of the remote sensing image data to be predicted.
The image recognition model can be specifically obtained by training an initial deep learning model, and the specific architecture of the initial deep learning model can be determined by itself according to actual needs without limitation.
In order to improve the accuracy of the image recognition model prediction, a spatial attention mechanism can be introduced into the image recognition model, more weights are allocated to mangrove features in the classification process, and fewer weights are allocated to non-mangrove features, so that the accuracy of mangrove recognition is improved. For example, a spatial attention module is designed on one or more layers of the initial semantic segmentation model, and corresponding weights are assigned to mangrove features and non-mangrove features by the spatial attention module through a spatial attention mechanism.
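As one possible realization of such a spatial attention module (a sketch under the assumption of a CBAM-style design; the original does not specify the exact structure), channel-wise mean and max maps are convolved into a per-pixel weight that rescales the feature map:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: one weight per pixel, shared over channels."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)       # (N, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)     # (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                             # re-weight features per pixel

feats = torch.randn(2, 64, 32, 32)
print(SpatialAttention()(feats).shape)  # torch.Size([2, 64, 32, 32])
```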
According to the image recognition method provided by the embodiment of the invention, RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide are acquired, and the Sentinel-2 data are up-sampled to obtain first image data; normalized water index information and mangrove index information of the first image data are calculated through band operation, and the normalized water index information, mangrove index information, near-infrared band information and short-wave infrared band information of the first image data are superimposed and fused with the red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data; the RGB remote sensing image data are then divided according to preset dividing parameters to obtain a plurality of divided image areas of the RGB remote sensing image data; the divided image areas of the RGB remote sensing image data that do not contain mangrove objects are removed according to the Sentinel-2 data, and binarization processing is performed to obtain tag image data; the RGB remote sensing image data, the second image data and the tag image data are then tiled to obtain an initial sample set containing a plurality of tiles of equal size; and finally, mangrove object prediction is performed on remote sensing image data to be predicted according to an image recognition model trained in advance with the initial sample set, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted. With this technique, sub-meter RGB remote sensing image data and non-sub-meter Sentinel-2 data are used as the initial image data, which ensures relatively high classification accuracy; when the tag image data are produced, the image is segmented in a semi-automatic manner, which shortens the sample production time; and a deep learning model is used to identify mangroves, which reduces the deficiencies of manual mangrove identification and at the same time greatly improves the accuracy and efficiency of mangrove identification, providing a technical guarantee for large-scale, wide-range fine mapping.
On the basis of the above image recognition method, for convenience of operation, the step S110 (that is, removing the segmented image area excluding the mangrove object from the plurality of segmented image areas of the RGB remote sensing image data according to the Sentinel-2 data, and performing binarization processing to obtain the tag image data) may include:
(11) Manually labeling a category label for each segmented image region according to the Sentinel-2 data and the type of the object contained in each segmented image region to obtain first label image data; wherein the types of the objects are classified into mangrove and non-mangrove, and the mangrove corresponds to the first type tag and the non-mangrove corresponds to the second type tag.
(12) Removing the segmented image areas with the second type tag from the first label image data (retaining those with the first type tag) to obtain second label image data.
(13) And performing binarization processing on the second tag image data to obtain tag image data.
For example, for a plurality of divided image areas of RGB remote sensing image data, a first type tag 2 is marked for the divided image area containing mangrove, a second type tag 1 is marked for the divided image area containing only non-mangrove, then the divided image area with the second type tag 1 is removed, and the remained divided image area with the first type tag 2 is binarized, so as to obtain tag image data.
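A toy illustration of this labelling and binarization (array values and sizes are assumptions, not data from the disclosure): pixels belonging to segments tagged 2 (mangrove) become 1 in the label raster, everything else becomes 0.

```python
import numpy as np

MANGROVE_TAG, NON_MANGROVE_TAG = 2, 1   # tag values used in the example above

# Toy segment-tag raster: every pixel carries the tag of its segment
segment_tags = np.array([[2, 2, 1],
                         [2, 1, 1],
                         [1, 1, 2]], dtype=np.uint8)

# Drop non-mangrove segments and binarize: mangrove -> 1, background -> 0
label_image = (segment_tags == MANGROVE_TAG).astype(np.uint8)
print(label_image)
```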
On the basis of the image recognition method, for convenience of operation, the step S102 (i.e. acquiring RGB remote sensing image data and Sentinel-2 data of the mangrove growth area in the global area) may include:
(21) RGB remote sensing image data of the global mangrove growth area is obtained.
(22) Sentinel-2 data of the global mangrove growth area is obtained.
For example, RGB remote sensing image data with a resolution of less than 1 meter is downloaded with a map-download tool (the imagery originating, for example, from Google Earth), and Sentinel-2 data with a resolution of 10 meters is downloaded from the remote sensing cloud computing platform GEE (Google Earth Engine).
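Purely as an illustration of the GEE acquisition step (the collection ID, region, date range and cloud threshold below are assumptions, not values from the disclosure), Sentinel-2 surface-reflectance imagery can be filtered and exported with the Earth Engine Python API:

```python
import ee

ee.Initialize()  # assumes an authenticated Earth Engine account

# Assumed region of interest: a small coastal mangrove area (lon/lat rectangle)
roi = ee.Geometry.Rectangle([110.0, 19.8, 110.2, 20.0])

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(roi)
      .filterDate("2021-01-01", "2021-12-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 10))
      .median()
      .select(["B2", "B3", "B4", "B8", "B11", "B12"]))

task = ee.batch.Export.image.toDrive(
    image=s2.clip(roi), description="s2_mangrove",
    region=roi, scale=10, maxPixels=1e13)
task.start()
```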
As a possible implementation manner, the training of the image recognition model may include:
(31) And dividing the initial sample set into a training set, a verification set and a test set according to a preset proportional relation.
The above-mentioned preset ratio relationship can be specifically determined according to actual needs, for example, the ratio relationship of the training set, the verification set and the test set is 6:2:2, 7:2:1, 8:1:1, etc., which is not limited.
(32) Setting model training parameters; the model training parameters comprise training batch, learning rate and iteration times.
The training batch, the learning rate and the number of iterations may be determined according to actual needs; for example, the training batch (i.e., batch_size) may be set to 15, 20 or 30, the learning rate may be set to 0.001 or the like, and the number of iterations (i.e., epochs) may be set to 120, 150 or 200, which is not limited.
(33) Performing iterative training on the semantic segmentation model by using the training set, and verifying the accuracy of the semantic segmentation model after each round of iterative training by using the verification set; the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are a first preset loss function and a first intersection-over-union (IoU) function.
The semantic segmentation model can adopt DeepLab v1, DeepLab v2, DeepLab v3, DeepLab v3+ or the like, which can be determined according to actual needs and is not limited.
The first preset loss function and the first IoU function may be determined according to actual needs; for example, the first preset loss function may use cross-entropy loss, softmax loss or the like, and the first IoU function may use MIoU or the like, which is not limited (an illustrative computation of these two indices is sketched after step (34) below).
(34) Stopping training when the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are stable and the model convergence condition is met, so as to obtain the image recognition model; and testing the trained semantic segmentation model by using the test set; the test indexes of the training set and the test set for the semantic segmentation model are a second preset loss function and a second IoU function.
The second preset loss function and the second IoU function are chosen similarly to the first preset loss function and the first IoU function, and are not described in detail here.
The model convergence condition may include at least one of the following: the first preset loss function falls below a preset small value or stabilizes around a low value; the first IoU function exceeds a preset value close to 1 or stabilizes around a value close to 1; and the number of iterations exceeds the preset maximum number of iterations.
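The sketch below (illustrative only; the class count, tile size and random tensors are assumptions) shows how the cross-entropy loss and a two-class MIoU of the kind referred to above can be computed in PyTorch:

```python
import torch
import torch.nn.functional as F

def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int = 2) -> float:
    """Mean intersection-over-union over the classes present in the batch."""
    pred_lbl = pred.argmax(dim=1)                 # (N, H, W) predicted class per pixel
    ious = []
    for c in range(num_classes):
        inter = ((pred_lbl == c) & (target == c)).sum().item()
        union = ((pred_lbl == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

logits = torch.randn(4, 2, 256, 256)              # model output: mangrove vs. background
labels = torch.randint(0, 2, (4, 256, 256))       # binary label tiles

loss = F.cross_entropy(logits, labels)            # the "preset loss function"
miou = mean_iou(logits, labels)                   # the "IoU function"
print(float(loss), miou)
```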
On the basis of the above image recognition method, in order to further enhance the validity of the image data, before performing the step S104 (that is, performing upsampling processing on the Sentinel-2 data to obtain the first image data), the method may further include: and carrying out cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data.
Based on the above image recognition method, the embodiment of the invention also discloses another image recognition method, as shown in fig. 2, which may include the following steps:
Step S202, RGB remote sensing image data with a resolution of 0.56 m is downloaded with a map-download tool, and Sentinel-2 data with a resolution of 10 m is downloaded from GEE.
And S204, performing cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data.
Step S206, up-sampling the Sentinel-2 data by using the resampling tool of ArcGIS software to obtain first image data with a resolution of 0.56 m.
Step S208, calculating MNDWI index information and WFI index information of the first image data through band operation, and superimposing and fusing the MNDWI index information, WFI index information, near-infrared band information B8 and short-wave infrared band information B11 of the first image data with the red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data.
Table 1 shows near-infrared band information B8, short-wave infrared band information B11, and short-wave infrared band information B12, respectively, and table 2 shows calculation modes of MNDWI index information and WFI index information, respectively.
TABLE 1 band information
TABLE 2 index information
Step S210, setting segmentation parameters in eCognition software, and performing multi-scale optimized segmentation on the RGB remote sensing image data according to the set segmentation parameters to obtain a plurality of segmented image areas of the RGB remote sensing image data; wherein the types of the objects contained in the same segmented image area are the same, and the types of the objects contained in different segmented image areas are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and smoothness.
The division scale may be set to a value between 30 and 100, such as 50; the shape factor may be set to a value between 0 and 1, such as 0.4; the smoothness may be set to a value between 0 and 1, such as 0.5. In addition, layer names, band weights, etc. can also be set by eCognition software.
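eCognition's multiresolution segmentation itself is proprietary; purely as an open-source stand-in (not the tool used in the embodiment), a comparable object-level segmentation can be sketched with scikit-image, where scale, sigma and min_size loosely play the role of the segmentation scale and smoothness parameters:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Toy RGB scene standing in for the sub-meter remote sensing image, shape (H, W, 3)
rgb = np.random.rand(512, 512, 3)

# Larger scale/min_size -> larger segments (analogous to a coarser segmentation scale)
segments = felzenszwalb(rgb, scale=50, sigma=0.5, min_size=200)
print(segments.shape, segments.max() + 1)   # label map and number of segments
```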
Step S212, superposing the segmented RGB remote sensing image data and the Sentinel-2 data, removing the segmented image area of the segmented RGB remote sensing image data which does not contain the mangrove object, and performing binarization processing to obtain the tag image data.
In step S214, the RGB remote sensing image data, the second image data and the label image data are all segmented into a plurality of 256×256 (or 512×512) tiles, so as to obtain an initial sample set including the plurality of tiles.
Step S216, 60% of the samples, uniformly distributed in space, are extracted from the initial sample set to form a training set, and samples are extracted from the remaining 40% of the initial sample set to form a verification set and a test set respectively; the ratio of the training set to the verification set is 4:1.
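A toy split sketch (illustrative only): a plain random shuffle is used here in place of the spatially uniform sampling described above, which would additionally use the tiles' geolocation; the tile count is an assumption.

```python
import random

random.seed(0)

tiles = list(range(1000))                 # indices of sample tiles (toy stand-in)
random.shuffle(tiles)

n_train = int(0.6 * len(tiles))           # 60% training, as in the embodiment
n_val = n_train // 4                      # train : val = 4 : 1
train = tiles[:n_train]
val = tiles[n_train:n_train + n_val]
test = tiles[n_train + n_val:]            # remainder used for testing
print(len(train), len(val), len(test))
```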
Step S218, setting the training batch to 20, the learning rate to 0.001, and the number of iterations to 150 (or 200); performing iterative training on the DeepLab v3+ model by using the training set, and verifying the accuracy of the DeepLab v3+ model after each round of iterative training by using the verification set; the accuracy verification indexes of the training set and the verification set for the DeepLab v3+ model are cross-entropy loss and MIoU; stopping training when the accuracy verification indexes of the training set and the verification set for the DeepLab v3+ model are stable and the model convergence condition is met, so as to obtain a mangrove identification model; and testing the trained DeepLab v3+ model by using the test set; the test indexes of the training set and the test set for the DeepLab v3+ model are cross-entropy loss and MIoU.
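A compressed training-loop sketch (illustrative only): torchvision's DeepLab v3 with a ResNet-50 backbone is used as a readily available stand-in for the DeepLab v3+ model of the embodiment, with the batch size, learning rate and epoch count given above; random tensors replace the real sample tiles, and validation is omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
# DeepLab v3 (ResNet-50) from torchvision as a stand-in for DeepLab v3+;
# for the 7-band fused input the first backbone convolution would need to be
# replaced with a 7-channel one (a 3-channel toy input is used here).
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # learning rate from the text

batch_size, epochs = 20, 150                                 # values from the embodiment
for epoch in range(epochs):
    # Random tensors stand in for one batch of image tiles and binary label tiles
    images = torch.randn(batch_size, 3, 256, 256, device=device)
    labels = torch.randint(0, 2, (batch_size, 256, 256), device=device)

    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]              # (N, 2, 256, 256) per-pixel class scores
    loss = F.cross_entropy(logits, labels)     # the cross-entropy index of step S218
    loss.backward()
    optimizer.step()
    # validation cross-entropy and MIoU would be computed here on the verification set
```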
During the training process, the changes in cross entropy loss and MIoU can also be recorded and corresponding images generated. For example, in FIG. 3, the horizontal axis represents iteration number and the vertical axis represents cross entropy loss; for example, in fig. 4, the horizontal axis represents the number of iterations and the vertical axis is MIoU.
The above model convergence conditions are similar to those described above, and will not be described in detail here.
And S220, carrying out mangrove object prediction on the remote sensing image data to be predicted by using the trained mangrove recognition model to obtain a mangrove prediction result of the remote sensing image data to be predicted.
Specifically, after the trained mangrove recognition model is obtained, the remote sensing image data to be predicted is first divided into a plurality of 256×256 (or 512×512) tiles, each tile is then input into the mangrove recognition model, which outputs the 256×256 (or 512×512) predicted image data corresponding to each tile, and the predicted image data corresponding to all the tiles are then mosaicked and assigned coordinates to obtain the mangrove prediction image of the remote sensing image data to be predicted.
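A simplified sketch of this tile-predict-mosaic flow (illustrative only; `model` stands for any trained segmentation network returning per-pixel logits, and the dummy module and scene size below are placeholders):

```python
import numpy as np
import torch

TILE = 256

def predict_scene(model: torch.nn.Module, scene: np.ndarray) -> np.ndarray:
    """Tile a (C, H, W) scene, predict each tile, and mosaic the predictions back."""
    c, h, w = scene.shape
    out = np.zeros((h, w), dtype=np.uint8)
    model.eval()
    with torch.no_grad():
        for r in range(0, h - TILE + 1, TILE):
            for col in range(0, w - TILE + 1, TILE):
                patch = np.ascontiguousarray(scene[:, r:r + TILE, col:col + TILE])
                logits = model(torch.from_numpy(patch).unsqueeze(0))   # (1, 2, TILE, TILE)
                pred = logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)
                out[r:r + TILE, col:col + TILE] = pred                 # mosaic back
    return out  # georeferencing would be re-attached from the source raster's transform

class _Dummy(torch.nn.Module):
    """Placeholder network producing random two-class logits."""
    def forward(self, x):
        return torch.randn(x.shape[0], 2, x.shape[2], x.shape[3])

scene = np.random.rand(7, 512, 512).astype(np.float32)       # toy 7-band fused scene
print(predict_scene(_Dummy(), scene).shape)                   # (512, 512) mangrove mask
```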
The prediction result of the mangrove forest of the remote sensing image data to be predicted can be verified by visual interpretation, so that the prediction precision of the mangrove forest identification model is calculated.
Based on the above image recognition method, the embodiment of the invention also provides an image recognition system, as shown in fig. 5, which comprises the following modules:
The data acquisition module 502 is used for acquiring RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide; the resolution of the RGB remote sensing image data is less than 1 meter; the Sentinel-2 data are acquired by the multispectral imager carried on the Sentinel-2 satellite and have a resolution greater than 1 meter, namely 10 meters.
A data preprocessing module 504, configured to perform upsampling processing on the Sentinel-2 data to obtain first image data; wherein the resolution of the first image data is the same as the resolution of the RGB remote sensing image data.
The information processing module 506 is configured to calculate normalized water body index information and mangrove index information of the first image data through band operation, and superimpose and fuse the normalized water body index information, the mangrove index information, near infrared band information and short wave infrared band information of the first image data with red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data.
The data segmentation module 508 is configured to segment the RGB remote sensing image data according to preset segmentation parameters, so as to obtain a plurality of segmented image areas of the RGB remote sensing image data; wherein the types of the objects contained in the same segmented image area are the same, and the types of the objects contained in different segmented image areas are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and smoothness.
And the tag data making module 510 is configured to remove, from the plurality of segmented image areas of the RGB remote sensing image data, a segmented image area that does not include a mangrove object according to the Sentinel-2 data, and perform binarization processing to obtain tag image data.
An initial sample set forming module 512 is configured to tile the RGB remote sensing image data, the second image data, and the tag image data to obtain an initial sample set including a plurality of tiles with equal sizes.
And the model prediction module 514 is configured to predict a mangrove object of the remote sensing image data to be predicted according to an image recognition model trained in advance by using the initial sample set, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted.
According to the image recognition system provided by the embodiment of the invention, RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide are acquired, and the Sentinel-2 data are up-sampled to obtain first image data; normalized water index information and mangrove index information of the first image data are calculated through band operation, and the normalized water index information, mangrove index information, near-infrared band information and short-wave infrared band information of the first image data are superimposed and fused with the red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data; the RGB remote sensing image data are then segmented according to preset segmentation parameters to obtain a plurality of segmented image areas of the RGB remote sensing image data; the segmented image areas of the RGB remote sensing image data that do not contain mangrove objects are removed according to the Sentinel-2 data, and binarization processing is performed to obtain tag image data; the RGB remote sensing image data, the second image data and the tag image data are then tiled to obtain an initial sample set containing a plurality of tiles of equal size; and finally, mangrove object prediction is performed on remote sensing image data to be predicted according to an image recognition model trained in advance with the initial sample set, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted. With this technique, sub-meter RGB remote sensing image data and non-sub-meter Sentinel-2 data are used as the initial image data, which ensures relatively high classification accuracy; when the tag image data are produced, the image is segmented in a semi-automatic manner, which shortens the sample production time; and a deep learning model is used to identify mangroves, which reduces the deficiencies of manual mangrove identification and at the same time greatly improves the accuracy and efficiency of mangrove identification, providing a technical guarantee for large-scale, wide-range fine mapping.
The tag data making module 510 is further configured to: manually label a category label for each segmented image region according to the Sentinel-2 data and the type of the object contained in each segmented image region to obtain first label image data, where the types of the objects are divided into mangrove and non-mangrove, the mangrove corresponds to a first type tag, and the non-mangrove corresponds to a second type tag; remove the segmented image regions with the second type tag from the first label image data (retaining those with the first type tag) to obtain second label image data; and perform binarization processing on the second label image data to obtain the tag image data.
The data acquisition module 502 is further configured to: acquire RGB remote sensing image data of the global mangrove growth areas; and acquire Sentinel-2 data of the global mangrove growth areas.
The data preprocessing module 504 is further configured to: and carrying out cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data before carrying out up-sampling processing on the Sentinel-2 data to obtain first image data.
Based on the above image recognition system, the embodiment of the present invention further provides another image recognition system, as shown in fig. 6, which further includes:
The model training module 516 is configured to divide the initial sample set into a training set, a verification set and a test set according to a preset proportional relationship; set model training parameters, where the model training parameters include training batch, learning rate and number of iterations; perform iterative training on the semantic segmentation model by using the training set, and verify the accuracy of the semantic segmentation model after each round of iterative training by using the verification set, where the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are a first preset loss function and a first intersection-over-union (IoU) function; stop training when the accuracy verification indexes of the training set and the verification set for the semantic segmentation model are stable and the model convergence condition is met, so as to obtain the image recognition model; and test the trained semantic segmentation model by using the test set, where the test indexes of the training set and the test set for the semantic segmentation model are a second preset loss function and a second intersection-over-union (IoU) function.
The image recognition system provided by the embodiment of the present invention has the same implementation principle and technical effects as those of the foregoing method embodiment, and for brevity, reference may be made to corresponding contents in the foregoing image recognition method embodiment where the system embodiment is not mentioned.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention, and are not intended to limit the scope of the present invention, but it should be understood by those skilled in the art that the present invention is not limited thereto, and that the present invention is described in detail with reference to the foregoing examples: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image recognition method, the method comprising:
Acquiring RGB remote sensing image data and Sentinel-2 data of mangrove growth areas worldwide; the resolution of the RGB remote sensing image data is less than 1 meter; the Sentinel-2 data are acquired by the multispectral imager carried on the Sentinel-2 satellite and have a resolution greater than 1 meter, namely 10 meters;
Performing up-sampling processing on the Sentinel-2 data to obtain first image data; wherein the resolution of the first image data is the same as the resolution of the RGB remote sensing image data;
calculating normalized water index information and mangrove index information of the first image data through band operation, and overlapping and fusing the normalized water index information, mangrove index information, near infrared band information and short wave infrared band information of the first image data with red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data;
Dividing the RGB remote sensing image data according to preset dividing parameters to obtain a plurality of divided image areas of the RGB remote sensing image data; wherein the types of the objects contained in the same divided image area are the same, and the types of the objects contained in different divided image areas are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and smoothness;
Removing the segmented image areas which do not contain mangrove objects in the segmented image areas of the RGB remote sensing image data according to the Sentinel-2 data, and performing binarization processing to obtain tag image data;
Performing tiling processing on the RGB remote sensing image data, the second image data and the tag image data to obtain an initial sample set containing a plurality of tiles with equal sizes;
And predicting the mangrove object according to the image recognition model trained by the initial sample set in advance, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted.
2. The image recognition method according to claim 1, wherein the removing, from the Sentinel-2 data, the segmented image region excluding the mangrove object from the plurality of segmented image regions of the RGB remote sensing image data, and performing binarization processing to obtain the tag image data, includes:
Manually labeling a category tag for each segmented image area according to the Sentinel-2 data and the type of the object contained in each segmented image area to obtain first tag image data; wherein the types of the objects are divided into mangrove and non-mangrove, the mangrove corresponding to a first type tag and the non-mangrove corresponding to a second type tag;
Removing the segmented image areas with the first type tag from the first tag image data to obtain second tag image data;
And performing binarization processing on the second tag image data to obtain the tag image data.
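
A minimal sketch of the tag-making step of claim 2, assuming the manual tags are stored as a per-pixel segment-id raster plus the set of ids tagged as mangrove; the binarized result marks mangrove pixels as 1 and everything else as 0. This data layout is an assumption made for illustration, not part of the claim.

```python
import numpy as np


def binarize_tags(segment_ids, mangrove_segment_ids):
    """Turn a per-pixel segment-id raster into a 0/1 tag image.

    segment_ids: (H, W) array where each pixel holds the id of its segmented image area.
    mangrove_segment_ids: ids of the areas manually tagged as mangrove.
    """
    mask = np.isin(segment_ids, list(mangrove_segment_ids))
    return mask.astype(np.uint8)  # 1 = mangrove, 0 = non-mangrove / removed areas
```
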
3. The image recognition method of claim 1, wherein the acquiring RGB remote sensing image data and Sentinel-2 data of the global mangrove growth area comprises:
Acquiring RGB remote sensing image data of a mangrove growth area in the global range;
Acquiring Sentinel-2 data of the mangrove growth area in the global range.
4. The image recognition method of claim 1, wherein the training of the image recognition model comprises:
Dividing the initial sample set into a training set, a verification set and a test set according to a preset proportional relation;
Setting model training parameters; wherein the model training parameters comprise a training batch size, a learning rate and a number of iterations;
Performing iterative training on a semantic segmentation model by using the training set, and verifying the accuracy of the semantic segmentation model after each round of iterative training by using the verification set; wherein the accuracy verification indexes of the training set and the verification set with respect to the semantic segmentation model are a first preset loss function and a first cross-correlation function;
Stopping the training when the accuracy verification indexes of the training set and the verification set with respect to the semantic segmentation model are stable and a model convergence condition is met, thereby obtaining the image recognition model; and testing the trained semantic segmentation model by using the test set; wherein the test indexes of the training set and the test set with respect to the semantic segmentation model are a second preset loss function and a second cross-correlation function.
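
Claim 4 leaves the network architecture, the preset loss functions and the "cross-correlation" verification index unspecified. The PyTorch sketch below assumes a generic binary-mask segmentation network, binary cross-entropy as the loss, intersection-over-union as the accuracy index, and a 70/20/10 split; all of these are illustrative stand-ins rather than the claimed configuration, and the constants BATCH_SIZE, LEARNING_RATE and EPOCHS are hypothetical values.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Hypothetical training parameters; the claim only requires that batch size,
# learning rate and number of iterations be set.
BATCH_SIZE, LEARNING_RATE, EPOCHS = 16, 1e-3, 50


def iou_score(pred, target, eps=1e-6):
    """Intersection-over-union for binary mangrove masks (used here as the accuracy index)."""
    pred = (torch.sigmoid(pred) > 0.5).float()
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return (inter + eps) / (union + eps)


def train(model: nn.Module, dataset: TensorDataset, device="cpu") -> nn.Module:
    # Split the initial sample set into training, verification and test subsets.
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n - n_train - n_val])
    train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=BATCH_SIZE)
    criterion = nn.BCEWithLogitsLoss()  # stand-in for the "preset loss function"
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    model.to(device)
    for epoch in range(EPOCHS):
        model.train()
        for tiles, masks in train_loader:  # masks are float tensors shaped like the model output
            tiles, masks = tiles.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(tiles), masks)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            ious = [iou_score(model(x.to(device)), y.to(device)).item() for x, y in val_loader]
        val_iou = sum(ious) / max(len(ious), 1)
        print(f"epoch {epoch}: validation IoU {val_iou:.3f}")
    return model
```

Training would be stopped once the validation loss and accuracy curves flatten, matching the "stable and converged" condition of the claim, after which the held-out test subset is evaluated once.
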
5. The image recognition method according to claim 1, wherein before the upsampling process is performed on the Sentinel-2 data to obtain the first image data, the method further comprises:
Carrying out cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data.
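
Claim 5 does not say how the cloud removal is performed. For Sentinel-2 Level-2A data, one common option is to mask pixels flagged as cloudy in the scene classification layer (SCL); the sub-meter RGB imagery would need its own cloud mask or cloud-free acquisitions. The class codes below follow the usual Level-2A convention and are offered only as an illustrative assumption.

```python
import numpy as np

# SCL classes commonly treated as cloud-affected in Sentinel-2 Level-2A products
# (3: cloud shadow, 8/9: cloud medium/high probability, 10: thin cirrus).
CLOUDY_SCL_CLASSES = (3, 8, 9, 10)


def mask_clouds(bands, scl, fill_value=np.nan):
    """Set cloud-affected pixels of an (H, W, C) Sentinel-2 band stack to fill_value."""
    cloudy = np.isin(scl, CLOUDY_SCL_CLASSES)
    out = bands.astype(np.float32).copy()
    out[cloudy] = fill_value
    return out
```
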
6. An image recognition system, the system comprising:
The data acquisition module is used for acquiring RGB remote sensing image data and Sentinel-2 data of a mangrove growth area in the global range; wherein the resolution of the RGB remote sensing image data is finer than 1 meter; the Sentinel-2 data are data acquired by a multispectral imager, with a resolution coarser than 1 meter, carried by the Sentinel-2 satellite, and the resolution of the Sentinel-2 data is 10 meters;
the data preprocessing module is used for carrying out up-sampling processing on the Sentinel-2 data to obtain first image data; wherein the resolution of the first image data is the same as the resolution of the RGB remote sensing image data;
the information processing module is used for calculating normalized water index information and mangrove index information of the first image data through band operation, and carrying out superposition fusion on the normalized water index information, the mangrove index information, near infrared band information and short wave infrared band information of the first image data and red, green and blue visible light band information of the RGB remote sensing image data to obtain second image data;
The data segmentation module is used for segmenting the RGB remote sensing image data according to preset segmentation parameters to obtain a plurality of segmented image areas of the RGB remote sensing image data; wherein the types of the objects contained in the same segmented image area are the same, and the types of the objects contained in different segmented image areas are different; the preset segmentation parameters comprise a segmentation scale, a shape factor and a smoothness;
The tag data making module is used for removing, according to the Sentinel-2 data, the segmented image areas which do not contain mangrove objects from the plurality of segmented image areas of the RGB remote sensing image data, and performing binarization processing to obtain tag image data;
The initial sample set forming module is used for tiling the RGB remote sensing image data, the second image data and the label image data to obtain an initial sample set containing a plurality of tiles with equal sizes;
and the model prediction module is used for predicting the mangrove object of the remote sensing image data to be predicted according to the image recognition model trained by the initial sample set in advance, so as to obtain a mangrove prediction result of the remote sensing image data to be predicted.
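
The model prediction module (and the final step of claim 1) is not tied to a particular inference scheme. A common choice is tile-wise prediction with the same tile size used for training, stitching the per-tile masks back into a full-scene mangrove map. The sketch below assumes a PyTorch model that outputs a single-channel logit map and uses zero padding at scene edges; both assumptions are illustrative.

```python
import numpy as np
import torch
from torch import nn


def predict_mangrove(model: nn.Module, image, tile=256, device="cpu"):
    """Tile-wise binary mangrove prediction for an (H, W, C) fused image stack."""
    h, w, c = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    model.eval().to(device)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = image[y:y + tile, x:x + tile]
                ph, pw, _ = patch.shape
                padded = np.zeros((tile, tile, c), dtype=np.float32)
                padded[:ph, :pw] = patch  # zero-pad edge tiles to the training tile size
                inp = torch.from_numpy(padded).permute(2, 0, 1).unsqueeze(0).to(device)
                mask = torch.sigmoid(model(inp))[0, 0].cpu().numpy() > 0.5  # assumes (1, 1, H, W) output
                out[y:y + ph, x:x + pw] = mask[:ph, :pw].astype(np.uint8)
    return out  # 1 = predicted mangrove, 0 = background
```
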
7. The image recognition system of claim 6, wherein the tag data creation module is further configured to:
Manually labeling a category tag for each segmented image area according to the Sentinel-2 data and the type of the object contained in each segmented image area to obtain first tag image data; wherein the types of the objects are divided into mangrove and non-mangrove, the mangrove corresponding to a first type tag and the non-mangrove corresponding to a second type tag;
Removing the segmented image areas with the first type tag from the first tag image data to obtain second tag image data;
And performing binarization processing on the second tag image data to obtain the tag image data.
8. The image recognition system of claim 6, wherein the data acquisition module is further configured to:
Acquiring RGB remote sensing image data of a mangrove growth area in the global range;
Acquiring Sentinel-2 data of the mangrove growth area in the global range.
9. The image recognition system of claim 6, wherein the system further comprises:
The model training module is used for dividing the initial sample set into a training set, a verification set and a test set according to a preset proportional relation; setting model training parameters, wherein the model training parameters comprise a training batch size, a learning rate and a number of iterations; performing iterative training on a semantic segmentation model by using the training set, and verifying the accuracy of the semantic segmentation model after each round of iterative training by using the verification set, wherein the accuracy verification indexes of the training set and the verification set with respect to the semantic segmentation model are a first preset loss function and a first cross-correlation function; stopping the training when the accuracy verification indexes of the training set and the verification set with respect to the semantic segmentation model are stable and a model convergence condition is met, thereby obtaining the image recognition model; and testing the trained semantic segmentation model by using the test set, wherein the test indexes of the training set and the test set with respect to the semantic segmentation model are a second preset loss function and a second cross-correlation function.
10. The image recognition system of claim 6, wherein the data preprocessing module is further configured to: and carrying out cloud removal processing on the RGB remote sensing image data and the Sentinel-2 data before carrying out up-sampling processing on the Sentinel-2 data to obtain first image data.
CN202210623004.XA 2022-06-01 2022-06-01 Image recognition method and system Active CN114898097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210623004.XA CN114898097B (en) 2022-06-01 2022-06-01 Image recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210623004.XA CN114898097B (en) 2022-06-01 2022-06-01 Image recognition method and system

Publications (2)

Publication Number Publication Date
CN114898097A CN114898097A (en) 2022-08-12
CN114898097B (en) 2024-05-10

Family

ID=82726327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210623004.XA Active CN114898097B (en) 2022-06-01 2022-06-01 Image recognition method and system

Country Status (1)

Country Link
CN (1) CN114898097B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601688B (en) * 2022-12-15 2023-02-21 中译文娱科技(青岛)有限公司 Video main content detection method and system based on deep learning
CN116862836A (en) * 2023-05-30 2023-10-10 北京透彻未来科技有限公司 System and computer equipment for detecting extensive organ lymph node metastasis cancer
CN117036982B (en) * 2023-10-07 2024-01-09 山东省国土空间数据和遥感技术研究院(山东省海域动态监视监测中心) Method and device for processing optical satellite image of mariculture area, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985238B (en) * 2018-07-23 2021-10-22 武汉大学 Impervious surface extraction method and system combining deep learning and semantic probability

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564587A (en) * 2018-03-07 2018-09-21 浙江大学 A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks
CN109543630A (en) * 2018-11-28 2019-03-29 苏州中科天启遥感科技有限公司 Remote sensing image forest land extracting method and system, storage medium, electronic equipment based on deep learning
CN110852225A (en) * 2019-10-31 2020-02-28 中国地质大学(武汉) Remote sensing image mangrove forest extraction method and system based on deep convolutional neural network
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Xiang; Liu Kai; Zhu Yuanhui; Meng Lin; Yu Chenxi; Cao Jingjing. Research on mangrove species classification based on ZY-3 imagery. Remote Sensing Technology and Application. 2018, (No. 02), full text. *
Meng Liangli; Ling Ziyan; Jiang Weiguo; Zhong Shiquan; Chen Yanli; Sun Ming. Research on mangrove information extraction based on Sentinel remote sensing data: a case study of Maowei Sea, Guangxi. Geography and Geo-Information Science. 2020, (No. 04), full text. *

Also Published As

Publication number Publication date
CN114898097A (en) 2022-08-12

Similar Documents

Publication Publication Date Title
CN114898097B (en) Image recognition method and system
Ji et al. Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set
Huang et al. Individual tree crown detection and delineation from very-high-resolution UAV images based on bias field and marker-controlled watershed segmentation algorithms
CN110929607B (en) Remote sensing identification method and system for urban building construction progress
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
CA2840436C (en) System for mapping and identification of plants using digital image processing and route generation
CN110263717B (en) Method for determining land utilization category of street view image
CN108734143A (en) A kind of transmission line of electricity online test method based on binocular vision of crusing robot
Münzinger et al. Mapping the urban forest in detail: From LiDAR point clouds to 3D tree models
CN105809194B (en) A kind of method that SAR image is translated as optical image
CN110675408A (en) High-resolution image building extraction method and system based on deep learning
CN104778721A (en) Distance measuring method of significant target in binocular image
Liu et al. Building footprint extraction from unmanned aerial vehicle images via PRU-Net: Application to change detection
CN113378785A (en) Forest type identification method and device
Li et al. Pixel‐Level Recognition of Pavement Distresses Based on U‐Net
CN113298042B (en) Remote sensing image data processing method and device, storage medium and computer equipment
CN111079807A (en) Ground object classification method and device
Ibrahim et al. Smart monitoring of road pavement deformations from UAV images by using machine learning
CN116503677B (en) Wetland classification information extraction method, system, electronic equipment and storage medium
Tejeswari et al. Building footprint extraction from space-borne imagery using deep neural networks
CN114821484B (en) Airport runway FOD image detection method, system and storage medium
Faisal et al. Machine learning approach to extract building footprint from high-resolution images: the case study of Makkah, Saudi Arabia
CN113627292B (en) Remote sensing image recognition method and device based on fusion network
CN112036246B (en) Construction method of remote sensing image classification model, remote sensing image classification method and system
Mei et al. A cost effective solution for road crack inspection using cameras and deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant