WO2022073452A1 - Hyperspectral remote sensing image classification method based on a self-attention context network - Google Patents
Hyperspectral remote sensing image classification method based on a self-attention context network
- Publication number
- WO2022073452A1 (PCT/CN2021/121774; CN2021121774W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- self-attention
- feature
- features
- context
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 32
- 230000006870 function Effects 0.000 claims description 17
- 238000011176 pooling Methods 0.000 claims description 15
- 239000011159 matrix material Substances 0.000 claims description 9
- 238000004422 calculation algorithm Methods 0.000 claims description 4
- 230000004927 fusion Effects 0.000 claims description 3
- 238000013507 mapping Methods 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 230000000007 visual effect Effects 0.000 claims description 3
- 239000000284 extract Substances 0.000 description 4
- 230000003595 spectral effect Effects 0.000 description 4
- 238000011160 research Methods 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000007635 classification algorithm Methods 0.000 description 1
- 238000012938 design process Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/58—Extraction of image or video features relating to hyperspectral data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Definitions
- the invention belongs to the technical field of computer image processing, and relates to an image classification method, in particular to a hyperspectral remote sensing image classification method based on a self-attention context network.
- hyperspectral remote sensing can obtain continuous remote sensing observation data in spatial and spectral dimensions at the same time.
- hyperspectral remote sensing images have higher spectral resolution and more bands, which can reflect more abundant spectral characteristics of ground objects. Therefore, using hyperspectral images to classify and identify ground objects is one of the important ways to realize earth observation.
- hyperspectral image classification methods are based on deep convolutional neural networks, and have achieved good results in ground object recognition.
- existing deep neural networks are extremely vulnerable to adversarial samples, causing the model prediction results to deviate from the true labels of the samples.
- the existing deep neural network-based hyperspectral remote sensing image classification methods do not fully consider the security and reliability of the network during design, making these methods highly vulnerable to adversarial attacks. Therefore, there is an urgent need for a hyperspectral remote sensing image classification algorithm with higher security and reliability, which can better meet the needs of safe, reliable and high-precision ground-object recognition.
- the present invention provides a hyperspectral remote sensing image classification method based on a self-attention context network, which includes a backbone network, a self-attention module and a context encoding module.
- the backbone network extracts hierarchical features through three 3×3 dilated convolutional layers and a 2×2 average pooling layer. Then, the features extracted by the backbone network are used as the input of the self-attention module, and self-attention learning is performed to construct the spatial dependencies between pixels and obtain self-attention features. These features are then used as the input of the context encoding module, which learns global context features.
- this method further fuses the global context features with the first two layers of convolution features in the backbone network.
- the technical scheme of the present invention is as follows: first, an overall network is constructed, comprising a backbone network, a self-attention module and a context encoding module; the backbone network extracts hierarchical features through three dilated convolutional layers and an average pooling layer. Then, the features extracted by the backbone network are used as the input of the self-attention module, and self-attention learning is performed to construct the spatial dependencies between pixels and obtain self-attention features; the self-attention features are then used as the input of the context encoding module to learn the global context features.
- the specific implementation includes the following steps:
- Step 1 Initialize the parameters in the overall network to satisfy a Gaussian distribution with a mean of 0 and a variance of 0.1;
- Step 2 Record the original hyperspectral image as X ∈ ℝ^(h×w×c), where h, w, and c are the height, width and number of bands of the image, respectively, and input X to the backbone network;
- Step 3 Input the feature C3 of the third dilated convolutional layer into the self-attention module to learn the self-attention features S, where m is the number of convolution kernels in the first dilated convolutional layer;
- Step 4 Input the self-attention feature S learned by the self-attention module to the context encoding module to learn contextual features
- Step 5 Fuse the context feature Z with the first and second convolutional features in series (concatenation) to obtain the fused feature H, where U(·) represents the 2-fold bilinear interpolation upsampling operation, and C1 and C2 are the features of the first and second dilated convolutional layers, respectively;
- Step 6 Input the fused feature H into a convolutional layer and use the Softmax function to obtain the probability map predicted by the network; calculate the cross-entropy loss between the predicted probability map and the true label Y;
- Step 7 Use the gradient descent algorithm to optimize the loss function in step 6;
- Step 8 Repeat steps 2-7 above until the overall network converges
- Step 9 Input the target image to be identified into the trained overall network to complete the final hyperspectral remote sensing image classification task.
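The training procedure of steps 1-9 can be sketched as a minimal NumPy illustration. The one-layer linear "network" below is a toy stand-in for the full backbone/self-attention/context architecture, and the learning rate and toy data are assumptions for illustration, not part of the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(c, v):
    # Step 1: Gaussian initialization with mean 0 and variance 0.1
    return rng.normal(0.0, np.sqrt(0.1), size=(c, v))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(X, W):
    # Toy stand-in for the backbone + self-attention + context modules:
    # a per-pixel linear map followed by Softmax (step 6).
    return softmax(X @ W)

def cross_entropy(P, Y):
    # Step 6: cross-entropy between the predicted probability map and labels Y
    return float(-np.mean(np.sum(Y * np.log(P + 1e-12), axis=-1)))

h, w, c, v = 4, 4, 8, 3                    # toy image: h x w pixels, c bands, v classes
X = rng.random((h, w, c))                  # step 2: input image
labels = rng.integers(0, v, size=(h, w))
Y = np.eye(v)[labels]                      # one-hot true label map
W = init_params(c, v)                      # step 1
loss0 = cross_entropy(forward(X, W), Y)
for _ in range(200):                       # step 8: repeat until convergence
    P = forward(X, W)                      # steps 2-6: forward pass and prediction
    grad = X.reshape(-1, c).T @ (P - Y).reshape(-1, v) / (h * w)
    W -= 0.5 * grad                        # step 7: gradient descent
loss1 = cross_entropy(forward(X, W), Y)
print(loss1 < loss0)  # True: the loss decreases during training
```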
- step 3 includes the following sub-steps:
- Step 3.1 To reduce the computational burden in the self-attention feature learning process, an average pooling layer is used to halve the spatial size of the input third-layer convolutional feature C3, giving P2 = P_avg(C3), where P_avg(·) is the average pooling operation;
- Step 3.2 Input P2 into three convolutional layers, each with n convolution kernels, to obtain the corresponding feature maps;
- Step 3.3 Resize the three feature maps and use them to calculate the spatial attention map A, where A(i,j) represents the influence of pixel i on pixel j in the image;
- Step 3.5 Calculate the final self-attention enhanced features S = F(U(B)) + C3, B being the attention-weighted feature, where F(·) represents the nonlinear mapping function, implemented by a convolutional layer with m convolution kernels, and U(·) represents the 2-fold bilinear interpolation upsampling operation;
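Sub-steps 3.1-3.5 can be illustrated with a small NumPy sketch. The query/key/value projection form of the attention map, the indexing convention of A, the use of nearest-neighbour in place of bilinear upsampling, and all shapes and weights below are illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool2(x):
    # Step 3.1: 2x2 average pooling, halving the spatial size
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def conv1x1(x, W):
    # A 1x1 convolution is a per-pixel linear map
    return x @ W

def upsample2_nearest(x):
    # Stand-in for the 2-fold bilinear upsampling U(.) (nearest, for brevity)
    return x.repeat(2, axis=0).repeat(2, axis=1)

def self_attention(C3, Wa, Wb, Wg, Wf):
    P2 = avg_pool2(C3)                                   # step 3.1
    a, b, g = (conv1x1(P2, W) for W in (Wa, Wb, Wg))     # step 3.2: three projections
    hw = a.shape[0] * a.shape[1]
    a, b, g = (t.reshape(hw, -1) for t in (a, b, g))     # step 3.3: resize
    A = softmax(a @ b.T, axis=-1)                        # spatial attention map
    B = (A @ g).reshape(P2.shape[0], P2.shape[1], -1)    # attention-weighted feature
    # Step 3.5: S = F(U(B)) + C3, a residual connection to the input feature
    return conv1x1(upsample2_nearest(B), Wf) + C3

rng = np.random.default_rng(1)
h2, w2, m, n = 8, 8, 6, 4
C3 = rng.random((h2, w2, m))
Wa, Wb, Wg = (rng.random((m, n)) for _ in range(3))
Wf = rng.random((n, m))                                  # F(.): m output kernels
S = self_attention(C3, Wa, Wb, Wg, Wf)
print(S.shape)  # (8, 8, 6): same shape as C3
```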
- step 4 includes the following sub-steps:
- Step 4.2 Denote the reshaped self-attention features as Q. To exploit the global statistics in Q, learn an encoding dictionary D of visual centers, where d_j represents the j-th element in the dictionary and k is the number of elements in D; the standardized residual between Q and D is calculated from r_ij = q_i − d_j, where s_j represents the scaling factor corresponding to the j-th element in D;
- Step 4.3 Calculate the global context vector e, where BN(·) represents a batch normalization operation;
- Step 4.4 Use a fully connected layer to upscale the global context vector e, where W_fc and b_fc are the parameter matrix and bias vector corresponding to the fully connected layer;
- Step 4.5 Calculate the final contextual features Z.
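Sub-steps 4.2-4.5 resemble the standard encoding-layer (EncNet-style) formulation; the sketch below assumes that formulation for the parts the text elides (the softmax weighting of residuals and the sigmoid channel gating), so treat those details as assumptions rather than the patented method:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def batch_norm(x, eps=1e-5):
    # Stand-in for BN(.) in step 4.3: per-feature standardization
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

def context_encode(S, D, s, W_fc, b_fc):
    """EncNet-style context encoding for steps 4.2-4.5 (details assumed)."""
    hw = S.shape[0] * S.shape[1]
    Q = S.reshape(hw, -1)                        # q_i: per-pixel descriptors
    R = Q[:, None, :] - D[None, :, :]            # step 4.2: r_ij = q_i - d_j
    A = softmax(-s * (R ** 2).sum(-1), axis=-1)  # scaled (standardized) residual weights
    E = batch_norm((A[:, :, None] * R).sum(0))   # aggregate per dictionary element
    e = E.sum(0)                                 # step 4.3: global context vector
    gamma = 1 / (1 + np.exp(-(e @ W_fc + b_fc))) # step 4.4: fully connected layer + gate
    return S * gamma                             # step 4.5: channel-wise multiplication

rng = np.random.default_rng(2)
h, w, m, k = 8, 8, 6, 5
S = rng.random((h, w, m))                        # self-attention features
D = rng.random((k, m))                           # dictionary of k visual centers
s = rng.random(k)                                # scaling factors s_j
W_fc, b_fc = rng.random((m, m)), rng.random(m)
Z = context_encode(S, D, s, W_fc, b_fc)
print(Z.shape)  # (8, 8, 6)
```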
- the specific implementation of inputting X into the backbone network in step 2 is as follows: input X into the first dilated convolutional layer with m 3×3 convolution kernels and calculate its features C1 = g(W1*X + b1), where W1 and b1 are the parameter matrix and bias vector of the first dilated convolutional layer, and g(x) = max(0, x) is the rectified linear unit function; similarly, the features of the second and third dilated convolutional layers are C2 = g(W2*C1 + b2) and C3 = g(W3*P1 + b3), where P1 = P_avg(C2) is the feature of the first pooling layer and P_avg(·) is the 2×2 average pooling operation.
- in the cross-entropy loss of step 6, v is the total number of categories, the first argument is the predicted probability map, Y is the true label, and h and w are the height and width of the image, respectively.
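The backbone computation above (C1, C2, C3 with dilated convolutions, ReLU g(x) = max(0, x), and 2×2 average pooling) can be sketched as follows; the dilation rate of 2 and all shapes below are assumptions, since the patent only states that the convolutions are dilated:

```python
import numpy as np

def relu(x):
    # g(x) = max(0, x), the rectified linear unit
    return np.maximum(0.0, x)

def dilated_conv3x3(x, W, b, dilation=2):
    # 'Same'-padded 3x3 dilated convolution; W has shape (3, 3, c_in, c_out)
    h, w, _ = x.shape
    d = dilation
    xp = np.pad(x, ((d, d), (d, d), (0, 0)))
    out = np.zeros((h, w, W.shape[-1]))
    for i in range(3):
        for j in range(3):
            patch = xp[i * d:i * d + h, j * d:j * d + w, :]
            out += patch @ W[i, j]                 # per-tap 1x1 contribution
    return out + b

def avg_pool2(x):
    # P_avg(.): 2x2 average pooling, halving the spatial size
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

rng = np.random.default_rng(3)
h, w, c, m = 8, 8, 10, 6                           # toy cube with c bands, m kernels
X = rng.random((h, w, c))
W1, b1 = rng.normal(0, 0.1, (3, 3, c, m)), np.zeros(m)
W2, b2 = rng.normal(0, 0.1, (3, 3, m, m)), np.zeros(m)
W3, b3 = rng.normal(0, 0.1, (3, 3, m, m)), np.zeros(m)

C1 = relu(dilated_conv3x3(X, W1, b1))              # C1 = g(W1 * X + b1)
C2 = relu(dilated_conv3x3(C1, W2, b2))             # C2 = g(W2 * C1 + b2)
P1 = avg_pool2(C2)                                 # P1 = P_avg(C2)
C3 = relu(dilated_conv3x3(P1, W3, b3))             # C3 = g(W3 * P1 + b3)
print(C1.shape, C3.shape)  # (8, 8, 6) (4, 4, 6)
```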
- the invention proposes a hyperspectral remote sensing image classification method based on a self-attention context network, which can effectively improve the model's resistance to adversarial samples.
- the present invention constructs the spatial dependencies between pixels in the hyperspectral remote sensing image through self-attention learning and context encoding and extracts global context features; even on hyperspectral remote sensing data polluted by adversarial attacks, it can still maintain superior ground-object recognition accuracy.
- FIG. 1 is a schematic diagram of a hyperspectral remote sensing image classification method based on a self-attention context network proposed by the present invention.
- the present invention provides a method for classifying hyperspectral remote sensing images based on a self-attention context network.
- the method includes an overall network composed of a backbone network, a self-attention module and a context encoding module.
- the backbone network extracts hierarchical features through three 3×3 dilated convolutional layers and one 2×2 average pooling layer.
- the features extracted by the backbone network are used as the input of the self-attention module, and self-attention learning is performed to construct the spatial dependencies between pixels to obtain self-attention features.
- This feature is then used as input to the context encoding module, which learns global context features.
- this method further fuses the global context features with the first two layers of convolution features in the backbone network.
- Step 1 Initialize the parameters in the overall network to satisfy a Gaussian distribution with a mean of 0 and a variance of 0.1;
- Step 2 Record the original hyperspectral image as X ∈ ℝ^(h×w×c), where h, w, and c are the height, width and number of bands of the image, respectively.
- Step 3 Input the third-layer convolutional feature C3 into the self-attention module to learn the self-attention features S;
- Step 4 Input the self-attention feature S learned by the self-attention module to the context encoding module to learn contextual features
- Step 5 Fuse the context feature Z with the first- and second-layer convolutional features in series to obtain the fused feature H, where U(·) represents the 2x bilinear interpolation upsampling operation;
- Step 6 Input the fused feature H into a 1×1 convolutional layer and use the Softmax function to obtain the probability map predicted by the network; calculate the cross-entropy loss between the predicted probability map and the ground-truth label Y;
- Step 7 Use the gradient descent algorithm to optimize the loss function in step 6;
- Step 8 Repeat steps 2-7 above until the overall network converges
- Step 9 Input the target image to be identified into the trained overall network to complete the final hyperspectral remote sensing image classification task.
- the specific learning process of the self-attention feature S described in step 3 includes the following sub-steps:
- Step 3.1 To reduce the computational burden during self-attention feature learning, a 2×2 average pooling layer is used to halve the spatial size of the input third-layer convolutional feature C3, giving P2;
- Step 3.2 Input P2 into three convolutional layers, each with n 1×1 convolution kernels, to obtain the corresponding feature maps;
- Step 3.3 Resize the three feature maps and use them to calculate the spatial attention map A, where A(i,j) represents the influence of pixel i on pixel j in the image;
- Step 3.5 Calculate the final self-attention enhanced features S = F(U(B)) + C3, where F(·) represents the nonlinear mapping function, implemented by a 1×1 convolutional layer with m convolution kernels, and U(·) represents the 2-fold bilinear interpolation upsampling operation.
- the specific learning process of the context feature Z described in step 4 includes the following sub-steps:
- Step 4.2 Denote the reshaped self-attention features as Q. To use the global statistics in Q, learn an encoding dictionary D of visual centers, where d_j represents the j-th element in the dictionary and k is the number of elements in D; calculate the standardized residual between Q and D from r_ij = q_i − d_j, where s_j represents the scaling factor corresponding to the j-th element of D and s_l the scaling factor corresponding to the l-th element of D;
- Step 4.3 Calculate the global context vector e, where BN(·) represents a batch normalization operation;
- Step 4.4 Use a fully connected layer to upscale the global context vector e, where W_fc and b_fc are the parameter matrix and bias vector corresponding to the fully connected layer;
- Step 4.5 Calculate the final contextual features Z.
- the above are the implementation steps of a method for classifying hyperspectral remote sensing images based on a self-attention context network related to the present invention.
- the hyperspectral image data should be normalized so that all pixel values lie in the range [0, 1]; this serves as the preprocessing step of the present invention.
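This preprocessing can be sketched as a min-max normalization; whether the scaling is global or per band is not specified by the patent, so the per-band scaling below is an assumption:

```python
import numpy as np

def normalize01(X, eps=1e-12):
    # Scale every band of the h x w x c hyperspectral cube to [0, 1]
    # (per-band min-max; global scaling would be an equally valid reading).
    mn = X.min(axis=(0, 1), keepdims=True)
    mx = X.max(axis=(0, 1), keepdims=True)
    return (X - mn) / (mx - mn + eps)

rng = np.random.default_rng(4)
X = rng.normal(100.0, 25.0, size=(5, 5, 3))   # toy cube with arbitrary radiance values
Xn = normalize01(X)
print(Xn.min() >= 0.0 and Xn.max() <= 1.0)  # True
```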
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Remote Sensing (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Astronomy & Astrophysics (AREA)
- Biodiversity & Conservation Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (5)
- A hyperspectral remote sensing image classification method based on a self-attention context network, characterized in that: first, an overall network is constructed, comprising a backbone network, a self-attention module and a context encoding module; the backbone network extracts hierarchical features through three dilated convolutional layers and an average pooling layer; then, the features extracted by the backbone network are used as the input of the self-attention module, and self-attention learning is performed to construct the spatial dependencies between pixels and obtain self-attention features; the self-attention features are then used as the input of the context encoding module to learn global context features; the specific implementation includes the following steps: Step 1: initialize the parameters of the overall network so that they satisfy a Gaussian distribution with mean 0 and variance 0.1; Step 5: fuse the context feature Z with the first and second convolutional features by concatenation to obtain the fused feature, where U(·) represents the 2-fold bilinear interpolation upsampling operation, and C1 and C2 are the features of the first and second dilated convolutional layers, respectively; Step 7: use the gradient descent algorithm to optimize the loss function of step 6; Step 8: repeat steps 2-7 until the overall network converges; Step 9: input the target image to be identified into the trained overall network to complete the final hyperspectral remote sensing image classification task.
- The hyperspectral remote sensing image classification method based on a self-attention context network according to claim 1, characterized in that the specific learning process of the self-attention features S described in step 3 includes the following sub-steps: where A(i,j) represents the influence of pixel i on pixel j in the image, k = 1, 2, ..., hw/16; S = F(U(B)) + C3, where F(·) represents the nonlinear mapping function, implemented by a convolutional layer with m convolution kernels, and U(·) represents the 2-fold bilinear interpolation upsampling operation.
- The hyperspectral remote sensing image classification method based on a self-attention context network according to claim 1, characterized in that the specific learning process of the context features Z described in step 4 includes the following sub-steps: where r_ij = q_i − d_j represents the residual between the i-th element of Q and the j-th element of D, s_j represents the scaling factor corresponding to the j-th element of D, and s_l represents the scaling factor corresponding to the l-th element of D; where ⊙ represents element-wise multiplication along the channel dimension.
- The hyperspectral remote sensing image classification method based on a self-attention context network according to claim 1, characterized in that the specific implementation of inputting X into the backbone network in step 2 is as follows: input X into the first dilated convolutional layer with m 3×3 convolution kernels, and calculate the features of the first dilated convolutional layer C1 = g(W1*X + b1), where W1 and b1 are the parameter matrix and bias vector of the first dilated convolutional layer, and g(x) = max(0, x) is the rectified linear unit function; similarly, the features of the second and third dilated convolutional layers are expressed as C2 = g(W2*C1 + b2) and C3 = g(W3*P1 + b3), where P1 = P_avg(C2) is the feature of the first pooling layer, and P_avg(·) is the 2×2 average pooling operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/192,657 US11783579B2 (en) | 2020-10-07 | 2023-03-30 | Hyperspectral remote sensing image classification method based on self-attention context network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011067782.2 | 2020-10-07 | ||
CN202011067782.2A CN112287978B (zh) | 2020-10-07 | 2020-10-07 | 一种基于自注意力上下文网络的高光谱遥感图像分类方法 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/192,657 Continuation-In-Part US11783579B2 (en) | 2020-10-07 | 2023-03-30 | Hyperspectral remote sensing image classification method based on self-attention context network |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022073452A1 true WO2022073452A1 (zh) | 2022-04-14 |
Family
ID=74422893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/121774 WO2022073452A1 (zh) | 2020-10-07 | 2021-09-29 | 一种基于自注意力上下文网络的高光谱遥感图像分类方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11783579B2 (zh) |
CN (1) | CN112287978B (zh) |
WO (1) | WO2022073452A1 (zh) |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114612688A (zh) * | 2022-05-16 | 2022-06-10 | 中国科学技术大学 | 对抗样本生成方法、模型训练方法、处理方法及电子设备 |
CN114758032A (zh) * | 2022-06-15 | 2022-07-15 | 之江实验室 | 基于时空注意力模型的多相期ct图像分类***及构建方法 |
CN114778485A (zh) * | 2022-06-16 | 2022-07-22 | 中化现代农业有限公司 | 基于近红外光谱和注意力机制网络的品种鉴定方法及*** |
CN114821342A (zh) * | 2022-06-02 | 2022-07-29 | 中国科学院地理科学与资源研究所 | 一种遥感影像道路提取方法及*** |
CN114842105A (zh) * | 2022-06-02 | 2022-08-02 | 北京大学 | 一种一体化的条件图像重绘方法及装置 |
CN114863179A (zh) * | 2022-05-18 | 2022-08-05 | 合肥工业大学 | 基于多尺度特征嵌入和交叉注意力的内窥镜图像分类方法 |
CN114863173A (zh) * | 2022-05-06 | 2022-08-05 | 南京审计大学 | 一种面向土地资源审计的自互注意力高光谱图像分类方法 |
CN114897718A (zh) * | 2022-04-29 | 2022-08-12 | 重庆理工大学 | 一种能同时平衡上下文信息和空间细节的低光图像增强方法 |
CN115019174A (zh) * | 2022-06-10 | 2022-09-06 | 西安电子科技大学 | 基于像素重组和注意力的上采样遥感图像目标识别方法 |
CN115063685A (zh) * | 2022-07-11 | 2022-09-16 | 河海大学 | 一种基于注意力网络的遥感影像建筑物特征提取方法 |
CN115131252A (zh) * | 2022-09-01 | 2022-09-30 | 杭州电子科技大学 | 基于二次编解码结构的金属物体表面高光去除方法 |
CN115249332A (zh) * | 2022-09-23 | 2022-10-28 | 山东锋士信息技术有限公司 | 基于空谱双分支卷积网络的高光谱图像分类方法及设备 |
CN115345866A (zh) * | 2022-08-25 | 2022-11-15 | 中国科学院地理科学与资源研究所 | 一种遥感影像中建筑物提取方法、电子设备及存储介质 |
CN115359306A (zh) * | 2022-10-24 | 2022-11-18 | 中铁科学技术开发有限公司 | 一种铁路货检高清图像智能识别方法和*** |
CN115358270A (zh) * | 2022-08-19 | 2022-11-18 | 山东省人工智能研究院 | 一种基于多任务mtef-net的心电分类方法 |
CN115471677A (zh) * | 2022-09-15 | 2022-12-13 | 贵州大学 | 一种基于双通道稀疏化网络的高光谱图像分类方法 |
CN115482463A (zh) * | 2022-09-01 | 2022-12-16 | 北京低碳清洁能源研究院 | 一种生成对抗网络矿区土地覆盖识别方法及*** |
CN115546569A (zh) * | 2022-12-05 | 2022-12-30 | 鹏城实验室 | 一种基于注意力机制的数据分类优化方法及相关设备 |
CN115661655A (zh) * | 2022-11-03 | 2023-01-31 | 重庆市地理信息和遥感应用中心 | 高光谱和高分影像深度特征融合的西南山区耕地提取方法 |
CN115761250A (zh) * | 2022-11-21 | 2023-03-07 | 北京科技大学 | 一种化合物逆合成方法及装置 |
CN115880346A (zh) * | 2023-02-10 | 2023-03-31 | 耕宇牧星(北京)空间科技有限公司 | 一种基于深度学习的可见光遥感图像精确配准方法 |
CN115965953A (zh) * | 2023-01-04 | 2023-04-14 | 哈尔滨工业大学 | 基于高光谱成像与深度学习的粮种品种分类方法 |
CN116052007A (zh) * | 2023-03-30 | 2023-05-02 | 山东锋士信息技术有限公司 | 一种融合时间和空间信息的遥感图像变化检测方法 |
CN116091640A (zh) * | 2023-04-07 | 2023-05-09 | 中国科学院国家空间科学中心 | 一种基于光谱自注意力机制的遥感高光谱重建方法及*** |
CN116129143A (zh) * | 2023-02-08 | 2023-05-16 | 山东省人工智能研究院 | 一种基于串并联网络特征融合的边缘阔提取方法 |
CN116152660A (zh) * | 2023-02-14 | 2023-05-23 | 北京市遥感信息研究所 | 一种基于跨尺度注意力机制的广域遥感图像变化检测方法 |
CN116206099A (zh) * | 2023-05-06 | 2023-06-02 | 四川轻化工大学 | 一种基于sar图像的船舶位置检测方法及存储介质 |
CN116310572A (zh) * | 2023-03-23 | 2023-06-23 | 齐齐哈尔大学 | 金字塔多尺度卷积和自注意力结合的高光谱图像分类方法 |
CN116343052A (zh) * | 2023-05-30 | 2023-06-27 | 华东交通大学 | 一种基于注意力和多尺度的双时相遥感图像变化检测网络 |
CN116400426A (zh) * | 2023-06-06 | 2023-07-07 | 山东省煤田地质局第三勘探队 | 基于电磁法的数据勘测*** |
CN116452820A (zh) * | 2023-06-19 | 2023-07-18 | 中国科学院空天信息创新研究院 | 环境污染等级确定方法及装置 |
CN116503677A (zh) * | 2023-06-28 | 2023-07-28 | 武汉大学 | 一种湿地分类信息提取方法、***、电子设备及存储介质 |
CN116503628A (zh) * | 2023-06-29 | 2023-07-28 | 华侨大学 | 自动化农业机械的图像匹配算法、装置、设备及存储介质 |
CN116563313A (zh) * | 2023-07-11 | 2023-08-08 | 安徽大学 | 基于门控融合注意力的遥感影像大豆种植区域分割方法 |
CN116612333A (zh) * | 2023-07-17 | 2023-08-18 | 山东大学 | 一种基于快速全卷积网络的医学高光谱影像分类方法 |
CN116612334A (zh) * | 2023-07-18 | 2023-08-18 | 山东科技大学 | 一种基于空谱联合注意力机制的医学高光谱图像分类方法 |
CN116630700A (zh) * | 2023-05-22 | 2023-08-22 | 齐鲁工业大学(山东省科学院) | 基于引入通道-空间注意力机制的遥感图像分类方法 |
CN116777892A (zh) * | 2023-07-03 | 2023-09-19 | 东莞市震坤行胶粘剂有限公司 | 基于视觉检测的点胶质量检测方法及其*** |
CN116863342A (zh) * | 2023-09-04 | 2023-10-10 | 江西啄木蜂科技有限公司 | 一种基于大尺度遥感影像的松材线虫病死木提取方法 |
CN117372789A (zh) * | 2023-12-07 | 2024-01-09 | 北京观微科技有限公司 | 图像分类方法及图像分类装置 |
CN117409264A (zh) * | 2023-12-16 | 2024-01-16 | 武汉理工大学 | 基于transformer的多传感器数据融合机器人地形感知方法 |
CN117893816A (zh) * | 2024-01-18 | 2024-04-16 | 安徽大学 | 一种分层次残差光谱空间卷积网络的高光谱图像分类方法 |
CN118155082A (zh) * | 2024-05-13 | 2024-06-07 | 山东锋士信息技术有限公司 | 一种基于高光谱空间及光谱信息的双分支变化检测方法 |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287978B (zh) * | 2020-10-07 | 2022-04-15 | 武汉大学 | 一种基于自注意力上下文网络的高光谱遥感图像分类方法 |
CN112990230B (zh) * | 2021-03-12 | 2023-05-09 | 西安电子科技大学 | 基于二阶段分组注意力残差机制的光谱图像压缩重建方法 |
CN113313127B (zh) * | 2021-05-18 | 2023-02-14 | 华南理工大学 | 文本图像识别方法、装置、计算机设备和存储介质 |
CN113569724B (zh) * | 2021-07-27 | 2022-04-19 | 中国科学院地理科学与资源研究所 | 基于注意力机制和扩张卷积的道路提取方法及*** |
CN113705641B (zh) * | 2021-08-16 | 2023-11-10 | 武汉大学 | 基于富上下文网络的高光谱图像分类方法 |
CN113705526B (zh) * | 2021-09-07 | 2022-03-04 | 安徽大学 | 一种高光谱遥感影像分类方法 |
CN114220012B (zh) * | 2021-12-16 | 2024-05-31 | 池明旻 | 一种基于深度自注意力网络的纺织品棉麻鉴别方法 |
CN114742170B (zh) * | 2022-04-22 | 2023-07-25 | 马上消费金融股份有限公司 | 对抗样本生成方法、模型训练方法、图像识别方法及装置 |
CN115690917B (zh) * | 2023-01-04 | 2023-04-18 | 南京云创大数据科技股份有限公司 | 一种基于外观和运动智能关注的行人动作识别方法 |
CN116597204A (zh) * | 2023-05-12 | 2023-08-15 | 内蒙古农业大学 | 基于Transformer网络的草地多时相高光谱分类方法 |
CN116824525B (zh) * | 2023-08-29 | 2023-11-14 | 中国石油大学(华东) | 一种基于交通道路影像的图像信息提取方法 |
CN116895030B (zh) * | 2023-09-11 | 2023-11-17 | 西华大学 | 基于目标检测算法和注意力机制的绝缘子检测方法 |
CN117152616A (zh) * | 2023-09-12 | 2023-12-01 | 电子科技大学 | 一种基于光谱增强和双路编码的遥感图像典型地物提取方法 |
CN117218537B (zh) * | 2023-09-13 | 2024-02-13 | 安徽大学 | 基于Transformer和非局部神经网络双分支架构的高光谱图像分类方法 |
CN117216480B (zh) * | 2023-09-18 | 2024-06-28 | 宁波大学 | 一种深度耦合地理时空信息的近地表臭氧遥感估算方法 |
CN117095360B (zh) * | 2023-10-18 | 2023-12-15 | 四川傲空航天科技有限公司 | 基于sar卫星遥感技术的粮食作物监测方法及*** |
CN117671437B (zh) * | 2023-10-19 | 2024-06-18 | 中国矿业大学(北京) | 基于多任务卷积神经网络的露天采场识别与变化检测方法 |
CN117152546B (zh) * | 2023-10-31 | 2024-01-26 | 江西师范大学 | 一种遥感场景分类方法、***、存储介质及电子设备 |
CN117197002B (zh) * | 2023-11-07 | 2024-02-02 | 松立控股集团股份有限公司 | 一种基于感知扩散的图像复原方法 |
CN117422932B (zh) * | 2023-11-17 | 2024-05-28 | 中国矿业大学 | 一种基于多模态强化图注意力网络的高光谱图像分类方法 |
CN117876817B (zh) * | 2023-12-25 | 2024-06-21 | 北京化工大学 | 一种对抗样本生成方法 |
CN117496281B (zh) * | 2024-01-03 | 2024-03-19 | 环天智慧科技股份有限公司 | 一种农作物遥感图像分类方法 |
CN117612018B (zh) * | 2024-01-23 | 2024-04-05 | 中国科学院长春光学精密机械与物理研究所 | 用于光学遥感载荷像散的智能判别方法 |
CN117765402B (zh) * | 2024-02-21 | 2024-05-17 | 山东科技大学 | 一种基于注意力机制的高光谱图像匹配检测方法 |
CN117934978B (zh) * | 2024-03-22 | 2024-06-11 | 安徽大学 | 一种基于对抗学习的高光谱和激光雷达多层融合分类方法 |
CN118090705A (zh) * | 2024-04-19 | 2024-05-28 | 中国科学院合肥物质科学研究院 | 一种基于增强拉曼光谱的农药残留定量检测方法 |
CN118096541B (zh) * | 2024-04-28 | 2024-06-25 | 山东省淡水渔业研究院(山东省淡水渔业监测中心) | 一种渔业遥感测试图像数据处理方法 |
CN118172613B (zh) * | 2024-05-11 | 2024-07-05 | 济南霆盈智能装备科技有限公司 | 一种用于微生物检测方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376804A (zh) * | 2018-12-19 | 2019-02-22 | 中国地质大学(武汉) | 基于注意力机制和卷积神经网络高光谱遥感图像分类方法 |
WO2020097461A1 (en) * | 2018-11-08 | 2020-05-14 | Siemens Aktiengesellschaft | Convolutional neural networks with reduced attention overlap |
CN111274869A (zh) * | 2020-01-07 | 2020-06-12 | 中国地质大学(武汉) | 基于并行注意力机制残差网进行高光谱图像分类的方法 |
CN111582225A (zh) * | 2020-05-19 | 2020-08-25 | 长沙理工大学 | 一种遥感图像场景分类方法及装置 |
CN111738124A (zh) * | 2020-06-15 | 2020-10-02 | 西安电子科技大学 | 基于Gabor变换和注意力的遥感图像云检测方法 |
CN112287978A (zh) * | 2020-10-07 | 2021-01-29 | 武汉大学 | 一种基于自注意力上下文网络的高光谱遥感图像分类方法 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106485238B (zh) * | 2016-11-01 | 2019-10-15 | 深圳大学 | 一种高光谱遥感图像特征提取和分类方法及其*** |
WO2018081929A1 (zh) * | 2016-11-01 | 2018-05-11 | 深圳大学 | 一种高光谱遥感图像特征提取和分类方法及其*** |
US11544259B2 (en) * | 2018-11-29 | 2023-01-03 | Koninklijke Philips N.V. | CRF-based span prediction for fine machine learning comprehension |
CN110532353B (zh) * | 2019-08-27 | 2021-10-15 | 海南阿凡题科技有限公司 | 基于深度学习的文本实体匹配方法、***、装置 |
CN110688491B (zh) * | 2019-09-25 | 2022-05-10 | 暨南大学 | 基于深度学习的机器阅读理解方法、***、设备及介质 |
WO2021098585A1 (en) * | 2019-11-22 | 2021-05-27 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image search based on combined local and global information |
CN110969010A (zh) * | 2019-12-06 | 2020-04-07 | 浙江大学 | 一种基于关系指导及双通道交互机制的问题生成方法 |
CN111368896B (zh) * | 2020-02-28 | 2023-07-18 | 南京信息工程大学 | 基于密集残差三维卷积神经网络的高光谱遥感图像分类方法 |
CA3167079A1 (en) * | 2020-03-27 | 2021-09-30 | Mehrsan Javan Roshtkhari | System and method for group activity recognition in images and videos with self-attention mechanisms |
CN111583220B (zh) * | 2020-04-30 | 2023-04-18 | 腾讯科技(深圳)有限公司 | 影像数据检测方法和装置 |
US20210390410A1 (en) * | 2020-06-12 | 2021-12-16 | Google Llc | Local self-attention computer vision neural networks |
US20220108478A1 (en) * | 2020-10-02 | 2022-04-07 | Google Llc | Processing images using self-attention based neural networks |
US20220301311A1 (en) * | 2021-03-17 | 2022-09-22 | Qualcomm Incorporated | Efficient self-attention for video processing |
US20220301310A1 (en) * | 2021-03-17 | 2022-09-22 | Qualcomm Incorporated | Efficient video processing via dynamic knowledge propagation |
-
2020
- 2020-10-07 CN CN202011067782.2A patent/CN112287978B/zh active Active
-
2021
- 2021-09-29 WO PCT/CN2021/121774 patent/WO2022073452A1/zh active Application Filing
-
2023
- 2023-03-30 US US18/192,657 patent/US11783579B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020097461A1 (en) * | 2018-11-08 | 2020-05-14 | Siemens Aktiengesellschaft | Convolutional neural networks with reduced attention overlap |
CN109376804A (zh) * | 2018-12-19 | 2019-02-22 | 中国地质大学(武汉) | Hyperspectral remote sensing image classification method based on attention mechanism and convolutional neural network |
CN111274869A (zh) * | 2020-01-07 | 2020-06-12 | 中国地质大学(武汉) | Method for hyperspectral image classification using a parallel-attention residual network |
CN111582225A (zh) * | 2020-05-19 | 2020-08-25 | 长沙理工大学 | Remote sensing image scene classification method and apparatus |
CN111738124A (zh) * | 2020-06-15 | 2020-10-02 | 西安电子科技大学 | Remote sensing image cloud detection method based on Gabor transform and attention |
CN112287978A (zh) * | 2020-10-07 | 2021-01-29 | 武汉大学 | Hyperspectral remote sensing image classification method based on a self-attention context network |
Cited By (73)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114897718B (zh) * | 2022-04-29 | 2023-09-19 | 重庆理工大学 | Low-light image enhancement method that simultaneously balances contextual information and spatial detail |
CN114897718A (zh) * | 2022-04-29 | 2022-08-12 | 重庆理工大学 | Low-light image enhancement method that simultaneously balances contextual information and spatial detail |
CN114863173A (zh) * | 2022-05-06 | 2022-08-05 | 南京审计大学 | Self- and mutual-attention hyperspectral image classification method for land resource auditing |
CN114612688A (zh) * | 2022-05-16 | 2022-06-10 | 中国科学技术大学 | Adversarial example generation method, model training method, processing method, and electronic device |
CN114612688B (zh) * | 2022-05-16 | 2022-09-09 | 中国科学技术大学 | Adversarial example generation method, model training method, processing method, and electronic device |
CN114863179B (zh) * | 2022-05-18 | 2022-12-13 | 合肥工业大学 | Endoscopic image classification method based on multi-scale feature embedding and cross-attention |
CN114863179A (zh) * | 2022-05-18 | 2022-08-05 | 合肥工业大学 | Endoscopic image classification method based on multi-scale feature embedding and cross-attention |
CN114821342A (zh) * | 2022-06-02 | 2022-07-29 | 中国科学院地理科学与资源研究所 | Remote sensing image road extraction method and *** |
CN114842105A (zh) * | 2022-06-02 | 2022-08-02 | 北京大学 | Integrated conditional image repainting method and apparatus |
CN114821342B (zh) * | 2022-06-02 | 2023-04-18 | 中国科学院地理科学与资源研究所 | Remote sensing image road extraction method and *** |
CN115019174A (zh) * | 2022-06-10 | 2022-09-06 | 西安电子科技大学 | Target recognition method for upsampled remote sensing images based on pixel shuffle and attention |
CN114758032B (zh) * | 2022-06-15 | 2022-09-16 | 之江实验室 | Multi-phase CT image classification *** based on a spatiotemporal attention model and its construction method |
CN114758032A (zh) * | 2022-06-15 | 2022-07-15 | 之江实验室 | Multi-phase CT image classification *** based on a spatiotemporal attention model and its construction method |
CN114778485A (zh) * | 2022-06-16 | 2022-07-22 | 中化现代农业有限公司 | Variety identification method and *** based on near-infrared spectroscopy and an attention mechanism network |
CN114778485B (zh) * | 2022-06-16 | 2022-09-06 | 中化现代农业有限公司 | Variety identification method and *** based on near-infrared spectroscopy and an attention mechanism network |
CN115063685B (zh) * | 2022-07-11 | 2023-10-03 | 河海大学 | Remote sensing image building feature extraction method based on an attention network |
CN115063685A (zh) * | 2022-07-11 | 2022-09-16 | 河海大学 | Remote sensing image building feature extraction method based on an attention network |
CN115358270A (zh) * | 2022-08-19 | 2022-11-18 | 山东省人工智能研究院 | ECG classification method based on multi-task MTEF-Net |
CN115358270B (zh) * | 2022-08-19 | 2023-06-20 | 山东省人工智能研究院 | ECG classification method based on multi-task MTEF-Net |
CN115345866A (zh) * | 2022-08-25 | 2022-11-15 | 中国科学院地理科学与资源研究所 | Building extraction method for remote sensing imagery, electronic device, and storage medium |
CN115345866B (zh) * | 2022-08-25 | 2023-05-23 | 中国科学院地理科学与资源研究所 | Building extraction method for remote sensing imagery, electronic device, and storage medium |
CN115482463A (zh) * | 2022-09-01 | 2022-12-16 | 北京低碳清洁能源研究院 | Generative adversarial network method and *** for land cover recognition in mining areas |
CN115131252A (zh) * | 2022-09-01 | 2022-09-30 | 杭州电子科技大学 | Method for removing specular highlights from metal object surfaces based on a secondary encoder-decoder structure |
CN115471677A (zh) * | 2022-09-15 | 2022-12-13 | 贵州大学 | Hyperspectral image classification method based on a dual-channel sparsification network |
CN115471677B (zh) * | 2022-09-15 | 2023-09-29 | 贵州大学 | Hyperspectral image classification method based on a dual-channel sparsification network |
CN115249332A (zh) * | 2022-09-23 | 2022-10-28 | 山东锋士信息技术有限公司 | Hyperspectral image classification method and device based on a spatial-spectral dual-branch convolutional network |
CN115359306A (zh) * | 2022-10-24 | 2022-11-18 | 中铁科学技术开发有限公司 | Intelligent recognition method and *** for high-definition railway freight inspection images |
CN115661655A (zh) * | 2022-11-03 | 2023-01-31 | 重庆市地理信息和遥感应用中心 | Cultivated land extraction method for southwestern mountainous regions fusing deep features of hyperspectral and high-resolution imagery |
CN115661655B (zh) * | 2022-11-03 | 2024-03-22 | 重庆市地理信息和遥感应用中心 | Cultivated land extraction method for southwestern mountainous regions fusing deep features of hyperspectral and high-resolution imagery |
CN115761250B (zh) * | 2022-11-21 | 2023-10-10 | 北京科技大学 | Compound retrosynthesis method and apparatus |
CN115761250A (zh) * | 2022-11-21 | 2023-03-07 | 北京科技大学 | Compound retrosynthesis method and apparatus |
CN115546569B (zh) * | 2022-12-05 | 2023-04-07 | 鹏城实验室 | Attention-mechanism-based data classification optimization method and related device |
CN115546569A (zh) * | 2022-12-05 | 2022-12-30 | 鹏城实验室 | Attention-mechanism-based data classification optimization method and related device |
CN115965953A (zh) * | 2023-01-04 | 2023-04-14 | 哈尔滨工业大学 | Grain seed variety classification method based on hyperspectral imaging and deep learning |
CN115965953B (zh) * | 2023-01-04 | 2023-08-22 | 哈尔滨工业大学 | Grain seed variety classification method based on hyperspectral imaging and deep learning |
CN116129143A (zh) * | 2023-02-08 | 2023-05-16 | 山东省人工智能研究院 | Edge contour extraction method based on series-parallel network feature fusion |
CN116129143B (zh) * | 2023-02-08 | 2023-09-08 | 山东省人工智能研究院 | Edge contour extraction method based on series-parallel network feature fusion |
CN115880346A (zh) * | 2023-02-10 | 2023-03-31 | 耕宇牧星(北京)空间科技有限公司 | Accurate registration method for visible-light remote sensing images based on deep learning |
CN116152660B (zh) * | 2023-02-14 | 2023-10-20 | 北京市遥感信息研究所 | Wide-area remote sensing image change detection method based on a cross-scale attention mechanism |
CN116152660A (zh) * | 2023-02-14 | 2023-05-23 | 北京市遥感信息研究所 | Wide-area remote sensing image change detection method based on a cross-scale attention mechanism |
CN116310572A (zh) * | 2023-03-23 | 2023-06-23 | 齐齐哈尔大学 | Hyperspectral image classification method combining pyramid multi-scale convolution and self-attention |
CN116310572B (zh) * | 2023-03-23 | 2024-01-23 | 齐齐哈尔大学 | Hyperspectral image classification method combining pyramid multi-scale convolution and self-attention |
CN116052007B (zh) * | 2023-03-30 | 2023-08-11 | 山东锋士信息技术有限公司 | Remote sensing image change detection method fusing temporal and spatial information |
CN116052007A (zh) * | 2023-03-30 | 2023-05-02 | 山东锋士信息技术有限公司 | Remote sensing image change detection method fusing temporal and spatial information |
CN116091640A (zh) * | 2023-04-07 | 2023-05-09 | 中国科学院国家空间科学中心 | Remote sensing hyperspectral reconstruction method and *** based on a spectral self-attention mechanism |
CN116206099A (zh) * | 2023-05-06 | 2023-06-02 | 四川轻化工大学 | Ship position detection method based on SAR images and storage medium |
CN116206099B (zh) * | 2023-05-06 | 2023-08-15 | 四川轻化工大学 | Ship position detection method based on SAR images and storage medium |
CN116630700A (zh) * | 2023-05-22 | 2023-08-22 | 齐鲁工业大学(山东省科学院) | Remote sensing image classification method incorporating a channel-spatial attention mechanism |
CN116343052A (zh) * | 2023-05-30 | 2023-06-27 | 华东交通大学 | Attention-based multi-scale bi-temporal remote sensing image change detection network |
CN116343052B (zh) * | 2023-05-30 | 2023-08-01 | 华东交通大学 | Attention-based multi-scale bi-temporal remote sensing image change detection network |
CN116400426A (zh) * | 2023-06-06 | 2023-07-07 | 山东省煤田地质局第三勘探队 | Electromagnetic-method-based data survey *** |
CN116400426B (zh) * | 2023-06-06 | 2023-08-29 | 山东省煤田地质局第三勘探队 | Electromagnetic-method-based data survey *** |
CN116452820B (zh) * | 2023-06-19 | 2023-09-05 | 中国科学院空天信息创新研究院 | Environmental pollution level determination method and apparatus |
CN116452820A (zh) * | 2023-06-19 | 2023-07-18 | 中国科学院空天信息创新研究院 | Environmental pollution level determination method and apparatus |
CN116503677B (zh) * | 2023-06-28 | 2023-09-05 | 武汉大学 | Wetland classification information extraction method, ***, electronic device, and storage medium |
CN116503677A (zh) * | 2023-06-28 | 2023-07-28 | 武汉大学 | Wetland classification information extraction method, ***, electronic device, and storage medium |
CN116503628A (zh) * | 2023-06-29 | 2023-07-28 | 华侨大学 | Image matching algorithm, apparatus, device, and storage medium for automated agricultural machinery |
CN116777892B (zh) * | 2023-07-03 | 2024-01-26 | 东莞市震坤行胶粘剂有限公司 | Adhesive dispensing quality inspection method based on visual inspection and *** therefor |
CN116777892A (zh) * | 2023-07-03 | 2023-09-19 | 东莞市震坤行胶粘剂有限公司 | Adhesive dispensing quality inspection method based on visual inspection and *** therefor |
CN116563313B (zh) * | 2023-07-11 | 2023-09-19 | 安徽大学 | Soybean planting area segmentation method for remote sensing imagery based on gated fusion attention |
CN116563313A (zh) * | 2023-07-11 | 2023-08-08 | 安徽大学 | Soybean planting area segmentation method for remote sensing imagery based on gated fusion attention |
CN116612333B (zh) * | 2023-07-17 | 2023-09-29 | 山东大学 | Medical hyperspectral image classification method based on a fast fully convolutional network |
CN116612333A (zh) * | 2023-07-17 | 2023-08-18 | 山东大学 | Medical hyperspectral image classification method based on a fast fully convolutional network |
CN116612334A (zh) * | 2023-07-18 | 2023-08-18 | 山东科技大学 | Medical hyperspectral image classification method based on a joint spatial-spectral attention mechanism |
CN116612334B (zh) * | 2023-07-18 | 2023-10-10 | 山东科技大学 | Medical hyperspectral image classification method based on a joint spatial-spectral attention mechanism |
CN116863342B (zh) * | 2023-09-04 | 2023-11-21 | 江西啄木蜂科技有限公司 | Method for extracting trees killed by pine wilt disease from large-scale remote sensing imagery |
CN116863342A (zh) * | 2023-09-04 | 2023-10-10 | 江西啄木蜂科技有限公司 | Method for extracting trees killed by pine wilt disease from large-scale remote sensing imagery |
CN117372789A (zh) * | 2023-12-07 | 2024-01-09 | 北京观微科技有限公司 | Image classification method and image classification apparatus |
CN117372789B (zh) * | 2023-12-07 | 2024-03-08 | 北京观微科技有限公司 | Image classification method and image classification apparatus |
CN117409264A (zh) * | 2023-12-16 | 2024-01-16 | 武汉理工大学 | Transformer-based multi-sensor data fusion method for robot terrain perception |
CN117409264B (zh) * | 2023-12-16 | 2024-03-08 | 武汉理工大学 | Transformer-based multi-sensor data fusion method for robot terrain perception |
CN117893816A (zh) * | 2024-01-18 | 2024-04-16 | 安徽大学 | Hyperspectral image classification method using a hierarchical residual spectral-spatial convolutional network |
CN118155082A (zh) * | 2024-05-13 | 2024-06-07 | 山东锋士信息技术有限公司 | Dual-branch change detection method based on hyperspectral spatial and spectral information |
Also Published As
Publication number | Publication date |
---|---|
CN112287978A (zh) | 2021-01-29 |
US11783579B2 (en) | 2023-10-10 |
CN112287978B (zh) | 2022-04-15 |
US20230260279A1 (en) | 2023-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022073452A1 (zh) | Hyperspectral remote sensing image classification method based on a self-attention context network | |
Song et al. | Spatiotemporal satellite image fusion using deep convolutional neural networks | |
CN110598600A (zh) | Remote sensing image cloud detection method based on a UNet neural network | |
CN112347888B (zh) | Remote sensing image scene classification method based on bidirectional iterative feature fusion | |
CN113221641B (zh) | Video person re-identification method based on generative adversarial network and attention mechanism | |
CN107368831A (zh) | Method for recognizing English characters and digits in natural scene images | |
Li et al. | DMNet: A network architecture using dilated convolution and multiscale mechanisms for spatiotemporal fusion of remote sensing images | |
Chen et al. | Convolutional neural network based dem super resolution | |
CN112560831B (zh) | Pedestrian attribute recognition method based on multi-scale spatial correction | |
CN112419155B (zh) | Super-resolution reconstruction method for fully polarimetric synthetic aperture radar images | |
CN115496928B (zh) | Multimodal image feature matching method based on multiple feature matching | |
Yu et al. | A self-attention capsule feature pyramid network for water body extraction from remote sensing imagery | |
Zuo et al. | A remote sensing image semantic segmentation method by combining deformable convolution with conditional random fields | |
CN115995040A (zh) | Few-shot target recognition method for SAR images based on a multi-scale network | |
Wang et al. | MetaPan: Unsupervised adaptation with meta-learning for multispectral pansharpening | |
Vatsavai | High-resolution urban image classification using extended features | |
Vijayalakshmi K et al. | Copy-paste forgery detection using deep learning with error level analysis | |
Zhao et al. | Boundary-aware bilateral fusion network for cloud detection | |
Jiang et al. | Semantic segmentation network combined with edge detection for building extraction in remote sensing images | |
Sreedevi et al. | Development of weighted ensemble transfer learning for tomato leaf disease classification solving low resolution problems | |
CN117392065A (zh) | Cloud-edge collaborative method for autonomous assessment of dust accumulation on solar panels | |
Zeng et al. | Masanet: Multi-angle self-attention network for semantic segmentation of remote sensing images | |
CN108460772B (zh) | *** and method for detecting spam advertising fax images based on a convolutional neural network | |
Xu et al. | Infrared image semantic segmentation based on improved deeplab and residual network | |
Li et al. | ConvFormerSR: Fusing transformers and convolutional neural networks for cross-sensor remote sensing imagery super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21876970 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21876970 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.09.2023) |
|