CN115564808B - Multi-resolution hyperspectral/SAR image registration method based on public space-spectrum subspace - Google Patents

Multi-resolution hyperspectral/SAR image registration method based on public space-spectrum subspace

Info

Publication number
CN115564808B
CN115564808B
Authority
CN
China
Prior art keywords
image
hyperspectral
subspace
sar
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211062203.4A
Other languages
Chinese (zh)
Other versions
CN115564808A (en
Inventor
孙伟伟
任凯
杨刚
孟祥超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202211062203.4A priority Critical patent/CN115564808B/en
Publication of CN115564808A publication Critical patent/CN115564808A/en
Application granted granted Critical
Publication of CN115564808B publication Critical patent/CN115564808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10036Multispectral image; Hyperspectral image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-resolution hyperspectral/SAR image registration method based on a public space-spectrum subspace, which comprises the following steps: building a deep public space-spectrum subspace extraction network and mapping it to a multi-resolution hyperspectral/SAR image pair to obtain a public space-spectrum subspace image pair; extracting corner points with the Harris algorithm; constructing SIFT descriptors for the corner points and performing corner matching; eliminating erroneous matches with the GMS method; and computing an affine matrix from the correct matching points and applying it to the hyperspectral image to realize registration. The beneficial effects of the invention are as follows: by exploiting the nonlinear mapping mechanism of deep learning and combining the effectiveness of Harris corner detection with the stability of SIFT descriptors, hyperspectral and SAR data are registered, the problem of spatial and spectral differences is overcome, the registration accuracy of multi-resolution hyperspectral/SAR images is greatly improved, reliable support is provided for subsequent applications, and the method has strong practicability.

Description

Multi-resolution hyperspectral/SAR image registration method based on public space-spectrum subspace
Technical Field
The invention belongs to the technical field of optical remote sensing image processing, and particularly relates to a multi-resolution hyperspectral/SAR image registration method based on a public space-spectrum subspace.
Background
Remote sensing images are among the most important data sources in the field of earth observation. However, because images are acquired at different times, by different sensor types and under different imaging conditions, geometric deviations often exist between them, which seriously hinders the cooperative application of multi-source data. Image registration aims to eliminate these geometric deviations and provides technical support for subsequent processing such as image fusion. It is one of the important steps in remote sensing image processing, is of great significance for the collaborative application of multi-source data, and is currently an important means of improving the geometric accuracy between multi-source datasets.
Hyperspectral data and SAR data are both widely used for earth observation, but differences in sensor design make their registration very challenging: marked differences in spatial resolution lead to inconsistent sharpness, differences in spectral information lead to large gray-level differences, and the numbers of image bands differ greatly (a hyperspectral image typically contains more than 100 bands).
Currently there are three main approaches to remote sensing image registration: feature-based, gray-level-based, and deep learning-based methods. Feature-based registration is among the most commonly used; it only needs to extract feature information such as points, lines and edges from the images to be registered and requires no other auxiliary information. However, because only a small part of the image information is used, these algorithms place very high demands on the precision and accuracy of feature extraction and feature matching and are very sensitive to errors. Gray-level-based methods build a similarity measure between the image to be registered and the reference image from the gray levels of the whole image and use a search algorithm to find the transformation model parameters that optimize this measure, but they are sensitive to gray-level changes and computationally inefficient. Deep learning-based methods compute registration parameters nonlinearly from deep and shallow image features, but they are also computationally expensive and require large amounts of training data. Moreover, none of these methods handles the case in which spatial-resolution and gray-level differences exist simultaneously; when both are present, the registration results of existing methods are poor and their practicability is limited.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a multi-resolution hyperspectral/SAR image registration method based on a public space-spectrum subspace.
The multi-resolution hyperspectral/SAR image registration method based on the public space-spectrum subspace comprises the following steps of:
s1, a deep public space-spectrum subspace extraction network is built, the network is trained, the network is mapped to a multi-resolution hyperspectral/SAR image pair, and a public space-spectrum subspace image pair is obtained;
s2, extracting corner points of a public space-spectrum subspace image pair by adopting a Harris algorithm;
s3, constructing descriptors for the corner points in the public space-spectrum subspace image pair by adopting SIFT (scale-invariant feature transform), and performing corner point matching;
s4, adopting a GMS method to remove error points;
s5, calculating an affine matrix by adopting correct matching points, and mapping the affine matrix to a hyperspectral image to realize image registration.
Preferably, the step S1 specifically includes: first the network is defined as Λ, which is mathematically expressed as
F_in = σ(f(PCA(image, 1)))
where F_in is the convolution feature, σ(·) and f(·) are the ReLU activation function and the convolution operation respectively, image is the remote sensing data, and PCA(·, 1) computes the first principal component of the data;
residual dense feature extraction and overall loss calculation are then performed.
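As a concrete illustration of the PCA(·, 1) operation above, the sketch below projects a many-band cube onto its first principal component with plain NumPy; the band count, image size and the eigen-decomposition route are illustrative choices, and the convolution network Λ itself is omitted:

```python
import numpy as np

def first_principal_component(cube):
    """Project an (H, W, B) image cube onto its first principal component.

    Returns an (H, W) single-band image, as required by PCA(image, 1)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat -= flat.mean(axis=0)                # center each band
    # Eigen-decomposition of the B x B band covariance matrix.
    cov = flat.T @ flat / (flat.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                     # direction of largest variance
    return (flat @ pc1).reshape(h, w)

rng = np.random.default_rng(0)
hs = rng.normal(size=(32, 32, 100))          # toy 100-band hyperspectral cube
img = first_principal_component(hs)
print(img.shape)                             # (32, 32)
```

The same routine serves both inputs, since image = [HS, SAR] is reduced to one band per sensor before entering the convolution layers.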
Preferably, in step S1: the residual dense module in the deep public space-spectrum subspace extraction network is designed as follows:
F_r1 = σ(f(F_in))
F_r4 = f(cat(F_in, F_r1, F_r2))
where F_r1, F_r2, F_r3 and F_r4 respectively denote the convolution features of each layer of the residual dense module, cat(·) denotes the feature concatenation operation, ⊕ denotes the feature addition operation, and F_final is the final convolution feature;
a subspace consistency loss function and a gradient magnitude consistency loss function are designed for the public space-spectrum subspace of the multi-resolution hyperspectral/SAR image pair, and the final loss function is obtained as follows:
loss_overall = loss_1 + loss_2
where loss_overall is the overall loss, and loss_1 and loss_2 respectively denote the subspace consistency loss and the gradient magnitude consistency loss.
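The dense-connection pattern of the residual module can be sketched for a single-channel toy case as below; the patent writes out only F_r1 and F_r4, so the intermediate layer F_r2 and the closing residual addition are assumptions, and per-input kernels stand in for a convolution over concatenated channels:

```python
import numpy as np

def conv3x3(x, k):
    """'Same' 3x3 convolution (correlation) of a single-channel image x."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

relu = lambda t: np.maximum(t, 0.0)

def residual_dense_block(f_in, kernels):
    """Toy single-channel residual-dense forward pass (layer count assumed)."""
    f_r1 = relu(conv3x3(f_in, kernels[0]))
    f_r2 = relu(conv3x3(f_in, kernels[1]) + conv3x3(f_r1, kernels[2]))
    # F_r4 = f(cat(F_in, F_r1, F_r2)), emulated with one kernel per input
    f_r4 = (conv3x3(f_in, kernels[3]) + conv3x3(f_r1, kernels[4])
            + conv3x3(f_r2, kernels[5]))
    return f_in + f_r4                       # residual feature addition

rng = np.random.default_rng(1)
ks = rng.normal(scale=0.1, size=(6, 3, 3))
out = residual_dense_block(rng.normal(size=(16, 16)), ks)
print(out.shape)                             # (16, 16)
```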
Preferably, in step S1: subspace coherence loss function is
Wherein G is E [1,2, …, G]Is the number of samples, II 1 Is l 1 Norm, loss 1 Loss of subspace consistency;
F_Hfinal is the final hyperspectral feature obtained by taking the hyperspectral image as network input, and the final SAR feature is obtained with weight sharing:
F_Hfinal = Λ(HS)
where HS is the hyperspectral data;
F_Mfinal is the final SAR feature layer obtained by the network mapping operation, computed as
F_Mfinal = Γ(SAR, Θ)
where Γ(·) denotes the network mapping operation and Θ denotes the network model generated while obtaining the final hyperspectral features with the hyperspectral image as input.
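Under the reading that loss_1 averages per-sample l_1 distances between the paired subspace features (the reduction over samples is an assumption), a minimal sketch:

```python
import numpy as np

def subspace_consistency_loss(f_hfinal, f_mfinal):
    """loss_1 = (1/G) * sum_g ||F_Hfinal - F_Mfinal||_1 over G samples.

    Both inputs are (G, H, W) feature stacks; the l1 norm is taken
    entry-wise per sample."""
    g = f_hfinal.shape[0]
    per_sample = np.abs(f_hfinal - f_mfinal).reshape(g, -1).sum(axis=1)
    return per_sample.sum() / g

f_h = np.zeros((3, 2, 2))
f_m = np.ones((3, 2, 2))
print(subspace_consistency_loss(f_h, f_m))   # 4.0
```

Identical feature pairs give zero loss, which is what drives the two branches toward a common subspace.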
Preferably, in step S1: the concrete method for calculating the gradient amplitude consistent loss comprises the following steps of firstly calculating the gradient amplitude of an image
Where Gra represents image gradient, I represents the input image, x and y represent the horizontal and vertical of the image, respectivelyDirection, as indicated by the convolution operator, II x He Pi (a Chinese character) y Representing the filtering operations in the x and y directions respectively,is a partial derivative operator; using hyperspectral and SAR first principal component image, and common subspace image pair as input to obtain
Wherein the method comprises the steps ofRespectively represent HS, SAR, F Hfinal ,F Mfinal The average gradient of hyperspectral and SAR first principal component images is
Taking the gradient as a reference gradient to obtain subspaceAnd Gra ref The losses of (a) are respectively
Finally, the gradient amplitude consistent loss is obtained 2 =loss g1 +loss g2
Preferably, the step S2 specifically includes: adopting the public space-spectrum subspace image pair obtained in step S1 as the data set, and carrying out corner detection on the public space-spectrum subspace images with the Harris operator to obtain the coordinate information of the corners. The specific method is as follows.
First, the gradient matrices of the image in the horizontal and vertical directions are calculated:
P_x = Π_x ⊛ I, P_y = Π_y ⊛ I
where P denotes the gradient matrix and x and y denote the horizontal and vertical directions of the image respectively. From the obtained gradient matrices P_x and P_y the L matrix is constructed:
L = Π ⊛ [P_x², P_xP_y; P_xP_y, P_y²]
where Π is a filtering operation. Then the score R is calculated for each element of the matrix P:
R = det(L) − k · tr(L)²
where R is the score value, det(·) is the determinant of the matrix L, tr(·) is the trace of the matrix L, and k = 0.04 is the sensitivity value; extreme points within a local window are selected as corner points.
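The Harris scoring above can be sketched as follows; central-difference gradients and a small box filter stand in for the Π filters, and non-maximum selection within the local window is left to the caller:

```python
import numpy as np

def harris_response(img, k=0.04, half=1):
    """Harris corner score R = det(L) - k * tr(L)^2 per pixel.

    Gradient and smoothing filters are illustrative choices."""
    py, px = np.gradient(img.astype(np.float64))   # vertical, horizontal
    def box(a):
        p = np.pad(a, half, mode='edge')
        out = np.zeros_like(a)
        n = 2 * half + 1
        for i in range(n):
            for j in range(n):
                out += p[i:i + a.shape[0], j:j + a.shape[1]]
        return out / (n * n)
    lxx, lyy, lxy = box(px * px), box(py * py), box(px * py)
    det = lxx * lyy - lxy * lxy
    tr = lxx + lyy
    return det - k * tr * tr

# A bright square on a dark background: the corners should score highest.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
r = harris_response(img)
```

Edge pixels get negative R (one dominant gradient direction), flat areas near zero, and only the square's corners score strongly positive.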
Preferably, the step S3 specifically includes: first the radius of the image region to be calculated is determined, where α is the scale of the octave (group) in which the keypoint is located and d = 2 is a constant term;
then the coordinate axes are rotated to the main direction of the corner point, the pixels in the neighborhood within the radius circle of the image region are divided into 16 × 16 subregions and further grouped into 4 × 4 blocks, and a gradient orientation histogram with 8 directions is calculated in each block; a SIFT feature vector of 4 × 4 × 8 = 128 dimensions is obtained for each corner as its descriptor, and the descriptor is normalized:
l_j = z_j / √(Σ_{i=1}^{128} z_i²)
where l_j is the j-th descriptor component after normalization and z_j is the j-th descriptor component before normalization;
the similarity between the descriptors of all corner points in the public space-spectrum subspace image pair is calculated with the Euclidean distance, whose formula is
d_{a,b} = √(Σ_{j=1}^{128} (l_aj − l_bj)²)
where a and b are corner points in the two images, d_{a,b} is the Euclidean distance between points a and b, l_aj is the j-th normalized descriptor component of corner a, and l_bj is the j-th normalized descriptor component of corner b; corner points for which the similarity distance between the two images is the closest and the ratio of the closest to the second-closest similarity distance is below a certain threshold are found and matched pairwise.
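Descriptor normalization and the nearest/second-nearest ratio matching can be sketched with plain NumPy; the 0.8 ratio threshold and the random 128-D descriptors are illustrative:

```python
import numpy as np

def normalize(desc):
    """l_j = z_j / ||z||_2 : unit-length descriptors (one per row)."""
    return desc / np.linalg.norm(desc, axis=1, keepdims=True)

def ratio_match(d1, d2, ratio=0.8):
    """Keep corner pairs whose nearest Euclidean distance is below
    `ratio` times the second-nearest distance (threshold assumed)."""
    a, b = normalize(d1), normalize(d2)
    # Pairwise Euclidean distances between all descriptors.
    dist = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        if len(order) > 1 and row[order[0]] < ratio * row[order[1]]:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(2)
d1 = rng.normal(size=(5, 128))
d2 = d1 + 0.01 * rng.normal(size=(5, 128))   # same corners, slight noise
print(ratio_match(d1, d2))                   # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```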
Preferably, the step S4 specifically includes: first, a multi-neighborhood model is adopted to divide the two images into several non-overlapping grid cells;
N_i = {c_j | c_j ∈ C_a, c_i ≠ c_j}
where c_j and c_i denote different matched pixel points, C_a denotes all matching points in a grid cell, and N_i is the neighborhood of c_i. The similar neighborhood S_i of N_i is defined as S_i = {c_j | c_j ∈ C_ab, c_i ≠ c_j}, where C_ab denotes the matching pairs that fall simultaneously into corresponding grid cells of the two images; S_i can therefore be modeled as
S_i ~ B(kn, t) for a correct match, S_i ~ B(kn, ε) for an incorrect match
where B(·, ·) denotes the binomial distribution, k denotes the number of neighborhoods, n denotes the number of matching pairs in a neighborhood, and t and ε respectively denote the probabilities that a correct and an incorrect match are supported by a match in one of their neighborhood windows. A discrimination model is then built from the mean and standard deviation of S_i, and a threshold is set:
DP = (E_t − E_f) / (√V_t + √V_f), τ = β√n
where DP is the separability, E_t and E_f are respectively the mathematical expectations of S_i for correct and incorrect matches, and V_t and V_f are respectively the variances of S_i for correct and incorrect matches; τ is the threshold and β is an adjustment coefficient, and incorrect matching points are removed by applying the threshold.
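A heavily simplified, single-scale sketch of the GMS idea: the support S_i is the number of other matches sharing the same grid-cell pair, and a match is kept when S_i exceeds τ = β·√n. The grid size, the β value, and the min-max cell assignment are all illustrative choices, not the patent's exact construction:

```python
import numpy as np

def gms_filter(pts1, pts2, grid=4, beta=1.0):
    """Keep matches whose cell-pair support exceeds tau = beta * sqrt(n)."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    def cell_ids(p):
        # Assign each point to a grid cell after min-max normalisation.
        norm = (p - p.min(axis=0)) / np.maximum(np.ptp(p, axis=0), 1e-9)
        ij = np.minimum((norm * grid).astype(int), grid - 1)
        return ij[:, 0] * grid + ij[:, 1]
    pairs = np.stack([cell_ids(pts1), cell_ids(pts2)], axis=1)
    kept = []
    for i, pr in enumerate(pairs):
        s_i = int((pairs == pr).all(axis=1).sum()) - 1      # support, excluding self
        n = int((pairs[:, 0] == pr[0]).sum())               # matches in this cell
        if s_i >= beta * np.sqrt(n):                        # tau = beta * sqrt(n)
            kept.append(i)
    return kept

rng = np.random.default_rng(3)
base = rng.uniform(0.0, 0.12, size=(10, 2))        # 10 coherent matches, one cell
pts1 = np.vstack([base, [[0.9, 0.9], [0.1, 0.8]]]) # plus two false matches
pts2 = np.vstack([base + 0.01, [[0.2, 0.6], [0.7, 0.3]]])
print(gms_filter(pts1, pts2))                      # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Coherent motion concentrates true matches in matching cell pairs, so their support is high while random false matches are isolated and fall below the threshold.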
Preferably, the step S5 specifically includes: the following equation is solved to obtain the registration parameters, and bilinear interpolation is performed on the hyperspectral image with the obtained affine transformation parameters to realize registration:
[u_m; v_m] = Ω · [cos α, −sin α; sin α, cos α] · [u_n; v_n] + [d_x; d_y]
where (u_n, v_n) and (u_m, v_m) denote the coordinates of the matching points, m = n is the number of matching points, d_x and d_y denote the offsets in the x and y directions, Ω is the scale factor, and α is the rotation angle.
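The four-parameter transform of step S5 (scale Ω, rotation α, offsets d_x, d_y) is linear in a = Ω·cos α and b = Ω·sin α, so it can be fitted by least squares over the matched points; the bilinear interpolation step is omitted here:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of [u_m, v_m]^T = Omega*R(alpha)[u_n, v_n]^T + [d_x, d_y]^T.

    Solved linearly via a = Omega*cos(alpha), b = Omega*sin(alpha)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = src.shape[0]
    A = np.zeros((2 * n, 4))                         # columns: a, b, dx, dy
    A[0::2, 0], A[0::2, 1] = src[:, 0], -src[:, 1]   # u_m = a*u - b*v + dx
    A[0::2, 2] = 1.0
    A[1::2, 0], A[1::2, 1] = src[:, 1], src[:, 0]    # v_m = b*u + a*v + dy
    A[1::2, 3] = 1.0
    a, b, dx, dy = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), dx, dy  # Omega, alpha, dx, dy

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, size=(6, 2))
alpha0 = 0.3
rot = np.array([[np.cos(alpha0), -np.sin(alpha0)],
                [np.sin(alpha0),  np.cos(alpha0)]])
dst = 1.5 * src @ rot.T + np.array([2.0, -1.0])      # known transform
omega, alpha, dx, dy = fit_similarity(src, dst)
print(round(omega, 6), round(alpha, 6))              # 1.5 0.3
```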
The beneficial effects of the invention are as follows:
1) The invention builds a deep public space-spectrum subspace extraction network and trains it, adopts the Harris algorithm to extract the corner points of the public space-spectrum subspace image pair, adopts SIFT (scale-invariant feature transform) to construct descriptors for the corner points in the public space-spectrum subspace image pair and performs corner point matching, eliminates erroneous points with the GMS (grid-based motion statistics) method, calculates the affine matrix from the correct matching points and maps it onto the hyperspectral image to realize image registration, greatly improves the registration accuracy of multi-resolution hyperspectral/SAR images, and provides reliable support for subsequent applications.
2) The method utilizes a nonlinear mapping mechanism of deep learning, combines the effectiveness of Harris corner detection and the stability of SIFT descriptors to register hyperspectral data and SAR data, overcomes the problem of space and spectrum difference, realizes high-efficiency and high-quality registration of multi-resolution hyperspectral/SAR images, and has strong practicability.
Drawings
FIG. 1 is a flow chart of step S1 of the present invention;
FIG. 2 is a flow chart of steps S2 to S5 of the present invention;
fig. 3 is a comparison of registration results obtained by different methods.
Detailed Description
The invention is further described below with reference to examples. The following examples are presented only to aid in the understanding of the invention. It should be noted that it will be apparent to those skilled in the art that modifications can be made to the present invention without departing from the principles of the invention, and such modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.
Example 1
As an example, as shown in fig. 1 to 3: the multi-resolution hyperspectral/SAR image registration method based on the public space-spectrum subspace comprises the following steps:
s1, building a deep public space-spectrum subspace extraction network to obtain a public space-spectrum subspace image pair; the method aims to improve the space and gray consistency of the multi-resolution hyperspectral/SAR image, and provides data support for the subsequent corner selection and descriptor matching, and comprises the following specific processes:
first, the network is defined as Λ, and its convolution characteristic is calculated
F in =σ(f(PCA(image,1)))
Wherein F is in For convolution characteristics, sigma (·) and f (·) are a ReLU activation function and convolution operation respectively, image is remote sensing data, and PCA (·, 1) is a first principal component operation of the calculation data;
residual density feature extraction and overall loss calculation are then performed
F r1 =σ(f(F in ))
F r4 =f(cat(F in ,F r1 ,F r2 ))
Wherein F is r1 ,F r2 ,F r3 And F r4 Respectively representing convolution characteristics of each layer of the residual density module, cat (·) represents characteristic superposition operation,for characteristic addition operation, F final The convolution characteristic is finally obtained;
the loss function is divided into two parts
loss overall =loss 1 +loss 2
Wherein loss is overall Loss as a whole 1 ,loss 2 Respectively represent subspace uniform loss and gradient amplitude uniform loss.
The calculation method of the subspace consistency loss function is as follows
Wherein G is E [1,2, …, G]Is the number of samples, II 1 Is l 1 Norm, loss 1 Loss of subspace consistency;
F Hfinal the hyperspectral image is used as network input to obtain final hyperspectral features, and weight sharing is adopted to obtain final SAR features
F Hfinal =Λ(HS)
Wherein HS is hyperspectral data;
F Mfinal for the SAR final feature layer obtained by network mapping operation, the calculation process is as follows
F Mfinal =Γ(SAR,Θ)
Where Γ (·) represents the network mapping operation and Θ represents the network model generated during the acquisition of the final hyperspectral features with hyperspectral as input.
The concrete method for calculating the gradient amplitude consistent loss comprises the following steps of firstly calculating the gradient amplitude of an image
Wherein Gra represents an image gradient, I represents an input image, x and y represent the horizontal and vertical directions of the image, respectively, and wherein, the term "alpha" represents a convolution operator, and n x And pi y Representing the filtering operations in the x and y directions respectively,is a partial derivative operator; using hyperspectral and SAR first principal component image, and common subspace image pair as input to obtain
Wherein the method comprises the steps ofRespectively represent HS, SAR, F Hfinal ,F Mfinal The average gradient of hyperspectral and SAR first principal component images is
Taking the gradient as a reference gradient to obtain subspaceAnd Gra ref The losses of (a) are respectively
Finally, the gradient amplitude consistent loss is obtained 2 =lossg 1 +lossg 2
S2, extracting corner points of a public space-spectrum subspace image pair by adopting a Harris algorithm, fully utilizing the space information of the image, and extracting a more robust characteristic point set, wherein the specific process is as follows: using the public space-spectrum subspace image pair obtained in the step 1 as a data set, and using a Harris operator to perform corner detection to obtain coordinate information of the corner;
first, the gradient matrix of the image in the horizontal and vertical directions is calculated
Wherein P represents a matrix, and x and y represent horizontal and vertical directions of the image, respectively; according to the obtained gradient matrix P x And P y Construction of L matrix
Wherein pi is a filtering operation; then calculate the score R for each element of the matrix P
Where R is the score value, det (·) is the determinant value of the matrix L, tr (·) is the trace of the matrix L, and k=0.04 is the sensitivity value; and selecting extreme points in the local window as corner points.
S3, constructing descriptors for corner points by adopting SIFT, constructing a feature space for matching a feature point set, and performing corner point matching, wherein the method comprises the following specific steps of:
first determining the radius of the image area to be calculated
Where α is the scale of the group where the keypoint is located, d=2 is a constant term;
then rotating the coordinate axis to the main direction of the corner point, dividing pixels in a neighborhood in a radius circle of an image area into 16 multiplied by 16 subdomains, further dividing the pixels into 4 multiplied by 4 blocks, and respectively calculating gradient direction histograms of 8 directions in each block; solving SIFT feature vectors of 4×4×8=128 dimensions for each corner point, and normalizing descriptors in the feature vectors of each corner point
Wherein l j Z as normalized j-th descriptor j Is the j descriptor before normalization;
calculating the similarity between descriptors of all the angular points in the common space-spectrum subspace image pair by using the Euclidean distance, wherein the calculation formula of the Euclidean distance is as follows
Wherein a and b are corner points in the two images, d a,b The Euclidean distance between the points a and b is l aj For the j-th descriptor of normalized corner a, l bj The j descriptor of the normalized corner b; and (3) calculating Euclidean distances of all the angular points in the two images, finding out the angular points with closest similarity distances in the two images and the ratio of the closest similarity distances to the next closest similarity distances being lower than a certain threshold value, and matching the angular points in pairs.
S4, performing error point elimination by using a GMS method, and extracting correct matching points by using grid-based motion statistics by using the spatial characteristics of point set distribution, wherein the specific method comprises the following steps:
First, a multi-neighborhood model is adopted to divide the public space-spectrum subspace images into several non-overlapping grid cells;
N_i = {c_j | c_j ∈ C_a, c_i ≠ c_j}
where c_j and c_i denote different matched pixel points, C_a denotes all matching points in a grid cell, and N_i is the neighborhood of c_i. The similar neighborhood S_i of N_i is defined as S_i = {c_j | c_j ∈ C_ab, c_i ≠ c_j}, where C_ab denotes the matching pairs that fall simultaneously into corresponding grid cells of the two images; S_i can therefore be modeled as
S_i ~ B(kn, t) for a correct match, S_i ~ B(kn, ε) for an incorrect match
where B(·, ·) denotes the binomial distribution, k denotes the number of neighborhoods, n denotes the number of matching pairs in a neighborhood, and t and ε respectively denote the probabilities that a correct and an incorrect match are supported by a match in one of their neighborhood windows. A discrimination model is then built from the mean and standard deviation of S_i, and a threshold is set:
DP = (E_t − E_f) / (√V_t + √V_f), τ = β√n
where DP is the separability, E_t and E_f are respectively the mathematical expectations of S_i for correct and incorrect matches, and V_t and V_f are respectively the variances of S_i for correct and incorrect matches; τ is the threshold and β is an adjustment coefficient, and incorrect matching points are removed by applying the threshold.
S5, calculating the affine matrix and mapping it onto the hyperspectral image to realize image registration. The specific method is as follows: the following equation is solved to obtain the registration parameters, and bilinear interpolation is performed on the hyperspectral image with the obtained affine transformation parameters to realize registration:
[u_m; v_m] = Ω · [cos α, −sin α; sin α, cos α] · [u_n; v_n] + [d_x; d_y]
where (u_n, v_n) and (u_m, v_m) denote the coordinates of the matching points, m = n is the number of matching points, d_x and d_y denote the offsets in the x and y directions, Ω is the scale factor, and α is the rotation angle.
Example two
Following the first embodiment, this embodiment compares the registration results obtained by the method provided by the present invention with those of three currently prevailing registration methods.
SIFT, PSO-SIFT, RIFT and the method of the present invention are applied to the four sets of data respectively, and the accuracy of each registration method is measured by the average root mean square error of its registration results on the four data sets.
The registered hyperspectrum obtained by the four methods is shown in fig. 3, and according to calculation, the four methods respectively register four groups of data sets to obtain average root mean square errors respectively as follows:
average root mean square error RMSE = 32.67 for SIFT;
average root mean square error RMSE = 22.14 for PSO-SIFT;
average root mean square error RMSE = 46.56 for RIFT;
average root mean square error RMSE = 0.76 for the method of the invention.
The accuracy of the registration result obtained by the method provided by the invention is clearly superior to that of the other methods; the method effectively improves registration between hyperspectral and SAR images and yields high-quality geometrically corrected data.

Claims (5)

1. The multi-resolution hyperspectral/SAR image registration method based on the public space-spectrum subspace is characterized by comprising the following steps of:
s1, a deep public space-spectrum subspace extraction network is built, the network is trained, the network is mapped to a multi-resolution hyperspectral/SAR image pair, and a public space-spectrum subspace image pair is obtained; the step S1 specifically comprises the following steps: firstly extracting PCA features of input data, and extracting mathematical expression of the PCA features of the input data in a deep public space-spectrum subspace extraction network as follows
F in =σ(f(PCA(image,1)))
Wherein F is in For convolution characteristics, sigma (·) and f (·) are a ReLU activation function and convolution operation respectively, image is remote sensing data, and image= [ HS, SAR]HS is a hyperspectral image, SAR is a synthetic aperture radar image; PCA (.1) is the first principal component operation of the calculated data;
after PCA features of input data are extracted, residual intensive feature extraction and total loss calculation are carried out;
the residual dense module in the deep public space-spectrum subspace extraction network is designed as follows:
F r1 =σ(f(F in ))
F r4 =f(cat(F in ,F r1 ,F r2 ))
wherein F is r1 ,F r2 ,F r3 And F r4 Respectively representing convolution characteristics of each layer of the residual dense module, and cat (·) represents characteristic stackThe addition is performed so that,for characteristic addition operation, F final The convolution characteristic is finally obtained;
subspace consistent loss functions and gradient amplitude consistent loss functions of a public space-spectrum subspace of a multi-resolution hyperspectral/SAR image pair are designed, and the final loss functions are obtained as follows:
loss overall =loss 1 +loss 2
wherein loss is overall Loss as a whole 1 ,loss 2 Respectively representing subspace consistent loss and gradient amplitude consistent loss;
subspace coherence loss function is
Wherein G is E [1,2, …, G]For the number of samples to be taken, I.I 1 Is l 1 Norm, loss 1 Loss of subspace consistency;
F Hfinal the hyperspectral image is used as network input to obtain final hyperspectral features, and weight sharing is adopted to obtain final SAR features
F Hfinal =Λ(HS)
Wherein HS is hyperspectral data;
F Mfinal for the final characteristic layer of SAR obtained by network mapping operation, the calculation mode is as follows
F Mfinal =Γ(SAR,Θ)
Wherein Γ (·) represents a network mapping operation, Θ represents a network model generated during the process of obtaining the final hyperspectral features as input;
the concrete method for calculating the gradient amplitude consistent loss comprises the following steps of firstly calculating the gradient amplitude of an image
Wherein Gra represents an image gradient, I represents an input image, x and y represent the horizontal and vertical directions of the image, respectively, and wherein, the term "alpha" represents a convolution operator, and n x And pi y Representing the filtering operations in the x and y directions respectively,is a partial derivative operator; using hyperspectral and SAR first principal component image, and common subspace image pair as input to obtain
Wherein the method comprises the steps ofRespectively represent HS, SAR, F Hfinal ,F Mfinal The average gradient of hyperspectral and SAR first principal component images is
taking this average gradient Gra_ref as the reference gradient, the losses between the subspace gradients and Gra_ref are obtained respectively as loss_g1 and loss_g2;
finally, the gradient amplitude consistency loss is obtained as loss_2 = loss_g1 + loss_g2.
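The gradient amplitude consistency loss can be sketched as follows. This is an illustrative sketch under stated assumptions: forward differences stand in for the unspecified filters Π_x, Π_y, the reference gradient is taken as the average of the two principal-component gradients, and a mean absolute deviation is assumed for loss_g1 and loss_g2.

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude sqrt(gx^2 + gy^2) of a 2-D image, using
    forward differences in place of the unspecified filters."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]  # horizontal (x) direction
    gy[:-1, :] = img[1:, :] - img[:-1, :]  # vertical (y) direction
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_consistency_loss(hs_pc1, sar, f_h, f_m):
    """loss_2 = loss_g1 + loss_g2: deviation of the subspace gradients
    from the reference gradient Gra_ref, assumed here to be the average
    of the hyperspectral and SAR first-principal-component gradients."""
    gra_ref = 0.5 * (gradient_magnitude(hs_pc1) + gradient_magnitude(sar))
    loss_g1 = float(np.mean(np.abs(gradient_magnitude(f_h) - gra_ref)))
    loss_g2 = float(np.mean(np.abs(gradient_magnitude(f_m) - gra_ref)))
    return loss_g1 + loss_g2

# A horizontal ramp has unit gradient except on the trailing border.
ramp = np.tile(np.arange(4.0), (4, 1))
mag = gradient_magnitude(ramp)
```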
S2, extracting corner points of a public space-spectrum subspace image pair by adopting a Harris algorithm;
S3, constructing SIFT (scale-invariant feature transform) descriptors for the corner points in the public space-spectrum subspace image pair, and performing corner matching;
S4, adopting the GMS method to remove erroneous matching points;
S5, calculating an affine matrix from the correct matching points and mapping it to the hyperspectral image to realize image registration.
2. The method for multi-resolution hyperspectral/SAR image registration based on the public space-spectrum subspace according to claim 1, wherein step S2 specifically comprises: adopting the public space-spectrum subspace image pair obtained in step S1 as the data set, and performing corner detection on the public space-spectrum subspace images with the Harris operator to obtain the coordinate information of the corner points; the specific method is as follows:
first, the gradient matrices of the image in the horizontal and vertical directions are calculated
wherein P represents the image matrix, and x and y represent the horizontal and vertical directions of the image respectively; the L matrix is constructed from the obtained gradient matrices P_x and P_y
wherein Π denotes the filtering (window) operation; the score R is then calculated for each element of the matrix P
wherein R is the score value, det(·) is the determinant of the matrix L, tr(·) is the trace of the matrix L, and k = 0.04 is the sensitivity value; the extreme points within a local window are then selected as corner points.
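The Harris score described above can be sketched as follows, using the standard form R = det(L) − k·tr(L)² with k = 0.04. Central-difference gradients and a 3×3 box window are assumptions standing in for the claim's unspecified filtering operation.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris corner score R = det(L) - k * tr(L)^2.

    Sketch under assumptions: central-difference gradients P_x, P_y and a
    3x3 box window replace the unspecified filtering operation.
    """
    img = np.asarray(img, dtype=float)
    px = np.zeros_like(img)
    py = np.zeros_like(img)
    px[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # horizontal gradient
    py[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # vertical gradient

    def box3(a):
        # 3x3 box smoothing of the entries of the structure matrix L.
        out = np.zeros_like(a)
        h, w = a.shape
        out[1:-1, 1:-1] = sum(
            a[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        return out

    lxx, lyy, lxy = box3(px * px), box3(py * py), box3(px * py)
    det = lxx * lyy - lxy * lxy
    tr = lxx + lyy
    return det - k * tr ** 2

# A bright square on a dark background: the strongest response sits at
# the square's corner, while a flat image scores zero everywhere.
img = np.zeros((10, 10))
img[5:, 5:] = 1.0
resp = harris_response(img)
```

In practice the corner set is obtained by non-maximum suppression over `resp` within a local window, as the claim states.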
3. The method for multi-resolution hyperspectral/SAR image registration based on the public space-spectrum subspace according to claim 1, wherein step S3 specifically comprises: first determining the radius of the image area to be calculated
wherein α is the scale of the group in which the keypoint is located, and d = 2 is a constant term;
then the coordinate axes are rotated to the principal direction of the corner point; the pixels in the neighbourhood within the radius circle of the image area are divided into 16×16 sub-regions, which are further grouped into 4×4 blocks, and a gradient orientation histogram over 8 directions is computed in each block; a SIFT feature vector of 4×4×8 = 128 dimensions is thus obtained for each corner point, and the descriptors in the feature vector of each corner point are normalized
wherein l_j is the j-th descriptor after normalization, and z_j is the j-th descriptor before normalization;
calculating the similarity between descriptors of all the angular points in the common space-spectrum subspace image pair by using the Euclidean distance, wherein the calculation formula of the Euclidean distance is as follows
wherein a and b are corner points in the two images, d_a,b is the Euclidean distance between points a and b, l_aj is the j-th descriptor of the normalized corner a, and l_bj is the j-th descriptor of the normalized corner b; the corner points whose similarity distance between the two images is nearest, and whose ratio of nearest to second-nearest similarity distance is below a certain threshold, are found and matched pairwise.
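The descriptor normalization and ratio-test matching described above can be sketched as follows. The ratio threshold of 0.8 is an assumption; the claim only states "a certain threshold".

```python
import math

def normalize(desc):
    """l2-normalise a descriptor vector: l_j = z_j / sqrt(sum_i z_i^2)."""
    norm = math.sqrt(sum(z * z for z in desc))
    return [z / norm for z in desc] if norm else list(desc)

def match_descriptors(descs_a, descs_b, ratio=0.8):
    """Nearest-neighbour corner matching by Euclidean distance with a
    nearest/second-nearest ratio test (0.8 is an assumed threshold).
    Returns (index_a, index_b) pairs."""
    matches = []
    for ia, da in enumerate(descs_a):
        ranked = sorted((math.dist(da, db), ib)
                        for ib, db in enumerate(descs_b))
        if len(ranked) >= 2 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((ia, ranked[0][1]))
    return matches

# Two toy 2-D descriptors per image; matching recovers the obvious pairs.
a = [normalize([3.0, 4.0]), normalize([0.0, 2.0])]
b = [normalize([6.0, 8.0]), normalize([0.0, 1.0]), normalize([5.0, 0.0])]
pairs = match_descriptors(a, b)
```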
4. The method for multi-resolution hyperspectral/SAR image registration based on the public space-spectrum subspace according to claim 1, wherein step S4 specifically comprises: firstly, a multi-neighbourhood model is adopted to divide the public space-spectrum subspace images into a number of non-overlapping grids, denoted grid A and grid B;
N i ={c j |c j ∈C a ,c i ≠c j }
wherein c_j and c_i represent different matched pixels, C_a represents all the matching points in grid A, and N_i is the neighbourhood of c_i; the similar neighbourhood S_i of N_i is defined as S_i = {c_j | c_j ∈ C_ab, c_i ≠ c_j}, wherein C_ab represents the matching point pairs falling simultaneously in grid A and grid B; S_i can therefore be modelled as
wherein B(·,·) represents the binomial distribution, k represents the number of neighbourhoods, n represents the number of matching pairs in a neighbourhood, and t and ε represent the probabilities that a correct match and an incorrect match, respectively, are supported by the matches in one of their neighbourhood windows; a discrimination model is then constructed from the mean and standard deviation of S_i, and a threshold is set
wherein DP is the discriminability, E_t and E_f are respectively the mathematical expectations of S_i for correct and incorrect matches, V_t and V_f are respectively the variances of S_i for correct and incorrect matches, τ is the threshold, and β is an adjustment coefficient; the incorrect matching points are removed by this threshold.
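The grid-based rejection step can be sketched as follows. This is a heavily simplified, hypothetical version of the GMS idea: a single grid, a support count per grid-cell pair, and a threshold τ = β·√n; the multi-scale and rotation handling of the full method are omitted, and β = 1 here is only for illustration.

```python
import math
from collections import Counter

def gms_filter(matches, cell_size, beta=1.0):
    """Keep a match when its neighbourhood support -- the number of other
    matches joining the same grid-cell pair -- exceeds tau = beta*sqrt(n),
    where n is the number of matches originating in its source cell.

    `matches` is a list of ((x1, y1), (x2, y2)) coordinate pairs;
    `beta` is an assumed adjustment coefficient. Simplified sketch.
    """
    def cell(p):
        return (int(p[0] // cell_size), int(p[1] // cell_size))

    pair_support = Counter((cell(p), cell(q)) for p, q in matches)
    cell_total = Counter(cell(p) for p, _ in matches)
    kept = []
    for p, q in matches:
        support = pair_support[(cell(p), cell(q))] - 1  # exclude the match itself
        tau = beta * math.sqrt(cell_total[cell(p)])
        if support > tau:
            kept.append((p, q))
    return kept

# Ten coherent matches translated by (5, 5) plus one stray outlier:
# the coherent ones support each other, the outlier has no support.
good = [((i * 0.5, i * 0.5), (i * 0.5 + 5, i * 0.5 + 5)) for i in range(10)]
bad = [((1.0, 1.0), (42.0, 42.0))]
kept = gms_filter(good + bad, cell_size=10.0)
```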
5. The method for registering a multi-resolution hyperspectral/SAR image based on a common space-spectrum subspace according to claim 1, wherein step S5 specifically comprises: solving the following equation to obtain registration parameters, and performing bilinear interpolation on the hyperspectral image by using the obtained affine transformation parameters to realize registration
wherein u_n, v_n and u_m, v_m represent the coordinates of the matching points, m = n is the number of matching points, d_x and d_y represent the offsets in the x and y directions, Ω is the scale factor, and α is the rotation angle.
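Solving for the transform parameters from the matched points can be sketched as a least-squares fit. A generic 2×3 affine matrix is estimated here; the similarity model of the claim (scale Ω, rotation α, offsets d_x, d_y) is a special case of it, and the bilinear resampling step is omitted.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix A mapping src -> dst,
    i.e. [u', v'] = A @ [u, v, 1].

    Generic sketch: the claim's similarity transform (scale, rotation,
    x/y offsets) is a constrained special case of this model.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    m = src.shape[0]
    X = np.hstack([src, np.ones((m, 1))])             # (m, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return coeffs.T                                   # (2, 3) affine matrix

# Points scaled by 2 and shifted by (1, -1):
# the fit recovers A = [[2, 0, 1], [0, 2, -1]].
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2 * u + 1, 2 * v - 1) for u, v in src]
A = estimate_affine(src, dst)
```

The recovered matrix would then drive the bilinear interpolation of the hyperspectral image onto the SAR geometry, as claim 1 step S5 describes.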
CN202211062203.4A 2022-09-01 2022-09-01 Multi-resolution hyperspectral/SAR image registration method based on public space-spectrum subspace Active CN115564808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211062203.4A CN115564808B (en) 2022-09-01 2022-09-01 Multi-resolution hyperspectral/SAR image registration method based on public space-spectrum subspace

Publications (2)

Publication Number Publication Date
CN115564808A CN115564808A (en) 2023-01-03
CN115564808B true CN115564808B (en) 2023-08-25

Family

ID=84739910

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710711B (en) * 2024-02-06 2024-05-10 东华理工大学南昌校区 Optical and SAR image matching method based on lightweight depth convolution network

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104867126A (en) * 2014-02-25 2015-08-26 西安电子科技大学 Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN104992438A (en) * 2015-06-26 2015-10-21 江西师范大学 Large-time-span remote sensing image registration method combining with historical image sequence
CN109215064A (en) * 2018-08-03 2019-01-15 华南理工大学 A kind of medical image registration method based on super-pixel guide
CN110796022A (en) * 2019-10-09 2020-02-14 西安工程大学 Low-resolution face recognition method based on multi-manifold coupling mapping
CN111754403A (en) * 2020-06-15 2020-10-09 南京邮电大学 Image super-resolution reconstruction method based on residual learning
CN113112533A (en) * 2021-04-15 2021-07-13 宁波甬矩空间信息技术有限公司 SAR-multispectral-hyperspectral integrated fusion method based on multiresolution analysis

Non-Patent Citations (1)

Title
Automatic image mosaicking method based on intrinsic image features; Ma Chaojie; Yang Hua; Li Xiaoxia; Wu Dan; Laser & Infrared, No. 11, pp. 1152-1155 *

Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
CN109409292B (en) Heterogeneous image matching method based on refined feature optimization extraction
CN108759788B (en) Unmanned aerial vehicle image positioning and attitude determining method and unmanned aerial vehicle
Zhu et al. Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features
CN112085772A (en) Remote sensing image registration method and device
CN112419380B (en) Cloud mask-based high-precision registration method for stationary orbit satellite sequence images
CN107958443A (en) A kind of fingerprint image joining method based on crestal line feature and TPS deformation models
CN115564808B (en) Multi-resolution hyperspectral/SAR image registration method based on public space-spectrum subspace
CN112946679B (en) Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence
CN107862319A (en) A kind of heterologous high score optical image matching error elimination method based on neighborhood ballot
CN107274441A (en) The wave band calibration method and system of a kind of high spectrum image
CN114549871A (en) Unmanned aerial vehicle aerial image and satellite image matching method
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
CN113642463A (en) Heaven and earth multi-view alignment method for video monitoring and remote sensing images
CN112734818B (en) Multi-source high-resolution remote sensing image automatic registration method based on residual network and SIFT
CN112184785B (en) Multi-mode remote sensing image registration method based on MCD measurement and VTM
CN111862005A (en) Method and system for accurately positioning tropical cyclone center by using synthetic radar image
CN117058008A (en) Remote sensing image geometry and radiation integrated correction method, device, equipment and medium
CN113066015B (en) Multi-mode remote sensing image rotation difference correction method based on neural network
CN115761528A (en) Push-broom type remote sensing satellite image high-precision wave band alignment method based on integral graph
CN115511928A (en) Matching method of multispectral image
CN114565653A (en) Heterogeneous remote sensing image matching method with rotation change and scale difference
Yang et al. Adjacent Self-Similarity Three-dimensional Convolution for Multi-modal Image Registration
CN111127525B (en) Incremental farmland boundary precision calibration method and device with constraint point set registration
CN113674332A (en) Point cloud registration method based on topological structure and multi-scale features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant