CN116468760A - Multi-source remote sensing image registration method based on anisotropic diffusion description - Google Patents


Info

Publication number: CN116468760A
Application number: CN202310288607.3A
Authority: CN (China)
Prior art keywords: point, image, feature, matching, pair set
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 苏涛, 梁远, 吕宁
Original and current assignee: Xidian University (the listed assignees may be inaccurate)
Application filed by Xidian University
Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images using feature-based methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; remote sensing


Abstract

The invention discloses a multi-source remote sensing image registration method based on anisotropic diffusion description, which comprises the following steps: for an input pair of heterologous images, gradients are computed in mode-specific ways; an anisotropic scale space is established for each of the heterogeneous images, which filters noise while preserving image edges, and a Harris scale space is built on top of it in order to partition sub-regions and extract feature points within them; the circular neighborhood of each feature point is divided into 3 parts in the radial direction and into 8 equally spaced sectors in the angular direction, yielding 24 sub-regions in total; features are described within these sub-regions using the mapped diffusion function, the feature descriptor is obtained after normalization, and the transformation parameter θ_final is then estimated. Because the invention uses the diffusion function of the nonlinear diffusion equation to describe the feature points and their neighborhoods when building the feature descriptors, it can reduce the radiation characteristic differences and noise interference between heterogeneous images and enhance the independence and robustness of the feature descriptors.

Description

Multi-source remote sensing image registration method based on anisotropic diffusion description
Technical Field
The invention belongs to the technical field of remote sensing image registration, and particularly relates to a multisource remote sensing image registration method based on anisotropic diffusion description.
Background
SAR (Synthetic Aperture Radar) images offer all-weather, long-range and wide-coverage imaging, while optical images lend themselves to human visual interpretation, so fusing optical and SAR images effectively combines the advantages of the different image sensors. Image registration is a key step in the image fusion process: it aligns two images of the same area photographed under different time, angle and weather conditions.
In the prior art, image registration methods fall mainly into two classes: region-based methods and feature-based methods. Region-based registration methods rely mainly on similarity measures over the same region of the two images, but can be limited by differences in the imaging mechanisms, whereas feature-based registration methods mainly use point, line and region features to find a mapping relationship between the two images and thereby achieve registration.
Feature-based registration mainly comprises feature extraction, feature description, feature matching and model estimation. Feature description converts the geometric information of a feature's neighborhood in the image, including gradient information, structural information, frequency-domain information and the like, into a numerical vector by statistical means, so that corresponding (homonymous) features can subsequently be matched. For feature description in multi-source image registration, the following two properties must be satisfied: (1) independence: the descriptors of non-corresponding feature points must be highly separable, so that correct matches can be obtained efficiently and reliably during feature matching; (2) robustness: for corresponding feature points of images from different sources, the feature descriptors must be highly similar, so that mismatches are avoided. Lowe et al. proposed the SIFT (Scale Invariant Feature Transform) descriptor, which accumulates gradient magnitudes into a gradient orientation histogram over a square neighborhood of the feature point to obtain a one-dimensional vector descriptor. Mikolajczyk et al. replaced the square region of SIFT with a log-polar grid to form GLOH (Gradient Location and Orientation Histogram), obtaining more independent and robust feature descriptors.
Thus, the image registration methods of the prior art describe features on the basis of image gradient information. However, because of differences in radiation characteristics, noise interference and the like between multi-source images, the image gray levels and the gradient computation results differ between the images, which makes the features difficult to describe and impairs the independence and robustness of the feature descriptors.
Disclosure of Invention
In order to solve the above problems of the prior art, the invention provides a multi-source remote sensing image registration method based on anisotropic diffusion description. The technical problem to be solved by the invention is addressed by the following technical solution:
the embodiment of the invention provides a multisource remote sensing image registration method based on anisotropic diffusion description, which comprises the following steps:
acquiring an optical image and a SAR image to be registered;

filtering the optical image and the SAR image respectively by means of a nonlinear diffusion equation, then establishing an anisotropic scale space ASS_O of the optical image based on a first gradient of the optical image, and establishing an anisotropic scale space ASS_S of the SAR image based on a first gradient of the SAR image;

in the anisotropic scale space ASS_O, establishing a first Hessian matrix point by point on the optical image to generate a first Harris scale space, and in the anisotropic scale space ASS_S, establishing a second Hessian matrix point by point on the SAR image to generate a second Harris scale space;

performing feature point detection layer by layer in the first Harris scale space of the optical image and the second Harris scale space of the SAR image respectively, assigning a main direction to each detected feature point, and then calculating the feature vector of each feature point;

determining a reference image and an image to be registered from the optical image and the SAR image; for each feature point P_O in the reference image, calculating the Euclidean distance between its feature vector and that of every feature point in the image to be registered, and recording the feature point pairs whose Euclidean distances satisfy a first preset condition into a coarse matching point pair set CM;

after removing erroneous points from the coarse matching point pair set CM, performing first-level matching on the remaining feature points in CM to form a fine matching point pair set FM and determine a transformation parameter θ;

generating a high-confidence matching point pair set C_sample and a low-confidence matching point pair set C_total based on the fine matching point pair set FM, and performing second-level matching on C_sample and C_total to form a final matching point pair set FM_final;

estimating a transformation parameter θ_final between the optical image and the SAR image based on the final matching point pair set FM_final, and transforming the image to be registered into the coordinate system of the reference image according to θ_final.
In one embodiment of the present invention, before the step of filtering the optical image and the SAR image respectively using nonlinear diffusion equations, the method further comprises:
calculating a first gradient of each target pixel point in the optical image by using a Sobel operator according to the following formula:

    A_O(x, y) = sqrt( G_Ox^2 + G_Oy^2 ),   φ_O(x, y) = arctan( G_Oy / G_Ox )

    G_Ox = S_x ⊗ L_G(β_j),   G_Oy = S_y ⊗ L_G(β_j)

wherein A_O(x, y) and φ_O(x, y) respectively denote the amplitude and the direction of the first gradient of each target pixel point in the optical image, G_Ox and G_Oy respectively denote the first gradient values of each target pixel point in the horizontal and vertical directions, L_G(β_j) denotes the Gaussian-smoothed intensity image of the optical image, S_x and S_y respectively denote the Sobel templates in the horizontal and vertical directions, β_j is the scale of the optical image, and j indexes the scale layer in the scale space;
calculating the local exponential weighted average ratios of each target pixel point in the SAR image in the horizontal and vertical directions by using the adaptive ROEWA operator, and calculating the first gradient of each target pixel point in the SAR image from these ratios according to the following formula:

    A_S(x, y) = sqrt( G_Sx^2 + G_Sy^2 ),   φ_S(x, y) = arctan( G_Sy / G_Sx )

    G_Sx = log( R_x(α_i) ),   G_Sy = log( R_y(α_i) )

wherein A_S(x, y) and φ_S(x, y) respectively denote the amplitude and the direction of the first gradient of each target pixel point in the SAR image, R_x(α_i) and R_y(α_i) respectively denote the local exponential weighted average ratios of each target pixel point in the horizontal and vertical directions, G_Sx and G_Sy respectively denote the first gradient values of each target pixel point in the horizontal and vertical directions, and α_i is the scale of the scale layer in which the target point of the SAR image lies.
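The two gradient modes above can be sketched as follows. This is a minimal illustration, not the patent's exact implementation: the Sobel branch uses the standard 3x3 templates, and the SAR branch replaces the exponentially weighted ROEWA means with plain box means before taking the log-ratio; the function names and the window parameter `half` are assumptions.

```python
import numpy as np

# 3x3 Sobel templates (S_x, S_y in the text)
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def correlate2d(img, k):
    """Naive 'same' 2-D cross-correlation with replicated borders."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def optical_gradient(img):
    """Sobel gradient amplitude and direction for the optical image."""
    gx = correlate2d(img, SOBEL_X)
    gy = correlate2d(img, SOBEL_Y)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def sar_gradient(img, half=3, eps=1e-9):
    """ROEWA-style log-ratio gradient for the SAR image: ratios of local
    means taken on opposite sides of each pixel, then a logarithm, which
    turns multiplicative speckle into an additive disturbance."""
    h, w = img.shape
    p = np.pad(img, half, mode="edge")
    m_r = np.zeros((h, w)); m_l = np.zeros((h, w))
    m_d = np.zeros((h, w)); m_u = np.zeros((h, w))
    for dy in range(-half, half + 1):
        for dx in range(1, half + 1):
            m_r += p[half + dy:half + dy + h, half + dx:half + dx + w]
            m_l += p[half + dy:half + dy + h, half - dx:half - dx + w]
            m_d += p[half + dx:half + dx + h, half + dy:half + dy + w]
            m_u += p[half - dx:half - dx + h, half + dy:half + dy + w]
    gx = np.log((m_r + eps) / (m_l + eps))
    gy = np.log((m_d + eps) / (m_u + eps))
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

On a constant SAR patch the opposite-side means are equal, so the log-ratio gradient is exactly zero regardless of the absolute intensity, which is the point of the ratio formulation under multiplicative speckle.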
In one embodiment of the present invention, the step of establishing, in the anisotropic scale space ASS_O, a first Hessian matrix point by point on the optical image to generate a first Harris scale space, and establishing, in the anisotropic scale space ASS_S, a second Hessian matrix point by point on the SAR image to generate a second Harris scale space, comprises the following steps:

in the anisotropic scale space ASS_O, calculating a second gradient of the optical image filtered by the nonlinear diffusion equation, and establishing a first Hessian matrix point by point from the second gradient of each scale layer:

    M_O(β_i) = g(β_i) ⊗ [ L_Ox^2        L_Ox · L_Oy ]
                        [ L_Ox · L_Oy   L_Oy^2      ]

wherein L_Ox and L_Oy respectively denote the second gradient values of each target pixel point of the filtered optical image in the horizontal and vertical directions, g(β_i) denotes a Gaussian kernel of variance β_i, and ⊗ denotes the convolution operation;

according to the first Hessian matrix M_O(β_i), generating the first Harris scale space:

    R_O(β_i) = det( M_O(β_i) ) - d · tr( M_O(β_i) )^2

wherein det denotes the matrix determinant, d denotes the corner detection factor, tr denotes the trace of the matrix, and R_O(β_i) denotes the Harris scale space generated over the scale sequence {β_1, ..., β_n};

in the anisotropic scale space ASS_S, calculating a second gradient of the SAR image filtered by the nonlinear diffusion equation, and establishing a second Hessian matrix point by point from the second gradient of each scale layer:

    M_S(α_i) = g(σ_i) ⊗ [ L_Sx^2        L_Sx · L_Sy ]
                        [ L_Sx · L_Sy   L_Sy^2      ]

wherein L_Sx and L_Sy respectively denote the second gradient values of each target pixel point of the filtered SAR image in the horizontal and vertical directions, and g(σ_i) denotes a Gaussian kernel of variance σ_i, with σ_i = α_i;

according to the second Hessian matrix M_S(α_i), generating the second Harris scale space:

    R_S(α_i) = det( M_S(α_i) ) - d · tr( M_S(α_i) )^2

wherein R_S(α_i) denotes the Harris scale space generated over the scale sequence {α_1, ..., α_n}.
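The Harris response R = det(M) - d·tr(M)^2 over a Gaussian-weighted second-moment matrix can be sketched as below. This is a minimal single-scale version, assuming the gradient images gx and gy are already available; the helper names are hypothetical.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 2-D Gaussian kernel of scale sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

def smooth(a, g):
    """'Same' cross-correlation of image a with kernel g, replicated borders."""
    kh, kw = g.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(a, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(a, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += g[i, j] * p[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def harris_response(gx, gy, sigma=1.6, d=0.04):
    """R = det(M) - d * tr(M)^2, with M the Gaussian-weighted
    second-moment matrix built from the gradient images gx, gy."""
    g = gaussian_kernel(sigma)
    a = smooth(gx * gx, g)   # M_11
    b = smooth(gx * gy, g)   # M_12 = M_21
    c = smooth(gy * gy, g)   # M_22
    return (a * c - b * b) - d * (a + c) ** 2
```

The response is strongly positive only where gradients of both orientations coexist within the Gaussian window, i.e. at corner-like structures, which is why feature points are taken at its local maxima across the scale layers.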
In one embodiment of the invention, the feature vector of a feature point is calculated as follows:

establishing a circular neighborhood with the feature point as the center and a preset length as the radius, and establishing a log-polar coordinate system with the feature point as the pole;

dividing the circular neighborhood into three parts along the radial direction, and dividing the resulting inner circle and two outer rings into 8 equal parts along the angular direction, forming 24 sub-regions;

taking the feature point as the origin and its main direction as the positive direction of the horizontal axis, rotating the pixels of each sub-region into the corresponding rectangular coordinate system;

dividing 180 degrees into 8 equal orientation bins, and accumulating the gradient directions within each sub-region into these bins with the mapped diffusion function as the weight, yielding a one-dimensional feature vector of length 192 (24 sub-regions × 8 bins) for each feature point.
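A minimal sketch of the 24-sub-region, 192-dimensional descriptor described above, assuming the per-pixel polar coordinates, rotated gradient directions and diffusion-function weights have already been computed. The binning details (uniform radial rings rather than true log-polar rings, hard assignment instead of interpolation) are simplifying assumptions.

```python
import numpy as np

def diffusion_descriptor(weights, grad_dirs, radii, thetas, r_max):
    """192-D descriptor for one keypoint: 3 radial rings x 8 angular
    sectors x 8 orientation bins.

    weights   : per-pixel weight (here: the mapped diffusion function value)
    grad_dirs : per-pixel gradient direction, already rotated into the
                keypoint's main-orientation frame
    radii, thetas : polar coordinates of each pixel about the keypoint
    """
    desc = np.zeros((3, 8, 8))
    ring = np.minimum((radii / r_max * 3).astype(int), 2)            # radial part
    sector = ((thetas % (2 * np.pi)) / (2 * np.pi) * 8).astype(int) % 8
    obin = ((grad_dirs % np.pi) / np.pi * 8).astype(int) % 8         # 180 deg / 8
    np.add.at(desc, (ring, sector, obin), weights)                   # weighted vote
    v = desc.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The final L2 normalization is what the text calls "obtaining the feature descriptor after normalization"; it makes the descriptor invariant to a global scaling of the weights.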
In one embodiment of the invention, the step of determining a reference image and an image to be registered from the optical image and the SAR image and, for each feature point P_O in the reference image, calculating the Euclidean distances to the feature points of the image to be registered and recording the feature point pairs whose Euclidean distances satisfy a first preset condition into a coarse matching point pair set CM, comprises:

taking one of the optical image and the SAR image as the reference image and the other as the image to be registered, selecting a feature point P_O from the feature point set of the reference image, and calculating the Euclidean distance between the feature vector of P_O and the feature vector of each feature point in the image to be registered;

when the minimum value among the Euclidean distances satisfies the first preset condition, recording the corresponding feature point pair into the coarse matching point pair set CM.
In one embodiment of the present invention, the first preset condition is:

    ED_1 / ED_2 ≤ Thres

wherein ED_1 and ED_2 respectively denote the minimum and the second-minimum value among the Euclidean distances, and Thres is a preset threshold.
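The nearest-neighbour search with the ED_1/ED_2 ratio test can be sketched as follows; the threshold value 0.8 and the function name are assumptions, not values from the patent.

```python
import numpy as np

def ratio_match(desc_ref, desc_mov, thres=0.8):
    """Nearest-neighbour matching under the ED_1 / ED_2 <= Thres ratio
    test; returns (i, j) index pairs accepted into the coarse set CM."""
    cm = []
    for i, d in enumerate(desc_ref):
        dist = np.linalg.norm(desc_mov - d, axis=1)
        j = int(np.argmin(dist))
        ed1 = float(dist[j])
        dist[j] = np.inf               # exclude the nearest neighbour
        ed2 = float(dist.min())        # second-nearest distance
        if ed2 > 0 and ed1 / ed2 <= thres:
            cm.append((i, j))
    return cm
```

Requiring the nearest match to be markedly closer than the second-nearest one rejects ambiguous descriptors, which matters most for repetitive textures in remote sensing scenes.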
In one embodiment of the present invention, the step of removing erroneous points from the coarse matching point pair set CM and then performing first-level matching on the remaining feature points in CM to form a fine matching point pair set FM and determine a transformation parameter θ comprises:

selecting any two feature point pairs [P_k, Q_k] and [P_l, Q_l] from the coarse matching point pair set CM, and calculating the ratio D_kl of the Euclidean distance corresponding to [P_k, Q_k] to the Euclidean distance corresponding to [P_l, Q_l];

after traversing all feature point pairs, performing histogram statistics on D_kl;

removing the feature point pairs corresponding to the minimum value in the histogram, and calculating the root mean square error of the Euclidean distances of the remaining feature point pairs;

detecting whether the root mean square error satisfies a second preset condition; if not, returning to the step of selecting any two feature point pairs [P_k, Q_k] and [P_l, Q_l] from the coarse matching point pair set CM; if yes, taking the point set formed by the remaining feature point pairs as the SC-CM point set pair;

performing first-level matching on the SC-CM point set pair by using a cascade sample consensus estimation algorithm to obtain the fine matching point set pair FM and the transformation parameter θ.
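One possible reading of the histogram-based outlier removal above is sketched below: pairwise distance ratios D_kl are histogrammed, the matches voting for the least-populated bin are discarded, and the loop stops once the ratios are tightly clustered. The bin count, tolerance and voting rule are assumptions, not the patent's exact procedure.

```python
import numpy as np

def prune_by_distance_ratio(P, Q, rmse_tol=0.05, bins=10):
    """Iteratively drop match pairs that vote for the least-populated bin
    of the pairwise distance-ratio histogram D_kl = |P_k-P_l| / |Q_k-Q_l|.
    P, Q : (n, 2) matched coordinates in the reference / moving image."""
    keep = np.arange(len(P))
    while len(keep) > 4:
        p, q = P[keep], Q[keep]
        dp = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
        dq = np.linalg.norm(q[:, None] - q[None, :], axis=-1)
        iu = np.triu_indices(len(keep), k=1)
        r = dp[iu] / np.maximum(dq[iu], 1e-12)
        if np.sqrt(np.mean((r - r.mean()) ** 2)) <= rmse_tol:
            break                      # ratios consistent: stop pruning
        hist, edges = np.histogram(r, bins=bins)
        worst = int(np.argmin(hist + (hist == 0) * len(r)))  # least-populated non-empty bin
        in_worst = (r >= edges[worst]) & (r <= edges[worst + 1])
        votes = np.zeros(len(keep))
        np.add.at(votes, iu[0][in_worst], 1)
        np.add.at(votes, iu[1][in_worst], 1)
        if votes.max() == 0:
            break
        keep = keep[votes < votes.max()]
    return keep
```

The rationale is that under a similarity transform all inlier pairs share one distance ratio, so outliers populate sparse histogram bins and can be peeled off before the consensus stage.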
In one embodiment of the present invention, the step of performing first-level matching on the SC-CM point set pair by using the cascade sample consensus estimation algorithm to obtain the fine matching point set pair FM and the transformation parameter θ comprises:

randomly selecting three feature point pairs from the SC-CM point set and calculating the current transformation parameters;

transforming the feature points of the reference image in the coarse matching point pair set CM with the current transformation parameters, and calculating the error with respect to the corresponding feature points of the image to be registered in CM;

counting the number num of feature points whose error is smaller than a preset threshold;

detecting whether the preset number of iterations has been reached; if not, returning to the step of randomly selecting three feature point pairs from the SC-CM point set and calculating the current transformation parameters; if yes, determining the maximum value of the feature point number num over all iterations;

taking the feature point pairs corresponding to the maximum value of num to form the fine matching point set pair FM, and taking the corresponding transformation parameters as the transformation parameter θ.
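The consensus loop above is essentially RANSAC with a 3-pair affine model. A compact sketch follows; the affine model, pixel tolerance and iteration count are assumptions.

```python
import numpy as np

def estimate_affine(P, Q):
    """Least-squares affine parameters theta (3x2): Q ~= [P | 1] @ theta."""
    A = np.hstack([P, np.ones((len(P), 1))])
    theta, *_ = np.linalg.lstsq(A, Q, rcond=None)
    return theta

def ransac_affine(P, Q, n_iter=200, tol=3.0, seed=0):
    """RANSAC-style consensus: fit an affine model to 3 random pairs,
    keep the model with the most inliers (reprojection error < tol),
    then refit on the full consensus set (the fine match set FM)."""
    rng = np.random.default_rng(seed)
    A = np.hstack([P, np.ones((len(P), 1))])
    best = np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P), size=3, replace=False)
        theta = estimate_affine(P[idx], Q[idx])
        err = np.linalg.norm(A @ theta - Q, axis=1)   # per-pair residual
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    theta = estimate_affine(P[best], Q[best]) if best.sum() >= 3 else None
    return theta, best
```

The final least-squares refit on the whole consensus set is what makes the returned θ more accurate than any single 3-pair hypothesis.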
In one embodiment of the invention, the step of performing second-level matching on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total to form the final matching point pair set FM_final comprises the following steps:

mapping each feature point P_I of the reference image in the fine matching point pair set FM into the image to be registered with the transformation parameter θ, and detecting whether a nearest feature point Q_I exists in the corresponding sub-region of the image to be registered;

if yes, recording the feature point pair [P_I, Q_I] into both the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total; if not, searching for the nearest feature point Q_J in the image to be registered and recording the feature point pair into the low-confidence matching point pair set C_total only;

performing second-level matching on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total by using the cascade sample consensus estimation algorithm to form the final matching point pair set FM_final.
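The split into C_sample and C_total can be sketched as below, with a simple radius test standing in for the patent's sub-region search; the radius value and the affine form of θ are assumptions.

```python
import numpy as np

def confidence_sets(P_ref, Q_mov, theta, radius=5.0):
    """Map each reference feature through theta (3x2 affine) and look for
    the nearest moving-image feature; matches found within `radius`
    pixels go into C_sample, every nearest-neighbour pair into C_total."""
    C_sample, C_total = [], []
    mapped = np.hstack([P_ref, np.ones((len(P_ref), 1))]) @ theta
    for i, m in enumerate(mapped):
        d = np.linalg.norm(Q_mov - m, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius:
            C_sample.append((i, j))    # high-confidence match
        C_total.append((i, j))         # always kept as low-confidence candidate
    return C_sample, C_total
```

The high-confidence set then seeds the second consensus pass, which can recover additional correct matches from the low-confidence set that the first-level matching missed.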
Compared with the prior art, the invention has the following beneficial effects:

the multi-source remote sensing image registration method based on anisotropic diffusion description provided by the embodiment of the invention uses the diffusion function of the nonlinear diffusion equation to describe the feature points and their neighborhoods to obtain the feature descriptors, which reduces the radiation characteristic differences and noise interference between heterogeneous images and enhances the independence and robustness of the feature descriptors.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flowchart of a multi-source remote sensing image registration method based on anisotropic diffusion description provided by an embodiment of the invention;
FIG. 2 is another flowchart of a multi-source remote sensing image registration method based on anisotropic diffusion description provided by an embodiment of the present invention;
FIG. 3a is an example of a SAR image provided by an embodiment of the present invention;
FIG. 3b shows the gradient modulus of the SAR image shown in FIG. 3a according to an embodiment of the present invention;
FIG. 3c shows the diffusion coefficient of the SAR image shown in FIG. 3a according to an embodiment of the present invention;
FIG. 3d shows the diffusion function mapping result of the SAR image shown in FIG. 3a according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of feature vector calculation according to an embodiment of the present invention;
FIG. 5a is a diagram of an example of an optical image provided by an embodiment of the present invention;
FIG. 5b is another example of a SAR image provided by an embodiment of the present invention;
FIG. 6a is a graph showing the detection result of the feature points of the optical image shown in FIG. 5a according to an embodiment of the present invention;
fig. 6b is a feature point detection result of the SAR image shown in fig. 5b according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a feature matching result of the optical-SAR image pair shown in FIGS. 5a and 5b provided by an embodiment of the present invention;
FIG. 8a is a schematic diagram of image registration results provided by an embodiment of the present invention;
fig. 8b is a partial schematic view of the image registration result shown in fig. 8a provided by an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Fig. 1 is a flowchart of a multi-source remote sensing image registration method based on anisotropic diffusion description according to an embodiment of the present invention. As shown in fig. 1, an embodiment of the present invention provides a multi-source remote sensing image registration method based on anisotropic diffusion description, including:
S1, acquiring an optical image and a SAR image to be registered;

S2, filtering the optical image and the SAR image respectively by means of a nonlinear diffusion equation, then establishing an anisotropic scale space ASS_O of the optical image based on a first gradient of the optical image, and establishing an anisotropic scale space ASS_S of the SAR image based on a first gradient of the SAR image;

S3, in the anisotropic scale space ASS_O, establishing a first Hessian matrix point by point on the optical image to generate a first Harris scale space, and in the anisotropic scale space ASS_S, establishing a second Hessian matrix point by point on the SAR image to generate a second Harris scale space;

S4, performing feature point detection layer by layer in the first Harris scale space of the optical image and the second Harris scale space of the SAR image respectively, assigning a main direction to each detected feature point, and then calculating the feature vector of each feature point;

S5, determining a reference image and an image to be registered from the optical image and the SAR image; for each feature point P_O in the reference image, calculating the Euclidean distance between its feature vector and that of every feature point in the image to be registered, and recording the feature point pairs whose Euclidean distances satisfy a first preset condition into a coarse matching point pair set CM;

S6, after removing erroneous points from the coarse matching point pair set CM, performing first-level matching on the remaining feature points in CM to form a fine matching point pair set FM and determine a transformation parameter θ;

S7, generating a high-confidence matching point pair set C_sample and a low-confidence matching point pair set C_total based on the fine matching point pair set FM, and performing second-level matching on C_sample and C_total to form a final matching point pair set FM_final;

S8, estimating a transformation parameter θ_final between the optical image and the SAR image based on the final matching point pair set FM_final, and transforming the image to be registered into the coordinate system of the reference image according to θ_final.
Fig. 2 is another flowchart of a multi-source remote sensing image registration method based on anisotropic diffusion description according to an embodiment of the present invention. As shown in fig. 2, before the step of filtering the optical image and the SAR image by using nonlinear diffusion equations, the method further includes:
calculating a first gradient of each target pixel point in the optical image by using a Sobel operator according to the following formula:

    A_O(x, y) = sqrt( G_Ox^2 + G_Oy^2 ),   φ_O(x, y) = arctan( G_Oy / G_Ox )

    G_Ox = S_x ⊗ L_G(β_j),   G_Oy = S_y ⊗ L_G(β_j)

wherein A_O(x, y) and φ_O(x, y) respectively denote the amplitude and the direction of the first gradient of each target pixel point in the optical image (for convenience of calculation, the Sobel operator is simplified to a rectangular window in this embodiment), G_Ox and G_Oy respectively denote the first gradient values of each target pixel point in the horizontal and vertical directions, L_G(β_j) denotes the Gaussian-smoothed intensity image of the optical image, S_x and S_y respectively denote the Sobel templates in the horizontal and vertical directions, β_j is the scale of the optical image, and j indexes the scale layer in the scale space;
calculating the local exponential weighted average ratios of each target pixel point in the SAR image in the horizontal and vertical directions by using the adaptive ROEWA operator, and calculating the first gradient of each target pixel point in the SAR image from these ratios according to the following formula:

    A_S(x, y) = sqrt( G_Sx^2 + G_Sy^2 ),   φ_S(x, y) = arctan( G_Sy / G_Sx )

    G_Sx = log( R_x(α_i) ),   G_Sy = log( R_y(α_i) )

wherein A_S(x, y) and φ_S(x, y) respectively denote the amplitude and the direction of the first gradient of each target pixel point in the SAR image, R_x(α_i) and R_y(α_i) respectively denote the local exponential weighted average ratios of each target pixel point in the horizontal and vertical directions, G_Sx and G_Sy respectively denote the first gradient values of each target pixel point in the horizontal and vertical directions, and α_i is the scale of the scale layer in which the target point of the SAR image lies.
In the above formula, the local exponential weighted average ratios of each target pixel point in the SAR image in the horizontal and vertical directions are:

    R_x(α_i) = m_r / m_l,   R_y(α_i) = m_u / m_d

wherein m_loc (loc = r, d, l, u, denoting the right, lower, left and upper directions respectively) denotes the local exponential weighted average of the image intensity over the calculation window on the corresponding side of the pixel, M and N respectively denote the numbers of rows and columns of the calculation window, the window size being proportional to α_i, I(x, y) denotes the image intensity at coordinates (x, y) of the SAR image, and the adaptive constants in the horizontal and vertical directions control the exponential weighting.
In this embodiment, the first gradient values of the target pixel points of the SAR image in the horizontal and vertical directions are obtained by taking the logarithm of the corresponding ratios, which removes the interference that the speckle noise produced by the special SAR imaging mechanism would otherwise cause in the subsequent processing of the image.
In addition, in order to keep the scale spaces of the optical image and the SAR image consistent, α_i and β_i must satisfy the following relations when the first gradients of the two images are calculated:

    α_{i+1} / α_i = k

    β_{i+1} / β_i = k

    α_1 = β_1

where k denotes the scale ratio between two adjacent scale layers of the scale space.
In the above step S2, the optical image and the SAR image are each filtered with the nonlinear diffusion equation:

    ∂L/∂t = div( c(x, y, t) · ∇L )

wherein L denotes the input image, i.e. the optical image or the SAR image, t denotes the current scale, c(x, y, t) is the diffusion function, and div(·) and ∇ respectively denote the divergence and gradient operators. The diffusion function c(x, y, t) is a function of the image gradient modulus, and different forms can be chosen depending on which image content is of interest, for example the two classical edge-stopping functions:

    c_1(x, y, t) = exp( -( |∇L| / K )^2 )

    c_2(x, y, t) = 1 / ( 1 + ( |∇L| / K )^2 )

wherein ∇L denotes the first gradient of the input image and K is a constant.
For the nonlinear diffusion equation, an additive operator splitting (AOS) scheme can be employed:

    L^(i+1) = (1/m) · Σ_{l=1..m} ( E - m · τ · A_l(L^i) )^(-1) · L^i

wherein m is the number of splitting directions, τ is the time step, E is the identity matrix and A_l(L^i) is the diffusion matrix in the l-th coordinate direction. This finally yields the anisotropic scale space ASS_O of the optical image and the anisotropic scale space ASS_S of the SAR image.
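A minimal explicit-scheme stand-in for the nonlinear diffusion filtering: the patent solves the equation with an operator-splitting scheme, whereas the simple explicit Perona-Malik iteration below is used only for illustration, with the exponential diffusion function and the parameters tau, K and n_steps as assumptions.

```python
import numpy as np

def perona_malik(img, n_steps=20, tau=0.15, K=0.1):
    """Explicit Perona-Malik anisotropic diffusion: smooths weakly
    contrasted (noisy) regions while the edge-stopping coefficient
    c = exp(-(|grad L| / K)^2) shuts diffusion down across edges."""
    L = img.astype(float).copy()
    for _ in range(n_steps):
        # one-sided differences towards the four neighbours (replicated borders)
        dN = np.vstack([L[:1], L[:-1]]) - L
        dS = np.vstack([L[1:], L[-1:]]) - L
        dW = np.hstack([L[:, :1], L[:, :-1]]) - L
        dE = np.hstack([L[:, 1:], L[:, -1:]]) - L
        L = L + tau * (np.exp(-(dN / K) ** 2) * dN + np.exp(-(dS / K) ** 2) * dS
                       + np.exp(-(dW / K) ** 2) * dW + np.exp(-(dE / K) ** 2) * dE)
    return L
```

On a noisy step image the iteration attenuates the low-contrast noise (whose differences are small relative to K) while the step edge, whose gradient drives the coefficient towards zero, survives almost unchanged; this is exactly the "filter noise, keep the edge" behaviour the scale space relies on.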
In the above step S3, in the anisotropic scale space ASS O In, a first Hessian matrix is established point by point on the optical image to generate a first Harris scale space and an anisotropic scale space ASS is generated S In the step of establishing a second Hessian matrix point by point for the SAR image to generate a second Harris scale space, comprising:
S301, in the anisotropic scale space ASS_S, compute the second gradient of the SAR image filtered by the nonlinear diffusion equation, and establish the first Hessian matrix point by point from the second gradient in each scale layer:

M_S(α_i) = g(σ_i) ∗ [G_x·G_x, G_x·G_y; G_x·G_y, G_y·G_y]

where G_x and G_y respectively denote the second gradient values of each target pixel point of the filtered SAR image in the horizontal and vertical directions, g(σ_i) denotes a Gaussian kernel with variance σ_i, σ_i = α_i, and ∗ denotes the convolution operation;
S302, generate the first Harris scale space from the first Hessian matrix M_S(α_i):

R_S(α_i) = det(M_S(α_i)) − d·tr(M_S(α_i))²

where det denotes the matrix determinant, d denotes the corner detection factor, typically 0.02-0.04, tr denotes the matrix trace, and R_S(α_i) denotes the Harris scale space generated over the scale sequence {α_1, …, α_n};
S303, in the anisotropic scale space ASS_O, compute the second gradient of the optical image filtered by the nonlinear diffusion equation, and establish the second Hessian matrix point by point from the second gradient in each scale layer:

M_O(β_i) = g(σ_i) ∗ [G_x·G_x, G_x·G_y; G_x·G_y, G_y·G_y]

where G_x and G_y respectively denote the second gradient values of each target pixel point of the filtered optical image in the horizontal and vertical directions, and g(σ_i) denotes a Gaussian kernel with variance σ_i = β_i;
S304, generate the second Harris scale space from the second Hessian matrix M_O(β_i):

R_O(β_i) = det(M_O(β_i)) − d·tr(M_O(β_i))²

where R_O(β_i) denotes the Harris scale space generated over the scale sequence {β_1, …, β_n}.
It should be noted that, the calculation process of the second gradient of the SAR image filtered by the nonlinear diffusion equation is the same as the calculation manner of the first gradient of the SAR image, and the calculation process of the second gradient of the optical image filtered by the nonlinear diffusion equation is the same as the calculation manner of the first gradient of the optical image, so that the description thereof will not be repeated here.
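Steps S301-S304 can be sketched as follows, assuming the Hessian-style matrix is the Gaussian-smoothed per-pixel gradient auto-correlation matrix (the gradients here come from np.gradient and the values of sigma and d are illustrative; the patent derives gradients with the Sobel and Adaptive ROEWA operators):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(gx, gy, sigma, d=0.04):
    """R = det(M) - d * tr(M)^2, where M is the per-pixel matrix
    [gx^2, gx*gy; gx*gy, gy^2] smoothed by a Gaussian of scale sigma."""
    Ixx = gaussian_filter(gx * gx, sigma)
    Iyy = gaussian_filter(gy * gy, sigma)
    Ixy = gaussian_filter(gx * gy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    tr = Ixx + Iyy
    return det - d * tr ** 2

# toy check: the response peaks near the corner of a bright quadrant
img = np.zeros((21, 21))
img[10:, 10:] = 1.0
gy, gx = np.gradient(img)
R = harris_response(gx, gy, sigma=1.5)
i, j = np.unravel_index(np.argmax(R), R.shape)
assert 8 <= i <= 12 and 8 <= j <= 12
```

Computing this response layer by layer over the scale sequence gives the Harris scale space in which feature points are then detected.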
In step S4, to ensure that the detected feature points are uniformly distributed over the two images, this embodiment divides each image into 5×5 equal-size regions before detection, and within each region selects the target pixel point with the largest corner response value as a preliminary point. Local maximum detection is then performed on each preliminary point, and the target pixel points that pass this detection are the feature points. In addition, to ensure that feature points can still be obtained in flat areas, this embodiment also performs detection at the grid intersection points.
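The block-wise selection of preliminary points can be sketched as follows (the 5×5 grid comes from the text; the subsequent local-maximum test and grid-intersection detection are omitted):

```python
import numpy as np

def blockwise_peaks(response, grid=5):
    """In each cell of a grid x grid partition of the corner-response map,
    pick the pixel with the largest response (the 'preliminary points'),
    so detected features spread evenly over the image."""
    h, w = response.shape
    peaks = []
    for bi in range(grid):
        for bj in range(grid):
            r0, r1 = bi * h // grid, (bi + 1) * h // grid
            c0, c1 = bj * w // grid, (bj + 1) * w // grid
            block = response[r0:r1, c0:c1]
            i, j = np.unravel_index(np.argmax(block), block.shape)
            peaks.append((r0 + i, c0 + j))
    return peaks
```

For a monotonically increasing response map, each cell's peak sits at its bottom-right pixel, which gives a simple way to sanity-check the partition.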
Similarly, when calculating the principal direction of a feature point, a circular area around the feature point is used as its neighborhood, with a radius proportional to the scale of the scale layer in which the feature point lies. Because of the different imaging mechanisms of the optical and SAR images, the gradient direction for the same target may be reversed between the two imaging results. Therefore, when calculating the principal direction, all gradient directions in the feature point's neighborhood are normalized into [0°, 180°], this interval is divided into 18 subintervals, a gradient direction histogram is accumulated over the neighborhood, and Gaussian smoothing and interpolation around the histogram maximum then yield the principal direction of the feature point.
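A minimal sketch of this principal-direction computation: orientations are folded into [0°, 180°), an 18-bin magnitude-weighted histogram is built, and the peak bin's centre is returned (the Gaussian smoothing and interpolation around the peak described above are omitted here):

```python
import numpy as np

def main_orientation(mag, ang_deg, n_bins=18):
    """Dominant orientation of a neighborhood from gradient magnitudes and
    directions (degrees).  Folding into [0, 180) absorbs the gradient
    reversal between optical and SAR imagery."""
    folded = np.mod(ang_deg, 180.0)
    hist, edges = np.histogram(folded, bins=n_bins, range=(0, 180), weights=mag)
    peak = np.argmax(hist)
    return (edges[peak] + edges[peak + 1]) / 2.0  # bin-centre orientation

# gradients at 225 degrees fold to 45 degrees, bin centre 45
assert main_orientation(np.ones(50), np.full(50, 225.0)) == 45.0
```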
Specifically, in this embodiment, the feature vector of the feature point may be calculated according to the following steps:
establishing a circular neighborhood with the feature point as centre and a preset length as radius, and establishing a log-polar coordinate system with the feature point as pole;
dividing the circular neighborhood into three parts along the radial direction, and dividing the resulting inner circle and two outer rings each into 8 equal angular parts, forming 24 sub-regions;
establishing a rectangular coordinate system with the feature point as origin and the principal direction of the feature point as the positive horizontal axis, and rotating the pixel points of each sub-region into this coordinate system;
dividing 180° into 8 equal parts and, within each sub-region, accumulating over the pixel points with the mapped diffusion function as the weight and the gradient direction as the bin index, yielding for each feature point a one-dimensional feature vector of length 24 × 8 = 192.
Fig. 3a is an example SAR image provided by an embodiment of the present invention; Fig. 3b shows the gradient modulus of the SAR image of Fig. 3a; Fig. 3c shows its diffusion coefficient; and Fig. 3d shows the mapping result of its diffusion function. As can be seen from Figs. 3a-3d, compared with the gradient modulus, the diffusion function characterizes the flat and textured regions of the SAR image more clearly and with higher discrimination.
Fig. 4 is a schematic diagram of feature vector calculation according to an embodiment of the present invention. Further, as shown in Fig. 4, when calculating the feature vector of a feature point, a circular neighborhood is first established centred on the feature point, with a radius proportional to the current layer scale, e.g. 12σ_i, and a log-polar coordinate system is established with the feature point as the pole. The circular neighborhood is then divided into 24 sub-regions: specifically, it is divided into three parts at radii 3σ_i and 8σ_i along the radial direction, and the inner circle and the two rings are each divided into 8 equal angular parts, giving the feature vector calculation neighborhood of 24 sub-regions.
To make each feature point invariant to direction, a rectangular coordinate system is established with the principal direction of the feature point as the positive horizontal axis and the feature point as the origin, and the coordinates of each pixel point in the feature point's neighborhood are computed in this coordinate system. Gradient histogram statistics are then accumulated over the points in each sub-region: 180° is divided into 8 equal parts, and the mapped diffusion function is accumulated as a weight with the gradient direction of each pixel point as the bin index, finally yielding a one-dimensional feature vector of length 192 for each feature point. To reduce the sensitivity of the feature points to illumination intensity, the feature vector can be normalized after computation, the large values caused by high intensities clipped, and the vector normalized again, so that the feature vector remains normalized.
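The clip-and-renormalize step could look like this (the clipping threshold 0.2 is an assumed value in the style of SIFT; the patent only states that large components caused by high intensity are limited):

```python
import numpy as np

def normalize_descriptor(v, clip=0.2):
    """Illumination normalization of a feature vector: normalize to unit
    length, clip large components, then renormalize."""
    v = np.asarray(v, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-12)
    v = np.minimum(v, clip)
    return v / (np.linalg.norm(v) + 1e-12)
```

A single dominant component is damped but the result stays a unit vector, which is what makes descriptor distances comparable across images.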
In the above step S5, the step of determining the reference image and the image to be registered from the optical image and the SAR image, computing, for a feature point P_O in the reference image, the Euclidean distance to each feature point in the image to be registered, and recording the feature point pairs whose Euclidean distances satisfy a first preset condition into a coarse matching point pair set CM, comprises:
taking one of the optical image and the SAR image as the reference image and the other as the image to be registered, selecting a feature point P_O from the feature point set of the reference image, and computing the Euclidean distance between the feature vector of P_O and the feature vector of each feature point in the image to be registered;
and when the minimum of these Euclidean distances satisfies the first preset condition, recording the corresponding feature point pair into the coarse matching point pair set CM.
In this embodiment, one of the optical image and the SAR image is taken as the reference image and the other as the image to be registered, and the feature points of the two images are first coarsely matched. Specifically, the feature point set of the reference image is denoted O, a feature point P_O is selected from it, and the Euclidean distances between feature vectors are computed point by point against the feature point set S of the image to be registered. If the minimum and second-minimum of these Euclidean distances satisfy the first preset condition

ED_1/ED_2 < Thres

then the feature point pair corresponding to the minimum is recorded into the coarse matching point pair set CM, where ED_1 and ED_2 respectively denote the minimum and second-minimum Euclidean distance between the feature vector of P_O and those of the feature points in S, and Thres is a preset threshold.
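A minimal sketch of this coarse matching with the nearest/second-nearest ratio test (brute-force distances; the threshold value 0.8 for Thres is an assumption):

```python
import numpy as np

def ratio_test_match(desc_ref, desc_tgt, thres=0.8):
    """Coarse matching: accept (i, j1) when the nearest neighbour distance
    ED1 is clearly smaller than the second-nearest ED2 (ED1/ED2 < thres)."""
    matches = []
    for i, d in enumerate(desc_ref):
        dist = np.linalg.norm(desc_tgt - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] / (dist[j2] + 1e-12) < thres:
            matches.append((i, j1))
    return matches
```

Ambiguous descriptors (two nearly equidistant candidates) are rejected, which is exactly what keeps CM from filling up with unstable matches.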
Further, feature point fine matching is performed on the basis of the coarse matching point pair set CM.
In the above step S6, after eliminating erroneous points in the coarse matching point pair set CM, the step of performing first-level matching on the remaining feature points in CM to form the fine matching point pair set FM and determine the transformation parameter θ comprises:
S601, selecting any two feature point pairs [P_k, Q_k], [P_l, Q_l] from the coarse matching point pair set CM, and computing the ratio D_kl of the Euclidean distance corresponding to [P_k, Q_k] to the Euclidean distance corresponding to [P_l, Q_l];
S602, after traversing all feature point pairs, performing histogram statistics on D_kl;
S603, eliminating the feature point pair corresponding to the minimum of the histogram, and computing the root mean square error of the Euclidean distances of the remaining feature point pairs;
S604, detecting whether the root mean square error satisfies a second preset condition; if not, returning to the step of selecting any two feature point pairs [P_k, Q_k], [P_l, Q_l] from the coarse matching point pair set CM; if so, forming the remaining feature point pairs into a scale-constrained matching point pair set SC-CM;
S605, performing first-level matching on the SC-CM point pair set by using the fast sample consensus (FSC) algorithm to obtain the fine matching point pair set FM and the transformation parameter θ.
First, scale constraint is applied to the coarse matching point pair set CM. Specifically, for any two feature point pairs [P_k, Q_k], [P_l, Q_l] in CM, the ratio D_kl of the Euclidean distances between them is computed, where P_k, P_l denote feature points in the reference image and Q_k, Q_l denote feature points in the image to be registered.
After traversing all possible feature point pair combinations, histogram statistics are performed on D_kl, with the D_kl bins on the horizontal axis and the accumulated number of point pairs per bin on the vertical axis; the feature point pair corresponding to the minimum of the histogram is removed, and the root mean square error (Root Mean Square Error, RMSE) at that point is computed. These steps are repeated until the variation of the RMSE lies within a preset range, or until only three feature point pairs remain.
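A simplified stand-in for this scale-constraint loop is sketched below. It reads D_kl as the ratio of spatial distances |P_k P_l| / |Q_k Q_l| (near-constant under a similarity transform) and replaces the histogram-minimum rule with a median-deviation criterion; the tolerance and stopping values are assumptions, so this illustrates the idea rather than the patent's exact procedure:

```python
import numpy as np

def scale_constraint(pts_ref, pts_tgt, tol=1e-3, min_pairs=3):
    """Iteratively discard the match whose pairwise distance ratios deviate
    most from the global median, until the ratio RMSE stabilises (tol) or
    only three pairs remain.  Like the patent's loop, an inlier may also be
    discarded just before the RMSE stabilises."""
    keep = list(range(len(pts_ref)))
    prev = None
    while len(keep) > min_pairs:
        med_ratio = []
        for k in keep:
            r = []
            for l in keep:
                if l == k:
                    continue
                dr = np.linalg.norm(pts_ref[k] - pts_ref[l])
                dt = np.linalg.norm(pts_tgt[k] - pts_tgt[l])
                r.append(dr / (dt + 1e-12))
            med_ratio.append(np.median(r))
        med_ratio = np.asarray(med_ratio)
        gmed = np.median(med_ratio)
        rmse = np.sqrt(np.mean((med_ratio - gmed) ** 2))
        if prev is not None and abs(prev - rmse) < tol:
            break
        prev = rmse
        keep.pop(int(np.argmax(np.abs(med_ratio - gmed))))
    return keep
```

On a toy set where four matches share a common scale and one is grossly inconsistent, the inconsistent match is removed first.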
After the scale-constraint processing, the scale-constrained matching point pair set SC-CM is obtained, and first-level matching is then performed using the fast sample consensus (Fast Sample Consensus, FSC) algorithm, with the following specific steps:
randomly selecting three characteristic point pairs from the SC-CM point set and calculating current transformation parameters;
transforming the characteristic points of the reference image in the coarse matching point pair set CM by using the current transformation parameters, and calculating errors according to the characteristic points of the image to be registered in the coarse matching point pair set CM;
counting the number num of feature point pairs whose error is smaller than a preset threshold;
detecting whether the preset iteration times are reached or not; if not, returning to the step of randomly selecting three characteristic point pairs from the SC-CM point set and calculating the current transformation parameters; if yes, counting the maximum value of the number num of the characteristic points;
and obtaining the feature point pairs corresponding to the maximum of num to form the fine matching point pair set FM, and taking the corresponding transformation parameters as the transformation parameter θ.
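The consensus loop above can be sketched as follows, fitting an affine model to three random pairs per iteration (the iteration count, inlier threshold and least-squares affine fit are assumed values and choices in the spirit of FSC, not the patent's exact implementation):

```python
import numpy as np

def affine_from_pairs(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst (>= 3 pairs).
    Returns T (3 x 2): [x, y, 1] @ T gives the mapped point."""
    A = np.hstack([src, np.ones((len(src), 1))])
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T

def fsc(src, dst, iters=500, thresh=1.0, seed=0):
    """Repeatedly fit a model to three random pairs, count pairs with
    reprojection error below thresh, keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers, best_T = np.array([], dtype=int), None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        T = affine_from_pairs(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ T
        err = np.linalg.norm(pred - dst, axis=1)
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_T = inliers, T
    return best_inliers, best_T
```

With two grossly perturbed matches among twenty, the best consensus set recovers exactly the eighteen consistent pairs.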
Further, second-level matching is performed on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total to form the final matching point pair set FM_final. This comprises the steps of:
mapping each feature point P_I of the reference image in the fine matching point pair set FM into the image to be registered according to the transformation parameter θ, and detecting whether a nearest feature point Q_I exists within a sub-region of the image to be registered;
if so, recording the feature point pair [P_I, Q_I] into the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total; if not, searching the image to be registered for the nearest feature point Q_J and recording the feature point pair into the low-confidence matching point pair set C_total;
and performing second-level matching on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total by using the fast sample consensus (FSC) algorithm to form the final matching point pair set FM_final.
From the final matching point pair set FM_final, the transformation parameter θ_final between the optical image and the SAR image can be estimated in combination with different image transformation models such as similarity, affine and projective transformations, and the image to be registered is then transformed into the coordinate system of the reference image according to θ_final, completing the registration of the two images. In general, to inspect the accuracy of registration, the two registered images can be drawn together in the form of a checkerboard mosaic, and the registration effect assessed by comparing edges, regions and the like.
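A sketch of this final step, using a least-squares affine model for θ_final (the affine model is one of the transform families the text lists) plus a checkerboard mosaic for visual inspection:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine theta_final from matched points: T is 3 x 2,
    and [x, y, 1] @ T maps a point into the reference frame."""
    A = np.hstack([src, np.ones((len(src), 1))])
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T

def apply_affine(T, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ T

def checkerboard_mosaic(img_a, img_b, cell=64):
    """Interleave two registered images in a checkerboard pattern so that
    edge/region continuity across cell borders reveals misregistration."""
    yy, xx = np.indices(img_a.shape[:2])
    mask = ((yy // cell + xx // cell) % 2).astype(bool)
    return np.where(mask, img_a, img_b)
```

Given a consistent match set, `estimate_affine` recovers the transform exactly; the mosaic is then built from the reference image and the warped image to be registered.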
Fig. 5a is a diagram of an example optical image provided by an embodiment of the present invention, and Fig. 5b is a diagram of another example SAR image provided by an embodiment of the present invention. To verify the performance of the descriptor adopted in the invention, optical-SAR image pairs from different scenes are selected for registration experiments, with the SIFT and OS-SIFT algorithms used for comparison. Taking a single image-pair experiment as an example, as shown in Figs. 5a-5b, the optical image used in this embodiment was collected from Google Earth and the SAR image from TerraSAR-X; both have a resolution of 3 m.
Fig. 6a is the feature point detection result for the optical image of Fig. 5a, and Fig. 6b is the feature point detection result for the SAR image of Fig. 5b. As shown in Figs. 6a-6b, the numbers of features in the two images are relatively close because the invention adopts a block-wise extraction method. Fig. 7 is a schematic diagram of the feature matching result for the optical-SAR image pair of Figs. 5a and 5b. The descriptor used in the invention has better distinctiveness and robustness, reflected in a higher number of matches and higher matching accuracy during feature matching. The comparison of the three image registration methods is shown in Table 1, where CMR is the correct match rate, reflecting the stability of the algorithm, and RMSE is the root mean square error, reflecting its accuracy. Fig. 8a is a schematic diagram of the image registration result, and Fig. 8b is a partial enlargement of the registration result of Fig. 8a. As can be seen from Figs. 8a-8b and Table 1, the registration method proposed by the present invention is superior to the other two existing algorithms in both stability and accuracy.
Table 1 Experimental results ('—' denotes registration failure)
According to the above embodiments, the beneficial effects of the invention are as follows:
The embodiment of the invention provides a multi-source remote sensing image registration method based on anisotropic diffusion description, which uses the diffusion function of the nonlinear diffusion equation to describe the feature points and their neighborhoods to obtain the feature descriptors; this reduces the radiometric differences and noise interference in heterogeneous images and enhances the distinctiveness and robustness of the feature descriptors.
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Further, one skilled in the art can engage and combine the different embodiments or examples described in this specification.
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the figures, the disclosure, and the appended claims.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (9)

1. The multi-source remote sensing image registration method based on anisotropic diffusion description is characterized by comprising the following steps of:
acquiring an optical image and an SAR image to be registered;
after filtering the optical image and the SAR image respectively by using a nonlinear diffusion equation, establishing an anisotropic scale space ASS_O of the optical image based on the first gradient of the optical image, and establishing an anisotropic scale space ASS_S based on the first gradient of the SAR image;
in the anisotropic scale space ASS_O, establishing a first Hessian matrix point by point on the optical image to generate a first Harris scale space, and in the anisotropic scale space ASS_S, establishing a second Hessian matrix point by point on the SAR image to generate a second Harris scale space;
respectively carrying out feature point detection layer by layer in a first Harris scale space of the optical image and a second Harris scale space of the SAR image, and calculating feature vectors of the feature points after distributing main directions for the detected feature points;
determining a reference image and an image to be registered from the optical image and the SAR image, computing, for a feature point P_O in the reference image, the Euclidean distance to each feature point in the image to be registered, and recording the feature point pairs whose Euclidean distances satisfy a first preset condition into a coarse matching point pair set CM;
after error points in the coarse matching point pair set CM are removed, carrying out primary matching on the residual characteristic points in the coarse matching point pair set CM to form a fine matching point pair set FM and determining a transformation parameter theta;
generating a high-confidence matching point pair set C_sample and a low-confidence matching point pair set C_total based on the fine matching point pair set FM, and performing second-level matching on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total to form a final matching point pair set FM_final;
estimating a transformation parameter θ_final between the optical image and the SAR image based on the final matching point pair set FM_final, and transforming the image to be registered into the coordinate system of the reference image according to the transformation parameter θ_final.
2. The anisotropic diffusion description-based multi-source remote sensing image registration method according to claim 1, further comprising, before the step of filtering the optical image and the SAR image respectively using nonlinear diffusion equations:
calculating a first gradient of each target pixel point in the optical image by using a Sobel operator, wherein the magnitude and direction of the first gradient of each target pixel point in the optical image are computed from its first gradient values in the horizontal and vertical directions, these being obtained by convolving the Gaussian-smoothed intensity image of the optical image with the Sobel templates in the horizontal and vertical directions, respectively; β_j is the scale of the optical image, and j denotes a scale layer in the scale space;
calculating local exponentially weighted average ratios of each target pixel point in the SAR image in the horizontal and vertical directions by using the Adaptive ROEWA operator, and calculating a first gradient of each target pixel point in the SAR image from these ratios, wherein the magnitude and direction of the first gradient of each target pixel point in the SAR image are computed from its first gradient values in the horizontal and vertical directions, and α_i is the scale of the scale layer in which the target point in the SAR image lies.
3. The method of multi-source remote sensing image registration based on anisotropic diffusion description according to claim 2, wherein the step of establishing, in the anisotropic scale space ASS_O, a first Hessian matrix point by point on the optical image to generate a first Harris scale space, and establishing, in the anisotropic scale space ASS_S, a second Hessian matrix point by point on the SAR image to generate a second Harris scale space, comprises:
in the anisotropic scale space ASS_S, computing a second gradient of the SAR image filtered by the nonlinear diffusion equation, and establishing a first Hessian matrix point by point from the second gradient in each scale layer:

M_S(α_i) = g(σ_i) ∗ [G_x·G_x, G_x·G_y; G_x·G_y, G_y·G_y]

wherein G_x and G_y respectively denote the second gradient values of each target pixel point of the filtered SAR image in the horizontal and vertical directions, g(σ_i) denotes a Gaussian kernel with variance σ_i, σ_i = α_i, and ∗ denotes the convolution operation;
generating a first Harris scale space according to the first Hessian matrix M_S(α_i):

R_S(α_i) = det(M_S(α_i)) − d·tr(M_S(α_i))²

wherein det denotes the matrix determinant, d denotes the corner detection factor, tr denotes the matrix trace, and R_S(α_i) denotes the Harris scale space generated over the scale sequence {α_1, …, α_n};
in the anisotropic scale space ASS_O, computing a second gradient of the optical image filtered by the nonlinear diffusion equation, and establishing a second Hessian matrix point by point from the second gradient in each scale layer:

M_O(β_i) = g(σ_i) ∗ [G_x·G_x, G_x·G_y; G_x·G_y, G_y·G_y]

wherein G_x and G_y respectively denote the second gradient values of each target pixel point of the filtered optical image in the horizontal and vertical directions, and g(σ_i) denotes a Gaussian kernel with variance σ_i = β_i;
generating a second Harris scale space according to the second Hessian matrix M_O(β_i):

R_O(β_i) = det(M_O(β_i)) − d·tr(M_O(β_i))²

wherein R_O(β_i) denotes the Harris scale space generated over the scale sequence {β_1, …, β_n}.
4. The multi-source remote sensing image registration method based on anisotropic diffusion description according to claim 1, wherein the feature vector of the feature point is calculated according to the following steps:
establishing a circular neighborhood with the feature point as centre and a preset length as radius, and establishing a log-polar coordinate system with the feature point as pole;
dividing the circular neighborhood into three parts along the radial direction, and dividing the resulting inner circle and two outer rings each into 8 equal angular parts, forming 24 sub-regions;
establishing a rectangular coordinate system with the feature point as origin and the principal direction of the feature point as the positive horizontal axis, and rotating the pixel points of each sub-region into this coordinate system;
and dividing 180° into 8 equal parts and, within each sub-region, accumulating over the pixel points with the mapped diffusion function as the weight and the gradient direction as the bin index, to obtain for each feature point a one-dimensional feature vector of length 192.
5. The anisotropic diffusion description-based multi-source remote sensing image registration method according to claim 1, wherein the step of determining a reference image and an image to be registered from the optical image and the SAR image, computing, for a feature point P_O in the reference image, the Euclidean distance to each feature point in the image to be registered, and recording the feature point pairs whose Euclidean distances satisfy a first preset condition into the coarse matching point pair set CM, comprises:
taking one of the optical image and the SAR image as the reference image and the other as the image to be registered, selecting a feature point P_O from the feature point set of the reference image, and computing the Euclidean distance between the feature vector of P_O and the feature vector of each feature point in the image to be registered;
and when the minimum of these Euclidean distances satisfies the first preset condition, recording the corresponding feature point pair into the coarse matching point pair set CM.
6. The anisotropic diffusion description-based multi-source remote sensing image registration method according to claim 5, wherein the first preset condition is:

ED_1/ED_2 < Thres

wherein ED_1 and ED_2 respectively denote the minimum and second-minimum of the Euclidean distances, and Thres is a preset threshold.
7. The method for registering multi-source remote sensing images based on anisotropic diffusion description according to claim 5, wherein the step of performing primary matching on the remaining feature points in the coarse matching point pair set CM after eliminating the error points in the coarse matching point pair set CM to form a fine matching point pair set FM and determining a transformation parameter θ comprises the steps of:
selecting any two feature point pairs [P_k, Q_k], [P_l, Q_l] from the coarse matching point pair set CM, and computing the ratio D_kl of the Euclidean distance corresponding to [P_k, Q_k] to the Euclidean distance corresponding to [P_l, Q_l];
after traversing all feature point pairs, performing histogram statistics on D_kl;
removing the feature point pair corresponding to the minimum of the histogram, and computing the root mean square error of the Euclidean distances of the remaining feature point pairs;
detecting whether the root mean square error satisfies a second preset condition; if not, returning to the step of selecting any two feature point pairs [P_k, Q_k], [P_l, Q_l] from the coarse matching point pair set CM; if so, taking the point set formed by the remaining feature point pairs as the SC-CM point pair set;
and performing first-level matching on the SC-CM point pair set by using the cascade sample consistency estimation algorithm to obtain the fine matching point pair set FM and the transformation parameter θ.
8. The anisotropic diffusion description-based multi-source remote sensing image registration method according to claim 7, wherein the step of performing first-level matching on the SC-CM point pair set by using the cascade sample consistency estimation algorithm to obtain the fine matching point pair set FM and the transformation parameter θ comprises:
randomly selecting three characteristic point pairs from the SC-CM point set and calculating current transformation parameters;
Transforming the characteristic points of the reference image in the coarse matching point pair set CM by using the current transformation parameters, and calculating errors according to the characteristic points of the image to be registered in the coarse matching point pair set CM;
counting the number num of feature points for which the error is smaller than a preset threshold;
detecting whether a preset number of iterations has been reached; if not, returning to the step of randomly selecting three feature point pairs from the SC-CM point set pair and calculating current transformation parameters; if yes, finding the maximum value of the feature point number num over all iterations;
and obtaining the feature point pairs corresponding to the maximum value of num to form the fine matching point pair set FM, and taking the corresponding current transformation parameters as the transformation parameter θ.
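The iteration of claim 8 is a RANSAC-style consensus loop and can be sketched as follows. This is a hedged illustration under assumptions not stated in the claim: the three sampled pairs are assumed to fix a 6-parameter affine model (a 3×2 matrix acting on homogeneous coordinates), and the iteration count and error threshold are illustrative.

```python
import numpy as np

def cascade_consensus(P, Q, n_iter=500, err_thresh=3.0, seed=0):
    """Three random pairs fix an affine model theta; the model with the
    largest inlier count num over n_iter trials gives FM and theta."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    rng = np.random.default_rng(seed)
    A = np.hstack([P, np.ones((len(P), 1))])   # homogeneous reference coords
    best_num, best_theta, best_mask = -1, None, None
    for _ in range(n_iter):
        idx = rng.choice(len(P), 3, replace=False)
        try:                                   # 3 pairs -> affine params theta
            theta = np.linalg.solve(A[idx], Q[idx])
        except np.linalg.LinAlgError:          # degenerate (collinear) sample
            continue
        err = np.linalg.norm(A @ theta - Q, axis=1)
        mask = err < err_thresh                # inliers under current model
        num = int(mask.sum())
        if num > best_num:                     # keep the largest consensus set
            best_num, best_theta, best_mask = num, theta, mask
    return best_theta, best_mask               # theta and FM membership
```

With a clean majority of matches, a pure-inlier sample is drawn with high probability within a few hundred trials, and its consensus set dominates any sample containing a mismatch.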
9. The anisotropic diffusion description-based multi-source remote sensing image registration method according to claim 8, wherein the step of performing secondary matching on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total to form the final matching point pair set FM_final comprises:
mapping the feature point P_I of the reference image in the fine matching point pair set FM into the image to be registered by using the transformation parameter θ, and detecting whether a nearest feature point Q_I exists in a sub-region of the image to be registered;
if yes, recording the feature point pair [P_I, Q_I] into the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total; if not, searching for the nearest feature point Q_J in the image to be registered and recording the feature point pair [P_I, Q_J] into the low-confidence matching point pair set C_total;
and performing secondary matching on the high-confidence matching point pair set C_sample and the low-confidence matching point pair set C_total by using the cascade sample consensus estimation algorithm to form the final matching point pair set FM_final.
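The confidence split of claim 9 can be sketched as below. This is a hedged illustration under assumptions the claim does not fix: θ is taken as a 3×2 affine matrix on homogeneous coordinates, and the "sub-region" is modelled as a fixed search radius around the projected point; both choices are illustrative.

```python
import numpy as np

def secondary_match(P_fm, Q_all, theta, radius=5.0):
    """Project each reference feature through theta; a sensed feature inside
    the local sub-region yields a high-confidence pair (C_sample), otherwise
    the globally nearest sensed feature yields a pair in C_total only."""
    P_fm, Q_all = np.asarray(P_fm, float), np.asarray(Q_all, float)
    proj = np.hstack([P_fm, np.ones((len(P_fm), 1))]) @ theta  # map via theta
    C_sample, C_total = [], []
    for p, pr in zip(P_fm, proj):
        d = np.linalg.norm(Q_all - pr, axis=1)
        j = int(np.argmin(d))                  # nearest sensed feature
        pair = (tuple(p), tuple(Q_all[j]))
        C_total.append(pair)                   # every pair enters C_total
        if d[j] <= radius:                     # inside the sub-region:
            C_sample.append(pair)              # also high-confidence
    return C_sample, C_total
```

Both sets then feed the cascade sample consensus estimation a second time, so the reliable C_sample pairs anchor the model while C_total supplies additional candidates.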
CN202310288607.3A 2023-03-22 2023-03-22 Multi-source remote sensing image registration method based on anisotropic diffusion description Pending CN116468760A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310288607.3A CN116468760A (en) 2023-03-22 2023-03-22 Multi-source remote sensing image registration method based on anisotropic diffusion description

Publications (1)

Publication Number Publication Date
CN116468760A true CN116468760A (en) 2023-07-21

Family

ID=87177991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310288607.3A Pending CN116468760A (en) 2023-03-22 2023-03-22 Multi-source remote sensing image registration method based on anisotropic diffusion description

Country Status (1)

Country Link
CN (1) CN116468760A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824516A (en) * 2023-08-30 2023-09-29 中冶路桥建设有限公司 Road construction safety monitoring and management system
CN116824516B (en) * 2023-08-30 2023-11-21 中冶路桥建设有限公司 Road construction safety monitoring and management system

Similar Documents

Publication Publication Date Title
US10198858B2 (en) Method for 3D modelling based on structure from motion processing of sparse 2D images
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
Palenichka et al. Automatic extraction of control points for the registration of optical satellite and LiDAR images
CN110569861B (en) Image matching positioning method based on point feature and contour feature fusion
CN104318548A (en) Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN112163622B (en) Global and local fusion constrained aviation wide-baseline stereopair line segment matching method
CN109711321B (en) Structure-adaptive wide baseline image view angle invariant linear feature matching method
CN114973028B (en) Aerial video image real-time change detection method and system
CN112183434B (en) Building change detection method and device
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
Fei et al. Ossim: An object-based multiview stereo algorithm using ssim index matching cost
CN116468760A (en) Multi-source remote sensing image registration method based on anisotropic diffusion description
CN113643334A (en) Different-source remote sensing image registration method based on structural similarity
Xiong et al. Robust SAR image registration using rank-based ratio self-similarity
CN116883464A (en) Registration method for large-viewing-angle difference optics and SAR remote sensing image
Huang et al. SAR and optical images registration using shape context
CN114529681A (en) Hand-held double-camera building temperature field three-dimensional model construction method and system
Jin et al. Registration of UAV images using improved structural shape similarity based on mathematical morphology and phase congruency
JP4424797B2 (en) 3D shape detection method
CN116612165A (en) Registration method for large-view-angle difference SAR image
CN115588033A (en) Synthetic aperture radar and optical image registration system and method based on structure extraction
Zhu et al. A filtering strategy for interest point detecting to improve repeatability and information content
CN114565653A (en) Heterogeneous remote sensing image matching method with rotation change and scale difference
Zhou et al. Occlusion detection for urban aerial true orthoimage generation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination