CN111767960A - Image matching method and system applied to image three-dimensional reconstruction

Image matching method and system applied to image three-dimensional reconstruction

Info

Publication number
CN111767960A
CN111767960A (Application CN202010627488.6A)
Authority
CN
China
Prior art keywords
image
matched
matching
reference image
feature
Prior art date
Legal status
Pending
Application number
CN202010627488.6A
Other languages
Chinese (zh)
Inventor
程德强
龚飞
李纳森
寇旗旗
陈亮亮
李海翔
Current Assignee
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202010627488.6A
Publication of CN111767960A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image matching method and system applied to image three-dimensional reconstruction, and belongs to the technical field of image matching. It solves problems of prior-art image matching algorithms such as slow feature extraction, detected feature points that lack rotation invariance, a computation-heavy precision matching process, and low matching efficiency. The method comprises the following steps: acquiring and preprocessing a reference image and an image to be matched; respectively obtaining feature points of the preprocessed reference image and image to be matched by using the AKAZE algorithm, and generating the feature descriptors corresponding to the feature points; obtaining a rough matching feature point pair set of the reference image and the image to be matched based on the Euclidean distance between their feature descriptors; obtaining an average value of the rough matching feature point sets of the reference image and the image to be matched, constructing a geometric matrix and a distance measurement model based on the average value, and obtaining the precision matching feature point pairs, thereby completing the matching between the reference image and the image to be matched.

Description

Image matching method and system applied to image three-dimensional reconstruction
Technical Field
The invention relates to the technical field of image matching, in particular to an image matching method and system applied to image three-dimensional reconstruction.
Background
Image matching methods in three-dimensional reconstruction can generally be divided into two categories: grayscale-based and feature-based. Grayscale-based methods compute the similarity between images from their gray-level information, but they are easily affected by geometric transformations between the images, which reduces the matching rate. Feature-based methods match using points, lines, and edges with stable characteristics in the image and are more robust to gray-level changes, so they are the most widely researched and applied image matching methods.
The feature-based image matching method mainly comprises the following steps: extracting feature points, generating feature descriptors, and registering features. Among these, the Scale-Invariant Feature Transform (SIFT) algorithm is a common feature matching algorithm, but its complexity is high and it is difficult to meet real-time requirements. Alcantarilla et al. proposed the KAZE algorithm for feature detection and description using nonlinear diffusion filtering and a nonlinear scale space, which can smooth noise while preserving details and edges, but at a higher computational cost. To speed up KAZE, Alcantarilla et al. proposed the AKAZE algorithm in 2013, which accelerates the computation of the nonlinear scale space. In three-dimensional reconstruction, the traditional SIFT algorithm uses a Gaussian scale space and Gaussian derivatives as the smoothing kernel, but Gaussian blur does not adapt to the natural boundaries of an image and smooths details and noise at all scales; meanwhile, feature points detected by combining AKAZE with SIFT descriptors do not have rotation invariance, which affects the final matching result.
When feature descriptors are roughly matched, the Euclidean distance is typically used to measure the similarity of feature vectors. In fine matching, the Random Sample Consensus (RANSAC) algorithm is usually used; it iteratively estimates the parameters of a model fitted to a data set containing inliers and outliers, thereby eliminating the outliers. However, the number of RANSAC iterations has no upper bound, the mismatching rate is high, the computation is heavy, and the running time of the whole reconstruction increases. Because the RANSAC algorithm extracts a minimal inlier set many times with a distance measurement model and adjusts the model parameters, the matching result is unstable, the computation is large, the matching efficiency is low, and RANSAC has difficulty removing erroneous points effectively.
Disclosure of Invention
In view of the foregoing analysis, the present invention aims to provide an image matching method and system applied to image three-dimensional reconstruction, so as to solve the problems of existing image matching algorithms: slow feature extraction, detected feature points that lack rotation invariance, a computation-heavy accurate matching process, and low matching efficiency.
The purpose of the invention is mainly realized by the following technical scheme:
in one aspect, an image matching method applied to three-dimensional reconstruction of an image is provided, the method comprising the following steps:
step S1: acquiring a reference image and an image to be matched, and preprocessing the reference image and the image to be matched;
step S2: respectively obtaining feature points of the preprocessed reference image and the image to be matched by using an AKAZE algorithm, and generating feature descriptors corresponding to the feature points;
step S3: obtaining a rough matching characteristic point pair set of the reference image and the image to be matched based on the Euclidean distance between the characteristic descriptors of the reference image and the image to be matched; the rough matching characteristic point pair set comprises a rough matching characteristic point set of the reference image and the image to be matched;
step S4: obtaining an average value of the rough matching feature point sets of the reference image and the image to be matched, and constructing a geometric matrix and a distance measurement model based on the average value; obtaining precision matching characteristic point pairs based on the constructed distance measurement model;
step S5: and matching the reference image and the image to be matched based on the precision matching characteristic point pairs.
On the basis of the scheme, the invention also makes the following improvements:
further, in the step S1, the reference image and the image to be matched are preprocessed by using a circular Gabor filter.
Further, the step S2 includes:
respectively constructing nonlinear scale spaces of the preprocessed reference image and the image to be matched by using an AKAZE algorithm;
judging a local extremum value by using eigenvalues of Hessian matrixes of the filtering images with different nonlinear scales;
and obtaining the feature points of the preprocessed reference image and the image to be matched based on the judgment result of the local extreme value.
Further, in the step S2, feature descriptors corresponding to the feature points are generated by performing the following operations:
acquiring the main direction of the feature points by using a SURF algorithm;
and generating a feature descriptor corresponding to the feature point based on the main direction of the feature point.
Further, in the step S3, the coarse matching feature point pair satisfies formula (1):
d_ij < t·min(d_ik),  k = 1, 2, ..., N, k ≠ j   (1)
where d_ij represents the Euclidean distance between the ith feature descriptor in the reference image and its nearest jth feature descriptor in the image to be matched; d_ik represents the Euclidean distance between the ith feature descriptor in the reference image and the kth feature descriptor in the image to be matched; N represents the total number of feature points in the image to be matched; t represents a distance threshold.
Further, the distance threshold t is 0.6.
Further, in the step S4, an average value of the rough matching feature point sets of the reference image and the image to be matched is obtained by using formula (2):
MA = (1/(2m)) Σ_{i=1}^{m} (x_i + y_i)   (2)
where m represents the number of rough matching feature point pairs; x_i and y_i respectively represent the ith elements in the rough matching feature point sets of the reference image and the image to be matched.
Further, in the step S4, the geometric matrix R is constructed based on formula (3):
[formula (3), defining the geometric matrix R from the average value MA, is rendered as an image in the original publication]
constructing a distance measurement model based on the geometric matrix, in which the geometric distance H_i between x_i and y_i is expressed as:
[formula (4), the expression for the geometric distance H_i, is rendered as an image in the original publication]
where x̄ represents the average value of the distances of all rough matching feature point pairs in the rough matching feature point pair set of the reference image and the image to be matched; F represents the transformation matrix obtained by solving all rough matching feature point pairs in that set, and Fy_i represents the transformation matrix corresponding to element y_i.
In another aspect, the present invention further provides an image matching system for three-dimensional image reconstruction, the system including the following modules:
the device comprises a preprocessing module, a matching module and a matching module, wherein the preprocessing module is used for acquiring a reference image and an image to be matched and preprocessing the reference image and the image to be matched;
the feature extraction module is used for respectively obtaining the feature points of the preprocessed reference image and the image to be matched by using an AKAZE algorithm and generating feature descriptors corresponding to the feature points; it also obtains rough matching feature point pairs based on the Euclidean distance between the feature descriptors of the reference image and the image to be matched; the rough matching feature point pairs comprise the rough matching feature point sets of the reference image and the image to be matched;
the feature matching module is used for obtaining a rough matching feature point pair set of the reference image and the image to be matched based on the Euclidean distance between their feature descriptors; the rough matching feature point pair set comprises the rough matching feature point sets of the reference image and the image to be matched; the module is also used for obtaining the average value of the rough matching feature point sets of the two images and constructing a geometric matrix and a distance measurement model based on the average value, and for obtaining precision matching feature point pairs based on the constructed distance measurement model;
and the image matching module is used for completing matching between the reference image and the image to be matched based on the precision matching characteristic point pairs.
Further, in the feature extraction module, the following operations are specifically performed:
respectively constructing nonlinear scale spaces of the preprocessed reference image and the image to be matched by using an AKAZE algorithm;
judging a local extremum value by using eigenvalues of Hessian matrixes of the filtering images with different nonlinear scales;
obtaining the feature points of the preprocessed reference image and the image to be matched based on the judgment result of the local extreme value;
acquiring the main direction of the feature points by using a SURF algorithm;
and generating a feature descriptor corresponding to the feature point based on the main direction of the feature point.
The invention has the following beneficial effects:
the image matching method and the image matching system applied to the three-dimensional reconstruction of the image, disclosed by the invention, have the following beneficial effects:
(1) by combining the AKAZE algorithm and the SURF algorithm, the defect that the single AKAZE algorithm does not have rotation invariance is effectively overcome, and the accuracy of feature matching is improved;
(2) the traditional RANSAC accurate matching method is improved: a method is provided that constructs a geometric distance measurement model from the average value of the matching feature point sets, which can eliminate mismatched points and improve the quality of the three-dimensional reconstruction.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
Fig. 1 is a flowchart of an image matching method applied to three-dimensional reconstruction of an image in embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of an image matching system applied to three-dimensional image reconstruction in embodiment 2 of the present invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
Example 1
The invention discloses an image matching method applied to image three-dimensional reconstruction, and mainly aims to improve matching speed while ensuring matching accuracy so as to facilitate the next three-dimensional reconstruction. The flow chart is shown in fig. 1, and the method comprises the following steps:
step S1: acquiring a reference image and an image to be matched, and preprocessing the reference image and the image to be matched;
the reference image and the image to be matched are used as matching objects, one of the reference image and the image to be matched can be directly selected as the reference image, and the other one of the reference image and the image to be matched can be used as the image to be matched; or according to the specific requirements of image matching, one image can be designated as a reference image, and the other image is an image to be matched.
In order to reduce the influence on subsequent feature extraction and feature matching caused by the noise in the reference image and the image to be matched, in this embodiment, the reference image and the image to be matched are obtained and then are preprocessed.
Preferably, this embodiment uses a circular Gabor filter to filter the reference image and the image to be matched. The circular Gabor filter is a very effective isotropic filter that can enhance the image and suppress noise. It can be expressed as:
[formula (1), the circular Gabor filter expression, is rendered as an image in the original publication]
where F is the center frequency of the filter, σ is the standard deviation of the Gaussian envelope, i(x, y) represents the image to be processed, and x and y represent the row and column positions in the image to be processed (the reference image or the image to be matched).
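As an illustration, the following is a minimal NumPy sketch of a filter of this form, a Gaussian envelope modulated at a radial center frequency; the kernel size and the values of F and σ in the usage note are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def circular_gabor_kernel(size, F, sigma):
    """Isotropic (circular) Gabor kernel: a Gaussian envelope of standard
    deviation sigma modulated by a sinusoid of the radial distance at
    center frequency F, so the response is the same in every direction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    r = np.hypot(x, y)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * F * r)

# Hypothetical usage: convolve the image with the kernel's real part.
# from scipy.signal import convolve2d
# enhanced = np.abs(convolve2d(img, circular_gabor_kernel(21, 0.1, 4.0).real,
#                              mode='same'))
```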
Step S2: respectively obtaining feature points of the preprocessed reference image and the image to be matched by using an AKAZE algorithm, and generating feature descriptors corresponding to the feature points;
specifically, in the step, an AKAZE algorithm is used for constructing a nonlinear scale space, and characteristic values of Hessian matrixes of filtered images with different nonlinear scales are used for judging local extremum values;
the AKAZE algorithm utilizes nonlinear diffusion filtering and fast display fast diffusion (FED) to solve nonlinear partial micro equations so as to construct a nonlinear scale space, thereby solving the problem of fuzzy edges of the Gaussian scale space.
The nonlinear diffusion filtering is a method for describing the change of image brightness in different scales, and can be expressed by a nonlinear partial differential equation:
∂L/∂t = div(c(x, y, t)·∇L)   (2)
where div and ∇ denote the divergence and gradient operators, respectively; L represents the brightness of the image; c is the conduction (diffusion) function, which allows the diffusion to adapt to the local structure of the image and preserves local detail; t is a scale factor, and the larger the value of t, the coarser the image scale.
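For intuition, here is a minimal sketch of one explicit iteration of this diffusion equation; the Perona-Malik conductivity c = exp(−(|∇L|/k)²) is an assumed choice (AKAZE supports several conductivity functions), and the step size and contrast factor are illustrative, with L assumed normalized to [0, 1]:

```python
import numpy as np

def diffusion_step(L, tau=0.2, k=0.03):
    """One explicit iteration of dL/dt = div(c(|grad L|) * grad L) on a
    2-D brightness image L, with conductivity c = exp(-(|grad L|/k)^2)."""
    # Differences to the four neighbours, with a replicated border.
    n = np.vstack([L[:1, :], L[:-1, :]]) - L
    s = np.vstack([L[1:, :], L[-1:, :]]) - L
    w = np.hstack([L[:, :1], L[:, :-1]]) - L
    e = np.hstack([L[:, 1:], L[:, -1:]]) - L
    c = lambda d: np.exp(-(d / k) ** 2)
    # Brightness flows towards neighbours where conductivity is high,
    # i.e. in flat regions; edges (large gradients) diffuse little.
    return L + tau * (c(n) * n + c(s) * s + c(e) * e + c(w) * w)
```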
FED combines the advantages of explicit and implicit schemes: a box filter is decomposed according to the explicit scheme, so the AKAZE algorithm solves the nonlinear partial differential equation (2) through Fast Explicit Diffusion. The box filter approximates a Gaussian kernel and effectively reduces the computational complexity. The main idea of FED is to perform M cycles of n explicit diffusion iterations with varying step sizes. These step sizes are derived from the decomposition of a box filter, and the jth iteration step τ_j is defined as:
τ_j = τ_max / (2·cos²(π(2j + 1) / (4n + 2)))   (3)
where τ_max is the maximum step size that does not violate the stability condition of the explicit scheme. Using explicit diffusion, partial differential equation (2) can be written in the explicit form:
L^(i+1) = (I + τ·A(L^i))·L^i   (4)
where τ is a constant iteration step; L^i is the brightness of the image at the different points of the scale space; A(L^i) is the conduction matrix of the image. Assuming L^(i+1,0) = L^i, the image brightness after iteration can be expressed as:
L^(i+1,j+1) = (I + τ_j·A(L^i))·L^(i+1,j),  j = 0, 1, ..., n−1   (5)
where I is the identity matrix. The time of each iteration is determined by the Gaussian filter standard deviation σ_i, and the Gaussian filter parameter of each image layer is defined as:
σ_i(o, s) = σ_0·2^(o + s/S),  o ∈ [0, ..., O−1], s ∈ [0, ..., S−1], i ∈ [0, ..., M−1]   (6)
where O and S are the number of octaves and the number of layers per octave of the image pyramid, respectively, and M = O·S is the total number of filtered images. Since nonlinear diffusion filtering iterates in units of time, the discrete scale parameter σ_i, given in pixel units, must be converted to time units. Equation (7) performs this conversion from σ_i in pixel units to the evolution time t_i; the nonlinear scale space is constructed from these time values:
t_i = σ_i² / 2,  i = 0, 1, ..., M   (7)
when a certain layer (o) is completedi) After filtering, a downsampling operation is performed on the next time of the image pyramid, and then the contrast factor λ is modified, so that the image pyramid of O groups (each group including S layers) can be obtained.
Feature point detection in the AKAZE algorithm is similar to that of the traditional SIFT algorithm: extreme points in the scale space are determined by finding local maxima of the normalized Hessian response at different scales. Using the filtered images L^i at different nonlinear scales, the Hessian matrix value used for judging local extrema is:
L_Hessian = σ²·(L_xx·L_yy − L_xy²)   (8)
where σ is the scale parameter value of the layer in question; L_xx and L_yy respectively represent the second-order partial derivatives of the image in the horizontal and vertical directions; L_xy is the mixed partial derivative of the image in the horizontal and vertical directions.
When searching for maxima, the pixel response produced by the Hessian matrix is compared with its 8 neighbours at the same scale and the 9 × 2 = 18 neighbours at the two adjacent scales; when the pixel's value is greater than all of these neighbours, it is a maximum.
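A sketch of this 26-neighbour comparison, assuming the Hessian responses have been stacked into a 3-D array resp indexed as [scale, row, col]:

```python
import numpy as np

def is_local_maximum(resp, s, r, c):
    """True if resp[s, r, c] is strictly greater than its 8 neighbours at
    the same scale and the 9 x 2 = 18 neighbours at the adjacent scales."""
    cube = resp[s - 1:s + 2, r - 1:r + 2, c - 1:c + 2].copy()
    if cube.shape != (3, 3, 3):      # skip border pixels and border scales
        return False
    centre = cube[1, 1, 1]
    cube[1, 1, 1] = -np.inf          # exclude the centre from the comparison
    return bool(centre > cube.max())
```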
After the feature points are found, the coordinates of the feature points are solved by using Taylor expansion:
L(x) = L + (∂L/∂x)ᵀ·x + (1/2)·xᵀ·(∂²L/∂x²)·x   (9)
where L(x) is the scale-space function of L; the sub-pixel coordinate of feature point x is the solution:
x̂ = −(∂²L/∂x²)⁻¹·(∂L/∂x)   (10)
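A sketch of this refinement over the two spatial dimensions, using finite differences on a 3 × 3 response patch centred on the detected extremum (the full method also refines over scale; this 2-D version is a simplification):

```python
import numpy as np

def subpixel_offset(D):
    """Solve x_hat = -(d2L/dx2)^(-1) * (dL/dx), formula (10), from a 3x3
    patch D of responses whose centre D[1, 1] is the detected extremum."""
    dx = (D[1, 2] - D[1, 0]) / 2.0               # first derivatives
    dy = (D[2, 1] - D[0, 1]) / 2.0
    dxx = D[1, 2] - 2.0 * D[1, 1] + D[1, 0]      # second derivatives
    dyy = D[2, 1] - 2.0 * D[1, 1] + D[0, 1]
    dxy = (D[2, 2] - D[2, 0] - D[0, 2] + D[0, 0]) / 4.0
    H = np.array([[dxx, dxy], [dxy, dyy]])
    g = np.array([dx, dy])
    return -np.linalg.solve(H, g)  # (x, y) offset; accept if |offset| < 0.5
```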
in order to solve the problem that the AKAZE feature and SIFT descriptor combination does not have rotation invariance, the invention provides that SURF algorithm is applied to AKAZE to calculate the main direction of the key point, and the algorithm is called S-AKAZE algorithm.
Let the scale parameter of a feature point be σ_i; the search area is the circle of radius 6σ_i centered on the feature point's position. First, in this embodiment, the first derivatives L_x and L_y of the feature point's neighbourhood are computed with step σ_i, and Gaussian weighting centered on the interest point is applied to the pixels, ensuring that closer points receive larger weights. Each Gaussian-weighted derivative response represents a point in a vector space, whose covered angle is computed as
θ = arctan(L_y / L_x)
The main direction is found from the sums of all responses within a sliding circular sector: the direction of the longest vector among the vector sums is taken as the dominant direction.
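A sketch of this sliding-sector search; the 60° sector width (π/3, as in SURF) and the number of sector positions are assumptions, and Lx, Ly are the arrays of Gaussian-weighted first-derivative responses of the samples in the circular neighbourhood:

```python
import numpy as np

def dominant_orientation(Lx, Ly, sector=np.pi / 3, positions=72):
    """Slide a circular sector around the origin of the (Lx, Ly) response
    plane and return the angle of the longest summed response vector."""
    angles = np.arctan2(Ly, Lx)
    best_norm, best_angle = -1.0, 0.0
    for start in np.linspace(-np.pi, np.pi, positions, endpoint=False):
        # Responses whose angle falls inside the current sector.
        inside = ((angles - start) % (2.0 * np.pi)) < sector
        vx, vy = Lx[inside].sum(), Ly[inside].sum()
        norm = np.hypot(vx, vy)
        if norm > best_norm:
            best_norm, best_angle = norm, np.arctan2(vy, vx)
    return best_angle
```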
Since the scale, position, and direction of each key point are now available, a feature descriptor can be constructed that is robust to the rotation and illumination changes of the feature point, distinctive, and convenient for feature point matching. First, the sampling region is rotated to the main direction of the feature point (the rotation angle does not exceed 10 to 15°); then a 16 × 16 neighbourhood centered on the feature point is selected as the sampling window, a gradient direction histogram with 8 directions is computed in each of its 4 × 4 sub-windows, and the accumulated value of each gradient direction is recorded, generating a seed point carrying 8 values. Each feature point is thus composed of 4 × 4 seed points, each containing an 8-dimensional feature vector, which finally yields a 128-dimensional feature descriptor. Finally, the 128-dimensional vector is normalized to unit length to reduce the influence of illumination.
Step S3: obtaining a rough matching characteristic point pair set of the reference image and the image to be matched based on the Euclidean distance between the characteristic descriptors of the reference image and the image to be matched; the rough matching characteristic point pair set comprises a rough matching characteristic point set of the reference image and the image to be matched;
the S-AKAZE algorithm takes Euclidean distance as similarity measurement, the Euclidean distance is used for calculating descriptors and finding the relation of matching points between a reference image and an image to be matched, and rough matching is carried out through the ratio of nearest neighbor to second nearest neighbor. If the ratio of the minimum distance to the distance of the point is less than the threshold, the feature point and its closest point are retained, the feature point and its closest point forming a pair of matching point pairs. With a descriptor piAnd q isjA pair of points of (a):
d_ij < t·min(d_ik),  k = 1, 2, ..., N, k ≠ j   (11)
where d_ij and d_ik are the Euclidean distances between descriptors p_i, q_j and p_i, q_k, respectively. Specifically, d_ij represents the Euclidean distance between the ith feature descriptor in the reference image and its nearest jth feature descriptor in the image to be matched; d_ik represents the Euclidean distance between the ith feature descriptor in the reference image and the kth feature descriptor in the image to be matched; N represents the total number of feature points in the image to be matched; and t represents a distance threshold. A large number of experiments show that setting the threshold t to 0.6 yields more accurate matching points.
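This nearest/second-nearest ratio test maps directly onto a k-nearest-neighbour match. A minimal OpenCV sketch, where des_ref and des_tgt are assumed to be the float descriptor arrays produced in step S2:

```python
import cv2

def coarse_match(des_ref, des_tgt, t=0.6):
    """Keep pairs satisfying formula (11): the distance to the nearest
    descriptor is below t times the distance to the second nearest."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_ref, des_tgt, k=2)
    return [m for m, n in knn if m.distance < t * n.distance]
```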
After matching by Euclidean distance, mismatches may still remain. To further remove mismatched point pairs, a RANSAC (Random Sample Consensus) algorithm is used for exact matching. The basic idea of RANSAC is to determine the inlier set iteratively as the maximum set of inliers, but its running time is long and the number of correct matches depends on a threshold. The traditional RANSAC algorithm extracts a minimal point set many times using a distance measurement model, adjusts the parameters of that model, and then detects the mismatched feature points with the adjusted model. Because the distance measurement model in the traditional RANSAC algorithm is too simple, it cannot detect mismatched feature points thoroughly. In this respect, the invention constructs a geometric distance measurement model from the average value of the matched feature point sets and thereby improves the traditional RANSAC algorithm.
Step S4: obtaining an average value of the rough matching feature point sets of the reference image and the image to be matched, and constructing a geometric matrix and a distance measurement model based on the average value; obtaining precision matching characteristic point pairs based on the constructed distance measurement model;
for matching feature point set extracted from reference image and image to be matched
Figure BDA0002567071590000128
And
Figure BDA0002567071590000129
its average value MA can be expressed as:
Figure BDA0002567071590000121
in the formula, m represents the number of the rough matching characteristic point pairs; and is also a matching feature point set
Figure BDA0002567071590000122
Or
Figure BDA0002567071590000123
Number of elements in (1), xiAnd yiAre respectively as
Figure BDA0002567071590000124
And
Figure BDA0002567071590000125
the ith element in (b) is a matching feature point of the reference image and the image to be matched respectively. Then define the new geometry matrix R according to MA:
[formula (13), defining the geometric matrix R from MA, is rendered as an image in the original publication]
A distance measurement model is constructed based on the geometric matrix; in this model, the geometric distance H_i between x_i and y_i is expressed as:
[formula (14), the expression for the geometric distance H_i, is rendered as an image in the original publication]
where x̄ represents the average value of the distances of all rough matching feature point pairs in the rough matching feature point pair set of the reference image and the image to be matched; F represents the transformation matrix obtained by solving all rough matching feature point pairs in that set, and Fy_i represents the transformation matrix corresponding to element y_i, which satisfies:

x_iᵀ·F·y_i = 0   (15)
the equation (13) replaces the distance measurement model in the traditional RANSAC algorithm, so as to further determine the geometric distance measurement model of the image to be matched and the reference image, and the correct matching point is determined according to the model, so that the precise matching of the matching feature points can be completed through the improved RANSAC.
Step S5: and completing the matching between the reference image and the image to be matched based on the precision matching feature points.
After the steps above, the most obvious mismatching points are deleted, and the final image matching relationship is obtained, completing the accurate matching.
After the accurate matching points are obtained through the above steps, the discrete three-dimensional coordinates corresponding to the matching points are determined by triangulation, combined with the intrinsic and extrinsic parameters of the camera. Then, images that share overlapping structure with the source image but have different viewing angles are added, and the process is iterated in reverse through an SfM (Structure from Motion) algorithm, finally obtaining the three-dimensional coordinates corresponding to all image points and thus the final reconstruction result.
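A sketch of the triangulation step with OpenCV; P_ref and P_tgt are assumed to be the 3 × 4 projection matrices built from the calibrated intrinsic and extrinsic camera parameters:

```python
import cv2

def triangulate(P_ref, P_tgt, pts_ref, pts_tgt):
    """Recover the discrete 3-D coordinates of fine-matched point pairs.
    pts_ref / pts_tgt: (m, 2) float arrays of matched image coordinates."""
    pts4d = cv2.triangulatePoints(P_ref, P_tgt, pts_ref.T, pts_tgt.T)
    return (pts4d[:3] / pts4d[3]).T   # (m, 3) Euclidean points
```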
Example 2
Another embodiment of the present invention further provides an image matching system applied to three-dimensional image reconstruction, a schematic structural diagram of which is shown in fig. 2, and the system includes the following modules:
The preprocessing module is used for acquiring a reference image and an image to be matched and preprocessing them. The feature extraction module is used for respectively obtaining the feature points of the preprocessed reference image and the image to be matched by using an AKAZE algorithm and generating the feature descriptors corresponding to the feature points, and for obtaining rough matching feature point pairs based on the Euclidean distance between the feature descriptors of the two images; the rough matching feature point pairs comprise the rough matching feature point sets of the reference image and the image to be matched. The feature matching module is used for obtaining a rough matching feature point pair set of the reference image and the image to be matched based on the Euclidean distance between their feature descriptors, the rough matching feature point pair set comprising the rough matching feature point sets of the two images; it is also used for obtaining the average value of the rough matching feature point sets, constructing a geometric matrix and a distance measurement model based on the average value, and obtaining precision matching feature point pairs based on the constructed distance measurement model. The image matching module is used for completing the matching between the reference image and the image to be matched based on the precision matching feature point pairs.
The specific implementation process of this embodiment may refer to the above method embodiments, and this embodiment is not described herein again.
Since the principle of the present embodiment is the same as that of the above method embodiment, the present system also has the corresponding technical effects of the above method embodiment.
Those skilled in the art will appreciate that all or part of the flow of the methods implementing the above embodiments may be realized by a computer program instructing related hardware; the program is stored in a computer-readable storage medium such as a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. An image matching method applied to three-dimensional reconstruction of an image, characterized in that the method comprises the following steps:
step S1: acquiring a reference image and an image to be matched, and preprocessing the reference image and the image to be matched;
step S2: respectively obtaining feature points of the preprocessed reference image and the image to be matched by using an AKAZE algorithm, and generating feature descriptors corresponding to the feature points;
step S3: obtaining a rough matching characteristic point pair set of the reference image and the image to be matched based on the Euclidean distance between the characteristic descriptors of the reference image and the image to be matched; the rough matching characteristic point pair set comprises a rough matching characteristic point set of the reference image and the image to be matched;
step S4: obtaining an average value of the rough matching feature point sets of the reference image and the image to be matched, and constructing a geometric matrix and a distance measurement model based on the average value; obtaining precision matching characteristic point pairs based on the constructed distance measurement model;
step S5: and matching the reference image and the image to be matched based on the precision matching characteristic point pairs.
2. The image matching method applied to three-dimensional reconstruction of image according to claim 1, wherein in step S1, the reference image and the image to be matched are preprocessed by using a circular Gabor filter.
3. The image matching method applied to three-dimensional reconstruction of image according to claim 1, wherein in the step S2, the method comprises:
respectively constructing nonlinear scale spaces of the preprocessed reference image and the image to be matched by using an AKAZE algorithm;
judging a local extremum value by using eigenvalues of Hessian matrixes of the filtering images with different nonlinear scales;
and obtaining the feature points of the preprocessed reference image and the image to be matched based on the judgment result of the local extreme value.
4. The image matching method applied to three-dimensional reconstruction of images according to claim 3, wherein in the step S2, the feature descriptors corresponding to the feature points are generated by performing the following operations:
acquiring the main direction of the feature points by using a SURF algorithm;
and generating a feature descriptor corresponding to the feature point based on the main direction of the feature point.
5. The image matching method applied to three-dimensional reconstruction of images according to claim 1, wherein in the step S3, the coarse matching feature point pair satisfies formula (1):
d_ij < t·min(d_ik),  k = 1, 2, ..., N, k ≠ j   (1)
where d_ij represents the Euclidean distance between the ith feature descriptor in the reference image and its nearest jth feature descriptor in the image to be matched; d_ik represents the Euclidean distance between the ith feature descriptor in the reference image and the kth feature descriptor in the image to be matched; N represents the total number of feature points in the image to be matched; t represents a distance threshold.
6. The image matching method applied to three-dimensional image reconstruction according to claim 5, wherein the distance threshold t is 0.6.
7. The image matching method applied to three-dimensional reconstruction of images according to any of claims 1-6, wherein in the step S4, the average value of the coarse matching feature point sets of the reference image and the image to be matched is obtained by using formula (2):
MA = (1/(2m)) Σ_{i=1}^{m} (x_i + y_i)   (2)
where m represents the number of rough matching feature point pairs; x_i and y_i respectively represent the ith elements in the rough matching feature point sets of the reference image and the image to be matched.
8. The image matching method applied to three-dimensional reconstruction of images according to claim 7, wherein in the step S4, a geometric matrix R is constructed based on formula (3):
[formula (3), defining the geometric matrix R from the average value MA, is rendered as an image in the original publication]
constructing a distance measurement model based on the geometric matrix, in which the geometric distance H_i between x_i and y_i is expressed as:
[formula (4), the expression for the geometric distance H_i, is rendered as an image in the original publication]
where x̄ represents the average value of the distances of all rough matching feature point pairs in the rough matching feature point pair set of the reference image and the image to be matched; F represents the transformation matrix obtained by solving all rough matching feature point pairs in that set, and Fy_i represents the transformation matrix corresponding to element y_i.
9. An image matching system for three-dimensional reconstruction of images, the system comprising the following modules:
the device comprises a preprocessing module, a matching module and a matching module, wherein the preprocessing module is used for acquiring a reference image and an image to be matched and preprocessing the reference image and the image to be matched;
the feature extraction module is used for respectively obtaining the feature points of the preprocessed reference image and the image to be matched by using an AKAZE algorithm and generating feature descriptors corresponding to the feature points; obtaining rough matching feature point pairs based on the Euclidean distance between the feature descriptors of the reference image and the image to be matched; the rough matching feature point pairs comprise the rough matching feature point sets of the reference image and the image to be matched;
the feature matching module is used for obtaining a rough matching feature point pair set of the reference image and the image to be matched based on the Euclidean distance between their feature descriptors; the rough matching feature point pair set comprises the rough matching feature point sets of the reference image and the image to be matched; the module is also used for obtaining the average value of the rough matching feature point sets and constructing a geometric matrix and a distance measurement model based on the average value, and for obtaining precision matching feature point pairs based on the constructed distance measurement model;
and the image matching module is used for completing matching between the reference image and the image to be matched based on the precision matching characteristic point pairs.
10. The image matching system applied to three-dimensional reconstruction of images according to claim 9,
in the feature extraction module, the following operations are specifically performed:
respectively constructing nonlinear scale spaces of the preprocessed reference image and the image to be matched by using an AKAZE algorithm;
judging a local extremum value by using eigenvalues of Hessian matrixes of the filtering images with different nonlinear scales;
obtaining the feature points of the preprocessed reference image and the image to be matched based on the judgment result of the local extreme value;
acquiring the main direction of the feature points by using a SURF algorithm;
and generating a feature descriptor corresponding to the feature point based on the main direction of the feature point.
CN202010627488.6A 2020-07-02 2020-07-02 Image matching method and system applied to image three-dimensional reconstruction Pending CN111767960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010627488.6A CN111767960A (en) 2020-07-02 2020-07-02 Image matching method and system applied to image three-dimensional reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010627488.6A CN111767960A (en) 2020-07-02 2020-07-02 Image matching method and system applied to image three-dimensional reconstruction

Publications (1)

Publication Number Publication Date
CN111767960A true CN111767960A (en) 2020-10-13

Family

ID=72723608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010627488.6A Pending CN111767960A (en) 2020-07-02 2020-07-02 Image matching method and system applied to image three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN111767960A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722731A (en) * 2012-05-28 2012-10-10 南京航空航天大学 Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm
KR20160080816A (en) * 2014-12-26 2016-07-08 Industry-Academic Cooperation Foundation, Chosun University System and method for detecting and describing color invariant features using fast explicit diffusion in nonlinear scale spaces
CN106991695A (en) * 2017-03-27 2017-07-28 苏州希格玛科技有限公司 A kind of method for registering images and device
CN110569861A (en) * 2019-09-01 2019-12-13 中国电子科技集团公司第二十研究所 Image matching positioning method based on point feature and contour feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUXUAN LIU et al.: "S-AKAZE: An effective point-based method for image matching", Optik, vol. 127, no. 2016, page 5670 *
WANG Yu et al.: "Image matching algorithm based on curvature features and an improved RANSAC strategy", Computer Engineering and Design, vol. 39, no. 12, pages 3791-3796 *
CHENG Deqiang et al.: "Improved SIFT neighborhood voting image matching algorithm", Computer Engineering and Design, vol. 41, no. 01, pages 162-168 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241745A (en) * 2020-10-29 2021-01-19 东北大学 Characteristic point extraction method based on illumination invariant color space
CN112861875A (en) * 2021-01-20 2021-05-28 西南林业大学 Method for distinguishing different wood products
CN112750164B (en) * 2021-01-21 2023-04-18 脸萌有限公司 Lightweight positioning model construction method, positioning method and electronic equipment
CN112750164A (en) * 2021-01-21 2021-05-04 脸萌有限公司 Lightweight positioning model construction method, positioning method and electronic equipment
CN112906705A (en) * 2021-03-26 2021-06-04 北京邮电大学 Image feature matching algorithm based on G-AKAZE
CN112906705B (en) * 2021-03-26 2023-01-13 北京邮电大学 Image feature matching algorithm based on G-AKAZE
CN113470085A (en) * 2021-05-19 2021-10-01 西安电子科技大学 Image registration method based on improved RANSAC
CN113470085B (en) * 2021-05-19 2023-02-10 西安电子科技大学 Improved RANSAC-based image registration method
CN113326856A (en) * 2021-08-03 2021-08-31 电子科技大学 Self-adaptive two-stage feature point matching method based on matching difficulty
CN114782724A (en) * 2022-06-17 2022-07-22 联宝(合肥)电子科技有限公司 Image matching method and device, electronic equipment and storage medium
CN116012526A (en) * 2022-12-15 2023-04-25 杭州医策科技有限公司 Three-dimensional CT image focus reconstruction method based on two-dimensional image
CN116012526B (en) * 2022-12-15 2024-02-09 杭州医策科技有限公司 Three-dimensional CT image focus reconstruction method based on two-dimensional image
CN116128945A (en) * 2023-04-18 2023-05-16 南京邮电大学 Improved AKAZE image registration method
CN116128945B (en) * 2023-04-18 2023-10-13 南京邮电大学 Improved AKAZE image registration method

Similar Documents

Publication Publication Date Title
CN111767960A (en) Image matching method and system applied to image three-dimensional reconstruction
CN108427924B (en) Text regression detection method based on rotation sensitive characteristics
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
Montesinos et al. Matching color uncalibrated images using differential invariants
CN110909778B (en) Image semantic feature matching method based on geometric consistency
CN107516322A (en) A kind of image object size based on logarithm pole space and rotation estimation computational methods
CN111310688A (en) Finger vein identification method based on multi-angle imaging
CN113392856A (en) Image forgery detection device and method
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN114463397A (en) Multi-modal image registration method based on progressive filtering
CN108447084B (en) Stereo matching compensation method based on ORB characteristics
Zheng et al. A unified B-spline framework for scale-invariant keypoint detection
CN112288784B (en) Descriptor neighborhood self-adaptive weak texture remote sensing image registration method
CN113763274A (en) Multi-source image matching method combining local phase sharpness orientation description
CN115511928A (en) Matching method of multispectral image
CN116128919A (en) Multi-temporal image abnormal target detection method and system based on polar constraint
Bi [Retracted] A Motion Image Pose Contour Extraction Method Based on B‐Spline Wavelet
CN107146244B (en) Method for registering images based on PBIL algorithm
Sun et al. Robust feature matching based on adaptive ORB for vision-based robot navigation
CN110751189A (en) Ellipse detection method based on perception contrast and feature selection
CN113570667B (en) Visual inertial navigation compensation method and device and storage medium
Lian et al. 3D-SIFT Point Cloud Registration Method Integrating Curvature Information
Chen et al. Regional growth inpainting strategy for depth image
Jiang et al. Research on feature point generation and matching method optimization in image matching algorithm
Yongsheng et al. An Improved AKAZE Algorithm for Feature Matching of Moving Objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination