WO2022073249A1 - Image matching method based on bidirectional optimal matching point pairs - Google Patents

Image matching method based on bidirectional optimal matching point pairs Download PDF

Info

Publication number
WO2022073249A1
WO2022073249A1 (PCT/CN2020/120449)
Authority
WO
WIPO (PCT)
Prior art keywords
matching point
optimal matching
point pairs
optimal
bidirectional
Prior art date
Application number
PCT/CN2020/120449
Other languages
French (fr)
Chinese (zh)
Inventor
陈卫征
司轩斌
Original Assignee
南京轩宁信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京轩宁信息技术有限公司
Publication of WO2022073249A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • The invention belongs to the technical field of image processing and in particular relates to an image matching method based on bidirectional optimal matching point pairs.
  • The general process of feature-point-based image matching is: extract features from each image, generate feature descriptors, find several pairs of mutually matching feature points through pairwise comparison of the two images' features (feature points plus descriptors), eliminate incorrect matching point pairs, and compute the transformation relationship between the two images from the retained matching point pairs.
  • The random sample consensus (RANSAC) algorithm can estimate the parameters of a mathematical model from observed data containing abnormal data, eliminating invalid samples and retaining valid samples.
  • Step 1: randomly select a subset from the set of matching point pairs;
  • Step 2: use the matching point pairs in the subset to compute a transformation matrix, such as a homography matrix;
  • Step 3: use the transformation matrix to check whether each matching point pair in the set conforms to it; pairs that conform are marked as inliers, the rest as outliers;
  • Step 4: compute the error probability of this iteration from the numbers of inliers and outliers, or compute the mapping error of the inliers under the transformation; if the preset termination condition is not met, return to Step 1 and continue until it is satisfied.
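The four steps above can be sketched in Python. This is a minimal illustration, not the patent's implementation: for brevity it fits a pure 2-D translation (one pair per sample) instead of a homography (which needs at least four pairs per sample), and the function and parameter names are placeholders.

```python
import random
import numpy as np

def ransac_translation(pairs, n_iters=200, inlier_thresh=2.0, seed=0):
    """Minimal RANSAC over matching point pairs (src, dst), using a pure
    2-D translation as the transformation model for brevity."""
    rng = random.Random(seed)
    src = np.array([p[0] for p in pairs], dtype=float)
    dst = np.array([p[1] for p in pairs], dtype=float)
    best_inliers = np.zeros(len(pairs), dtype=bool)
    for _ in range(n_iters):
        # Step 1: randomly select a (minimal) subset of matching pairs.
        i = rng.randrange(len(pairs))
        # Step 2: compute the transformation from the subset.
        t = dst[i] - src[i]
        # Step 3: mark pairs that conform to the transformation as inliers.
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
        # Step 4: a full implementation would also evaluate a termination
        # condition (error probability / inlier mapping error) here.
    return best_inliers
```

The returned boolean mask marks the retained (inlier) matching point pairs.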
  • Exhaustive method: for each feature point in the target image, traverse every feature point in the original image and take the one with the highest similarity as that feature point's match in the original image, yielding one set of matching point pairs.
  • Ratio method: the method proposed by SIFT author Lowe. For a given keypoint in the target image, find the two keypoints in the original image closest to it in Euclidean distance; if the closest distance divided by the second-closest distance is below a ratio threshold, the closest pair is accepted as a match.
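Lowe's ratio test described above can be sketched as follows; the 0.75 threshold and the NumPy row-per-descriptor layout are illustrative assumptions, not values fixed by the source.

```python
import numpy as np

def ratio_test_matches(desc_t, desc_o, ratio=0.75):
    """Lowe's ratio test: for each target-image descriptor, find the two
    nearest original-image descriptors by Euclidean distance and accept
    the match only if nearest < ratio * second-nearest."""
    matches = []
    for i, d in enumerate(desc_t):
        dists = np.linalg.norm(desc_o - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]        # two closest keypoints
        if dists[j1] < ratio * dists[j2]:     # accept only distinctive matches
            matches.append((i, int(j1)))
    return matches
```

Ambiguous matches, where the second-closest descriptor is nearly as close as the closest one, are rejected rather than guessed.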
  • The first scheme has a low matching accuracy: the resulting set of matching point pairs contains a large number of incorrect matches, which greatly increases the processing complexity and computation required to eliminate them.
  • The second scheme depends on the choice of threshold. In general, lowering the threshold raises the matching accuracy but reduces the number of matching point pairs, while raising it lowers the accuracy and admits many incorrect pairs.
  • For images of varying quality, selecting an appropriate threshold is a difficult problem.
  • Moreover, for fingerprint images the local regions are highly similar, so the second-closest distance is often close to that of the correct match; no choice of threshold can solve the matching problem for such images.
  • The invention provides an image matching method based on bidirectional optimal matching point pairs. It introduces the concept of a bidirectional optimal matching point pair in the feature matching stage, so that the retained matching point pairs contain as few incorrect pairs and as many correct pairs as possible, thereby reducing the computational complexity of eliminating incorrect matching point pairs and quickly yielding a stable transformation relationship between the two images.
  • The present invention provides an image matching method based on bidirectional optimal matching point pairs, characterized by the following specific steps:
  • Step S1: use the exhaustive method to perform bidirectional matching, obtaining the optimal matching point in the target image for each feature point in the original image, and the optimal matching point in the original image for each feature point in the target image, forming two sets of optimal matching point pairs;
  • Step S2: from the above two sets of optimal matching point pairs, compute the set of bidirectional optimal matching point pairs;
  • Step S3: use the RANSAC algorithm to eliminate incorrect matches from the set of bidirectional optimal matching point pairs;
  • Step S4: from the retained bidirectional optimal matching point pairs, compute the transformation relationship between the two images;
  • Step S5: from the two sets of optimal matching point pairs obtained in Step S1, select the matching point pairs that satisfy the above transformation relationship;
  • Step S6: the retained bidirectional optimal matching point pairs, together with the optimal matching point pairs satisfying the transformation relationship, constitute the acceptable correct matching point pairs between the two images.
  • The specific steps of Step S1 are as follows: denote the feature point set extracted from the original image as T = {t_i}, i = 1, 2, ..., m, the feature point set extracted from the target image as V = {v_j}, j = 1, 2, ..., n, and the similarity between t_i and v_j as s_ij; the optimal matching point pairs are then the pairs (t_x, v_y) satisfying s_xy = max_{j=1,...,n} s_xj, and the pairs satisfying s_xy = max_{i=1,...,m} s_iy.
  • The specific steps of Step S2 are as follows: the bidirectional optimal matching point pair is defined as a pair (t_x, v_y) satisfying both s_xy = max_{j=1,...,n} s_xj and s_xy = max_{i=1,...,m} s_iy.
  • The similarity is computed using either the cosine distance or the Euclidean distance; either of these two measures may be used.
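The two similarity options mentioned above can be sketched as follows, assuming NumPy descriptor vectors; the 1/(1 + d) mapping from Euclidean distance to a similarity score is one common convention, not one the source prescribes.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptors (larger = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_similarity(a, b):
    """Euclidean-distance-based similarity (larger = more similar);
    the 1/(1 + d) mapping is an illustrative convention."""
    return 1.0 / (1.0 + float(np.linalg.norm(a - b)))
```

Both functions return higher scores for more similar descriptors, so either can serve as s_ij when searching for optimal matches.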
  • The present invention provides an image matching method based on bidirectional optimal matching point pairs, which has the following advantages:
  • The RANSAC algorithm is used to eliminate incorrect matches from the set of bidirectional optimal matching point pairs. Since RANSAC is a random sampling method, its optimization is based on loop iteration: the number of iterations is uncontrollable, the result is not guaranteed to be consistent without a full-space search, and a full-space search is computationally prohibitive. Therefore, when valid data dominate the matching point pairs and invalid data are rare, determining the parameters and error of the transformation model by least squares or a similar method is faster and more stable. In the set of bidirectional optimal matching point pairs described in this application, the proportion of incorrect pairs is very small, so least squares often suffices to obtain a sufficiently accurate and stable transformation relationship.
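The least-squares alternative suggested above can be sketched as follows, here fitting a 2-D affine model; the source says only "least squares or similar methods" without fixing the transformation model, so the affine choice and function name are illustrative assumptions.

```python
import numpy as np

def fit_affine_least_squares(src, dst):
    """Estimate a 2-D affine transform dst ~= A @ src + b by least squares
    from matching point pairs. Returns (A, b)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Design matrix [x y 1]; solve X @ C ~= dst for the 3x2 coefficient matrix.
    X = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = coeffs[:2].T   # 2x2 linear part
    b = coeffs[2]      # translation
    return A, b
```

With mostly-correct pairs, this one-shot solve replaces the uncontrolled iteration count of RANSAC.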
  • In the feature matching stage, the present invention retains as many correct matching point pairs as possible, greatly reducing the computational complexity of matching point pair correction; after a stable transformation relationship is computed, non-bidirectional optimal matching point pairs are relabeled so that the two images have enough matching point pairs.
  • Figure 1 is a flow chart of the method of the present invention.
  • The invention provides an image matching method based on bidirectional optimal matching point pairs. It introduces the concept of a bidirectional optimal matching point pair in the feature matching stage, so that the retained matching point pairs contain as few incorrect pairs and as many correct pairs as possible, thereby reducing the computational complexity of eliminating incorrect matching point pairs and quickly yielding a stable transformation relationship between the two images.
  • The present invention provides an image matching method based on bidirectional optimal matching point pairs as shown in FIG. 1, with the following specific steps:
  • Step S1: use the exhaustive method to perform bidirectional matching, obtaining the optimal matching point in the target image for each feature point in the original image, and the optimal matching point in the original image for each feature point in the target image, forming two sets of optimal matching point pairs;
  • The specific steps of Step S1 are as follows: denote the feature point set of the original image as T = {t_i}, i = 1, 2, ..., m, that of the target image as V = {v_j}, j = 1, 2, ..., n, and the similarity between t_i and v_j as s_ij; the optimal matching point pairs are the pairs (t_x, v_y) satisfying s_xy = max_{j=1,...,n} s_xj, and the pairs satisfying s_xy = max_{i=1,...,m} s_iy;
  • Step S2: from the above two sets of optimal matching point pairs, compute the set of bidirectional optimal matching point pairs;
  • The specific steps of Step S2 are as follows: the bidirectional optimal matching point pair is defined as a pair (t_x, v_y) satisfying both s_xy = max_{j=1,...,n} s_xj and s_xy = max_{i=1,...,m} s_iy.
  • Step S3: use the RANSAC algorithm to eliminate incorrect matches from the set of bidirectional optimal matching point pairs;
  • Step S4: from the retained bidirectional optimal matching point pairs, compute the transformation relationship between the two images;
  • Step S5: from the two sets of optimal matching point pairs obtained in Step S1, select the matching point pairs that satisfy the above transformation relationship;
  • Step S6: the retained bidirectional optimal matching point pairs, together with the optimal matching point pairs satisfying the transformation relationship, constitute the acceptable correct matching point pairs between the two images.
  • RANSAC is a random sampling method whose optimization is based on loop iteration: the number of iterations is uncontrollable, the result is not guaranteed to be consistent without a full-space search, and a full-space search is computationally prohibitive. Therefore, when valid data dominate the matching point pairs and invalid data are rare, the parameters and error of the transformation model can be determined by least squares or a similar method.
  • In the set of bidirectional optimal matching point pairs, the proportion of incorrect matching point pairs is very small, so least squares often suffices to obtain a sufficiently accurate and stable transformation relationship.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An image matching method based on bidirectional optimal matching point pairs, the method comprising: S1. using the exhaustive method to perform bidirectional matching, obtaining an optimal matching point in a target image for each feature point in an original image as well as an optimal matching point in the original image for each feature point in the target image, and forming two sets of optimal matching point pairs; S2. calculating a bidirectional optimal matching point pair set according to the two sets of optimal matching point pairs; S3. eliminating incorrect matching points in the bidirectional optimal matching point pair set; S4. calculating the transformation relationship between the two images according to the retained bidirectional optimal matching point pairs; S5. selecting matching point pairs satisfying the transformation relationship from among the two sets of optimal matching point pairs obtained in Step S1; and S6. obtaining the acceptable correct matching point pairs between the two images. In the described method, enough correct matching point pairs can be retained while introducing only a small number of incorrect matching point pairs, the amount of calculation needed to eliminate incorrect matching point pairs is greatly reduced, and an accurate and stable transformation relationship between the two images is calculated.

Description

An Image Matching Method Based on Bidirectional Optimal Matching Point Pairs

Technical Field

The invention belongs to the technical field of image processing and in particular relates to an image matching method based on bidirectional optimal matching point pairs.

Background Art
The general process of feature-point-based image matching is: extract features from each image, generate feature descriptors, find several pairs of mutually matching feature points through pairwise comparison of the two images' features (feature points plus descriptors), eliminate incorrect matching point pairs, and compute the transformation relationship between the two images from the retained matching point pairs.

During matching point pair correction, the RANSAC algorithm is often used to eliminate incorrect matching point pairs. The random sample consensus (RANSAC) algorithm can estimate the parameters of a mathematical model from observed data containing abnormal data, eliminating invalid samples and retaining valid samples.
The processing steps of the RANSAC algorithm are as follows:

Step 1: randomly select a subset from the set of matching point pairs;

Step 2: use the matching point pairs in the subset to compute a transformation matrix, such as a homography matrix;

Step 3: use the transformation matrix to check whether each matching point pair in the set conforms to it; pairs that conform are marked as inliers, the rest as outliers;

Step 4: compute the error probability of this iteration from the numbers of inliers and outliers, or compute the mapping error of the inliers under the transformation; if the preset termination condition is not met, return to Step 1 and continue until it is satisfied.

Through the iterative process of the RANSAC algorithm, incorrect matching point pairs are eliminated, and the transformation relationship is computed from the retained matching point pairs.
At present, there are two common approaches to obtaining matching point pairs in the feature matching stage:

Exhaustive method: for each feature point in the target image, traverse every feature point in the original image and take the one with the highest similarity as that feature point's match in the original image, yielding one set of matching point pairs.

Ratio method: the method proposed by SIFT author Lowe. For a given keypoint in the target image, find the two keypoints in the original image closest to it in Euclidean distance; if the closest distance divided by the second-closest distance is below a ratio threshold, the closest pair is accepted as a match.

The first scheme has a low matching accuracy: the resulting set of matching point pairs contains a large number of incorrect matches, which greatly increases the processing complexity and computation required to eliminate them.

The second scheme depends on the choice of threshold. In general, lowering the threshold raises the matching accuracy but reduces the number of matching point pairs, while raising it lowers the accuracy and admits many incorrect pairs. For images of varying quality, selecting an appropriate threshold is a difficult problem. Moreover, for fingerprint images the local regions are highly similar, so the second-closest distance is often close to that of the correct match; no choice of threshold can solve the matching problem for such images.

Therefore, how to retain feature matching point pairs, reduce the search space during matching point correction, and lower the computational complexity is an important problem in feature matching.
Summary of the Invention

In order to solve the above problems, the invention provides an image matching method based on bidirectional optimal matching point pairs. It introduces the concept of a bidirectional optimal matching point pair in the feature matching stage, so that the retained matching point pairs contain as few incorrect pairs and as many correct pairs as possible, thereby reducing the computational complexity of eliminating incorrect matching point pairs and quickly yielding a stable transformation relationship between the two images.

The present invention provides an image matching method based on bidirectional optimal matching point pairs, characterized by the following specific steps:
Step S1: use the exhaustive method to perform bidirectional matching, obtaining the optimal matching point in the target image for each feature point in the original image, and the optimal matching point in the original image for each feature point in the target image, forming two sets of optimal matching point pairs;

Step S2: from the above two sets of optimal matching point pairs, compute the set of bidirectional optimal matching point pairs;

Step S3: use the RANSAC algorithm to eliminate incorrect matches from the set of bidirectional optimal matching point pairs;

Step S4: from the retained bidirectional optimal matching point pairs, compute the transformation relationship between the two images;

Step S5: from the two sets of optimal matching point pairs obtained in Step S1, select the matching point pairs that satisfy the above transformation relationship;

Step S6: the retained bidirectional optimal matching point pairs, together with the optimal matching point pairs satisfying the transformation relationship, constitute the acceptable correct matching point pairs between the two images.
As a further refinement of the present invention, the specific steps of Step S1 are as follows:

Denote the feature point set extracted from the original image as T: T = {t_i}, i = 1, 2, ..., m;

Denote the feature point set extracted from the target image as V: V = {v_j}, j = 1, 2, ..., n;

Denote the similarity between feature point t_i in the original image and feature point v_j in the target image as s_ij.
Then the optimal matching point pairs are defined as:

the pair (t_x, v_y) satisfying s_xy = max_{j=1,2,...,n} s_xj;

the pair (t_x, v_y) satisfying s_xy = max_{i=1,2,...,m} s_iy.

Both of the above may be called optimal matching point pairs. Their meanings are, respectively: the former indicates that v_y is the matching point with the largest similarity to t_x in the target image; the latter indicates that t_x is the matching point with the largest similarity to v_y in the original image.
As a further refinement of the present invention, the specific steps of Step S2 are as follows:
The pairs defined above are the optimal matching point pairs obtained by the exhaustive method under bidirectional matching. On this basis, the bidirectional optimal matching point pair is defined as:

the pair (t_x, v_y) satisfying both s_xy = max_{j=1,2,...,n} s_xj and s_xy = max_{i=1,2,...,m} s_iy.
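The bidirectional optimal matching point pairs defined above can be computed directly from a similarity matrix. A minimal sketch, assuming the similarities s_ij are stored in a NumPy array S whose rows are indexed by original-image features and whose columns are indexed by target-image features:

```python
import numpy as np

def bidirectional_optimal_pairs(S):
    """Return the index pairs (x, y) for which S[x, y] is maximal both in
    row x and in column y - the bidirectional optimal matching point pairs."""
    best_in_row = np.argmax(S, axis=1)   # t_x's best match among the v_j
    best_in_col = np.argmax(S, axis=0)   # v_y's best match among the t_i
    return [(int(x), int(y)) for x, y in enumerate(best_in_row)
            if int(best_in_col[y]) == int(x)]
```

Only mutual (one-to-one) best matches survive, which is what removes the one-to-many pairs produced by plain exhaustive matching.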
As a further refinement of the present invention, the similarity is computed using either the cosine distance or the Euclidean distance; either of these two measures may be used.
The present invention provides an image matching method based on bidirectional optimal matching point pairs, which has the following advantages:

1) Compared with the existing technology, the set of optimal matching point pairs obtained by the exhaustive method contains one-to-many matching point pairs, which are obviously incorrect. The bidirectional optimal matching point pairs defined by the invention contain only one-to-one pairs, so the number of incorrect matching point pairs is significantly reduced, and the ratio method's dependence on threshold selection is avoided.

2) The RANSAC algorithm is used to eliminate incorrect matches from the set of bidirectional optimal matching point pairs. Since RANSAC is a random sampling method, its optimization is based on loop iteration: the number of iterations is uncontrollable, the result is not guaranteed to be consistent without a full-space search, and a full-space search is computationally prohibitive. Therefore, when valid data dominate the matching point pairs and invalid data are rare, determining the parameters and error of the transformation model by least squares or a similar method is faster and more stable. In the set of bidirectional optimal matching point pairs described in this application, the proportion of incorrect pairs is very small, so least squares often suffices to obtain a sufficiently accurate and stable transformation relationship.

3) In the feature matching stage, the invention retains as many correct matching point pairs as possible, greatly reducing the computational complexity of matching point pair correction; after a stable transformation relationship is computed, non-bidirectional optimal matching point pairs are relabeled so that the two images have enough matching point pairs.
Brief Description of the Drawings

Figure 1 is a flow chart of the method of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:
The invention provides an image matching method based on bidirectional optimal matching point pairs. It introduces the concept of a bidirectional optimal matching point pair in the feature matching stage, so that the retained matching point pairs contain as few incorrect pairs and as many correct pairs as possible, thereby reducing the computational complexity of eliminating incorrect matching point pairs and quickly yielding a stable transformation relationship between the two images.

As an embodiment of the invention, the present invention provides an image matching method based on bidirectional optimal matching point pairs as shown in Figure 1, with the following specific steps:

Step S1: use the exhaustive method to perform bidirectional matching, obtaining the optimal matching point in the target image for each feature point in the original image, and the optimal matching point in the original image for each feature point in the target image, forming two sets of optimal matching point pairs.

The specific steps of Step S1 are as follows:

Denote the feature point set extracted from the original image as T: T = {t_i}, i = 1, 2, ..., m;

Denote the feature point set extracted from the target image as V: V = {v_j}, j = 1, 2, ..., n;

Denote the similarity between feature point t_i in the original image and feature point v_j in the target image as s_ij.
Then the optimal matching point pairs are defined as:

the pair (t_x, v_y) satisfying s_xy = max_{j=1,2,...,n} s_xj;

the pair (t_x, v_y) satisfying s_xy = max_{i=1,2,...,m} s_iy.

Both of the above may be called optimal matching point pairs. Their meanings are, respectively: the former indicates that v_y is the matching point with the largest similarity to t_x in the target image; the latter indicates that t_x is the matching point with the largest similarity to v_y in the original image. The similarity is computed using either the cosine distance or the Euclidean distance; either of these two measures may be used.
Step S2: from the above two sets of optimal matching point pairs, compute the set of bidirectional optimal matching point pairs.

The specific steps of Step S2 are as follows:
The pairs defined above are the optimal matching point pairs obtained by the exhaustive method under bidirectional matching. On this basis, the bidirectional optimal matching point pair is defined as:

the pair (t_x, v_y) satisfying both s_xy = max_{j=1,2,...,n} s_xj and s_xy = max_{i=1,2,...,m} s_iy.
Step S3: use the RANSAC algorithm to eliminate incorrect matches from the set of bidirectional optimal matching point pairs.

Step S4: from the retained bidirectional optimal matching point pairs, compute the transformation relationship between the two images.

Step S5: from the two sets of optimal matching point pairs obtained in Step S1, select the matching point pairs that satisfy the above transformation relationship.

Step S6: the retained bidirectional optimal matching point pairs, together with the optimal matching point pairs satisfying the transformation relationship, constitute the acceptable correct matching point pairs between the two images.
RANSAC is a random sampling method whose optimization is based on loop iteration: the number of iterations is uncontrollable, the result is not guaranteed to be consistent without a full-space search, and a full-space search is computationally prohibitive. Therefore, when valid data dominate the matching point pairs and invalid data are rare, the parameters and error of the transformation model can be determined by least squares or a similar method.

In the set of bidirectional optimal matching point pairs of the present invention, the proportion of incorrect matching point pairs is very small, so least squares often suffices to obtain a sufficiently accurate and stable transformation relationship.

The above are only preferred embodiments of the present invention and are not intended to limit it in any other form; any modification or equivalent change made according to the technical essence of the present invention still falls within the scope of protection claimed by the present invention.

Claims (4)

  1. An image matching method based on bidirectional optimal matching point pairs, characterized in that the specific steps are as follows:
    Step S1: perform bidirectional matching by the exhaustive method to obtain, for the feature points of the original image, their optimal matching points in the target image, and, for the feature points of the target image, their optimal matching points in the original image, forming two sets of optimal matching point pairs;
    Step S2: from the above two sets of optimal matching point pairs, compute the set of bidirectional optimal matching point pairs;
    Step S3: use the RANSAC algorithm to eliminate erroneous matching point pairs from the set of bidirectional optimal matching point pairs;
    Step S4: compute the transformation relationship between the two images from the retained bidirectional optimal matching point pairs;
    Step S5: from the two sets of optimal matching point pairs obtained in step S1, select the matching point pairs that satisfy the above transformation relationship;
    Step S6: the retained bidirectional optimal matching point pairs, together with the optimal matching point pairs that satisfy the transformation relationship, constitute the acceptable correct matching point pairs between the two images.
  2. The image matching method based on bidirectional optimal matching point pairs according to claim 1, characterized in that the specific steps of step S1 are as follows:
    Denote the set of feature points extracted from the original image as T: T = {t_i}, i = 1, 2, …, m;
    Denote the set of feature points extracted from the target image as V: V = {v_j}, j = 1, 2, …, n;
    Denote the similarity between feature point t_i in the original image and feature point v_j in the target image as s_ij;
    The optimal matching point pairs are then defined as:
    (t_x → v_y), satisfying s_xy = max_{j=1,2,…,n} s_xj;
    (t_x ← v_y), satisfying s_xy = max_{i=1,2,…,m} s_iy.
    Both (t_x → v_y) and (t_x ← v_y) may be called optimal matching point pairs. Their meanings are: (t_x → v_y) indicates that v_y is the matching point with the largest similarity to t_x in the target image, and (t_x ← v_y) indicates that t_x is the matching point with the largest similarity to v_y in the original image.
  3. The image matching method based on bidirectional optimal matching point pairs according to claim 2, characterized in that the specific steps of step S2 are as follows:
    Regard (t_x → v_y) and (t_x ← v_y) as the optimal matching point pairs obtained by the exhaustive method in bidirectional matching; on this basis, a bidirectional optimal matching point pair is defined as:
    (t_x ↔ v_y), satisfying both s_xy = max_{j=1,2,…,n} s_xj and s_xy = max_{i=1,2,…,m} s_iy.
  4. The image matching method based on bidirectional optimal matching point pairs according to claim 2, characterized in that the similarity is calculated using the cosine distance or the Euclidean distance.
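The mutual-best selection of claim 3 and the similarity measures of claim 4 can be illustrated with a short sketch. The function names are illustrative, not from the patent, and the Euclidean variant converts distance into a similarity score so that "larger means more similar" holds for both measures.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptors (larger = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_similarity(a, b):
    """Map Euclidean distance to a similarity in (0, 1]."""
    return float(1.0 / (1.0 + np.linalg.norm(a - b)))

def bidirectional_best_matches(sim):
    """Step S2 sketch: from an m x n similarity matrix sim[i, j] between
    original-image and target-image descriptors, return the (i, j) pairs
    that are each other's most similar match, i.e. (t_i <-> v_j)."""
    best_for_row = sim.argmax(axis=1)    # best v_j for each t_i
    best_for_col = sim.argmax(axis=0)    # best t_i for each v_j
    return [(i, int(j)) for i, j in enumerate(best_for_row)
            if best_for_col[j] == i]
```

A pair survives only if the maximum along its row and the maximum along its column coincide, which is exactly the two-condition definition in claim 3.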
PCT/CN2020/120449 2020-10-10 2020-10-12 Image matching method based on bidirectional optimal matching point pairs WO2022073249A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011079952.9A CN112364879A (en) 2020-10-10 2020-10-10 Image matching method based on bidirectional optimal matching point pair
CN202011079952.9 2020-10-10

Publications (1)

Publication Number Publication Date
WO2022073249A1 true WO2022073249A1 (en) 2022-04-14

Family

ID=74507624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120449 WO2022073249A1 (en) 2020-10-10 2020-10-12 Image matching method based on bidirectional optimal matching point pairs

Country Status (2)

Country Link
CN (1) CN112364879A (en)
WO (1) WO2022073249A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120148164A1 (en) * 2010-12-08 2012-06-14 Electronics And Telecommunications Research Institute Image matching devices and image matching methods thereof
CN103400388A (en) * 2013-08-06 2013-11-20 中国科学院光电技术研究所 Method for eliminating Brisk (binary robust invariant scale keypoint) error matching point pair by utilizing RANSAC (random sampling consensus)
CN110009670A (en) * 2019-03-28 2019-07-12 上海交通大学 The heterologous method for registering images described based on FAST feature extraction and PIIFD feature
CN111104922A (en) * 2019-12-30 2020-05-05 深圳纹通科技有限公司 Feature matching algorithm based on ordered sampling
CN111144338A (en) * 2019-12-30 2020-05-12 深圳纹通科技有限公司 Feature matching algorithm based on feature point topological structure

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116891B (en) * 2013-03-12 2015-08-12 上海海事大学 A kind of remote sensing image registration method based on two-way neighborhood filtering policy
US9400939B2 (en) * 2014-04-13 2016-07-26 International Business Machines Corporation System and method for relating corresponding points in images with different viewing angles
CN104596519B (en) * 2015-02-17 2017-06-13 哈尔滨工业大学 Vision positioning method based on RANSAC algorithms


Also Published As

Publication number Publication date
CN112364879A (en) 2021-02-12


Legal Events

- NENP — Non-entry into the national phase; Ref country code: DE
- 122 — EP: PCT application non-entry in European phase; Ref document number: 20956564; Country of ref document: EP; Kind code of ref document: A1