WO2022147774A1 - Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm - Google Patents

Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm

Info

Publication number
WO2022147774A1
WO2022147774A1 (PCT/CN2021/070900, CN2021070900W)
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
image
intersections
probability
feature
Prior art date
Application number
PCT/CN2021/070900
Other languages
English (en)
French (fr)
Inventor
He Zaixing
Shen Chentao
Zhao Xinyue
Original Assignee
Zhejiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University filed Critical Zhejiang University
Priority to US18/028,528 priority Critical patent/US20230360262A1/en
Priority to PCT/CN2021/070900 priority patent/WO2022147774A1/zh
Publication of WO2022147774A1 publication Critical patent/WO2022147774A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/77Determining position or orientation of objects or cameras using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing

Definitions

  • The invention relates to the field of machine vision, and in particular to an object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm.
  • Object pose recognition has always been an important research direction in machine vision and industrial automation. It is required in many application scenarios, such as autonomous robot operation in unstructured environments, augmented reality, and virtual assembly.
  • For object pose recognition, the most common method is to extract image feature points (such as SIFT or SURF) from the template image and the actual image and match them. Since feature point matching usually produces incorrect matches, the RANSAC algorithm is then used to select 4 pairs of correctly matched feature points, from which the spatial transformation matrix between the two images is accurately calculated, thereby yielding the pose of the object.
  • The RANSAC algorithm works as follows: in each iteration, 4 matched feature point pairs are randomly selected to calculate a spatial transformation matrix; when a large number of the remaining feature point pairs conform to this matrix's transformation relationship within a given error range, the corresponding 4 point pairs are considered correct matches and the matrix is the required transformation.
  • To address this, the present patent proposes a new object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm.
  • The main idea is to construct a topological network of all feature points on each image by triangulation and to compare the differences between the two images' networks, so as to estimate the probability that each feature point pair is mismatched. This probability weights the random sampling of the RANSAC algorithm: the higher a pair's error probability, the smaller its chance of being among the 4 pairs randomly selected to calculate the transformation matrix.
  • This method can effectively improve the efficiency and success rate of computing the transformation matrix and of the subsequent object pose recognition.
  • The present invention proposes an object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm. It introduces a new triangulation-based method for probability-weighting feature point pairs and an improved RANSAC method, making it easier to select correct feature point pairs and meeting the needs of practical applications.
  • The technical scheme of the present invention comprises the following steps:
  • Step 1: Image acquisition. For an actual object placed in a real environment, photograph it with a physical camera to obtain the actual image; for an object model imported into a computer virtual scene, photograph it with a virtual camera to obtain the template image. Extract the foreground part of both the input actual image and the template image.
  • Step 2: Image feature point detection and matching. Use the SIFT algorithm to detect feature points in the actual image and the template image, and match the feature points between the two images.
  • Step 3: Triangulation. In the actual image, select the feature points successfully matched in step 2 and number the feature point pairs they belong to. Triangulate these feature points and record the feature point numbers of the vertices of each triangle. In the model image, reconnect the points into triangles according to those numbers.
  • Step 4: Count intersections. For each feature point in the model image, count the intersections of every line segment emanating from it; then sum these counts and divide by the number of emanating segments to obtain the average number of intersections per segment, called the intersection count of the feature point.
  • Step 5: Probability assignment. Sort the feature points of the model image by their intersection counts, from low to high. Compute a score for each feature point as the highest intersection count minus that point's intersection count. The probability of each feature point is its score divided by the total score of all feature points, so the probabilities of all feature points sum to 1.
  • Step 6: Probability-weighted RANSAC algorithm.
  • Select 4 feature points according to probability, and calculate the spatial transformation matrix T from the coordinates of these 4 feature points and their matched counterparts. For each feature point of the actual image, multiply its coordinates (x1, y1) by the matrix T to obtain the pose-transformed coordinates (x1', y1'); the Euclidean distance between these and the coordinates (x2, y2) of the corresponding feature point on the model image is the spatial-transformation deviation of that feature point pair.
  • Deviation analysis: if the deviation e of a feature point pair is less than the threshold, the correspondence is successful. If the number of successfully corresponding point pairs exceeds the set number, the spatial transformation matrix is a feasible solution; otherwise, repeat step 6 until a feasible solution appears, or stop automatically after a set number of iterations.
  • The present invention solves the problems of low efficiency and low success rate of traditional pose recognition methods when the feature point error rate is relatively high.
  • The present invention improves the original RANSAC algorithm: through probability weighting, RANSAC selects points of relatively high correctness more efficiently, improving its effectiveness.
  • The present invention extends methods for eliminating mismatched feature point pairs, using triangulation and intersection counting to remove mismatched feature points.
  • The present invention solves the problem of low efficiency and accuracy of traditional pose recognition methods when the surrounding environment is complex or contains prominent features; it can effectively exclude interfering features in the environment and improve recognition efficiency and accuracy.
  • Fig. 1 is the flow chart of the method of the present invention
  • Figure 2 is a flow chart of the probability weighted RANSAC algorithm.
  • Fig. 3 is the result diagram of triangulation and connection
  • FIG. 4 is an explanatory diagram of the calculation of the number of intersections
  • Figure 5 is an example of probability distribution
  • FIG. 6 is a schematic diagram of feature point selection.
  • Figure 7 is a schematic diagram of feature point error calculation.
  • FIG. 8 is a schematic diagram of selected cases and coordinate axes.
  • The flow chart of the present invention is shown in FIG. 1.
  • The embodiment is carried out on a book placed in different poses.
  • Step 1: For an actual object placed in a real environment, photograph it with a physical camera to obtain the actual image; for an object model imported into a computer virtual scene, photograph it with a virtual camera to obtain the template image; extract the foreground part of both the input actual image and the template image.
  • Step 2: Image feature point detection and matching. Use the SIFT algorithm to detect feature points in the actual image and the template image, and match the feature points between the two images.
  • Step 3: Triangulation. Use the successfully matched feature points to perform triangulation, record the feature point number of each triangle vertex, and reconnect the points into triangles in the model image according to those numbers.
  • In FIG. 3, the left side is the triangulation result of the actual image and the right side is the model image reconnected according to the point numbers.
  • Step 4: Count intersections. For each feature point in the model image, determine the intersections between the segments emanating from it and the other segments, and take the average number of intersections per emanating segment as the intersection count of the feature point.
  • The feature point in FIG. 4 emanates 7 line segments, which intersect the other segments 2 times in total, giving an average intersection count of 2/7 ≈ 0.286.
  • Step 5: Probability assignment. Sort the feature points of the model image by their intersection counts, from low to high. Then assign a probability value to the feature point pair corresponding to each feature point: the fewer the intersections, the higher the probability.
  • Figure 5 is a schematic diagram of probability assignment.
  • In step 6:
  • Feature points are selected according to probability, as shown in Figure 6.
  • A spatial transformation matrix T is calculated from the coordinates of the 4 selected feature points and their matched counterparts. For each feature point of the actual image, the deviation of its spatial transformation is calculated.
  • Figure 7 is a schematic diagram of the deviation calculation.
  • Deviation analysis: the deviation e of each feature point pair is analyzed; if it is less than the threshold, the correspondence is successful. If the number of successfully corresponding point pairs exceeds the set number, the spatial transformation matrix is a feasible solution; otherwise, repeat step 6 until a feasible solution appears, or stop automatically after a set number of iterations.
  • Let the points of the model image lie on the plane aX + bY + cZ = d, with normal vector nᵀ, and let the camera intrinsic parameter matrix be K.
  • The translation vector gives the translation distance; from the rotation matrix R, the rotation angles around the three axes are obtained.
  • Relative to the model, the object is rotated -16.7° around the x-axis, -29.8° around the y-axis, and -36.7° around the z-axis, and translated about 9 cm in the x direction, about 12 cm in the y direction, and about 9.5 cm in the z direction, close to the actual measurement results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

An object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm. First, an actual image and a template image are captured as input, the foreground parts of the input images are extracted, feature points are extracted with the SIFT algorithm, feature point matching between the actual image and the template image is performed, and each pair of feature points is numbered. Next, the actual image is triangulated, the feature point numbers of the vertices of each triangle are recorded, and the points are reconnected into triangles in the model image according to their numbers. Then, a probability is assigned to each feature point according to the intersections of the line segments around it. Finally, when the RANSAC algorithm is run, 4 feature points are selected according to probability to compute the spatial transformation matrix, and the error produced by this matrix is evaluated; when the error satisfies the set conditions, the matrix is the desired spatial transformation matrix. This method computes an accurate object pose with higher efficiency and can meet the needs of practical applications.

Description

Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm

Technical Field

The present invention relates to the field of machine vision, and in particular to an object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm.

Background Art

Object pose recognition has always been an important research direction in machine vision and industrial automation. It is required in many application scenarios, such as autonomous robot operation in unstructured environments, augmented reality, and virtual assembly.

For object pose recognition, the most common method is to extract image feature points (such as SIFT or SURF) from the template image and the actual image and match them. Since feature point matching usually produces incorrect matches, the RANSAC algorithm is then used to select 4 pairs of correctly matched feature points, from which the spatial transformation matrix between the two images is accurately calculated and the object pose derived. The RANSAC algorithm proceeds as follows: in each iteration, 4 matched feature point pairs are randomly selected and a spatial transformation matrix is calculated from them; if a large number of the remaining feature point pairs also conform to this matrix's transformation relationship within a given error range, the 4 selected pairs are considered correct matches and the matrix is the required transformation. Otherwise, 4 point pairs are randomly selected again, until correct pairs are chosen and the correct transformation matrix is computed. This method works well when the proportion of incorrect matches is low. When the proportion of incorrect matches is high (over 50%), many iterations are needed to compute the correct transformation matrix, which seriously degrades the efficiency of object pose recognition. When the proportion rises further (over 80%), the correct transformation matrix cannot be computed in a reasonable time.

To address this problem, the present patent proposes a new object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm. The main idea is to construct, by triangulation, a topological network of all feature points on each image, and to compare the differences between the two images' topological networks so as to estimate the probability that each feature point pair is mismatched. This probability weights the random sampling of the RANSAC algorithm: the higher a pair's error probability, the smaller its chance of being among the 4 pairs randomly selected to calculate the transformation matrix. This method can effectively improve the efficiency and success rate of computing the transformation matrix and of the subsequent object pose recognition.
Summary of the Invention

To overcome the shortcomings of the above methods when the feature point pair error rate is high, the present invention proposes an object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm. It introduces a new triangulation-based method for probability-weighting feature point pairs and improves the RANSAC method, so that correct feature point pairs can be selected more easily, meeting the needs of practical applications.

As shown in Fig. 1, the technical solution of the present invention comprises the following steps:

Step 1: Image acquisition. For an actual object placed in a real environment, photograph it with a physical camera to obtain the actual image; for an object model imported into a computer virtual scene, photograph it with a virtual camera to obtain the template image; extract the foreground part of both the input actual image and the template image.

Step 2: Image feature point detection and matching. Use the SIFT algorithm to detect feature points in the actual image and the template image, and match the feature points between the two images.
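As an illustration of this step, here is a minimal sketch using OpenCV's SIFT implementation (the helper name, the brute-force matcher, and the 0.75 ratio-test threshold are illustrative choices, not prescribed by the patent):

```python
import cv2

def detect_and_match(actual_img, template_img, ratio=0.75):
    """Detect SIFT feature points in both images and match them."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(actual_img, None)
    kp2, des2 = sift.detectAndCompute(template_img, None)
    # Brute-force matching with Lowe's ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    # Matched point pairs share the same index (number) in both lists.
    pts_actual = [kp1[m.queryIdx].pt for m in good]
    pts_model = [kp2[m.trainIdx].pt for m in good]
    return pts_actual, pts_model
```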
Step 3: Triangulation. In the actual image, select the feature points successfully matched in step 2 and number the feature point pairs they belong to. Triangulate these feature points and record the feature point numbers of the vertices of each triangle. In the model image, find the corresponding number of each feature point and reconnect the points into triangles according to those numbers.
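The patent does not mandate a particular triangulation algorithm; a Delaunay triangulation, as sketched below with SciPy, is one common choice (the helper name is illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate(pts_actual):
    """Triangulate the matched feature points of the actual image.

    Because matched points carry the same numbers in both images, the
    returned index triples also define the reconnected triangles in the
    model image."""
    tri = Delaunay(np.asarray(pts_actual))
    return tri.simplices  # (n_triangles, 3) array of feature point numbers
```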
Step 4: Count intersections.

4.1. The line segment emanating from a feature point (a1, b1) in the model image can be represented by a vector m = (a, b); it must then be determined whether this segment intersects the other segments. First perform a quick rejection test: let the two endpoints of the first segment be A(ax, ay) and B(bx, by), and the two endpoints of the second segment be C(cx, cy) and D(dx, dy). If max(ax, bx) < min(cx, dx), or max(ay, by) < min(cy, dy), or max(cx, dx) < min(ax, bx), or max(cy, dy) < min(ay, by), the segments can immediately be judged not to intersect.

Next, connect the four points with vectors; the segments intersect if both of the following conditions hold:

(AB × AC)(AB × AD) < 0

(CD × CA)(CD × CB) < 0

where AB, AC, AD, CA, CB, CD denote the vectors joining the corresponding endpoints and × is the scalar two-dimensional cross product.

Finally, if they intersect, increment the intersection count of this segment by 1; after traversing all other segments, the total number of times this segment intersects the others is obtained.

4.2. After the intersection counts of all segments emanating from the feature point have been found, sum them and divide by the number of segments to obtain the average number of intersections per segment, which is called the intersection count of the feature point.
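A sketch of the counting in steps 4.1 and 4.2, assuming the triangulation edges are given as pairs of feature point numbers (all helper names are illustrative):

```python
def segments_intersect(A, B, C, D):
    """Quick bounding-box rejection followed by the cross-product straddle test."""
    if (max(A[0], B[0]) < min(C[0], D[0]) or max(A[1], B[1]) < min(C[1], D[1])
            or max(C[0], D[0]) < min(A[0], B[0]) or max(C[1], D[1]) < min(A[1], B[1])):
        return False  # bounding boxes do not overlap, so no intersection

    def cross(o, p, q):
        # Scalar 2-D cross product of the vectors o->p and o->q.
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

    # C and D must lie on opposite sides of AB, and A and B on opposite sides of CD.
    return (cross(A, B, C) * cross(A, B, D) < 0
            and cross(C, D, A) * cross(C, D, B) < 0)

def intersection_count(point_id, edges, pts_model):
    """Average number of intersections per segment emanating from one feature point."""
    own = [e for e in edges if point_id in e]
    others = [e for e in edges if point_id not in e]
    total = sum(segments_intersect(pts_model[a], pts_model[b],
                                   pts_model[c], pts_model[d])
                for a, b in own for c, d in others)
    return total / len(own) if own else 0.0
```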
Step 5: Probability assignment. Sort the feature points of the model image by their intersection counts, from low to high. Compute a score for each feature point, taking the highest intersection count minus that point's intersection count as the score. The probability of each feature point is its score divided by the total score of all feature points, and the probabilities of all feature points sum to 1.
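Expressed in code, the scoring rule might look as follows (the uniform fallback for the degenerate case in which all counts are equal is an added assumption that the patent does not discuss):

```python
def assign_probabilities(counts):
    """Score = highest count - own count; probability = score / total score."""
    top = max(counts)
    scores = [top - c for c in counts]
    total = sum(scores)
    if total == 0:  # all feature points tie; fall back to a uniform distribution
        return [1.0 / len(counts)] * len(counts)
    return [s / total for s in scores]
```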
Step 6: Probability-weighted RANSAC algorithm.

6.1. Select 4 feature points according to probability. Turn each feature point's probability into an interval of that length and concatenate the intervals into the range 0-1. Generate a random number in 0-1; the interval it falls into determines the selected feature point, and duplicates are re-drawn.
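A sketch of this interval-based draw (names are illustrative; itertools.accumulate builds the concatenated 0-1 interval):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def weighted_pick_4(probabilities):
    """Draw 4 distinct feature point indices, each with its assigned probability."""
    boundaries = list(accumulate(probabilities))  # right edges of the 0-1 intervals
    chosen = set()
    while len(chosen) < 4:
        r = random.random()
        # The interval containing r selects the feature point; duplicates are re-drawn.
        chosen.add(bisect_left(boundaries, r))
    return list(chosen)
```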
6.2. Pose calculation and deviation computation. Calculate the spatial transformation matrix T from the coordinates of the 4 feature points and their matched counterparts. For each feature point of the actual image, multiply its coordinates (x1, y1) by the matrix T to obtain the pose-transformed coordinates (x1', y1'); the Euclidean distance between these and the coordinates (x2, y2) of the corresponding feature point on the model image is the spatial-transformation deviation of this feature point pair:

e = √((x1' − x2)² + (y1' − y2)²)

6.3. Deviation analysis. For each feature point pair, if the deviation e is less than the threshold, the correspondence is successful. If the number of successfully corresponding point pairs exceeds the set number, the spatial transformation matrix is a feasible solution; otherwise, repeat step 6 until a feasible solution appears, or stop automatically after a set number of iterations.
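Combining steps 6.1 to 6.3, one possible sketch of the weighted loop (the thresholds and the use of cv2.getPerspectiveTransform for the 4-point transformation are illustrative assumptions):

```python
import cv2
import numpy as np

def weighted_ransac(pts_actual, pts_model, probabilities,
                    e_thresh=3.0, min_inliers=20, max_iters=1000):
    """Probability-weighted RANSAC for the spatial transformation matrix T."""
    src = np.float32(pts_actual)
    dst = np.float32(pts_model)
    for _ in range(max_iters):
        idx = weighted_pick_4(probabilities)                  # step 6.1
        T = cv2.getPerspectiveTransform(src[idx], dst[idx])   # step 6.2
        proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), T).reshape(-1, 2)
        e = np.linalg.norm(proj - dst, axis=1)                # per-pair deviation
        if np.count_nonzero(e < e_thresh) >= min_inliers:     # step 6.3
            return T
    return None  # no feasible solution within the iteration budget
```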
The flow is shown in Fig. 2.

The beneficial effects of the present invention are:

1) The present invention solves the problems of low efficiency and low success rate of traditional pose recognition methods when the feature point error rate is relatively high.

2) The present invention improves the original RANSAC algorithm: through probability weighting, RANSAC selects points of relatively high correctness more efficiently, improving its effectiveness.

3) The present invention extends methods for eliminating mismatched feature point pairs, using triangulation and intersection counting to remove mismatched feature points.

4) The present invention solves the problem of low efficiency and accuracy of traditional pose recognition methods when the surrounding environment is complex or contains prominent features; with the present invention, interfering features in the environment can be effectively excluded, improving recognition efficiency and accuracy.
Brief Description of the Drawings

Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the flow chart of the probability-weighted RANSAC algorithm;
Fig. 3 shows the triangulation and reconnection results;
Fig. 4 illustrates the calculation of the intersection count;
Fig. 5 is an example of probability assignment;
Fig. 6 is a schematic diagram of feature point selection;
Fig. 7 is a schematic diagram of feature point deviation calculation;
Fig. 8 is a schematic diagram of the selected case and coordinate axes.
Detailed Description of the Embodiments

The present invention is further described below with reference to the drawings and embodiments. The flow chart of the present invention is shown in Fig. 1.

The specific embodiment of the present invention and its implementation process are as follows:

The embodiment is carried out on a book placed in different poses.

Step 1: For an actual object placed in a real environment, photograph it with a physical camera to obtain the actual image; for an object model imported into a computer virtual scene, photograph it with a virtual camera to obtain the template image; extract the foreground part of both the input actual image and the template image.

Step 2: Image feature point detection and matching. Use the SIFT algorithm to detect feature points in the actual image and the template image, and match the feature points between the two images.

Step 3: Triangulation. In the actual image, use the successfully matched feature points to perform triangulation, record the feature point number of each triangle vertex, and reconnect the points into triangles in the model image according to those numbers.

As shown in Fig. 3, the left side is the triangulation result of the actual image, and the right side is the model image reconnected according to the point numbers.

Step 4: Count intersections. For each feature point in the model image, determine the intersections between the segments emanating from it and the other segments, compute the average number of intersections per emanating segment, and take this as the intersection count of the feature point.

As shown in Fig. 4, the feature point emanates 7 line segments, which intersect the other segments 2 times in total, so the average intersection count of its segments is 2/7 ≈ 0.286.

Step 5: Probability assignment. Sort the feature points of the model image by their intersection counts, from low to high. Then assign a probability value to the feature point pair corresponding to each feature point: the fewer the intersections, the higher the probability, and the probabilities sum to 1.

Fig. 5 is a schematic diagram of the probability assignment.
In step 6:

6.1. Select 4 feature points according to probability. Turn each feature point's probability into an interval of that length and concatenate the intervals into the range 0-1. Generate a random number in 0-1; the interval it falls into determines the selected feature point, and duplicates are re-drawn.

Feature points are selected as shown in Fig. 6.

6.2. Pose calculation and deviation computation. Calculate the spatial transformation matrix T from the coordinates of the 4 feature points and their matched counterparts. For each feature point of the actual image, the deviation of its spatial transformation is calculated.

Fig. 7 is a schematic diagram of the deviation calculation.

6.3. Deviation analysis. The deviation e of each feature point pair is analyzed; if it is less than the threshold, the correspondence is successful. If the number of successfully corresponding point pairs exceeds the set number, the spatial transformation matrix is a feasible solution; otherwise, repeat step 6 until a feasible solution appears, or stop automatically after a set number of iterations.

For this example (Fig. 8), after the above steps, the computed spatial transformation matrix is:
[Numerical homography matrix given as an image in the original publication]
This matrix is a homography; by decomposing it, the rotation matrix R and the object translation vector t can be computed. Let the points of the model image lie on the plane aX + bY + cZ = d, with normal vector nᵀ, and let the camera intrinsic parameter matrix be K:

H = K (R + t nᵀ / d) K⁻¹
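In practice this decomposition is available as cv2.decomposeHomographyMat; the sketch below builds on the earlier helpers, and the intrinsic values are purely illustrative:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # illustrative camera intrinsic matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = weighted_ransac(pts_actual, pts_model, probabilities)  # matrix T from step 6

# Up to four (R, t, n) candidates are returned; the physically valid one is
# usually chosen by checking that scene points lie in front of the camera.
num, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)
```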
The translation vector gives the translation distance; for the rotation matrix R, the rotation angles around the three axes are:

θx = atan2(R32, R33)

θy = atan2(−R31, √(R32² + R33²))

θz = atan2(R21, R11)
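These three formulas translate directly into code (a sketch operating on one rotation candidate R from the decomposition above):

```python
import math

def rotation_angles_deg(R):
    """Rotation angles (degrees) around the x, y, and z axes from a 3x3 matrix R."""
    theta_x = math.atan2(R[2][1], R[2][2])
    theta_y = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))
    theta_z = math.atan2(R[1][0], R[0][0])
    return [math.degrees(t) for t in (theta_x, theta_y, theta_z)]
```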
Finally, the calculation shows that, relative to the model, the object is rotated −16.7° around the x-axis, −29.8° around the y-axis, and −36.7° around the z-axis, and translated about 9 cm in the x direction, about 12 cm in the y direction, and about 9.5 cm in the z direction, which is close to the actual measurement results.

The above is only a specific embodiment of the present invention, but the technical features of the present invention are not limited thereto. Any simple variation, equivalent substitution, or modification made on the basis of the present invention to solve substantially the same technical problem and achieve substantially the same technical effect falls within the protection scope of the present invention.

Claims (5)

  1. An object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm, characterized in that it comprises:
    Step 1: Image acquisition: for an actual object placed in a real environment, photograph it with a physical camera to obtain the actual image; for an object model imported into a computer virtual scene, photograph it with a virtual camera to obtain the template image; extract the foreground part of both the input actual image and the template image;
    Step 2: Image feature point detection and matching: use the SIFT algorithm to detect feature points in the actual image and the template image, and match the feature points between the two images;
    Step 3: Triangulation: in the actual image, use the successfully matched feature points to perform triangulation, record the feature point number of each triangle vertex, and reconnect the points into triangles in the model image according to those numbers; specifically: in the actual image, select the feature points successfully matched in step 2 and number the feature point pairs they belong to; triangulate these feature points and record the feature point numbers of the vertices of each triangle; in the model image, find the corresponding number of each feature point and reconnect the points into triangles according to those numbers;
    Step 4: Intersection counting: for each feature point in the model image, determine the intersections between the line segments emanating from it and the other segments, compute the average number of intersections per emanating segment, and take this as the intersection count of the feature point;
    Step 5: Probability assignment: sort the feature points of the model image by their intersection counts from low to high, then assign a probability value to the feature point pair corresponding to each feature point: the fewer the intersections, the higher the probability, and the probabilities sum to 1;
    Step 6: Probability-weighted RANSAC algorithm: select 4 feature point pairs according to their probabilities and calculate the spatial transformation matrix; then multiply the coordinates of each feature point in the actual image by this matrix to obtain its pose-transformed coordinates, and compute the deviation from the coordinates of the corresponding feature point in the model image; if the pose transformation matrix is correct, the transformed feature points will substantially coincide with the corresponding feature points of the model image; then check whether the deviation meets the requirement: if so, the spatial transformation matrix is a solution; if not, repeat the preceding part of step 6 until a solution is computed or the number of iterations reaches a set limit.
  2. The object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm according to claim 1, characterized in that in step 4, the intersections between the segments emanating from each feature point in the model image and the other segments are determined as follows: the line segment emanating from a feature point (a1, b1) in the model image can be represented by a vector m = (a, b); it is then determined whether this segment intersects the other segments; first perform a quick rejection test: let the two endpoints of the first segment be A(ax, ay) and B(bx, by), and the two endpoints of the second segment be C(cx, cy) and D(dx, dy); if max(ax, bx) < min(cx, dx), or max(ay, by) < min(cy, dy), or max(cx, dx) < min(ax, bx), or max(cy, dy) < min(ay, by), the segments can immediately be judged not to intersect;
    next, connect the four points with vectors; the segments intersect if both of the following conditions hold:
    (AB × AC)(AB × AD) < 0
    (CD × CA)(CD × CB) < 0
    where AB, AC, AD, CA, CB, CD denote the vectors joining the corresponding endpoints and × is the scalar two-dimensional cross product;
    finally, if they intersect, increment the intersection count of this segment by 1; after traversing all other segments, the total number of times this segment intersects the others is obtained; traversing all segments emanating from the feature point in this way gives the intersections between the feature point's segments and the other segments.
  3. The object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm according to claim 1, characterized in that in step 4, the average number of intersections per emanating segment is computed and taken as the intersection count of the feature point, specifically: for a given feature point, find all segments emanating from it and use the above method to count the intersections of each such segment with all other segments; after the intersection counts of all emanating segments have been found, sum them and divide by the number of emanating segments to obtain the average number of intersections per segment, which is called the intersection count of the feature point.
  4. The object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm according to claim 1, characterized in that in step 5, a probability value is assigned to the feature point pair corresponding to each feature point, with fewer intersections giving a higher probability and the probabilities summing to 1, specifically: compute a score from each feature point's intersection count in step 4, taking the highest intersection count minus the feature point's intersection count as the score; the probability of each feature point is its score divided by the total score of all feature points.
  5. The object pose recognition method based on triangulation and a probability-weighted RANSAC algorithm according to claim 1, characterized in that step 6 specifically comprises:
    6.1. Select 4 feature points according to probability: turn each feature point's probability into an interval of that length and concatenate the intervals into the range 0-1; generate a random number in 0-1; the interval it falls into determines the selected feature point, and duplicates are re-drawn;
    6.2. Pose calculation and deviation computation: calculate the spatial transformation matrix T from the coordinates of the 4 feature points and their matched counterparts; for each feature point of the actual image, multiply its coordinates (x1, y1) by the matrix T to obtain the pose-transformed coordinates (x1', y1'); the Euclidean distance between these and the coordinates (x2, y2) of the corresponding feature point on the model image is the spatial-transformation deviation of this feature point pair:
    e = √((x1' − x2)² + (y1' − y2)²)
    6.3. Deviation analysis: for each feature point pair, if the deviation e is less than the threshold, the correspondence is successful; if the number of successfully corresponding point pairs exceeds the set number, the spatial transformation matrix is a feasible solution; otherwise, repeat step 6 until a feasible solution appears, or stop automatically after a set number of iterations.
PCT/CN2021/070900 2021-01-08 2021-01-08 Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm WO2022147774A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/028,528 US20230360262A1 (en) 2021-01-08 2021-01-08 Object pose recognition method based on triangulation and probability weighted ransac algorithm
PCT/CN2021/070900 WO2022147774A1 (zh) 2021-01-08 2021-01-08 Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/070900 WO2022147774A1 (zh) 2021-01-08 2021-01-08 Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm

Publications (1)

Publication Number Publication Date
WO2022147774A1 (zh)

Family

ID=82357802

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/070900 WO2022147774A1 (zh) Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm 2021-01-08 2021-01-08

Country Status (2)

Country Link
US (1) US20230360262A1 (zh)
WO (1) WO2022147774A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190362178A1 (en) * 2017-11-21 2019-11-28 Jiangnan University Object Symmetry Axis Detection Method Based on RGB-D Camera
CN109493384A (zh) * 2018-09-20 2019-03-19 顺丰科技有限公司 Camera pose estimation method, system, device and storage medium
CN110147809A (zh) * 2019-03-08 2019-08-20 亮风台(北京)信息科技有限公司 Image processing method and apparatus, storage medium and image device
CN111553395A (zh) * 2020-04-21 2020-08-18 江苏航空职业技术学院 Pose estimation method based on combined matching of feature points in a calibrated region
CN111768478A (zh) * 2020-07-13 2020-10-13 腾讯科技(深圳)有限公司 Image synthesis method, apparatus, storage medium and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036623A (zh) * 2023-10-08 2023-11-10 长春理工大学 Matching point screening method based on triangulation
CN117036623B (zh) * 2023-10-08 2023-12-15 长春理工大学 Matching point screening method based on triangulation

Also Published As

Publication number Publication date
US20230360262A1 (en) 2023-11-09

Similar Documents

Publication Publication Date Title
Xu et al. Pose estimation from line correspondences: A complete analysis and a series of solutions
CN106204574B Camera pose self-calibration method based on planar motion features of a target
CN108022262A Point cloud registration method based on point neighborhood centroid vector features
CN107358629B Indoor mapping and localization method based on object recognition
CN109159113B Robot operation method based on visual reasoning
Yu et al. Robust robot pose estimation for challenging scenes with an RGB-D camera
CN105740899A Optimization method combining machine vision image feature point detection and matching
CN109886124B Texture-less metal part grasping method based on line-bundle descriptor image matching
CN113393524B Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112364881B Advanced sampling consistency image matching method
Zhou et al. Vision-based pose estimation from points with unknown correspondences
WO2022147774A1 (zh) Object pose recognition method based on triangulation and probability-weighted RANSAC algorithm
CN110727817A t-CNN-based 3D model retrieval method, terminal device and storage medium
CN110838146A Method, system, apparatus and medium for matching corresponding points under coplanar cross-ratio constraints
Zhang et al. A visual-inertial dynamic object tracking SLAM tightly coupled system
CN116862984A Spatial pose estimation method for a camera
JP2014102805A Information processing apparatus, information processing method, and program
Jiao et al. Robust localization for planar moving robot in changing environment: A perspective on density of correspondence and depth
Zhang et al. An improved SLAM algorithm based on feature contour extraction for camera pose estimation
CN113537309B Object recognition method, apparatus and electronic device
JP2011174891A Position and orientation measurement apparatus, position and orientation measurement method, and program
Wang et al. Robot grasping in dense clutter via view-based experience transfer
Chen et al. A Framework for 3D Object Detection and Pose Estimation in Unstructured Environment Using Single Shot Detector and Refined LineMOD Template Matching
Zhao et al. Dmvo: A multi-motion visual odometry for dynamic environments
Meng et al. Prob-slam: real-time visual slam based on probabilistic graph optimization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21916830

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21916830

Country of ref document: EP

Kind code of ref document: A1