WO2019100647A1 - Object symmetry axis detection method based on RGB-D camera - Google Patents

Object symmetry axis detection method based on RGB-D camera

Info

Publication number
WO2019100647A1
PCT/CN2018/083260
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
candidate
cloud data
point
symmetry
Prior art date
Application number
PCT/CN2018/083260
Other languages
English (en)
French (fr)
Inventor
黄敏
李浩
朱启兵
郭亚
Original Assignee
江南大学 (Jiangnan University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江南大学 (Jiangnan University)
Priority to US16/093,658 (US10607106B2)
Priority to AU2018370629A (AU2018370629B2)
Publication of WO2019100647A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/68 - Analysis of geometric attributes of symmetry
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 - Distances to closest patterns, e.g. nearest neighbour classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 - Stereo camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30128 - Food products
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/68 - Food, e.g. fruit or vegetables

Definitions

  • The invention relates to the field of image processing, in particular to an object symmetry axis detection method based on an RGB-D camera.
  • Fruit picking robots can automatically detect and pick fruit, and are widely used because of their high efficiency and degree of automation.
  • The picking action of a fruit picking robot depends on the accurate detection and positioning of the fruit by its visual inspection system.
  • The literature indicates that picking efficiency can be improved if the fruit is rotated and twisted in a specific manner with respect to the direction of the fruit and the stem. Therefore, to further improve picking efficiency, the accuracy with which the fruit picking robot detects the fruit symmetry axis needs to be improved.
  • The most common methods for detecting the symmetry axis of fruit are finding the symmetry axis from surface curvature variation, learning-based natural image detection, and natural-image symmetry axis detection based on edge feature learning.
  • However, they still have many problems: the 3D point cloud used must be very accurate, the learning time becomes very long as the point cloud data increases, and it is inconvenient to detect the fruit symmetry axis from images taken in natural scenes.
  • In view of this, the inventors have proposed an object symmetry axis detection method based on an RGB-D camera, which can accurately detect the symmetry axis of an object in a natural scene.
  • A method for detecting the symmetry axis of an object based on an RGB-D camera, comprising:
  • The depth image is mapped into the pixel coordinate system of the color image to obtain an aligned depth image, and the color image and the aligned depth image are processed into three-dimensional point cloud data, which includes the three-dimensional coordinates and color information of the point cloud.
  • The target point cloud data is identified from the three-dimensional point cloud data by a color-difference threshold segmentation method, the target point cloud data being the data corresponding to the point cloud of the region where the target object is located.
  • The centroid of the target point cloud data is obtained, a spherical coordinate system is established centered on the centroid, and n² candidate symmetry planes are selected by a spherical-coordinate-system segmentation method, n being an integer greater than 3.
  • The symmetry axis of the target point cloud data is computed from the n² candidate symmetry planes and is determined to be the symmetry axis of the target object.
  • In a further technical solution, computing the symmetry axis of the target point cloud data from the n² candidate symmetry planes includes: computing a score for each candidate symmetry plane according to a preset scoring strategy, determining the candidate symmetry plane with the lowest score to be the symmetry plane of the target object, and computing the symmetry axis of the target point cloud data from the symmetry plane of the target object.
  • In a further technical solution, the symmetry axis of the target point cloud data is computed from the symmetry plane of the target object using the normal vector of that plane, the viewpoint coordinate p_v, and the centroid coordinate p_o of the target point cloud data.
  • In a further technical solution, computing a score for each of the n² candidate symmetry planes according to a preset scoring strategy includes: determining the candidate points in the candidate symmetry plane, a candidate point being a point that has a symmetric partner with respect to the candidate symmetry plane; computing the score of each candidate point according to the preset scoring strategy; and determining the sum of the scores of the candidate points to be the score of the candidate symmetry plane.
  • In a further technical solution, the score of each candidate point is computed according to the preset scoring strategy by reflecting the candidate point across the candidate symmetry plane, finding the neighboring point closest to the resulting symmetric point, and combining the residual distance with the angle between the corresponding normal vectors.
  • In a further technical solution, determining the candidate points in the candidate symmetry plane includes: computing a visibility criterion [formula not reproduced in the source] in which p_v is the viewpoint coordinate, p is a point on one side of the candidate symmetry plane, and the remaining quantity is the normal vector at that point; when the criterion holds, the point is determined to be a candidate point.
  • In a further technical solution, the target point cloud data is identified from the three-dimensional point cloud data by the color-difference threshold segmentation method using the average distance d_i between each point cloud and its k neighboring points, where (x_i, y_i, z_i) are the coordinates of the point cloud and (x_j, y_j, z_j) are the coordinates of a neighboring point of the point cloud; a point cloud is kept as target point cloud data when d_i falls within the statistical range of these distances.
  • In a further technical solution, the foreground point cloud data is identified from the three-dimensional point cloud data by the color-difference threshold segmentation method, where R_s is the red channel R information of the point cloud and G_s is the green channel G information of the point cloud.
  • In a further technical solution, establishing a spherical coordinate system centered on the centroid and selecting n² candidate symmetry planes by the spherical-coordinate-system segmentation method includes: varying the horizontal segmentation angle and the vertical segmentation angle n times each to obtain the unit normal vectors of n² segmentation planes, from which the n² candidate symmetry planes are obtained.
  • In a further technical solution, the centroid of the target point cloud data is obtained by determining the random point whose distances to the point clouds in the target point cloud data have the smallest standard deviation.
  • The invention discloses an object symmetry axis detection method based on an RGB-D camera, which performs three-dimensional reconstruction of the RGB-D image of the target object and point cloud filtering, and finally detects the symmetry axis of the target object by adopting a scoring strategy.
  • When applied to the field of fruit picking, the method can accurately find the symmetry axis of the fruit from images of fruit photographed in natural scenes, overcoming the defect that previous fruit recognition was accurate only to the individual fruit and could not describe individual characteristics; this facilitates robotic grasping, improves the efficiency of automated fruit harvesting, and reduces damage to the fruit.
  • FIG. 1 is a flow chart of a method for detecting an object symmetry axis based on an RGB-D camera according to the present invention.
  • FIG. 2 is a schematic diagram of a coordinate mapping model between a color camera and a depth camera.
  • FIG. 3 is a schematic diagram of computing the score of each candidate point according to the preset scoring strategy.
  • The present application discloses an object symmetry axis detection method based on an RGB-D camera.
  • The RGB-D camera includes a color camera and a depth camera.
  • The detection method includes the following steps; refer to FIG. 1.
  • The calibration method used in this application is Zhang's calibration method (Zhang Zhengyou calibration); other methods may also be used, which is not limited by this application.
  • Calibrating the color camera yields the color camera parameters: the color camera intrinsic parameters H_rgb and the color camera extrinsic parameters (R_rgb, T_rgb). Calibrating the depth camera of the RGB-D camera with Zhang's method yields the depth camera parameters: the depth camera intrinsic parameters H_d and the depth camera extrinsic parameters (R_d, T_d).
  • The color image of the target object is acquired by the color camera of the RGB-D camera, and the depth image of the target object by the depth camera of the RGB-D camera.
  • The target object is fruit on a fruit tree photographed in a natural scene.
  • The color image acquired by the color camera has a higher resolution, usually 1080×1920; the depth image acquired by the depth camera has a lower resolution, usually 424×512. The specific resolutions of the color image and the depth image are determined by the hardware parameters of the RGB-D camera and by manual settings, which is not limited by this application.
  • The depth information is mapped into the color image pixel coordinate system, obtaining an aligned depth image in one-to-one correspondence with the pixels of the color image.
  • The resolution of the aligned depth image equals that of the color image, 1080×1920. This specifically includes the following steps, as shown in FIG. 2.
  • Step 1: Using the color camera extrinsic parameters and the inverse projection transformation, obtain the projection coordinates P_rgb(X_c1, Y_c1, Z_c1) of the color camera for a point P(X_w, Y_w, Z_w) in the world coordinate system; using the depth camera extrinsic parameters and the inverse projection transformation, obtain the projection coordinates P_d(X_c2, Y_c2, Z_c2) of the depth camera for the same world point.
  • Step 2: Using the color camera intrinsic parameters, acquire the image pixel coordinates P_rgb(u_1, v_1) in the color camera coordinate system; using the depth camera intrinsic parameters, acquire the image pixel coordinates P_d(u_2, v_2) in the depth camera coordinate system.
  • Step 3: From the projection coordinates P_rgb(X_c1, Y_c1, Z_c1) of the color camera in the world coordinate system, the projection coordinates P_d(X_c2, Y_c2, Z_c2) of the depth camera in the world coordinate system, the image pixel coordinates P_rgb(u_1, v_1) in the color camera coordinate system, and the image pixel coordinates P_d(u_2, v_2) in the depth camera coordinate system, determine the correspondence between the color image pixel coordinates and the depth image pixel coordinates.
  • Step 4: Finally, map the depth image into the color image pixel coordinate system according to this correspondence, thereby obtaining the aligned depth image.
  • The color image and the aligned depth image are processed into three-dimensional point cloud data.
  • For outlier filtering, d_i is the average distance between a point cloud p_i and its neighboring points p_j in space, (x_i, y_i, z_i) are the three-dimensional coordinates of the point cloud p_i, (x_j, y_j, z_j) are the three-dimensional coordinates of a neighboring point p_j, m is the number of point clouds in the foreground point cloud data, and k is the number of neighboring points searched for the point cloud p_i, k being a preset value; μ is the mean of the distances d_i over the point clouds, and σ is the standard deviation of the distances d_i over the point clouds.
  • The centroid P_0 of the target point cloud data is obtained as follows: a predetermined number of random points is generated near the target point cloud data, the predetermined number being greater than the number of points in the target point cloud data, usually 2-3 times that number. All random points are traversed; for each random point, the Euclidean distances between the random point and every point cloud in the target point cloud data are computed, and the random point with the smallest distance fluctuation is determined to be the centroid P_0. That is, the standard deviation of the distances between the random point and the point clouds is obtained, and the random point minimizing this standard deviation is the centroid P_0 of the target point cloud data.
  • Step 1: Determine the ranges of the horizontal segmentation angle θ and the vertical segmentation angle φ: the horizontal segmentation angle θ should cover the entire range, i.e., θ ∈ [0, π], while the vertical segmentation angle φ covers only the negative half of the z-axis.
  • Step 2: Divide the horizontal segmentation angle θ and the vertical segmentation angle φ into equal parts, i.e., compute thetaBinSize = (θ(2) − θ(1)) / n and phiBinSize = (φ(2) − φ(1)) / n, where θ(2) and θ(1) are the maximum and minimum of the horizontal segmentation angle θ, and φ(2) and φ(1) are the maximum and minimum of the vertical segmentation angle φ.
  • Step 3: Vary the horizontal segmentation angle and the vertical segmentation angle n times each to obtain the unit normal vectors of the n² segmentation planes.
  • A concrete approach is to keep the horizontal segmentation angle θ fixed and compute the unit normal vector of a segmentation plane for each vertical segmentation angle φ, cycling φ through n values to compute n unit normal vectors; then cycle the horizontal segmentation angle θ through n values, and for each θ cycle through the values of φ.
  • This yields the unit normal vectors of n² segmentation planes, i.e., n² candidate symmetry planes. Each unit normal vector is computed from x = r·sin(theta)·cos(phi), y = r·sin(theta)·sin(phi), z = r·cos(theta), where x, y, z are the coordinate values of the unit normal vector, theta is the azimuth angle in the spherical coordinate system, phi is the elevation angle in the spherical coordinate system, r is the radius of the unit circle, and i and j are parameters: i is the number of times the horizontal segmentation angle θ has been cycled, and j is the number of times the vertical segmentation angle φ has been cycled.
  • Step 1: First determine the candidate points in the candidate symmetry plane: compute a visibility criterion [formula not reproduced in the source] in which p_v is the viewpoint coordinate, (0,0,0) by default, p is a point on one side of the candidate symmetry plane, and the remaining quantity is the normal vector at that point. When the criterion holds, the point is determined to have a symmetric partner with respect to the candidate symmetry plane and is determined to be a candidate point; otherwise, the point has no symmetric partner in the candidate symmetry plane. This method is used to screen out all candidate points in the candidate symmetry plane.
  • The normal vector at each point p is determined as follows: based on local surface fitting, obtain the t nearest neighboring points of p, t being an integer, and then compute for these points a local plane P in the least squares sense.
  • The eigenvector corresponding to the smallest eigenvalue of the covariance matrix M is the normal vector at the point p.
  • Step 2: Compute the score of each candidate point according to the preset scoring strategy; specifically, reflect the candidate point across the candidate symmetry plane to obtain its symmetric point, determine the neighboring point closest to the symmetric point by the KNN (K-Nearest Neighbor) classification algorithm, and determine the distance between the symmetric point of the candidate point and that neighboring point.
  • Step 3 Determine the sum of the scores of the respective candidate points as the score of the candidate symmetry plane.
  • p_v is the coordinate of the viewpoint, (0,0,0) by default.
  • p_o is the coordinate of the centroid of the target point cloud data.
  • The symmetry axis of the target point cloud data is the symmetry axis of the target object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An object symmetry axis detection method based on an RGB-D camera, relating to the field of image processing. The method includes: acquiring three-dimensional point cloud data of an image of a target object in a natural scene; identifying from the three-dimensional point cloud data, by a color-difference threshold segmentation method, the target point cloud data corresponding to the point cloud of the region where the target object is located; establishing a spherical coordinate system centered on the centroid of the target point cloud data; selecting n² candidate symmetry planes by a spherical-coordinate-system segmentation method; and computing the symmetry axis of the target point cloud data from the n² candidate symmetry planes, this being the symmetry axis of the target object. The method achieves three-dimensional reconstruction of the RGB-D image of the target object as well as point cloud filtering, and finally detects the symmetry axis of the target object by means of a scoring strategy. When applied to the field of fruit picking, the method can accurately find the symmetry axis of fruit from images captured in natural scenes and improve the efficiency of automated fruit harvesting.

Description

Object symmetry axis detection method based on RGB-D camera

Technical Field
The present invention relates to the field of image processing, and in particular to an object symmetry axis detection method based on an RGB-D camera.
Background Art
Fruit picking robots can automatically detect and pick fruit, and are widely used because of their high efficiency and degree of automation. The picking action of a fruit picking robot depends on the accurate detection and localization of the fruit by its visual inspection system. The literature indicates that picking efficiency can be improved if the fruit is rotated and twisted in a specific manner relative to the direction of the fruit and the stem. Therefore, to further improve picking efficiency, the accuracy with which the fruit picking robot detects the fruit symmetry axis needs to be improved.
Currently common methods for detecting the symmetry axis of fruit include methods that search for the symmetry axis based on surface curvature variation, learning-based natural image detection methods, and natural-image symmetry axis detection methods based on edge feature learning. However, these still have many problems: the three-dimensional point cloud used must be very accurate, the learning time grows long as the point cloud data increases, and it is inconvenient to detect the fruit symmetry axis from images captured in natural scenes.
Summary of the Invention
In view of the above problems and technical needs, the inventors propose an object symmetry axis detection method based on an RGB-D camera, which can accurately detect the symmetry axis of an object in a natural scene.
The technical solution of the present invention is as follows:
An object symmetry axis detection method based on an RGB-D camera, the method comprising:
calibrating the color camera and the depth camera of the RGB-D camera to obtain the camera parameters of the RGB-D camera;
acquiring a color image of the target object with the color camera of the RGB-D camera, and acquiring a depth image of the target object with the depth camera of the RGB-D camera;
mapping the depth image into the pixel coordinate system of the color image according to the camera parameters of the RGB-D camera to obtain an aligned depth image, and processing the color image and the aligned depth image into three-dimensional point cloud data, the three-dimensional point cloud data including the three-dimensional coordinates and color information of the point cloud;
identifying target point cloud data from the three-dimensional point cloud data by a color-difference threshold segmentation method, the target point cloud data being the data corresponding to the point cloud of the region where the target object is located;
obtaining the centroid of the target point cloud data, establishing a spherical coordinate system centered on the centroid, and selecting n² candidate symmetry planes by a spherical-coordinate-system segmentation method, n being an integer greater than 3;
computing the symmetry axis of the target point cloud data from the n² candidate symmetry planes, and determining the symmetry axis of the target point cloud data to be the symmetry axis of the target object.
In a further technical solution, computing the symmetry axis of the target point cloud data from the n² candidate symmetry planes includes:
computing a score for each of the n² candidate symmetry planes according to a preset scoring strategy;
determining the candidate symmetry plane with the lowest score to be the symmetry plane of the target object;
computing the symmetry axis of the target point cloud data from the symmetry plane of the target object.
In a further technical solution, computing the symmetry axis of the target point cloud data from the symmetry plane of the target object includes evaluating a formula [reproduced only as an image in the source] in which the symmetry axis vector of the target point cloud data is determined from the normal vector of the symmetry plane of the target object, the viewpoint coordinate p_v, and the centroid coordinate p_o of the target point cloud data.
In a further technical solution, computing a score for each of the n² candidate symmetry planes according to a preset scoring strategy includes:
for each candidate symmetry plane, determining the candidate points in the candidate symmetry plane, a candidate point being a point that has a symmetric partner with respect to the candidate symmetry plane;
computing the score of each candidate point according to the preset scoring strategy;
determining the sum of the scores of the candidate points to be the score of the candidate symmetry plane.
In a further technical solution, computing the score of each candidate point according to the preset scoring strategy includes:
reflecting the candidate point across the candidate symmetry plane to obtain the symmetric point of the candidate point;
determining, by the KNN classification algorithm, the neighboring point closest to the symmetric point of the candidate point, and determining the distance between the symmetric point of the candidate point and the neighboring point;
computing x_score = d_min + ω·α, where x_score is the score of the candidate point, d_min is the distance between the symmetric point of the candidate point and the neighboring point, α is the angle between the normal vector of the symmetric point of the candidate point and the normal vector of the neighboring point, and ω is a weight coefficient.
In a further technical solution, determining the candidate points in the candidate symmetry plane includes:
computing a visibility criterion [reproduced only as an image in the source], where p_v is the viewpoint coordinate, p is a point on one side of the candidate symmetry plane, and the remaining quantity is the normal vector at that point;
when the criterion holds, determining that the point has a symmetric partner with respect to the candidate symmetry plane, and hence determining the point to be a candidate point.
In a further technical solution, identifying target point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method includes:
identifying foreground point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method;
for any point cloud among the foreground point cloud data, computing

d_i = (1/k) · Σ_{j=1..k} √( (x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)² )

where d_i is the average distance between the point cloud and its neighboring points, k is the number of neighboring points of the point cloud, (x_i, y_i, z_i) are the coordinates of the point cloud, and (x_j, y_j, z_j) are the coordinates of a neighboring point of the point cloud;
checking whether d_i is within the range μ ± γσ, where μ = (1/m) · Σ_{i=1..m} d_i and σ = √( (1/m) · Σ_{i=1..m} (d_i − μ)² ), m is the number of point clouds in the foreground point cloud data, and γ is a parameter;
if d_i is within the range μ ± γσ, determining that the data corresponding to the point cloud is target point cloud data.
In a further technical solution, identifying foreground point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method includes:
for any point cloud among the three-dimensional point cloud data, computing R_s − G_s, where R_s is the red channel R information of the point cloud and G_s is the green channel G information of the point cloud;
if R_s − G_s > δ, determining that the data corresponding to the point cloud is foreground point cloud data, δ being a preset threshold.
In a further technical solution, establishing a spherical coordinate system centered on the centroid and selecting n² candidate symmetry planes by the spherical-coordinate-system segmentation method includes:
establishing a spherical coordinate system centered on the centroid, and determining the ranges of the horizontal segmentation angle and the vertical segmentation angle;
dividing the horizontal segmentation angle into equal parts by computing thetaBinSize = (θ(2) − θ(1)) / n, and dividing the vertical segmentation angle into equal parts by computing phiBinSize = (φ(2) − φ(1)) / n, where thetaBinSize is the step by which the horizontal segmentation angle varies, phiBinSize is the step by which the vertical segmentation angle varies, θ(2) and θ(1) are the maximum and minimum of the horizontal segmentation angle, and φ(2) and φ(1) are the maximum and minimum of the vertical segmentation angle;
varying the horizontal segmentation angle and the vertical segmentation angle n times each to obtain the unit normal vectors of n² segmentation planes;
obtaining n² candidate symmetry planes from the unit normal vectors of the n² segmentation planes.
In a further technical solution, obtaining the centroid of the target point cloud data includes:
randomly generating a predetermined number of random points near the target point cloud data, the predetermined number being greater than the number of points in the target point cloud data;
for each random point, computing the standard deviation of the distances between the random point and every point cloud in the target point cloud data;
determining the random point with the smallest standard deviation of distances to the point clouds in the target point cloud data to be the centroid of the target point cloud data.
The beneficial technical effects of the present invention are:
The present application discloses an object symmetry axis detection method based on an RGB-D camera. The method performs three-dimensional reconstruction of the RGB-D image of the target object as well as point cloud filtering, and finally detects the symmetry axis of the target object by means of a scoring strategy. When the method is applied to the field of fruit picking, the symmetry axis of a fruit can be found accurately from images captured in natural scenes, overcoming the defect that previous fruit recognition was accurate only to the individual fruit and could not describe individual characteristics; this facilitates grasping by a manipulator, improves the efficiency of automated fruit harvesting, and reduces damage to the fruit.
Brief Description of the Drawings
FIG. 1 is a flow chart of the object symmetry axis detection method based on an RGB-D camera disclosed by the present invention.
FIG. 2 is a schematic diagram of the coordinate mapping model between the color camera and the depth camera.
FIG. 3 is a schematic diagram of computing the score of each candidate point according to the preset scoring strategy.
Detailed Description of the Embodiments
The specific embodiments of the present invention are further described below with reference to the accompanying drawings.
The present application discloses an object symmetry axis detection method based on an RGB-D camera. The method relies on an RGB-D camera that includes one color camera and one depth camera, and it includes the following steps; for the main flow, refer to FIG. 1:
1. Calibrate the color camera and the depth camera of the RGB-D camera separately to obtain the camera parameters of the RGB-D camera, which include the parameters of the color camera and the parameters of the depth camera. The calibration method adopted in this application is Zhang's calibration method (Zhang Zhengyou calibration); other methods may also be used in practice, which is not limited by this application. Calibrating the color camera of the RGB-D camera by Zhang's method yields the color camera parameters: the color camera intrinsic parameters H_rgb and the color camera extrinsic parameters (R_rgb, T_rgb). Calibrating the depth camera of the RGB-D camera by Zhang's method yields the depth camera parameters: the depth camera intrinsic parameters H_d and the depth camera extrinsic parameters (R_d, T_d).
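For illustration only, a calibration of this kind can be run with OpenCV's implementation of Zhang's method; the checkerboard size and image list below are hypothetical placeholders rather than values taken from this application, and the sketch merely shows how H and (R, T) are obtained in practice.

    import cv2
    import numpy as np

    def calibrate_camera(images, board_size=(9, 6), square_mm=25.0):
        """Zhang's method: returns intrinsics H and per-view extrinsics (R, T)."""
        # 3D corner positions of the checkerboard in its own plane (Z = 0)
        objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm

        obj_pts, img_pts, size = [], [], None
        for img in images:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, board_size)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, H, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        Rs = [cv2.Rodrigues(r)[0] for r in rvecs]   # rotation vectors -> matrices
        return H, dist, Rs, tvecs

The same routine is run once for the color camera and once for the depth camera (typically via its infrared image), giving H_rgb, (R_rgb, T_rgb) and H_d, (R_d, T_d).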
2. Acquire a color image of the target object with the color camera of the RGB-D camera, and a depth image of the target object with the depth camera of the RGB-D camera. In this application, the target object is fruit on a fruit tree photographed in a natural scene. The color image collected by the color camera has a higher resolution, typically 1080×1920; the depth image collected by the depth camera has a lower resolution, typically 424×512. The specific resolutions of the color image and the depth image are determined by the hardware parameters of the RGB-D camera and by manual settings, and are not limited by this application.
In actual implementation, steps 1 and 2 above need not be performed in any particular order.
3. Using the camera parameters of the RGB-D camera, map the depth information into the color image pixel coordinate system according to the inverse projection transformation and the coordinate mapping relations between the spatial coordinate systems, obtaining an aligned depth image whose pixels correspond one-to-one with the pixels of the color image; the resolution of the aligned depth image equals that of the color image, 1080×1920. This comprises the following steps, illustrated in FIG. 2:
Step 1: Using the color camera extrinsic parameters and the inverse projection transformation, obtain the projection coordinates P_rgb(X_c1, Y_c1, Z_c1) of the color camera for a point P(X_w, Y_w, Z_w) in the world coordinate system; using the depth camera extrinsic parameters and the inverse projection transformation, obtain the projection coordinates P_d(X_c2, Y_c2, Z_c2) of the depth camera for the same world point P(X_w, Y_w, Z_w).
Step 2: Using the color camera intrinsic parameters, obtain the image pixel coordinates P_rgb(u_1, v_1) in the color camera coordinate system; using the depth camera intrinsic parameters, obtain the image pixel coordinates P_d(u_2, v_2) in the depth camera coordinate system.
Step 3: From the projection coordinates P_rgb(X_c1, Y_c1, Z_c1) of the color camera in the world coordinate system, the projection coordinates P_d(X_c2, Y_c2, Z_c2) of the depth camera in the world coordinate system, the image pixel coordinates P_rgb(u_1, v_1) in the color camera coordinate system, and the image pixel coordinates P_d(u_2, v_2) in the depth camera coordinate system, determine the correspondence between the color image pixel coordinates and the depth image pixel coordinates.
Step 4: Finally, map the depth image into the color image pixel coordinate system according to the correspondence between the color image pixel coordinates and the depth image pixel coordinates, thereby obtaining the aligned depth image.
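A minimal sketch of steps 1 to 4, assuming H_d and H_rgb are 3×3 pinhole intrinsic matrices and that each extrinsic pair (R, T) maps world coordinates into the respective camera coordinates; all names are illustrative.

    import numpy as np

    def align_depth_to_color(depth, H_d, R_d, T_d, H_rgb, R_rgb, T_rgb,
                             color_shape=(1080, 1920)):
        """Reproject every depth pixel into the color image pixel coordinate system."""
        h, w = depth.shape
        v2, u2 = np.mgrid[0:h, 0:w]
        z = depth.astype(np.float64)

        # Steps 1-2 (inverse projection): pixel (u2, v2) with depth z -> depth-camera point
        pix = np.stack([u2 * z, v2 * z, z], axis=-1).reshape(-1, 3).T
        P_d = np.linalg.inv(H_d) @ pix                    # depth camera coordinates
        P_w = R_d.T @ (P_d - T_d.reshape(3, 1))           # world coordinates
        P_c = R_rgb @ P_w + T_rgb.reshape(3, 1)           # color camera coordinates

        # Steps 3-4: project into the color pixel coordinate system and resample
        uvw = H_rgb @ P_c
        u1 = np.round(uvw[0] / uvw[2]).astype(int)
        v1 = np.round(uvw[1] / uvw[2]).astype(int)

        aligned = np.zeros(color_shape)
        ok = ((u1 >= 0) & (u1 < color_shape[1]) &
              (v1 >= 0) & (v1 < color_shape[0]) & (z.ravel() > 0))
        aligned[v1[ok], u1[ok]] = P_c[2, ok]              # depth expressed in the color frame
        return aligned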
4. Process the color image and the aligned depth image into three-dimensional point cloud data. The three-dimensional point cloud data includes the three-dimensional coordinates and color information of the point cloud, expressed as s = (x, y, z, R, G, B), where x, y, z are the coordinates of the point cloud along the spatial X, Y and Z axes, R is the red channel information of the point cloud, G is the green channel information, and B is the blue channel information.
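A sketch of this step under the same assumptions, stacking each valid pixel into a row s = (x, y, z, R, G, B):

    import numpy as np

    def make_point_cloud(color, aligned_depth, H_rgb):
        """Back-project the aligned depth through the color intrinsics and attach color."""
        h, w = aligned_depth.shape
        v, u = np.mgrid[0:h, 0:w]
        z = aligned_depth
        valid = z > 0
        fx, fy = H_rgb[0, 0], H_rgb[1, 1]
        cx, cy = H_rgb[0, 2], H_rgb[1, 2]
        x = (u - cx) * z / fx                  # pinhole back-projection
        y = (v - cy) * z / fy
        xyz = np.stack([x[valid], y[valid], z[valid]], axis=-1)
        return np.hstack([xyz, color[valid]])  # rows s = (x, y, z, R, G, B)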
5. Identify foreground point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method. Specifically, for any point cloud among the three-dimensional point cloud data, compute R_s − G_s to separate foreground from background, where R_s is the R (red channel) information of the point cloud and G_s is its G (green channel) information. If R_s − G_s > δ, the data corresponding to the point cloud is determined to be foreground point cloud data; if R_s − G_s ≤ δ, it is determined to be background point cloud data. δ is a preset threshold, usually an empirical value, e.g., δ = 13.
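The threshold itself is a one-line filter over the point cloud rows; a sketch with the empirical δ = 13 mentioned above:

    import numpy as np

    def foreground(cloud, delta=13.0):
        """Color-difference threshold: keep rows of s whose R - G exceeds delta."""
        R, G = cloud[:, 3], cloud[:, 4]
        return cloud[R - G > delta]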
6. Besides the point cloud of the region where the target object is located, the foreground point cloud data obtained by color-difference threshold segmentation also contains outlier noise points far from the target object. These are removed by outlier filtering. Specifically, for any point cloud p_i in the foreground point cloud data, compute:

d_i = (1/k) · Σ_{j=1..k} √( (x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)² )

μ = (1/m) · Σ_{i=1..m} d_i

σ = √( (1/m) · Σ_{i=1..m} (d_i − μ)² )

where d_i is the average distance between the point cloud p_i and its neighboring points p_j in space, (x_i, y_i, z_i) are the three-dimensional coordinates of the point cloud p_i, (x_j, y_j, z_j) are the three-dimensional coordinates of a neighboring point p_j, m is the number of point clouds in the foreground point cloud data, k is the number of neighboring points searched for the point cloud p_i (a preset value), μ is the mean of the distances d_i over all point clouds, and σ is the standard deviation of the distances d_i over all point clouds.
Check whether d_i lies within the range μ ± γσ, γ being a parameter whose value depends on the number of neighboring points k; empirically, k is set to 80-100 and the corresponding γ to 0.8-1.2. If d_i lies within μ ± γσ, the data corresponding to the point cloud is determined to be target point cloud data, i.e., data corresponding to the point cloud of the region where the target object is located. If d_i is outside μ ± γσ, the point cloud is determined to be an outlier noise point far from the target object and is filtered out.
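A sketch of this outlier filter using a k-d tree for the neighbor search; k = 80 and γ = 1.0 follow the empirical ranges above.

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(cloud, k=80, gamma=1.0):
        """Keep p_i whose mean distance d_i to its k neighbors lies in mu +/- gamma*sigma."""
        xyz = cloud[:, :3]
        tree = cKDTree(xyz)
        # k + 1 because each point's nearest neighbor is the point itself
        dists, _ = tree.query(xyz, k=k + 1)
        d = dists[:, 1:].mean(axis=1)          # d_i
        mu, sigma = d.mean(), d.std()
        return cloud[np.abs(d - mu) <= gamma * sigma]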
7. Obtain the centroid P_0 of the target point cloud data: randomly generate a predetermined number of random points near the target point cloud data, the predetermined number being greater than the number of points in the target point cloud data, usually 2-3 times that number. Traverse all random points; for each random point, compute the Euclidean distances between the random point and every point cloud in the target point cloud data, and take the random point with the smallest distance fluctuation as the centroid P_0. That is, compute the standard deviation of the distances between the random point and the point clouds in the target point cloud data; the random point with the smallest such standard deviation is the centroid P_0 of the target point cloud data.
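A sketch of the centroid search; drawing the random points uniformly from the cloud's bounding box is an assumption, since the text only says they are generated "near the target point cloud data".

    import numpy as np

    def centroid_by_random_points(cloud, factor=3, seed=0):
        """Pick the random point whose distances to all cloud points have the
        smallest standard deviation (the smallest distance fluctuation)."""
        xyz = cloud[:, :3]
        rng = np.random.default_rng(seed)
        lo, hi = xyz.min(axis=0), xyz.max(axis=0)
        # ASSUMPTION: "near the cloud" realized as uniform samples in its bounding box.
        cand = rng.uniform(lo, hi, size=(factor * len(xyz), 3))
        # O(N^2) memory; subsample the cloud first for large inputs
        dev = np.linalg.norm(cand[:, None, :] - xyz[None, :, :], axis=-1).std(axis=1)
        return cand[np.argmin(dev)]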
8. Establish a spherical coordinate system centered on the centroid P_0, and select n² candidate symmetry planes by the spherical-coordinate-system segmentation method, n being an integer greater than 3, comprising the following steps:

Step 1: Determine the ranges of the horizontal segmentation angle θ and the vertical segmentation angle φ. The horizontal segmentation angle θ should cover the entire range, i.e., θ ∈ [0, π], while the vertical segmentation angle φ covers only the negative half of the z-axis.

Step 2: Divide the horizontal segmentation angle θ and the vertical segmentation angle φ into equal parts, i.e., compute:

thetaBinSize = (θ(2) − θ(1)) / n

phiBinSize = (φ(2) − φ(1)) / n

where thetaBinSize is the step by which the horizontal segmentation angle θ varies, phiBinSize is the step by which the vertical segmentation angle φ varies, θ(2) and θ(1) are the maximum and minimum of the horizontal segmentation angle θ, and φ(2) and φ(1) are the maximum and minimum of the vertical segmentation angle φ.

Step 3: Vary the horizontal segmentation angle and the vertical segmentation angle n times each to obtain the unit normal vectors of n² segmentation planes. One concrete procedure is: keep the horizontal segmentation angle θ fixed and compute the unit normal vector of a segmentation plane for each value of the vertical segmentation angle φ, cycling through n values of φ to compute n unit normal vectors; then cycle the horizontal segmentation angle θ through n values, and for each θ cycle through the n values of φ. This yields the unit normal vectors of n² segmentation planes, i.e., n² candidate symmetry planes. Each unit normal vector is computed as:

x = r·sin(theta)·cos(phi)

y = r·sin(theta)·sin(phi)

z = r·cos(theta)

theta = θ(1) + i·thetaBinSize

phi = φ(1) + j·phiBinSize

where x, y, z are the coordinate values of the computed unit normal vector, theta is the azimuth angle in the spherical coordinate system, phi is the elevation angle in the spherical coordinate system, r is the radius of the unit circle, and i and j are parameters: i is the number of times the horizontal segmentation angle θ has been cycled, and j is the number of times the vertical segmentation angle φ has been cycled.
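A sketch of step 8. The φ range below is an assumed concrete value standing in for "the negative half of the z-axis", which the source gives only as an image.

    import numpy as np

    def candidate_plane_normals(n, theta_range=(0.0, np.pi),
                                phi_range=(-np.pi / 2, 0.0), r=1.0):
        """Unit normal vectors of the n*n segmentation planes on the angle grid."""
        theta_bin = (theta_range[1] - theta_range[0]) / n   # thetaBinSize
        phi_bin = (phi_range[1] - phi_range[0]) / n         # phiBinSize
        normals = []
        for i in range(n):                                  # horizontal angle loop
            theta = theta_range[0] + i * theta_bin
            for j in range(n):                              # vertical angle loop
                phi = phi_range[0] + j * phi_bin
                normals.append([r * np.sin(theta) * np.cos(phi),
                                r * np.sin(theta) * np.sin(phi),
                                r * np.cos(theta)])
        return np.asarray(normals)                          # n*n rows

Each normal, together with the centroid P_0, defines one candidate symmetry plane.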
9. Compute the score of each of the n² candidate symmetry planes according to the preset scoring strategy, traverse the candidate symmetry planes, and determine the candidate symmetry plane with the lowest score to be the symmetry plane of the target object. Taking the score computation for one candidate symmetry plane as an example, the procedure comprises the following steps:

Step 1: First determine the candidate points in the candidate symmetry plane: compute a visibility criterion [reproduced only as an image in the source], where p_v is the viewpoint coordinate, (0,0,0) by default, p is a point on one side of the candidate symmetry plane, and the remaining quantity is the normal vector at that point. When the criterion holds, the point is determined to have a symmetric partner with respect to the candidate symmetry plane and is determined to be a candidate point; when it does not hold, the point is determined to have no symmetric partner in the candidate symmetry plane. This method is used to screen out all candidate points in the candidate symmetry plane.
The normal vector at each point p is determined as follows. Based on local surface fitting, obtain the t nearest neighboring points of p, t being an integer, and then compute for these points a local plane P in the least squares sense, which can be expressed as:

P(n⃗, w) = argmin_{(n⃗, w)} Σ_{i=1..t} (n⃗ · p_i − w)²

where n⃗ is the normal vector of the local plane P and w is the distance from the local plane P to the coordinate origin; n⃗ also denotes the normal vector at the point p. Since the local plane P passes through the centroid p̄ of the surface fitted to the t neighboring points, and the normal vector n⃗ must satisfy ‖n⃗‖₂ = 1, the problem can be transformed into an eigenvalue decomposition of the following positive semi-definite covariance matrix M:

M = (1/t) · Σ_{i=1..t} (p_i − p̄)(p_i − p̄)ᵀ

The eigenvector corresponding to the smallest eigenvalue of the covariance matrix M is the normal vector at the point p.
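A sketch of this normal estimation (principal component analysis of the neighborhood covariance):

    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(xyz, t=20):
        """Normal at p = eigenvector of the local covariance matrix M
        belonging to the smallest eigenvalue."""
        tree = cKDTree(xyz)
        _, idx = tree.query(xyz, k=t + 1)       # t nearest neighbors of each point
        normals = np.empty_like(xyz)
        for a, nb in enumerate(idx):
            q = xyz[nb] - xyz[nb].mean(axis=0)  # center on the neighborhood centroid
            M = q.T @ q / len(nb)               # 3x3 covariance matrix
            w, v = np.linalg.eigh(M)            # eigenvalues in ascending order
            normals[a] = v[:, 0]                # smallest-eigenvalue eigenvector
        return normals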
Step 2: Compute the score of each candidate point according to the preset scoring strategy. Specifically, reflect the candidate point across the candidate symmetry plane to obtain its symmetric point; determine, by the KNN (K-Nearest Neighbor) classification algorithm, the neighboring point closest to the symmetric point of the candidate point; and determine the distance d_min between the symmetric point of the candidate point and that neighboring point (see FIG. 3 for a schematic). Then compute x_score = d_min + ω·α, where x_score is the score of the candidate point, α is the angle between the normal vector of the symmetric point of the candidate point and the normal vector of the neighboring point, and ω is a weight coefficient.

Step 3: Determine the sum of the scores of the candidate points to be the score of the candidate symmetry plane.
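A sketch of scoring one candidate plane. The candidate-point criterion appears only as an image in the source, so the visibility test below (normal facing the viewpoint) is an assumption; the reflection, the KNN lookup, and x_score = d_min + ω·α follow the text.

    import numpy as np
    from scipy.spatial import cKDTree

    def plane_score(xyz, normals, plane_n, p0, omega=1.0, p_v=np.zeros(3)):
        """Score a plane through centroid p0 with unit normal plane_n (lower = better)."""
        signed = (xyz - p0) @ plane_n                        # signed plane distance
        facing = ((p_v - xyz) * normals).sum(axis=1) > 0     # ASSUMED visibility test
        cand = (signed > 0) & facing                         # candidate points
        # NB: a plane with no candidate points would need special handling

        # Reflect candidates (and their normals) across the plane
        mirrored = xyz[cand] - 2.0 * np.outer(signed[cand], plane_n)
        m_norm = normals[cand] - 2.0 * np.outer(normals[cand] @ plane_n, plane_n)

        d_min, idx = cKDTree(xyz).query(mirrored, k=1)       # KNN nearest partner
        cos_a = np.clip(np.abs((m_norm * normals[idx]).sum(axis=1)), 0.0, 1.0)
        alpha = np.arccos(cos_a)                             # normal-vector angle
        return np.sum(d_min + omega * alpha)                 # sum of x_score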
10. Compute the symmetry axis of the target point cloud data from the symmetry plane of the target object, i.e., evaluate a formula [reproduced only as an image in the source] in which the symmetry axis vector of the target point cloud data is determined from the normal vector of the symmetry plane of the target object, the viewpoint coordinate p_v ((0,0,0) by default), and the centroid coordinate p_o of the target point cloud data. The symmetry axis of the target point cloud data is the symmetry axis of the target object.
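The axis formula is likewise reproduced only as an image, so the following is one geometrically plausible reading, flagged as an assumption rather than the exact form of the application: the axis passes through the centroid p_o, and a direction lying in the best symmetry plane is obtained from the plane normal and the viewpoint-to-centroid vector.

    import numpy as np

    def symmetry_axis(plane_n, p_o, p_v=np.zeros(3)):
        """ASSUMED reading: axis through p_o, with direction in the symmetry plane."""
        d = np.cross(plane_n, p_v - p_o)   # perpendicular to the plane normal,
        d /= np.linalg.norm(d)             # hence lying in the symmetry plane
        return p_o, d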
The above are only preferred embodiments of the present application, and the present invention is not limited to the above examples. It should be understood that other improvements and variations that those skilled in the art can directly derive or conceive without departing from the spirit and concept of the present invention are all deemed to fall within the scope of protection of the present invention.

Claims (10)

  1. An object symmetry axis detection method based on an RGB-D camera, characterized in that the method comprises:
    calibrating the color camera and the depth camera of the RGB-D camera to obtain the camera parameters of the RGB-D camera;
    acquiring a color image of a target object with the color camera of the RGB-D camera, and acquiring a depth image of the target object with the depth camera of the RGB-D camera;
    mapping the depth image into the pixel coordinate system of the color image according to the camera parameters of the RGB-D camera to obtain an aligned depth image, and processing the color image and the aligned depth image into three-dimensional point cloud data, the three-dimensional point cloud data comprising the three-dimensional coordinates and color information of the point cloud;
    identifying target point cloud data from the three-dimensional point cloud data by a color-difference threshold segmentation method, the target point cloud data being the data corresponding to the point cloud of the region where the target object is located;
    obtaining the centroid of the target point cloud data, establishing a spherical coordinate system centered on the centroid, and selecting n² candidate symmetry planes by a spherical-coordinate-system segmentation method, n being an integer greater than 3;
    computing the symmetry axis of the target point cloud data from the n² candidate symmetry planes, and determining the symmetry axis of the target point cloud data to be the symmetry axis of the target object.
  2. The method according to claim 1, characterized in that computing the symmetry axis of the target point cloud data from the n² candidate symmetry planes comprises:
    computing a score for each of the n² candidate symmetry planes according to a preset scoring strategy;
    determining the candidate symmetry plane with the lowest score to be the symmetry plane of the target object;
    computing the symmetry axis of the target point cloud data from the symmetry plane of the target object.
  3. The method according to claim 2, characterized in that computing the symmetry axis of the target point cloud data from the symmetry plane of the target object comprises evaluating a formula [reproduced only as an image in the source] in which the symmetry axis vector of the target point cloud data is determined from the normal vector of the symmetry plane of the target object, the viewpoint coordinate p_v, and the centroid coordinate p_o of the target point cloud data.
  4. The method according to claim 2, characterized in that computing a score for each of the n² candidate symmetry planes according to the preset scoring strategy comprises:
    for each candidate symmetry plane, determining the candidate points in the candidate symmetry plane, a candidate point being a point that has a symmetric partner with respect to the candidate symmetry plane;
    computing the score of each candidate point according to the preset scoring strategy;
    determining the sum of the scores of the candidate points to be the score of the candidate symmetry plane.
  5. The method according to claim 4, characterized in that computing the score of each candidate point according to the preset scoring strategy comprises:
    reflecting the candidate point across the candidate symmetry plane to obtain the symmetric point of the candidate point;
    determining, by the KNN classification algorithm, the neighboring point closest to the symmetric point of the candidate point, and determining the distance between the symmetric point of the candidate point and the neighboring point;
    computing x_score = d_min + ω·α, where x_score is the score of the candidate point, d_min is the distance between the symmetric point of the candidate point and the neighboring point, α is the angle between the normal vector of the symmetric point of the candidate point and the normal vector of the neighboring point, and ω is a weight coefficient.
  6. The method according to claim 4, characterized in that determining the candidate points in the candidate symmetry plane comprises:
    computing a visibility criterion [reproduced only as an image in the source], where p_v is the viewpoint coordinate, p is a point on one side of the candidate symmetry plane, and the remaining quantity is the normal vector at that point;
    when the criterion holds, determining that the point has a symmetric partner with respect to the candidate symmetry plane, and hence determining the point to be a candidate point.
  7. The method according to any one of claims 1 to 6, characterized in that identifying target point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method comprises:
    identifying foreground point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method;
    for any point cloud among the foreground point cloud data, computing
    d_i = (1/k) · Σ_{j=1..k} √( (x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)² )
    where d_i is the average distance between the point cloud and its neighboring points, k is the number of neighboring points of the point cloud, (x_i, y_i, z_i) are the coordinates of the point cloud, and (x_j, y_j, z_j) are the coordinates of a neighboring point of the point cloud;
    checking whether d_i is within the range μ ± γσ, where μ = (1/m) · Σ_{i=1..m} d_i and σ = √( (1/m) · Σ_{i=1..m} (d_i − μ)² ), m is the number of point clouds in the foreground point cloud data, and γ is a parameter;
    if d_i is within the range μ ± γσ, determining that the data corresponding to the point cloud is the target point cloud data.
  8. The method according to claim 7, characterized in that identifying foreground point cloud data from the three-dimensional point cloud data by the color-difference threshold segmentation method comprises:
    for any point cloud among the three-dimensional point cloud data, computing R_s − G_s, where R_s is the red channel R information of the point cloud and G_s is the green channel G information of the point cloud;
    if R_s − G_s > δ, determining that the data corresponding to the point cloud is the foreground point cloud data, δ being a preset threshold.
  9. The method according to any one of claims 1 to 6, characterized in that establishing a spherical coordinate system centered on the centroid and selecting n² candidate symmetry planes by the spherical-coordinate-system segmentation method comprises:
    establishing a spherical coordinate system centered on the centroid, and determining the ranges of the horizontal segmentation angle and the vertical segmentation angle;
    dividing the horizontal segmentation angle into equal parts by computing thetaBinSize = (θ(2) − θ(1)) / n, and dividing the vertical segmentation angle into equal parts by computing phiBinSize = (φ(2) − φ(1)) / n, where thetaBinSize is the step by which the horizontal segmentation angle varies, phiBinSize is the step by which the vertical segmentation angle varies, θ(2) and θ(1) are the maximum and minimum of the horizontal segmentation angle, and φ(2) and φ(1) are the maximum and minimum of the vertical segmentation angle;
    varying the horizontal segmentation angle and the vertical segmentation angle n times each to obtain the unit normal vectors of n² segmentation planes;
    obtaining n² candidate symmetry planes from the unit normal vectors of the n² segmentation planes.
  10. The method according to any one of claims 1 to 6, characterized in that obtaining the centroid of the target point cloud data comprises:
    randomly generating a predetermined number of random points near the target point cloud data, the predetermined number being greater than the number of points in the target point cloud data;
    for each random point, computing the standard deviation of the distances between the random point and every point cloud in the target point cloud data;
    determining the random point with the smallest standard deviation of distances to the point clouds in the target point cloud data to be the centroid of the target point cloud data.
PCT/CN2018/083260 2017-11-21 2018-04-17 Object symmetry axis detection method based on RGB-D camera WO2019100647A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/093,658 US10607106B2 (en) 2017-11-21 2018-04-17 Object symmetry axis detection method based on RGB-D camera
AU2018370629A AU2018370629B2 (en) 2017-11-21 2018-04-17 RGB-D camera-based object symmetry axis detection method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711165747.2 2017-11-21
CN201711165747.2A CN108010036B (zh) 2017-11-21 2017-11-21 Object symmetry axis detection method based on RGB-D camera

Publications (1)

Publication Number Publication Date
WO2019100647A1 true WO2019100647A1 (zh) 2019-05-31

Family

ID=62053109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/083260 WO2019100647A1 (zh) 2017-11-21 2018-04-17 Object symmetry axis detection method based on RGB-D camera

Country Status (4)

Country Link
US (1) US10607106B2 (zh)
CN (1) CN108010036B (zh)
AU (1) AU2018370629B2 (zh)
WO (1) WO2019100647A1 (zh)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648359A (zh) * 2019-09-23 2020-01-03 山东师范大学 一种果实目标定位识别方法及***
CN110992372A (zh) * 2019-11-21 2020-04-10 浙江大华技术股份有限公司 物品抓取方法、装置、存储介质及电子装置
CN111160280A (zh) * 2019-12-31 2020-05-15 芜湖哈特机器人产业技术研究院有限公司 基于rgbd相机的目标物体识别与定位方法及移动机器人
CN111754515A (zh) * 2019-12-17 2020-10-09 北京京东尚科信息技术有限公司 堆叠物品的顺序抓取方法和装置
CN112258631A (zh) * 2020-10-20 2021-01-22 河海大学常州校区 一种基于深度神经网络的三维目标检测方法及***
CN112686859A (zh) * 2020-12-30 2021-04-20 中国农业大学 基于热红外和rgb-d相机的作物cwsi的检测方法
CN112766223A (zh) * 2021-01-29 2021-05-07 西安电子科技大学 基于样本挖掘与背景重构的高光谱图像目标检测方法
CN112819883A (zh) * 2021-01-28 2021-05-18 华中科技大学 一种规则对象检测及定位方法
CN113205465A (zh) * 2021-04-29 2021-08-03 上海应用技术大学 点云数据集分割方法及***
CN113469195A (zh) * 2021-06-25 2021-10-01 浙江工业大学 一种基于自适应颜色快速点特征直方图的目标识别方法
CN113989391A (zh) * 2021-11-11 2022-01-28 河北农业大学 基于rgb-d相机的动物体三维模型重构***及方法
CN114323283A (zh) * 2021-12-30 2022-04-12 中铭谷智能机器人(广东)有限公司 一种钣金颜色特征智能识别框选方法

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10854011B2 (en) 2018-04-09 2020-12-01 Direct Current Capital LLC Method for rendering 2D and 3D data within a 3D virtual environment
CN110473283B (zh) * 2018-05-09 2024-01-23 无锡时代天使医疗器械科技有限公司 牙齿三维数字模型的局部坐标系设定方法
CN109146894A (zh) * 2018-08-07 2019-01-04 庄朝尹 一种三维建模的模型区域分割方法
CN109344868B (zh) * 2018-08-28 2021-11-16 广东奥普特科技股份有限公司 一种区分互为轴对称的不同类物件的通用方法
US11823461B1 (en) 2018-09-28 2023-11-21 Direct Current Capital LLC Systems and methods for perceiving a scene around a mobile device
US11567497B1 (en) 2019-02-04 2023-01-31 Direct Current Capital LLC Systems and methods for perceiving a field around a device
US10549928B1 (en) 2019-02-22 2020-02-04 Dexterity, Inc. Robotic multi-item type palletizing and depalletizing
US11741566B2 (en) * 2019-02-22 2023-08-29 Dexterity, Inc. Multicamera image processing
US11460855B1 (en) 2019-03-29 2022-10-04 Direct Current Capital LLC Systems and methods for sensor calibration
CN110032962B (zh) * 2019-04-03 2022-07-08 腾讯科技(深圳)有限公司 一种物体检测方法、装置、网络设备和存储介质
CN110567422B (zh) * 2019-06-25 2021-07-06 江苏省特种设备安全监督检验研究院 一种起重机吊钩扭转角自动检测方法
CN110853080A (zh) * 2019-09-30 2020-02-28 广西慧云信息技术有限公司 一种田间果实尺寸的测量方法
TWI759651B (zh) * 2019-11-21 2022-04-01 財團法人工業技術研究院 基於機器學習的物件辨識系統及其方法
CN111028345B (zh) * 2019-12-17 2023-05-02 中国科学院合肥物质科学研究院 一种港口场景下圆形管道的自动识别与对接方法
CN111126296A (zh) * 2019-12-25 2020-05-08 中国联合网络通信集团有限公司 水果定位方法及装置
CN111144480A (zh) * 2019-12-25 2020-05-12 深圳蓝胖子机器人有限公司 一种可回收垃圾视觉分类方法、***、设备
CN111311576B (zh) * 2020-02-14 2023-06-02 易思维(杭州)科技有限公司 基于点云信息的缺陷检测方法
CN111353417A (zh) * 2020-02-26 2020-06-30 北京三快在线科技有限公司 一种目标检测的方法及装置
CN111428622B (zh) * 2020-03-20 2023-05-09 上海健麾信息技术股份有限公司 一种基于分割算法的图像定位方法及其应用
CN111899301A (zh) * 2020-06-02 2020-11-06 广州中国科学院先进技术研究所 一种基于深度学习的工件6d位姿估计方法
CN111768487B (zh) * 2020-06-11 2023-11-28 武汉市工程科学技术研究院 基于三维点云库的地质岩层数据三维重建***及方法
CN112101092A (zh) * 2020-07-31 2020-12-18 北京智行者科技有限公司 自动驾驶环境感知方法及***
CN112102415A (zh) * 2020-08-25 2020-12-18 中国人民解放军63919部队 基于标定球的深度相机外参数标定方法、装置及设备
CN112017220B (zh) * 2020-08-27 2023-07-28 南京工业大学 一种基于抗差约束最小二乘算法的点云精确配准方法
CN112115953B (zh) * 2020-09-18 2023-07-11 南京工业大学 一种基于rgb-d相机结合平面检测与随机抽样一致算法的优化orb算法
CN112720459B (zh) * 2020-12-02 2022-07-12 达闼机器人股份有限公司 目标物体抓取方法、装置、存储介质及电子设备
WO2022147774A1 (zh) * 2021-01-08 2022-07-14 浙江大学 基于三角剖分和概率加权ransac算法的物***姿识别方法
CN112652075B (zh) * 2021-01-30 2022-08-09 上海汇像信息技术有限公司 对称物体3d模型的对称拟合方法
CN113344844A (zh) * 2021-04-14 2021-09-03 山东师范大学 基于rgb-d多模图像信息的目标果实检测方法及***
CN113362276B (zh) * 2021-04-26 2024-05-10 广东大自然家居科技研究有限公司 板材视觉检测方法及***
CN113192206B (zh) * 2021-04-28 2023-04-07 华南理工大学 基于目标检测和背景去除的三维模型实时重建方法及装置
CN113447948B (zh) * 2021-05-28 2023-03-21 淮阴工学院 一种基于ros机器人的相机与多激光雷达融合方法
CN113470049B (zh) * 2021-07-06 2022-05-20 吉林省田车科技有限公司 一种基于结构化彩色点云分割的完整目标提取方法
CN114373105A (zh) * 2021-12-20 2022-04-19 华南理工大学 一种点云标注及数据集制作的方法、***、装置及介质
CN114295076B (zh) * 2022-01-05 2023-10-20 南昌航空大学 一种解决基于结构光的微小物体测量阴影问题的测量方法
CN115082815B (zh) * 2022-07-22 2023-04-07 山东大学 基于机器视觉的茶芽采摘点定位方法、装置及采摘***
CN115139325B (zh) * 2022-09-02 2022-12-23 星猿哲科技(深圳)有限公司 物体抓取***
CN117011309B (zh) * 2023-09-28 2023-12-26 济宁港航梁山港有限公司 基于人工智能及深度数据的自动盘煤***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080247660A1 (en) * 2007-04-05 2008-10-09 Hui Zhang Automatic Detection and Mapping of Symmetries in an Image
CN104126989A (zh) * 2014-07-30 2014-11-05 福州大学 一种基于多台rgb-d摄像机下的足部表面三维信息获取方法
CN104298971A (zh) * 2014-09-28 2015-01-21 北京理工大学 一种3d点云数据中的目标识别方法
CN105184830A (zh) * 2015-08-28 2015-12-23 华中科技大学 一种对称图像对称轴检测定位方法
CN106780528A (zh) * 2016-12-01 2017-05-31 广西师范大学 基于边缘线匹配的图像对称轴检测方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015172227A1 (en) * 2014-05-13 2015-11-19 Pcp Vr Inc. Method, system and apparatus for generation and playback of virtual reality multimedia


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648359A (zh) * 2019-09-23 2020-01-03 山东师范大学 一种果实目标定位识别方法及***
CN110992372A (zh) * 2019-11-21 2020-04-10 浙江大华技术股份有限公司 物品抓取方法、装置、存储介质及电子装置
CN110992372B (zh) * 2019-11-21 2023-08-29 浙江大华技术股份有限公司 物品抓取方法、装置、存储介质及电子装置
CN111754515A (zh) * 2019-12-17 2020-10-09 北京京东尚科信息技术有限公司 堆叠物品的顺序抓取方法和装置
CN111754515B (zh) * 2019-12-17 2024-03-01 北京京东乾石科技有限公司 堆叠物品的顺序抓取方法和装置
CN111160280B (zh) * 2019-12-31 2022-09-30 芜湖哈特机器人产业技术研究院有限公司 基于rgbd相机的目标物体识别与定位方法及移动机器人
CN111160280A (zh) * 2019-12-31 2020-05-15 芜湖哈特机器人产业技术研究院有限公司 基于rgbd相机的目标物体识别与定位方法及移动机器人
CN112258631A (zh) * 2020-10-20 2021-01-22 河海大学常州校区 一种基于深度神经网络的三维目标检测方法及***
CN112258631B (zh) * 2020-10-20 2023-12-08 河海大学常州校区 一种基于深度神经网络的三维目标检测方法及***
CN112686859A (zh) * 2020-12-30 2021-04-20 中国农业大学 基于热红外和rgb-d相机的作物cwsi的检测方法
CN112686859B (zh) * 2020-12-30 2024-03-15 中国农业大学 基于热红外和rgb-d相机的作物cwsi的检测方法
CN112819883A (zh) * 2021-01-28 2021-05-18 华中科技大学 一种规则对象检测及定位方法
CN112819883B (zh) * 2021-01-28 2024-04-26 华中科技大学 一种规则对象检测及定位方法
CN112766223B (zh) * 2021-01-29 2023-01-06 西安电子科技大学 基于样本挖掘与背景重构的高光谱图像目标检测方法
CN112766223A (zh) * 2021-01-29 2021-05-07 西安电子科技大学 基于样本挖掘与背景重构的高光谱图像目标检测方法
CN113205465A (zh) * 2021-04-29 2021-08-03 上海应用技术大学 点云数据集分割方法及***
CN113205465B (zh) * 2021-04-29 2024-04-19 上海应用技术大学 点云数据集分割方法及***
CN113469195A (zh) * 2021-06-25 2021-10-01 浙江工业大学 一种基于自适应颜色快速点特征直方图的目标识别方法
CN113469195B (zh) * 2021-06-25 2024-02-06 浙江工业大学 一种基于自适应颜色快速点特征直方图的目标识别方法
CN113989391A (zh) * 2021-11-11 2022-01-28 河北农业大学 基于rgb-d相机的动物体三维模型重构***及方法
CN114323283A (zh) * 2021-12-30 2022-04-12 中铭谷智能机器人(广东)有限公司 一种钣金颜色特征智能识别框选方法

Also Published As

Publication number Publication date
CN108010036A (zh) 2018-05-08
AU2018370629A1 (en) 2020-07-02
US20190362178A1 (en) 2019-11-28
CN108010036B (zh) 2020-01-21
US10607106B2 (en) 2020-03-31
AU2018370629B2 (en) 2021-01-14

Similar Documents

Publication Publication Date Title
WO2019100647A1 (zh) 一种基于rgb-d相机的物体对称轴检测方法
US10234873B2 (en) Flight device, flight control system and method
CN112070818B (zh) 基于机器视觉的机器人无序抓取方法和***及存储介质
US11227405B2 (en) Determining positions and orientations of objects
WO2017080102A1 (zh) 飞行装置、飞行控制***及方法
CN106683137B (zh) 基于人工标志的单目多目标识别与定位方法
WO2019228523A1 (zh) 物体空间位置形态的确定方法、装置、存储介质及机器人
CN107392929B (zh) 一种基于人眼视觉模型的智能化目标检测及尺寸测量方法
CN108470356B (zh) 一种基于双目视觉的目标对象快速测距方法
US20100033584A1 (en) Image processing device, storage medium storing image processing program, and image pickup apparatus
KR102073468B1 (ko) 비전 시스템에서 컬러 이미지에 대해 컬러 후보 포즈들의 점수화를 위한 시스템 및 방법
CN110458858A (zh) 一种十字靶标的检测方法、***及存储介质
CN117152163B (zh) 一种桥梁施工质量视觉检测方法
CN110021029A (zh) 一种适用于rgbd-slam的实时动态配准方法及存储介质
CN107680035B (zh) 一种参数标定方法和装置、服务器及可读存储介质
Han et al. Target positioning method in binocular vision manipulator control based on improved canny operator
CN108765444A (zh) 基于单目视觉的地面t形运动目标检测与定位方法
CN104346614A (zh) 一种实景下的西瓜图像处理和定位方法
CN110992372A (zh) 物品抓取方法、装置、存储介质及电子装置
CN115841668A (zh) 一种双目视觉苹果识别以及精准定位的方法
CN115880220A (zh) 多视角苹果成熟度检测方法
CN113255455A (zh) 基于矢量去光照影响算法的单目相机物体识别与定位方法
CN113095214A (zh) 一种基于人工智能的无人机测绘光学防抖方法及***
CN113516709B (zh) 一种基于双目视觉的法兰定位方法
CN111630569A (zh) 双目匹配的方法、视觉成像装置及具有存储功能的装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 18880736; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase. Ref country code: DE

ENP Entry into the national phase. Ref document number: 2018370629; Country of ref document: AU; Date of ref document: 20180417; Kind code of ref document: A

122 Ep: pct application non-entry in european phase. Ref document number: 18880736; Country of ref document: EP; Kind code of ref document: A1