WO2019100647A1 - Object symmetry axis detection method based on an RGB-D camera - Google Patents
Object symmetry axis detection method based on an RGB-D camera
- Publication number: WO2019100647A1 (application PCT/CN2018/083260, CN2018083260W)
- Authority: WO — WIPO (PCT)
- Prior art keywords: point cloud, candidate, cloud data, point, symmetry
- Prior art date
Classifications
- G06T7/68 — Analysis of geometric attributes of symmetry
- G06T7/11 — Region-based segmentation
- G06F18/24147 — Distances to closest patterns, e.g. nearest neighbour classification
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85 — Stereo camera calibration
- G06V10/42 — Global feature extraction by analysis of the whole pattern
- G06V20/10 — Terrestrial scenes
- G06V20/64 — Three-dimensional objects
- G06T2207/10024 — Color image
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30108 — Industrial image inspection
- G06T2207/30128 — Food products
- G06V20/68 — Food, e.g. fruit or vegetables
Definitions
- The invention relates to the field of image processing, and in particular to an object symmetry axis detection method based on an RGB-D camera.
- Fruit picking robots can automatically detect and pick fruit, and are widely used because of their high efficiency and good automation.
- The picking action of a fruit picking robot depends on accurate detection and positioning of the fruit by its visual inspection system.
- The literature indicates that picking efficiency can be improved if the fruit is rotated and twisted in a specific manner relative to the direction of the fruit stem. Therefore, to further improve picking efficiency, the accuracy with which the robot detects the fruit's symmetry axis must be improved.
- The most common methods for detecting the symmetry axis of a fruit are axis finding based on surface curvature, learning-based natural image detection, and natural symmetry axis detection based on edge-feature learning.
- However, these methods still have many problems.
- They require very accurate 3D point clouds, their learning time grows rapidly as the point cloud data increases, and they are not well suited to detecting the fruit symmetry axis in images taken of natural scenes.
- The inventors therefore propose an object symmetry axis detection method based on an RGB-D camera, which can accurately detect the symmetry axis of an object in a natural scene.
- A method for detecting the symmetry axis of an object based on an RGB-D camera comprises:
- mapping the depth image into the pixel coordinate system of the color image to obtain an aligned depth image, and processing the color image and the aligned depth image into three-dimensional point cloud data, the three-dimensional point cloud data including the three-dimensional coordinates and color information of each point;
- identifying target point cloud data from the three-dimensional point cloud data by color-difference threshold segmentation, the target point cloud data being the data corresponding to the point cloud of the region where the target object is located;
- obtaining the centroid of the target point cloud data, establishing a spherical coordinate system centered on the centroid, and selecting n² candidate symmetry planes by spherical coordinate system segmentation, where n is an integer greater than 3;
- calculating the symmetry axis of the target point cloud data from the n² candidate symmetry planes and determining it as the symmetry axis of the target object.
- In a further technical solution, calculating the symmetry axis of the target point cloud data from the n² candidate symmetry planes includes: computing a score for each candidate symmetry plane according to a preset scoring strategy and determining the candidate plane with the lowest score as the symmetry plane of the target object;
- the symmetry axis of the target point cloud data is then calculated from the symmetry plane of the target object.
- In a further technical solution, the symmetry axis of the target point cloud data is calculated from the symmetry plane of the target object by a calculation involving p_v, the coordinate of the viewpoint, and p_o, the coordinate of the centroid of the target point cloud data.
- In a further technical solution, calculating the score of each of the n² candidate symmetry planes according to the preset scoring strategy includes:
- determining the candidate points of the candidate symmetry plane, a candidate point being a point that has a symmetric partner with respect to the candidate symmetry plane;
- determining the sum of the scores of the individual candidate points as the score of the candidate symmetry plane.
- In a further technical solution, the score of each candidate point is calculated according to the preset scoring strategy.
- In a further technical solution, determining the candidate points of a candidate symmetry plane includes: computing a quantity from p_v, the coordinate of the viewpoint, a point p on one side of the candidate symmetry plane, and the normal vector at that point;
- when this quantity indicates that the point can have a symmetric partner, the point is determined to be a candidate point.
- In a further technical solution, before the target point cloud data is identified by color-difference threshold segmentation, the point cloud is filtered using the average distance d_i = (1/k)·Σ_{j=1}^{k} √((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²), where
- d_i is the average distance between a point and its neighbouring points,
- k is the number of neighbouring points of the point,
- (x_i, y_i, z_i) are the coordinates of the point, and
- (x_j, y_j, z_j) are the coordinates of its j-th neighbouring point.
- In a further technical solution, identifying the foreground point cloud data from the three-dimensional point cloud data by color-difference threshold segmentation includes: for each point, computing R_s − G_s, where
- R_s is the red channel R value of the point and
- G_s is the green channel G value of the point; if R_s − G_s exceeds a preset threshold δ, the point is determined to belong to the foreground point cloud data.
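As a rough illustration, the color-difference segmentation above can be sketched in Python; the function name `segment_foreground` and the default threshold value are illustrative choices, not from the patent:

```python
import numpy as np

def segment_foreground(points, colors, delta=40):
    """Select foreground (e.g. red fruit) points by colour-difference
    thresholding: a point is kept when R_s - G_s > delta.
    `delta` is a scene-dependent preset threshold; 40 is an assumed value."""
    colors = colors.astype(np.int16)          # avoid uint8 wrap-around on subtraction
    mask = (colors[:, 0] - colors[:, 1]) > delta
    return points[mask], colors[mask]
```

A reddish point such as RGB (200, 50, 50) passes the test, while a green leaf pixel such as (60, 200, 60) is rejected.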
- In a further technical solution, establishing a spherical coordinate system centered on the centroid and selecting n² candidate symmetry planes by spherical coordinate system segmentation includes: determining the ranges of the horizontal split angle and the vertical split angle;
- changing the horizontal split angle and the vertical split angle n times each to obtain the unit normal vectors of n² sets of segmentation planes;
- obtaining the n² candidate symmetry planes from these unit normal vectors.
- In a further technical solution, obtaining the centroid of the target point cloud data includes: generating random points near the target point cloud data and
- determining as the centroid the random point for which the standard deviation of its distances to the points of the target point cloud data is smallest.
- The invention discloses an object symmetry axis detection method based on an RGB-D camera that performs three-dimensional reconstruction of the target object from its RGB-D image, filters the resulting point cloud, and finally detects the symmetry axis of the target object using a scoring strategy.
- When applied to fruit picking, the method can accurately find the symmetry axis of a fruit from images photographed in a natural scene, overcoming the limitation of earlier fruit recognition methods that were accurate only to the individual fruit and could not describe its geometry. This makes robotic grasping easier, improves automatic harvesting efficiency, and reduces damage to the fruit.
- FIG. 1 is a flow chart of a method for detecting an object symmetry axis based on an RGB-D camera according to the present invention.
- FIG. 2 is a schematic diagram of a coordinate mapping model between a color camera and a depth camera.
- FIG. 3 is a schematic diagram of calculating a score of each candidate point according to a preset integration strategy.
- the present application discloses an object symmetry axis detecting method based on an RGB-D camera.
- the method is based on an RGB-D camera.
- the RGB-D camera includes a color camera and a depth camera.
- The detection method includes the following steps (refer to FIG. 1):
- The calibration method used in this application is Zhang's calibration method (Zhang Zhengyou); other methods may also be used, which is not limited in this application.
- Calibrating the color camera yields the color camera intrinsic parameters H_rgb and extrinsic parameters (R_rgb, T_rgb); calibrating the depth camera of the RGB-D camera with the same method yields the depth camera intrinsic parameters H_d and extrinsic parameters (R_d, T_d).
- the color image of the target object is obtained by the color camera of the RGB-D camera, and the depth image of the target object is acquired by the depth camera of the RGB-D camera.
- In one example, the target object is fruit on a tree photographed in a natural scene.
- The color image acquired by the color camera has a higher resolution, typically 1080×1920, while the depth image acquired by the depth camera has a lower resolution, typically 424×512. The specific resolutions of the color image and the depth image are determined by the hardware parameters of the RGB-D camera and by user settings, and this application does not limit them.
- The depth information is mapped into the color image pixel coordinate system so that its pixels align one-to-one with the pixels of the color image; the aligned depth image then has the same resolution as the color image, 1080×1920. This involves the following steps, as shown in FIG. 2:
- Step 1: Using the color camera extrinsic parameters, obtain the projection coordinates P_rgb(X_c1, Y_c1, Z_c1) of a world point P(X_w, Y_w, Z_w) in the color camera coordinate system by the inverse of the projection transformation; using the depth camera extrinsic parameters, similarly obtain the projection coordinates P_d(X_c2, Y_c2, Z_c2) of the same world point in the depth camera coordinate system.
- Step 2: Using the color camera intrinsic parameters, obtain the image pixel coordinates P_rgb(u_1, v_1) in the color camera coordinate system; using the depth camera intrinsic parameters, obtain the image pixel coordinates P_d(u_2, v_2) in the depth camera coordinate system.
- Step 3: From P_rgb(X_c1, Y_c1, Z_c1), P_d(X_c2, Y_c2, Z_c2), P_rgb(u_1, v_1), and P_d(u_2, v_2), determine the correspondence between the pixel coordinates of the color image and the pixel coordinates of the depth image.
- Step 4: Finally, map the depth image into the color image pixel coordinate system according to this correspondence, obtaining the aligned depth image.
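The four alignment steps above amount to back-projecting each depth pixel into 3-D and re-projecting it with the color camera's parameters. A minimal sketch follows; the helper name `align_depth_to_color` is hypothetical, and it assumes the depth-to-color rotation R and translation T have already been derived from the two sets of extrinsic parameters:

```python
import numpy as np

def align_depth_to_color(u_d, v_d, depth, H_d, H_rgb, R, T):
    """Map one depth pixel (u_d, v_d) with depth value `depth` into the
    color image pixel coordinate system. H_d / H_rgb are the 3x3
    intrinsic matrices; (R, T) transforms depth-camera coordinates into
    color-camera coordinates."""
    # Back-project the depth pixel to a 3-D point in depth-camera coordinates
    p_d = depth * (np.linalg.inv(H_d) @ np.array([u_d, v_d, 1.0]))
    # Transform into the color camera's coordinate frame
    p_rgb = R @ p_d + T
    # Project into the color image with the color camera intrinsics
    uvw = H_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With identical intrinsics and an identity transform, a pixel maps back onto itself, which is a quick sanity check of the round trip.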
- The color image and the aligned depth image are processed into three-dimensional point cloud data, and the point cloud is filtered using the average neighbour distance d_i = (1/k)·Σ_{j=1}^{k} √((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²), where
- d_i is the average distance between the point p_i and its neighbouring points p_j in space;
- (x_i, y_i, z_i) are the three-dimensional coordinates of the point p_i;
- (x_j, y_j, z_j) are the three-dimensional coordinates of the neighbouring point p_j;
- m is the number of points in the foreground point cloud data;
- k is the number of neighbouring points searched for each point p_i, and is a preset value;
- μ is the mean of the distances d_i over all points;
- σ is the standard deviation of the distances d_i over all points.
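A sketch of this statistical filtering step follows; the cut-off rule μ ± α·σ and the parameter `alpha` are assumptions, since the description only defines d_i, μ, and σ:

```python
import numpy as np

def statistical_outlier_filter(points, k=8, alpha=1.0):
    """Remove sparse outliers from an (m, 3) point cloud. For each point,
    d_i is the mean Euclidean distance to its k nearest neighbours; points
    whose d_i deviates from the global mean mu by more than alpha standard
    deviations sigma are discarded (assumed cut-off rule)."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise difference vectors
    dist = np.sqrt((diff ** 2).sum(-1))              # (m, m) distance matrix
    # Average over the k nearest neighbours, skipping the point itself (index 0)
    d = np.sort(dist, axis=1)[:, 1:k + 1].mean(axis=1)
    mu, sigma = d.mean(), d.std()
    return points[np.abs(d - mu) <= alpha * sigma]
```

The O(m²) distance matrix keeps the sketch short; a k-d tree would be used for real point clouds.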
- The centroid P_0 of the target point cloud data is obtained as follows: a predetermined number of random points is generated in the vicinity of the target point cloud data, the predetermined number being larger than the number of target points (usually 2-3 times that number). All random points are traversed; for each random point, the Euclidean distances between it and every point of the target point cloud data are computed, and the standard deviation of these distances is obtained. The random point with the smallest standard deviation, i.e. the smallest distance fluctuation, is determined to be the centroid P_0 of the target point cloud data.
- Step 1: Determine the ranges of the horizontal split angle θ and the vertical split angle φ. The horizontal split angle θ should cover the entire range, i.e. θ ∈ [0, π], while φ covers only the negative half of the z-axis.
- Step 2: Divide the horizontal split angle θ and the vertical split angle φ into n equal parts, i.e. compute thetaBinSize = (θ(2) − θ(1))/n and phiBinSize = (φ(2) − φ(1))/n, where thetaBinSize is the step of the horizontal split angle, phiBinSize is the step of the vertical split angle, θ(2) and θ(1) are the maximum and minimum values of the horizontal split angle θ, and φ(2) and φ(1) are the maximum and minimum values of the vertical split angle φ.
- Step 3: Change the horizontal split angle and the vertical split angle n times each to obtain the unit normal vectors of the n² sets of segmentation planes.
- A specific approach is: keep the horizontal split angle θ fixed and cyclically change the vertical split angle φ, computing one unit normal vector of a segmentation plane for each φ, n times in total; then cyclically change the horizontal split angle θ, also n times in total, repeating the inner loop over φ for each value of θ.
- This yields the unit normal vectors of n² sets of segmentation planes, i.e. n² candidate symmetry planes. Each unit normal vector is computed by the standard spherical-to-Cartesian conversion, where:
- x, y, z are the coordinate values of the unit normal vector;
- theta (θ) is the azimuth angle in the spherical coordinate system;
- phi (φ) is the elevation angle in the spherical coordinate system;
- r is the radius of the unit sphere;
- i and j are loop indices: i counts the changes of the horizontal split angle θ, and j counts the changes of the vertical split angle φ.
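Steps 1-3 can be sketched as follows. The φ range covering only the negative half of the z-axis is written here as [π/2, π] (measuring φ from the +z axis), which is an assumption consistent with the description:

```python
import numpy as np

def candidate_plane_normals(n, theta_range=(0.0, np.pi),
                            phi_range=(np.pi / 2, np.pi)):
    """Generate the n*n unit normal vectors of the candidate symmetry
    planes by evenly splitting the horizontal angle theta and the
    vertical angle phi, then converting to Cartesian coordinates."""
    theta_bin = (theta_range[1] - theta_range[0]) / n
    phi_bin = (phi_range[1] - phi_range[0]) / n
    normals = []
    for i in range(n):                       # outer loop over theta
        theta = theta_range[0] + i * theta_bin
        for j in range(n):                   # inner loop over phi
            phi = phi_range[0] + j * phi_bin
            # standard spherical-to-Cartesian conversion, unit radius r = 1
            normals.append([np.sin(phi) * np.cos(theta),
                            np.sin(phi) * np.sin(theta),
                            np.cos(phi)])
    return np.asarray(normals)
```

Each normal, together with the centroid P_0, defines one candidate symmetry plane.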
- Step 1: First determine the candidate points of each candidate symmetry plane: compute a quantity from p_v, the coordinate of the viewpoint (default (0, 0, 0)), a point p on one side of the candidate symmetry plane, and the normal vector at that point. When the result indicates that the point can have a symmetric partner with respect to the candidate symmetry plane, the point is determined to be a candidate point; otherwise the point has no symmetric partner. This test is used to select all candidate points of the candidate symmetry plane.
- The normal vector at each point p is determined by local surface fitting: obtain the t nearest neighbours of p (t an integer), fit a least-squares local plane P to these points, and form their covariance matrix M; the eigenvector corresponding to the minimum eigenvalue of M is the normal vector at p.
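The local-surface-fitting normal estimate above is ordinary PCA on the neighbourhood covariance; a minimal sketch:

```python
import numpy as np

def point_normal(p, cloud, t=10):
    """Estimate the normal at point p: take its t nearest neighbours in
    `cloud`, form the covariance matrix of the centred neighbourhood, and
    return the eigenvector of the smallest eigenvalue."""
    idx = np.argsort(np.linalg.norm(cloud - p, axis=1))[:t]
    nbrs = cloud[idx]
    centered = nbrs - nbrs.mean(axis=0)
    # covariance matrix (up to a 1/t factor, which does not affect eigenvectors)
    eigval, eigvec = np.linalg.eigh(centered.T @ centered)
    return eigvec[:, 0]          # eigh sorts eigenvalues ascending
```

For points lying on a plane, the smallest-eigenvalue direction is the plane's normal (up to sign).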
- Step 2: Calculate the score of each candidate point according to the preset scoring strategy. Specifically, reflect the candidate point across the candidate symmetry plane to obtain its symmetry point; use the KNN (K-Nearest Neighbor) classification algorithm to find the point of the cloud nearest to the symmetry point; determine the distance d_min between the symmetry point and this nearest neighbour and the angle α between their normal vectors; and compute the candidate point's score as x_score = d_min + ω·α, where ω is a weight coefficient.
- Step 3: Determine the sum of the scores of the candidate points as the score of the candidate symmetry plane.
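Steps 1-3 of the scoring procedure, using the score x_score = d_min + ω·α given in the claims, can be sketched as follows. The visibility-based candidate-point filtering is omitted for brevity, and `omega` is an assumed weight value:

```python
import numpy as np

def plane_score(points, normals, plane_n, centroid, omega=0.5):
    """Score one candidate symmetry plane through `centroid` with unit
    normal `plane_n`: reflect every point across the plane, find the
    reflected point's nearest neighbour in the cloud, and accumulate
    x_score = d_min + omega * alpha, where alpha is the angle between the
    reflected normal and the neighbour's normal. Lower scores indicate a
    better symmetry plane."""
    score = 0.0
    for p, n in zip(points, normals):
        d = np.dot(p - centroid, plane_n)
        p_ref = p - 2 * d * plane_n                    # mirror the point
        n_ref = n - 2 * np.dot(n, plane_n) * plane_n   # mirror its normal
        dists = np.linalg.norm(points - p_ref, axis=1)
        j = dists.argmin()                             # 1-nearest-neighbour search
        cos_a = np.clip(abs(np.dot(n_ref, normals[j])), -1.0, 1.0)
        score += dists[j] + omega * np.arccos(cos_a)
    return score
```

A perfectly mirror-symmetric cloud scores 0 for its true symmetry plane, while a misaligned plane accumulates positive residuals, which is why the plane with the lowest score is selected.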
- The symmetry axis of the target point cloud data is then computed from the selected symmetry plane using p_v, the coordinate of the viewpoint (default (0, 0, 0)), and p_o, the coordinate of the centroid of the target point cloud data; the symmetry axis of the target point cloud data is the symmetry axis of the target object.
Abstract
Description
Claims (10)
- A method for detecting the symmetry axis of an object based on an RGB-D camera, characterized in that the method comprises: calibrating the color camera and the depth camera of an RGB-D camera to obtain the camera parameters of the RGB-D camera; acquiring a color image of a target object with the color camera of the RGB-D camera and a depth image of the target object with the depth camera of the RGB-D camera; mapping the depth image into the pixel coordinate system of the color image according to the camera parameters of the RGB-D camera to obtain an aligned depth image, and processing the color image and the aligned depth image into three-dimensional point cloud data, the three-dimensional point cloud data including the three-dimensional coordinates and color information of the point cloud; identifying target point cloud data from the three-dimensional point cloud data by color-difference threshold segmentation, the target point cloud data being the data corresponding to the point cloud of the region where the target object is located; obtaining the centroid of the target point cloud data, establishing a spherical coordinate system centered on the centroid, and selecting n² candidate symmetry planes by spherical coordinate system segmentation, n being an integer greater than 3; and calculating the symmetry axis of the target point cloud data from the n² candidate symmetry planes and determining the symmetry axis of the target point cloud data as the symmetry axis of the target object.
- The method according to claim 1, characterized in that calculating the symmetry axis of the target point cloud data from the n² candidate symmetry planes comprises: calculating a score for each of the n² candidate symmetry planes according to a preset scoring strategy; determining the candidate symmetry plane with the lowest score as the symmetry plane of the target object; and calculating the symmetry axis of the target point cloud data from the symmetry plane of the target object.
- The method according to claim 2, characterized in that calculating the score of each of the n² candidate symmetry planes according to the preset scoring strategy comprises: for each candidate symmetry plane, determining the candidate points of the candidate symmetry plane, a candidate point being a point that has a symmetric partner with respect to the candidate symmetry plane; calculating the score of each candidate point according to the preset scoring strategy; and determining the sum of the scores of the candidate points as the score of the candidate symmetry plane.
- The method according to claim 4, characterized in that calculating the score of each candidate point according to the preset scoring strategy comprises: reflecting the candidate point across the candidate symmetry plane to obtain the symmetry point of the candidate point; determining, with the KNN classification algorithm, the neighbouring point closest to the symmetry point of the candidate point, and determining the distance between the symmetry point of the candidate point and that neighbouring point; and computing x_score = d_min + ω·α, where x_score is the score of the candidate point, d_min is the distance between the symmetry point of the candidate point and the neighbouring point, α is the angle between the normal vector of the symmetry point of the candidate point and the normal vector of the neighbouring point, and ω is a weight coefficient.
- The method according to claim 7, characterized in that identifying the foreground point cloud data from the three-dimensional point cloud data by color-difference threshold segmentation comprises: for any point of the three-dimensional point cloud data, computing R_s − G_s, where R_s is the red channel R value of the point and G_s is the green channel G value of the point; and if R_s − G_s > δ, determining that the data corresponding to the point belongs to the foreground point cloud data, δ being a preset threshold.
- The method according to any one of claims 1 to 6, characterized in that establishing a spherical coordinate system centered on the centroid and selecting n² candidate symmetry planes by spherical coordinate system segmentation comprises: establishing a spherical coordinate system centered on the centroid and determining the ranges of the horizontal split angle and the vertical split angle; dividing the horizontal split angle into n equal parts and dividing the vertical split angle into n equal parts, where thetaBinSize is the variation step of the horizontal split angle, phiBinSize is the variation step of the vertical split angle, θ(2) and θ(1) are respectively the maximum and minimum values of the horizontal split angle, and the corresponding quantities are the maximum and minimum values of the vertical split angle; changing the horizontal split angle and the vertical split angle n times each to obtain the unit normal vectors of n² sets of segmentation planes; and obtaining the n² candidate symmetry planes from the unit normal vectors of the n² sets of segmentation planes.
- The method according to any one of claims 1 to 6, characterized in that obtaining the centroid of the target point cloud data comprises: randomly generating a predetermined number of random points in the vicinity of the target point cloud data, the predetermined number being larger than the number of points of the target point cloud data; for each random point, obtaining the standard deviation of the distances between the random point and the points of the target point cloud data; and determining the random point with the smallest standard deviation of distances to the points of the target point cloud data as the centroid of the target point cloud data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/093,658 US10607106B2 (en) | 2017-11-21 | 2018-04-17 | Object symmetry axis detection method based on RGB-D camera |
AU2018370629A AU2018370629B2 (en) | 2017-11-21 | 2018-04-17 | RGB-D camera-based object symmetry axis detection method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711165747.2 | 2017-11-21 | ||
CN201711165747.2A CN108010036B (zh) | 2017-11-21 | 2017-11-21 | 一种基于rgb-d相机的物体对称轴检测方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019100647A1 true WO2019100647A1 (zh) | 2019-05-31 |
Family
ID=62053109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/083260 WO2019100647A1 (zh) | 2017-11-21 | 2018-04-17 | 一种基于rgb-d相机的物体对称轴检测方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10607106B2 (zh) |
CN (1) | CN108010036B (zh) |
AU (1) | AU2018370629B2 (zh) |
WO (1) | WO2019100647A1 (zh) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110648359A (zh) * | 2019-09-23 | 2020-01-03 | 山东师范大学 | 一种果实目标定位识别方法及*** |
CN110992372A (zh) * | 2019-11-21 | 2020-04-10 | 浙江大华技术股份有限公司 | 物品抓取方法、装置、存储介质及电子装置 |
CN111160280A (zh) * | 2019-12-31 | 2020-05-15 | 芜湖哈特机器人产业技术研究院有限公司 | 基于rgbd相机的目标物体识别与定位方法及移动机器人 |
CN111754515A (zh) * | 2019-12-17 | 2020-10-09 | 北京京东尚科信息技术有限公司 | 堆叠物品的顺序抓取方法和装置 |
CN112258631A (zh) * | 2020-10-20 | 2021-01-22 | 河海大学常州校区 | 一种基于深度神经网络的三维目标检测方法及*** |
CN112686859A (zh) * | 2020-12-30 | 2021-04-20 | 中国农业大学 | 基于热红外和rgb-d相机的作物cwsi的检测方法 |
CN112766223A (zh) * | 2021-01-29 | 2021-05-07 | 西安电子科技大学 | 基于样本挖掘与背景重构的高光谱图像目标检测方法 |
CN112819883A (zh) * | 2021-01-28 | 2021-05-18 | 华中科技大学 | 一种规则对象检测及定位方法 |
CN113205465A (zh) * | 2021-04-29 | 2021-08-03 | 上海应用技术大学 | 点云数据集分割方法及*** |
CN113469195A (zh) * | 2021-06-25 | 2021-10-01 | 浙江工业大学 | 一种基于自适应颜色快速点特征直方图的目标识别方法 |
CN113989391A (zh) * | 2021-11-11 | 2022-01-28 | 河北农业大学 | 基于rgb-d相机的动物体三维模型重构***及方法 |
CN114323283A (zh) * | 2021-12-30 | 2022-04-12 | 中铭谷智能机器人(广东)有限公司 | 一种钣金颜色特征智能识别框选方法 |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10854011B2 (en) | 2018-04-09 | 2020-12-01 | Direct Current Capital LLC | Method for rendering 2D and 3D data within a 3D virtual environment |
CN110473283B (zh) * | 2018-05-09 | 2024-01-23 | 无锡时代天使医疗器械科技有限公司 | 牙齿三维数字模型的局部坐标系设定方法 |
CN109146894A (zh) * | 2018-08-07 | 2019-01-04 | 庄朝尹 | 一种三维建模的模型区域分割方法 |
CN109344868B (zh) * | 2018-08-28 | 2021-11-16 | 广东奥普特科技股份有限公司 | 一种区分互为轴对称的不同类物件的通用方法 |
US11823461B1 (en) | 2018-09-28 | 2023-11-21 | Direct Current Capital LLC | Systems and methods for perceiving a scene around a mobile device |
US11567497B1 (en) | 2019-02-04 | 2023-01-31 | Direct Current Capital LLC | Systems and methods for perceiving a field around a device |
US10549928B1 (en) | 2019-02-22 | 2020-02-04 | Dexterity, Inc. | Robotic multi-item type palletizing and depalletizing |
US11741566B2 (en) * | 2019-02-22 | 2023-08-29 | Dexterity, Inc. | Multicamera image processing |
US11460855B1 (en) | 2019-03-29 | 2022-10-04 | Direct Current Capital LLC | Systems and methods for sensor calibration |
CN110032962B (zh) * | 2019-04-03 | 2022-07-08 | 腾讯科技(深圳)有限公司 | 一种物体检测方法、装置、网络设备和存储介质 |
CN110567422B (zh) * | 2019-06-25 | 2021-07-06 | 江苏省特种设备安全监督检验研究院 | 一种起重机吊钩扭转角自动检测方法 |
CN110853080A (zh) * | 2019-09-30 | 2020-02-28 | 广西慧云信息技术有限公司 | 一种田间果实尺寸的测量方法 |
TWI759651B (zh) * | 2019-11-21 | 2022-04-01 | 財團法人工業技術研究院 | 基於機器學習的物件辨識系統及其方法 |
CN111028345B (zh) * | 2019-12-17 | 2023-05-02 | 中国科学院合肥物质科学研究院 | 一种港口场景下圆形管道的自动识别与对接方法 |
CN111126296A (zh) * | 2019-12-25 | 2020-05-08 | 中国联合网络通信集团有限公司 | 水果定位方法及装置 |
CN111144480A (zh) * | 2019-12-25 | 2020-05-12 | 深圳蓝胖子机器人有限公司 | 一种可回收垃圾视觉分类方法、***、设备 |
CN111311576B (zh) * | 2020-02-14 | 2023-06-02 | 易思维(杭州)科技有限公司 | 基于点云信息的缺陷检测方法 |
CN111353417A (zh) * | 2020-02-26 | 2020-06-30 | 北京三快在线科技有限公司 | 一种目标检测的方法及装置 |
CN111428622B (zh) * | 2020-03-20 | 2023-05-09 | 上海健麾信息技术股份有限公司 | 一种基于分割算法的图像定位方法及其应用 |
CN111899301A (zh) * | 2020-06-02 | 2020-11-06 | 广州中国科学院先进技术研究所 | 一种基于深度学习的工件6d位姿估计方法 |
CN111768487B (zh) * | 2020-06-11 | 2023-11-28 | 武汉市工程科学技术研究院 | 基于三维点云库的地质岩层数据三维重建***及方法 |
CN112101092A (zh) * | 2020-07-31 | 2020-12-18 | 北京智行者科技有限公司 | 自动驾驶环境感知方法及*** |
CN112102415A (zh) * | 2020-08-25 | 2020-12-18 | 中国人民解放军63919部队 | 基于标定球的深度相机外参数标定方法、装置及设备 |
CN112017220B (zh) * | 2020-08-27 | 2023-07-28 | 南京工业大学 | 一种基于抗差约束最小二乘算法的点云精确配准方法 |
CN112115953B (zh) * | 2020-09-18 | 2023-07-11 | 南京工业大学 | 一种基于rgb-d相机结合平面检测与随机抽样一致算法的优化orb算法 |
CN112720459B (zh) * | 2020-12-02 | 2022-07-12 | 达闼机器人股份有限公司 | 目标物体抓取方法、装置、存储介质及电子设备 |
WO2022147774A1 (zh) * | 2021-01-08 | 2022-07-14 | 浙江大学 | 基于三角剖分和概率加权ransac算法的物***姿识别方法 |
CN112652075B (zh) * | 2021-01-30 | 2022-08-09 | 上海汇像信息技术有限公司 | 对称物体3d模型的对称拟合方法 |
CN113344844A (zh) * | 2021-04-14 | 2021-09-03 | 山东师范大学 | 基于rgb-d多模图像信息的目标果实检测方法及*** |
CN113362276B (zh) * | 2021-04-26 | 2024-05-10 | 广东大自然家居科技研究有限公司 | 板材视觉检测方法及*** |
CN113192206B (zh) * | 2021-04-28 | 2023-04-07 | 华南理工大学 | 基于目标检测和背景去除的三维模型实时重建方法及装置 |
CN113447948B (zh) * | 2021-05-28 | 2023-03-21 | 淮阴工学院 | 一种基于ros机器人的相机与多激光雷达融合方法 |
CN113470049B (zh) * | 2021-07-06 | 2022-05-20 | 吉林省田车科技有限公司 | 一种基于结构化彩色点云分割的完整目标提取方法 |
CN114373105A (zh) * | 2021-12-20 | 2022-04-19 | 华南理工大学 | 一种点云标注及数据集制作的方法、***、装置及介质 |
CN114295076B (zh) * | 2022-01-05 | 2023-10-20 | 南昌航空大学 | 一种解决基于结构光的微小物体测量阴影问题的测量方法 |
CN115082815B (zh) * | 2022-07-22 | 2023-04-07 | 山东大学 | 基于机器视觉的茶芽采摘点定位方法、装置及采摘*** |
CN115139325B (zh) * | 2022-09-02 | 2022-12-23 | 星猿哲科技(深圳)有限公司 | 物体抓取*** |
CN117011309B (zh) * | 2023-09-28 | 2023-12-26 | 济宁港航梁山港有限公司 | 基于人工智能及深度数据的自动盘煤*** |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080247660A1 (en) * | 2007-04-05 | 2008-10-09 | Hui Zhang | Automatic Detection and Mapping of Symmetries in an Image |
CN104126989A (zh) * | 2014-07-30 | 2014-11-05 | 福州大学 | 一种基于多台rgb-d摄像机下的足部表面三维信息获取方法 |
CN104298971A (zh) * | 2014-09-28 | 2015-01-21 | 北京理工大学 | 一种3d点云数据中的目标识别方法 |
CN105184830A (zh) * | 2015-08-28 | 2015-12-23 | 华中科技大学 | 一种对称图像对称轴检测定位方法 |
CN106780528A (zh) * | 2016-12-01 | 2017-05-31 | 广西师范大学 | 基于边缘线匹配的图像对称轴检测方法 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015172227A1 (en) * | 2014-05-13 | 2015-11-19 | Pcp Vr Inc. | Method, system and apparatus for generation and playback of virtual reality multimedia |
- 2017-11-21: CN CN201711165747.2A patent/CN108010036B/zh active Active
- 2018-04-17: US US16/093,658 patent/US10607106B2/en active Active
- 2018-04-17: AU AU2018370629A patent/AU2018370629B2/en active Active
- 2018-04-17: WO PCT/CN2018/083260 patent/WO2019100647A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN108010036A (zh) | 2018-05-08 |
AU2018370629A1 (en) | 2020-07-02 |
US20190362178A1 (en) | 2019-11-28 |
CN108010036B (zh) | 2020-01-21 |
US10607106B2 (en) | 2020-03-31 |
AU2018370629B2 (en) | 2021-01-14 |
Legal Events
- 121 — Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18880736; Country: EP; Kind code: A1)
- NENP — Non-entry into the national phase (Ref country code: DE)
- ENP — Entry into the national phase (Ref document number: 2018370629; Country: AU; Date of ref document: 20180417; Kind code: A)
- 122 — Ep: PCT application non-entry in European phase (Ref document number: 18880736; Country: EP; Kind code: A1)