WO2021103824A1 - Method and device for determining key point positions in robot hand-eye calibration based on a calibration block - Google Patents

Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Info

Publication number
WO2021103824A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration block
point cloud
point
dimensional
calibration
Prior art date
Application number
PCT/CN2020/120103
Other languages
English (en)
French (fr)
Inventor
郑振兴
刁世普
Original Assignee
广东技术师范大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东技术师范大学 filed Critical 广东技术师范大学
Publication of WO2021103824A1 publication Critical patent/WO2021103824A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The invention relates to the field of hand-eye calibration for inspection systems in automation, covering the calibration of vision guidance systems in robotic machining systems, the calibration of vision systems that detect the position and related parameters of parts to be assembled in robotic assembly systems, the calibration of vision inspection systems in machining centers that convert target position information after defects are identified from sensor data, and vision-guided operations in other automated machining (operation) processes. It specifically relates to a method and device for determining key point positions in robot hand-eye calibration based on a calibration block.
  • Automated equipment is a key instrument of a strong manufacturing nation, and must therefore evolve toward higher speed and greater intelligence. An important means to this end is to equip the machine with "eyes" and a "brain" that can work with those eyes.
  • The eye can be a monocular camera, a binocular camera, a multi-view camera, a 3D scanner, or an RGB-D sensor.
  • The present invention proposes a method and device for determining the positions of key points in calibration-block-based robot hand-eye calibration, which can extract key points at low cost, conveniently and with high precision, and thereby enables low-cost, convenient, high-precision hand-eye calibration of the robot vision system.
  • The calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape. The key points are no fewer than three preset points on the 3D calibration block, and the preset points do not coincide in the height direction. The key point extraction method includes the following steps:
  • Step 1: Adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
  • Step 2: Adjust the posture of the robot so that the 3D vision system at the end of the robot can acquire a 3D calibration block point cloud containing the surfaces surrounding the key points;
  • Step 3: Convert the CAD model of the 3D calibration block into a point cloud to obtain the 3D calibration block model point cloud;
  • Step 4: Register the 3D calibration block model point cloud with the acquired 3D calibration block point cloud;
  • Step 5: Taking the key point positions on the 3D calibration block model point cloud as reference, set a corresponding threshold to extract the points near each key point from the 3D calibration block point cloud, thereby determining the coordinates of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • Step 3 includes the following sub-steps:
  • Step 301: Obtain the CAD model of the 3D calibration block and convert it into a PLY file;
  • Step 302: Using the data format conversion function in the PCL library, convert the PLY file into the point cloud data format to obtain the 3D calibration block model point cloud (a brief sketch of these two sub-steps follows).
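  • A minimal sketch of sub-steps 301 to 302 using the PCL API is given below. It is not code from the patent: the file name is hypothetical, and the CAD model is assumed to have already been exported to PLY by the CAD tool. Keeping only the mesh vertices can give a sparse model cloud, so denser surface sampling may be needed in practice.

```cpp
#include <pcl/io/ply_io.h>
#include <pcl/conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main() {
  pcl::PolygonMesh mesh;
  if (pcl::io::loadPLYFile("calibration_block.ply", mesh) < 0)
    return -1;  // failed to read the PLY export of the CAD model

  // Convert the mesh vertex data into the PCL point cloud format.
  pcl::PointCloud<pcl::PointXYZ>::Ptr model(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromPCLPointCloud2(mesh.cloud, *model);
  return 0;
}
```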
  • Step 4 includes the following sub-steps:
  • Step 401: Sample the 3D calibration block point cloud and the 3D calibration block model point cloud separately;
  • Step 402: Compute the feature point descriptors of the two clouds to obtain their respective fast point feature histograms;
  • Step 403: Using the fast point feature histograms of the two clouds, coarsely register them with the Sample Consensus Initial Alignment algorithm;
  • Step 404: Refine the registration with the Iterative Closest Point algorithm (a condensed sketch of sub-steps 401 to 404 follows).
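  • For illustration, a condensed PCL-based sketch of sub-steps 401 to 404 is given below. It is not code from the patent; the point type, voxel leaf size and search radii are assumed values (in meters) that would need tuning to the scanner and the calibration block.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using Features = pcl::PointCloud<pcl::FPFHSignature33>;

// Step 401: voxel-grid downsampling to speed up registration.
Cloud::Ptr downsample(const Cloud::Ptr& in, float leaf) {
  pcl::VoxelGrid<pcl::PointXYZ> vg;
  vg.setInputCloud(in);
  vg.setLeafSize(leaf, leaf, leaf);
  Cloud::Ptr out(new Cloud);
  vg.filter(*out);
  return out;
}

// Step 402: surface normals, then FPFH descriptors per point.
Features::Ptr computeFPFH(const Cloud::Ptr& cloud) {
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setRadiusSearch(0.01);   // assumed normal-estimation radius
  ne.compute(*normals);

  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fe;
  fe.setInputCloud(cloud);
  fe.setInputNormals(normals);
  fe.setRadiusSearch(0.02);   // must exceed the normal radius
  Features::Ptr features(new Features);
  fe.compute(*features);
  return features;
}

Eigen::Matrix4f registerClouds(Cloud::Ptr scene, Cloud::Ptr model) {
  scene = downsample(scene, 0.002f);
  model = downsample(model, 0.002f);

  // Step 403: coarse registration with SAC-IA over the FPFH features.
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::FPFHSignature33> sac;
  sac.setInputSource(model);
  sac.setSourceFeatures(computeFPFH(model));
  sac.setInputTarget(scene);
  sac.setTargetFeatures(computeFPFH(scene));
  Cloud coarse;
  sac.align(coarse);

  // Step 404: refine with ICP, starting from the coarse estimate.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(model);
  icp.setInputTarget(scene);
  Cloud refined;
  icp.align(refined, sac.getFinalTransformation());
  return icp.getFinalTransformation();
}
```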
  • In step 5, a corresponding threshold is set, and a nearest-neighbor search is used to find, in the 3D calibration block point cloud, the point closest to each key point of the 3D calibration block model point cloud; the coordinates of that point are taken as the coordinates of the key point on the 3D calibration block in the 3D vision system coordinate system.
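  • As an illustration of this lookup (not code from the patent; the threshold value is an assumption), PCL's k-d tree provides the nearest-neighbor search:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <vector>

// Find the scene point nearest to a model key point Pi'; accept it as Pi's
// coordinates in the vision-system frame only if it lies within `threshold`.
bool locateKeyPoint(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scene,
                    const pcl::PointXYZ& model_key_point,
                    float threshold,              // e.g. 0.001 m, assumed
                    pcl::PointXYZ& result) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(scene);

  std::vector<int> index(1);
  std::vector<float> sq_dist(1);
  if (tree.nearestKSearch(model_key_point, 1, index, sq_dist) < 1)
    return false;                                  // empty cloud
  if (sq_dist[0] > threshold * threshold)
    return false;                                  // outside the threshold

  result = (*scene)[index[0]];
  return true;
}
```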
  • A device for determining the positions of key points in calibration-block-based robot hand-eye calibration, where the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the block that do not coincide in the height direction. The key point extraction device includes:
  • a 3D calibration block posture adjustment module, used to adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
  • a robot posture adjustment module, used to adjust the posture of the robot so that the 3D vision system at the end of the robot can acquire a 3D calibration block point cloud containing the surfaces surrounding the key points;
  • a model point cloud conversion module, used to convert the CAD model of the 3D calibration block into a point cloud to obtain the 3D calibration block model point cloud;
  • a registration module, used to register the 3D calibration block model point cloud with the acquired 3D calibration block point cloud;
  • a key point coordinate determination module, used to take the key point positions on the 3D calibration block model point cloud as reference and set a corresponding threshold to extract the points near each key point from the 3D calibration block point cloud, thereby determining the coordinates of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • The model point cloud conversion module includes:
  • a PLY file conversion unit, used to obtain the CAD model of the 3D calibration block and convert it into a PLY file;
  • a model point cloud acquisition unit, used to convert the PLY file into the point cloud data format using the data format conversion function in the PCL library, to obtain the 3D calibration block model point cloud.
  • The registration module includes a sampling unit, a fast point feature histogram unit, a coarse registration unit, and a precise registration unit, wherein:
  • the sampling unit is used to sample the 3D calibration block point cloud and the 3D calibration block model point cloud separately;
  • the fast point feature histogram unit is used to compute the feature point descriptors of the two clouds to obtain their respective fast point feature histograms;
  • the coarse registration unit is used to coarsely register the clouds with the Sample Consensus Initial Alignment algorithm, based on the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud;
  • the precise registration unit is used to refine the registration with the Iterative Closest Point algorithm.
  • The key point coordinate determination module is used to set a corresponding threshold and, by nearest-neighbor search, find in the 3D calibration block point cloud the point closest to each key point of the 3D calibration block model point cloud; the coordinates of that point are taken as the coordinates of the key point on the 3D calibration block in the 3D vision system coordinate system.
  • The present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape whose key points do not coincide in the height direction, and can therefore determine the coordinates of the key points in the robot vision system at low cost, conveniently and with high precision. Specifically, the placement posture of the calibration block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the robot's posture is then adjusted so that the 3D vision system can acquire the point cloud of the surfaces surrounding the key points; finally, the 3D calibration block model point cloud is registered with the captured 3D calibration block point cloud, and a corresponding threshold is set to determine the points near each key point, yielding the key point coordinates in the 3D vision system coordinate system.
  • From the key point coordinates in the robot base coordinate system and in the 3D vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot dynamic 3D vision system can be solved quickly, achieving low-cost, convenient, high-precision hand-eye calibration of the robot dynamic 3D vision system.
  • Figure 1 is a schematic diagram of the structure of the calibration block used in the present invention.
  • Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention.
  • FIG. 3 is a flowchart of an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention.
  • FIG. 4 is a structural block diagram of an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention.
  • FIG. 5 is a flowchart of step 3 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention.
  • FIG. 6 is a structural block diagram of the model point cloud conversion module in an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention.
  • FIG. 7 is a flowchart of step 4 in an embodiment of the method for extracting key points in the calibration of the robot dynamic 3D vision system according to the present invention.
  • FIG. 8 is a structural block diagram of the model point cloud registration module in an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention.
  • Figure 1 shows the calibration block used in the present invention: the calibration block is a three-dimensional calibration block, and the key points are points P1, P2 and P3 in Figure 1.
  • Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention: after the placement posture of the three-dimensional calibration block is adjusted, the detection posture of the robot is adjusted so that the coordinates of points P1, P2 and P3 in the 3D vision system coordinate system can be determined.
  • Once the coordinates of P1, P2 and P3 in the robot base coordinate system have also been determined, the transformation matrix of the hand-eye relationship of the robot dynamic 3D vision system can be solved from the coordinates of P1, P2 and P3 in the two coordinate systems, realizing hand-eye calibration of the robot 3D dynamic vision system. A detailed description is given below with reference to FIGS. 1 to 4.
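  • For reference, one common way to write the relation being solved (the algebra is not spelled out in the patent): with the same key points measured in both frames, the hand-eye transform from the vision-system frame $c$ to the robot base frame $b$ is the rigid motion $({}^{b}R_{c}, {}^{b}\mathbf{t}_{c})$ satisfying

$$ {}^{b}\mathbf{p}_i \;=\; {}^{b}R_{c}\,{}^{c}\mathbf{p}_i \;+\; {}^{b}\mathbf{t}_{c}, \qquad i = 1, 2, 3, $$

which three non-collinear point pairs determine uniquely. For an eye-in-hand setup, the fixed flange-to-camera transform can then be recovered as ${}^{f}T_{c} = ({}^{b}T_{f})^{-1}\,{}^{b}T_{c}$, using the flange pose ${}^{b}T_{f}$ reported by the controller at the measurement posture.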
  • The calibration block used in the embodiment of the present invention is a three-dimensional calibration block with a special shape. As shown in FIG. 1, it has a polyhedral structure and an irregular shape, and the key points are points P1, P2 and P3 on the block; the key points do not coincide in the height direction and are distributed roughly evenly along it.
  • The three-dimensional calibration block with this special structure is used to determine the coordinates of the key points in the coordinate system of the three-dimensional vision system.
  • The embodiment of the present invention discloses a method for determining the positions of key points in calibration-block-based robot hand-eye calibration, which includes the following steps:
  • Step 1: As shown in Figure 2, adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two of P1, P2 and P3 is not parallel to any coordinate axis of the robot base coordinate system;
  • Step 2: As shown in Figure 2, adjust the posture of the robot so that the 3D vision system at the end of the robot can acquire a 3D calibration block point cloud containing the surfaces surrounding P1, P2 and P3;
  • Step 3: Convert the CAD model of the 3D calibration block into a point cloud to obtain the 3D calibration block model point cloud;
  • Step 4: Register the 3D calibration block model point cloud with the acquired 3D calibration block point cloud;
  • Step 5: Taking the key point positions on the 3D calibration block model point cloud as reference (points P1', P2' and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3), set a corresponding threshold to extract the points near each key point from the 3D calibration block point cloud, thereby determining the coordinates of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • The embodiment of the present invention also discloses a device for determining the positions of key points in calibration-block-based robot hand-eye calibration, including:
  • a three-dimensional calibration block posture adjustment module 10, used to adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two of P1, P2 and P3 is not parallel to any coordinate axis of the robot base coordinate system;
  • a robot posture adjustment module 20, used to adjust the posture of the robot so that the 3D vision system at the end of the robot can acquire a 3D calibration block point cloud containing the surfaces surrounding P1, P2 and P3;
  • a model point cloud conversion module 30, used to convert the CAD model of the 3D calibration block into a point cloud to obtain the 3D calibration block model point cloud;
  • a registration module 40, used to register the 3D calibration block model point cloud with the acquired 3D calibration block point cloud;
  • a key point coordinate determination module 50, used to take the key point positions on the 3D calibration block model point cloud as reference (points P1', P2' and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) and set a corresponding threshold to extract the points near each key point from the 3D calibration block point cloud, thereby determining the coordinates of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • In the embodiment, the steps of the method for determining key point positions in calibration-block-based robot hand-eye calibration are carried out by the corresponding parts of the device: step 1 by the 3D calibration block posture adjustment module 10, step 2 by the robot posture adjustment module 20, step 3 by the model point cloud conversion module 30, step 4 by the registration module 40, and step 5 by the key point coordinate determination module 50.
  • Determining the coordinates of the key points P1, P2 and P3 in the 3D vision system coordinate system and in the robot base coordinate system is the crux of solving the transformation matrix. The coordinates of P1, P2 and P3 in the robot base coordinate system are determined quickly with a probe mounted at the robot end: when the probe touches P1, P2 or P3, the coordinate value in the robot controller, after probe length compensation, is taken as that key point's coordinate in the robot base coordinate system. What remains, and is the key to solving the transformation matrix, is determining the coordinates of P1, P2 and P3 in the 3D vision system coordinate system.
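  • For completeness, one standard closed-form way to carry out that solve, once the three key points are known in both frames, is the SVD-based Umeyama method available in Eigen; the patent does not prescribe a particular algorithm, and the numeric values below are placeholders rather than measured data.

```cpp
#include <Eigen/Geometry>
#include <iostream>

int main() {
  // Columns are P1, P2, P3 (meters); placeholder values, not measured data.
  Eigen::Matrix3d cam;   // key points in the 3D vision system frame
  cam  << 0.10, 0.22, 0.31,
          0.05, 0.17, 0.09,
          0.40, 0.35, 0.28;
  Eigen::Matrix3d base;  // same points in the robot base frame (from probe)
  base << 1.10, 1.22, 1.31,
          0.55, 0.67, 0.59,
          0.20, 0.15, 0.08;

  // Last argument false: solve for rotation + translation, no scaling.
  Eigen::Matrix4d T = Eigen::umeyama(cam, base, false);
  std::cout << "base_T_cam =\n" << T << "\n";
  return 0;
}
```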
  • By means of the three-dimensional calibration block with a polyhedral structure and an irregular shape, the embodiment of the present invention uses the key points on the block to determine their coordinates in the 3D vision system coordinate system at low cost, conveniently and with high precision.
  • Hand-eye calibration of the robot three-dimensional dynamic vision system can thus be realized at low cost, conveniently and with high precision.
  • In step 1, the placement posture of the three-dimensional calibration block determines whether the acquired data is usable. Therefore, in the embodiment of the present invention, as shown in Figures 1 and 2, the posture of the block is adjusted so that the projection onto the XY plane of the line connecting any two of the key points P1, P2 and P3 is not parallel to any coordinate axis of the robot base coordinate system; this allows the robot end to capture point cloud data of several surfaces around the key points simultaneously in a single detection posture.
  • In step 2, the detection posture of the robot also needs to be adjusted so that the three-dimensional vision system, such as a monocular camera, binocular camera, multi-view camera or three-dimensional scanner, can obtain usable spatial position data.
  • After adjustment, the three-dimensional vision system installed at the robot end can, in a single end detection posture, simultaneously capture the point cloud of the surfaces surrounding the target position points P1, P2 and P3 on the three-dimensional calibration block shown in Figure 1.
  • Specifically, as shown in FIG. 5, step 3 includes the following sub-steps:
  • Step 301: Obtain the CAD model of the 3D calibration block and convert it into a PLY file;
  • Step 302: Using the data format conversion function in the PCL library, convert the PLY file into the point cloud data format to obtain the 3D calibration block model point cloud.
  • Correspondingly, as shown in FIG. 6, the model point cloud conversion module 30 in the device includes:
  • a PLY file conversion unit 31, used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY file;
  • a model point cloud acquisition unit 32, used to convert the PLY file into the point cloud data format using the data format conversion function in the PCL library, to obtain the 3D calibration block model point cloud.
  • In the embodiment, the sub-steps of step 3 are carried out by the units of the model point cloud conversion module 30: step 301 by the PLY file conversion unit 31 and step 302 by the model point cloud acquisition unit 32.
  • Specifically, as shown in FIG. 7, step 4 includes the following sub-steps:
  • Step 401: Sample the 3D calibration block point cloud and the 3D calibration block model point cloud separately;
  • Step 402: Compute the feature point descriptors of the two clouds to obtain their respective fast point feature histograms;
  • Step 403: Using the fast point feature histograms of the two clouds, coarsely register them with the Sample Consensus Initial Alignment algorithm;
  • Step 404: Refine the registration with the Iterative Closest Point algorithm.
  • Correspondingly, as shown in FIG. 8, the registration module 40 in the device includes:
  • a sampling unit 41, used to sample the 3D calibration block point cloud and the 3D calibration block model point cloud separately;
  • a fast point feature histogram unit 42, used to compute the feature point descriptors of the two clouds to obtain their respective fast point feature histograms;
  • a coarse registration unit 43, used to coarsely register the clouds with the Sample Consensus Initial Alignment algorithm, based on the fast point feature histograms of the two clouds;
  • a precise registration unit 44, used to refine the registration with the Iterative Closest Point algorithm.
  • In the embodiment, the sub-steps of step 4 are carried out by the units of the registration module 40: step 401 by the sampling unit 41, step 402 by the fast point feature histogram unit 42, step 403 by the coarse registration unit 43, and step 404 by the precise registration unit 44.
  • In step 401, the 3D calibration block point cloud and the 3D calibration block model point cloud can be downsampled with a voxel grid filter to improve the registration speed of the point cloud pair.
  • In step 402, registration of the point cloud pair depends on feature description. The present invention therefore computes the feature point descriptors of the 3D calibration block point cloud and the 3D calibration block model point cloud separately, obtaining their respective Fast Point Feature Histograms (FPFH).
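  • For reference, the standard FPFH construction from the point cloud literature (Rusu et al.), which the patent uses but does not restate: each point's simplified point feature histogram (SPFH) of angular features is re-weighted over its $k$ neighbors,

$$ \mathrm{FPFH}(p_q) \;=\; \mathrm{SPFH}(p_q) \;+\; \frac{1}{k}\sum_{i=1}^{k}\frac{1}{\omega_i}\,\mathrm{SPFH}(p_i), $$

where $\omega_i$ is the distance between the query point $p_q$ and its neighbor $p_i$.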
  • In step 403, a coarse registration of the point cloud pair is usually required before precise registration. The present invention therefore uses the Sample Consensus Initial Alignment (SAC-IA) algorithm to achieve coarse registration of the point cloud pair.
  • In step 404, after coarse registration of the point cloud pair, the Iterative Closest Point (ICP) algorithm is used to achieve precise registration.
  • Further, in step 5, a corresponding threshold is set, and a nearest-neighbor search is used to find, in the 3D calibration block point cloud, the point closest to each key point of the 3D calibration block model point cloud; the coordinates of that point are taken as the coordinates of the key point on the 3D calibration block in the 3D vision system coordinate system.
  • Correspondingly, the key point coordinate determination module 50 in the device is used to set the corresponding threshold and search, by nearest-neighbor search, for the point in the 3D calibration block point cloud closest to each key point of the 3D calibration block model point cloud, taking that point's coordinates as the key point's coordinates in the 3D vision system coordinate system.
  • In the embodiment, the key point positions on the 3D calibration block model point cloud (points P1', P2' and P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) serve as the reference: a nearest-neighbor search finds, in the 3D calibration block point cloud, the point closest to each model key point (P1', P2', P3'), and the coordinates of that point are the required key point coordinates, i.e., the coordinates of P1, P2 and P3 in the 3D vision system coordinate system.
  • In summary, the present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape whose key points do not coincide in the height direction to determine the key point coordinates in the robot vision system at low cost, conveniently and with high precision. Specifically, the placement posture of the block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the robot's posture is then adjusted so that the 3D vision system can acquire the point cloud of the surfaces surrounding the key points; finally, the 3D calibration block model point cloud is registered with the captured 3D calibration block point cloud, and a corresponding threshold is set to determine the points near each key point, yielding the key point coordinates in the 3D vision system coordinate system.
  • From the key point coordinates in the robot base coordinate system and in the 3D vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot dynamic 3D vision system can be solved quickly, achieving low-cost, convenient, high-precision hand-eye calibration of the robot 3D dynamic vision system.
  • The following disclosure provides many different embodiments or examples for realizing different structures of the embodiments of the present invention.
  • The components and arrangements of specific examples are described below. They are, of course, only examples and are not intended to limit the present invention.
  • The embodiments of the present invention may repeat reference numbers and/or reference letters in different examples. Such repetition is for simplicity and clarity, and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed.
  • The embodiments of the present invention provide examples of various specific processes and materials, but those of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transmit a program for use by an instruction execution system, device, or device or in combination with these instruction execution systems, devices, or devices.
  • computer readable media include the following: electrical connections (electronic devices) with one or more wiring, portable computer disk cases (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable and editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • the computer-readable medium may even be paper or other suitable medium on which the program can be printed, because it can be used for example by optically scanning the paper or other medium, and then editing, interpreting, or other suitable media if necessary. The program is processed in a manner to obtain the program electronically, and then stored in the computer memory.
  • Each part of the embodiments of the present invention can be implemented by hardware, software, firmware, or a combination thereof.
  • In the above embodiments, multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • If implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following technologies known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
  • A person of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments can be completed by a program instructing the relevant hardware.
  • The program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
  • The functional units in the various embodiments of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module.
  • The above integrated module can be implemented in the form of hardware or as a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method and device for determining key point positions in robot hand-eye calibration based on a calibration block. The placement posture of a three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two key points on the block is not parallel to any coordinate axis of the robot base coordinate system; the robot's posture is adjusted so that the three-dimensional vision system at the robot end can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points; the CAD model of the calibration block is then converted into a three-dimensional calibration block model point cloud; the model point cloud is registered with the measured calibration block point cloud; finally, a threshold is set to extract the points near each key point from the measured point cloud, thereby determining the key point's coordinates in the three-dimensional vision system coordinate system. The method and device can extract key points at low cost, conveniently, and with high precision, and thus enable low-cost, convenient, high-precision hand-eye calibration of a robot vision system.

Description

Method and device for determining key point positions in robot hand-eye calibration based on a calibration block
Technical Field
The present invention relates to the field of hand-eye calibration for inspection systems in automation, covering the calibration of vision guidance systems in robotic machining systems, the calibration of vision systems that detect the position and related parameters of parts to be assembled in robotic assembly systems, the calibration of vision inspection systems in machining centers that convert target position information after defects are identified from sensor data, and vision-guided operations in other automated machining (operation) processes. It specifically relates to a method and device for determining key point positions in robot hand-eye calibration based on a calibration block.
Background Art
Automated equipment is a key instrument of a strong manufacturing nation, and must therefore evolve toward higher speed and greater intelligence. An important means to this end is to equip the machine with "eyes" and a "brain" that can work with those eyes. The eye can be a monocular camera, a binocular camera, a multi-view camera, a three-dimensional scanner, or an RGB-D sensor. Machining information obtained by analyzing vision sensor data is defined in the vision sensor's coordinate system; before the robot can execute on it, it must be transformed into the robot base coordinate system. Calibration of the hand-eye relationship of the robot vision guidance system is therefore very important.
At present there are many hand-eye calibration methods for eye-in-hand vision systems, but for a robot dynamic three-dimensional vision system the existing methods either calibrate with low precision or at high cost (requiring expensive instruments such as laser trackers), and they do not lend themselves to rapid calibration. A low-cost, convenient, high-precision hand-eye calibration method is therefore urgently needed; before hand-eye calibration can be performed, a method for extracting the key points used in robot vision system hand-eye calibration must be provided, so that the calibration can be carried out quickly.
Summary of the Invention
In view of the deficiencies of the prior art, the present invention proposes a method and device for determining key point positions in robot hand-eye calibration based on a calibration block, which can extract key points at low cost, conveniently and with high precision, thereby enabling low-cost, convenient, high-precision hand-eye calibration of the robot vision system.
The technical solution of the present invention is realized as follows:
A method for determining key point positions in robot hand-eye calibration based on a calibration block, wherein the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block that do not coincide in the height direction; the key point extraction method includes the following steps:
Step 1: adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
Step 2: adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points;
Step 3: convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
Step 4: register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
Step 5: taking the key point positions on the three-dimensional calibration block model point cloud as reference, set a corresponding threshold to extract the points near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinates of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Further, step 3 includes the following sub-steps:
Step 301: obtain the CAD model of the three-dimensional calibration block and convert it into a PLY file;
Step 302: using the data format conversion function in the PCL library, convert the PLY file into the point cloud data format to obtain the three-dimensional calibration block model point cloud.
Further, step 4 includes the following sub-steps:
Step 401: sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud separately;
Step 402: compute the feature point descriptors of the two point clouds to obtain their respective fast point feature histograms;
Step 403: using the fast point feature histograms of the two point clouds, coarsely register them with the sample consensus initial alignment algorithm;
Step 404: refine the registration with the iterative closest point algorithm.
Further, in step 5, a corresponding threshold is set, and a nearest-neighbor search is used to find, in the three-dimensional calibration block point cloud, the point closest to each key point of the three-dimensional calibration block model point cloud; the coordinates of that point are taken as the coordinates of the key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
A device for determining key point positions in robot hand-eye calibration based on a calibration block, wherein the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block that do not coincide in the height direction; the key point extraction device includes:
a three-dimensional calibration block posture adjustment module, used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
a robot posture adjustment module, used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points;
a model point cloud conversion module, used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
a registration module, used to register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
a key point coordinate determination module, used to take the key point positions on the three-dimensional calibration block model point cloud as reference and set a corresponding threshold to extract the points near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinates of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Further, the model point cloud conversion module includes:
a PLY file conversion unit, used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY file;
a model point cloud acquisition unit, used to convert the PLY file into the point cloud data format using the data format conversion function in the PCL library, to obtain the three-dimensional calibration block model point cloud.
Further, the registration module includes a sampling unit, a fast point feature histogram unit, a coarse registration unit and a precise registration unit, wherein:
the sampling unit is used to sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud separately;
the fast point feature histogram unit is used to compute the feature point descriptors of the two point clouds to obtain their respective fast point feature histograms;
the coarse registration unit is used to coarsely register the point clouds with the sample consensus initial alignment algorithm, based on the fast point feature histograms of the two point clouds;
the precise registration unit is used to refine the registration with the iterative closest point algorithm.
Further, the key point coordinate determination module is used to set a corresponding threshold and, by nearest-neighbor search, find in the three-dimensional calibration block point cloud the point closest to each key point of the three-dimensional calibration block model point cloud, taking the coordinates of that point as the coordinates of the key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Compared with the prior art, the present invention has the following advantages: by means of a three-dimensional calibration block with a polyhedral structure and an irregular shape, whose key points do not coincide in the height direction, the coordinates of the key points in the robot vision system can be determined at low cost, conveniently and with high precision. Specifically, the placement posture of the block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the robot's posture is then adjusted so that the three-dimensional vision system can acquire the point cloud of the surfaces surrounding the key points; finally, the three-dimensional calibration block model point cloud is registered with the captured three-dimensional calibration block point cloud, and a corresponding threshold is set to determine the points near each key point, yielding the coordinates of the key points in the three-dimensional vision system coordinate system. From the key point coordinates in the robot base coordinate system and in the three-dimensional vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot dynamic three-dimensional vision system can be solved quickly, achieving low-cost, convenient, high-precision hand-eye calibration of the robot three-dimensional dynamic vision system.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; from them, a person of ordinary skill in the art can obtain other drawings without creative effort.
Figure 1 is a schematic diagram of the structure of the calibration block used in the present invention;
Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention;
Figure 3 is a flowchart of an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention;
Figure 4 is a structural block diagram of an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention;
Figure 5 is a flowchart of step 3 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention;
Figure 6 is a structural block diagram of the model point cloud conversion module in an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention;
Figure 7 is a flowchart of step 4 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention;
Figure 8 is a structural block diagram of the model point cloud registration module in an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Figure 1 shows the calibration block used in the present invention, where the calibration block is a three-dimensional calibration block and the key points are points P1, P2 and P3 in Figure 1. Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention: after the placement posture of the three-dimensional calibration block is adjusted, the detection posture of the robot is adjusted to determine the coordinates of P1, P2 and P3 in the three-dimensional vision system coordinate system. Once the coordinates of P1, P2 and P3 in the robot base coordinate system have also been determined, the transformation matrix of the hand-eye relationship of the robot dynamic three-dimensional vision system can be solved from the coordinates of P1, P2 and P3 in the two coordinate systems, realizing hand-eye calibration of the robot three-dimensional dynamic vision system. A detailed description is given below with reference to Figures 1 to 4.
The calibration block used in the embodiment of the present invention is a three-dimensional calibration block with a special shape: as shown in Figure 1, the block has a polyhedral structure and an irregular shape, and the key points are points P1, P2 and P3 on the block, which do not coincide in the height direction and are distributed roughly evenly along it. The embodiment of the present invention uses a three-dimensional calibration block of this special structure to determine the coordinates of the key points in the three-dimensional vision system coordinate system.
Referring to Figure 3, the embodiment of the present invention discloses a method for determining key point positions in robot hand-eye calibration based on a calibration block, including the following steps:
Step 1: as shown in Figure 2, adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of P1, P2 and P3 on the block is not parallel to any coordinate axis of the robot base coordinate system;
Step 2: as shown in Figure 2, adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding P1, P2 and P3;
Step 3: convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
Step 4: register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
Step 5: taking the key point positions on the three-dimensional calibration block model point cloud as reference (points P1', P2' and P3', where P1' corresponds to P1, P2' to P2 and P3' to P3), set a corresponding threshold to extract the points near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinates of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Referring to Figure 4, the embodiment of the present invention also discloses a device for determining key point positions in robot hand-eye calibration based on a calibration block, including:
a three-dimensional calibration block posture adjustment module 10, used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of P1, P2 and P3 on the block is not parallel to any coordinate axis of the robot base coordinate system;
a robot posture adjustment module 20, used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding P1, P2 and P3;
a model point cloud conversion module 30, used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
a registration module 40, used to register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
a key point coordinate determination module 50, used to take the key point positions on the three-dimensional calibration block model point cloud as reference (points P1', P2' and P3', where P1' corresponds to P1, P2' to P2 and P3' to P3) and set a corresponding threshold to extract the points near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinates of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
In the embodiment of the present invention, the method is executed by the device: step 1 is carried out by the three-dimensional calibration block posture adjustment module 10, step 2 by the robot posture adjustment module 20, step 3 by the model point cloud conversion module 30, step 4 by the registration module 40, and step 5 by the key point coordinate determination module 50.
In the present invention, determining the coordinates of the key points P1, P2 and P3 in the three-dimensional vision system coordinate system and in the robot base coordinate system is the key to solving the transformation matrix. The coordinates of P1, P2 and P3 in the robot base coordinate system are determined quickly with a probe mounted at the robot end: when the probe touches P1, P2 or P3, the coordinate value in the robot controller, after probe length compensation, is the key point's coordinate in the robot base coordinate system. Determining the coordinates of P1, P2 and P3 in the three-dimensional vision system coordinate system is therefore the key to solving the transformation matrix. By means of the polyhedral, irregularly shaped three-dimensional calibration block and the key points on it, the embodiment of the present invention determines these coordinates at low cost, conveniently and with high precision, and thereby realizes hand-eye calibration of the robot three-dimensional dynamic vision system at low cost, conveniently and with high precision.
In step 1, the placement posture of the three-dimensional calibration block determines whether the acquired data is usable. Therefore, in the embodiment of the present invention, as shown in Figures 1 and 2, the posture of the block is adjusted so that the projection onto the XY plane of the line connecting any two of the key points P1, P2 and P3 is not parallel to any coordinate axis of the robot base coordinate system, so that the robot end can capture point cloud data of several surfaces around the key points simultaneously in a single detection posture.
In step 2, the detection posture of the robot likewise needs to be adjusted so that the three-dimensional vision system, such as a monocular camera, binocular camera, multi-view camera or three-dimensional scanner, can obtain usable spatial position data. As shown in Figure 2, after adjustment, the three-dimensional vision system installed at the robot end can, in a single end detection posture, simultaneously capture the point cloud of the surfaces surrounding the target position points P1, P2 and P3 on the three-dimensional calibration block shown in Figure 1.
Specifically, as shown in Figure 5, step 3 includes the following sub-steps:
Step 301: obtain the CAD model of the three-dimensional calibration block and convert it into a PLY file;
Step 302: using the data format conversion function in the PCL library, convert the PLY file into the point cloud data format to obtain the three-dimensional calibration block model point cloud.
Correspondingly, as shown in Figure 6, the model point cloud conversion module 30 of the device includes:
a PLY file conversion unit 31, used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY file;
a model point cloud acquisition unit 32, used to convert the PLY file into the point cloud data format using the data format conversion function in the PCL library, to obtain the three-dimensional calibration block model point cloud.
In the embodiment of the present invention, the sub-steps of step 3 are carried out by the units of the model point cloud conversion module 30: step 301 by the PLY file conversion unit 31 and step 302 by the model point cloud acquisition unit 32.
Specifically, as shown in Figure 7, step 4 includes the following sub-steps:
Step 401: sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud separately;
Step 402: compute the feature point descriptors of the two point clouds to obtain their respective fast point feature histograms;
Step 403: using the fast point feature histograms of the two point clouds, coarsely register them with the sample consensus initial alignment algorithm;
Step 404: refine the registration with the iterative closest point algorithm.
Correspondingly, as shown in Figure 8, the registration module 40 of the device includes:
a sampling unit 41, used to sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud separately;
a fast point feature histogram unit 42, used to compute the feature point descriptors of the two point clouds to obtain their respective fast point feature histograms;
a coarse registration unit 43, used to coarsely register the point clouds with the sample consensus initial alignment algorithm based on the fast point feature histograms of the two point clouds;
a precise registration unit 44, used to refine the registration with the iterative closest point algorithm.
In the embodiment of the present invention, the sub-steps of step 4 are carried out by the units of the registration module 40: step 401 by the sampling unit 41, step 402 by the fast point feature histogram unit 42, step 403 by the coarse registration unit 43, and step 404 by the precise registration unit 44.
In step 401, the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud can be downsampled with a voxel grid filter to improve the registration speed of the point cloud pair.
In step 402, registration of the point cloud pair depends on feature description, so the present invention computes the feature point descriptors of the two point clouds separately, obtaining their respective Fast Point Feature Histograms (FPFH).
In step 403, a coarse registration of the point cloud pair is usually required before precise registration, so the present invention uses the Sample Consensus Initial Alignment (SAC-IA) algorithm to achieve coarse registration of the point cloud pair.
In step 404, after coarse registration of the point cloud pair, the Iterative Closest Point (ICP) algorithm is used to achieve precise registration of the point cloud pair.
Further, in step 5, a corresponding threshold is set, and a nearest-neighbor search is used to find, in the three-dimensional calibration block point cloud, the point closest to each key point of the three-dimensional calibration block model point cloud; the coordinates of that point are taken as the coordinates of the key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
Correspondingly, the key point coordinate determination module 50 of the device is used to set the corresponding threshold and search, by nearest-neighbor search, for the point in the three-dimensional calibration block point cloud closest to each key point of the three-dimensional calibration block model point cloud, taking the coordinates of that point as the coordinates of the key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
In the embodiment of the present invention, the key point positions on the three-dimensional calibration block model point cloud (points P1', P2' and P3', where P1' corresponds to P1, P2' to P2 and P3' to P3) serve as the reference: a nearest-neighbor search finds, in the three-dimensional calibration block point cloud, the point closest to each model key point (P1', P2', P3'), and the coordinates of that point are the required key point coordinates, i.e., the coordinates of P1, P2 and P3 in the three-dimensional vision system coordinate system.
In summary, by means of a three-dimensional calibration block with a polyhedral structure and an irregular shape, whose key points do not coincide in the height direction, the present invention determines the coordinates of the key points in the robot vision system at low cost, conveniently and with high precision. Specifically, the placement posture of the block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the robot's posture is then adjusted so that the three-dimensional vision system can acquire the point cloud of the surfaces surrounding the key points; finally, the three-dimensional calibration block model point cloud is registered with the captured three-dimensional calibration block point cloud, and a corresponding threshold is set to determine the points near each key point, yielding the coordinates of the key points in the three-dimensional vision system coordinate system. From the key point coordinates in the robot base coordinate system and in the three-dimensional vision system coordinate system, the transformation matrix of the hand-eye relationship of the robot dynamic three-dimensional vision system can be solved quickly, achieving low-cost, convenient, high-precision hand-eye calibration of the robot three-dimensional dynamic vision system.
In the description of the embodiments of the present invention, "multiple" means two or more, unless expressly and specifically defined otherwise.
The disclosure below provides many different embodiments or examples for realizing different structures of the embodiments of the present invention. To simplify the disclosure, the components and arrangements of specific examples are described below. They are, of course, only examples and are not intended to limit the present invention. Furthermore, the embodiments of the present invention may repeat reference numbers and/or reference letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. In addition, the embodiments of the present invention provide examples of various specific processes and materials, but a person of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processing module, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it as necessary, and then stored in computer memory.
It should be understood that each part of the embodiments of the present invention can be implemented by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following technologies known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
A person of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module can be implemented in the form of hardware or as a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; a person of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (8)

  1. A method for determining key point positions in robot hand-eye calibration based on a calibration block, characterized in that the calibration block is a three-dimensional calibration block, the three-dimensional calibration block has a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block which do not coincide in the height direction; the key point extraction method comprises the following steps:
    Step 1: adjusting the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
    Step 2: adjusting the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points;
    Step 3: converting the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
    Step 4: registering the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
    Step 5: taking the key point positions on the three-dimensional calibration block model point cloud as reference, setting a corresponding threshold to extract the points near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinates of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
  2. The method for determining key point positions in robot hand-eye calibration based on a calibration block according to claim 1, characterized in that step 3 comprises the following sub-steps:
    Step 301: obtaining the CAD model of the three-dimensional calibration block and converting it into a PLY file;
    Step 302: using the data format conversion function in the PCL library, converting the PLY file into the point cloud data format to obtain the three-dimensional calibration block model point cloud.
  3. The method for determining key point positions in robot hand-eye calibration based on a calibration block according to claim 1, characterized in that step 4 comprises the following sub-steps:
    Step 401: sampling the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud separately;
    Step 402: computing the feature point descriptors of the two point clouds to obtain their respective fast point feature histograms;
    Step 403: coarsely registering the point clouds with the sample consensus initial alignment algorithm, according to the fast point feature histograms of the two point clouds;
    Step 404: refining the registration with the iterative closest point algorithm.
  4. The method for determining key point positions in robot hand-eye calibration based on a calibration block according to claim 1, characterized in that in step 5 a corresponding threshold is set, a nearest-neighbor search is used to find, in the three-dimensional calibration block point cloud, the point closest to each key point of the three-dimensional calibration block model point cloud, and the coordinates of that point are determined to be the coordinates of the key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
  5. A device for determining key point positions in robot hand-eye calibration based on a calibration block, characterized in that the calibration block is a three-dimensional calibration block, the three-dimensional calibration block has a polyhedral structure and an irregular shape, and the key points are no fewer than three preset points on the three-dimensional calibration block which do not coincide in the height direction; the key point extraction device comprises:
    a three-dimensional calibration block posture adjustment module, used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of the key points on the block is not parallel to any coordinate axis of the robot base coordinate system;
    a robot posture adjustment module, used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points;
    a model point cloud conversion module, used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the three-dimensional calibration block model point cloud;
    a registration module, used to register the three-dimensional calibration block model point cloud with the acquired three-dimensional calibration block point cloud;
    a key point coordinate determination module, used to take the key point positions on the three-dimensional calibration block model point cloud as reference and set a corresponding threshold to extract the points near each key point from the three-dimensional calibration block point cloud, thereby determining the coordinates of the key points on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
  6. The device for determining key point positions in robot hand-eye calibration based on a calibration block according to claim 5, characterized in that the model point cloud conversion module comprises:
    a PLY file conversion unit, used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY file;
    a model point cloud acquisition unit, used to convert the PLY file into the point cloud data format using the data format conversion function in the PCL library, to obtain the three-dimensional calibration block model point cloud.
  7. The device for determining key point positions in robot hand-eye calibration based on a calibration block according to claim 5, characterized in that the registration module comprises a sampling unit, a fast point feature histogram unit, a coarse registration unit and a precise registration unit, wherein:
    the sampling unit is used to sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud separately;
    the fast point feature histogram unit is used to compute the feature point descriptors of the two point clouds to obtain their respective fast point feature histograms;
    the coarse registration unit is used to coarsely register the point clouds with the sample consensus initial alignment algorithm, according to the fast point feature histograms of the two point clouds;
    the precise registration unit is used to refine the registration with the iterative closest point algorithm.
  8. The device for determining key point positions in robot hand-eye calibration based on a calibration block according to claim 5, characterized in that the key point coordinate determination module is used to set a corresponding threshold and, by nearest-neighbor search, find in the three-dimensional calibration block point cloud the point closest to each key point of the three-dimensional calibration block model point cloud, determining the coordinates of that point to be the coordinates of the key point on the three-dimensional calibration block in the three-dimensional vision system coordinate system.
PCT/CN2020/120103 2019-11-26 2020-10-10 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block WO2021103824A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911175295.5 2019-11-26
CN201911175295.5A CN110930442B (zh) 2019-11-26 2019-11-26 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Publications (1)

Publication Number Publication Date
WO2021103824A1 true WO2021103824A1 (zh) 2021-06-03

Family

ID=69851142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120103 WO2021103824A1 (zh) 2019-11-26 2020-10-10 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Country Status (2)

Country Link
CN (1) CN110930442B (zh)
WO (1) WO2021103824A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930442B (zh) * 2019-11-26 2020-07-31 广东技术师范大学 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block
CN111797808B (zh) * 2020-07-17 2023-07-21 广东技术师范大学 A reverse-engineering method and system based on video feature point tracking
CN112790786A (zh) * 2020-12-30 2021-05-14 无锡祥生医疗科技股份有限公司 Point cloud data registration method and device, ultrasound apparatus and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908230B (zh) * 2010-07-23 2011-11-23 东南大学 A three-dimensional reconstruction method based on regional depth edge detection and binocular stereo matching
CN104142157B (zh) * 2013-05-06 2017-08-25 北京四维图新科技股份有限公司 A calibration method, apparatus and device
US10076842B2 (en) * 2016-09-28 2018-09-18 Cognex Corporation Simultaneous kinematic and hand-eye calibration
CN108828606B (zh) * 2018-03-22 2019-04-30 中国科学院西安光学精密机械研究所 A joint measurement method based on lidar and a binocular visible-light camera
CN108648272A (zh) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional real-scene acquisition and modeling method, readable storage medium and device
CN108627178B (zh) * 2018-05-10 2020-10-13 广东拓斯达科技股份有限公司 Robot hand-eye calibration method and system
CN108994844B (zh) * 2018-09-26 2021-09-03 广东工业大学 A method and device for calibrating the hand-eye relationship of a grinding manipulator
CN110355755B (zh) * 2018-12-15 2023-05-16 深圳铭杰医疗科技有限公司 Robot hand-eye system calibration method, apparatus, device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680124A (zh) * 2016-08-01 2018-02-09 康耐视公司 System and method for improving three-dimensional pose scoring and eliminating outlier points in three-dimensional image data
WO2018145025A1 (en) * 2017-02-03 2018-08-09 Abb Schweiz Ag Calibration article for a 3d vision robotic system
CN109102547A (zh) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot grasping pose estimation method based on a deep learning model for object recognition
CN109702738A (zh) * 2018-11-06 2019-05-03 深圳大学 A manipulator hand-eye calibration method and device based on three-dimensional object recognition
CN110335296A (zh) * 2019-06-21 2019-10-15 华中科技大学 A point cloud registration method based on hand-eye calibration
CN110930442A (zh) * 2019-11-26 2020-03-27 广东技术师范大学 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114043087A (zh) * 2021-12-03 2022-02-15 厦门大学 A weld-seam tracking posture planning method for three-dimensional trajectory laser welding
CN117140535A (zh) * 2023-10-27 2023-12-01 南湖实验室 A robot kinematic parameter calibration method and system based on single-stylus measurement
CN117140535B (zh) * 2023-10-27 2024-02-02 南湖实验室 A robot kinematic parameter calibration method and system based on single-stylus measurement

Also Published As

Publication number Publication date
CN110930442A (zh) 2020-03-27
CN110930442B (zh) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2021103824A1 (zh) Method and device for determining key point positions in robot hand-eye calibration based on a calibration block
JP6842520B2 (ja) Object detection method, apparatus, device, storage medium and vehicle
KR102292028B1 (ko) Gesture recognition method, apparatus, electronic device and storage medium
US10496762B2 (en) Model generating device, position and orientation calculating device, and handling robot device
Singh et al. Bigbird: A large-scale 3d database of object instances
CN110842901A (zh) Robot hand-eye calibration method and device based on a novel three-dimensional calibration block
CN109242903A (zh) Three-dimensional data generation method, apparatus, device and storage medium
CN110555889A (zh) A depth camera hand-eye calibration method based on CALTag and point cloud information
JP2018523865A (ja) Information processing method, device, and terminal
CN113146073B (zh) Vision-based laser cutting method and device, electronic apparatus, and storage medium
US20190340783A1 (en) Autonomous Vehicle Based Position Detection Method and Apparatus, Device and Medium
CN111028205B (зh) An eye pupil localization method and device based on binocular distance measurement
US11625842B2 (en) Image processing apparatus and image processing method
EP3879494A2 (en) Method, apparatus, electronic device, computer readable medium and program for calibrating external parameter of camera
US10748027B2 (en) Construction of an efficient representation for a three-dimensional (3D) compound object from raw video data
US20200051278A1 (en) Information processing apparatus, information processing method, robot system, and non-transitory computer-readable storage medium
US11875524B2 (en) Unmanned aerial vehicle platform based vision measurement method for static rigid object
WO2022247137A1 (zh) Robot and method and device for recognizing its charging pile
KR102618285B1 (ко) Camera pose determination method and system
CN111598172A (зh) Fast detection method for dynamic target grasping poses based on heterogeneous deep network fusion
CN107850425A (зh) Method for measuring artifacts
CN115409808A (зh) Weld seam recognition method and device, welding robot and storage medium
CN113172636B (зh) An automatic hand-eye calibration method, device and storage medium
CN107990825B (зh) High-precision position measurement device and method based on prior data correction
JP7431714B2 (ja) Gaze analysis device, gaze analysis method and gaze analysis system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20893734

Country of ref document: EP

Kind code of ref document: A1