WO2021103824A1 - Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Info

Publication number
WO2021103824A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration block
point cloud
point
dimensional
calibration
Prior art date
Application number
PCT/CN2020/120103
Other languages
English (en)
Chinese (zh)
Inventor
郑振兴
刁世普
Original Assignee
广东技术师范大学
Priority date
Filing date
Publication date
Application filed by 广东技术师范大学
Publication of WO2021103824A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • The invention relates to the calibration of vision guidance systems in robotic automatic machining, the calibration of vision systems that locate parts to be assembled (and their related parameters) in robotic automatic assembly, and the conversion of target position information after a defect is identified from sensor data in a machining center.
  • It belongs to the technical field of hand-eye calibration for visual inspection system calibration and other automated processing (operation) tasks such as visual guidance, and specifically relates to a method and device for determining key point positions in robot hand-eye calibration based on a calibration block.
  • Automated equipment is a cornerstone of a strong manufacturing nation, and must therefore move toward higher speed and greater intelligence.
  • An important approach is to equip the machine with "eyes" and a "brain" that can cooperate with those eyes.
  • This eye can be a monocular camera, a binocular camera, a multi-eye camera, a 3D scanner, or an RGB-D sensor.
  • The present invention proposes a method and device for determining the position of key points in robot hand-eye calibration based on a calibration block, which can extract key points at low cost, conveniently, and with high precision, thereby enabling low-cost, convenient, and high-precision hand-eye calibration of the robot vision system.
  • the calibration block is a three-dimensional calibration block
  • the three-dimensional calibration block has a polyhedral structure and an irregular shape
  • the key points are no fewer than three preset points on the three-dimensional calibration block
  • the preset points do not overlap in the height direction
  • the key point extraction method includes the following steps:
  • Step 1 Adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two key points on the 3D calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • Step 2 Adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can obtain the three-dimensional calibration block point cloud containing the peripheral surface of the key point on the three-dimensional calibration block;
  • Step 3 Convert the CAD model of the 3D calibration block into a point cloud to obtain the point cloud of the 3D calibration block model
  • Step 4 Register the 3D calibration block model point cloud with the obtained 3D calibration block point cloud
  • Step 5 Taking the key point positions on the 3D calibration block model point cloud as the reference, set a corresponding threshold to obtain the point cloud near the key points from the 3D calibration block point cloud, thereby determining the coordinate values of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • step 3 includes the following sub-steps:
  • Step 301 Obtain the CAD model of the 3D calibration block and convert it into a PLY format file
  • Step 302 According to the PLY format file, use the data format conversion function in the PCL library to convert it into a point cloud data format to obtain a three-dimensional calibration block model point cloud.
  • step 4 includes the following sub-steps:
  • Step 401 Sampling the 3D calibration block point cloud and the 3D calibration block model point cloud respectively;
  • Step 402 Calculate the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain respective fast point feature histograms;
  • Step 403 According to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud, coarsely register the point cloud pair using the sample consensus initial alignment algorithm;
  • step 404 the point cloud is accurately registered through the iterative closest point algorithm.
  • step 5 sets a corresponding threshold and, through the nearest neighbor search method, searches the 3D calibration block point cloud for the point closest to each key point on the 3D calibration block model point cloud; the coordinate value of that point is determined to be the coordinate value of the key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • a device for determining the position of key points in robot hand-eye calibration based on a calibration block, wherein the calibration block is a three-dimensional calibration block with a polyhedral structure and an irregular shape, the key points are no fewer than three preset points on the three-dimensional calibration block, and the preset points do not overlap in the height direction; the key point extraction device includes
  • the 3D calibration block posture adjustment module, used to adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two key points on the 3D calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • the robot posture adjustment module is used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can obtain the three-dimensional calibration block point cloud containing the peripheral surface of the key point on the three-dimensional calibration block;
  • the model point cloud conversion module is used to convert the CAD model of the 3D calibration block into a point cloud to obtain the 3D calibration block model point cloud;
  • the registration module is used to register the 3D calibration block model point cloud with the obtained 3D calibration block point cloud;
  • the key point coordinate determination module, used to set a corresponding threshold, taking the key point positions on the 3D calibration block model point cloud as the reference, to obtain the point cloud near the key points from the 3D calibration block point cloud, thereby determining the coordinate values of the key points on the 3D calibration block in the 3D vision system coordinate system.
  • model point cloud conversion module includes
  • the PLY format file conversion unit is used to obtain the CAD model of the 3D calibration block and convert it into a PLY format file;
  • the model point cloud acquisition unit is used to convert the PLY format file into a point cloud data format by using the data format conversion function in the PCL library to acquire a three-dimensional calibration block model point cloud.
  • the registration module includes a sampling unit, a fast point feature histogram unit, a coarse registration unit, and a precise registration unit, wherein
  • the sampling unit is used to sample the 3D calibration block point cloud and the 3D calibration block model point cloud respectively;
  • the fast point feature histogram unit is used to calculate the feature point descriptors of the 3D calibration block point cloud and the 3D calibration block model point cloud respectively to obtain their respective fast point feature histograms;
  • the coarse registration unit is used to coarsely register the point cloud pair using the sample consensus initial alignment algorithm, according to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud;
  • the precise registration unit is used to accurately register the point cloud through the iterative closest point algorithm.
  • the key point coordinate determination module is used to set a corresponding threshold and, through the nearest neighbor search method, search the 3D calibration block point cloud for the point closest to each key point on the 3D calibration block model point cloud, the coordinate value of that point being determined as the coordinate value of the key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • The present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape, whose multiple key points do not overlap in the height direction, so that the coordinate values of the key points in the robot vision system can be determined at low cost, conveniently, and with high precision. Specifically, the placement posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the robot posture is then adjusted so that the 3D vision system can obtain a point cloud of the peripheral surfaces of the key points; finally, the 3D calibration block model point cloud is registered with the collected 3D calibration block point cloud, and a corresponding threshold is set to determine the point cloud near each key point, thereby obtaining the coordinate values of the key points in the coordinate system of the three-dimensional vision system.
  • On this basis, the transformation matrix of the hand-eye relationship of the robot dynamic three-dimensional vision system can be quickly solved, achieving low-cost, convenient, and high-precision hand-eye calibration of the robot 3D dynamic vision system.
  • Figure 1 is a schematic diagram of the structure of a calibration block used in the present invention.
  • Figure 2 is a schematic diagram of the robot detection attitude adjustment of the present invention
  • FIG. 3 is a flow chart of an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 4 is a structural block diagram of an embodiment of the device for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 5 is a flowchart of step 3 in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 6 is a structural block diagram of a model point cloud conversion module in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system according to the present invention
  • FIG. 7 is a flowchart of step 4 in an embodiment of the method for extracting key points in the calibration of the robot dynamic 3D vision system according to the present invention
  • FIG. 8 is a structural block diagram of a model point cloud registration module in an embodiment of the method for extracting key points in the calibration of the robot dynamic three-dimensional vision system of the present invention.
  • Figure 1 shows a calibration block used in the present invention, wherein the calibration block is a three-dimensional calibration block and the key points are points P1, P2, P3 in Figure 1.
  • Figure 2 is a schematic diagram of the robot detection posture adjustment of the present invention: after the placement posture of the three-dimensional calibration block has been adjusted, the detection posture of the robot is adjusted to determine the coordinate values of points P1, P2, and P3 in the coordinate system of the three-dimensional vision system.
  • The coordinate values of points P1, P2, and P3 in the robot base coordinate system are also determined. From the coordinate values of P1, P2, and P3 in the three-dimensional vision system coordinate system and their coordinate values in the robot base coordinate system, the transformation matrix of the hand-eye relationship of the robot dynamic 3D vision system can be solved, realizing hand-eye calibration of the robot 3D dynamic vision system. A detailed description is given below with reference to FIGS. 1 to 4.
  • The calibration block used in the embodiment of the present invention is a three-dimensional calibration block with a special shape: as shown in FIG. 1, it has a polyhedral structure and an irregular shape, and the key points are points P1, P2, P3 on the calibration block, which do not overlap in the height direction and are roughly evenly distributed along it.
  • the coordinate value of the key point in the coordinate system of the three-dimensional vision system is determined by the three-dimensional calibration block of this special structure.
  • the embodiment of the present invention discloses a method for determining the position of key points in the hand-eye calibration of a robot based on a calibration block, which includes the following steps:
  • Step 1 As shown in Figure 2, adjust the placement posture of the 3D calibration block so that the projection onto the XY plane of the line connecting any two of P1, P2, P3 on the 3D calibration block is not parallel to any coordinate axis of the robot base coordinate system;
  • Step 2 as shown in Figure 2, adjust the posture of the robot so that the 3D vision system at the end of the robot can obtain the 3D calibration block point cloud containing the peripheral surface of the P1, P2, and P3 points on the 3D calibration block;
  • Step 3 Convert the CAD model of the 3D calibration block into a point cloud to obtain the point cloud of the 3D calibration block model
  • Step 4 Register the 3D calibration block model point cloud with the obtained 3D calibration block point cloud
  • Step 5 Taking the key point positions on the 3D calibration block model point cloud (i.e. points P1', P2', P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference, set a corresponding threshold to obtain the point cloud near the key points from the 3D calibration block point cloud, thereby determining the coordinate values of the key points on the 3D calibration block in the coordinate system of the three-dimensional vision system.
  • the embodiment of the present invention also discloses a device for determining the position of key points in the hand-eye calibration of a robot based on a calibration block, including:
  • the three-dimensional calibration block posture adjustment module 10, used to adjust the placement posture of the three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two of P1, P2, P3 on the block is not parallel to any coordinate axis of the robot base coordinate system;
  • the robot posture adjustment module 20 is used to adjust the posture of the robot so that the three-dimensional vision system at the end of the robot can obtain the three-dimensional calibration block point cloud including the peripheral surface of the P1, P2, and P3 points on the three-dimensional calibration block;
  • the model point cloud conversion module 30 is used to convert the CAD model of the three-dimensional calibration block into a point cloud to obtain the point cloud of the three-dimensional calibration block model;
  • the registration module 40 is used to register the three-dimensional calibration block model point cloud with the obtained three-dimensional calibration block point cloud;
  • the key point coordinate determination module 50, used to take the key point positions on the 3D calibration block model point cloud (i.e. points P1', P2', P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference and set a corresponding threshold to obtain the point cloud near the key points from the 3D calibration block point cloud, thereby determining the coordinate values of the key points on the 3D calibration block in the coordinate system of the three-dimensional vision system.
  • the method for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block uses the device for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block as the execution target of the steps.
  • step 1 uses the 3D calibration block posture adjustment module 10 as the execution object of the steps
  • step 2 uses the robot posture adjustment module 20 as the execution object of the steps
  • step 3 uses the model point cloud conversion module 30 as the execution object of the steps.
  • Step 4 uses the registration module 40 as the execution object of the step
  • step 5 uses the key point coordinate determination module 50 as the execution object of the step.
  • The determination of the coordinate values of key points P1, P2, P3 in both the three-dimensional vision system coordinate system and the robot base coordinate system is the key to solving the transformation matrix. The coordinate values of P1, P2, and P3 in the robot base coordinate system are quickly determined by a probe mounted at the end of the robot: when the probe touches P1, P2, or P3, the coordinate value read from the robot controller after tool-length compensation is the coordinate value of that key point in the robot base coordinate system. Determining the coordinate values of key points P1, P2, and P3 in the coordinate system of the three-dimensional vision system is therefore the key to solving the transformation matrix.
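Once P1, P2, P3 are known in both frames, the hand-eye transform can be solved in closed form. The patent does not spell out the solver; the sketch below uses the standard SVD (Kabsch) least-squares method, which needs exactly what the calibration block provides: at least three non-collinear correspondences.

```python
import numpy as np

def rigid_transform(pts_src, pts_dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD):
    finds R, t with pts_dst ≈ R @ pts_src + t.
    Requires at least three non-collinear correspondences,
    e.g. key points P1, P2, P3 in the vision and base frames."""
    A = np.asarray(pts_src, dtype=float)
    B = np.asarray(pts_dst, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Feeding the vision-frame coordinates of P1, P2, P3 as `pts_src` and the probe-measured base-frame coordinates as `pts_dst` yields the hand-eye rotation and translation directly.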
  • In this way, the embodiment of the present invention, with the aid of the three-dimensional calibration block of polyhedral structure and irregular shape, determines the coordinates of the key points in the coordinate system of the three-dimensional vision system at low cost, conveniently, and with high precision.
  • the hand-eye calibration in the robot's three-dimensional dynamic vision system can be realized at low cost, conveniently and with high precision.
  • In step 1, the placement posture of the three-dimensional calibration block determines whether the acquired data is usable. Therefore, in the embodiment of the present invention, as shown in Figures 1 and 2, the posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two of key points P1, P2, and P3 is not parallel to any coordinate axis of the robot base coordinate system; this allows the robot end to obtain point cloud data of every surface around the key points simultaneously under a single detection posture.
  • step 2 the detection posture of the robot also needs to be adjusted so that the three-dimensional vision system, such as monocular camera, binocular camera, multi-eye camera, three-dimensional scanner, etc., can obtain usable spatial position data.
  • In this way, the three-dimensional vision system installed at the end of the robot can, under a single end detection posture, simultaneously obtain a point cloud of the peripheral surfaces of the target position points P1, P2, P3 on the three-dimensional calibration block shown in Figure 1.
  • step 3 includes the following sub-steps:
  • Step 301 Obtain the CAD model of the 3D calibration block and convert it into a PLY format file
  • Step 302 According to the PLY format file, use the data format conversion function in the PCL library to convert it into a point cloud data format to obtain a three-dimensional calibration block model point cloud.
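The PCL conversion function named in step 302 is not reproduced here. As a minimal illustration of what the conversion yields, the sketch below parses the vertex section of an ASCII PLY file into an N×3 array, i.e. the model point cloud; it assumes x, y, z are the first three vertex properties, which is the common PLY layout.

```python
import numpy as np

def ply_vertices(path):
    """Read the x, y, z vertex coordinates of an ASCII PLY file.
    Minimal stand-in for the PCL data-format conversion of step 302;
    assumes x, y, z are the first three vertex properties."""
    with open(path) as f:
        n_vertices = 0
        for line in f:
            tokens = line.split()
            if tokens[:2] == ["element", "vertex"]:
                n_vertices = int(tokens[2])      # vertex count from header
            elif tokens[:1] == ["end_header"]:
                break
        pts = [[float(v) for v in next(f).split()[:3]]
               for _ in range(n_vertices)]
    return np.array(pts)
```

A binary PLY (the other common flavor) would need the PCL/Open3D readers instead; this sketch only covers the ASCII case.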
  • the model point cloud conversion module 30 in the device for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block includes
  • the PLY format file conversion unit 31 is used to obtain the CAD model of the three-dimensional calibration block and convert it into a PLY format file;
  • the model point cloud acquisition unit 32 is configured to convert the PLY format file into a point cloud data format by using the data format conversion function in the PCL library to acquire a three-dimensional calibration block model point cloud.
  • step 3 uses each unit in the model point cloud conversion module 30 as the execution target of the step.
  • step 301 uses the PLY format file conversion unit 31 as the execution object of the step
  • step 302 uses the model point cloud acquisition unit 32 as the execution object of the step.
  • step 4 includes the following sub-steps:
  • Step 401 Sampling the 3D calibration block point cloud and the 3D calibration block model point cloud respectively;
  • Step 402 Calculate the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain respective fast point feature histograms;
  • Step 403 According to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud, coarsely register the point cloud pair using the sample consensus initial alignment algorithm;
  • step 404 the point cloud is accurately registered by using the iterative closest point algorithm.
  • the registration module 40 in the device for determining the position of the key point in the hand-eye calibration of the robot based on the calibration block includes
  • the sampling unit 41 is used to sample the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively;
  • the fast point feature histogram unit 42 is used to calculate the feature point descriptors of the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud respectively to obtain respective fast point feature histograms;
  • the coarse registration unit 43 is used to coarsely register the point cloud pair using the sample consensus initial alignment algorithm, according to the fast point feature histograms of the 3D calibration block point cloud and the 3D calibration block model point cloud;
  • the precise registration unit 44 is used for precise registration of the point cloud through the iterative closest point algorithm.
  • step 4 uses each unit in the registration module 40 as the execution target of the step.
  • step 401 uses the sampling unit 41 as the execution object of the step
  • step 402 uses the fast point feature histogram unit 42 as the execution object of the step
  • step 403 uses the coarse registration unit 43 as the execution object of the step
  • step 404 uses the precise registration unit 44 as the execution object of the step.
  • In step 401, the three-dimensional calibration block point cloud and the three-dimensional calibration block model point cloud can each be downsampled with a voxel grid filter to improve the registration speed of the point cloud pair.
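A voxel grid filter replaces all points falling in the same cubic cell with their centroid. A compact NumPy sketch of this step (the cell size `voxel` is a tuning parameter; the text does not fix a value):

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Voxel grid filter: bucket points into cubes of edge length
    `voxel` and keep one centroid per occupied cube, thinning the
    cloud before registration."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel).astype(np.int64)   # voxel index per point
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, pts)                   # accumulate per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]                   # centroid per voxel
```

PCL's `pcl::VoxelGrid` performs the same operation; this version is only meant to show what the filter computes.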
  • In step 402, the registration of the point cloud pair depends on feature description. Therefore, in the present invention, the feature point descriptors of the 3D calibration block point cloud and the 3D calibration block model point cloud are calculated separately to obtain their respective fast point feature histograms (FPFH, Fast Point Feature Histograms);
  • In step 403, the point cloud pair usually needs to be coarsely registered before it can be precisely registered. Therefore, in the present invention, the sample consensus initial alignment algorithm (SAC-IA, Sample Consensus Initial Alignment) is used to realize the coarse registration of the point cloud pair.
  • In step 404, after the coarse registration of the point cloud pair, the iterative closest point algorithm (ICP, Iterative Closest Point) is used to realize the precise registration of the point cloud pair.
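ICP alternates nearest-neighbour matching with the closed-form rigid update. A self-contained point-to-point sketch, assuming (as after SAC-IA) a rough initial alignment and fully overlapping clouds; real pipelines such as PCL use a k-d tree instead of the brute-force matching shown here:

```python
import numpy as np

def icp(source, target, iterations=20):
    """Point-to-point ICP. Each pass matches every source point to its
    nearest target point, then applies the closed-form SVD rigid
    update. Returns accumulated R, t with target ≈ R @ source + t."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_acc, t_acc = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # brute-force nearest-neighbour correspondences
        idx = np.linalg.norm(src[:, None, :] - tgt[None, :, :],
                             axis=2).argmin(axis=1)
        matched = tgt[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation only
        t = cm - R @ cs
        src = src @ R.T + t                       # apply the update
        R_acc, t_acc = R @ R_acc, R @ t_acc + t   # compose transforms
    return R_acc, t_acc
```

The per-iteration SVD step is the same Kabsch update used for the final hand-eye solve; ICP simply repeats it as the correspondences improve.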
  • In step 5, a corresponding threshold is set and, through the nearest neighbor search method, the 3D calibration block point cloud is searched for the point closest to each key point on the 3D calibration block model point cloud; the coordinate value of that point is determined to be the coordinate value of the key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • Correspondingly, the key point coordinate determination module 50 in the device for determining key point positions in robot hand-eye calibration based on a calibration block is used to set a corresponding threshold and, through the nearest neighbor search method, search the 3D calibration block point cloud for the point closest to each key point on the 3D calibration block model point cloud; the coordinate value of that point is determined to be the coordinate value of the key point on the 3D calibration block in the coordinate system of the 3D vision system.
  • Specifically, taking the key point positions on the 3D calibration block model point cloud (i.e. points P1', P2', P3', where P1' corresponds to P1, P2' to P2, and P3' to P3) as the reference, the nearest neighbor search method is used to find, in the 3D calibration block point cloud, the point closest to each key point (P1', P2', P3') on the 3D calibration block model point cloud; the coordinate value of that point is the required key point coordinate value, i.e. the coordinate value of key points P1, P2, P3 in the coordinate system of the three-dimensional vision system.
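Step 5 thus reduces to one nearest-neighbour query per model key point, accepted only within the threshold. A small sketch (the threshold value here is an assumed example; the text does not fix one):

```python
import numpy as np

def key_point_coords(block_cloud, model_key_points, threshold=1.0):
    """For each model key point (P1', P2', P3'), find the nearest point
    in the measured block cloud; if it lies within `threshold`, its
    coordinates are taken as the key point (P1, P2, P3) in the
    vision-system frame, otherwise None (no valid match)."""
    cloud = np.asarray(block_cloud, dtype=float)
    result = []
    for kp in np.asarray(model_key_points, dtype=float):
        d = np.linalg.norm(cloud - kp, axis=1)   # distance to every point
        i = d.argmin()
        result.append(cloud[i] if d[i] <= threshold else None)
    return result
```

The threshold guards against a key point whose surroundings were occluded during scanning: a nearest neighbour far beyond the registration residual is rejected rather than reported as a false coordinate.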
  • In summary, the present invention uses a three-dimensional calibration block with a polyhedral structure and an irregular shape, whose multiple key points do not overlap in the height direction, so as to determine the coordinate values of the key points in the robot vision system at low cost, conveniently, and with high precision. Specifically, the placement posture of the three-dimensional calibration block is adjusted so that the projection onto the XY plane of the line connecting any two key points is not parallel to any coordinate axis of the robot base coordinate system; the robot posture is then adjusted so that the 3D vision system can obtain a point cloud of the peripheral surfaces of the key points; finally, the 3D calibration block model point cloud is registered with the collected 3D calibration block point cloud, and a corresponding threshold is set to determine the point cloud near each key point, thereby obtaining the coordinate values of the key points in the coordinate system of the 3D vision system.
  • On this basis, the transformation matrix of the hand-eye relationship of the robot dynamic three-dimensional vision system can be quickly solved, achieving low-cost, convenient, and high-precision hand-eye calibration of the robot 3D dynamic vision system.
  • the following disclosure provides many different embodiments or examples for realizing different structures of the embodiments of the present invention.
  • the components and settings of specific examples are described below. Of course, they are only examples, and the purpose is not to limit the present invention.
  • The embodiments of the present invention may repeat reference numbers and/or reference letters in different examples. This repetition is for simplification and clarity, and does not in itself indicate a relationship between the various embodiments and/or settings discussed.
  • the embodiments of the present invention provide examples of various specific processes and materials, but those of ordinary skill in the art may be aware of the application of other processes and/or the use of other materials.
  • A "computer-readable medium" can be any device that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
  • Computer-readable media include the following: an electrical connection (electronic device) with one or more wires, a portable computer disk cartridge (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it if necessary, and then stored in a computer memory.
  • each part of the embodiments of the present invention can be implemented by hardware, software, firmware, or a combination thereof.
  • Multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of, or a combination of, the following technologies known in the art: discrete logic circuits, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
  • A person of ordinary skill in the art can understand that all or part of the steps carried out in the methods of the foregoing embodiments can be implemented by a program instructing the relevant hardware.
  • The program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
  • the functional units in the various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • The above-mentioned integrated modules can be implemented in the form of hardware or in the form of software functional modules. If an integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Disclosed are a method and a device for determining key point positions in robot hand-eye calibration based on a calibration block. The method comprises: adjusting the placement posture of a three-dimensional calibration block so that the projection onto the XY plane of the line connecting any two key points on the three-dimensional calibration block is not parallel to any coordinate axis of the robot base coordinate system; adjusting the posture of the robot so that a three-dimensional vision system mounted at the tail end of the robot can acquire a three-dimensional calibration block point cloud containing the surfaces surrounding the key points; then converting a CAD model of the three-dimensional calibration block into a three-dimensional calibration block model point cloud; then registering the three-dimensional calibration block model point cloud with the three-dimensional calibration block point cloud; and finally, setting a threshold value to obtain the point clouds near the key points from the three-dimensional calibration block point cloud, so as to determine the coordinate values of the key points in the coordinate system of the three-dimensional vision system. With the method and the device, the key points can be extracted conveniently, at low cost and with high precision, so that hand-eye calibration can be carried out in a robot vision system conveniently, at low cost and with high precision.
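The register-then-threshold pipeline in the abstract can be illustrated with a minimal sketch. The publication does not disclose a specific registration algorithm, so this sketch makes simplifying assumptions: the model and scanned clouds are in point-to-point correspondence (so a closed-form SVD/Kabsch solution suffices instead of ICP), and all data, names, and the camera pose are synthetic and purely illustrative.

```python
import numpy as np

def register_rigid(model_pts, scan_pts):
    """Least-squares rigid registration (Kabsch/SVD): find R, t such that
    scan_pts ~= model_pts @ R.T + t. Assumes point-to-point correspondence,
    a simplification of the registration step described in the abstract."""
    cm, cs = model_pts.mean(axis=0), scan_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scan_pts - cs)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t

# Synthetic stand-ins for the model point cloud (converted from the CAD model
# of the calibration block) and the measured cloud in the vision-system frame.
rng = np.random.default_rng(0)
model_cloud = rng.uniform(-1.0, 1.0, size=(200, 3))
theta = 0.3                                       # illustrative pose, not from the patent
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
scan_cloud = model_cloud @ R_true.T + t_true

R, t = register_rigid(model_cloud, scan_cloud)

# Map a key point known in model coordinates into the vision-system frame,
# then apply a distance threshold to keep only the scanned points near it,
# mirroring the abstract's final thresholding step.
keypoint_model = np.zeros(3)
keypoint_cam = R @ keypoint_model + t
threshold = 0.5
near_cloud = scan_cloud[np.linalg.norm(scan_cloud - keypoint_cam, axis=1) < threshold]
```

With real scans the correspondence assumption does not hold, so the registration step would typically be an ICP variant seeded by a coarse alignment; the closed-form solve above only conveys the structure of the computation.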
PCT/CN2020/120103 2019-11-26 2020-10-10 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block WO2021103824A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911175295.5 2019-11-26
CN201911175295.5A CN110930442B (zh) 2019-11-26 2019-11-26 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Publications (1)

Publication Number Publication Date
WO2021103824A1 (fr)

Family

Family ID: 69851142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120103 WO2021103824A1 (fr) 2019-11-26 2020-10-10 Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Country Status (2)

Country Link
CN (1) CN110930442B (fr)
WO (1) WO2021103824A1 (fr)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930442B (zh) * 2019-11-26 2020-07-31 Guangdong Polytechnic Normal University Method and device for determining key point positions in robot hand-eye calibration based on a calibration block
CN111797808B (zh) * 2020-07-17 2023-07-21 Guangdong Polytechnic Normal University Reverse engineering method and system based on video feature point tracking
CN112790786A (zh) * 2020-12-30 2021-05-14 Wuxi Chison Medical Technologies Co., Ltd. Point cloud data registration method and apparatus, ultrasound device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680124A (zh) * 2016-08-01 2018-02-09 Cognex Corporation System and method for improving three-dimensional pose scoring and eliminating noise points in three-dimensional image data
WO2018145025A1 (fr) * 2017-02-03 2018-08-09 Abb Schweiz Ag Calibration article for a robotic 3D vision system
CN109102547A (zh) * 2018-07-20 2018-12-28 Shanghai JAKA Robotics Co., Ltd. Robot grasping pose estimation method based on an object-recognition deep learning model
CN109702738A (zh) * 2018-11-06 2019-05-03 Shenzhen University Manipulator hand-eye calibration method and device based on three-dimensional object recognition
CN110335296A (zh) * 2019-06-21 2019-10-15 Huazhong University of Science and Technology Point cloud registration method based on hand-eye calibration
CN110930442A (zh) * 2019-11-26 2020-03-27 Guangdong Polytechnic Normal University Method and device for determining key point positions in robot hand-eye calibration based on a calibration block

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908230B (zh) * 2010-07-23 2011-11-23 Southeast University Three-dimensional reconstruction method based on regional depth edge detection and binocular stereo matching
CN104142157B (zh) * 2013-05-06 2017-08-25 Beijing NavInfo Technology Co., Ltd. Calibration method, apparatus, and device
US10076842B2 (en) * 2016-09-28 2018-09-18 Cognex Corporation Simultaneous kinematic and hand-eye calibration
CN108828606B (zh) * 2018-03-22 2019-04-30 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences Joint measurement method based on lidar and a binocular visible-light camera
CN108648272A (zh) * 2018-04-28 2018-10-12 Shanghai Jidian Information Technology Co., Ltd. Three-dimensional real-scene acquisition and modeling method, readable storage medium, and device
CN108627178B (zh) * 2018-05-10 2020-10-13 Guangdong Topstar Technology Co., Ltd. Robot hand-eye calibration method and system
CN108994844B (zh) * 2018-09-26 2021-09-03 Guangdong University of Technology Calibration method and device for the hand-eye relationship of a grinding manipulator
CN110355755B (zh) * 2018-12-15 2023-05-16 Shenzhen Mingjie Medical Technology Co., Ltd. Robot hand-eye system calibration method, apparatus, device, and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114043087A (zh) * 2021-12-03 2022-02-15 Xiamen University Three-dimensional trajectory laser welding seam tracking and posture planning method
CN117140535A (zh) * 2023-10-27 2023-12-01 Nanhu Laboratory Robot kinematic parameter calibration method and system based on single-stylus measurement
CN117140535B (zh) * 2023-10-27 2024-02-02 Nanhu Laboratory Robot kinematic parameter calibration method and system based on single-stylus measurement

Also Published As

Publication number Publication date
CN110930442A (zh) 2020-03-27
CN110930442B (zh) 2020-07-31

Similar Documents

Publication Publication Date Title
WO2021103824A1 (fr) Method and device for determining key point positions in robot hand-eye calibration based on a calibration block
JP6842520B2 (ja) Object detection method, apparatus, device, storage medium, and vehicle
KR102292028B1 (ko) Gesture recognition method, apparatus, electronic device, and storage medium
US10496762B2 (en) Model generating device, position and orientation calculating device, and handling robot device
Singh et al. Bigbird: A large-scale 3d database of object instances
CN110842901A (zh) Robot hand-eye calibration method and device based on a novel three-dimensional calibration block
CN109242903A (zh) Three-dimensional data generation method, apparatus, device, and storage medium
CN110555889A (zh) Depth camera hand-eye calibration method based on CALTag and point cloud information
JP2018523865A (ja) Information processing method, device, and terminal
CN113146073B (zh) Vision-based laser cutting method and device, electronic device, and storage medium
US20190340783A1 (en) Autonomous Vehicle Based Position Detection Method and Apparatus, Device and Medium
CN111028205B (zh) Eye pupil positioning method and device based on binocular distance measurement
US11625842B2 (en) Image processing apparatus and image processing method
EP3879494A2 (fr) Procédé, appareil, dispositif électronique, support lisible sur ordinateur et programme d'étalonnage d'un paramètre externe de caméra
US10748027B2 (en) Construction of an efficient representation for a three-dimensional (3D) compound object from raw video data
US20200051278A1 (en) Information processing apparatus, information processing method, robot system, and non-transitory computer-readable storage medium
US11875524B2 (en) Unmanned aerial vehicle platform based vision measurement method for static rigid object
WO2022247137A1 (fr) Robot et procédé de reconnaissance de pile de charge et appareil associé
KR102618285B1 (ko) Method and system for determining camera pose
CN111598172A (zh) Fast detection method for dynamic target grasping posture based on heterogeneous deep network fusion
CN107850425A (zh) Method for measuring artifacts
CN115409808A (zh) Weld seam recognition method and device, welding robot, and storage medium
CN113172636A (zh) Automatic hand-eye calibration method and device, and storage medium
CN107990825B (zh) High-precision position measurement device and method based on prior data correction
JP7431714B2 (ja) Gaze analysis device, gaze analysis method, and gaze analysis system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20893734

Country of ref document: EP

Kind code of ref document: A1