CN117283555B - Method and device for autonomously calibrating tool center point of robot

Info

Publication number: CN117283555B (grant); earlier publication: CN117283555A
Application number: CN202311416497.0A
Authority: CN (China)
Prior art keywords: tool, pose, coordinate system, robot, vision sensor
Inventors: 于军章, 郭昊帅, 姜东亚, 吴虹, 王文林
Assignee: Beijing Xiaoyu Intelligent Manufacturing Technology Co., Ltd.
Legal status: Active

Classifications

    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B25J13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    (All under B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES; B25J9/16 - Programme controls; B25J13/00 - Controls for manipulators.)


Abstract

The application discloses a method and a device for autonomously calibrating the tool center point of a robot. The method comprises: acquiring a plurality of groups of images captured by a vision sensor, the groups of images being obtained by the vision sensor photographing the tool in different poses under different configurations of the robot; identifying preset feature points of the tool in the images corresponding to the different configurations and outputting a plurality of groups of feature point data corresponding to the different configurations, the feature points including the tool center point; and calibrating pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system. The tool mounting pose or trajectory can be specified arbitrarily; the only requirement is that the vision sensor can capture image data of the tool, and the pose of the vision sensor relative to the robot base coordinate system does not need to be calibrated separately.

Description

Method and device for autonomously calibrating tool center point of robot
Technical Field
The application relates to the technical field of robots, and in particular to a method and a device for autonomously calibrating the tool center point of a robot.
Background
With the development of robot technology, robots are being applied in more and more fields, for example in industrial and agricultural production, social services, and home services. Industrial robots in particular are widely used in welding, assembly, handling, painting, polishing, and similar tasks.
Robots often require a tool to be installed in order to perform certain tasks, such as the welding gun of a welding robot, the glue gun of a gluing robot, or the clamp of a handling robot. Because each tool differs in size and shape, a tool center point must be defined to represent the tool and to describe the pose relation between the tool and the robot, so that the robot can accurately execute trajectories with the tool center point as the reference. The tool mounting pose is described in the robot base coordinate system.
the existing scheme I is as follows: the manual operation robot changes the mounting position and the pose of the tool, so that the tool points to the same point in the space in different poses; based on the constraint condition that the position of the point is unchanged relative to the robot base coordinate system and the pose data of the tool mounting position when the tool points to the point, the position of the tool center point relative to the coordinate system where the mounting surface of the tool is located can be calibrated.
The existing scheme II: detecting, by one or more laser sensors, a signal that blocks a laser beam when a center point of a tool mounted by the robot is in motion or stopped; based on the line or surface constraint condition formed by the laser beam and the tool mounting position and pose data when the laser beam is blocked, the pose of the tool center point relative to the coordinate system where the tool center point is arranged can be calibrated.
However, the above scheme has the following disadvantages:
Scheme one disadvantage: the robot is required to be manually operated, the tool mounting pose is changed, the tool center points point are pointed to the same position, the process is tedious, the efficiency is low, an operator needs to enter a robot working area, and potential safety hazards exist; after the tool mounting pose is changed, the fact that the center point of the tool points to the same position of the front is needed to be judged by naked eyes, and a large error exists; only the position can be calibrated by the point constraint.
Scheme two has the disadvantage: after the sensor is deployed, a sensor coordinate system is required to be defined and described under a robot base coordinate system; corresponding tool mounting positions and postures are recorded when the laser beam is blocked, so that high requirements are placed on the performance of the sensor and the robot control system, such as communication frequency, feedback delay and time stamp synchronization; the robot needs to explore according to the established logic movement, so that blocking signals are triggered, and the efficiency is low; if the tool collides and deforms in the using process, calibration failure can be caused.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the invention provide a method and a device for autonomously calibrating the tool center point of a robot. The technical scheme is as follows:
According to a first aspect of an embodiment of the present invention, there is provided a method for autonomous calibration of a tool center point by a robot, comprising:
acquiring a plurality of groups of images captured by a vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identifying preset feature points of the tool in the images corresponding to the different configurations and outputting a plurality of groups of feature point data corresponding to the different configurations, wherein the feature points include the tool center point;
and calibrating pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system.
Optionally, identifying the preset feature points of the tool from the images includes:
using a convolutional neural network model to identify and extract the preset feature points of the tool from the images.
Optionally, identifying and extracting the preset feature points of the tool from the images includes:
processing the left-eye image and the right-eye image captured by a binocular stereo vision sensor separately: scaling each image to a preset size, inputting it into the convolutional neural network model for inference, and outputting feature point data.
Optionally, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system includes:
obtaining the parameters of the vision sensor through calibration;
acquiring, based on the parameters of the vision sensor, the plurality of groups of pose data of the feature points in the vision sensor coordinate system, the pose data including the pose of the tool center point in the vision sensor coordinate system;
and calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
Optionally, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data includes:
The following pose relationship is determined:

$${}^{W}T_{C} \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

transforming the pose relationship into:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_{C} = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

and solving it by a least squares method based on the Kronecker-product transformation to obtain ${}^{W}T_{C}$ and ${}^{E}T_{P_i}$;

wherein ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the vision sensor coordinate system; k denotes the configuration of the robot, k = 1, 2, …, m; i denotes a feature point, i = 0, 1, …, n, where i = 0 denotes the tool center point.
According to a second aspect of embodiments of the present invention, there is provided an apparatus for autonomous calibration of a tool center point by a robot, the apparatus comprising:
the acquisition module, configured to acquire a plurality of groups of images captured by the vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
the first processing module, configured to identify preset feature points of the tool in the images corresponding to the different configurations and to output a plurality of groups of feature point data corresponding to the different configurations, wherein the feature points include the tool center point;
the second processing module, configured to calibrate pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system.
Optionally, the first processing module is configured to:
use a convolutional neural network model to identify and extract the preset feature points of the tool from the images.
Optionally, the first processing module is configured to:
process the left-eye image and the right-eye image captured by the binocular stereo vision sensor separately: scale each image to a preset size, input it into the convolutional neural network model for inference, and output feature point data.
Optionally, the second processing module is configured to:
obtain the parameters of the vision sensor through calibration;
acquire, based on the parameters of the vision sensor, the plurality of groups of pose data of the feature points in the vision sensor coordinate system, the pose data including the pose of the tool center point in the vision sensor coordinate system;
and calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
Optionally, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data includes:
The following pose relationship is determined:

$${}^{W}T_{C} \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

transforming the pose relationship into:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_{C} = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

and solving it by a least squares method based on the Kronecker-product transformation to obtain ${}^{W}T_{C}$ and ${}^{E}T_{P_i}$;

wherein ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the vision sensor coordinate system; k denotes the configuration of the robot, k = 1, 2, …, m; i denotes a feature point, i = 0, 1, …, n, where i = 0 denotes the tool center point.
According to a third aspect of embodiments of the present invention, there is provided an apparatus for autonomous calibration of a tool center point by a robot, comprising:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a plurality of groups of images captured by a vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identify preset feature points of the tool in the images corresponding to the different configurations and output a plurality of groups of feature point data corresponding to the different configurations, wherein the feature points include the tool center point;
and calibrate pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system.
According to a fourth aspect of embodiments of the present invention there is provided a computer readable storage medium having stored thereon computer instructions which when executed by a processor implement the steps of the method of any of the first aspects of embodiments of the present invention.
According to the technical solution provided by the embodiments of the invention, the tool mounting pose or trajectory can be changed arbitrarily; the only condition is that the vision sensor can acquire image data of the tool, and the pose of the vision sensor relative to the robot base coordinate system does not need to be calibrated separately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for autonomous calibration of a tool center point by a robot according to an embodiment of the present application;
fig. 2 is a schematic diagram of a CNN model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a tool feature point according to an embodiment of the present application;
FIG. 4 is a schematic diagram of calibrating parameters of a vision sensor;
Fig. 5 is a schematic structural diagram of a device for autonomous calibration of a tool center point of a robot according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first and second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to the listed steps or elements but may include steps or elements not expressly listed.
The application is applied to a robot, which may include a moving base and a mechanical arm. The moving base can move the robot as a whole, and the mechanical arm can move relative to the moving base. The end of the mechanical arm includes a tool mounting position for mounting different tools to perform different operations, such as the welding gun of a welding robot, the glue gun of a gluing robot, or the clamp of a handling robot; the appropriate tool can be mounted according to the use case.
Fig. 1 shows a method for autonomously calibrating a tool center point of a robot according to an embodiment of the present invention, which can be applied to a robot or an industrial personal computer of a robot. The method comprises the following steps S101 to S103:
In step S101, a plurality of groups of images captured by the vision sensor are acquired, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot.
The vision sensor may be part of the robot body or independent of it. By changing the configuration of the robot, the tool mounting position of the robot is changed, so that a tool mounted on the tool mounting position takes on different poses.
In step S102, preset feature points of the tool are identified from the images corresponding to different configurations, and multiple sets of feature point data corresponding to different configurations are output, where the feature points include a tool center point.
In step S103, pose data of the tool center point in the tool installation position coordinate system is calibrated based on the plurality of sets of feature point data corresponding to different configurations and the pose data of the corresponding plurality of sets of tool installation positions in the world coordinate system.
With the method for autonomously calibrating the tool center point of a robot provided by this embodiment, the tool mounting pose or trajectory can be specified arbitrarily; the only requirement is that the vision sensor can acquire image data of the tool, and the pose of the vision sensor relative to the robot base coordinate system (i.e., the world coordinate system) does not need to be calibrated separately.
In an embodiment of the application, step S102 identifies preset feature points of the tool in the images corresponding to the different configurations and outputs a plurality of groups of feature point data corresponding to the different configurations, where the feature points include the tool center point. Identifying the preset feature points of the tool from the images includes:
using a convolutional neural network model to identify and extract the preset feature points of the tool from the images.
This embodiment adopts a feature point identification method based on deep learning, using a convolutional neural network (CNN) model to automatically identify and extract the feature points of the tool from the image data acquired by the vision sensor.
FIG. 2 is a schematic diagram of an exemplary CNN model. It employs a multi-layer architecture: multiple convolution layers, each extracting features of the image at a different level, and multiple head layers (classifiers), each outputting a different kind of feature data, with interaction between the heads. A feature point identification method based on such an AI model can be iterated and upgraded over time, improving generalization and calibration accuracy.
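As a concrete illustration of this structure, the following is a minimal PyTorch sketch; the layer sizes, the number of heads, and the output format are illustrative assumptions, not the architecture of FIG. 2:

```python
import torch
import torch.nn as nn

class ToolKeypointNet(nn.Module):
    """Sketch of a multi-head keypoint CNN: a shared convolutional backbone
    extracts features at several levels; separate heads output different
    quantities (pixel coordinates and semantic class of each feature point)."""

    def __init__(self, num_keypoints: int = 3):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),    # low-level edges
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # local shape
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # part-level features
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: (x, y) pixel coordinates of each feature point.
        self.coord_head = nn.Linear(128, num_keypoints * 2)
        # Head 2: per-feature-point class logits ("gun head outline", "wire start", "wire end").
        self.label_head = nn.Linear(128, num_keypoints)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)
        coords = self.coord_head(feat).view(-1, self.num_keypoints, 2)
        labels = self.label_head(feat)
        return coords, labels

# One 256x256 RGB image in, keypoint coordinates and labels out.
coords, labels = ToolKeypointNet()(torch.rand(1, 3, 256, 256))
```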
In an embodiment of the application, identifying and extracting the preset feature points of the tool from the images in step S102 includes: processing the left-eye image and the right-eye image captured by the binocular stereo vision sensor separately: scaling each image to a preset size, inputting it into the convolutional neural network model for inference, and outputting feature point data.
Taking a welding gun head as an example of the tool, the specific steps for outputting the feature point data are as follows:
Step A1: photograph the welding gun head with the binocular stereo vision sensor.
Step A2: acquire the left-eye image and scale it to 256×256.
The preset size of the image scaling may be set according to the computing power, for example, when the computing power is sufficient, the image may be scaled to 512×512.
Step A3: input the scaled left-eye image into the convolutional neural network for inference, and output feature point positions such as the gun head outline position 31, the wire start position 32, and the wire end position 33; a schematic diagram of the feature points is shown in FIG. 3.
Step A4: acquire the right-eye image and scale it to 256×256.
Step A5: input the scaled right-eye image into the convolutional neural network for inference, and output feature point data such as the gun head outline, the wire start position, and the wire end position.
In this way, feature point data of the preset feature points can be obtained from the left-eye image and from the right-eye image, respectively. The feature points may include the tool center point as well as points representing the characteristic contour of the tool. Each feature point datum comprises position information and semantic information: the position information is the coordinates of the feature point in the left-eye/right-eye image coordinate system, and the semantic information is a human-understandable label such as "gun head outline", "wire start position", or "wire end position".
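The left/right pipeline of steps A1 to A5 can be sketched as follows. It reuses the hypothetical ToolKeypointNet above; the file names are placeholders, and mapping the predicted coordinates back to the original resolution is an added assumption so that the later disparity computation works in true pixel units:

```python
import cv2
import numpy as np
import torch

def detect_keypoints(image_path: str, model, size: int = 256) -> np.ndarray:
    """Scale one camera image to the preset size, run the CNN, and map the
    predicted pixel coordinates back to the original image resolution."""
    img = cv2.imread(image_path)                       # H x W x 3, BGR
    h, w = img.shape[:2]
    inp = cv2.resize(img, (size, size)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(inp).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        coords, _ = model(tensor)                      # (1, n, 2) in scaled pixels
    kp = coords[0].numpy()
    kp[:, 0] *= w / size                               # undo the scaling so that the
    kp[:, 1] *= h / size                               # disparity uses true pixel units
    return kp                                          # rows: [x_i, y_i]

model = ToolKeypointNet().eval()
left_kp = detect_keypoints("left.png", model)          # gun head outline, wire start/end
right_kp = detect_keypoints("right.png", model)
```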
In an embodiment of the application, step S103, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system, includes steps B1 to B3:
Step B1: obtain the parameters of the vision sensor through calibration.
Step B2: acquire, based on the parameters of the vision sensor, the plurality of groups of pose data of the feature points in the vision sensor coordinate system, the pose data including the pose of the tool center point in the vision sensor coordinate system.
Step B3: calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
The coordinates of a feature point on the tool in the left and right image coordinate systems of the vision sensor are, respectively:

$${}^{L}p_i = [{}^{L}x_i, \; {}^{L}y_i], \qquad {}^{R}p_i = [{}^{R}x_i, \; {}^{R}y_i]$$

The pose of a feature point on the tool in the vision sensor coordinate system is recorded as ${}^{C}p_i = [{}^{C}t_i, \; {}^{C}r_i]$,

where the position vector is ${}^{C}t_i = [{}^{C}t_{xi}, \; {}^{C}t_{yi}, \; {}^{C}t_{zi}]$ and the attitude vector is ${}^{C}r_i = [{}^{C}r_{xi}, \; {}^{C}r_{yi}, \; {}^{C}r_{zi}]$,

and i denotes a feature point, i = 0, 1, …, n. ${}^{C}p_0$ is the tool center point pose and ${}^{C}p_{i=1,\dots,n}$ are the poses of the other feature points.

The parameters of the binocular camera are obtained through calibration: the baseline b, the focal length f, and the principal point $(c_x, c_y)$, as shown in FIG. 4. $\delta_i = |{}^{L}x_i - {}^{R}x_i|$ is the disparity of a feature point between the two views. Based on the principle of similar triangles:

$${}^{C}t_{zi} = \frac{f \cdot b}{\delta_i}$$

Therefore, the position of the feature point in the vision sensor coordinate system can be deduced from the pinhole model as:

$${}^{C}t_{xi} = \frac{({}^{L}x_i - c_x) \cdot b}{\delta_i}, \qquad {}^{C}t_{yi} = \frac{({}^{L}y_i - c_y) \cdot b}{\delta_i}, \qquad {}^{C}t_{zi} = \frac{f \cdot b}{\delta_i}$$
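Assuming a rectified stereo pair, and taking the principal point from the same intrinsic calibration, a minimal numerical sketch of this triangulation is (the calibration values shown are illustrative):

```python
import numpy as np

def triangulate(left_kp: np.ndarray, right_kp: np.ndarray,
                f: float, b: float, cx: float, cy: float) -> np.ndarray:
    """Position of each feature point in the vision sensor frame from matched
    left/right pixel coordinates: z = f*b/disparity, x and y from the pinhole model."""
    disparity = np.abs(left_kp[:, 0] - right_kp[:, 0])   # delta_i
    z = f * b / disparity                                # C_t_zi
    x = (left_kp[:, 0] - cx) * b / disparity             # C_t_xi
    y = (left_kp[:, 1] - cy) * b / disparity             # C_t_yi
    return np.stack([x, y, z], axis=1)                   # (n, 3), in the units of b

# left_kp / right_kp from the detection sketch above; f in pixels, b in metres.
pts_C = triangulate(left_kp, right_kp, f=900.0, b=0.12, cx=640.0, cy=360.0)
```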
In an embodiment of the present application, step B3 of calibrating pose data of a tool center point in a tool mounting position coordinate system based on pose data of the tool center point in a vision sensor coordinate system and tool mounting pose data, includes the steps of:
A plurality of groups of images have been acquired in step S101 by changing the configuration of the robot so that the tool appears in different poses within the field of view of the vision sensor. From the steps above, a plurality of groups of poses of the tool feature points in the vision sensor coordinate system (computed from the disparity, as described above) and the corresponding pose data of the tool mounting position in the world coordinate system can be obtained. The tool mounting pose data are obtained from the forward kinematics solution of the robot.
When the robot is in configuration k:

the pose of a tool feature point in the vision sensor coordinate system is ${}^{C}p_i^k$, whose SE(3) description is denoted ${}^{C}T_{P_i^k}$;

the tool mounting pose is ${}^{W}p_E^k$, whose SE(3) description is denoted ${}^{W}T_{E_k}$;

where k = 1, 2, …, m.

Taking the robot base coordinate system as the world coordinate system, denoted W: ${}^{W}T_C$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the vision sensor coordinate system. The values of ${}^{W}T_C$ and ${}^{E}T_{P_i}$ are constant, independent of i and k.

The closed-chain system formed can be expressed as:

$${}^{W}T_C \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

Left-multiplying by $({}^{W}T_{E_k})^{-1}$ and right-multiplying by $({}^{C}T_{P_i^k})^{-1}$ gives:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_C = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

When m groups of data have been acquired, the two constant unknowns ${}^{W}T_C$ and ${}^{E}T_{P_i}$ can be obtained by a least squares method based on the Kronecker-product (vec) transformation.
Thus the pose description of the tool feature points in the tool mounting position coordinate system is obtained, and in particular the pose of the tool center point relative to the coordinate system of the tool mounting surface (the tool mounting position coordinate system) is calibrated.
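A minimal sketch of this least squares solution for a single feature point (i = 0, the tool center point) follows. Here A_k stands for the measured ${}^{C}T_{P_0^k}$ and B_k for the forward-kinematics ${}^{W}T_{E_k}$; the final re-orthogonalisation is a simplification of a fully SE(3)-constrained solver:

```python
import numpy as np

def solve_kronecker_ls(A_list, B_list):
    """Solve X @ A_k = B_k @ Y for the constant unknowns X = W_T_C and
    Y = E_T_P0, using vec(M A) = (A^T kron I) vec(M) and vec(B M) = (I kron B) vec(M).
    Needs several configurations (m >= 3) with distinct rotations."""
    I4 = np.eye(4)
    rows = [np.hstack([np.kron(A.T, I4), -np.kron(I4, B)])
            for A, B in zip(A_list, B_list)]
    M = np.vstack(rows)                          # (16 m, 32) homogeneous system
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                                   # least-squares null direction
    X = v[:16].reshape(4, 4, order="F")          # column-major un-vec
    Y = v[16:].reshape(4, 4, order="F")
    X, Y = X / X[3, 3], Y / Y[3, 3]              # fix the common homogeneous scale
    for T in (X, Y):                             # re-orthogonalise the rotation blocks
        U, _, Wt = np.linalg.svd(T[:3, :3])
        R = U @ Wt
        if np.linalg.det(R) < 0:                 # keep a proper rotation
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Wt
        T[:3, :3] = R
        T[3, :] = [0.0, 0.0, 0.0, 1.0]
    return X, Y                                  # W_T_C, E_T_P0

# A_list: m poses of the TCP in the sensor frame; B_list: m flange poses from FK.
# W_T_C, E_T_P0 = solve_kronecker_ls(A_list, B_list)
```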
In an embodiment of the application, the robot can periodically place the tool in the field of view of the vision sensor and autonomously detect whether the tool center point pose has changed; if it has changed, the calibration function is triggered and calibration and correction are performed autonomously. The vision sensor in the application can thus be used not only for tool center point calibration but also for correcting the kinematic parameters of the robot, as well as for positioning, navigation, and obstacle avoidance.
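A minimal sketch of such a periodic self-check (the tolerance value and the position-only drift metric are illustrative assumptions):

```python
import numpy as np

def tcp_needs_recalibration(T_stored: np.ndarray, T_measured: np.ndarray,
                            tol_mm: float = 0.5) -> bool:
    """Compare the stored E_T_P0 calibration with a freshly measured one and
    report whether the autonomous calibration routine should be re-triggered."""
    drift = np.linalg.norm(T_stored[:3, 3] - T_measured[:3, 3]) * 1000.0
    return drift > tol_mm                        # position drift in millimetres

# Example: 0.8 mm of drift along z exceeds the 0.5 mm tolerance.
T_cal, T_new = np.eye(4), np.eye(4)
T_new[2, 3] = 0.0008
print(tcp_needs_recalibration(T_cal, T_new))     # True
```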
The following are embodiments of the apparatus of the invention, which may be used to carry out the method embodiments of the invention.
FIG. 5 is a block diagram illustrating an apparatus for autonomously calibrating the tool center point of a robot according to an exemplary embodiment. The apparatus may be a terminal or part of a terminal and may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in FIG. 5, the apparatus includes:
the acquisition module 501, configured to acquire a plurality of groups of images captured by the vision sensor, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
the first processing module 502, configured to identify preset feature points of the tool in the images corresponding to the different configurations and to output a plurality of groups of feature point data corresponding to the different configurations, where the feature points include the tool center point;
the second processing module 503, configured to calibrate pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system.
In an embodiment, the first processing module 502 is configured to:
use a convolutional neural network model to identify and extract the preset feature points of the tool from the images.
In an embodiment, the first processing module 502 is configured to:
process the left-eye image and the right-eye image captured by the binocular stereo vision sensor separately: scale each image to a preset size, input it into the convolutional neural network model for inference, and output feature point data.
In an embodiment, the second processing module 503 is configured to:
obtain the parameters of the vision sensor through calibration;
acquire, based on the parameters of the vision sensor, the plurality of groups of pose data of the feature points in the vision sensor coordinate system, the pose data including the pose of the tool center point in the vision sensor coordinate system;
and calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
In an embodiment, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data includes:
The following pose relationship is determined:

$${}^{W}T_{C} \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

transforming the pose relationship into:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_{C} = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

and solving it by a least squares method based on the Kronecker-product transformation to obtain ${}^{W}T_{C}$ and ${}^{E}T_{P_i}$;

wherein ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the vision sensor coordinate system; k denotes the configuration of the robot, k = 1, 2, …, m; i denotes a feature point, i = 0, 1, …, n, where i = 0 denotes the tool center point.
In another embodiment of the application, there is also provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for autonomous calibration of a tool center point for a robot as described in any of the above.
In another embodiment of the present application, there is also provided an apparatus for autonomously calibrating the tool center point of a robot, which may include:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a plurality of groups of images captured by a vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identify preset feature points of the tool in the images corresponding to the different configurations and output a plurality of groups of feature point data corresponding to the different configurations, wherein the feature points include the tool center point;
and calibrate pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system.
It should be noted that, the specific implementation of the processor in this embodiment may refer to the corresponding content in the foregoing, which is not described in detail herein.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method for autonomously calibrating a tool center point for a robot, comprising:
acquiring a plurality of groups of images captured by a binocular stereo vision sensor, wherein the groups of images are obtained by the binocular stereo vision sensor photographing the tool in different poses under different configurations of the robot;
scaling the left-eye image and the right-eye image captured by the binocular stereo vision sensor separately to a preset size and inputting them into a convolutional neural network model for processing, so as to identify preset feature points of the tool in the images corresponding to the different configurations and to output a plurality of groups of feature point data corresponding to the different configurations, wherein the preset size is not more than 512×512, the tool comprises a welding gun, a glue gun, or a clamp, and the feature points include the tool center point;
calibrating pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system, wherein:
the following pose relationship is determined:

$${}^{W}T_{C} \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

the pose relationship is transformed into:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_{C} = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

and solved by a least squares method based on the Kronecker-product transformation to obtain ${}^{W}T_{C}$ and ${}^{E}T_{P_i}$;

wherein ${}^{W}T_{C}$ is the pose description of the binocular stereo vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the binocular stereo vision sensor coordinate system; k denotes the configuration of the robot, k = 1, 2, …, m; i denotes a feature point, i = 0, 1, …, n, where i = 0 denotes the tool center point.
2. The method according to claim 1, wherein calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system includes:
obtaining the parameters of the vision sensor through calibration;
acquiring, based on the parameters of the vision sensor, the plurality of groups of pose data of the feature points in the vision sensor coordinate system, the pose data including the pose of the tool center point in the vision sensor coordinate system;
and calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
3. An apparatus for autonomously calibrating a tool center point of a robot, the apparatus comprising:
the acquisition module, configured to acquire a plurality of groups of images captured by a binocular stereo vision sensor, wherein the groups of images are obtained by the binocular stereo vision sensor photographing the tool in different poses under different configurations of the robot;
the first processing module, configured to scale the left-eye image and the right-eye image captured by the binocular stereo vision sensor separately to a preset size and to input them into a convolutional neural network model for processing, so as to identify preset feature points of the tool in the images corresponding to the different configurations and to output a plurality of groups of feature point data corresponding to the different configurations, wherein the preset size is not more than 512×512, the tool comprises a welding gun, a glue gun, or a clamp, and the feature points include the tool center point;
the second processing module, configured to calibrate pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system, wherein:
the following pose relationship is determined:

$${}^{W}T_{C} \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

the pose relationship is transformed into:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_{C} = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

and solved by a least squares method based on the Kronecker-product transformation to obtain ${}^{W}T_{C}$ and ${}^{E}T_{P_i}$;

wherein ${}^{W}T_{C}$ is the pose description of the binocular stereo vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the binocular stereo vision sensor coordinate system; k denotes the configuration of the robot, k = 1, 2, …, m; i denotes a feature point, i = 0, 1, …, n, where i = 0 denotes the tool center point.
4. The apparatus of claim 3, wherein the second processing module is configured to:
obtain the parameters of the vision sensor through calibration;
acquire, based on the parameters of the vision sensor, the plurality of groups of pose data of the feature points in the vision sensor coordinate system, the pose data including the pose of the tool center point in the vision sensor coordinate system;
and calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
5. An apparatus for autonomously calibrating a tool center point of a robot, comprising:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a plurality of groups of images captured by a binocular stereo vision sensor, wherein the groups of images are obtained by the binocular stereo vision sensor photographing the tool in different poses under different configurations of the robot;
scaling the left-eye image and the right-eye image captured by the binocular stereo vision sensor separately to a preset size and inputting them into a convolutional neural network model for processing, so as to identify preset feature points of the tool in the images corresponding to the different configurations and to output a plurality of groups of feature point data corresponding to the different configurations, wherein the preset size is not more than 512×512, the tool comprises a welding gun, a glue gun, or a clamp, and the feature points include the tool center point;
calibrating pose data of the tool center point in the tool mounting position coordinate system based on the plurality of groups of feature point data corresponding to the different configurations and the corresponding plurality of groups of pose data of the tool mounting position in the world coordinate system, wherein:
the following pose relationship is determined:

$${}^{W}T_{C} \cdot {}^{C}T_{P_i^k} = {}^{W}T_{E_k} \cdot {}^{E}T_{P_i}$$

the pose relationship is transformed into:

$$({}^{W}T_{E_k})^{-1} \cdot {}^{W}T_{C} = {}^{E}T_{P_i} \cdot ({}^{C}T_{P_i^k})^{-1}$$

and solved by a least squares method based on the Kronecker-product transformation to obtain ${}^{W}T_{C}$ and ${}^{E}T_{P_i}$;

wherein ${}^{W}T_{C}$ is the pose description of the binocular stereo vision sensor in the world coordinate system, ${}^{W}T_{E_k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{P_i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{P_i^k}$ is the pose description of a tool feature point in the binocular stereo vision sensor coordinate system; k denotes the configuration of the robot, k = 1, 2, …, m; i denotes a feature point, i = 0, 1, …, n, where i = 0 denotes the tool center point.
6. A computer readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the steps of the method of claim 1 or 2.
CN202311416497.0A 2023-10-29 Method and device for autonomously calibrating tool center point of robot - Active CN117283555B (en)

Priority Applications (1)

CN202311416497.0A (priority date and filing date: 2023-10-29) - Method and device for autonomously calibrating tool center point of robot

Publications (2)

CN117283555A (application), published 2023-12-26
CN117283555B (grant), published 2024-06-11

Family ID: 89239128 (one family application, CN; status: Active)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104827480A (en) * 2014-02-11 2015-08-12 泰科电子(上海)有限公司 Automatic calibration method of robot system
CN107192331A (en) * 2017-06-20 2017-09-22 佛山市南海区广工大数控装备协同创新研究院 A kind of workpiece grabbing method based on binocular vision
CN108527360A (en) * 2018-02-07 2018-09-14 唐山英莱科技有限公司 A kind of location position system and method
CN109029257A (en) * 2018-07-12 2018-12-18 中国科学院自动化研究所 Based on stereoscopic vision and the large-scale workpiece pose measurement system of structure light vision, method
CN109900207A (en) * 2019-03-12 2019-06-18 精诚工科汽车***有限公司 The tool center point scaling method and system of robot vision tool
CN110640745A (en) * 2019-11-01 2020-01-03 苏州大学 Vision-based robot automatic calibration method, equipment and storage medium
CN113524194A (en) * 2021-04-28 2021-10-22 重庆理工大学 Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN114001653A (en) * 2021-11-01 2022-02-01 亿嘉和科技股份有限公司 Calibration method for central point of robot tool
CN114310881A (en) * 2021-12-23 2022-04-12 中国科学院自动化研究所 Calibration method and system for mechanical arm quick-change device and electronic equipment
CN114310880A (en) * 2021-12-23 2022-04-12 中国科学院自动化研究所 Mechanical arm calibration method and device
CN115179294A (en) * 2022-08-02 2022-10-14 深圳微美机器人有限公司 Robot control method, system, computer device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4298757B2 (en) * 2007-02-05 2009-07-22 ファナック株式会社 Robot mechanism calibration apparatus and method
US20230278224A1 (en) * 2022-03-07 2023-09-07 Path Robotics, Inc. Tool calibration for manufacturing robots




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant