CN112598752B - Calibration method and operation method based on visual recognition - Google Patents


Info

Publication number: CN112598752B (grant); application published as CN112598752A
Application number: CN202011545117.XA
Authority: CN (China)
Prior art keywords: camera, end flange, center, tool, alpha
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 李家清, 石金博, 陈晓聪, 邬荣飞
Current assignee: QKM Technology Dongguan Co Ltd
Original assignee: QKM Technology Dongguan Co Ltd
Application filed by QKM Technology Dongguan Co Ltd; priority to CN202011545117.XA

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a calibration method and a working method based on visual recognition. The calibration method based on visual recognition comprises: acquiring the camera TOOL_C; determining the conversion relation between the pixel coordinate system and the camera coordinate system when the end flange is in a first fixed angular attitude α1; photographing at least one feature point on the marker; obtaining the positional relation between the feature points and the camera center at the time of photographing; obtaining the angle ROLL-V1 of the marker at calibration; acquiring the position P1 of the end flange when the camera center is aligned with the feature point or with the set relative position among the feature points; acquiring the position P_T and angular attitude α_T of the end flange when the working tool reaches the working position in the desired attitude; and obtaining the transformation relation offset according to P1, α1, P_T, α_T. The invention enables working equipment with a visual recognition function to meet working requirements when the photographing position and the working position do not coincide during work.

Description

Calibration method and operation method based on visual recognition
Technical Field
The invention relates to the technical field of industrial automation, and in particular to a calibration method and a working method based on visual recognition.
Background
In industrial automation equipment, robots, and other motion mechanisms with a visual recognition function, a camera and a working tool are fixed to a rotating shaft and rotate with it. The camera is often required to photograph a product or a working point in a plane to obtain its position or the working position, after which the working tool travels to the working position to perform the work, such as grasping, placing, or welding the product. Sometimes the photographing position and the working position coincide, but in many cases, owing to differing conditions of the product or working point, the two positions do not coincide.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a calibration method based on visual recognition, so that working equipment with a visual recognition function can meet working requirements when the photographing position and the working position do not coincide during work.
The invention further provides a working method based on the above calibration method based on visual recognition.
The calibration method based on visual recognition according to the embodiment of the first aspect of the invention is applied to working equipment. The working equipment comprises an end effector and a motion mechanism; the end effector comprises an end flange, a camera, and a working tool, the camera and the working tool being mounted on the end flange; the motion mechanism can drive the end flange to move freely in the XY plane and to rotate about its own axis. The calibration method comprises the following steps:
determining the positional relation between the camera center and the end flange to obtain the camera TOOL_C; according to the camera TOOL_C, determining the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center when the end flange is in a first fixed angular attitude α1; with the end flange in the first fixed angular attitude α1, photographing at least one feature point on the marker, the relative position between the marker and the working position being fixed, and recording the current end flange position when each feature point is photographed; acquiring the positional relation between each feature point and the camera center at the time of photographing, according to the image of each feature point and the conversion relation T_PC; acquiring the end flange position P1 when the camera center is aligned with the feature point, or with the set relative position among the feature points, according to the positional relation between each feature point and the camera center, the current end flange position at each shot, and the camera TOOL_C; acquiring the angle ROLL-V1 of the marker at calibration from the images of the feature points; acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude; and obtaining the transformation relation offset between (P1, α1) and (P_T, α_T) according to P1, α1, P_T, α_T.
The calibration method based on visual recognition according to the embodiment of the first aspect of the invention has at least the following beneficial effects:
the embodiment of the invention firstly obtains the TOOL of the camera C Then according to the TOOL of the camera C Acquiring a first fixed angle attitude alpha of any one end flange 1 The transformation relation between the pixel coordinate system and the physical position taking the camera center as a reference is adopted, then the characteristic points are photographed, the relative position relation between the characteristic points and the camera center can be obtained according to the coordinates of the characteristic points in the pixel coordinate system, and then the TOOL of the camera is adopted C And acquiring a relative positional relationship between the acquired feature point and the camera center while maintaining the first fixed angular attitude α 1 When the camera center is aligned with the feature points or the set relative positions among the feature points, the position P of the center of the flange tail end 1 Meanwhile, the angle ROLL-V when the marker is marked can be obtained according to the image obtained by shooting the feature points 1 Then, when the working tool reaches the working position in the expected posture, the position P of the tail end flange is obtained T And end flange angular attitude alpha T I.e. 
according to P 1 、α 1 、P T 、α T The method can obtainP 1 、α 1 ) And (P) T 、α T ) Transforming the relation offset so as to complete the calibration process; according to the transformation relation offset obtained during calibration and the angle ROLL-V of the calibration of the marker 1 In the subsequent actual operation process, after the characteristic points fixed at the relative positions with the markers are photographed, the position and the angle posture of the tail end flange can be known according to the photographed image when the operation tool reaches the operation position in the expected posture, and the operation tool can be driven to come to the operation position to finish the operation in the proper posture; therefore, after the operation equipment is calibrated according to the calibration method based on visual identification, the operation equipment can adapt to the operation requirement under the condition that the photographing position and the operation position are not coincident.
According to some embodiments of the invention, in the calibration method based on visual recognition, determining the positional relation between the camera center and the end flange to obtain the camera TOOL_C comprises the following steps:
at the end flange at a certain angular attitude alpha 3 Respectively photographing the calibration object from at least three different position points, and obtaining an angle posture alpha according to the position of the tail end flange and the pixel coordinates of the calibration object in the image when photographing each time 3 The transformation relation between the lower pixel coordinates and the flange end position, and the position P required to be reached by the end flange when the calibration object is positioned in the center of the image is obtained according to the transformation relation 3
At the end flange at a certain angular attitude alpha 4 Respectively photographing the calibration object from at least three different position points, and obtaining an angle posture alpha according to the position of the tail end flange and the pixel coordinates of the calibration object in the image when photographing each time 4 The transformation relation between the lower pixel coordinates and the flange end position, and the position P required to be reached by the end flange when the calibration object is positioned in the center of the image is obtained according to the transformation relation 4 The method comprises the steps of carrying out a first treatment on the surface of the Wherein alpha is 4 Not equal to alpha 3
According to alpha 3 And P 3 Alpha and alpha 4 And P 4 Calculate the TOOL of the camera C
According to the calibration method based on visual recognition of some embodiments of the invention, calculating the camera TOOL_C according to α3 and P3, and α4 and P4 comprises the following steps:

substituting the relevant values of α3, P3, α4, P4 into the mapping relation f_TOOL and calculating the camera TOOL_C; wherein (X3, Y3) are the position coordinates of P3 and (X4, Y4) are the position coordinates of P4.
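The formula image accompanying the mapping relation f_TOOL does not survive in this text. Under the planar model used throughout (flange position P, angular attitude α, camera offset TOOL_C expressed in the flange frame), and consistent with the T3 = T4 condition described in the following embodiment, a hedged reconstruction is:

```latex
P_3 + R(\alpha_3)\,\mathrm{TOOL}_C = P_4 + R(\alpha_4)\,\mathrm{TOOL}_C
\quad\Longrightarrow\quad
\mathrm{TOOL}_C = \bigl(R(\alpha_3)-R(\alpha_4)\bigr)^{-1}\,(P_4-P_3),
\qquad
R(\alpha)=\begin{pmatrix}\cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha\end{pmatrix}
```

with P3 = (X3, Y3) and P4 = (X4, Y4); the inverse exists whenever α3 ≠ α4 (mod 2π).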
According to the calibration method based on visual recognition of some embodiments of the invention, calculating the camera TOOL_C according to α3 and P3, and α4 and P4 comprises the following steps:

acquiring the expression T3 of the camera center coordinates in the motion mechanism coordinate system, in terms of the camera TOOL_C parameters, in the state where the end flange is at position P3 with angular attitude α3;

acquiring the expression T4 of the camera center coordinates in the motion mechanism coordinate system, in terms of the camera TOOL_C parameters, in the state where the end flange is at position P4 with angular attitude α4;

establishing the equation T3 = T4 from the fact that the coordinate position of the camera center in the motion mechanism coordinate system is unchanged between the two states, and solving it to obtain the camera TOOL_C;

wherein (X3, Y3) are the position coordinates of P3 and (X4, Y4) are the position coordinates of P4.
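The T3 = T4 condition can be sketched numerically. The following Python fragment (NumPy; all names are illustrative, not from the patent) assumes a planar model in which the camera center sits at P + R(α) · TOOL_C for flange position P and angular attitude α:

```python
import numpy as np

def rot(a_deg):
    """2x2 rotation matrix for an angle in degrees."""
    a = np.radians(a_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def camera_tool_offset(p3, a3, p4, a4):
    """Solve P3 + R(a3) @ t = P4 + R(a4) @ t for the flange-to-camera
    offset t (the TOOL_C parameters); requires a3 != a4 (mod 360)."""
    A = rot(a3) - rot(a4)
    b = np.asarray(p4, float) - np.asarray(p3, float)
    return np.linalg.solve(A, b)

# Synthetic check with a known offset t = (10, 5):
t_true = np.array([10.0, 5.0])
cam = np.array([100.0, 50.0])        # fixed camera-centre world position
a3, a4 = 0.0, 90.0
p3 = cam - rot(a3) @ t_true          # flange poses that both place the
p4 = cam - rot(a4) @ t_true          # camera centre at `cam`
t = camera_tool_offset(p3, a3, p4, a4)
```

Since the camera center is the same world point in both states, the two expressions are equated and the resulting 2x2 linear system is solved; it is invertible whenever the two angular attitudes differ.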
According to the calibration method based on visual recognition of some embodiments of the invention, determining, according to the camera TOOL_C, the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center when the end flange is in the first fixed angular attitude α1 comprises the following steps: with the end flange in the first fixed angular attitude α1, photographing the calibration point from at least 3 different positions, such that among the corresponding pixel coordinates of the calibration point in the images at least three points are not collinear; recording the current end flange position at each shot of the calibration point; converting, according to the end flange position recorded at each shot and the camera TOOL_C, to obtain the camera center position at each shot of the calibration point; and acquiring the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, from the pixel coordinates of the calibration point at each shot and the camera center position at each shot.
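The conversion relation T_PC can be modeled, under the simplifying assumption of an affine pixel-to-physical map at a fixed working distance, as a least-squares fit over the non-collinear correspondences; this sketch uses illustrative names and is not the patent's implementation:

```python
import numpy as np

def fit_pixel_to_physical(pixels, physical):
    """Least-squares affine map  physical ~= A @ pixel + b  (a stand-in
    for T_PC); needs >= 3 correspondences with non-collinear pixels."""
    px = np.asarray(pixels, float)
    ph = np.asarray(physical, float)
    M = np.hstack([px, np.ones((len(px), 1))])      # rows: [u, v, 1]
    coeff, *_ = np.linalg.lstsq(M, ph, rcond=None)  # shape (3, 2)
    return coeff[:2].T, coeff[2]                    # A (2x2), b (2,)

# Synthetic data: 0.1 mm/px scale plus a (5, 2) mm shift
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
phy = [(5.0, 2.0), (15.0, 2.0), (5.0, 12.0), (15.0, 12.0)]
A, b = fit_pixel_to_physical(pix, phy)
```

With exact synthetic data the fit recovers the scale and shift exactly; with real measurements the least-squares form averages out pixel noise across the shots.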
According to the calibration method based on visual recognition of some embodiments of the invention, if there are two feature points, acquiring the end flange position P1 when the camera center is aligned with the set relative position among the feature points, according to the positional relation between each feature point and the camera center at photographing, the current end flange position at each shot, and the camera TOOL_C, comprises the following steps: acquiring the two camera center positions P_O1 and P_O2 at which the camera center is aligned with the two feature points respectively, according to the positional relation between each feature point and the camera center at photographing, the camera TOOL_C, and the current end flange position at each shot; obtaining the position of the center point O of the line P_O1-P_O2 from the obtained P_O1 and P_O2; and acquiring the end flange position P1 when the camera center is aligned with the center point O, based on the position of O and the camera TOOL_C.
According to the calibration method based on visual recognition of some embodiments of the invention, when there are a plurality of feature points, acquiring the angle ROLL-V1 of the marker at calibration from the images of the feature points comprises: obtaining, from the obtained P_O1 and P_O2, the angle of the line P_O1-P_O2 against the horizontal center line of the camera, i.e. the angle ROLL-V1 of the marker at calibration.
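A minimal sketch (illustrative names, planar model assumed) of the center point O and the calibration angle ROLL-V1 computed from the two aligned camera-center positions:

```python
import numpy as np

def midpoint_and_roll(p_o1, p_o2):
    """Center point O of P_O1-P_O2 and the angle ROLL-V1 of that line
    against the camera's horizontal axis, in degrees."""
    q1, q2 = np.asarray(p_o1, float), np.asarray(p_o2, float)
    o = (q1 + q2) / 2.0
    d = q2 - q1
    return o, np.degrees(np.arctan2(d[1], d[0]))

o, roll_v1 = midpoint_and_roll((0.0, 0.0), (20.0, 20.0))
```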
According to the calibration method based on visual recognition of some embodiments of the invention, if there is one feature point, acquiring the end flange position P1 when the camera center is aligned with the feature point, according to the positional relation between the feature point and the camera center at photographing, the current end flange position at the shot, and the camera TOOL_C, comprises the following steps: obtaining the camera center position when the camera center is aligned with the feature point, according to the camera TOOL_C, the positional relation between the feature point and the camera center at photographing, and the current end flange position at the shot; and acquiring the end flange position P1 when the camera center is aligned with the feature point, according to that camera center position and the camera TOOL_C.
According to the calibration method based on visual recognition of some embodiments of the invention, acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude comprises the following steps: controlling, by manual teaching, the motion mechanism to drive the end flange so that the working tool reaches the working position in the desired attitude, and recording the current end flange position P_T and angular attitude α_T.
According to the calibration method based on visual recognition of some embodiments of the invention, if the marker is a product, or a carrier carrying a product, acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude comprises the following steps: with the end flange in the first fixed angular attitude α1, photographing the product with the camera to obtain a product image; acquiring the working position of the product from the product image, the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C; and acquiring, from the working position of the product, the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude.
According to the calibration method based on visual recognition of some embodiments of the invention, if the marker is a carrier for receiving a product and the carrier is provided with a placement structure for placing the product, acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude comprises the following steps: with the end flange in the first fixed angular attitude α1, photographing the placement structure with the camera to obtain an image of the placement structure; acquiring the working position of the product from the image of the placement structure, the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C; and acquiring, from the working position of the product, the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude.
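As an illustration of these two embodiments, a sketch under the assumptions that T_PC is an affine map (A, b) from pixel coordinates to a physical displacement from the camera center, and that the camera center lies at the flange position plus R(α1) · TOOL_C; all names are illustrative, not from the patent:

```python
import numpy as np

def rot(a_deg):
    """2x2 rotation matrix for an angle in degrees."""
    a = np.radians(a_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def product_position(flange_p, alpha1, tool_c, A, b, pixel):
    """Working position of the product: the camera centre at the shot plus
    the physical displacement T_PC assigns to the product's pixel coords."""
    cam_centre = np.asarray(flange_p, float) + rot(alpha1) @ np.asarray(tool_c, float)
    return cam_centre + (A @ np.asarray(pixel, float) + b)

# 0.1 mm/px scale, no pixel offset, camera 10 mm ahead of the flange:
pos = product_position((0.0, 0.0), 0.0, (10.0, 0.0),
                       np.array([[0.1, 0.0], [0.0, 0.1]]),
                       np.array([0.0, 0.0]), (30.0, -20.0))
```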
According to the calibration method based on visual recognition of some embodiments of the invention, the marker corresponds to N working positions, N ≥ 1; wherein acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude comprises the following steps: for each working position, acquiring the position P_Tn and angular attitude α_Tn of the end flange 20 when the working tool 40 reaches that working position in the desired attitude, wherein N ≥ n ≥ 1.
According to the calibration method based on visual recognition of some embodiments of the invention, obtaining the transformation relation offset between (P1, α1) and (P_T, α_T) according to P1, α1, P_T, α_T comprises the following steps: obtaining the transformation relation offset-n between (P1, α1) and each group (P_Tn, α_Tn), according to P1, α1 and each recorded group of end flange position P_Tn and end flange angular attitude α_Tn.
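One consistent way to represent the transformation relation offset (an assumption; the patent does not fix a representation) is a translation expressed in the flange frame at attitude α1 plus an angle increment, computed once per working position to give offset-n; names are illustrative:

```python
import numpy as np

def rot(a_deg):
    """2x2 rotation matrix for an angle in degrees."""
    a = np.radians(a_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def make_offset(p1, a1, pt, at):
    """Offset between (P1, a1) and (P_T, a_T): the translation expressed
    in the flange frame at attitude a1, plus the angle increment."""
    dp = rot(-a1) @ (np.asarray(pt, float) - np.asarray(p1, float))
    return dp, at - a1

# One offset per working position n gives offset-n:
offsets = [make_offset((0.0, 0.0), 0.0, pt, at)
           for pt, at in [((30.0, 40.0), 15.0), ((-5.0, 12.0), 90.0)]]
dp, da = offsets[0]
```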
The working method based on visual recognition according to the embodiment of the second aspect of the invention is applied to working equipment. The working equipment comprises an end effector and a motion mechanism; the end effector comprises an end flange, a camera, and a working tool, the camera and the working tool being mounted on the end flange; the motion mechanism can drive the end flange to move freely in the XY plane and to rotate about its own axis. The working method comprises the following steps:
determining the positional relation between the camera center and the end flange to obtain the camera TOOL_C; according to the camera TOOL_C, determining the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center when the end flange is in a first fixed angular attitude α1; with the end flange in the first fixed angular attitude α1, photographing a single feature point on the marker, the relative position between the marker and the working position being fixed, and recording the current end flange position at the shot; acquiring the positional relation between the feature point and the camera center at photographing, according to the image of the feature point and the conversion relation T_PC; acquiring the end flange position P1 when the camera center is aligned with the feature point, according to the positional relation between the feature point and the camera center, the current end flange position at the shot, and the camera TOOL_C; acquiring the angle ROLL-V1 of the marker at calibration from the image of the feature point; acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude; and obtaining the transformation relation offset between (P1, α1) and (P_T, α_T) according to P1, α1, P_T, α_T;
with the end flange in the first fixed angular attitude α1, photographing the feature point on the marker again and recording the current end flange position coordinates; obtaining the positional relation between the feature point and the camera center at the re-shot, based on the new image of the marker and the conversion relation T_PC; acquiring the camera center position P5 at which the camera center is again aligned with the feature point, according to that positional relation, the current end flange position at the re-shot, and the camera TOOL_C; acquiring the angle ROLL-V2 of the marker at working time from the new image of the marker; acquiring, according to the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, the end flange position P6 in the state where the camera center is at P5 and the end flange is in the angular attitude α1 + (ROLL-V2 - ROLL-V1); obtaining the end flange position P7 and angular attitude α7 at the working position according to P6, the angular attitude α1 + (ROLL-V2 - ROLL-V1), and the transformation relation offset; and moving the end flange to P7, rotating the end flange to the angular attitude α7, and performing the work at the working position with the working tool.
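The working steps above can be sketched end to end (planar model, illustrative names, and the offset assumed to be represented as a flange-frame translation plus angle increment):

```python
import numpy as np

def rot(a_deg):
    """2x2 rotation matrix for an angle in degrees."""
    a = np.radians(a_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def runtime_working_pose(p5, tool_c, a1, roll_v1, roll_v2, dp, da):
    """Re-shot pose to working pose.

    a6 rotates the flange so the marker angle matches calibration; P6 keeps
    the camera centre at P5 under that attitude; the calibrated offset
    (dp, da) then yields the working pose (P7, alpha7).
    """
    a6 = a1 + (roll_v2 - roll_v1)
    p6 = np.asarray(p5, float) - rot(a6) @ np.asarray(tool_c, float)
    p7 = p6 + rot(a6) @ np.asarray(dp, float)
    return p7, a6 + da

# Marker did not rotate (ROLL-V2 == ROLL-V1) and TOOL_C is zero for clarity:
p7, a7 = runtime_working_pose((100.0, 50.0), (0.0, 0.0),
                              0.0, 0.0, 0.0, (10.0, -5.0), 15.0)
```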
The working method based on visual recognition according to the embodiment of the second aspect of the invention has at least the following beneficial effects: the working method comprises the calibration method of the embodiment of the first aspect and the corresponding working steps for the case of one feature point. In the working steps, the end flange position P6 and angular attitude α1 + (ROLL-V2 - ROLL-V1) are found at which the camera center is again aligned with the feature point and the angle of the marker in the camera field of view agrees with its angle in the camera field of view 60 during calibration; then, by means of the transformation relation offset obtained during calibration, the end flange position P7 and angular attitude α7 can be obtained at which the working tool arrives at the actual working position in the desired attitude, so the motion mechanism can drive the end flange to that position and angular attitude and the working tool can complete the work smoothly. Thus, the working method based on visual recognition according to the embodiment of the second aspect of the invention can complete the work when the photographing position and the working position do not coincide.
The working method based on visual recognition according to the embodiment of the third aspect of the invention is applied to working equipment. The working equipment comprises an end effector and a motion mechanism; the end effector comprises an end flange, a camera, and a working tool, the camera and the working tool being mounted on the end flange; the motion mechanism can drive the end flange to move freely in the XY plane and to rotate about its own axis. The working method comprises the following steps:
determining the positional relation between the camera center and the end flange to obtain the camera TOOL_C; according to the camera TOOL_C, determining the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center when the end flange is in a first fixed angular attitude α1; with the end flange in the first fixed angular attitude α1, photographing a plurality of feature points on the marker, the relative position between the marker and the working position being fixed, and recording the current end flange position when each feature point is photographed; acquiring the positional relation between each feature point and the camera center at photographing, according to the image of each feature point and the conversion relation T_PC; acquiring the end flange position P1 when the camera center is aligned with the set relative position among the feature points, according to the positional relation between each feature point and the camera center, the current end flange position at each shot, and the camera TOOL_C; acquiring the angle ROLL-V1 of the marker at calibration from the images of the feature points; acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired attitude; and obtaining the transformation relation offset between (P1, α1) and (P_T, α_T) according to P1, α1, P_T, α_T;
with the end flange in the first fixed angular attitude α1, photographing each feature point on the marker again and recording the current end flange position at each shot; obtaining the positional relation between each feature point and the camera center at the re-shot, according to the new image of each feature point and the conversion relation T_PC; acquiring the camera center position P5 at which the camera center is again aligned with the set relative position among the feature points, according to the positional relation between each feature point and the camera center at the re-shot, the current end flange position at each re-shot, and the camera TOOL_C; acquiring the angle ROLL-V2 of the marker at working time from the new images of the feature points; acquiring, according to the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, the end flange position P6 in the state where the camera center is at P5 and the end flange is in the angular attitude α1 + (ROLL-V2 - ROLL-V1); obtaining the end flange position P7 and angular attitude α7 at the working position according to P6, the angular attitude α1 + (ROLL-V2 - ROLL-V1), and the transformation relation offset; and moving the end flange to P7, rotating the end flange to the angular attitude α7, and performing the work at the working position with the working tool.
According to the visual-recognition-based working method of the embodiment of the third aspect of the invention, at least the following beneficial effects are obtained: the working method comprises the calibration method of the embodiment of the first aspect and the corresponding working steps for the case of a plurality of feature points. In the working step, the position P6 and angular attitude α1 + (ROLL-V2 − ROLL-V1) of the end flange are found at the moment when the camera center is again aligned with the set relative position between the feature points and the angle of the marker in the camera view is consistent with its angle during calibration; then, by means of the conversion relation offset obtained during calibration, the position P7 and angular attitude α7 of the end flange when the working tool arrives at the actual working position in the desired posture can be obtained, so that the movement mechanism can drive the end flange to that position and angular attitude and the working tool can smoothly complete the operation. Thus, the visual-recognition-based working method of the embodiment of the third aspect of the invention can complete the work even when the photographing position and the working position do not coincide.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of a calibration method based on visual recognition in an embodiment of the first aspect of the present invention;
FIG. 2 is a schematic diagram of a working apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the movement paths of the camera center and the corresponding nine photo spots of the camera center in step S210;
Fig. 4 is a schematic diagram of a 3×3 dot matrix in which the dot corresponding patterns are arranged in the pixel coordinate system in step S210;
fig. 5 is a process diagram of steps S300 to S700 when there are two feature points;
fig. 6 is a process diagram of steps S300 to S700 when the feature points are single;
FIG. 7 is a flow chart of a visual recognition-based working method according to the embodiment of the second aspect;
FIG. 8 is a process diagram of S900b to S1400b in FIG. 7;
FIG. 9 is a flow chart of a visual recognition-based working method according to the embodiment of the third aspect;
fig. 10 is a process diagram of S900b to S1400b in fig. 9.
Reference numerals:
motion mechanism 10, end flange 20, camera 30, work tool 40, calibration point 50, camera field of view 60, marker 70, work position 710, product 80, feature point 90.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that the direction or positional relationship indicated with respect to the description of the orientation, such as up, down, left, right, front, rear, etc., is based on the direction or positional relationship shown in the drawings, is merely for convenience of describing the present invention and simplifying the description, and does not indicate or imply that the apparatus or element to be referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
A visual recognition-based calibration method according to an embodiment of the first aspect of the present invention, which is mainly applied to the following working apparatuses, will be described below with reference to fig. 1 to 6.
Referring to fig. 1 and 2, the working device includes an end effector and a movement mechanism 10, the end effector includes an end flange 20, a camera 30 and a working tool 40, the camera 30 and the working tool 40 are disposed on the end flange 20, and the movement mechanism 10 can drive the end flange 20 to freely move on an XY plane and can drive the end flange 20 to rotate around its own axis.
Depending on the type of the work tool 40, for example, the work tool 40 may be a clamping jaw, a welding gun, a detection device, or the like, and correspondingly, the work equipment may be a loading/unloading and material handling equipment for taking and placing the product 80, a welding equipment for welding, a detection equipment for detecting, or the like, and the work tool 40 and the work equipment are not particularly limited.
Likewise, the movement mechanism 10 may be of various types, such as an articulated robot, or other automated device having a rotary driving module and a plurality of linear driving modules, and the like, and the movement mechanism 10 is not particularly limited herein, as long as it can drive the end flange 20 to freely move on the XY plane and can drive the end flange 20 to rotate about its own axis.
The calibration method based on visual identification comprises the following steps:
S100, determining the positional relationship between the camera center and the end flange 20 to obtain the camera TOOL_C;
S200, according to the camera TOOL_C, determining the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center, with the end flange 20 in a first fixed angular attitude α1;
S300, with the end flange 20 in the first fixed angular attitude α1, photographing at least one feature point 90 on the marker 70, wherein the relative position between the marker 70 and the working position 710 is fixed, and recording the current end flange 20 position when each feature point 90 is photographed;
S400, acquiring the positional relationship between each feature point 90 and the camera center at the time of shooting, according to the image obtained for each feature point 90 and the conversion relation T_PC;
S500, according to the positional relationship between each feature point 90 and the camera center at the time of shooting, the current end flange position at each shot, and the camera TOOL_C, acquiring the position P1 of the end flange 20 when the camera center is aligned with the feature point 90 or with the set relative position between the feature points 90;
S600, acquiring the angle ROLL-V1 of the marker 70 at calibration from the images of the feature points 90;
S700, acquiring the position PT and angular attitude αT of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture;
S800, according to P1, α1, PT, αT, obtaining the conversion relation offset between (P1, α1) and (PT, αT).
It will be appreciated that the marker 70 may be a product 80, a carrier already carrying a product 80, a carrier waiting to receive a product 80, or the like. For example, when the marker 70 is a product 80, the working position 710 is located on the product 80 and may be, for example, a welding position, a detection position, or a grasping position on the product 80; when the marker 70 is a carrier already carrying a product 80, the working position 710 is located on the product 80 in the carrier and may likewise be a welding position, a detection position, or a gripping position of the product 80; and when the marker 70 is a carrier waiting to receive a product 80, the working position 710 may be the placement position of the corresponding product 80 on the carrier, the corresponding operation being to place the product 80 on the carrier.
It will be appreciated that, regarding the angle ROLL-V1 of the marker 70 at calibration and the angle ROLL-V2 of the marker 70 at operation described later, what is ultimately used is the change in the marker's angle, namely ROLL-V2 − ROLL-V1; thus ROLL-V1 and ROLL-V2 may be the angle of the marker 70 in the image coordinate system, or the angle of the marker 70 in the movement mechanism coordinate system, or, with the end flange 20 in the first fixed angular attitude α1, the angle of the marker 70 in the camera coordinate system. For example, the angle may be the angle between a straight line of fixed relative position on the marker 70 and a coordinate axis of the pixel coordinate system, or a coordinate axis of the motion mechanism coordinate system, or the horizontal center line of the camera.
It can be understood that the feature points 90 may be MARK points manually provided on the product 80 or the carrier, or existing structural features on the product 80 or the carrier, for example existing holes, bosses, boundaries, or vertices, whose corresponding patterns after photographing are easy to recognize in the image.
The calibration method of this embodiment first obtains the camera TOOL_C; then, according to the camera TOOL_C, it acquires, for any first fixed angular attitude α1 of the end flange, the conversion relation between the pixel coordinate system and physical positions referenced to the camera center; the feature points 90 are then photographed, so that the relative positional relationship between each feature point 90 and the camera center can be obtained from the coordinates of the feature point 90 in the pixel coordinate system; then, according to the camera TOOL_C and the acquired relative positional relationship, the position P1 of the flange end center when the camera center is aligned with the feature point 90, or with the set relative position between the feature points 90, while maintaining the first fixed angular attitude α1, can be known; at the same time, the angle ROLL-V1 of the marker at calibration can be known from the images of the feature points 90; then the position PT and angular attitude αT of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture are acquired, so that, according to P1, α1, PT, αT, the conversion relation offset between (P1, α1) and (PT, αT) is obtained, completing the calibration process. With the conversion relation offset obtained at calibration and the marker's calibration angle ROLL-V1, in subsequent actual operation, after photographing the feature points 90 fixed relative to the marker 70, the position and angular attitude of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture can be known from the photographed images, so that the working tool 40 can be driven to the working position 710 to complete the operation in the proper posture. Therefore, after the working equipment is calibrated according to this visual-recognition-based calibration method, it can meet the operating requirement even when the photographing position and the working position 710 do not coincide. How the working tool 40 is brought to the working position 710 in the proper posture according to the conversion relation offset and the marker's calibration angle ROLL-V1 will be described in detail in the second and third aspect embodiments of the invention.
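As a concrete illustration of how the conversion relation offset between (P1, α1) and (PT, αT) might be computed at calibration and applied at run time, the following is a minimal sketch assuming planar rigid motion in the XY plane and angles in degrees; the function names and the storage form of the offset are illustrative assumptions, not taken from the patent:

```python
import math

def rot(deg):
    """2x2 rotation matrix for an angle in degrees, as a flat tuple."""
    t = math.radians(deg)
    return (math.cos(t), -math.sin(t), math.sin(t), math.cos(t))

def calibrate_offset(P1, a1, PT, aT):
    """Store the offset as the flange displacement P1 -> PT and the
    angle change a1 -> aT recorded during calibration."""
    return (PT[0] - P1[0], PT[1] - P1[1]), aT - a1

def apply_offset(P6, a1, droll, d, da):
    """P6 is the flange position found at run time with the flange at
    angle a1 + droll, where droll = ROLL_V2 - ROLL_V1. The calibrated
    displacement d is rotated by droll, since the marker has rotated by
    that amount since calibration."""
    r00, r01, r10, r11 = rot(droll)
    P7 = (P6[0] + r00 * d[0] + r01 * d[1],
          P6[1] + r10 * d[0] + r11 * d[1])
    a7 = a1 + droll + da
    return P7, a7
```

With droll = 0 (marker unrotated since calibration) this reduces to P7 = P6 + (PT − P1) and a7 = aT, as expected.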
It will be appreciated that, unless specifically stated otherwise, the position of the end flange 20 refers to the coordinates of the end flange 20 in the motion mechanism coordinate system; it is to be understood that, unless specifically described otherwise, the position of the camera center refers to the coordinates of the camera center in the movement mechanism coordinate system; the motion mechanism coordinate system may be established with any one of the fixed points on the base of the motion mechanism 10 as the origin, and the XY plane in the motion mechanism 10 is parallel to the plane in which the marker 70 is located.
It will be appreciated that step S100, determining the positional relationship between the camera center and the end flange 20 to obtain the camera TOOL_C, comprises the following steps:
S110, with the end flange 20 in a certain angular attitude α3, photographing the calibration object from at least three different position points, obtaining the transformation relation between pixel coordinates and the end flange position under the angular attitude α3 from the end flange 20 position and the pixel coordinates of the calibration object in the image at each shot, and obtaining from this transformation relation the position P3 that the end flange 20 must reach for the calibration object to be located at the image center.
It will be appreciated that α3 may be any angle, for example 0 degrees, 45 degrees, 90 degrees, 180 degrees, or the like.
Specifically, the end flange 20 is first driven by the movement mechanism 10 to the preset angular attitude α3; before photographing, the movement mechanism 10 moves the end flange 20 so that the camera 30 comes near the calibration object; at least three points (three, four, or more) are then selected near the current position, the movement mechanism 10 drives the camera to each point in turn, and a photograph is taken by the camera 30 at each point, ensuring that the calibration object is within the field of view of the camera 30 at every shot; at each photographing point, the current end flange 20 position is recorded. The current position and angular attitude α3 of the end flange 20 can both be obtained in real time from the bottom layer of the working equipment; and from each captured image, the coordinates of the calibration object's corresponding pattern in the image, i.e. the pixel coordinates of the calibration object, can be obtained by image processing.
Through the obtained coordinates of the at least three end flange 20 positions and the corresponding pixel coordinates, the transformation relation between pixel coordinates and end flange 20 center position coordinates under the angular attitude α3 can be obtained; then, from the pixel coordinates of the image center and this transformation relation, the position P3 that the end flange 20 must reach for the calibration object to be located at the image center can be obtained. The calibration object being at the image center indicates that the camera center is aligned with it; that is, this yields the position P3 of the end flange 20 when the camera center is aligned with the calibration object and the end flange 20 is at angular attitude α3.
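The transformation used in S110 can be sketched as follows, assuming an exact six-parameter affine solve from exactly three non-collinear pixel/flange-position correspondences (with more position points a least-squares fit would be used instead); all names are illustrative:

```python
def affine_from_3(pix, pos):
    """Exact affine transform pos = A @ pix + b from three
    non-collinear correspondences (pixel -> flange position)."""
    (u1, v1), (u2, v2), (u3, v3) = pix
    (x1, y1), (x2, y2), (x3, y3) = pos
    det = (u2 - u1) * (v3 - v1) - (u3 - u1) * (v2 - v1)  # != 0 if non-collinear
    a11 = ((x2 - x1) * (v3 - v1) - (x3 - x1) * (v2 - v1)) / det
    a12 = ((u2 - u1) * (x3 - x1) - (u3 - u1) * (x2 - x1)) / det
    a21 = ((y2 - y1) * (v3 - v1) - (y3 - y1) * (v2 - v1)) / det
    a22 = ((u2 - u1) * (y3 - y1) - (u3 - u1) * (y2 - y1)) / det
    b1 = x1 - a11 * u1 - a12 * v1
    b2 = y1 - a21 * u1 - a22 * v1
    return (a11, a12, b1, a21, a22, b2)

def flange_pos_for_pixel(T, uv):
    """Evaluate the affine map: the flange position that places the
    calibration object at pixel uv; uv = image center gives P3."""
    a11, a12, b1, a21, a22, b2 = T
    return (a11 * uv[0] + a12 * uv[1] + b1,
            a21 * uv[0] + a22 * uv[1] + b2)
```

Evaluating the fitted map at the image-center pixel yields P3 directly, without physically moving the flange there.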
S120, with the end flange 20 in a certain angular attitude α4, photographing the calibration object from at least three different position points, obtaining the transformation relation between pixel coordinates and the end flange position under the angular attitude α4 from the end flange 20 position and the pixel coordinates of the calibration object in the image at each shot, and obtaining from this transformation relation the position P4 that the end flange 20 must reach for the calibration object to be located at the image center; wherein α4 ≠ α3.
It will be appreciated that α4 may be any angle, for example 0 degrees, 45 degrees, 90 degrees, 180 degrees, or the like, provided that α4 ≠ α3; for ease of calculation, α3 may be chosen as 0 degrees and α4 as 180 degrees.
Specifically, step S120 is substantially the same as step S110, except that at the beginning the end flange 20 is driven by the movement mechanism 10 to the preset angular attitude α4; correspondingly, this yields the position P4 of the end flange 20 when the camera center is aligned with the calibration object and the end flange 20 is at angular attitude α4.
S130, calculating the camera TOOL_C according to α3 and P3, and α4 and P4.
It should be appreciated that, since the camera TOOL_C and the position of the calibration object in the motion mechanism coordinate system are both fixed, the camera TOOL_C can be solved. Specific ways of calculating the camera TOOL_C are given below.
In the first calculation mode, step S130, calculating the camera TOOL_C according to α3 and P3, and α4 and P4, comprises the following steps:
S131a, acquiring the expression T3 of the camera center coordinates in the motion mechanism coordinate system with respect to the camera TOOL_C parameters, in the state where the end flange 20 is at position P3 and angular attitude α3.
It can be appreciated that the camera TOOL_C describes the deviations between the camera center and the end flange 20 center in the X and Y directions of the motion mechanism coordinate system, with the end flange 20 at an angular attitude of 0 degrees; for convenience of description, these two deviations are denoted X_C and Y_C, and the camera TOOL_C is denoted P_C = [X_C, Y_C]^T; the position coordinates of P3 are (X3, Y3) and those of P4 are (X4, Y4). For ease of computation, the spatial coordinates are converted into a representation consisting of a 3×3 rotation matrix R and a 3×1 offset matrix M: P3 corresponds to the rotation matrix R3 = [[cos α3, −sin α3, 0], [sin α3, cos α3, 0], [0, 0, 1]] and the offset matrix M3 = [X3, Y3, 0]^T; P4 corresponds to R4 = [[cos α4, −sin α4, 0], [sin α4, cos α4, 0], [0, 0, 1]] and M4 = [X4, Y4, 0]^T; and the camera TOOL_C corresponds to the offset matrix M_C = [X_C, Y_C, 0]^T. The expression T3 of the camera center coordinates in the motion mechanism coordinate system with respect to the camera TOOL_C parameters is therefore:

T3 = R3·M_C + M3
S132a, acquiring the expression T4 of the camera center coordinates in the motion mechanism coordinate system with respect to the camera TOOL_C parameters, in the state where the end flange 20 is at position P4 and angular attitude α4.
It will be appreciated that, substantially as in S131a above, the expression T4 of the camera center coordinates in the motion mechanism coordinate system with respect to the camera TOOL_C parameters is: T4 = R4·M_C + M4.
S133a, establishing the equation T3 = T4 according to the fact that the coordinate position of the camera center in the motion mechanism coordinate system is unchanged between the two states, and obtaining the camera TOOL_C.
It can be understood that, since the camera center is aligned with the calibration object in both states, i.e. the coordinate position of the camera center in the motion mechanism coordinate system is unchanged, the equation T3 = T4 can be established.
From the above equation it can be derived that (since only the offset values X_C and Y_C are required here, the rotation part of the result can be ignored):

R3·M_C + M3 = R4·M_C + M4

The simplification gives:

(R3 − R4)·M_C = M4 − M3

Finally, it can be obtained that:

M_C = (R3 − R4)^(−1) × (M4 − M3)

Ignoring the height component and substituting the expressions for R3, R4, M3, and M4 yields, writing a = cos α3 − cos α4 and b = sin α3 − sin α4:

X_C = [a(X4 − X3) + b(Y4 − Y3)] / (a² + b²)
Y_C = [−b(X4 − X3) + a(Y4 − Y3)] / (a² + b²)
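The closed-form result above can be sketched directly, assuming angles in degrees and α3 ≠ α4 (so that the 2×2 system is invertible); the function name is illustrative:

```python
import math

def camera_tool(P3, a3, P4, a4):
    """Solve (R3 - R4) @ Mc = M4 - M3 for Mc = (Xc, Yc), the camera
    TOOL_C offset; a3, a4 are the two flange angular attitudes in degrees."""
    t3, t4 = math.radians(a3), math.radians(a4)
    a = math.cos(t3) - math.cos(t4)
    b = math.sin(t3) - math.sin(t4)
    dx, dy = P4[0] - P3[0], P4[1] - P3[1]
    det = a * a + b * b            # determinant of [[a, -b], [b, a]]
    # inverse of [[a, -b], [b, a]] is (1/det) * [[a, b], [-b, a]]
    Xc = (a * dx + b * dy) / det
    Yc = (-b * dx + a * dy) / det
    return Xc, Yc
```

With α3 = 0 and α4 = 180 degrees, the expressions reduce to X_C = (X4 − X3)/2 and Y_C = (Y4 − Y3)/2, which matches the intuition that the two flange positions are symmetric about the fixed calibration object.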
in the second calculation mode, step S330, according to α 3 And P 3 Alpha and alpha 4 And P 4 Calculate the TOOL of the camera C The method comprises the following steps:
s130b, will be alpha 3 、P 3 、α 4 、P 4 The related numerical values in the map f are substituted into the mapping relation f TOOL Calculating to obtain the TOOL of the camera C
It can be appreciated that the mapping relation f TOOL Can preset the internal program of the working equipment in advance and then make alpha in the teaching process 3 、P 3 、α 4 、P 4 The related numerical values in the map f are substituted into the mapping relation f TOOL Is directly calculated.
Wherein, writing a = cos α3 − cos α4 and b = sin α3 − sin α4, the mapping relation f_TOOL is:

X_C = [a(X4 − X3) + b(Y4 − Y3)] / (a² + b²)
Y_C = [−b(X4 − X3) + a(Y4 − Y3)] / (a² + b²)

where (X3, Y3) are the position coordinates of P3 and (X4, Y4) are the position coordinates of P4.
The principle and detailed calculation procedure of the mapping relation f_TOOL are substantially the same as those of the first calculation mode and are not repeated here.
Referring to fig. 3 and 4, it can be understood that step S200, determining, according to the camera TOOL_C, the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center with the end flange 20 in the first fixed angular attitude α1, comprises the following steps:
S210, with the end flange 20 in the first fixed angular attitude α1, photographing the calibration point 50 from at least three different positions, such that, among the positions of the corresponding pattern of the calibration point 50 in the pixel coordinate system, at least three are not collinear; and recording the current end flange position at each photographing position.
It will be appreciated that α1 may be any angle, for example 0 degrees, 45 degrees, 90 degrees, 180 degrees, or the like.
It can also be understood that the more position points are selected, the more accurate the result; for example, to ensure accuracy, shooting may proceed as follows: the movement mechanism 10 drives the camera center by a fixed offset along the X and Y directions to 9 different positions, photographing at each, so that the corresponding patterns of the calibration point 50 in the images form a 3×3 lattice in the pixel coordinate system; the current end flange 20 position is recorded at each of the 9 shots. Specifically, the end flange 20 is first driven by the movement mechanism 10 to the angular attitude α1; the movement mechanism 10 then brings the camera center near the single calibration point 50, ensuring that the calibration point 50 can enter the camera field of view 60 throughout the subsequent photographing; the movement mechanism 10 then drives the camera center to one of the preset points, photographing starts, and the current end flange 20 position is recorded; thereafter, the movement mechanism 10 moves the camera center a fixed distance along the X or Y direction each time, photographing and recording the current end flange 20 position after each move. The movement path of the camera center driven by the movement mechanism 10 and the position of the calibration point 50 may be set as shown in fig. 3; at the initial photographing point, the camera center may or may not be aligned with the calibration point 50. Moreover, the path along which the movement mechanism 10 drives the camera center may be chosen in ways other than that shown in fig. 3, as long as at least three of the positions of the corresponding pattern of the calibration point 50 in the pixel coordinate system are not collinear.
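The nine photographing positions can be generated, for example, as follows; the serpentine ordering is an assumption for illustration, since the actual path of fig. 3 may differ:

```python
def grid_positions(start, dx, dy):
    """3x3 lattice of camera-center photographing positions, visited in a
    serpentine order starting at `start`, with fixed offsets dx, dy along
    the X and Y directions (names and ordering are illustrative)."""
    pts = []
    for row in range(3):
        cols = range(3) if row % 2 == 0 else range(2, -1, -1)
        for col in cols:
            pts.append((start[0] + col * dx, start[1] + row * dy))
    return pts
```

The serpentine order minimizes travel between consecutive shots; any order works, since only the set of positions matters for the fit.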
S220, according to the end flange position recorded at each shot of the calibration point 50 and the camera TOOL_C, converting to obtain the camera center position at each shot of the calibration point 50; and, according to the pixel coordinates of the calibration point 50 at each photographing position and the camera center position at each shot, obtaining the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center.
It will be appreciated that the pixel coordinates of the calibration point 50 at each photographing position can be obtained by image processing; the end flange 20 position at each photographing position can be obtained directly from the bottom layer of the movement mechanism 10 during photographing; and the camera center position at each photographing point, i.e. the camera center coordinates in the motion mechanism coordinate system, can then be converted using the camera TOOL_C. The conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center can then be obtained from the pixel coordinates of the calibration point at each photographing position and the camera center position at each shot.
Specifically, for example, on the basis of the 9 different photographing positions above, the 9 pixel coordinates of the calibration point 50 at the 9 photographing positions can be obtained by image processing, the 9 corresponding end flange 20 positions can be obtained directly from the bottom layer of the movement mechanism 10 during photographing, and the camera center positions at the 9 photographing points, i.e. the camera center coordinates in the motion mechanism coordinate system at each shot, can be converted using the camera TOOL_C. The conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center can then be obtained from the 9 camera center position coordinates and the corresponding 9 pixel coordinates.
Through the acquired conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center, the relative position between an object and the camera center, i.e. the distances of the object from the camera center in the X and Y directions under the motion mechanism coordinates, can be obtained from the pixel coordinates of the object's corresponding pattern in the image. It will be appreciated that if the relative distance between the working position 710 and the feature point 90 is large (for example, the marker is a large table, the feature point 90 is at one corner, and the working position 710 is at the opposite corner or the center of the table), positioning by a single feature point 90 may magnify a small image-recognition deviation into a large deviation between the position the working tool 40 actually reaches and the working position 710 in actual operation. To solve this problem, a plurality of feature points 90 may be used for positioning, for example two, three, or more than three feature points 90.
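A sketch of how T_PC might be recovered from three of the shots, assuming the calibration point stays fixed while the camera moves, so that differences of camera-center positions determine the linear part of the pixel-to-physical map; taking the image-center pixel as the camera-center pixel is an additional illustrative assumption:

```python
def solve_tpc(pix, cam):
    """The calibration point is fixed while the camera moves, so the
    physical offset of the point from the camera center is affine in its
    pixel coordinates: delta = A @ (p - p0), and differences give
    cam[j] - cam[i] = -A @ (pix[j] - pix[i]). A is recovered exactly from
    three non-collinear shots (more shots -> least squares)."""
    (u1, v1), (u2, v2), (u3, v3) = pix
    (x1, y1), (x2, y2), (x3, y3) = cam
    det = (u2 - u1) * (v3 - v1) - (u3 - u1) * (v2 - v1)
    a11 = -((x2 - x1) * (v3 - v1) - (x3 - x1) * (v2 - v1)) / det
    a12 = -((u2 - u1) * (x3 - x1) - (u3 - u1) * (x2 - x1)) / det
    a21 = -((y2 - y1) * (v3 - v1) - (y3 - y1) * (v2 - v1)) / det
    a22 = -((u2 - u1) * (y3 - y1) - (u3 - u1) * (y2 - y1)) / det
    return ((a11, a12), (a21, a22))

def pixel_to_offset(A, p, p0):
    """Physical offset (dX, dY) of a feature from the camera center,
    taking pixel p0 (e.g. the image center) as the camera-center pixel."""
    du, dv = p[0] - p0[0], p[1] - p0[1]
    return (A[0][0] * du + A[0][1] * dv, A[1][0] * du + A[1][1] * dv)
```

`pixel_to_offset` is then exactly the role T_PC plays in S400: it converts the pixel coordinates of a feature point into its ΔX, ΔY from the camera center.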
Referring to fig. 5, if there are two or more feature points 90, step S300, photographing, with the end flange 20 in the first fixed angular attitude α1, at least one feature point 90 on the marker 70, wherein the relative position between the marker 70 and the working position 710 is fixed, and recording the current end flange 20 position when each feature point 90 is photographed, comprises the following steps:
S310a, the end flange 20 maintains the first fixed angular attitude α1, or the movement mechanism 10 drives the end flange 20 to rotate to the first fixed angular attitude α1;
S320a, for each feature point 90, the movement mechanism 10 drives the end flange 20 to bring the camera 30 to the vicinity of that feature point 90 and a photograph is taken there; and the current end flange 20 position is recorded when each feature point 90 is photographed.
It will be understood that, if there are two or more feature points 90, step S400, acquiring the positional relationship between each feature point 90 and the camera center at the time of shooting according to the image obtained for each feature point 90 and the conversion relation T_PC, comprises the following steps:
S410a, after photographing near each feature point 90, acquiring the coordinates of each feature point's corresponding pattern in the pixel coordinate system by image processing from the image acquired at each shot;
S420a, according to the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center obtained in step S200, converting the coordinate values obtained in step S410a to obtain the positional relationship between each feature point 90 and the camera center at the time of shooting, i.e. the distance ΔX in the X direction and the distance ΔY in the Y direction, under the motion mechanism coordinate system, between each feature point and the camera center at the time of shooting.
Referring to fig. 5, it can be understood that if there are 2 feature points 90, then in step S500, obtaining the position P_1 of the end flange 20 when the camera center is aligned with the feature points 90 or with a set relative position between the feature points 90, according to the positional relationship between each feature point 90 and the camera center at the time of shooting, the current end flange position at the time of shooting each feature point 90, and the camera TOOL_C, comprises the following steps:
S510a, according to the positional relationship between each feature point 90 and the camera center at the time of shooting, the camera TOOL_C, and the current position of the end flange 20 at the time of shooting each feature point 90, acquiring the two camera-center positions P_O1 and P_O2 at which the camera center is aligned with the two feature points 90, respectively.
Specifically, step S400 acquired the distances ΔX and ΔY between each feature point 90 and the camera center in the X and Y directions of the movement-mechanism coordinate system, and the camera 30 is mounted on the end flange 20. Therefore, if, starting from the current position of the end flange 20 at the time of shooting each feature point 90, the end flange 20 were driven to move by ΔX and ΔY, the camera center would be aligned with the corresponding feature point 90; combining this with the camera TOOL_C, the two camera-center positions P_O1 and P_O2 at which the camera center is aligned with the two feature points 90 can be calculated. It should be understood, of course, that it is not necessary to actually move the end flange 20 by ΔX and ΔY as described above; P_O1 and P_O2 may be obtained by calculation alone.
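A minimal sketch of S510a, under assumed conventions: TOOL_C is reduced to a planar XY offset from the flange to the camera centre (valid only for the fixed angular attitude α_1), and all coordinates are in the motion-mechanism frame. Names and numbers are illustrative, not from the patent.

```python
import numpy as np

TOOL_C = np.array([30.0, 0.0])  # assumed flange -> camera-centre XY offset at pose alpha_1

def aligned_camera_center(flange_pos, delta):
    """Camera-centre position if the flange moved by (dX, dY) so the centre
    lands on the feature point.  Computed only -- no physical motion."""
    return np.asarray(flange_pos) + TOOL_C + np.asarray(delta)

# Flange positions recorded at the two shots, plus each shot's (dX, dY):
P_O1 = aligned_camera_center([100.0, 50.0], [2.0, -1.5])
P_O2 = aligned_camera_center([160.0, 50.0], [-0.5, 1.0])
```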
S520a, according to the obtained P_O1 and P_O2, acquiring the position of the center point O of the line segment P_O1 P_O2.
It can be appreciated that the position of the center point O of the line P_O1 P_O2 represents the camera-center position when the camera center is aligned with the midpoint of the line between the two feature points 90; in this case the set relative position between the feature points 90 is the midpoint of the line between the two feature points 90.
S530a, according to the position of the center point O and the camera TOOL_C, acquiring the position P_1 of the end flange 20 when the camera center is aligned with the center point O.
Specifically, since the position of the center point O represents the camera-center position when the camera center is aligned with the midpoint of the line between the two feature points 90, the position P_1 of the end flange 20 when the camera center is aligned with the set relative position between the feature points 90 (i.e., the midpoint of that line) can be obtained through the camera TOOL_C.
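S520a and S530a amount to a midpoint followed by an inverse TOOL_C offset; the sketch below reuses the assumption that TOOL_C is a planar XY offset valid at the fixed attitude α_1, with placeholder coordinates.

```python
import numpy as np

TOOL_C = np.array([30.0, 0.0])       # assumed flange -> camera-centre offset

P_O1 = np.array([132.0, 48.5])       # camera centre aligned with feature 1
P_O2 = np.array([189.5, 51.0])       # camera centre aligned with feature 2

O = (P_O1 + P_O2) / 2.0              # S520a: midpoint of the P_O1-P_O2 line
P_1 = O - TOOL_C                     # S530a: flange position putting the camera centre at O
```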
It will be appreciated that when dual feature points 90 are used, the set relative position between the feature points 90 with which the camera center is to be aligned need not be the center point O of the line P_O1 P_O2 as described above; it may be another type of point, such as a trisection point of the line P_O1 P_O2, as long as its relative position with respect to the two feature points 90 is fixed. It will also be appreciated that the detailed procedure of step S500 is given above only for dual feature points 90; however, the calibration method of the present invention is not limited to dual feature points 90, and three or more feature points 90 may be selected. For example, when three feature points 90 are used, the set relative position between the feature points 90 with which the camera center is to be aligned may be the centroid, incenter, orthocenter, or the like of the triangle they form, as long as its relative position with respect to the three feature points 90 is fixed.
It will be appreciated that if dual feature points 90 are used, then in step S600, acquiring the angle ROLL-V_1 at which the marker 70 was calibrated, according to the images obtained by capturing the feature points 90, comprises the following step:
S600a, according to P_O1 and P_O2 obtained in step S500, acquiring the angle between the line P_O1 P_O2 and the horizontal center line of the camera, i.e., the angle ROLL-V_1 at which the marker was calibrated.
Specifically, from P_O1 and P_O2, the angle γ_1 between the line P_O1 P_O2 and the X axis of the movement-mechanism coordinate system can be obtained; and since the end flange is in the first fixed angular attitude α_1, the angle between the horizontal center line of the camera and the X axis of the movement-mechanism coordinate system is fixed, so the angle between the line P_O1 P_O2 and the horizontal center line of the camera can be obtained by calculation. For example, assume that when the angular attitude of the end flange is 0 degrees, the horizontal center line of the camera forms an angle γ_2 with the X axis of the movement-mechanism coordinate system; then, with the end flange in the first fixed angular attitude α_1, the horizontal center line of the camera forms an angle γ_2 + α_1 with the X axis, so the calibrated angle ROLL-V_1 of the marker can be calculated as γ_1 − (γ_2 + α_1).
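The worked example above reduces to two lines of trigonometry; a sketch, with all angles in degrees and the γ_2 and α_1 values assumed for illustration only:

```python
import math

def roll_v1(p_o1, p_o2, gamma2_deg, alpha1_deg):
    """ROLL-V_1 = gamma_1 - (gamma_2 + alpha_1), where gamma_1 is the angle
    of the P_O1 -> P_O2 line to the X axis of the motion-mechanism frame."""
    gamma1 = math.degrees(math.atan2(p_o2[1] - p_o1[1], p_o2[0] - p_o1[0]))
    return gamma1 - (gamma2_deg + alpha1_deg)

# Line at 45 deg, camera centreline at 5 deg for pose 0, flange pose 10 deg:
angle = roll_v1((0.0, 0.0), (10.0, 10.0), gamma2_deg=5.0, alpha1_deg=10.0)
```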
It will be appreciated that when three or more feature points 90 are used, step S600 is substantially the same as step S600a: it suffices to take any two of the feature points 90 as the dual feature points 90 of step S600a, obtain the corresponding P_O1 and P_O2, and then obtain the angle between the line P_O1 P_O2 and the horizontal center line of the camera.
Referring to fig. 6, it can be understood that if there is a single feature point 90, then in step S300, photographing at least one feature point 90 on the marker 70 with the end flange 20 in the first fixed angular attitude, and recording the current end flange position when each feature point 90 is photographed, comprises the following steps:
S310b, the end flange 20 maintains the first fixed angular attitude α_1, or the movement mechanism 10 drives the end flange 20 to rotate into the first fixed angular attitude α_1;
S320b, the movement mechanism 10 drives the end flange 20, carrying the camera 30, to the vicinity of the feature point 90; the camera 30 then photographs the feature point 90, and the current position of the end flange 20 is recorded.
Moving the camera 30 near the feature point 90 ensures that the feature point 90 is within the camera field of view 60 during photographing to ensure that the captured image has a pattern corresponding to the feature point 90 so that the coordinates of the pattern corresponding to the feature point 90 in the pixel coordinate system can be obtained in a subsequent step.
It will be appreciated that if there is a single feature point 90, then in step S400, acquiring the positional relationship between each feature point 90 and the camera center at the time of shooting, according to the image obtained by capturing each feature point 90 and the conversion relationship T_PC, comprises the following steps:
S410b, acquiring the coordinate values of the pattern corresponding to the feature point 90 in the pixel coordinate system by image processing, according to the image captured in step S300;
S420b, according to the conversion relationship T_PC between the pixel coordinate system and the physical position referenced to the camera center obtained in step S200, converting the coordinate values of the pattern corresponding to the feature point 90 in the pixel coordinate system obtained in step S410b, so as to obtain the positional relationship between the feature point and the camera center, i.e., the distances ΔX and ΔY between the feature point 90 and the camera center in the X and Y directions of the movement-mechanism coordinate system.
Referring to fig. 6, it can be understood that if there is a single feature point 90, then in step S500, obtaining the position P_1 of the end flange 20 when the camera center is aligned with the feature point 90, according to the positional relationship between each feature point and the camera center at the time of shooting, the current end flange position at the time of shooting each feature point 90, and the camera TOOL_C, comprises the following steps:
S510b, according to the positional relationship between the feature point 90 and the camera center acquired in step S400 and the position of the end flange 20 at the time of shooting in step S400, acquiring the position of the camera center when the camera center is aligned with the feature point 90.
It is to be understood that the camera-center position for aligning the camera center with the feature point 90 is acquired here in the same manner as described in step S510a.
S520b, according to the camera-center position obtained in step S510b and the camera TOOL_C, calculating the position P_1 of the end flange 20 when the camera center is aligned with the feature point 90.
It should be understood that, similarly, obtaining the camera-center position for alignment with the feature point 90 does not require actually moving the camera center into alignment with the feature point 90; it suffices to calculate that camera-center position, and finally to obtain the position P_1 of the end flange 20 in the corresponding state.
It will be appreciated that if a single feature point 90 is used, the angle ROLL-V_1 at which the marker 70 was calibrated cannot be obtained from a line between feature points 90. To solve this problem, when there is a single feature point 90, the feature point 90 is given a non-circular, asymmetric shape, for example a non-isosceles triangle or a non-isosceles trapezoid. When the feature point 90 is triangular, specifically, in step S600, acquiring the angle ROLL-V_1 from the captured image comprises the following step:
S600b, by image processing, obtaining the coordinates, in the pixel coordinate system, of two pixel points on the boundary of the feature point 90, and acquiring the angle between the line connecting these two pixel points and the horizontal center line of the camera (i.e., the u axis of the pixel coordinate system), which is the angle ROLL-V_1 at which the marker 70 was calibrated.
It can be understood that, at the time of shooting, the boundary of the feature point 90 can be extracted by image processing by virtue of differences in color, brightness and the like between the figure of the feature point 90 and the background in the image; the coordinates of two pixel points on the boundary of the feature point 90 can then be obtained, and the calibrated angle ROLL-V_1 of the marker can be acquired in a form similar to S600a.
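In pixel coordinates the v axis conventionally points downward, so its sign must be flipped when measuring against the u axis; a sketch of the angle computation in S600b under that assumption, with placeholder boundary pixels:

```python
import math

def marker_angle_from_boundary(px_a, px_b):
    """Angle (degrees) between the line through two boundary pixels and the
    camera's horizontal centre line (the u axis of the pixel frame).
    The v axis is assumed to point downward, hence the sign flip."""
    du = px_b[0] - px_a[0]
    dv = px_b[1] - px_a[1]
    return math.degrees(math.atan2(-dv, du))

# Two pixels picked on the feature point's boundary (illustrative values):
ang = marker_angle_from_boundary((100, 200), (200, 100))
```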
It is understood that the above step S600b is equally applicable to other cases where the feature point 90 has an asymmetric shape.
It will be appreciated that in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture may be accomplished in a number of different ways, a few of which are listed below.
In the first mode, in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture comprises the following steps:
S710a, controlling the movement mechanism 10 by manual teaching to drive the end flange 20 so that the working tool 40 reaches the working position 710 in the desired posture;
S720a, recording the current end flange position P_T and angular attitude α_T.
It can be understood that the first mode is realized mainly by manual teaching. When the first mode is adopted in step S700, the positional relationship between the working tool 40 and the end flange 20 (i.e., the tool of the working tool) need not be acquired or calculated anywhere in the calibration method. Because the working tool 40 is irregular in some working scenarios, its tool is difficult to obtain effectively; moreover, acquiring P_T and α_T by means of the working tool's tool and image processing is complex to operate, requires relatively involved offset calculations (such as resolving the XY components caused by an angular offset), and requires the angle to be adjusted by manual trial to obtain the correct working angular attitude. Step S700 is therefore simpler to operate in the first mode, which remains applicable even if the working tool 40 is irregular.
In the second mode, if the marker 70 is a carrier carrying a product, or is the product itself, then in step S700, acquiring the position P_T and angular attitude α_T of the end flange when the working tool reaches the working position in the desired posture comprises the following steps:
S710b, while the end flange 20 is in the first fixed angular attitude α_1, photographing the product 80 with the camera to obtain a product image;
S720b, according to the product image, the conversion relationship T_PC between the pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, acquiring the working position of the product 80.
It will be appreciated that, specifically, assuming the operation to be performed on the product 80 is grabbing or welding, the features of the boundary or of the weld seam of the product 80 can be obtained by image processing, and the pixel coordinates of the grabbing position or welding position of the product 80 in the pixel coordinate system can be derived from those features; then, according to those pixel coordinates, the conversion relationship T_PC between the pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, the position coordinates of the camera center when it is aligned with the grabbing position or welding position (i.e., the working position), that is, the working position of the product in the movement-mechanism coordinate system, can be obtained;
S730b, according to the working position of the product 80, acquiring the position P_T and angular attitude α_T of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture.
It can be appreciated that the tool calculated in connection with the calibration method of the embodiments of the present invention makes it possible to calculate P_T and α_T based on the coordinates of the working position 710 of the product 80 in the movement-mechanism coordinate system.
In the third mode, if the marker 70 is a carrier for receiving the product 80 and a placement structure for placing the product 80 is provided on the carrier, then in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture comprises the following steps:
S710c, while the end flange 20 is in the first fixed angular attitude α_1, photographing the placement structure with the camera 30 to obtain an image of the placement structure;
S720c, according to the image of the placement structure, the conversion relationship T_PC between the pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, acquiring the working position 710 of the product 80.
It will be appreciated that, assuming the placement structure is a receiving cavity for the product 80, the image of the placement structure is processed by image processing to obtain the features of the boundary of the receiving cavity, and the pixel coordinates of the placement position of the product 80 in the pixel coordinate system can be derived from those features; then, according to those pixel coordinates, the conversion relationship T_PC between the pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, the position coordinates of the camera center when it is aligned with the placement position (i.e., the working position 710), that is, the working position 710 of the product in the movement-mechanism coordinate system, can be obtained;
S730c, according to the working position 710 of the product, acquiring the position P_T and angular attitude α_T of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture.
It will be appreciated that the tool calculated in connection with the embodiments of the present invention makes it possible to calculate P_T and α_T based on the coordinates of the working position 710 of the product 80 in the movement-mechanism coordinate system.
If the operation concerns only a single product 80, or a single working position 710 on a carrier, any of the first to third modes above suffices for step S700. In real industrial production, however, it is often necessary to operate on a plurality of products 80 at the same time, or on a plurality of working positions 710 on the same product 80. To meet this need, the following method may be combined with the first to third modes described above.
Referring to fig. 5, it can be appreciated that when the marker 70 corresponds to N working positions 710, with N ≥ 1, then step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the working tool 40 reaches the working position 710 in the desired posture, comprises the following step:
S700d, for each working position 710, acquiring the position P_Tn and angular attitude α_Tn of the end flange 20 when the working tool 40 reaches that working position 710 in the desired posture, where N ≥ n ≥ 1.
It will be appreciated that in S700d, for any one working position 710, the corresponding P_Tn and α_Tn may be acquired by any one of the first to third modes described above. For example, the movement mechanism 10 may be controlled by manual teaching to drive the end flange 20 so that the working tool 40 reaches each working position 710 in the desired posture, and, when the working tool 40 reaches each working position 710, the current position P_Tn of the end flange 20 and the angular attitude α_Tn of the end flange 20 are recorded, where N ≥ n ≥ 1.
Correspondingly to step S700d, step S800, obtaining the transformation relationship offset between (P_1, α_1) and (P_T, α_T) according to P_1, α_1, P_T and α_T, comprises the following step:
S800d, according to P_1, α_1 and each recorded set of end flange position P_Tn and end flange angular attitude α_Tn, obtaining the transformation relationship offset-n between (P_1, α_1) and each set (P_Tn, α_Tn).
It can be understood that, when the marker 70 is a carrier carrying products 80, the N working positions 710 corresponding to the marker 70 may be distributed over N different products 80. Thus, in operation after teaching is completed through steps S700d and S800d, the transformation relationship offset-n corresponding to each product 80 ensures that the work on each product 80 is completed in the desired posture and position, guaranteeing the working accuracy for each product 80 and satisfying the need to operate on a plurality of products 80 at once.
It will be appreciated that, when the marker 70 is a carrier waiting to receive products 80, the N working positions 710 corresponding to the marker 70 may be N different placement structures for products 80 distributed on the carrier. Thus, in operation after teaching is completed, the transformation relationship offset-n of the placement position (working position) of each product 80 allows the products 80 to be placed one by one in the desired posture and position, satisfying the need to place a plurality of products 80 at once.
It will be appreciated that, when the marker 70 is the product 80 itself, the N working positions 710 corresponding to the marker 70 may be N different working positions 710 distributed on the same product 80. Thus, through the above steps S700d and S800d, in operation after teaching is completed, the work at each working position 710 on the product 80 is completed in the desired posture and position, guaranteeing the accuracy of the work at each working position 710 and satisfying the need to operate on a plurality of working positions 710 on the same product 80 at once.
Embodiments of the visual recognition-based working methods of the second and third aspects of the present invention are described below; both are equally applicable to the working apparatus described in the embodiments of the first aspect.
Referring to fig. 7, the visual recognition-based working method according to the embodiment of the second aspect of the present invention comprises two major method steps: the first major step is a calibration step and the second major step is a working step. The calibration step adopts the visual recognition-based calibration method of the embodiment of the first aspect of the present invention, in the case where a single feature point 90 is used; a detailed description of the calibration portion of the first major step is therefore omitted.
Referring to fig. 8, the visual recognition-based operation method according to the second aspect of the present invention further includes the following second major steps:
S900b, while the end flange 20 is in the first fixed angular attitude α_1, photographing the feature point 90 on the marker 70 again, and recording the position coordinates of the current end flange 20;
S1000b, according to the image obtained by photographing the marker 70 again in S900b and the conversion relationship T_PC, obtaining the positional relationship between the feature point 90 and the camera center at the time of re-shooting;
S1100b, according to the positional relationship between the feature point 90 and the camera center at the time of re-shooting, the current end flange position at the time of re-shooting the feature point 90, and the camera TOOL_C, acquiring the position P_5 of the camera center when the camera center is again aligned with the feature point 90;
S1200b, according to the image obtained by photographing the marker 70 again in S900b, acquiring the angle ROLL-V_2 of the marker 70 at the time of operation.
It will be appreciated that the detailed steps of S900b to S1200b are substantially identical to the steps of S300 to S600 under the single feature point 90 in the embodiment of the first aspect;
S1300b, according to the deviation between ROLL-V_2 and ROLL-V_1 and the camera TOOL_C, acquiring the position P_6 of the end flange when the camera center is at position P_5 and the end flange is in the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1).
It can be appreciated that when the camera center is at position P_5 with the end flange 20 in the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1), the position and angle of the marker 70 in the camera field of view 60 are consistent with the position and angle of the marker 70 in the camera field of view 60 at calibration, with the end flange 20 at P_1 and α_1; accordingly, from the position coordinates of P_5 and the camera TOOL_C, the position P_6 of the end flange 20 in this state can be deduced.
S1400b, according to P_6, the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1), and the transformation relationship offset, obtaining the end flange position P_7 and angular attitude α_7 at the working position 710; moving the end flange 20 to P_7, rotating the end flange 20 to the angular attitude α_7, and performing the work at the working position with the working tool.
It will be appreciated that when the end flange is at position P_6 and angular attitude α_1 + (ROLL-V_2 − ROLL-V_1), the position and angular attitude of the end flange 20 and the working tool 40 in the marker coordinate system established with reference to the marker 70 itself are consistent with those at the calibration step (i.e., with the end flange at P_1 and α_1). Therefore, applying the transformation relationship offset to P_6 and the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1) yields the end flange position P_7 and angular attitude α_7 at which the working tool 40 reaches the working position 710 in the desired posture; the movement mechanism 10 then drives the end flange 20 to the position P_7 and angular attitude α_7, and the work can be performed at the working position 710 directly by the working tool 40.
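One assumed parameterisation of the transformation relationship offset is a planar rigid transform stored in the frame of the calibration pose: it is taught from (P_1, α_1) and (P_T, α_T), then replayed from a re-localised pose (P_6, α_6) to yield (P_7, α_7). The sketch below uses that assumption with placeholder coordinates; angles are in degrees.

```python
import math

def make_offset(p1, a1, pt, at):
    """Teach: express (P_T, alpha_T) relative to (P_1, alpha_1)."""
    c, s = math.cos(math.radians(-a1)), math.sin(math.radians(-a1))
    dx_w, dy_w = pt[0] - p1[0], pt[1] - p1[1]
    # Rotate the world-frame displacement into the calibration-pose frame.
    return (at - a1, c * dx_w - s * dy_w, s * dx_w + c * dy_w)

def apply_offset(p6, a6, offset):
    """Replay: P_7 and alpha_7 from the re-localised pose (P_6, alpha_6)."""
    dtheta, dx, dy = offset
    c, s = math.cos(math.radians(a6)), math.sin(math.radians(a6))
    return (p6[0] + c * dx - s * dy, p6[1] + s * dx + c * dy), a6 + dtheta

off = make_offset((100.0, 50.0), 10.0, (130.0, 80.0), 40.0)
# Replaying from the unchanged calibration pose must reproduce the taught pose:
p7, a7 = apply_offset((100.0, 50.0), 10.0, off)
```

Replaying from any other (P_6, α_6) moves the taught working pose rigidly with the re-localised marker, which is the behaviour the working step relies on.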
Referring to fig. 9 and 10, the visual recognition-based working method according to the embodiment of the third aspect of the present invention likewise comprises two major method steps: the first major step is a calibration step and the second major step is a working step. The calibration step adopts the visual recognition-based calibration method of the embodiment of the first aspect of the present invention, in the case where a plurality of feature points 90 are used; a detailed description of the calibration portion of the first major step is therefore omitted.
Referring to fig. 10, in a second major step of the visual recognition-based operation method according to the third aspect of the present invention, the operation step includes the steps of:
S900a, while the end flange 20 is in the first fixed angular attitude α_1, photographing each feature point 90 on the marker 70 again, and recording the position coordinates of the current end flange 20 when each feature point 90 is photographed;
S1000a, according to the images obtained by photographing each feature point 90 again in S900a and the conversion relationship T_PC, obtaining the positional relationship between each feature point 90 and the camera center at the time of re-shooting;
S1100a, according to the positional relationship between each feature point 90 and the camera center at the time of re-shooting obtained in S1000a, the current end flange position at the time of re-shooting each feature point 90, and the camera TOOL_C, acquiring the position P_5 of the camera center when the camera center is again aligned with the set relative position between the feature points 90;
S1200a, according to the images obtained by photographing each feature point 90 again in S900a, acquiring the angle ROLL-V_2 of the marker 70 at the time of operation.
It will be appreciated that the detailed steps of S900a to S1200a are substantially identical to the steps of S300 to S600 under the plurality of feature points 90 in the embodiment of the first aspect;
S1300a, according to the deviation between ROLL-V_2 and ROLL-V_1 and the camera TOOL_C, acquiring the position P_6 of the end flange 20 when the camera center is at position P_5 and the end flange 20 is in the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1);
S1400a, according to P_6, the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1), and the transformation relationship offset, obtaining the end flange position P_7 and angular attitude α_7 at the working position 710; moving the end flange to P_7, rotating the end flange to the angular attitude α_7, and performing the work at the working position 710 with the working tool 40.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (15)

1. The calibration method based on visual identification is characterized by being applied to operation equipment, wherein the operation equipment comprises an end effector and a movement mechanism, the end effector comprises an end flange, a camera and an operation tool, the camera and the operation tool are arranged on the end flange, and the movement mechanism can drive the end flange to freely move on an XY plane and can drive the end flange to rotate around an axis of the end flange;
The calibration method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera tool frame TOOL_C;
according to TOOL_C, determining the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center when the end flange is at a first fixed angular attitude α_1;
with the end flange at the first fixed angular attitude α_1, photographing at least one feature point on a marker, the relative position between the marker and the working position being fixed, and recording the current end flange position when each feature point is photographed;
acquiring the positional relationship between each feature point and the camera center at the time of photographing, according to the image obtained by photographing each feature point and the conversion relation T_PC;
acquiring the end flange position P_1 at which the camera center is aligned with the feature point, or with a set relative position among the feature points, according to the positional relationship between each feature point and the camera center at the time of photographing, the current end flange position when each feature point is photographed, and TOOL_C;
acquiring the angle ROLL-V_1 of the marker at calibration, according to the images obtained by photographing the feature points;
acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture;
obtaining the conversion relation offset between (P_1, α_1) and (P_T, α_T), according to P_1, α_1, P_T and α_T.
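The final step of claim 1 stores a single rigid-body relation between the camera-alignment pose (P_1, α_1) and the taught working pose (P_T, α_T). A minimal sketch of that bookkeeping, assuming planar SE(2) poses (the claims only permit XY translation plus rotation about the flange axis, as on a SCARA); the function names are illustrative, not from the patent:

```python
import numpy as np

def pose_mat(p, a):
    """Homogeneous 2D pose (position p, angle a in radians) as a 3x3 matrix."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, p[0]],
                     [s,  c, p[1]],
                     [0,  0, 1.0]])

def pose_offset(p1, a1, pt, at):
    """Relative transform 'offset' such that pose_mat(p1, a1) @ offset == pose_mat(pt, at)."""
    return np.linalg.inv(pose_mat(p1, a1)) @ pose_mat(pt, at)

def apply_offset(p6, a6, offset):
    """Replay the stored offset from a newly measured reference pose."""
    t7 = pose_mat(p6, a6) @ offset
    return t7[:2, 2], np.arctan2(t7[1, 0], t7[0, 0])
```

Because the offset is expressed relative to the first pose, it can later be replayed from any re-detected pose, which is how claims 14 and 15 recover (P_7, α_7) from P_6.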
2. The visual recognition-based calibration method according to claim 1, wherein determining the positional relationship between the camera center and the end flange to obtain the camera tool frame TOOL_C comprises the following steps:
with the end flange at a certain angular attitude α_3, photographing a calibration object from at least three different position points; obtaining, from the end flange position and the pixel coordinates of the calibration object in the image at each shot, the transformation relation between pixel coordinates and end flange positions at the angular attitude α_3; and obtaining from this transformation relation the position P_3 that the end flange must reach for the calibration object to be located at the center of the image;
with the end flange at a certain angular attitude α_4, photographing the calibration object from at least three different position points; obtaining, from the end flange position and the pixel coordinates of the calibration object in the image at each shot, the transformation relation between pixel coordinates and end flange positions at the angular attitude α_4; and obtaining from this transformation relation the position P_4 that the end flange must reach for the calibration object to be located at the center of the image; wherein α_4 ≠ α_3;
calculating TOOL_C according to α_3 and P_3, and α_4 and P_4.
3. The visual recognition-based calibration method according to claim 2, wherein calculating TOOL_C according to α_3 and P_3, and α_4 and P_4, comprises the following steps:
substituting the relevant values of α_3, P_3, α_4 and P_4 into the mapping relation f to obtain the camera tool frame TOOL_C, wherein,
(X_3, Y_3) are the position coordinates of P_3, and (X_4, Y_4) are the position coordinates of P_4.
4. The visual recognition-based calibration method according to claim 2, wherein calculating TOOL_C according to α_3 and P_3, and α_4 and P_4, comprises the following steps:
acquiring, in the state where the end flange is at position P_3 with angular attitude α_3, the expression T_3 of the camera center coordinates in the motion mechanism coordinate system in terms of the parameters of TOOL_C;
acquiring, in the state where the end flange is at position P_4 with angular attitude α_4, the expression T_4 of the camera center coordinates in the motion mechanism coordinate system in terms of the parameters of TOOL_C;
establishing the equation T_3 = T_4 from the fact that the coordinate position of the camera center in the motion mechanism coordinate system is unchanged between the two states, and solving it to obtain TOOL_C;
wherein (X_3, Y_3) are the position coordinates of P_3, and (X_4, Y_4) are the position coordinates of P_4.
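Both claim 3's closed-form mapping f and claim 4's equation T_3 = T_4 rest on the same constraint: the camera center occupies one fixed workspace point while the flange takes two different angular attitudes. Under a planar model (camera_center = flange + R(α) · t, with t the TOOL_C offset in the flange frame), that constraint solves directly for t; a sketch with illustrative names, not the patent's exact formula:

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def camera_tool_offset(p3, a3, p4, a4):
    """Solve (R(a3) - R(a4)) @ t = p4 - p3 for the camera-center offset t
    expressed in the end-flange frame (the TOOL_C translation)."""
    A = rot(a3) - rot(a4)
    b = np.asarray(p4, float) - np.asarray(p3, float)
    return np.linalg.solve(A, b)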
5. The visual recognition-based calibration method according to claim 1, wherein determining, according to TOOL_C, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center when the end flange is at the first fixed angular attitude α_1 comprises the following steps:
with the end flange at the first fixed angular attitude α_1, photographing a calibration point from at least three different positions, such that at least three of the corresponding points of the calibration point in the pixel coordinate system are not collinear; and recording the current end flange position each time the calibration point is photographed;
converting the end flange position recorded at each shot of the calibration point with TOOL_C to obtain the camera center position at each shot, and acquiring the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, according to the pixel coordinates of the calibration point at each shooting position and the corresponding camera center positions.
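Once the camera-center positions are recovered via TOOL_C, claim 5 reduces to fitting a pixel-to-physical map from at least three non-collinear correspondences. A least-squares affine fit is one plausible realization of T_PC (the claim does not fix the model; names are illustrative):

```python
import numpy as np

def fit_tpc(pixels, physical):
    """Least-squares affine map T_PC: pixel (u, v) -> physical (x, y).
    Requires at least three non-collinear correspondences."""
    px = np.asarray(pixels, float)
    A = np.hstack([px, np.ones((len(px), 1))])          # rows [u, v, 1]
    X, *_ = np.linalg.lstsq(A, np.asarray(physical, float), rcond=None)
    return X.T                                          # 2x3 matrix [M | t]

def pix_to_phys(tpc, uv):
    """Map a pixel coordinate through the fitted T_PC."""
    return tpc @ np.array([uv[0], uv[1], 1.0])
```

The affine form absorbs pixel scale, axis flip, and in-plane rotation in one solve; non-collinearity of the three samples is exactly what keeps the normal equations well-posed.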
6. The visual recognition-based calibration method according to claim 1, wherein, if the number of feature points is two,
acquiring the end flange position P_1 at which the camera center is aligned with a set relative position among the feature points, according to the positional relationship between each feature point and the camera center at the time of photographing, the current end flange position when each feature point is photographed, and TOOL_C, comprises the following steps:
according to the positional relationship between each feature point and the camera center, TOOL_C, and the current end flange position when each feature point is photographed, acquiring the two camera center positions P_O1 and P_O2 at which the camera center is aligned with the two feature points respectively;
obtaining the position of the center point O of the line P_O1 P_O2, from the obtained P_O1 and P_O2;
acquiring the end flange position P_1 at which the camera center is aligned with the center point O, based on the position of the center point O and TOOL_C.
7. The visual recognition-based calibration method according to claim 6, wherein, if the number of feature points is two,
acquiring the angle ROLL-V_1 of the marker at calibration according to the images obtained by photographing the feature points comprises the following steps:
obtaining, from the obtained P_O1 and P_O2, the angle between the line P_O1 P_O2 and the horizontal center line of the camera, which is the angle ROLL-V_1 of the marker at calibration.
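For the two-feature-point case of claims 6 and 7, both the alignment target and the roll angle fall out of the two recovered camera-center positions; a small sketch under the same planar assumptions (illustrative names):

```python
import numpy as np

def midpoint_and_roll(po1, po2):
    """Center point O of the segment P_O1 -> P_O2 and the angle of that
    segment relative to the camera's horizontal axis (ROLL-V_1)."""
    po1 = np.asarray(po1, float)
    po2 = np.asarray(po2, float)
    o = (po1 + po2) / 2.0
    d = po2 - po1
    return o, np.arctan2(d[1], d[0])
```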
8. The visual recognition-based calibration method according to claim 1, wherein, if there is one feature point,
acquiring the end flange position P_1 at which the camera center is aligned with the feature point, according to the positional relationship between the feature point and the camera center at the time of photographing, the current end flange position when the feature point is photographed, and TOOL_C, comprises the following steps:
according to TOOL_C, the positional relationship between the feature point and the camera center at the time of photographing, and the current end flange position when the feature point is photographed, acquiring the camera center position at which the camera center is aligned with the feature point;
acquiring the end flange position P_1 at which the camera center is aligned with the feature point, according to that camera center position and TOOL_C.
9. The visual recognition-based calibration method according to claim 1, wherein acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture comprises the following steps:
controlling, by manual teaching, the motion mechanism to drive the end flange to move so that the working tool reaches the working position in the desired posture, and recording the current end flange position P_T and angular attitude α_T.
10. The method of claim 1, wherein, if the marker is the product or a carrier carrying the product,
acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture comprises the following steps:
with the end flange at the first fixed angular attitude α_1, photographing the product with the camera to obtain a product image;
acquiring the working position of the product, based on the product image, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, and TOOL_C;
acquiring, according to the working position of the product, the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired posture.
11. The method of claim 1, wherein, if the marker is a carrier for receiving the product and the carrier is provided with a placement structure for placing the product,
acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture comprises the following steps:
with the end flange at the first fixed angular attitude α_1, photographing the placement structure with the camera to obtain an image of the placement structure;
acquiring the working position of the product, based on the image of the placement structure, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, and TOOL_C;
acquiring, according to the working position of the product, the end flange position P_T and angular attitude α_T when the working tool reaches the working position in the desired posture.
12. The visual recognition-based calibration method according to claim 1, wherein:
the marker corresponds to N working positions, N being greater than or equal to 1;
wherein acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture comprises the following steps:
for each working position, acquiring the end flange position P_Tn and angular attitude α_Tn when the working tool reaches that working position in a desired posture, wherein N ≥ n ≥ 1.
13. The visual recognition-based calibration method of claim 12, wherein obtaining the conversion relation offset between (P_1, α_1) and (P_T, α_T) according to P_1, α_1, P_T and α_T comprises the following steps:
obtaining the conversion relation offset-n between (P_1, α_1) and each set (P_Tn, α_Tn), according to P_1, α_1, and each recorded set of end flange positions P_Tn and end flange angular attitudes α_Tn.
14. An operation method based on visual recognition, applied to an operation device, wherein the operation device comprises an end effector and a motion mechanism, the end effector comprises an end flange, a camera and a working tool, the camera and the working tool are mounted on the end flange, and the motion mechanism can drive the end flange to move freely in an XY plane and to rotate about the axis of the end flange;
The operation method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera tool frame TOOL_C; according to TOOL_C, determining the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center when the end flange is at a first fixed angular attitude α_1; with the end flange at the first fixed angular attitude α_1, photographing a single feature point on a marker, the relative position between the marker and the working position being fixed, and recording the current end flange position when the feature point is photographed; acquiring the positional relationship between the feature point and the camera center at the time of photographing, according to the image obtained by photographing the feature point and the conversion relation T_PC; acquiring the end flange position P_1 at which the camera center is aligned with the feature point, according to the positional relationship between the feature point and the camera center at the time of photographing, the current end flange position when the feature point is photographed, and TOOL_C; acquiring the angle ROLL-V_1 of the marker at calibration, according to the image obtained by photographing the feature point; acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture; obtaining the conversion relation offset between (P_1, α_1) and (P_T, α_T), according to P_1, α_1, P_T and α_T;
with the end flange at the first fixed angular attitude α_1, photographing the feature point on the marker again, and recording the current end flange position coordinates; obtaining the positional relationship between the feature point and the camera center at the time of re-photographing, based on the image obtained by re-photographing the feature point and the conversion relation T_PC; acquiring the camera center position P_5 at which the camera center is again aligned with the feature point, according to the positional relationship between the feature point and the camera center at the time of re-photographing, the current end flange position when the feature point is re-photographed, and TOOL_C; acquiring the angle ROLL-V_2 of the marker at the time of operation, according to the image obtained by re-photographing the feature point; acquiring, according to the deviation between ROLL-V_2 and ROLL-V_1 and TOOL_C, the end flange position P_6 in the state where the camera center is at position P_5 and the end flange is at angular attitude α_1 + (ROLL-V_2 − ROLL-V_1); obtaining the end flange position P_7 and angular attitude α_7 at the working position, according to P_6, the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1), and the conversion relation offset; moving the end flange to P_7, rotating the end flange to the angular attitude α_7, and performing the work at the working position with the working tool.
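The run-time branch of claim 14 contains one geometric step that is easy to invert the wrong way: given the camera-center position P_5 and the corrected attitude α_1 + (ROLL-V_2 − ROLL-V_1), recover the flange position P_6. Under the planar model assumed above (camera_center = flange + R(α) · TOOL_C; names are illustrative):

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def flange_from_camera_center(p5, alpha, tool_c):
    """End-flange position P_6 that places the camera center at p5 when the
    flange is at angular attitude alpha:
    camera_center = flange + R(alpha) @ tool_c  =>  flange = p5 - R(alpha) @ tool_c."""
    return np.asarray(p5, float) - rot(alpha) @ np.asarray(tool_c, float)
```

Feeding P_6 and the corrected attitude into the stored offset then yields (P_7, α_7), the pose from which the tool performs the work.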
15. An operation method based on visual recognition, applied to an operation device, wherein the operation device comprises an end effector and a motion mechanism, the end effector comprises an end flange, a camera and a working tool, the camera and the working tool are mounted on the end flange, and the motion mechanism can drive the end flange to move freely in an XY plane and to rotate about the axis of the end flange;
The operation method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera tool frame TOOL_C; according to TOOL_C, determining the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center when the end flange is at a first fixed angular attitude α_1; with the end flange at the first fixed angular attitude α_1, photographing a plurality of feature points on a marker, the relative position between the marker and the working position being fixed, and recording the current end flange position when each feature point is photographed; acquiring the positional relationship between each feature point and the camera center at the time of photographing, according to the image obtained by photographing each feature point and the conversion relation T_PC; acquiring the end flange position P_1 at which the camera center is aligned with a set relative position among the feature points, according to the positional relationship between each feature point and the camera center at the time of photographing, the current end flange position when each feature point is photographed, and TOOL_C; acquiring the angle ROLL-V_1 of the marker at calibration, according to the images obtained by photographing the feature points; acquiring the end flange position P_T and angular attitude α_T when the working tool reaches the working position in a desired posture;
obtaining the conversion relation offset between (P_1, α_1) and (P_T, α_T), according to P_1, α_1, P_T and α_T;
with the end flange at the first fixed angular attitude α_1, photographing each feature point on the marker again, and recording the current end flange position when each feature point is photographed; obtaining the positional relationship between each feature point and the camera center at the time of re-photographing, according to the images obtained by re-photographing each feature point and the conversion relation T_PC; acquiring the camera center position P_5 at which the camera center is again aligned with the set relative position among the feature points, according to the positional relationship between each feature point and the camera center at the time of re-photographing, the current end flange position when each feature point is re-photographed, and TOOL_C; acquiring the angle ROLL-V_2 of the marker at the time of operation, according to the images obtained by re-photographing each feature point; acquiring, according to the deviation between ROLL-V_2 and ROLL-V_1 and TOOL_C, the end flange position P_6 in the state where the camera center is at position P_5 and the end flange is at angular attitude α_1 + (ROLL-V_2 − ROLL-V_1); obtaining the end flange position P_7 and angular attitude α_7 at the working position, according to P_6, the angular attitude α_1 + (ROLL-V_2 − ROLL-V_1), and the conversion relation offset; moving the end flange to P_7, rotating the end flange to the angular attitude α_7, and performing the work at the working position with the working tool.
CN202011545117.XA 2020-12-24 2020-12-24 Calibration method and operation method based on visual recognition Active CN112598752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011545117.XA CN112598752B (en) 2020-12-24 2020-12-24 Calibration method and operation method based on visual recognition


Publications (2)

Publication Number Publication Date
CN112598752A CN112598752A (en) 2021-04-02
CN112598752B true CN112598752B (en) 2024-02-27

Family

ID=75200614


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471446A (en) * 2022-06-23 2022-12-13 上海江波龙数字技术有限公司 Slot position coordinate obtaining method and device and storage medium
CN116297531B (en) * 2023-05-22 2023-08-01 中科慧远视觉技术(北京)有限公司 Machine vision detection method, system, medium and equipment

Citations (8)

Publication number Priority date Publication date Assignee Title
CN108724190A (en) * 2018-06-27 2018-11-02 西安交通大学 Industrial robot digital twin system simulation method and device
CN109454634A (en) * 2018-09-20 2019-03-12 广东工业大学 A kind of Robotic Hand-Eye Calibration method based on flat image identification
CN110450163A (en) * 2019-08-20 2019-11-15 上海中车瑞伯德智能***股份有限公司 The general hand and eye calibrating method based on 3D vision without scaling board
CN110634164A (en) * 2019-10-16 2019-12-31 易思维(杭州)科技有限公司 Quick calibration method for vision sensor
CN111127568A (en) * 2019-12-31 2020-05-08 南京埃克里得视觉技术有限公司 Camera pose calibration method based on space point location information
KR102111655B1 (en) * 2019-11-01 2020-06-04 주식회사 뉴로메카 Automatic calibration method and apparatus for robot vision system
CN111649667A (en) * 2020-05-29 2020-09-11 新拓三维技术(深圳)有限公司 Flange pipeline end measuring method, measuring device and adapter structure
CN111791227A (en) * 2019-12-31 2020-10-20 深圳市豪恩声学股份有限公司 Robot hand-eye calibration method and device and robot

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
DE10345743A1 (en) * 2003-10-01 2005-05-04 Kuka Roboter Gmbh Method and device for determining the position and orientation of an image receiving device
WO2016023636A1 (en) * 2014-08-14 2016-02-18 Kuka Roboter Gmbh Carrier system for a manipulator


Non-Patent Citations (4)

Title
Flange-Based Hand-Eye Calibration Using a 3D Camera With High Resolution, Accuracy, and Frame Rate;Fang Wan等;《Front Robot AI 》;20200501;1-10 *
Robot calibration using a 3D vision-based measurement system with a single camera;José Maurı́cio S.T. Motta等;《Robotics and Computer-Integrated Manufacturing》;20011231;第17卷(第6期);487-497 *
Calibration of a robot polishing tool system based on a laser tracker;Sun Yilin;Fan Cheng;Chen Guodong;Gong Xun;Xu Hui;《Manufacturing Automation》;20141225(24);full text *
Research on hand-eye calibration of a snake-like arm robot with redundant degrees of freedom;Wang Da;Lou Xiaoping;Dong Mingli;Sun Peng;《Computer Measurement & Control》;20150825(08);full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant