CN115922695A - Grabbing method based on plane vision guide mechanical arm - Google Patents

Grabbing method based on plane vision guide mechanical arm

Info

Publication number
CN115922695A
CN115922695A
Authority
CN
China
Prior art keywords
image
coordinate system
mechanical arm
tag
grabbing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211372629.XA
Other languages
Chinese (zh)
Inventor
胡进杰
张颂哲
练洪威
杨焯荣
曾广胜
吴凯平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xinhao Precision Technology Co ltd
Original Assignee
Guangzhou Xinhao Precision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xinhao Precision Technology Co ltd filed Critical Guangzhou Xinhao Precision Technology Co ltd
Priority to CN202211372629.XA priority Critical patent/CN115922695A/en
Publication of CN115922695A publication Critical patent/CN115922695A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)

Abstract

The invention provides a grabbing method based on a plane-vision-guided mechanical arm. A part is placed in the arm's grabbing area, a camera shoots an image of the grabbing area, and the image is converted to grayscale. After a data type conversion, the pixel values of this image and of a background image are subtracted, the absolute value is taken, and the difference image is saved. The difference image is then denoised, binarized, and black-white inverted, and the part's outer contour is extracted to obtain the contour's center-point coordinates in the image coordinate system and its deflection angle. Finally, the coordinates and rotation angle of the part's contour in the tool coordinate system are calculated and sent to the mechanical arm, which is guided to complete the grab. By accurately identifying the coordinates and rotation angle of the part to be grabbed, the invention guides the mechanical arm's motion to an accurate grab; it can detect parts anywhere in the grabbing area, places few constraints on how parts are laid out, and achieves high recognition speed and grabbing efficiency.

Description

Grabbing method based on plane vision guide mechanical arm
Technical Field
The invention relates to the technical field of robots, in particular to a grabbing method based on a plane vision guide mechanical arm.
Background
At present, the degree of automation on manufacturing production lines keeps rising, and integrating a mechanical arm with a plane vision guidance system on an automated line allows metal parts to be located and grabbed under vision guidance.
However, existing robot arms have limited vision capability and cannot accurately identify the position and posture of a product part, so parts must be laid out in a prescribed way before the arm can identify and grab them, which keeps grabbing efficiency low.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a grabbing method based on a plane vision guide mechanical arm.
The technical scheme of the invention is as follows: a grabbing method based on a plane vision guide mechanical arm comprises the following steps:
S1) placing the part in the manipulator's grabbing area, and then shooting an image X₁ of the grabbing area with a camera;
S2) performing graying on the image X₁ of the grabbing area containing the part to obtain a grayscale image X₂;
S3) using the astype() function of OpenCV to perform data type conversion on the grayscale image X₂ from step S2) and the grayscale background image M₁ of the grabbing area, then subtracting the pixel values of X₂ and M₁, taking the absolute value, and saving the difference image X₃;
wherein the pixels for which the absolute difference between X₂ and M₁ is 0 form the background of the grabbing area, and the pixels for which it is nonzero belong to the part;
S4) using the morphological function morphologyEx() of OpenCV to denoise the image X₃ from step S3), and performing binarization and black-white inversion with the threshold() function, finally obtaining the image X₄ of the part's pixels;
S5) using the findContours() function of OpenCV to extract the outer contour of the part in the image X₄ from step S4), obtaining the center-point coordinates (x_tag, y_tag) of the contour in the image coordinate system and the deflection angle θ_r of the contour;
S6) calculating the coordinates (X_tag, Y_tag) and rotation angle θ_tag, in the tool coordinate system, of the part's contour in the pixel image X₄;
S7) sending the obtained coordinates (X_tag, Y_tag) and rotation angle θ_tag to the mechanical arm, guiding the mechanical arm to complete grabbing the part.
Preferably, the following is included before step S1):
before placing any part, shooting an image of the empty grabbing area with the camera, performing graying on it, taking the grayed image as the background image M₁, and saving it; the graying is:
Gray=0.299*R+0.587*G+0.114*B.
Preferably, the following is also included before step S1): arbitrarily selecting a point in the camera's shooting area as the origin of the tool coordinate system, the origin being chosen outside the manipulator's grabbing area; determining the base coordinate system of the manipulator by a manipulator modeling method; and reading, with the teach pendant, the coordinates (X_tool, Y_tool) of the tool-coordinate-system origin in the base coordinate system.
Preferably, in step S2), the image X taken by the camera is 1 The formula for performing graying processing is as follows:
Gray=0.299*R+0.587*G+0.114*B。
Preferably, in step S4), the denoising specifically comprises the following steps:
performing 5×5 median filtering on the image X₃ from step S3), defining a kernel size, and performing a morphological closing operation (using OpenCV's morphologyEx()) on the median-filtered picture;
finally, thresholding the image so that pixel values greater than 127 become 255 and values not greater than 127 become 0, which produces a picture containing only the workpiece.
Preferably, in step S4), the binarization processing is: the pixel value of the image larger than 127 is changed to 255, and the pixel value of the image lower than 127 is changed to 0.
Preferably, in step S4), the black-and-white inversion process is performed such that a dot with a pixel value of 255 is changed to 0 and a dot with a pixel value of 0 is changed to 255; wherein, when the pixel point is 255, the image is pure black, and when the pixel point is 0, the image is pure white.
Preferably, in step S6), a pixel partial image X of the part is calculated 4 Coordinates (X) of the contour in the tool coordinate system tag ,Y tag ) To the angle of rotation theta tag (ii) a The method specifically comprises the following steps:
s61), calculating the relative displacement, specifically as follows:
Figure BDA0003923905620000031
in the formula, X p ,Y p Is the relative displacement in the x and y directions, respectively, (x) 0 ,y 0 ) Coordinates of the origin of the tool coordinate system in the image coordinate system; theta is an included angle between the tool coordinate system and the image coordinate system; x is the number of err Error offset in x direction, y err Error displacement in the y direction; a = f/H, where f is the focal length and H is the height of the camera;
s62), coordinates (X) tag ,Y tag ) To the angle of rotation theta tag The method comprises the following specific steps:
Figure BDA0003923905620000041
wherein (X) tool ,Y tool ) A coordinate point of an origin of a tool coordinate system under a base coordinate system; theta.theta. tool Is the angle between the tool coordinate system and the base coordinate system.
The invention has the following beneficial effects:
1. By accurately identifying the coordinates and rotation angle of the part to be grabbed, the invention guides the mechanical arm's motion and finally achieves an accurate grab;
2. The invention can detect parts anywhere in the grabbing area, places few constraints on how parts are laid out, and thereby improves grabbing efficiency;
3. By comparing the shot part image with a background image, the invention accurately identifies the position and posture of the part and transmits them to the mechanical arm for grabbing; shooting and recognition are fast, further improving grabbing efficiency.
Drawings
FIG. 1 is a schematic diagram of the three coordinate systems according to an embodiment of the present invention;
FIG. 2 is a diagram of an image coordinate system according to an embodiment of the present invention.
Detailed Description
The following further describes embodiments of the present invention in conjunction with the attached figures:
the embodiment provides a grabbing method based on a plane vision guide mechanical arm, which comprises the following steps:
S1) placing the part in the manipulator's grabbing area, and then shooting an image X₁ of the grabbing area with a camera;
S2) performing graying on the image X₁ of the grabbing area containing the part to obtain a grayscale image X₂; the image X₁ shot by the camera is grayed using the formula:
Gray=0.299*R+0.587*G+0.114*B.
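These weights are the same ones OpenCV's cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) applies. A minimal NumPy sketch of the conversion (array contents and shapes are illustrative, not taken from the patent):

```python
import numpy as np

def to_gray(bgr: np.ndarray) -> np.ndarray:
    """Grayscale conversion with the patent's weights,
    Gray = 0.299*R + 0.587*G + 0.114*B, assuming BGR channel
    order as produced by cv2.imread()/cv2.VideoCapture."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.rint(gray).astype(np.uint8)  # round, then clamp to 8-bit

# White stays 255; a pure-red pixel maps to round(0.299 * 255) = 76.
img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)   # white, in (B, G, R) order
img[0, 1] = (0, 0, 255)       # red
gray = to_gray(img)           # -> [[255, 76]]
```

Rounding before the uint8 cast mirrors cvtColor's behavior and avoids truncation artifacts from floating-point sums.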
s3), utilizing the astype () function of OpenCV to perform gray level processing on the image X processed in the step S2) 2 And graying processingThe manipulator captures the background image M of the area 1 Performing data type conversion to make image X 2 And a background image M 1 The data formats of the data are the same; then image X 2 With the background image M 1 Subtracting the pixel values of (a) and taking the absolute value and saving the subtracted image X 3
Wherein, the image X 2 With the background image M 1 The part of the image X with the absolute value of 0 obtained by subtracting the pixel values is used as the background of the manipulator grabbing area 2 With the background image M 1 The part of which the absolute value after subtraction of the pixel values is not 0 is a part;
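The type conversion and absolute difference of step S3) can be sketched as follows. Casting to a signed type first is what the astype() call is for: subtracting uint8 arrays directly would wrap around instead of going negative, and cv2.absdiff(x2, m1) performs the same computation in one call. The array contents are illustrative:

```python
import numpy as np

def subtract_background(x2: np.ndarray, m1: np.ndarray) -> np.ndarray:
    """Step S3: convert both grayscale images to a signed type,
    subtract pixel values, and take the absolute value (image X3).
    Zero pixels are background; nonzero pixels belong to the part."""
    diff = x2.astype(np.int16) - m1.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

background = np.full((4, 4), 100, dtype=np.uint8)  # background image M1
scene = background.copy()
scene[1:3, 1:3] = 30                               # a darker part placed on it
x3 = subtract_background(scene, background)        # 70 where the part is, else 0
```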
S4) using the morphological function morphologyEx() of OpenCV to denoise the image X₃ from step S3), and performing binarization and black-white inversion with the threshold() function, finally obtaining the image X₄ of the part's pixels.
In this embodiment, the denoising is as follows:
performing 5×5 median filtering on the image X₃ from step S3), defining a kernel size, and performing a morphological closing operation (using OpenCV's morphologyEx()) on the median-filtered picture;
finally, thresholding the image so that pixel values greater than 127 become 255 and values not greater than 127 become 0, which produces a picture containing only the workpiece.
The binarization is: pixel values of the image greater than 127 become 255, and values not greater than 127 become 0.
The black-white inversion changes points with a pixel value of 255 to 0 and points with a pixel value of 0 to 255; a pixel value of 255 is pure white, and a pixel value of 0 is pure black.
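The binarization and black-white inversion above collapse into a single inverse threshold; a NumPy sketch follows (the patent does this with OpenCV's threshold(), and the preceding denoising would be cv2.medianBlur(x3, 5) plus cv2.morphologyEx(..., cv2.MORPH_CLOSE, kernel)):

```python
import numpy as np

def binarize_and_invert(x3: np.ndarray) -> np.ndarray:
    """Binarization (>127 -> 255, else 0) followed by black-white
    inversion (255 <-> 0).  The combined effect is one inverse
    threshold, equivalent to
    cv2.threshold(x3, 127, 255, cv2.THRESH_BINARY_INV)."""
    binary = np.where(x3 > 127, 255, 0).astype(np.uint8)
    return 255 - binary   # image X4

x3 = np.array([[200, 127, 50]], dtype=np.uint8)
x4 = binarize_and_invert(x3)   # -> [[0, 255, 255]]
```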
S5) using the findContours() function of OpenCV to extract the outer contour of the part in the image X₄ processed in step S4), obtaining the center-point coordinates (x_tag, y_tag) of the contour in the image coordinate system and the deflection angle θ_r of the contour, as shown in FIG. 2.
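A sketch of the center-point computation in step S5). The patent names only findContours(); calling cv2.minAreaRect() on the returned contour is one plausible way to obtain both the center (x_tag, y_tag) and the deflection angle θ_r, but that pairing is an assumption. To keep the example dependency-free, a plain NumPy centroid of the part pixels stands in for the contour center:

```python
import numpy as np

def part_center(mask: np.ndarray) -> tuple:
    """Centroid of the part pixels (nonzero entries of `mask`) in
    image coordinates.  With OpenCV one would instead call
    cv2.findContours() and cv2.minAreaRect(), whose rotated
    rectangle yields (x_tag, y_tag) and the angle theta_r."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 4:8] = 255                 # a 3x4 rectangular part
x_tag, y_tag = part_center(mask)     # -> (5.5, 3.0)
```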
S6) calculating the coordinates (X_tag, Y_tag) and rotation angle θ_tag, in the tool coordinate system, of the part's contour in the pixel image X₄; specifically:
S61) calculating the relative displacement, as follows:
X_p = a*[(x_tag − x_0)*cosθ + (y_tag − y_0)*sinθ] + x_err
Y_p = a*[−(x_tag − x_0)*sinθ + (y_tag − y_0)*cosθ] + y_err
where X_p, Y_p are the relative displacements in the x and y directions respectively; (x_0, y_0) are the coordinates of the tool-coordinate-system origin in the image coordinate system; θ is the angle between the tool coordinate system and the image coordinate system; x_err is the error offset in the x direction and y_err the error offset in the y direction; a = f/H, where f is the focal length and H is the height of the camera;
S62) calculating the coordinates (X_tag, Y_tag) and rotation angle θ_tag, as follows:
X_tag = X_tool + X_p
Y_tag = Y_tool + Y_p
θ_tag = θ_tool + θ_r
where (X_tool, Y_tool) are the coordinates of the tool-coordinate-system origin in the base coordinate system, and θ_tool is the angle between the tool coordinate system and the base coordinate system;
S7) sending the obtained coordinates (X_tag, Y_tag) and rotation angle θ_tag to the mechanical arm, guiding the mechanical arm to complete grabbing the part.
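The patent publishes the S61)/S62) formulas only as figure images, so the sketch below is a hedged reconstruction from the variable definitions in the text: rotate the pixel offset from the tool origin by θ, scale by a = f/H, add the error offsets, then translate by the tool origin's base coordinates and add the tool-to-base angle. The signs and the use of a as a multiplier are assumptions, not taken from the patent.

```python
import math

def image_to_tool(x_tag, y_tag, theta_r,
                  x0, y0, theta, a, x_err, y_err,
                  X_tool, Y_tool, theta_tool):
    """Steps S61)/S62) as reconstructed from the variable
    definitions (signs and scale direction are assumptions).
    S61: rotate the pixel offset from the tool origin into the
    tool axes, scale by a = f/H, add the error offsets.
    S62: translate by the tool origin's base coordinates and add
    the tool-to-base angle to the contour's deflection angle."""
    dx, dy = x_tag - x0, y_tag - y0
    X_p = a * (dx * math.cos(theta) + dy * math.sin(theta)) + x_err
    Y_p = a * (-dx * math.sin(theta) + dy * math.cos(theta)) + y_err
    return X_tool + X_p, Y_tool + Y_p, theta_tool + theta_r

# With theta = 0, a = 1 and zero error offsets the mapping reduces
# to a pure translation by the tool origin's base coordinates:
X_tag, Y_tag, th = image_to_tool(5.0, 3.0, 0.1,
                                 2.0, 1.0, 0.0, 1.0, 0.0, 0.0,
                                 10.0, 20.0, 0.5)
```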
Preferably, in this embodiment, the following steps are further included before step S1):
before placing any part, shooting an image of the empty grabbing area with the camera, performing graying on it, taking the grayed image as the background image M₁, and saving it; the graying is:
Gray=0.299*R+0.587*G+0.114*B.
Preferably, the following is also included before step S1): arbitrarily selecting a point in the camera's shooting area as the origin of the tool coordinate system, the origin being chosen outside the manipulator's grabbing area; determining the base coordinate system of the manipulator by a manipulator modeling method; and reading, with the teach pendant, the coordinates (X_tool, Y_tool) of the tool-coordinate-system origin in the base coordinate system. The tool coordinate system, base coordinate system, and image coordinate system of this embodiment are shown in FIG. 1.
The foregoing embodiments and description have been presented only to illustrate the principles and preferred embodiments of the invention, and various changes and modifications may be made therein without departing from the spirit and scope of the invention as hereinafter claimed.

Claims (8)

1. A grabbing method based on a plane vision guide mechanical arm, characterized by comprising the following steps:
S1) placing the part in the manipulator's grabbing area, and then shooting an image X₁ of the grabbing area with a camera;
S2) performing graying on the image X₁ of the grabbing area containing the part to obtain a grayscale image X₂;
S3) using the astype() function of OpenCV to perform data type conversion on the grayscale image X₂ from step S2) and the grayscale background image M₁ of the grabbing area, then subtracting the pixel values of X₂ and M₁, taking the absolute value, and saving the difference image X₃;
wherein the pixels for which the absolute difference between X₂ and M₁ is 0 form the background of the grabbing area, and the pixels for which it is nonzero belong to the part;
S4) using the morphological function morphologyEx() of OpenCV to denoise the image X₃ from step S3), and performing binarization and black-white inversion with the threshold() function, finally obtaining the image X₄ of the part's pixels;
S5) using the findContours() function of OpenCV to extract the outer contour of the part in the image X₄ from step S4), obtaining the center-point coordinates (x_tag, y_tag) of the contour in the image coordinate system and the deflection angle θ_r of the contour;
S6) calculating the coordinates (X_tag, Y_tag) and rotation angle θ_tag, in the tool coordinate system, of the part's contour in the pixel image X₄;
S7) sending the obtained coordinates (X_tag, Y_tag) and rotation angle θ_tag to the mechanical arm, guiding the mechanical arm to complete grabbing the part.
2. The grabbing method based on the plane vision guide mechanical arm according to claim 1, characterized in that before step S1) the method further comprises:
before placing any part, shooting an image of the empty grabbing area with the camera, performing graying on it, taking the grayed image as the background image M₁, and saving it; the graying is:
Gray=0.299*R+0.587*G+0.114*B.
3. The grabbing method based on the plane vision guide mechanical arm according to claim 1, characterized in that the following is included before step S1): arbitrarily selecting a point in the camera's shooting area as the origin of the tool coordinate system, the origin being chosen outside the manipulator's grabbing area; determining the base coordinate system of the manipulator by a manipulator modeling method; and reading, with the teach pendant, the coordinates (X_tool, Y_tool) of the tool-coordinate-system origin in the base coordinate system.
4. The grabbing method based on the plane vision guide mechanical arm according to claim 1, characterized in that in step S2) the image X₁ shot by the camera is grayed using the formula:
Gray=0.299*R+0.587*G+0.114*B.
5. The grabbing method based on the plane vision guide mechanical arm according to claim 1, characterized in that in step S4) the denoising specifically comprises:
performing 5×5 median filtering on the image X₃ from step S3) using OpenCV's morphological function morphologyEx(), defining a kernel size, and performing a morphological closing operation on the median-filtered picture.
6. The grabbing method based on the plane vision guide mechanical arm according to claim 1, characterized in that in step S4) the binarization is: pixel values of the image greater than 127 become 255, and values not greater than 127 become 0.
7. The grabbing method based on the plane vision guide mechanical arm according to claim 6, characterized in that in step S4) the black-white inversion changes a pixel value of 255 to 0 and a pixel value of 0 to 255; a pixel value of 255 is pure white, and a pixel value of 0 is pure black.
8. The grabbing method based on the plane vision guide mechanical arm according to claim 1, characterized in that in step S6) calculating the coordinates (X_tag, Y_tag) and rotation angle θ_tag of the part's contour in the tool coordinate system specifically comprises:
S61) calculating the relative displacement, as follows:
X_p = a*[(x_tag − x_0)*cosθ + (y_tag − y_0)*sinθ] + x_err
Y_p = a*[−(x_tag − x_0)*sinθ + (y_tag − y_0)*cosθ] + y_err
where X_p, Y_p are the relative displacements in the x and y directions respectively; (x_0, y_0) are the coordinates of the tool-coordinate-system origin in the image coordinate system; θ is the angle between the tool coordinate system and the image coordinate system; x_err is the error offset in the x direction and y_err the error offset in the y direction; a = f/H, where f is the focal length and H is the height of the camera;
S62) calculating the coordinates (X_tag, Y_tag) and rotation angle θ_tag, as follows:
X_tag = X_tool + X_p
Y_tag = Y_tool + Y_p
θ_tag = θ_tool + θ_r
where (X_tool, Y_tool) are the coordinates of the tool-coordinate-system origin in the base coordinate system, and θ_tool is the angle between the tool coordinate system and the base coordinate system.
CN202211372629.XA 2022-11-03 2022-11-03 Grabbing method based on plane vision guide mechanical arm Pending CN115922695A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211372629.XA CN115922695A (en) 2022-11-03 2022-11-03 Grabbing method based on plane vision guide mechanical arm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211372629.XA CN115922695A (en) 2022-11-03 2022-11-03 Grabbing method based on plane vision guide mechanical arm

Publications (1)

Publication Number Publication Date
CN115922695A true CN115922695A (en) 2023-04-07

Family

ID=86655004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211372629.XA Pending CN115922695A (en) 2022-11-03 2022-11-03 Grabbing method based on plane vision guide mechanical arm

Country Status (1)

Country Link
CN (1) CN115922695A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689716A (en) * 2023-12-15 2024-03-12 广州赛志***科技有限公司 Plate visual positioning, identifying and grabbing method, control system and plate production line
CN117689716B (en) * 2023-12-15 2024-05-17 广州赛志***科技有限公司 Plate visual positioning, identifying and grabbing method, control system and plate production line

Similar Documents

Publication Publication Date Title
CN109785317B (en) Automatic pile up neatly truss robot's vision system
CN110163853B (en) Edge defect detection method
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN108416809B (en) Steel drum threaded cap pose recognition method based on machine vision
CN111652085B (en) Object identification method based on combination of 2D and 3D features
CN113902810B (en) Robot gear chamfering processing method based on parallel binocular stereoscopic vision
CN110866903B (en) Ping-pong ball identification method based on Hough circle transformation technology
CN112529858A (en) Welding seam image processing method based on machine vision
CN113034600B (en) Template matching-based texture-free planar structure industrial part identification and 6D pose estimation method
CN112318485B (en) Object sorting system and image processing method and device thereof
CN112497219B (en) Columnar workpiece classifying and positioning method based on target detection and machine vision
CN113146172A (en) Multi-vision-based detection and assembly system and method
CN115922695A (en) Grabbing method based on plane vision guide mechanical arm
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN112084964A (en) Product identification apparatus, method and storage medium
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN113781413B (en) Electrolytic capacitor positioning method based on Hough gradient method
CN115205286A (en) Mechanical arm bolt identification and positioning method for tower-climbing robot, storage medium and terminal
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
CN112338898B (en) Image processing method and device of object sorting system and object sorting system
CN113705564A (en) Pointer type instrument identification reading method
CN117237391A (en) Contour detection method based on machine vision
CN110899147B (en) Laser scanning-based online stone sorting method for conveyor belt
CN113052794A (en) Image definition recognition method based on edge features
CN116594351A (en) Numerical control machining unit system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination