CN116766183B - Mechanical arm control method and device based on visual image - Google Patents

Mechanical arm control method and device based on visual image

Info

Publication number
CN116766183B
CN116766183B (application number CN202310710829.XA)
Authority
CN
China
Prior art keywords
center point
grabbing
current picture
target object
target
Prior art date
Legal status
Active
Application number
CN202310710829.XA
Other languages
Chinese (zh)
Other versions
CN116766183A (en)
Inventor
陈国栋
姚军亭
贾风光
李志锋
丁斌
古缘
Current Assignee
Shandong Zhongqing Intelligent Technology Co ltd
Original Assignee
Shandong Zhongqing Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Zhongqing Intelligent Technology Co ltd filed Critical Shandong Zhongqing Intelligent Technology Co ltd
Priority to CN202310710829.XA priority Critical patent/CN116766183B/en
Publication of CN116766183A publication Critical patent/CN116766183A/en
Application granted granted Critical
Publication of CN116766183B publication Critical patent/CN116766183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1612 - Programme controls characterised by the hand, wrist, grip control
    • B25J 9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 - Vision controlled systems
    • B25J 15/00 - Gripping heads and other end effectors
    • B25J 15/02 - Gripping heads and other end effectors servo-actuated
    • B25J 15/0206 - Gripping heads and other end effectors servo-actuated comprising articulated grippers
    • B25J 15/022 - Gripping heads and other end effectors servo-actuated comprising articulated grippers actuated by articulated links
    • B25J 15/08 - Gripping heads and other end effectors having finger members

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a mechanical arm control method and device based on visual images. In the mechanical arm control device, each grabbing finger of the mechanical gripper is fixedly provided with an illumination optical fiber, and a laser can project indication light spots onto the workbench through the illumination optical fibers; the central camera is arranged between the grabbing fingers and used for shooting the workbench, so that the current picture shot by the central camera includes the target object and each indication light spot. The control device can identify the target object and the indication light spots in the current picture, connect all the indication light spots to form a grabbing range profile, and determine the contour center point of the grabbing range profile; the relative position between the mechanical arm and the target object is then quickly and accurately calibrated through the positional relationship between the contour center point and the target center point of the target object.

Description

Mechanical arm control method and device based on visual image
Technical Field
The application relates to the field of intelligent robots, in particular to a method and a device for controlling a mechanical arm based on visual images.
Background
At present, various automation industries are developing rapidly. For operations that are simple and repetitive but require high precision, manual operation introduces deviations, so a high-precision mechanical arm is required. Machine vision is a rapidly developing branch of artificial intelligence; in short, machine vision uses a machine instead of the human eye to make measurements and judgments. When an industrial mechanical arm grabs an object based on machine vision, a camera is required to acquire a picture of the object, determine the position information of the object and calibrate the relative position between the mechanical arm and the object, so that the mechanical arm can be controlled to perform the related operations to grab the object.
Among them, how to quickly and accurately calibrate the relative position between the mechanical arm and the target object is the most critical issue.
Disclosure of Invention
The present application aims to provide a method and a device for controlling a mechanical arm based on a visual image, which can improve the problems.
Embodiments of the present application are implemented as follows:
in a first aspect, the present application provides a visual image-based robotic arm control device, comprising:
the mechanical gripper comprises a gripping driving piece and at least two gripping fingers, wherein the gripping driving piece is used for driving the at least two gripping fingers to realize gripping actions;
the motion driving assembly is used for driving the mechanical gripper to move in a three-dimensional space;
the central camera is arranged on the mechanical gripper and between the at least two gripping fingers;
the illumination optical fibers, wherein the tail end of each illumination optical fiber is fixed on the corresponding grabbing finger through an optical fiber fixing device, and the tail end of each illumination optical fiber and the central camera are arranged to face the extending direction of the grabbing fingers;
the laser, wherein the output end of the laser is connected with the head end of each illumination optical fiber;
and the control device is respectively and electrically connected with the laser, the central camera, the motion driving assembly and the grabbing driving piece.
It can be understood that the application discloses a mechanical arm control device based on visual images, wherein each grabbing finger of a mechanical gripper is fixedly provided with an illumination optical fiber, and a laser can project indication light spots to a workbench through the illumination optical fiber; the central camera is arranged between the grabbing fingers and used for shooting the workbench, and the current picture shot by the central camera comprises a target object and each indication light spot; the control device can rapidly and accurately calibrate the relative position between the mechanical arm and the target object by analyzing the current picture.
In an alternative embodiment of the present application, the optical fiber fixing device includes an optical cylinder, a fixing piece and a beam expanding optical element; the tail end of the illumination optical fiber is inserted into the optical cylinder and fixed to the optical cylinder through the fixing piece, and the beam expanding optical element is arranged on the output light path of the illumination optical fiber and used for expanding the outgoing light beam of the illumination optical fiber.
It can be understood that the output light path of the illumination optical fiber is further provided with a beam expanding optical element, and the beam expanding optical element can be a lens assembly formed by a plurality of lenses and is used for expanding the outgoing light beam of the illumination optical fiber so as to enlarge the area of the indication light spot projected on the workbench.
In an optional embodiment of the present application, the optical fiber fixing device further includes a beam expanding driving member, where the beam expanding driving member is configured to drive the beam expanding optical element to expand the outgoing beam of the illumination optical fiber under the control of the control device.
It can be understood that the beam expanding optical element may be an electrically controlled beam expanding optical element, which expands the outgoing beam of the illumination fiber under the control of the control device, and only plays a role in transmitting the outgoing beam of the illumination fiber when the control command of the control device is not received.
In a second aspect, the present application provides a method for controlling a mechanical arm based on a visual image, which is applied to the control device of the mechanical arm control device based on a visual image according to any one of the first aspect, and the method for controlling a mechanical arm includes:
s1, sending a starting instruction to the laser so that the laser projects an indication light spot onto the workbench through each illumination optical fiber, and then obtaining a current picture shot by the central camera;
s2, identifying a target object in the current picture, and determining a target center point of the target object in the current picture;
s3, identifying the indication light spots in the current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile;
and S4, sending a motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object, so that the contour center point coincides with the target center point.
The labels S1, S2, etc. are only step identifiers; the steps of the method are not necessarily executed in ascending numerical order. For example, step S2 may be executed before step S1, which is not limited in this application.
It can be understood that the application discloses a mechanical arm control method based on a visual image, which is applied to the control device of any of the above mechanical arm control devices based on a visual image, and the method mainly comprises the following steps: after the indication light spots are projected onto the workbench, acquiring a current picture which is shot by the central camera and includes the target object and all the indication light spots; identifying the target object and the indication light spots in the current picture; connecting all the indication light spots to form a grabbing range profile, and determining the contour center point of the grabbing range profile; and quickly and accurately calibrating the relative position between the mechanical arm and the target object through the positional relationship between the contour center point and the target center point of the target object. When the contour center point coincides with the target center point, the mechanical gripper is judged to have moved directly above the target object, which is the most suitable position for the grabbing operation; the motion driving assembly is therefore controlled to drive the mechanical gripper to move towards the target object until the contour center point coincides with the target center point.
In an alternative embodiment of the present application, step S2 includes:
s21, identifying a target object in the current picture, and determining an object contour of the target object in the current picture;
s22, calculating an edge distance difference value corresponding to each pixel point in the object outline;
s23, finding out the pixel point with the smallest edge distance difference value in the object outline as a target center point of the target object.
It can be understood that the pixel point in the object contour with the smallest edge distance difference value is the target center point of the target object; the difference between its distances to the edge of the object contour along opposite directions is the smallest. When an external force is applied uniformly inward along the edge of the object contour, the force at this center point is the most balanced.
In an alternative embodiment of the present application, step S22 includes:
s221, taking each pixel point in the object contour as a target pixel point one by one;
s222, generating at least one direction line group by taking the target pixel point as an intersection point, wherein the direction line group comprises two direction lines which are perpendicular to each other;
s223, taking two intersection points of the direction line and the object contour as a first intersection point and a second intersection point respectively, taking the distance between the first intersection point and the target pixel point as a first distance value, and taking the distance between the second intersection point and the target pixel point as a second distance value;
s224, calculating an average value of the first distance values in each group of the direction line groups corresponding to the target pixel point to be used as a first average distance value, and calculating an average value of the second distance values in each group of the direction line groups corresponding to the target pixel point to be used as a second average distance value;
s225, taking the absolute value of the difference value between the first average distance value and the second average distance value as the edge distance difference value corresponding to the target pixel point.
In an alternative embodiment of the present application, step S3 includes:
s31, identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its light spot point;
s32, under the condition that the number of indication light spots in the current picture is 2, forming a grabbing line segment by connecting the light spot points of the two indication light spots with a straight line;
s33, taking the geometric center point of the grabbing line segment as a contour center point.
It will be appreciated that in the case of a two-finger mechanical gripper, the indication light spots projected by the illumination optical fibers on the left and right fingers represent the current projected positions of the two grabbing fingers on the workbench. When the mechanical gripper grabs the target object, the left finger and the right finger apply external force to the target object evenly so as to grab it. Therefore, when the center point of the grabbing line segment coincides with the target center point of the target object, the mechanical gripper has moved directly above the target object, which is the most suitable position for the grabbing operation.
In an alternative embodiment of the present application, step S3 includes:
s34, identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its light spot point;
s35, under the condition that the number of the indication light spots in the current picture is 3, sequentially connecting the light spot points of the indication light spots through straight lines to form a grabbing triangle profile;
s36, taking the geometric gravity center point of the grabbing triangle profile as a profile center point.
The geometric center of gravity point refers to the intersection point of the three medians of the triangle, which is also the average of the coordinates of its three vertices.
It will be appreciated that in the case of a three-finger gripper, the indication spots projected by the illuminating fibers at the three fingers may each represent the forward projected position of the current three gripping fingers on the table. When the mechanical gripper grabs the target object, the three fingers uniformly apply external force to the target object to grab the target object. Therefore, when the geometric center of gravity of the triangle profile coincides with the target center of the target object, the mechanical gripper moves to the position right above the target object, and the gripping operation is most suitable.
In an optional embodiment of the present application, the method for controlling a mechanical arm further includes: and sending a beam expanding instruction to the beam expanding driving piece to expand the emergent beam of the illumination optical fiber under the condition that the spot area of the indication spot in the current picture is smaller than a first area threshold value, so that the spot area reaches the first area threshold value.
It can be appreciated that the first area threshold may be formulated by a person skilled in the art according to specific situations, so as to prevent the spot area of the indication spot from being too small, which is not beneficial to the image recognition of the current picture.
In an optional embodiment of the present application, the method for controlling a mechanical arm further includes:
s5, under the condition that the contour center point coincides with the target center point, sending a descending instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object;
and S6, sending a grabbing instruction to the grabbing driving piece to drive the mechanical gripper to grab the target object under the condition that the spot area of the indicated light spot reaches a second area threshold value.
It will be appreciated that when the contour center point coincides with the target center point, it can be determined that the mechanical gripper has moved directly above the target object, but the remaining movement distance of the mechanical gripper in the longitudinal direction cannot yet be determined. Therefore, the current longitudinal distance between the mechanical gripper and the target object can be judged from the spot area of the indication light spots. The second area threshold may be formulated by a person skilled in the art according to circumstances; it serves as a criterion for determining whether the mechanical gripper has moved in the longitudinal direction to a position suitable for grabbing the target object.
In a third aspect, the present application provides a mechanical arm control device based on a visual image, including: a processor and a memory connected to each other; the memory is for storing a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method according to any of the second aspects.
In a fourth aspect, the present invention provides a computer readable storage medium storing a computer program comprising program instructions which when executed by a processor implement the steps of any of the methods of the second aspect.
Advantageous effects
The application discloses a mechanical arm control device based on visual images, wherein each grabbing finger of the mechanical gripper is fixedly provided with an illumination optical fiber, and a laser can project indication light spots onto the workbench through the illumination optical fibers; the central camera is arranged between the grabbing fingers and used for shooting the workbench, and the current picture shot by the central camera includes the target object and each indication light spot; the control device can quickly and accurately calibrate the relative position between the mechanical arm and the target object by analyzing the current picture.
The application discloses a mechanical arm control method based on visual images, which is applied to the control device of any mechanical arm control device based on visual images, and mainly comprises the following steps: after the indicating light spots are projected on the workbench, acquiring a current picture which is shot by the central camera and comprises a target object and all the indicating light spots; identifying a target object and an indication light spot in a current picture; connecting all the indicating light spots to form a grabbing range profile, and determining a profile center point of the grabbing range profile; and the relative position between the mechanical arm and the target object is rapidly and accurately calibrated through the position relation between the contour center point and the target center point of the target object.
In order to make the above objects, features and advantages of the present application more comprehensible, alternative embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a mechanical arm control device based on a visual image provided in the present application;
fig. 2 is a schematic flow chart of a method for controlling a mechanical arm based on a visual image provided in the present application;
FIG. 3 is a schematic view of a current frame in which the contour center point does not coincide with the target center point in the case where the mechanical gripper is a two-finger gripper;
FIG. 4 is a schematic diagram illustrating a calculation method of an edge distance difference value of a target pixel;
FIG. 5 is a schematic view of a current frame with a contour center point coincident with a target center point in the case of a two-finger gripper;
fig. 6 is a schematic diagram of a current frame in which a contour center point coincides with a target center point in the case where the mechanical gripper is a three-finger gripper.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In a first aspect, as shown in fig. 1, the present application provides a robot arm control device 100 based on a visual image, which includes: the mechanical gripper 1, which comprises a grabbing driving piece 10, a first grabbing finger 11 and a second grabbing finger 12, wherein the grabbing driving piece 10 is used for driving the first grabbing finger 11 and the second grabbing finger 12 to realize grabbing actions; a motion driving assembly (not shown) for driving the mechanical gripper 1 to move in three-dimensional space; the central camera 2, which is arranged on the mechanical gripper 1 and between the first grabbing finger 11 and the second grabbing finger 12; a first illumination optical fiber 31 and a second illumination optical fiber 32, wherein the tail end of the first illumination optical fiber 31 is fixed on the first grabbing finger 11 through a first optical fiber fixing device 41, the tail end of the second illumination optical fiber 32 is fixed on the second grabbing finger 12 through a second optical fiber fixing device 42, and the tail ends of the first illumination optical fiber 31 and the second illumination optical fiber 32 as well as the central camera 2 face the extending direction of the grabbing fingers; a laser (not shown in the figure), the output end of which is connected to the head ends of the first illumination optical fiber 31 and the second illumination optical fiber 32; and a control device (not shown in the figures), which is electrically connected to the laser, the central camera 2, the motion driving assembly and the grabbing driving piece 10, respectively.
In the embodiment of the present application, fig. 1 shows the case where the mechanical gripper is a two-finger gripper; in practice the number of grabbing fingers of the mechanical gripper may also be a positive integer greater than 2, for example a three-finger gripper, a four-finger gripper, and the like. Correspondingly, the number of illumination optical fibers is the same as the number of grabbing fingers, and one illumination optical fiber is fixed to each grabbing finger through an optical fiber fixing device.
It can be understood that the application discloses a mechanical arm control device based on visual images, wherein each grabbing finger of a mechanical gripper is fixedly provided with an illumination optical fiber, and a laser can project indication light spots to a workbench through the illumination optical fiber; the central camera is arranged between the grabbing fingers and used for shooting the workbench, and the current picture shot by the central camera comprises a target object and each indication light spot; the control device can rapidly and accurately calibrate the relative position between the mechanical arm and the target object by analyzing the current picture.
In an alternative embodiment of the present application, the first optical fiber fixing device 41 and the second optical fiber fixing device 42 are identical in structure. The optical fiber fixing device comprises an optical cylinder, a fixing piece and a beam expanding optical element; the tail end of the illumination optical fiber is inserted into the optical cylinder and fixed to the optical cylinder through the fixing piece, and the beam expanding optical element is arranged on the output light path of the illumination optical fiber and used for expanding the outgoing light beam of the illumination optical fiber.
It can be understood that the output light path of the illumination optical fiber is further provided with a beam expanding optical element, and the beam expanding optical element can be a lens assembly formed by a plurality of lenses and is used for expanding the outgoing light beam of the illumination optical fiber so as to enlarge the area of the indication light spot projected on the workbench.
In an alternative embodiment of the present application, the optical fiber fixing device further includes a beam expanding driving member for driving the beam expanding optical element to expand the outgoing beam of the illumination optical fiber under the control of the control device.
It can be understood that the beam expanding optical element may be an electrically controlled beam expanding optical element, which expands the outgoing beam of the illumination fiber under the control of the control device, and only plays a role in transmitting the outgoing beam of the illumination fiber when the control command of the control device is not received.
In a second aspect, as shown in fig. 2, the present application provides a method for controlling a robot arm based on a visual image, which is applied to the control device 100 of the robot arm control device based on a visual image according to any one of the first aspect, and includes:
s1, sending a starting instruction to the laser, and obtaining a current picture shot by the central camera after the laser projects an indication light spot to the workbench through the illumination optical fiber.
It will be appreciated that after the indication light spots have been projected onto the workbench, the current picture taken by the central camera includes both the target object and the respective indication light spots. The relative position between the mechanical arm and the target object can then be quickly and accurately calibrated through the positional relationship between the target object and the indication light spots in the current picture. The current frame 300 shown in fig. 3 includes both the irregularly contoured target object 50 and the two indication light spots 51 and 52.
S2, identifying a target object in the current picture, and determining a target center point of the target object in the current picture.
In this embodiment of the present application, the pixel point within the object contour having the smallest edge distance difference value is the target center point of the target object; the difference between its distances to the edge of the object contour along opposite directions is the smallest, as shown by the target center point a of the target object 50 in fig. 3.
And S3, identifying indication light spots in the current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile.
For example, the current frame 300 shown in fig. 5 includes two indication light spots 51 and 52, and connecting the indication light spots 51 and 52 forms a grabbing line segment; another current frame 400, shown in fig. 6, includes three indication light spots 61, 62 and 63, which are connected in sequence to form a grabbing triangle profile.
And S4, sending a motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object, so that the contour center point coincides with the target center point.
The labels S1, S2, etc. are only step identifiers; the steps of the method are not necessarily executed in ascending numerical order. For example, step S2 may be executed before step S1, which is not limited in this application.
It can be understood that the application discloses a mechanical arm control method based on a visual image, which is applied to the control device of any of the above mechanical arm control devices based on a visual image, and the method mainly comprises the following steps: after the indication light spots are projected onto the workbench, acquiring a current picture which is shot by the central camera and includes the target object and all the indication light spots; identifying the target object and the indication light spots in the current picture; connecting all the indication light spots to form a grabbing range profile, and determining the contour center point of the grabbing range profile; and quickly and accurately calibrating the relative position between the mechanical arm and the target object through the positional relationship between the contour center point and the target center point of the target object. When the contour center point coincides with the target center point, the mechanical gripper is judged to have moved directly above the target object, which is the most suitable position for the grabbing operation; the motion driving assembly is therefore controlled to drive the mechanical gripper to move towards the target object until the contour center point coincides with the target center point.
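The S1 to S4 flow above is essentially a visual-servoing loop: shoot a current picture, extract the two center points, and command the motion driving assembly until they coincide. The following Python sketch illustrates that loop under stated assumptions; it is not the patented implementation, and the hardware-facing callables (laser_on, grab_frame, move_gripper_xy) as well as the two image-analysis helpers are hypothetical stand-ins for the control device's interfaces.

```python
# Minimal sketch of the S1-S4 loop; all callables are hypothetical stand-ins.
from typing import Callable, Tuple

import numpy as np

Point = Tuple[float, float]

def align_gripper(laser_on: Callable[[], None],
                  grab_frame: Callable[[], np.ndarray],
                  find_target_center: Callable[[np.ndarray], Point],
                  find_contour_center: Callable[[np.ndarray], Point],
                  move_gripper_xy: Callable[[float, float], None],
                  tolerance_px: float = 2.0,
                  gain: float = 0.1) -> None:
    """Move the mechanical gripper until the contour center point of the
    grabbing range coincides with the target center point (steps S1-S4)."""
    laser_on()                                 # S1: project the indication light spots
    while True:
        frame = grab_frame()                   # S1: current picture from the central camera
        tx, ty = find_target_center(frame)     # S2: target center point of the target object
        cx, cy = find_contour_center(frame)    # S3: contour center point of the grabbing range
        dx, dy = tx - cx, ty - cy
        if abs(dx) < tolerance_px and abs(dy) < tolerance_px:
            break                              # the two center points coincide
        move_gripper_xy(gain * dx, gain * dy)  # S4: motion instruction toward the target object
```

In practice the pixel tolerance and the proportional gain would be tuned to the camera geometry; a fixed-step or PID motion law could be substituted without changing the structure of the loop.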
In an alternative embodiment of the present application, step S2 includes:
s21, identifying a target object in the current picture, and determining the object contour of the target object in the current picture.
As shown in fig. 3, object recognition is performed on the current picture 300, the target object 50 can be recognized, and the object contour is shown as the irregular closed solid line in the figure.
S22, calculating an edge distance difference value corresponding to each pixel point in the object outline.
S23, finding out the pixel point with the smallest edge distance difference value in the object outline as the target center point of the target object.
It can be understood that the pixel point in the object contour with the smallest edge distance difference value is the target center point of the target object; the difference between its distances to the edge of the object contour along opposite directions is the smallest. When an external force is applied uniformly inward along the edge of the object contour, the force at this center point is the most balanced.
In an alternative embodiment of the present application, step S22 includes:
s221, taking each pixel point in the object contour as a target pixel point one by one.
As shown in fig. 4, the edge distance difference calculation is performed by taking each pixel point in the target object 50 as the target pixel point one by one.
S222, generating at least one group of direction line groups by taking the target pixel point as an intersection point, wherein the direction line groups comprise two direction lines which are perpendicular to each other.
As shown in fig. 4, taking the target pixel point S as an example in the figure, at least one direction line group is generated by taking the target pixel point S as an intersection point, and fig. 4 only illustrates one direction line group, which includes an X direction line and a Y direction line that are perpendicular to each other.
S223, taking two intersection points of the direction line and the object contour as a first intersection point and a second intersection point respectively, taking the distance between the first intersection point and the target pixel point as a first distance value, and taking the distance between the second intersection point and the target pixel point as a second distance value.
With continued reference to fig. 4, the X direction line intersects the contour of the target object 50 at a first intersection point x1 and a second intersection point x2, where the distance between the first intersection point x1 and the target pixel point S is the first distance d1, and the distance between the second intersection point x2 and the target pixel point S is the second distance d2; similarly, the Y direction line intersects the contour of the target object 50 at a first intersection point y1 and a second intersection point y2, where the distance between the first intersection point y1 and the target pixel point S is the first distance d3, and the distance between the second intersection point y2 and the target pixel point S is the second distance d4.
S224, calculating an average value of first distance values in each group of direction line groups corresponding to the target pixel point to be used as a first average distance value, and calculating an average value of second distance values in each group of direction line groups corresponding to the target pixel point to be used as a second average distance value.
With continued reference to fig. 4, it can be seen that the target pixel point S corresponds to two first distances d1 and d3 and two second distances d2 and d4; at this time, the first average distance value corresponding to the target pixel point S is (d1 + d3)/2 and the second average distance value is (d2 + d4)/2.
S225, taking the absolute value of the difference value between the first average distance value and the second average distance value as the edge distance difference value corresponding to the target pixel point.
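As a concrete illustration of steps S21 to S225, the sketch below segments the target object with a simple Otsu threshold (an assumption; the patent does not prescribe a segmentation method), evaluates one direction line group (the X and Y lines of fig. 4) at every pixel inside the object contour by ray-marching to the contour, and returns the pixel with the smallest edge distance difference value as the target center point.

```python
# Minimal sketch of steps S21-S225; the segmentation step is an assumption.
import cv2
import numpy as np

def target_center_point(gray: np.ndarray) -> tuple:
    # S21: object contour via Otsu thresholding (assumes the object appears
    # brighter than the workbench; invert the mask otherwise)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = (mask > 0).astype(np.uint8)
    h, w = mask.shape

    def ray_len(y, x, dy, dx):
        # distance from (x, y) to the object contour along direction (dx, dy)
        n = 0
        while 0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy, x + dx]:
            y, x, n = y + dy, x + dx, n + 1
        return n

    best_diff, best_pt = np.inf, None
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):                    # S221: every pixel inside the contour
        d1 = ray_len(y, x, 0, +1)               # first intersection of the X direction line
        d2 = ray_len(y, x, 0, -1)               # second intersection of the X direction line
        d3 = ray_len(y, x, +1, 0)               # first intersection of the Y direction line
        d4 = ray_len(y, x, -1, 0)               # second intersection of the Y direction line
        diff = abs((d1 + d3) / 2.0 - (d2 + d4) / 2.0)   # S224-S225
        if diff < best_diff:
            best_diff, best_pt = diff, (x, y)   # S23: smallest edge distance difference
    return best_pt
```

Scanning every interior pixel is adequate for a sketch; a real controller could coarsen the search grid or add further direction line groups (for example the two diagonals) for a more isotropic estimate.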
In an alternative embodiment of the present application, step S3 includes:
s31, identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its light spot point.
S32, under the condition that the number of the indication light spots in the current picture is 2, the two indication light spots are connected through a straight line to form a grabbing line segment.
S33, taking the geometric center point of the grabbing line segment as the contour center point.
It will be appreciated that in the case of a two-finger mechanical gripper, the indication light spots projected by the illumination optical fibers on the left and right fingers represent the current projected positions of the two grabbing fingers on the workbench. When the mechanical gripper grabs the target object, the left finger and the right finger apply external force to the target object evenly so as to grab it. Therefore, when the center point of the grabbing line segment coincides with the target center point of the target object, the mechanical gripper has moved to the position directly above the target object as shown in fig. 5, and the grabbing operation is most suitably performed.
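Before the grabbing range profile can be built, step S31/S34 has to locate the indication light spots themselves and reduce each one to its light spot point. A minimal sketch follows, assuming the laser spots are the brightest, reasonably compact regions in the current picture; the intensity and size thresholds are illustrative values, not figures from the patent.

```python
# Minimal sketch of step S31/S34: find the bright laser spots and return the
# geometric center point (light spot point) of each one.  Thresholds are illustrative.
import cv2
import numpy as np

def find_light_spot_points(gray: np.ndarray,
                           intensity_threshold: int = 240,
                           min_area_px: int = 20) -> list:
    _, bright = cv2.threshold(gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(bright.astype(np.uint8))
    # label 0 is the background; keep the centroid of every sufficiently large bright blob
    return [tuple(centroids[i]) for i in range(1, n_labels)
            if stats[i, cv2.CC_STAT_AREA] >= min_area_px]
```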
In an alternative embodiment of the present application, step S3 includes:
s34, identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its light spot point.
And S35, under the condition that the number of the indication light spots in the current picture is 3, sequentially connecting the light spots of the indication light spots through straight lines to form a grabbing triangle profile.
S36, taking the geometric gravity center point of the grabbing triangle profile as a profile center point.
The geometric center of gravity point refers to the intersection point of the three medians of the triangle, which is also the average of the coordinates of its three vertices. It will be appreciated that in the case of a three-finger gripper, the indication light spots projected by the illumination optical fibers on the three fingers represent the current projected positions of the three grabbing fingers on the workbench. When the mechanical gripper grabs the target object, the three fingers apply external force to the target object evenly so as to grab it. Therefore, when the geometric center of gravity of the grabbing triangle profile coincides with the target center point of the target object, the mechanical gripper has moved to the position directly above the target object as shown in fig. 6, and the grabbing operation is most suitably performed.
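Once the light spot points are known, the contour center point of steps S32-S33 (two spots) and S35-S36 (three spots) reduces to a one-line computation, because the midpoint of the grabbing line segment and the geometric center of gravity of the grabbing triangle are both the mean of the vertex coordinates. A minimal sketch:

```python
# Minimal sketch of steps S32-S33 / S35-S36: the contour center point is the
# midpoint of the grabbing line segment (2 spots) or the geometric center of
# gravity of the grabbing triangle (3 spots); in both cases the mean of the points.
import numpy as np

def contour_center_point(light_spot_points: list) -> tuple:
    pts = np.asarray(light_spot_points, dtype=float)
    if len(pts) not in (2, 3):
        raise ValueError("expected 2 or 3 indication light spots")
    return tuple(pts.mean(axis=0))
```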
In an optional embodiment of the present application, the method for controlling a mechanical arm further includes: and sending a beam expanding instruction to the beam expanding driving piece to expand the outgoing beam of the illumination fiber under the condition that the spot area of the indication spot in the current picture is smaller than the first area threshold value, so that the spot area reaches the first area threshold value.
It can be appreciated that the first area threshold may be formulated by a person skilled in the art according to specific situations, so as to prevent the spot area of the indication spot from being too small, which is not beneficial to the image recognition of the current picture.
In an optional embodiment of the present application, the method for controlling a mechanical arm further includes:
s5, under the condition that the contour center point coincides with the target center point, a descending instruction is sent to the motion driving assembly to drive the mechanical gripper to move towards the target object;
and S6, sending a grabbing instruction to the grabbing driving piece under the condition that the spot area of the indication spot reaches a second area threshold value, and driving the mechanical grabbing hand to grab the target object.
It will be appreciated that when the contour center point coincides with the target center point, it can be determined that the mechanical gripper has moved directly above the target object, but the remaining movement distance of the mechanical gripper in the longitudinal direction cannot yet be determined. Therefore, the current longitudinal distance between the mechanical gripper and the target object can be judged from the spot area of the indication light spots. The second area threshold may be formulated by a person skilled in the art according to circumstances; it serves as a criterion for determining whether the mechanical gripper has moved in the longitudinal direction to a position suitable for grabbing the target object.
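The two area-threshold embodiments above amount to a small amount of glue logic around a spot-area measurement, sketched below. The area is taken as the count of bright pixels, the hardware-facing callables (expand_beam, descend, grab) are hypothetical stand-ins, and since the patent does not fix whether the measured spot area grows or shrinks as the gripper descends, the criterion for reaching the second area threshold is passed in as a predicate.

```python
# Minimal sketch of the beam-expansion check and the descend-then-grab logic
# (S5-S6); all callables, thresholds and the area criterion are hypothetical.
from typing import Callable

import cv2
import numpy as np

def spot_area(gray: np.ndarray, intensity_threshold: int = 240) -> float:
    """Total bright-pixel area of the indication light spots in the current picture."""
    _, bright = cv2.threshold(gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    return float(cv2.countNonZero(bright))

def expand_if_too_small(gray: np.ndarray,
                        expand_beam: Callable[[], None],
                        first_area_threshold: float) -> None:
    # send a beam expanding instruction when the spots are too small to recognise reliably
    if spot_area(gray) < first_area_threshold:
        expand_beam()

def descend_and_grab(grab_frame: Callable[[], np.ndarray],
                     descend: Callable[[float], None],
                     grab: Callable[[], None],
                     second_threshold_reached: Callable[[float], bool],
                     step_mm: float = 1.0) -> None:
    # S5: descend in small steps while the two center points remain coincident;
    # S6: grab once the measured spot area satisfies the second area threshold
    while not second_threshold_reached(spot_area(grab_frame())):
        descend(step_mm)
    grab()
```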
In a third aspect, the present application provides a visual image based robotic arm control device comprising one or more processors and memory. The processor and the memory are connected through a bus. The memory is for storing a computer program comprising program instructions and the processor is for executing the program instructions stored by the memory. Wherein the processor is configured to invoke the program instructions to perform the operations of any of the methods of the second aspect.
It should be appreciated that in embodiments of the present invention, the processor may be a central processing unit (Central Processing Unit, CPU), and may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may include read only memory and random access memory and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
In a fourth aspect, the present invention provides a computer readable storage medium storing a computer program comprising program instructions which when executed by a processor implement the steps of any of the methods of the second aspect.
The computer readable storage medium may be an internal storage unit of the terminal device of any of the foregoing embodiments, for example, a hard disk or a memory of the terminal device. The computer readable storage medium may be an external storage device of the terminal device, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided in the terminal device. Further, the computer-readable storage medium may further include both an internal storage unit and an external storage device of the terminal device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal device. The above-described computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In several embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods in the various embodiments of the present invention. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The terms "first," "second," "the first," or "the second," as used in various embodiments of the present disclosure, may modify various components without regard to order and/or importance, but these terms do not limit the corresponding components. The above description is only configured for the purpose of distinguishing an element from other elements. For example, the first user device and the second user device represent different user devices, although both are user devices. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "coupled" (operatively or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the one element is directly connected to the other element or the one element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it will be understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), then no element (e.g., a third element) is interposed therebetween.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the element defined by the phrase "comprising one … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element, and furthermore, elements having the same name in different embodiments of the present application may have the same meaning or may have different meanings, a particular meaning of which is to be determined by its interpretation in this particular embodiment or by further combining the context of this particular embodiment.
The above description is only illustrative of the principles of the technology being applied to alternative embodiments of the present application. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but it is intended to cover other embodiments in which any combination of features described above or equivalents thereof is possible without departing from the spirit of the invention. Such as the above-described features and technical features having similar functions (but not limited to) disclosed in the present application are replaced with each other.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (stated condition or event)" may be interpreted as "when determined" or "in response to determination" or "when detected (stated condition or event)" or "in response to detection (stated condition or event), depending on the context.
The foregoing is merely an alternative embodiment of the present application and is not intended to limit the present application, and various modifications and variations may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (5)

1. The mechanical arm control method based on the visual image is applied to the control device of the mechanical arm control device based on the visual image, and the mechanical arm control device based on the visual image comprises the following components:
the mechanical gripper comprises a gripping driving piece and at least two gripping fingers, wherein the gripping driving piece is used for driving the at least two gripping fingers to realize gripping actions;
the motion driving assembly is used for driving the mechanical gripper to move in a three-dimensional space;
the central camera is arranged on the mechanical gripper and between the at least two gripping fingers;
the illumination optical fibers, wherein the tail end of each illumination optical fiber is fixed on the corresponding grabbing finger through an optical fiber fixing device, and the tail end of each illumination optical fiber and the central camera are arranged to face the extending direction of the grabbing fingers;
the laser, wherein the output end of the laser is connected with the head end of each illumination optical fiber;
the control device is respectively and electrically connected with the laser, the central camera, the motion driving assembly and the grabbing driving piece;
the optical fiber fixing device comprises an optical cylinder, a fixing piece and a beam expanding optical element;
the tail end of the illumination optical fiber is inserted into the light cylinder and is fixed on the light cylinder through the fixing piece, the beam expanding optical element is further arranged on the output light path of the illumination optical fiber and is used for expanding the emergent light beam of the illumination optical fiber;
the optical fiber fixing device further comprises a beam expanding driving piece, wherein the beam expanding driving piece is used for driving the beam expanding optical element to expand an emergent beam of the illumination optical fiber under the control of the control device;
the mechanical arm control method based on the visual image is characterized by comprising the following steps of:
sending a starting instruction to the laser so that the laser projects an indication light spot onto the workbench through each illumination optical fiber, and then obtaining a current picture shot by the central camera;
identifying a target object in the current picture, and determining a target center point of the target object in the current picture;
identifying the indication light spots in the current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile;
sending a motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object, so that the contour center point coincides with the target center point;
the identifying the target object in the current picture, and determining the target center point of the target object in the current picture includes:
identifying a target object in the current picture, and determining an object contour of the target object in the current picture;
calculating an edge distance difference value corresponding to each pixel point in the object outline;
finding out a pixel point with the minimum edge distance difference value from the object contour as a target center point of the target object;
the calculating the edge distance difference value corresponding to each pixel point in the object outline comprises the following steps:
taking each pixel point in the object contour as a target pixel point one by one;
generating at least one group of direction line groups by taking the target pixel point as an intersection point, wherein the direction line groups comprise two direction lines which are perpendicular to each other;
taking two intersection points of the direction line and the object contour as a first intersection point and a second intersection point respectively, taking the distance between the first intersection point and the target pixel point as a first distance value, and taking the distance between the second intersection point and the target pixel point as a second distance value;
calculating an average value of the first distance values in each group of the direction line groups corresponding to the target pixel point to be used as a first average distance value, and calculating an average value of the second distance values in each group of the direction line groups corresponding to the target pixel point to be used as a second average distance value;
and taking the absolute value of the difference value between the first average distance value and the second average distance value as the edge distance difference value corresponding to the target pixel point.
2. The method for controlling a robot arm based on a visual image according to claim 1, wherein,
the identifying the indication light spots in the current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile comprises the following steps:
identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its light spot point;
under the condition that the number of indication light spots in the current picture is 2, forming a grabbing line segment by connecting the light spot points of the two indication light spots with a straight line;
and taking the geometric center point of the grabbing line segment as a contour center point.
3. The visual image-based mechanical arm control method according to claim 1, wherein
the identifying the indication light spots in the current picture, connecting the indication light spots in sequence with straight lines to form a grabbing-range contour, and determining a contour center point of the grabbing-range contour comprises:
identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as a spot center point;
in the case where the number of indication light spots in the current picture is 3, connecting the spot center points of the indication light spots in sequence with straight lines to form a grabbing triangle contour;
and taking the centroid of the grabbing triangle contour as the contour center point.
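Claims 2 and 3 collapse to the same arithmetic once the spot center points are known, because the midpoint of two points and the centroid of three points are both the mean of the points. The sketch below assumes the indication light spots are segmented by a simple brightness threshold and that OpenCV 4.x is available; the helper names and the threshold value are illustrative, as the claims do not say how the spots themselves are detected.

```python
import cv2
import numpy as np

def spot_center_points(picture_gray, brightness_threshold=200):
    """Geometric center point of each indication light spot in a grayscale picture."""
    _, binary = cv2.threshold(picture_gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers

def contour_center_point(spot_centers):
    """Contour center point of the grabbing range formed by 2 or 3 spot center points."""
    points = np.asarray(spot_centers, dtype=float)
    if len(points) == 2:        # claim 2: midpoint of the grabbing line segment
        return tuple(points.mean(axis=0))
    if len(points) == 3:        # claim 3: centroid of the grabbing triangle contour
        return tuple(points.mean(axis=0))
    raise ValueError("expected 2 or 3 indication light spots")

# Synthetic current picture with three bright indication light spots.
picture = np.zeros((240, 320), dtype=np.uint8)
for x, y in [(100, 60), (220, 80), (160, 180)]:
    cv2.circle(picture, (x, y), 6, 255, -1)
print(contour_center_point(spot_center_points(picture)))  # approximately (160.0, 106.7)
```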
4. The visual image-based mechanical arm control method according to claim 3, further comprising:
in the case where the spot area of an indication light spot in the current picture is smaller than a first area threshold, sending a beam-expanding instruction to the beam-expanding driving piece to expand the beam emerging from the illumination optical fiber, so that the spot area reaches the first area threshold.
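One plausible reading of the condition in claim 4 is to measure each segmented spot's pixel area and compare the smallest with the first area threshold; when it falls short, the controller would send the beam-expanding instruction. The segmentation threshold, the use of cv2.contourArea and the function names below are assumptions for illustration only.

```python
import cv2
import numpy as np

def indication_spot_areas(picture_gray, brightness_threshold=200):
    """Pixel areas of the indication light spots (same assumed segmentation as above)."""
    _, binary = cv2.threshold(picture_gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.contourArea(c) for c in contours]

def beam_expansion_needed(picture_gray, first_area_threshold):
    """True when some spot area is below the first area threshold, i.e. when a
    beam-expanding instruction should be sent to the beam-expanding driving piece."""
    areas = indication_spot_areas(picture_gray)
    return bool(areas) and min(areas) < first_area_threshold

# A single small spot: expansion would be requested until the area threshold is met.
picture = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(picture, (160, 120), 4, 255, -1)
print(beam_expansion_needed(picture, first_area_threshold=100.0))  # True
```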
5. The visual image-based mechanical arm control method according to claim 3, further comprising:
in the case where the contour center point coincides with the target center point, sending a descent instruction to the motion driving assembly to drive the mechanical gripper to descend towards the target object;
and in the case where the spot area of the indication light spot reaches a second area threshold, sending a grabbing instruction to the grabbing driving piece to drive the mechanical gripper to grab the target object.
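Claim 5 amounts to a small closed loop: once the contour center point and the target center point coincide, keep issuing descent instructions and re-checking the spot area, then issue the grabbing instruction when the area reaches the second threshold (the spots appear larger as the gripper approaches the workbench). The sketch below only shows that control flow; get_current_picture, measure_spot_area, send_descend_step and send_grab are hypothetical stand-ins for the central camera, an area measurement such as the one sketched under claim 4, the motion driving assembly and the grabbing driving piece.

```python
def descend_and_grab(get_current_picture, measure_spot_area,
                     send_descend_step, send_grab,
                     second_area_threshold, max_steps=200):
    """Descend towards the target object and grab once the indication light spot
    area reaches the second area threshold; returns True if the grab was issued.

    All callables are hypothetical hardware/vision stand-ins, and max_steps is a
    safety bound that the claim itself does not recite."""
    for _ in range(max_steps):
        picture = get_current_picture()
        if measure_spot_area(picture) >= second_area_threshold:
            send_grab()          # grabbing instruction to the grabbing driving piece
            return True
        send_descend_step()      # descent instruction to the motion driving assembly
    return False
```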

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310710829.XA CN116766183B (en) 2023-06-15 2023-06-15 Mechanical arm control method and device based on visual image

Publications (2)

Publication Number Publication Date
CN116766183A CN116766183A (en) 2023-09-19
CN116766183B true CN116766183B (en) 2023-12-26

Family

ID=87995750

Country Status (1)

Country Link
CN (1) CN116766183B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0385528A2 (en) * 1989-02-27 1990-09-05 Jean-Louis Tournier Process and system for the determination of the position and relative orientation of two objects in space
CN101048058A (en) * 2006-03-30 2007-10-03 阿森姆布里昂股份有限公司 A component placement unit as well as a component placement device comprising such a component placement unit
CN106507966B (en) * 2011-08-19 2014-07-16 东南大学 Carry the emergent robot of coring and its control method of four-degree-of-freedom mechanical hand
CN105798909A (en) * 2016-04-29 2016-07-27 上海交通大学 Calibration system and method of zero position of robot based on laser and vision
CN108568810A (en) * 2017-03-08 2018-09-25 本田技研工业株式会社 Posture method of adjustment
CN110231036A (en) * 2019-07-19 2019-09-13 广东博智林机器人有限公司 A kind of robotic positioning device and method based on cross laser and machine vision
CN110355754A (en) * 2018-12-15 2019-10-22 深圳铭杰医疗科技有限公司 Robot eye system, control method, equipment and storage medium
CN112192603A (en) * 2020-09-30 2021-01-08 中石化四机石油机械有限公司 Minor repair platform oil pipe pushing and supporting manipulator device and using method
CN114347015A (en) * 2021-12-09 2022-04-15 华南理工大学 Robot grabbing control method, system, device and medium
CN115609591A (en) * 2022-11-17 2023-01-17 上海仙工智能科技有限公司 2D Marker-based visual positioning method and system and composite robot

Similar Documents

Publication Publication Date Title
CN108044627B (en) Method and device for detecting grabbing position and mechanical arm
CN110712202B (en) Special-shaped component grabbing method, device and system, control device and storage medium
CN113524194A (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN106256512B (en) Robot device including machine vision
CN111085997A (en) Capturing training method and system based on point cloud acquisition and processing
US20210023718A1 (en) Three-dimensional data generation device and robot control system
CN111604942A (en) Object detection device, control device, and computer program for object detection
Kirschner et al. YuMi, come and play with Me! A collaborative robot for piecing together a tangram puzzle
CN113379849A (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN112947458B (en) Robot accurate grabbing method based on multi-mode information and computer readable medium
CN114355953A (en) High-precision control method and system of multi-axis servo system based on machine vision
US11724396B2 (en) Goal-oriented control of a robotic arm
US20190099892A1 (en) Identification code reading apparatus and machine learning device
CN113954076B (en) Robot precision assembling method based on cross-modal prediction assembling scene
Dewi et al. Finger Cue for Mobile Robot Motion Control
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
CN116766183B (en) Mechanical arm control method and device based on visual image
CN114092428A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN114037595A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN116385437B (en) Multi-view multi-image fusion method and device
WO2023082417A1 (en) Grabbing point information obtaining method and apparatus, electronic device, and storage medium
CN115972192A (en) 3D computer vision system with variable spatial resolution
CN114022342A (en) Acquisition method and device for acquisition point information, electronic equipment and storage medium
CN112184819A (en) Robot guiding method and device, computer equipment and storage medium
Diaz et al. Path planning based on an artificial vision system and optical character recognition (OCR)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant