CN115609591A - 2D Marker-based visual positioning method and system and composite robot - Google Patents

2D Marker-based visual positioning method and system and composite robot

Info

Publication number
CN115609591A
CN115609591A (application CN202211463733.XA)
Authority
CN
China
Prior art keywords
marker
point
points
image
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211463733.XA
Other languages
Chinese (zh)
Other versions
CN115609591B (en)
Inventor
王益亮
陆蕴凡
石岩
李华伟
沈锴
陈忠伟
赵越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xiangong Intelligent Technology Co ltd
Original Assignee
Shanghai Xiangong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xiangong Intelligent Technology Co ltd
Priority to CN202211463733.XA
Publication of CN115609591A
Application granted
Publication of CN115609591B
Legal status: Active
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
      • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
        • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
          • B25J9/00: Programme-controlled manipulators
            • B25J9/16: Programme controls
              • B25J9/1602: Programme controls characterised by the control system, structure, architecture
                • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
              • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
                • B25J9/1661: characterised by task planning, object-oriented languages
              • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                • B25J9/1697: Vision controlled systems
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
              • G06T7/136: involving thresholding
            • G06T7/70: Determining position or orientation of objects or cameras
            • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
          • G06T2207/00: Indexing scheme for image analysis or image enhancement
            • G06T2207/10: Image acquisition modality
              • G06T2207/10004: Still image; Photographic image
            • G06T2207/20: Special algorithmic details
              • G06T2207/20112: Image segmentation details
                • G06T2207/20164: Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a 2D Marker-based visual positioning method and system and a composite robot. The method comprises the following steps: fixing a 2D Marker beside a target object; after hand-eye calibration, acquiring a 2D Marker image with the camera and performing threshold segmentation to obtain a binary image; searching the binary image for all contours and performing corner detection and circular-reference detection on them, identifying the coordinates of the inner and outer corners of the border feature and the radius and center coordinates of each circular feature, and ordering the points with the corner nearest the circle centers as the starting point; after refining each point to sub-pixel accuracy, calculating the planar pose of the 2D Marker; and then, from the acquired planar pose, teaching the transformation between the target object and the 2D Marker so as to obtain the pose of the target object in the base coordinate system, thereby improving the pose-calculation accuracy of the 2D Marker.

Description

2D Marker-based visual positioning method and system and composite robot
Technical Field
The invention relates to visual positioning technology, and in particular to a 2D Marker-based visual positioning method and system and a composite robot.
Background
With the intelligent and digital development of the warehouse logistics industry, AMRs (Autonomous Mobile Robots) for transport and mechanical arms for grasping are widely used in industrial production across many fields. In recent years, the rapid development of smart factories has continuously raised the technical requirements in the field of smart logistics; simple transport AMRs and fixed mechanical arms can no longer satisfy the demand for automated transport of parts between complex production lines, so the composite robot, combining an AMR with a mechanical arm, has emerged. Affected by factors such as positioning and navigation technology and the factory environment, the positioning accuracy of the composite robot alone cannot enable the mechanical arm to grasp or place goods precisely; a vision system therefore needs to be mounted at the end of the composite robot's arm to compensate and correct the robot's positioning error, so that the composite robot can grasp and place accurately.
The end-of-arm vision system of a composite robot follows either a 3D or a 2D scheme. The 3D scheme carries a 3D camera and collects point-cloud data for recognition; although it can identify the target object directly, its recognition error is usually large owing to the imaging precision of 3D cameras, and 3D cameras are also expensive, bulky, heavy, and slow in recognition cycle, which limits their application in composite robots. The 2D scheme carries a 2D camera, which is inexpensive, small, light, fast in recognition cycle, and highly accurate, and therefore has much wider application scenarios in composite robots.
In general, a 2D camera cannot directly recognize the three-dimensional pose of a target object; it must recognize with the help of a specific Marker, exploiting the fixed spatial relationship between the Marker and the grasped target to achieve accurate grasping and placing by the composite robot.
For example, Chinese patent publication No. CN111516006B discloses a composite-robot operation method based on a specific mark, which uses an ArUco tag or a customized simple tag for positioning to obtain the tag's coordinates in the robot-arm coordinate system. In that technique, however, recognizing the tag yields only the tag's own coordinates in the arm coordinate system, which means the arm can only grasp at the tag's position; when there is a large distance between the tag position and the grasping position, the robot cannot grasp directly.
That technical scheme is therefore greatly limited by the usage scene: the tag pose must coincide with the grasping pose, or grasping is impossible. Secondly, the tag used by that technique computes the pose from only 4 control points, and the identified control points are not refined to sub-pixel accuracy, so the overall pose-calculation error is large and accurate grasping by the composite robot is difficult.
Disclosure of Invention
The invention mainly aims to provide a 2D Marker-based visual positioning method and system and a composite robot, so as to improve the pose calculation accuracy of the 2D Marker.
In order to achieve the above object, according to a first aspect of the present invention, there is provided a 2D Marker-based visual positioning method, comprising the steps of:
step S100, fixing a 2D Marker beside the position of the target object, wherein the marking features of the 2D Marker comprise: a rectangular border feature with distinct color contrast, and several circular features of different radii clustered near the same corner inside the border feature;
step S200, acquiring a 2D Marker image by a camera after hand-eye calibration, carrying out threshold segmentation to obtain a binary image, then carrying out contour searching to obtain all contours in the image, respectively carrying out corner detection and circular reference detection to identify the coordinates of inner and outer corners of a frame feature and the corresponding radius and the coordinates of a central point of each circular feature, and carrying out point location sequencing by taking the corner point nearest to each central point as a starting point;
step S300, after refining each point to sub-pixel accuracy, calculating the planar pose of the 2D Marker, the steps comprising: establishing a spatial coordinate system Ow with the 2D Marker as the XOY plane according to the known physical size of the 2D Marker, matching each corner point and circle-center point obtained in step S200 with its three-dimensional key point in the Ow coordinate system, and, with the camera intrinsics known, calculating the planar pose of the 2D Marker by a PnP algorithm;
and step S400, according to the plane pose acquired in the step S300, teaching the conversion relation between the target object and the 2D Marker so as to acquire the pose of the target object in the base coordinate system.
In a possible preferred embodiment, step S200 further includes an image denoising step: and carrying out noise reduction on the acquired 2D Marker image by adopting a Gaussian smoothing algorithm.
In a possible preferred embodiment, in the step S200, the acquired 2D Marker image is adaptively threshold-segmented by using a maximum inter-class variance method, so as to obtain a binary image.
In a possible preferred embodiment, the step of obtaining all contours in the image in step S200 and performing corner detection comprises: performing polygon fitting on the contour by connecting the first and last points of the contour curve with a straight line, computing the distance from every contour point to this line, and finding the maximum distance d_max; defining a tolerance D: if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the point corresponding to d_max is kept and the contour is split into two sub-contours at that coordinate point, the method being repeated on the sub-contours, the coordinate points finally kept being the vertices of the fitted polygon; and screening quadrilaterals from the polygons and computing each angle's deviation from a right angle, the quadrilateral being regarded as the rectangular border when the deviation meets a preset condition, with each vertex of the polygon taken as a corner point.
In a possible preferred embodiment, the circular reference detection step comprises: performing convexity, inertia-ratio, and circularity judgments on the polygon to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
In a possible preferred embodiment, the sub-pixelation processing step comprises:

Let $q$ be the sub-pixel point to be found, and let $p_i$ be points in the neighborhood of $q$ whose coordinates are known, with $G_i$ the gray-level gradient at $p_i$. If $p_i$ lies on a pixel edge, the gradient direction of the pixel at $p_i$ is perpendicular to the edge direction; since the vector $(q - p_i)$ lies along the edge direction, the dot product of $(q - p_i)$ and the gradient vector is 0:

$$G_i^{\top}\,(q - p_i) = 0$$

Expanding the equation and rearranging terms:

$$G_i G_i^{\top}\, q = G_i G_i^{\top}\, p_i$$

Several points $p_i$ are collected near each corner and each is given a weight $w_i$ according to its distance from the center; an equation system is constructed from the formula above and solved for $q$ by least squares:

$$q = \Bigl(\sum_i w_i\, G_i G_i^{\top}\Bigr)^{-1} \sum_i w_i\, G_i G_i^{\top}\, p_i$$
To achieve the above object, according to a second aspect of the present invention, there is also provided a 2D Marker-based visual positioning system for recognizing the 2D Marker as described above, wherein the visual positioning system comprises:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method according to any one of claims 1 to 6, for the control unit, camera, mechanical arm, processing unit, and information output unit to invoke and execute in due course;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used for coordinating:
after hand-eye calibration, the camera is driven by the mechanical arm to acquire a 2D Marker image;
the processing unit is used to denoise the 2D Marker image, perform threshold segmentation to obtain a binary image, then search for all contours in the image and perform corner detection and circular-reference detection on them, identifying the coordinates of the inner and outer corners of the border feature and the radius and center coordinates of each circular feature, and ordering the points with the corner nearest the circle centers as the starting point; each point is then refined to sub-pixel accuracy and the planar pose of the 2D Marker is calculated, comprising: establishing a spatial coordinate system Ow with the 2D Marker as the XOY plane according to the known physical size of the 2D Marker, matching each previously obtained corner point and circle-center point with its three-dimensional key point in the Ow coordinate system, and, with the camera intrinsics known, calculating the planar pose of the 2D Marker by a PnP algorithm, so as to teach the transformation between the target object and the 2D Marker and obtain the pose of the target object in the arm base coordinate system;
and the information output unit is used for outputting the pose of the target object under the base coordinate system of the mechanical arm.
In a possible preferred embodiment, the step of obtaining all contours in the image and performing corner detection comprises: performing polygon fitting on the contour by connecting the first and last points of the contour curve with a straight line, computing the distance from every contour point to this line, and finding the maximum distance d_max; defining a tolerance D: if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the point corresponding to d_max is kept and the contour is split into two sub-contours at that coordinate point, the method being repeated on the sub-contours, the coordinate points finally kept being the vertices of the fitted polygon; and screening quadrilaterals from the polygons and computing each angle's deviation from a right angle, the quadrilateral being regarded as the rectangular border when the deviation meets a preset condition, with each vertex of the polygon taken as a corner point.
In a possible preferred embodiment, the circular reference detection step comprises: performing convexity, inertia-ratio, and circularity judgments on the polygon to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
In order to achieve the above object, according to a third aspect of the present invention, there is also provided a composite robot comprising a visual grasping unit, the visual grasping unit being any of the 2D Marker-based visual positioning systems described above.
Through the 2D Marker marking features and the corresponding recognition method, the 2D Marker-based visual positioning method and system and the composite robot provided by the invention markedly improve the pose-calculation accuracy of the 2D Marker, and in turn the accuracy with which the composite robot grasps/places the target object. In addition, by teaching, the transformation between the 2D Marker and the object to be grasped is computed, so the 2D Marker can be installed at any position and angle near the object to be grasped, solving the problem that the installation position of a traditional 2D Marker is strongly limited by the scene.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a diagram of the method steps of the first embodiment of the present invention;
FIG. 2 is a logic diagram of the first embodiment of the present invention;
FIG. 3 is a schematic diagram of the processing flow for identifying the 2D Marker pose in the first embodiment of the invention;
FIG. 4 is a schematic structural diagram of the 2D Marker in the first embodiment of the invention;
FIG. 5 is a schematic diagram of the 2D Marker corner-point ordering in the first embodiment of the present invention;
FIG. 6 is a schematic diagram of the composite robot performing a grasping task based on the 2D Marker visual positioning method in the first embodiment of the present invention;
FIG. 7 is a schematic structural diagram of the 2D Marker-based visual positioning system in the second embodiment of the invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the specific technical solution of the present invention is described clearly and completely below with reference to the embodiments. It should be apparent that the embodiments described herein are only some, and not all, of the embodiments of the present invention. It should be noted that, as will be apparent to those of ordinary skill in the art, the embodiments and the features of the embodiments in this application may be combined with each other provided they do not conflict. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the disclosure and protection scope of the present invention.
Furthermore, the terms "first", "second", "S100", "S200", and the like in the description, claims, and drawings of the present invention are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the invention described herein can be practiced in sequences other than those described herein. Likewise, the terms "including" and "having", and any variations thereof, are intended to cover non-exclusive inclusion. Unless expressly stated or limited otherwise, the terms "disposed", "mounted", and "connected" are to be construed broadly: a connection may, for example, be fixed, removable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal between two elements. The specific meanings of the above terms in this context can be understood by those skilled in the art according to the specific situation in combination with the prior art.
The following takes the visual positioning and grasping of a target object by a composite robot as an example. The composite robot in this example consists of an AMR (Autonomous Mobile Robot), a mechanical arm, and a computer vision system: the AMR is responsible for movement, the mechanical arm for picking and placing goods, and the computer vision system serves as the eyes of the arm to adjust its precise pick-and-place. Clearly, compared with a single-function transport robot, the composite robot can complete complex cooperative tasks such as freight transport and goods pick-and-place.
(I)
As shown in FIGS. 1 to 6, according to the first aspect of the present invention, the steps of the 2D Marker-based visual positioning method are as follows:
step S100, fixing the 2D Marker beside the position of the target, for example, keeping the relative position relationship between the 2D Marker (hereinafter referred to as Marker/mark) and the target unchanged.
Step S200, a camera after hand-eye calibration collects a 2D Marker image, a binary image is obtained through threshold segmentation, contour searching is carried out, all contours in the image are obtained, corner point detection and circular reference detection are respectively carried out, coordinates of inner and outer corner points of frame features and corresponding radiuses and coordinates of central points of all circular features are identified, and point location sequencing is carried out by taking the corner point closest to each central point as a starting point.
Step S300, after refining each point to sub-pixel accuracy, calculate the planar pose of the 2D Marker, comprising: establishing a spatial coordinate system Ow with the 2D Marker as the XOY plane according to the known physical size of the 2D Marker, matching each corner point and circle-center point obtained in step S200 with its three-dimensional key point in the Ow coordinate system, and, with the camera intrinsics known, calculating the planar pose of the 2D Marker by a PnP algorithm.
And step S400, according to the plane pose acquired in the step S300, teaching the conversion relation between the target object and the 2D Marker so as to acquire the pose of the target object in the base coordinate system.
Specifically, the goal of the composite robot's visual grasping is to obtain the pose of the target object in the mechanical-arm base coordinate system. A pose consists of two parts: the three-dimensional space coordinates $(x, y, z)$ and the spatial rotation $(r_x, r_y, r_z)$. For calculation, the rotation is converted into rotation-matrix form and combined with the space coordinates into a homogeneous matrix $T$. The pose of the target object in the base coordinate system is:

$$T^{target}_{base} = T^{end}_{base}\; T^{cam}_{end}\; T^{target}_{cam}$$

where $T^{end}_{base}$ is the pose of the flange center at the end of the mechanical arm in the base coordinate system, $T^{cam}_{end}$ is the transformation from the camera to the arm end, also called the hand-eye matrix, and $T^{target}_{cam}$ is the pose of the target object in the camera coordinate system.

According to this formula, obtaining the target pose $T^{target}_{base}$ requires $T^{end}_{base}$, $T^{cam}_{end}$, and $T^{target}_{cam}$. Of these, $T^{end}_{base}$, the end pose of the arm, can be read directly from the cooperative arm and is treated as a known quantity, while $T^{cam}_{end}$, the transformation from the camera to the end flange, must be obtained through calibration.
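By way of illustration only, the chain of transforms above can be composed directly once each pose is expressed as a 4x4 homogeneous matrix. The following minimal NumPy sketch (not part of the claimed method; all numeric values are illustrative placeholders) shows the composition:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative placeholders: in practice T_base_end is read from the arm
# controller, T_end_cam comes from hand-eye calibration, and T_cam_target
# from Marker recognition (PnP).
T_base_end   = make_T(np.eye(3), [0.40, 0.00, 0.50])
T_end_cam    = make_T(np.eye(3), [0.00, 0.05, 0.10])
T_cam_target = make_T(np.eye(3), [0.00, 0.00, 0.30])

# Chain the transforms exactly as in the formula above.
T_base_target = T_base_end @ T_end_cam @ T_cam_target
print(T_base_target[:3, 3])   # target position in the base frame
```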
For example, the composite robot of the present disclosure adopts the "eye-in-hand" configuration, i.e., the camera is mounted at the end of the mechanical arm. Hand-eye calibration is the precondition for visual recognition and grasping, and its accuracy is decisive for grasping accuracy. Calibration of a 2D-camera hand-eye system is divided into two steps: intrinsic calibration and extrinsic calibration.
1. Intrinsic (internal parameter) calibration

The camera intrinsics are inherent properties of the camera, consisting of the focal lengths $(f_x, f_y)$ in the $x$ and $y$ directions, the principal-point offsets $(c_x, c_y)$, and a series of distortion coefficients; once the camera's focal length and focus ring are fixed, the intrinsics are fixed. According to the pinhole camera model:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

where $(u, v)$ is a pixel coordinate point and $(X, Y, Z)$ is a space coordinate point. Intrinsic calibration usually requires a specific calibration object, commonly a checkerboard board, a circle-grid board, or a ChArUco board. The calibration board is a plane in three-dimensional space; under the projection relation above an imaging plane is obtained, and from the coordinates of corresponding points on the two planes the homography matrix $H$ between them can be computed. The pattern and dimensions of the calibration board are known and the corner points in the image can be extracted by a corner detection algorithm, so that

$$s\,p = H\,P$$

where $p$ is a point coordinate in the image, $P$ is the corresponding point coordinate on the calibration board, and $H$ is the homography between the two planes. When several calibration-board images are collected, several groups of homography matrices are obtained; $H$ is solved by least squares and decomposed to obtain the camera intrinsic matrix $K$.
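For illustration, this planar-target procedure corresponds to the standard calibration available in OpenCV; a minimal sketch follows, assuming a 9x6-corner checkerboard with 25 mm squares (pattern size, square size, and image paths are illustrative assumptions, and real captures are required):

```python
import cv2
import numpy as np

def calibrate_intrinsics(image_paths, pattern=(9, 6), square=0.025):
    """Planar-target intrinsic calibration; pattern/square size illustrative."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        ok, corners = cv2.findChessboardCorners(gray, pattern)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]

    # OpenCV solves the per-view homographies internally and decomposes
    # them into the shared intrinsic matrix K plus distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

# Usage with hypothetical capture paths:
# K, dist = calibrate_intrinsics(["calib_00.png", "calib_01.png"])
```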
2. Extrinsic (external parameter) calibration

Extrinsic calibration computes the positional relationship of the camera at the end of the mechanical arm, called the hand-eye matrix; when the camera is mounted at the end beside the gripper, the configuration is called "eye-in-hand". In this configuration the calibration board is placed at a fixed position, and $T^{cal}_{cam}$ denotes the transformation between the calibration board and the camera. Images of the board are collected while moving the arm end; since the relation between the arm base coordinate system and the calibration board is fixed and the relation between the camera and the arm end is what is sought, for any two arm poses $i$ and $j$:

$$T^{end_i}_{base}\; T^{cam}_{end}\; T^{cal}_{cam_i} = T^{end_j}_{base}\; T^{cam}_{end}\; T^{cal}_{cam_j}$$

which can be converted into the classic form $A X = X B$. By moving the arm end, several groups of corresponding relations are obtained, and the hand-eye matrix $T^{cam}_{end}$ is solved with Tsai's two-step method.
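As an illustrative sketch of the $AX = XB$ solution, OpenCV exposes Tsai's method through cv2.calibrateHandEye; the example below feeds it self-consistent synthetic poses (a stand-in for poses read from the arm controller and from board recognition) and recovers the simulated hand-eye matrix:

```python
import cv2
import numpy as np

def rand_T(rng):
    """Random rigid transform (rotation via Rodrigues, small translation)."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, (3, 1)))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.uniform(-0.2, 0.2, 3)
    return T

rng = np.random.default_rng(0)
X = rand_T(rng)          # ground-truth hand-eye transform (unknown in practice)
board = rand_T(rng)      # fixed calibration-board pose in the base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):      # ten simulated arm stations
    g2b = rand_T(rng)    # in practice: read from the arm controller
    t2c = np.linalg.inv(g2b @ X) @ board   # implied board pose in the camera
    R_g2b.append(g2b[:3, :3]); t_g2b.append(g2b[:3, 3:4])
    R_t2c.append(t2c[:3, :3]); t_t2c.append(t2c[:3, 3:4])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X[:3, :3], atol=1e-6))   # recovers the rotation
```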
On the other hand, $T^{target}_{cam}$, the pose of the target in the camera coordinate system, must be obtained through recognition.

Here the inventor notes that the Marker recognition process computes the transformation $T^{marker}_{cam}$ from the Marker to the camera coordinate system, and that Marker recognition accuracy plays a decisive role in the grasping accuracy of the composite robot. Markers used in the prior art include checkerboards, ArUco, AprilTag, and the like; the principle is to treat the Marker as a plane, establish a spatial (world) coordinate system on that plane, extract the corner points on the Marker to obtain their world and pixel coordinates, and, with the camera intrinsics known, compute the transformation $T^{marker}_{cam}$ from the planar world coordinate system to the camera coordinate system. A checkerboard is centrally symmetric, so the direction of a coordinate system established on a checkerboard Marker is not unique, and the checkerboard therefore cannot be used for composite-robot recognition. ArUco and AprilTag are both two-dimensional codes with specific patterns whose localization relies on the four corner points of the Marker border, and the small number of corner points leads to a large localization error. Furthermore, Alberto et al. demonstrated in 2016 that the positioning error of a single AprilTag Marker increases with the Marker's distance from the camera and with the angle between the Marker and the camera plane, so the aforementioned ArUco scheme cannot move the tag position away from the grasping position.
Hence the drawbacks of the prior art all stem from the low pose-calculation accuracy of existing 2D Marker schemes: the existing ArUco and AprilTag schemes cannot meet the composite robot's requirement for accurate grasping.
The present invention therefore designs a 2D Marker and a corresponding recognition method that effectively increase the recognition rate and improve pose-calculation accuracy. The marking features of the 2D Marker comprise: a rectangular border feature with distinct color contrast, and several circular features of different radii clustered near the same corner inside the border. As shown in FIGS. 4 to 5, in this example the border feature is a rectangular black border, and the circular features are two circles of different radii placed near the lower-right corner of the border; the two circles do not overlap and are clearly separated.
It is worth mentioning that the design concept of the 2D Marker in the present application stems from the observation that AprilTag and ArUco both determine a unique Marker by a specific black-and-white code (similar to the blocks of a QR code) representing a unique ID. Extracting and decoding the coded blocks is time-consuming, yet in composite-robot applications the ID information is not needed; only the Marker corner coordinates are needed to compute the pose, so existing Markers carry redundant information.
In the present scheme, the black border and the circular references allow the Marker to be located quickly without a decoding step. Experimental data show that on the same device and at the same pixel size, recognition of AprilTag and ArUco takes 200-300 milliseconds, while Marker detection in this scheme takes 40-50 milliseconds.
Further, after the 2D Marker is fixed beside the target object and the hand-eye-calibrated camera has acquired a 2D Marker image, the recognition method proceeds with the following steps:
step S210, image denoising:
For the captured Marker image, noise from ambient light sources, debris, and the like would introduce errors if image processing were applied directly to the raw image, so denoising is performed first. Since the noise type matches the characteristics of Gaussian noise, this technical scheme applies Gaussian smoothing to the image; the two-dimensional Gaussian smoothing function is:

$$G(x, y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\!\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2} - \frac{(y-\mu_y)^2}{2\sigma_y^2}\right)$$

where $\mu_x$ and $\mu_y$ are the means of the Gaussian kernel, and $\sigma_x^2$ and $\sigma_y^2$ are its variances.
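A minimal sketch of this denoising step, assuming a 5x5 Gaussian kernel ('marker.png' is a hypothetical capture path; a synthetic noisy image is substituted if it is absent):

```python
import cv2
import numpy as np

img = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
if img is None:                        # fall back to a synthetic noisy image
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal(128, 20, (240, 320)), 0, 255).astype(np.uint8)

# 5x5 kernel; sigma 0 lets OpenCV derive sigma from the kernel size.
denoised = cv2.GaussianBlur(img, (5, 5), 0)
```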
Step S220 threshold segmentation:
and performing adaptive threshold segmentation on the image by using a maximum inter-class variance method. First, the maximum between-class variance is obtained
Figure 173278DEST_PATH_IMAGE078
Calculating a threshold value
Figure 712844DEST_PATH_IMAGE080
:
Figure 235092DEST_PATH_IMAGE081
Wherein:
Figure 543714DEST_PATH_IMAGE082
calculating a threshold value
Figure 493215DEST_PATH_IMAGE080
Then, the image is segmented:
Figure 988918DEST_PATH_IMAGE083
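For illustration, OpenCV's Otsu mode performs exactly this between-class variance search; a minimal sketch (the input is a hypothetical capture, with a synthetic fallback so the snippet runs standalone):

```python
import cv2
import numpy as np

img = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)   # hypothetical path
if img is None:                                        # synthetic fallback
    img = np.hstack([np.full((100, 50), 60, np.uint8),
                     np.full((100, 50), 190, np.uint8)])

# THRESH_OTSU makes OpenCV search for the threshold maximizing the
# between-class variance; the chosen T is returned with the binary image.
T, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", T)
```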
step S230 contour finding:
and for the binary image obtained by threshold segmentation, firstly screening, taking out some noise connected domains with small areas, and then carrying out contour search on the remaining connected domains. For example: for a pixel point with a pixel value of 1, if a pixel with a pixel value of 0 is found in the 4-neighborhood or 8-neighborhood of the pixel, the pixel is defined as a contour point, and the image is traversed to find all contours.
Step S240 corner detection:
and the inner and outer corner points of the black frame where the Marker is located are key points for calculating the pose, and the inner and outer corner points of the black frame are searched on the basis of contour searching.
First polygon fitting is performed on the contour using Douglas-pock (Douglas-Peucker) algorithm:
connecting the first and last points on the contour curve to form a straight line, calculating the distance from all the points on the contour to the straight line, and finding out the maximum distance value
Figure 314857DEST_PATH_IMAGE085
Definition of a limit difference
Figure DEST_PATH_IMAGE087
If, if
Figure DEST_PATH_IMAGE089
The middle contour points between two points are completely omitted, if
Figure DEST_PATH_IMAGE091
Then remain
Figure DEST_PATH_IMAGE092
And dividing the outline into two sub-outlines by taking the changed point as a boundary, repeating the method for the sub-outlines, and finally, keeping the coordinate point as the vertex of the fitted polygon.
And then screening the polygons obtained by fitting, wherein because the inner and outer outlines of the black frames are squares, firstly, a quadrangle is screened from the polygons, and secondly, each angle of the squares is 90 degrees, so that the cosine value of each angle of the quadrangle is calculated
Figure DEST_PATH_IMAGE094
Considering the influence of the photographing angle, the largest of the four corners of the quadrangle
Figure DEST_PATH_IMAGE096
Then, the polygon is considered as a rectangle, and each vertex of the polygon is a corner point.
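For illustration, the fitting and right-angle screening can be sketched with OpenCV's Douglas-Peucker implementation; the tolerance ratio and cosine bound below are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    """|cos| of the angle at vertex p1 (0 for a perfect right angle)."""
    d1 = (p0 - p1).astype(np.float64)
    d2 = (p2 - p1).astype(np.float64)
    return abs(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))

def rect_corners(contour, eps_ratio=0.02, max_cos=0.3):
    """Douglas-Peucker fit + right-angle screen; thresholds illustrative."""
    eps = eps_ratio * cv2.arcLength(contour, True)    # tolerance D
    approx = cv2.approxPolyDP(contour, eps, True)
    if len(approx) != 4 or not cv2.isContourConvex(approx):
        return None
    pts = approx.reshape(4, 2)
    if max(angle_cos(pts[i - 1], pts[i], pts[(i + 1) % 4])
           for i in range(4)) < max_cos:
        return pts                                    # four corner points
    return None

# Demo on a synthetic square contour:
img = np.zeros((200, 200), np.uint8)
cv2.rectangle(img, (40, 40), (160, 160), 255, -1)
cnts, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
print(rect_corners(cnts[0]))
```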
Step S250 circular reference detection:
Besides the black border, the two circular references at the lower-right corner of the Marker are also important. For a detected polygon, its convexity (Convexity) is judged first. Convexity is defined as the degree to which the polygon approaches a convex polygon; a convex polygon has convexity 1. The convexity formula is:

$$\text{convexity} = \frac{S}{H}$$

where $S$ is the area of the region enclosed by the contour and $H$ is the area of the minimal convex polygon enclosing all vertices of the corresponding contour polygon. When convexity = 1, the contour is a convex polygon.

The inertia ratio (InertiaRatio) represents the degree to which an ellipse deviates from an ideal circle, in the range $[0, 1]$: the closer the inertia ratio is to 0, the flatter the figure; the closer to 1, the rounder. The inertia ratio $i$ is calculated as:

$$i = \frac{\sqrt{a^2 - c^2}}{a}$$

where $c$ is the semi-focal distance of the ellipse and $a$ its semi-major axis.

Circularity (Circularity) represents the degree to which a figure approaches a full circle, in the range $(0, 1]$: the closer the value is to 0, the closer the figure is to an infinitely elongated rectangle; the closer to 1, the closer to a circle. Circularity is calculated as:

$$\text{circularity} = \frac{4\pi S}{C^2}$$

where $S$ is the area of the figure and $C$ its perimeter.

By judging the three parameters convexity, inertia ratio, and circularity, the positions of the circular references can be accurately located and the center coordinates and radii of the references obtained.
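A minimal sketch of the three-criterion screening, computing convexity, inertia ratio (via a fitted ellipse, using $b/a = \sqrt{a^2 - c^2}/a$), and circularity per contour; all thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

def circular_ref(contour,
                 min_convexity=0.95, min_inertia=0.8, min_circularity=0.85):
    """Screen one contour with the three criteria (thresholds illustrative);
    returns (center, radius) if it qualifies as a circular reference."""
    if len(contour) < 5:                      # fitEllipse needs >= 5 points
        return None
    S = cv2.contourArea(contour)
    if S <= 0:
        return None
    H = cv2.contourArea(cv2.convexHull(contour))
    convexity = S / H                         # S / H as in the text
    circularity = 4 * np.pi * S / cv2.arcLength(contour, True) ** 2
    center, axes, _ = cv2.fitEllipse(contour)
    inertia = min(axes) / max(axes)           # b / a
    if (convexity >= min_convexity and inertia >= min_inertia
            and circularity >= min_circularity):
        return center, max(axes) / 2
    return None

# Demo on a synthetic filled circle:
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 40, 255, -1)
cnts, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
print(circular_ref(cnts[0]))
```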
Step S260 point ordering:

After the coordinates of the inner and outer corner points of the black border and of the centers of the two circular references are obtained, the points must be ordered. As shown in FIG. 5, the sequence is: first the center of the circular reference with the larger radius, then the center of the circular reference with the smaller radius, and finally the corner points of the inner and outer borders, with the corner nearest the two circular references as the starting point and the rest arranged clockwise. This yields an ordered sequence of ten point coordinates on the image, which fixes the direction of the corner-point coordinate system. A sketch of one plausible ordering is given after this paragraph.
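One plausible implementation of this ordering (the exact tie-breaking used by the patent is not specified; the helper below orders each quad clockwise starting from the corner nearest the circle centers):

```python
import numpy as np

def order_quad(pts, ref):
    """Clockwise order of one quad's 4 corners, starting nearest `ref`."""
    pts = np.asarray(pts, dtype=np.float64)
    c = pts.mean(axis=0)
    # Ascending atan2 runs clockwise on screen since image y points down.
    pts = pts[np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))]
    start = int(np.argmin(np.linalg.norm(pts - np.asarray(ref), axis=1)))
    return np.roll(pts, -start, axis=0)

def marker_sequence(big_center, small_center, outer_quad, inner_quad):
    """Ten ordered points: big circle, small circle, outer + inner corners."""
    ref = (np.asarray(big_center) + np.asarray(small_center)) / 2.0
    return [np.asarray(big_center), np.asarray(small_center),
            *order_quad(outer_quad, ref), *order_quad(inner_quad, ref)]

seq = marker_sequence((150, 150), (130, 150),
                      [(0, 0), (200, 0), (200, 200), (0, 200)],
                      [(20, 20), (180, 20), (180, 180), (20, 180)])
print(len(seq))   # 10 ordered key points
```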
Step S310 sub-pixelation:
Furthermore, corner detection can only obtain pixel-level coordinates: the corner coordinates obtained are integers, and for the high-precision positioning required by the composite robot the error introduced by pixel-level corners is large, so the corner coordinates must be refined to sub-pixel accuracy to obtain higher-precision corner positions.

Suppose $q$ is the true sub-pixel corner point whose coordinates are unknown, and let $p_i$ be points in the neighborhood of $q$ whose coordinates are known, with $G_i$ the gray-level gradient at $p_i$. If $p_i$ lies on an edge, the gradient direction of the pixel at $p_i$ is perpendicular to the edge direction, while the vector $(q - p_i)$ lies along the edge direction, so the dot product of $(q - p_i)$ and the gradient vector is 0:

$$G_i^{\top}\,(q - p_i) = 0$$

Expanding the equation and rearranging terms:

$$G_i G_i^{\top}\, q = G_i G_i^{\top}\, p_i$$

Many points $p_i$ can be collected near the initial corner, each given a weight $w_i$ according to its distance from the center; an equation system is constructed from the formula above and solved for $q$ by least squares:

$$q = \Bigl(\sum_i w_i\, G_i G_i^{\top}\Bigr)^{-1} \sum_i w_i\, G_i G_i^{\top}\, p_i$$
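For illustration, OpenCV's cv2.cornerSubPix implements this exact weighted least-squares refinement; a minimal self-contained sketch on a synthetic corner:

```python
import cv2
import numpy as np

# Synthetic test image: a dark square on a light background, so the
# pixel-level corner at (40, 40) can be refined to sub-pixel accuracy.
img = np.full((120, 120), 200, np.uint8)
img[40:90, 40:90] = 30
img = cv2.GaussianBlur(img, (5, 5), 0)            # soften edges for gradients

corners = np.array([[[40.0, 40.0]]], np.float32)  # initial pixel-level corner

# cornerSubPix iterates q = (sum w G G^T)^-1 (sum w G G^T p) in a window
# around each initial corner, matching the derivation above.
term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term)
print(corners.ravel())                            # refined sub-pixel location
```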
Through sub-pixel refinement, the pose-calculation accuracy of the Marker is effectively improved, allowing the operating accuracy of the composite robot to reach the millimeter level.
Step S320, pose solving:
assuming that the physical size of the Marker is known, regarding the Marker as a space plane, establishing a space coordinate system by using the Marker as an XOY plane
Figure DEST_PATH_IMAGE117
Angular and central points in
Figure 9295DEST_PATH_IMAGE117
The three-dimensional spatial coordinates on the coordinate system are known. The sub-pixel coordinates of the 10 key points in the image are obtained through recognition, the pose of the Marker in a camera coordinate system is calculated through the (2D-3D) coordinates of the corresponding Point pairs, and the pose of the Marker plane can be calculated through a PnP (Passive-n-Point) algorithm under the condition that camera parameters are known.
The PnP algorithm has various solving methods, and a Direct Linear Transform (DLT) method, an EPnP method, and a minimum reprojection error method are common, and in this scheme, the minimum reprojection error method is taken as an example to solve the Marker pose.
For example, according to the pinhole imaging principle, the projection relationship of the world coordinate system to the pixel coordinate system
Figure DEST_PATH_IMAGE119
Wherein
Figure DEST_PATH_IMAGE121
Is the coordinate of the pixel coordinate system, and is,
Figure DEST_PATH_IMAGE123
is a reference matrix in the camera, and the reference matrix is a reference matrix in the camera,
Figure DEST_PATH_IMAGE125
represents the external parameter from the Marker to the camera coordinate system, namely the position and the attitude,
Figure 286824DEST_PATH_IMAGE021
a homogeneous matrix formed for the outer parameters,
Figure DEST_PATH_IMAGE127
which represents the coordinates in the world coordinate system,
Figure DEST_PATH_IMAGE129
representing the depth of the feature points in the camera coordinate system.
Solving for optimal external parameters by minimizing reprojection errors
Figure 362227DEST_PATH_IMAGE021
Figure DEST_PATH_IMAGE130
Thereby obtaining the pose of the Marker plane.
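A minimal sketch of this step using OpenCV's solvePnP with the iterative (reprojection-error-minimizing) solver; the Marker layout, intrinsics, and ground-truth pose below are illustrative placeholders used to synthesize the image points:

```python
import cv2
import numpy as np

# Ten key points on the Marker plane (z = 0) in the Ow frame, in metres;
# the layout is illustrative, matching a 0.1 m square border.
obj_pts = np.array([
    [0.08, 0.08, 0], [0.065, 0.08, 0],                        # circle centers
    [0.0, 0.0, 0], [0.1, 0.0, 0], [0.1, 0.1, 0], [0.0, 0.1, 0],      # outer
    [0.01, 0.01, 0], [0.09, 0.01, 0], [0.09, 0.09, 0], [0.01, 0.09, 0]])
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
dist = np.zeros(5)

# Simulate a ground-truth pose and project it, standing in for the
# recognized sub-pixel image coordinates.
rvec_gt = np.array([[0.1], [-0.2], [0.05]])
tvec_gt = np.array([[0.02], [-0.01], [0.5]])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_gt, tvec_gt, K, dist)

# SOLVEPNP_ITERATIVE minimizes the reprojection error (Levenberg-Marquardt).
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)            # Marker pose in camera frame: [R | t]
print(ok, np.allclose(tvec, tvec_gt, atol=1e-6))
```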
Step S410, teaching a conversion relation between the target object and the 2D Marker:
the pose of the Marker in the mechanical arm base coordinate system can only be obtained through identification
Figure DEST_PATH_IMAGE132
Target materials beside a Marker are usually required to be grabbed, and finally the pose of the target object under the base Marker system is required to be obtained
Figure DEST_PATH_IMAGE133
. So that a conversion relation between the target object and the Marker is also required
Figure DEST_PATH_IMAGE135
The conversion relationship is obtained by teaching:
after identifying the Mark, the conversion relation of the Mark to the base Mark can be obtained
Figure 962448DEST_PATH_IMAGE132
:
Figure DEST_PATH_IMAGE136
Then moving the tail end pose of the mechanical arm to the grabbing pose of the target object, wherein the tail end position of the mechanical arm is the conversion relation between the target point target and the base coordinate of the mechanical arm
Figure DEST_PATH_IMAGE137
When the relation between the target and the Marker is fixed and unchanged, the conversion relation from the target point to the Marker can be calculated
Figure 857723DEST_PATH_IMAGE135
Figure DEST_PATH_IMAGE139
Therefore, the pose of the target object under the base coordinate system can be obtained.
In actual use of the technique, as shown in FIG. 6, when the composite robot performs a grasping task it is first dispatched to the designated station, and the mechanical arm is then moved to the recognition position, ensuring that the Marker is within the camera's field of view. From the recognized Marker, $T^{marker}_{base}$ is obtained; combined with the taught $T^{target}_{marker}$, the pose of the grasping point in the arm base coordinate system is:

$$T^{target}_{base} = T^{marker}_{base}\; T^{target}_{marker}$$

so that the mechanical arm of the composite robot can accurately grasp the target object.
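A minimal sketch of the taught offset and its run-time use, assuming all poses are expressed as 4x4 homogeneous matrices as above:

```python
import numpy as np

def teach_offset(T_base_marker, T_base_target):
    """Teaching step: T_marker_target = inv(T_base_marker) @ T_base_target,
    recorded once while the arm end is held at the grasp pose."""
    return np.linalg.inv(T_base_marker) @ T_base_target

def grasp_pose(T_base_marker_now, T_marker_target):
    """Run time: compose the freshly recognized Marker pose with the taught
    offset to obtain the grasp pose in the arm base frame."""
    return T_base_marker_now @ T_marker_target
```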
In this way, the transformation between the Marker and the target to be grasped is computed by teaching, so the offset relation is guaranteed without manual measurement or special tooling; Marker installation is not constrained by materials, machines, or other environmental factors, which improves the deployment flexibility of the composite robot and effectively shortens the deployment cycle and reduces deployment difficulty.
On the other hand, the AMR gradually accumulates a certain positioning error while moving autonomously; the above scheme also effectively compensates the error introduced by AMR positioning, greatly improving the accuracy with which the composite robot picks and places goods.
Compared with 3D Marker recognition schemes, a composite robot using the 2D Marker recognition scheme has the advantages of a fast cycle, high accuracy, and low cost. It is worth repeating that AprilTag and ArUco determine a unique Marker by a specific black-and-white code (similar to the blocks of a QR code) representing a unique ID; extracting and decoding the coded blocks is time-consuming, and in composite-robot applications the ID information is unnecessary since only the Marker corner coordinates are needed to compute the pose, so that information is redundant. The scheme of the invention uses the black border and circular references, so the Marker can be located quickly without a decoding step; experimental data show that on the same device and at the same pixel size, recognition of AprilTag and ArUco takes 200-300 milliseconds, while Marker detection in this scheme takes 40-50 milliseconds.
(II)
As shown in FIG. 7, corresponding to the first embodiment, the second aspect of the present invention further provides a 2D Marker-based visual positioning system for identifying a 2D Marker as described above, wherein the visual positioning system comprises:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method of the first embodiment, for the control unit, camera, mechanical arm, processing unit, and information output unit to invoke and execute in due course;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used for coordinating:
after the camera is calibrated by hands and eyes, the camera is driven by a mechanical arm to acquire a 2D Marker image;
the processing unit is used to denoise the 2D Marker image, perform threshold segmentation to obtain a binary image, then search for all contours in the image and perform corner detection and circular-reference detection on them, identifying the coordinates of the inner and outer corners of the border feature and the radius and center coordinates of each circular feature, and ordering the points with the corner nearest the circle centers as the starting point; each point is then refined to sub-pixel accuracy and the planar pose of the 2D Marker is calculated, comprising: establishing a spatial coordinate system Ow with the 2D Marker as the XOY plane according to the known physical size of the 2D Marker, matching each previously obtained corner point and circle-center point with its three-dimensional key point in the Ow coordinate system, and, with the camera intrinsics known, calculating the planar pose of the 2D Marker by a PnP algorithm, so as to teach the transformation between the target object and the 2D Marker and obtain the pose of the target object in the arm base coordinate system;
and the information output unit is used for outputting the pose of the target object under the base coordinate system of the mechanical arm.
In a preferred embodiment, the processing unit performs noise reduction on the acquired 2D Marker image by using a gaussian smoothing algorithm, and performs adaptive threshold segmentation on the acquired 2D Marker image by using a maximum inter-class variance method to obtain a binary image.
In a preferred embodiment, the sub-pixelation processing step comprises:

Let $q$ be the sub-pixel point to be found, and let $p_i$ be points in the neighborhood of $q$ whose coordinates are known, with $G_i$ the gray-level gradient at $p_i$. If $p_i$ lies on a pixel edge, the gradient direction of the pixel at $p_i$ is perpendicular to the edge direction; since the vector $(q - p_i)$ lies along the edge direction, the dot product of $(q - p_i)$ and the gradient vector is 0:

$$G_i^{\top}\,(q - p_i) = 0$$

Expanding the equation and rearranging terms:

$$G_i G_i^{\top}\, q = G_i G_i^{\top}\, p_i$$

Several points $p_i$ are collected near each corner and each is given a weight $w_i$ according to its distance from the center; an equation system is constructed from the formula above and solved for $q$ by least squares:

$$q = \Bigl(\sum_i w_i\, G_i G_i^{\top}\Bigr)^{-1} \sum_i w_i\, G_i G_i^{\top}\, p_i$$
In a preferred embodiment, the corner detection step comprises: performing polygon fitting on the contour by connecting the first and last points of the contour curve with a straight line, computing the distance from every contour point to this line, and finding the maximum distance d_max; defining a tolerance D: if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the point corresponding to d_max is kept and the contour is split into two sub-contours at that coordinate point, the method being repeated on the sub-contours, the coordinate points finally kept being the vertices of the fitted polygon; and screening quadrilaterals from the polygons, computing each angle's deviation from a right angle, and, when the deviation meets a preset condition, regarding the quadrilateral as the rectangular border with each vertex of the polygon as a corner point.
In a preferred embodiment, the circular reference detection step comprises: performing convexity, inertia-ratio, and circularity judgments on the polygon to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
It can be seen that the 2D Marker-based visual positioning system obtains the transformation between the Marker and the target to be grasped by teaching, so the offset relation is guaranteed without manual measurement or special tooling, and Marker installation is not constrained by materials, machines, or other environmental factors. The system is therefore particularly suitable for deployment on a composite robot, and its deployment flexibility effectively shortens the deployment cycle and reduces deployment difficulty.
On the other hand, the AMR gradually accumulates a certain positioning error while moving autonomously; this system also effectively compensates the error introduced by AMR positioning, greatly improving the accuracy with which the composite robot picks and places goods.
(III)
With respect to the first and second embodiments described above, a third aspect of the present invention also provides a composite robot comprising: a visual grasping unit and an autonomous mobile robot, wherein the visual grasping unit is the 2D Marker-based visual positioning system of the second embodiment.
In summary, through the specially designed 2D Marker marking features and the corresponding recognition method, the 2D Marker-based visual positioning method and system and the composite robot provided by the invention markedly improve the pose-calculation accuracy of the 2D Marker, and in turn the accuracy with which the composite robot grasps/places the target object. In addition, by teaching, the transformation between the 2D Marker and the object to be grasped is computed, so the 2D Marker can be installed at any position and angle near the object to be grasped, solving the problem that the installation position of a traditional 2D Marker is strongly limited by the scene.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. They are not exhaustive and do not limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents, and any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in its protection scope.
It will be appreciated by those skilled in the art that, in addition to implementing the system, apparatus, and modules thereof provided by the present invention purely as computer-readable program code, the method steps can be logically programmed so that the system, apparatus, and modules are realized in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system, apparatus, and modules provided by the present invention may be regarded as a hardware component, and the modules included therein for implementing various programs may also be regarded as structures within the hardware component; modules for performing various functions may even be regarded both as software programs implementing the method and as structures within the hardware component.
In addition, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware, the program being stored in a storage medium and including several instructions that cause a single-chip microcomputer, a chip, or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In addition, the various implementations of the embodiments of the present invention may be combined in any manner; as long as such a combination does not depart from the spirit of the embodiments, it should likewise be regarded as disclosed herein.

Claims (10)

1. A visual positioning method based on a 2D Marker is characterized by comprising the following steps:
step S100, fixing a 2D Marker beside the position of a target object, wherein the marking features of the 2D Marker comprise: a rectangular frame feature with obvious color contrast, and circular features of different radii gathered inside the frame feature and close to the same corner point;
step S200, acquiring a 2D Marker image with a camera after hand-eye calibration, performing threshold segmentation to obtain a binary image, then performing contour searching to obtain all contours in the image, performing corner detection and circular reference detection respectively to identify the inner and outer corner coordinates of the frame feature and the radius and center-point coordinates of each circular feature, and sorting the point locations by taking the corner point nearest to the circle centers as a starting point;
step S300, after sub-pixelating each point, calculating the 2D Marker plane pose, the step comprising: according to the known physical size of the 2D Marker, establishing a spatial coordinate system Ow with the 2D Marker as the XOY plane, establishing on the coordinate system Ow key points in three-dimensional space matched with the corner points and circle-center points obtained in step S200, and calculating the plane pose of the 2D Marker through a PnP algorithm with the camera intrinsic parameters known;
step S400, teaching the conversion relation between the target object and the 2D Marker according to the plane pose acquired in step S300, so as to acquire the pose of the target object in the base coordinate system.
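For orientation only (this sketch is the editor's illustration, not the claimed implementation), the pose-solving core of step S300 maps onto OpenCV's PnP solver; the square-frame geometry, side length and variable names below are assumptions:

```python
import cv2
import numpy as np

# Assumed marker geometry: a square frame of side L metres in the marker
# plane (Z = 0), corners ordered to match the detected image points.
L = 0.10
object_points = np.array([
    [0, 0, 0], [L, 0, 0], [L, L, 0], [0, L, 0],
], dtype=np.float64)  # 3D key points in the marker frame Ow

def marker_pose(image_points: np.ndarray, K: np.ndarray, dist: np.ndarray):
    """Solve the marker plane pose from matched 2D-3D points via PnP."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)           # rotation matrix, camera <- marker
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                             # marker pose in the camera frame
```

In practice the claimed method matches many more key points (inner and outer frame corners plus circle centers), which over-determines the PnP problem and improves the pose accuracy.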
2. The 2D Marker-based visual positioning method according to claim 1, wherein the step S200 further comprises an image denoising step: denoising the acquired 2D Marker image with a Gaussian smoothing algorithm.
3. The 2D Marker-based visual positioning method according to claim 1, wherein in step S200, the acquired 2D Marker image is subjected to adaptive threshold segmentation by using a maximum inter-class variance method to obtain a binary image.
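A minimal sketch of the maximum inter-class variance (Otsu) segmentation named above, using OpenCV; the file path and blur kernel are illustrative assumptions:

```python
import cv2

gray = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative
gray = cv2.GaussianBlur(gray, (5, 5), 0)               # optional denoising (claim 2)
# Threshold value 0 is ignored: Otsu picks the threshold that maximizes
# the inter-class variance of the gray-level histogram.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```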
4. The 2D Marker-based visual positioning method according to claim 1, wherein the step of obtaining all the contours in the image in step S200 to perform corner detection comprises: performing polygon fitting on the contour: connect the first and last points of the contour curve with a straight line, compute the distance from every point on the contour to that line, and find the maximum distance value d_max; define a tolerance D; if d_max < D, all intermediate contour points between the two end points are discarded; if d_max > D, the coordinate point corresponding to d_max is retained and the contour is split into two sub-contours at that point; the method is repeated on the sub-contours, and the coordinate points finally retained are the vertices of the fitted polygon; quadrilaterals are then screened out of the polygons, the deviation of each angle from a right angle is calculated, and when the deviation meets a preset condition the quadrilateral is considered to be the rectangular frame, each vertex of the polygon being a corner point.
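The recursive line-splitting recited in claim 4 is the Ramer-Douglas-Peucker fit; a hedged sketch using OpenCV's implementation together with the right-angle screen, where the tolerance ratio and angle threshold are assumptions:

```python
import cv2
import numpy as np

def find_rect_corners(contour, eps_ratio=0.02, max_angle_dev_deg=10.0):
    """Fit a polygon (Douglas-Peucker) and accept it as the rectangular
    frame when it has 4 vertices with near-90-degree interior angles."""
    peri = cv2.arcLength(contour, True)
    # eps_ratio * peri plays the role of the tolerance D in claim 4
    poly = cv2.approxPolyDP(contour, eps_ratio * peri, True)
    if len(poly) != 4 or not cv2.isContourConvex(poly):
        return None
    pts = poly.reshape(4, 2).astype(np.float64)
    for i in range(4):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % 4]
        v1, v2 = a - b, c - b
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if abs(angle - 90.0) > max_angle_dev_deg:   # deviation from right angle
            return None
    return pts  # the four corner points
```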
5. The 2D Marker-based visual positioning method according to claim 4, wherein the circular reference detection step comprises: performing convexity judgment, probabilistic judgment and roundness judgment calculation on the polygon to locate the corresponding contour in the image, thereby obtaining the radius and center-point coordinates of each circular feature.
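A hedged sketch of the convexity and roundness screens of claim 5; the acceptance thresholds are assumptions, and roundness is taken as the standard 4πA/P² measure:

```python
import cv2
import numpy as np

def circle_from_contour(contour, min_convexity=0.9, min_roundness=0.8):
    """Accept a contour as a circular feature via convexity and roundness
    screens, then return its center and radius."""
    area = cv2.contourArea(contour)
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    perim = cv2.arcLength(contour, True)
    if hull_area == 0 or perim == 0:
        return None
    convexity = area / hull_area                  # ~1 for convex shapes
    roundness = 4.0 * np.pi * area / perim ** 2   # ~1 for perfect circles
    if convexity < min_convexity or roundness < min_roundness:
        return None
    (cx, cy), r = cv2.minEnclosingCircle(contour)
    return (cx, cy), r
```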
6. The 2D Marker-based visual positioning method according to claim 1, wherein the sub-pixelation processing step comprises:
let q be the sub-pixel corner point to be solved, and let p_i be points in the neighborhood of q with known coordinates, whose gray-scale gradients are G_i; when p_i lies on a pixel edge, the gradient direction at p_i is perpendicular to the edge direction, while the vector (q − p_i) lies along the edge direction, so the dot product of the vector (q − p_i) and the gradient vector at p_i is 0:

G_i^T (q − p_i) = 0

expanding the equation and moving terms gives:

G_i^T q = G_i^T p_i

a plurality of pixels p_i are collected near each corner point, each p_i is weighted by w_i according to its distance from the center, a system of equations is constructed according to the above formula, and q is solved by least squares:

q = (Σ_i w_i G_i G_i^T)^{-1} (Σ_i w_i G_i G_i^T p_i)
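The weighted least-squares system above is the classic gradient-orthogonality refinement (the same formulation underlies OpenCV's cornerSubPix); a minimal numpy sketch, with the window size and Gaussian distance weighting as assumptions:

```python
import numpy as np

def refine_corner(gx, gy, x0, y0, win=5):
    """Refine an integer corner (x0, y0) to sub-pixel q by solving
    (sum_i w_i G_i G_i^T) q = sum_i w_i G_i G_i^T p_i.
    gx, gy are gradient images, e.g. from a Sobel filter."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            px, py = x0 + dx, y0 + dy
            g = np.array([gx[py, px], gy[py, px]])  # gradient G_i at p_i
            # Gaussian weight w_i by distance from the window center
            w = np.exp(-(dx * dx + dy * dy) / (2.0 * win * win))
            GGt = w * np.outer(g, g)
            A += GGt
            b += GGt @ np.array([px, py], dtype=np.float64)
    return np.linalg.solve(A, b)  # sub-pixel corner q = (x, y)
```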
7. A 2D Marker-based visual positioning system for identifying the 2D Marker as claimed in claim 1, comprising:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method according to any one of claims 1 to 6, for timely retrieval and execution by a control unit, a camera, a mechanical arm, a processing unit and an information output unit;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used for coordinating the following:
after hand-eye calibration of the camera, the mechanical arm drives the camera to acquire a 2D Marker image;
the processing unit is used for performing image denoising on the 2D Marker image, performing threshold segmentation to obtain a binary image, then performing contour searching to obtain all contours in the image, performing corner detection and circular reference detection respectively to identify the inner and outer corner coordinates of the frame feature and the radius and center-point coordinates of each circular feature, and sorting the point locations by taking the corner point nearest to the circle centers as a starting point; then, after sub-pixelating each point, calculating the 2D Marker plane pose, the step comprising: according to the known physical size of the 2D Marker, establishing a spatial coordinate system Ow with the 2D Marker as the XOY plane, establishing on the coordinate system Ow key points in three-dimensional space matched with the previously obtained corner points and circle-center points, and calculating the plane pose of the 2D Marker through a PnP algorithm with the camera intrinsic parameters known, so as to teach the conversion relation between the target object and the 2D Marker and acquire the pose of the target object in the mechanical arm base coordinate system;
and the information output unit is used for outputting the pose of the target object in the base coordinate system of the mechanical arm.
8. The 2D Marker-based visual positioning system of claim 7, wherein the step of obtaining all the contours in the image to perform corner detection comprises: performing polygon fitting on the contour: connect the first and last points of the contour curve with a straight line, compute the distance from every point on the contour to that line, and find the maximum distance value d_max; define a tolerance D; if d_max < D, all intermediate contour points between the two end points are discarded; if d_max > D, the coordinate point corresponding to d_max is retained and the contour is split into two sub-contours at that point; the method is repeated on the sub-contours, and the coordinate points finally retained are the vertices of the fitted polygon; quadrilaterals are then screened out of the polygons, the deviation of each angle from a right angle is calculated, and when the deviation meets a preset condition the quadrilateral is considered to be the rectangular frame, each vertex of the polygon being a corner point.
9. The 2D Marker-based visual positioning system of claim 8, wherein the circular reference detection step comprises: performing convexity judgment, probabilistic judgment and roundness judgment calculation on the polygon to locate the corresponding contour in the image, thereby acquiring the radius and center-point coordinates of each circular feature.
10. A composite robot, comprising: a visual grabbing unit and an autonomous mobile robot, characterized in that the visual grabbing unit is the 2D Marker-based visual positioning system according to any one of claims 7 to 9.
CN202211463733.XA 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot Active CN115609591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211463733.XA CN115609591B (en) 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot


Publications (2)

Publication Number Publication Date
CN115609591A true CN115609591A (en) 2023-01-17
CN115609591B CN115609591B (en) 2023-04-28

Family

ID=84877747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211463733.XA Active CN115609591B (en) 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot

Country Status (1)

Country Link
CN (1) CN115609591B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116000942A (en) * 2023-03-22 2023-04-25 深圳市大族机器人有限公司 Semiconductor manufacturing system based on multi-axis cooperative robot
CN116060269A (en) * 2022-12-08 2023-05-05 中晟华越(郑州)智能科技有限公司 Spraying method for loop-shaped product
CN116245877A (en) * 2023-05-08 2023-06-09 济南达宝文汽车设备工程有限公司 Material frame detection method and system based on machine vision
CN116423526A (en) * 2023-06-12 2023-07-14 上海仙工智能科技有限公司 Automatic calibration method and system for mechanical arm tool coordinates and storage medium
CN116766183A (en) * 2023-06-15 2023-09-19 山东中清智能科技股份有限公司 Mechanical arm control method and device based on visual image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108562274A (en) * 2018-04-20 2018-09-21 南京邮电大学 A kind of noncooperative target pose measuring method based on marker
US20180345483A1 (en) * 2015-12-03 2018-12-06 Abb Schweiz Ag Method For Teaching An Industrial Robot To Pick Parts
CN109363771A (en) * 2018-12-06 2019-02-22 安徽埃克索医疗机器人有限公司 The fracture of neck of femur Multiple tunnel of 2D planning information plants nail positioning system in a kind of fusion
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts
CN113084808A (en) * 2021-04-02 2021-07-09 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm
WO2022034032A1 (en) * 2020-08-11 2022-02-17 Ocado Innovation Limited A selector for robot-retrievable items

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180345483A1 (en) * 2015-12-03 2018-12-06 Abb Schweiz Ag Method For Teaching An Industrial Robot To Pick Parts
CN108562274A (en) * 2018-04-20 2018-09-21 南京邮电大学 A kind of noncooperative target pose measuring method based on marker
CN109363771A (en) * 2018-12-06 2019-02-22 安徽埃克索医疗机器人有限公司 The fracture of neck of femur Multiple tunnel of 2D planning information plants nail positioning system in a kind of fusion
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts
WO2022034032A1 (en) * 2020-08-11 2022-02-17 Ocado Innovation Limited A selector for robot-retrievable items
CN113084808A (en) * 2021-04-02 2021-07-09 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116060269A (en) * 2022-12-08 2023-05-05 中晟华越(郑州)智能科技有限公司 Spraying method for loop-shaped product
CN116060269B (en) * 2022-12-08 2024-06-14 中晟华越(郑州)智能科技有限公司 Spraying method for loop-shaped product
CN116000942A (en) * 2023-03-22 2023-04-25 深圳市大族机器人有限公司 Semiconductor manufacturing system based on multi-axis cooperative robot
CN116245877A (en) * 2023-05-08 2023-06-09 济南达宝文汽车设备工程有限公司 Material frame detection method and system based on machine vision
CN116245877B (en) * 2023-05-08 2023-11-03 济南达宝文汽车设备工程有限公司 Material frame detection method and system based on machine vision
CN116423526A (en) * 2023-06-12 2023-07-14 上海仙工智能科技有限公司 Automatic calibration method and system for mechanical arm tool coordinates and storage medium
CN116423526B (en) * 2023-06-12 2023-09-19 上海仙工智能科技有限公司 Automatic calibration method and system for mechanical arm tool coordinates and storage medium
CN116766183A (en) * 2023-06-15 2023-09-19 山东中清智能科技股份有限公司 Mechanical arm control method and device based on visual image
CN116766183B (en) * 2023-06-15 2023-12-26 山东中清智能科技股份有限公司 Mechanical arm control method and device based on visual image

Also Published As

Publication number Publication date
CN115609591B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN115609591B (en) Visual positioning method and system based on 2D Marker and compound robot
CN107507167B (en) Cargo tray detection method and system based on point cloud plane contour matching
Geiger et al. Automatic camera and range sensor calibration using a single shot
CN109903313B (en) Real-time pose tracking method based on target three-dimensional model
Azad et al. Stereo-based 6d object localization for grasping with humanoid robot systems
CN104835173B (en) A kind of localization method based on machine vision
Romero-Ramire et al. Fractal markers: A new approach for long-range marker pose estimation under occlusion
CN115791822A (en) Visual detection algorithm and detection system for wafer surface defects
CN108007388A (en) A kind of turntable angle high precision online measuring method based on machine vision
CN109215016B (en) Identification and positioning method for coding mark
CN111627072A (en) Method and device for calibrating multiple sensors and storage medium
CN107292869B (en) Image speckle detection method based on anisotropic Gaussian kernel and gradient search
CN107766859A (en) Method for positioning mobile robot, device and mobile robot
CN113705268B (en) Two-dimensional code positioning method and system
CN108717709A (en) Image processing system and image processing method
CN110096920A (en) A kind of high-precision high-speed positioning label and localization method towards visual servo
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN109784250A (en) The localization method and device of automatically guiding trolley
CN106679671A (en) Navigation marking graph recognition method based on laser data
CN103824275A (en) System and method for finding saddle point-like structures in an image and determining information from the same
CN112257721A (en) Image target region matching method based on Fast ICP
CN115685160A (en) Target-based laser radar and camera calibration method, system and electronic equipment
CN116843748B (en) Remote two-dimensional code and object space pose acquisition method and system thereof
Li et al. Vision-based target detection and positioning approach for underwater robots
CN117496401A (en) Full-automatic identification and tracking method for oval target points of video measurement image sequences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A 2D Marker based visual positioning method and system, composite robot

Effective date of registration: 20230828

Granted publication date: 20230428

Pledgee: Bank of Communications Ltd. Shanghai New District Branch

Pledgor: Shanghai Xiangong Intelligent Technology Co.,Ltd.

Registration number: Y2023310000491