CN115609591A - 2D Marker-based visual positioning method and system and composite robot - Google Patents
- Publication number: CN115609591A
- Application number: CN202211463733.XA
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- B25J9/161 — Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J9/1661 — Programme controls characterised by task planning, object-oriented languages
- B25J9/1694 — Programme controls characterised by use of sensors other than normal servo-feedback; perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697 — Vision controlled systems
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10004 — Still image; photographic image
- G06T2207/20112 — Image segmentation details
- G06T2207/20164 — Salient point detection; corner detection
Abstract
The invention provides a 2D Marker-based visual positioning method and system, and a composite robot. The method comprises the following steps: fixing a 2D Marker beside a target object; acquiring a 2D Marker image with a hand-eye-calibrated camera; performing threshold segmentation to obtain a binary image; searching the binary image for all contours; performing corner detection and circular-reference detection on the contours to identify the inner and outer corner coordinates of the border feature and the radius and center coordinates of each circular feature; and sorting the points, taking the corner closest to the circle centers as the starting point. After each point is refined to sub-pixel accuracy, the 2D Marker plane pose is calculated. Finally, the transformation between the target object and the 2D Marker is taught from the acquired plane pose, so that the pose of the target object in the base coordinate system can be obtained, thereby improving the pose-calculation accuracy of the 2D Marker.
Description
Technical Field
The invention relates to visual positioning technology, and particularly to a 2D Marker-based visual positioning method and system and a composite robot.
Background
With the increasing intelligence and digitalization of the warehouse-logistics industry, Autonomous Mobile Robots (AMRs) for transport and mechanical arms for grasping are widely applied in industrial production across many fields. In recent years, the rapid development of smart factories has continuously raised the technical requirements in the intelligent-logistics field, and simple transport AMRs and fixed mechanical arms can no longer satisfy the demand for automated transfer of parts between complex production lines; composite robots combining an AMR with a mechanical arm have therefore emerged. Affected by factors such as positioning-and-navigation technology and the factory environment, the positioning accuracy of a composite robot alone does not allow the mechanical arm to grasp or place goods precisely, so a vision system must be mounted at the end of the composite robot's arm to compensate and correct the robot's positioning error and thereby achieve accurate picking and placing.
End-of-arm vision systems for composite robots fall into 3D and 2D schemes. The 3D scheme carries a 3D camera and collects point-cloud data for recognition; although it can identify the target object directly, the recognition error is usually large owing to the imaging accuracy of 3D cameras, and 3D cameras also suffer from high price, large size, heavy weight, and a slow recognition cycle, which limits their application in composite robots. The 2D scheme carries a 2D camera and offers low price, small size, light weight, a fast recognition cycle, and high accuracy, so it has much wider application in composite robots.
In general, a 2D camera cannot directly determine the three-dimensional pose of a target object; it must recognize a specific Marker and exploit the fact that the spatial relationship between the Marker and the grasp target is fixed, so that the composite robot can pick and place accurately.
For example, Chinese patent publication No. CN111516006B discloses a composite-robot operation method based on a specific mark, which uses an ArUco tag or a customized simple tag for positioning to obtain the tag's coordinates in the robot-arm coordinate system. In that technique, however, identifying the tag yields only the tag's coordinates in the arm coordinate system, which means the arm can grasp only at the tag's position; when there is a large distance between the tag position and the grasp position, the robot cannot grasp at all.
That technical scheme is therefore severely limited by the usage scene: the tag pose must coincide with the grasp pose, otherwise grasping fails. Moreover, the tag it uses computes the pose from only 4 control points, and the identified control points are not refined to sub-pixel accuracy, so the overall pose-calculation error is large and accurate grasping by the composite robot is difficult.
Disclosure of Invention
The invention mainly aims to provide a 2D Marker-based visual positioning method and system and a composite robot, so as to improve the pose calculation accuracy of the 2D Marker.
In order to achieve the above object, according to a first aspect of the present invention, there is provided a 2D Marker-based visual positioning method, comprising the steps of:
step S100, fixing a 2D Marker beside the position of the target object, wherein the marking features of the 2D Marker comprise: a rectangular border feature with distinct color contrast, and circular features of different radii gathered inside the border frame close to the same corner point;
step S200, acquiring a 2D Marker image with a hand-eye-calibrated camera, performing threshold segmentation to obtain a binary image, then searching for all contours in the image, performing corner detection and circular-reference detection respectively to identify the inner and outer corner coordinates of the border feature and the radius and center coordinates of each circular feature, and sorting the points taking the corner closest to the circle centers as the starting point;
step S300, calculating the 2D Marker plane pose after each point is refined to sub-pixel accuracy, comprising: establishing a spatial coordinate system Ow with the 2D Marker as its XOY plane according to the Marker's known physical size, establishing matched key points between each corner and circle-center point obtained in step S200 and their three-dimensional counterparts in the Ow coordinate system, and, with the camera intrinsics known, calculating the plane pose of the 2D Marker by a PnP algorithm;
and step S400, teaching the transformation between the target object and the 2D Marker according to the plane pose acquired in step S300, so as to obtain the pose of the target object in the base coordinate system.
In a possible preferred embodiment, step S200 further includes an image denoising step: and carrying out noise reduction on the acquired 2D Marker image by adopting a Gaussian smoothing algorithm.
In a possible preferred embodiment, in step S200 the acquired 2D Marker image is adaptively threshold-segmented using the maximum between-class variance (Otsu) method to obtain a binary image.
In a possible preferred embodiment, the step of obtaining all contours in the image and performing corner detection in step S200 comprises: performing polygon fitting on each contour by connecting the first and last points of the contour curve into a straight line, computing the distance from every contour point to that line, and finding the maximum distance d_max; defining a tolerance D; if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the point at d_max is retained and the contour is split into two sub-contours at that coordinate point; the procedure is repeated on the sub-contours, and the coordinate points finally retained are the vertices of the fitted polygon. Quadrilaterals are then screened from the polygons, the deviation of each interior angle from a right angle is computed, and when the deviation satisfies a preset condition the quadrilateral is taken to be the rectangular border, its vertices being the corner points.
In a possible preferred embodiment, the circular-reference detection step comprises: performing convexity, probability, and roundness judgments on the polygons to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
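As an illustrative sketch (not the patent's implementation), the roundness judgment can be written in plain Python: roundness 4πA/P² equals 1 for a perfect circle and π/4 ≈ 0.785 for a square, so a threshold separates the two. The threshold value here is an assumption.

```python
import math

def polygon_area(pts):
    # Shoelace formula for a simple polygon given as [(x, y), ...]
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def roundness(pts):
    # 4*pi*A / P^2: 1 for a circle, smaller for every other shape
    p = perimeter(pts)
    return 4.0 * math.pi * polygon_area(pts) / (p * p)

# A contour sampled from a circle of radius 10 scores near 1 ...
circle = [(10 * math.cos(2 * math.pi * k / 64),
           10 * math.sin(2 * math.pi * k / 64)) for k in range(64)]
# ... while the square border outline scores pi/4 and is rejected
square = [(0, 0), (10, 0), (10, 10), (0, 10)]

def looks_circular(pts, min_roundness=0.9):  # threshold is an assumption
    return roundness(pts) >= min_roundness
```

A convexity check (e.g. verifying that all cross products along the polygon have the same sign) would be combined with this score in the same way.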
In a possible preferred embodiment, the sub-pixelation processing step comprises:
the q point is set as a sub-pixel point,for points in the neighborhood of the q point, the coordinates are known,is composed ofA gray scale gradient ofAt the edge of the pixel, thenThe gradient direction of the point pixel is vertical to the edge direction, when the vector isIs in line with the edge direction, thenThe dot product operation result of the vector and the gradient vector of the p points is 0:
expand the equation and shift the term:
collecting a plurality of pixels near each cornerAccording toDistance from center weightedConstructing a system of equations according to the above formula and solving using least squares:
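A minimal numerical sketch of this least-squares refinement (pure Python, 2x2 solve by Cramer's rule; the synthetic sample points, gradients, and weights are illustrative assumptions, not the patent's data):

```python
def refine_corner(samples):
    """Solve sum(w*G)*q = sum(w*G*p) with G = g*g^T for a 2-D corner q.

    samples: list of (p, g, w) with point p=(x, y), gradient g=(gx, gy)
    and distance weight w; each contributes the constraint g . (q - p) = 0.
    """
    A = [[0.0, 0.0], [0.0, 0.0]]   # accumulates w * g g^T
    b = [0.0, 0.0]                 # accumulates w * g g^T p
    for (px, py), (gx, gy), w in samples:
        G = [[gx * gx, gx * gy], [gx * gy, gy * gy]]
        for i in range(2):
            for j in range(2):
                A[i][j] += w * G[i][j]
            b[i] += w * (G[i][0] * px + G[i][1] * py)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    qx = (b[0] * A[1][1] - b[1] * A[0][1]) / det   # Cramer's rule
    qy = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return qx, qy

# Synthetic corner at (2.5, 3.5): points on the horizontal edge carry a
# vertical gradient, points on the vertical edge a horizontal one.
samples = ([((x, 3.5), (0.0, 1.0), 1.0) for x in (1.0, 2.0, 4.0)] +
           [((2.5, y), (1.0, 0.0), 1.0) for y in (2.0, 5.0, 6.0)])
q = refine_corner(samples)
```

With ideal gradients the solver recovers the true corner exactly; on real images, functions such as OpenCV's cornerSubPix perform this same iteration on measured gradients.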
To achieve the above object, according to a second aspect of the present invention, there is also provided a 2D Marker-based visual positioning system for recognizing the 2D Marker as described above, wherein the visual positioning system comprises:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method according to any one of claims 1 to 6, to be invoked and executed at the proper time by the control unit, the camera, the mechanical arm, the processing unit and the information output unit;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used for coordinating:
after hand-eye calibration of the camera, the mechanical arm drives the camera to acquire a 2D Marker image;
the processing unit is used for denoising the 2D Marker image, performing threshold segmentation to obtain a binary image, then searching for all contours in the image, performing corner detection and circular-reference detection respectively to identify the inner and outer corner coordinates of the border feature and the radius and center coordinates of each circular feature, and sorting the points taking the corner closest to the circle centers as the starting point; each point is then refined to sub-pixel accuracy and the 2D Marker plane pose is calculated as follows: according to the known physical size of the 2D Marker, a spatial coordinate system Ow is established with the 2D Marker as its XOY plane; matched key points are established between each previously obtained corner and circle-center point and their three-dimensional counterparts in the Ow coordinate system; and, with the camera intrinsics known, the plane pose of the 2D Marker is calculated by a PnP algorithm, so that the transformation between the target object and the 2D Marker can be taught and the pose of the target object in the arm base coordinate system obtained;
and the information output unit is used for outputting the pose of the target object under the base coordinate system of the mechanical arm.
In a possible preferred embodiment, the step of obtaining all contours in the image and performing corner detection comprises: performing polygon fitting on each contour by connecting the first and last points of the contour curve into a straight line, computing the distance from every contour point to that line, and finding the maximum distance d_max; defining a tolerance D; if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the point at d_max is retained and the contour is split into two sub-contours at that coordinate point; the procedure is repeated on the sub-contours, and the coordinate points finally retained are the vertices of the fitted polygon. Quadrilaterals are then screened from the polygons, the deviation of each interior angle from a right angle is computed, and when the deviation satisfies a preset condition the quadrilateral is taken to be the rectangular border, its vertices being the corner points.
In a possible preferred embodiment, the circular-reference detection step comprises: performing convexity, probability, and roundness judgments on the polygons to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
In order to achieve the above object, according to a third aspect of the present invention, there is also provided a composite robot comprising a vision grasping unit, the vision grasping unit being any one of the 2D Marker-based visual positioning systems described above.
Through the 2D Marker's marking features and the corresponding identification method, the 2D Marker-based visual positioning method and system and the composite robot provided by the invention significantly improve the pose-calculation accuracy of the 2D Marker, and in turn the accuracy with which the composite robot grasps/places the target object. In addition, by teaching, the transformation between the 2D Marker and the object to be grasped is calculated, so that the 2D Marker can be installed at any position and angle near the object to be grasped, which solves the traditional problem that the mounting position of a 2D Marker is strongly restricted by the scene.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a diagram of method steps for a first embodiment of the present invention;
FIG. 2 is a logic diagram of a first embodiment of the present invention;
fig. 3 is a schematic diagram of a process flow of identifying a 2D Marker pose in the first embodiment of the invention;
fig. 4 is a schematic structural diagram of a 2D Marker in a first embodiment of the invention;
fig. 5 is a schematic view of a 2D Marker corner point sequence in the first embodiment of the present invention;
fig. 6 is a schematic diagram of a composite robot performing a grabbing task based on a 2D Marker visual positioning method according to a first embodiment of the present invention;
fig. 7 is a schematic structural diagram of a 2D Marker-based visual positioning system according to a second embodiment of the invention.
Detailed Description
To help those skilled in the art better understand the technical solution of the present invention, the specific technical solution is described clearly and completely below with reference to the embodiments. Obviously, the embodiments described herein are only some, not all, of the embodiments of the invention. It should be noted that the embodiments in this application and the features in the embodiments may be combined with each other provided they do not conflict. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Furthermore, the terms "first," "second," "S100," "S200," and the like in the description and in the claims and the drawings of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those described herein. Also, the terms "including" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. Unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and encompass, for example, both fixed and removable connections or integral connections; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in this case can be understood by those skilled in the art in combination with the prior art according to the specific situation.
The following takes visual positioning and grasping of a target object by a composite robot as an example. The composite robot in this example consists of an AMR (Autonomous Mobile Robot), a mechanical arm, and a machine vision system: the AMR is responsible for moving, the arm for picking and placing goods, and the vision system serves as the eyes of the arm to adjust its picking and placing precisely. Obviously, compared with a single-function transfer robot, the composite robot can complete complex cooperative tasks such as freight transport and goods handling.
(A)
As shown in fig. 1 to 6, according to a first aspect of the present invention, the steps of the 2D Marker-based visual positioning method are provided as follows:
step S100, fixing the 2D Marker beside the position of the target, for example, keeping the relative position relationship between the 2D Marker (hereinafter referred to as Marker/mark) and the target unchanged.
Step S200, a hand-eye-calibrated camera collects a 2D Marker image; threshold segmentation yields a binary image; contour search is performed to obtain all contours in the image; corner detection and circular-reference detection are performed respectively to identify the inner and outer corner coordinates of the border feature and the radius and center coordinates of each circular feature; and the points are sorted taking the corner closest to the circle centers as the starting point.
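The point-location sorting at the end of step S200 can be sketched as follows (an illustrative sketch, not the patent's code; the corner and circle-center coordinates are assumed values):

```python
import math

def order_corners(corners, circle_centers):
    """Rotate the cyclic corner list so it starts at the corner nearest
    the circular features, giving every detection the same ordering."""
    cx = sum(c[0] for c in circle_centers) / len(circle_centers)
    cy = sum(c[1] for c in circle_centers) / len(circle_centers)
    start = min(range(len(corners)),
                key=lambda i: math.dist(corners[i], (cx, cy)))
    return corners[start:] + corners[:start]

# Four outer corners of a border, with the circles near one corner
corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
centers = [(80, 85), (90, 75)]          # illustrative circle centers
ordered = order_corners(corners, centers)
```

This is what removes the checkerboard-style ambiguity: the asymmetric circular features fix a unique starting corner, so 2-D/3-D correspondences are always paired consistently.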
Step S300, the 2D Marker plane pose is calculated after each point is refined to sub-pixel accuracy, comprising: according to the known physical size of the 2D Marker, a spatial coordinate system Ow is established with the 2D Marker as its XOY plane; matched key points are established between each corner and circle-center point obtained in step S200 and their three-dimensional counterparts in the Ow coordinate system; and, with the camera intrinsics known, the plane pose of the 2D Marker is calculated by a PnP algorithm.
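Building the matched 3-D key points on the Ow plane might look like this (pure-Python sketch; the marker side length and border width are illustrative assumptions, and a real implementation would pass these points together with the sub-pixel image points to a PnP solver such as OpenCV's solvePnP):

```python
def marker_object_points(outer=60.0, border=5.0):
    """3-D coordinates (z = 0) of the 8 border corners of a square
    2D Marker of side `outer` with border width `border`, expressed
    in the marker coordinate system Ow whose XOY plane is the marker."""
    o, i0, i1 = outer, border, outer - border
    outer_pts = [(0.0, 0.0, 0.0), (o, 0.0, 0.0), (o, o, 0.0), (0.0, o, 0.0)]
    inner_pts = [(i0, i0, 0.0), (i1, i0, 0.0), (i1, i1, 0.0), (i0, i1, 0.0)]
    return outer_pts + inner_pts

pts = marker_object_points()
```

Using both the inner and outer border corners (plus the circle centers) gives the PnP solver more than the 4 control points of an ArUco tag, which is the source of the accuracy gain claimed above.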
Step S400, the transformation between the target object and the 2D Marker is taught according to the plane pose acquired in step S300, so as to obtain the pose of the target object in the base coordinate system.
Specifically, the goal of the composite robot's visual grasping is to obtain the pose of the target object in the arm base coordinate system. A pose consists of two parts: the three-dimensional spatial coordinates (x, y, z) and the spatial rotation (rx, ry, rz). For calculation, the rotation is converted into rotation-matrix form and combined with the spatial coordinates into a homogeneous matrix. The pose base_T_obj of the target object in the base coordinate system is:

base_T_obj = base_T_end · end_T_cam · cam_T_obj

where base_T_end denotes the pose of the flange center at the end of the mechanical arm in the base coordinate system, end_T_cam denotes the transformation from the camera to the arm end, also known as the hand-eye matrix, and cam_T_obj denotes the pose of the target object in the camera coordinate system.
According to the formula, to obtain the pose base_T_obj of the target object in the base frame, base_T_end, end_T_cam and cam_T_obj must be known. base_T_end, the pose of the arm end, can be read directly from the collaborative arm and is regarded as a known quantity.

end_T_cam, the transformation from the camera to the end flange, must be obtained through calibration.
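The chain base_T_obj = base_T_end · end_T_cam · cam_T_obj is a product of 4x4 homogeneous matrices. A pure-Python sketch with translation-only poses (the numeric poses are illustrative assumptions):

```python
def matmul4(a, b):
    # Multiply two 4x4 homogeneous transforms given as nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

base_T_end = translation(0.5, 0.0, 0.8)   # read from the arm controller
end_T_cam  = translation(0.0, 0.05, 0.1)  # hand-eye matrix, from calibration
cam_T_obj  = translation(0.1, 0.0, 0.4)   # from Marker recognition

base_T_obj = matmul4(matmul4(base_T_end, end_T_cam), cam_T_obj)
```

With rotations included the same two-line composition applies; only the upper-left 3x3 blocks stop being identity.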
For example, the composite robot described herein adopts the "eye-in-hand" mode, i.e., the camera is mounted at the end of the mechanical arm. Hand-eye calibration is the precondition of visual recognition and grasping, and its accuracy plays a decisive role in the grasping accuracy. Calibration of a 2D-camera hand-eye system is divided into two steps: intrinsic calibration and extrinsic calibration.
1. Intrinsic calibration
The camera intrinsics are intrinsic properties of the camera, consisting of the focal lengths f_x, f_y in the x and y directions, the principal-point offsets c_x, c_y, and a series of distortion coefficients; when the camera's focal length and focus ring are fixed, the intrinsics are fixed. According to the pinhole camera model:

s · [u, v, 1]^T = K · [R | t] · [X, Y, Z, 1]^T,  with  K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]

where (u, v) is a pixel-coordinate point and (X, Y, Z) is a spatial coordinate point. Intrinsic calibration usually requires a specific calibration object, typically in the form of a checkerboard, circle-grid, or ChArUco calibration board. The calibration board is a plane in three-dimensional space; projecting it by the imaging relationship above yields the imaging plane, and from the coordinates of corresponding points on the two planes a homography between them can be obtained. The pattern and dimensions of the board are known, and the corners in the image can be extracted by a corner-detection algorithm, giving p_img = H · p_board, where p_img are the point coordinates in the image, p_board the coordinates of points on the calibration board, and H the homography between the two planes. When several board images are collected, several groups of homographies are obtained; solving by least squares and decomposing H yields the camera intrinsic matrix K.
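The pinhole projection written out for a single point (an illustrative sketch; the intrinsic values below are assumptions):

```python
def project(point3d, fx, fy, cx, cy):
    """Project a 3-D point in the camera frame to pixel coordinates
    using the pinhole model u = fx*X/Z + cx, v = fy*Y/Z + cy
    (distortion omitted for clarity)."""
    X, Y, Z = point3d
    return fx * X / Z + cx, fy * Y / Z + cy

# Illustrative intrinsics for a 1280x1024 sensor
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 512.0
u, v = project((0.1, -0.05, 0.5), fx, fy, cx, cy)
```

Calibration estimates fx, fy, cx, cy (and the distortion coefficients) so that this model best reproduces the detected board corners.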
2. Extrinsic calibration
Extrinsic calibration computes the positional relationship of the camera to the arm end, called the hand-eye matrix; when the camera is mounted at the end beside the gripper, the configuration is called "eye-in-hand". In this configuration the calibration board is placed at a fixed position, cam_T_cal denotes the transformation from the calibration board to the camera, and board images are collected while moving the arm end. Since the relationship between the arm base and the calibration board is fixed, and the camera-to-arm-end transformation end_T_cam is the quantity sought, it follows that:

base_T_end(i) · end_T_cam · cam_T_cal(i) = base_T_cal = constant

This is converted into the form A · X = X · B; by moving the arm end, several groups of corresponding relations are obtained, and the hand-eye matrix X = end_T_cam is solved by Tsai's two-step method.
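The relation A · X = X · B behind Tsai's method can be checked numerically: with a known hand-eye matrix X, the relative end motion A and relative camera motion B between two shots must satisfy it. A translation-only sketch (all poses are illustrative assumptions; real calibration solves for X from many such pairs):

```python
def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def inv_translation(t):
    return translation(-t[0][3], -t[1][3], -t[2][3])

X = translation(0.0, 0.05, 0.1)           # hand-eye matrix (ground truth)
base_T_cal = translation(1.0, 1.0, 0.0)   # fixed calibration board

# Two arm-end poses and the camera-to-board poses they imply
E = [translation(0.2, 0.0, 0.5), translation(0.4, 0.1, 0.5)]
C = [matmul4(inv_translation(matmul4(e, X)), base_T_cal) for e in E]

# Relative motions between the two shots satisfy A*X == X*B
A = matmul4(inv_translation(E[1]), E[0])
B = matmul4(C[1], inv_translation(C[0]))
AX = matmul4(A, X)
XB = matmul4(X, B)
```

With rotations included the identity still holds, but solving for X then genuinely requires Tsai's two-step (rotation first, then translation) procedure.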
On the other hand, cam_T_obj, the pose of the target in the camera coordinate system, must be obtained through recognition.
Here the inventor notes that the Marker recognition process computes the transformation cam_T_marker from the Marker to the camera coordinate system, and the Marker recognition accuracy plays an important role in the grasping accuracy of the composite robot. Markers used in the prior art are generally checkerboards, ArUco, AprilTag, and the like. The principle is to regard the Marker as a plane, establish a spatial coordinate system (world coordinate system) on that plane, then extract the corners on the Marker to obtain their world and pixel coordinates, and, with the camera intrinsics known, compute the transformation cam_T_marker from the world coordinate system on the plane to the camera coordinate system.
A checkerboard is centrally symmetric, so the direction of a coordinate system established on a checkerboard Marker is not unique, and checkerboards therefore cannot be used for composite-robot recognition. ArUco and AprilTag are both two-dimensional codes with specific patterns whose positioning depends on the four corners of the Marker border, but the small number of corners makes the positioning error large. Furthermore, Alberto et al. demonstrated in 2016 that the positioning error of a single AprilTag Marker increases with the Marker's distance from the camera and with the angle between the Marker and the camera plane, so the aforementioned ArUco solution cannot move the tag position away from the grasp position.
The defects of the prior art therefore all stem from the low pose-calculation accuracy of existing 2D Marker schemes: the existing ArUco and AprilTag schemes cannot meet the requirement of accurate grasping by a composite robot.
Therefore, the present 2D Marker and the corresponding identification method are designed to effectively increase the recognition rate and improve the pose-calculation accuracy. The marking features of the 2D Marker comprise: a rectangular border feature with distinct color contrast, and a plurality of circular features of different radii, each circular feature gathered near the same corner inside the border frame. As shown in figs. 4 to 5, in this example the border feature is a rectangular black border, and the circular feature consists of two circles of different radii placed near the lower-right corner of the border; the two circles do not overlap and are clearly separated.
It is worth mentioning that the design concept of this 2D Marker derives from the observation that AprilTag and ArUco both determine a unique Marker by a specific black-and-white coding (similar to the black and white blocks in a QR code) representing a unique ID. Extracting and decoding the coded blocks is a time-consuming operation, yet in composite-robot applications the ID information is not needed: only the Marker corner coordinates are required to compute the pose, so the existing Markers carry redundant information.
In the present scheme, the Marker can be located quickly using the black border and the circular reference, and no decoding step is needed. Experimental data show that, on the same device with the same pixel size, the recognition time of AprilTag and ArUco is 200-300 milliseconds, while the present Marker can be detected in 40-50 milliseconds.
Further, after the 2D Marker is fixed beside the target object's position and the hand-eye-calibrated camera has collected the 2D Marker image, the identification method proceeds with the following steps:
step S210, image denoising:
For the captured marker image, noise from ambient lighting, debris and similar sources would introduce errors if the original image were processed directly, so the image is denoised first. Since the noise matches the characteristics of Gaussian noise, this technical scheme applies Gaussian smoothing to the image, using the two-dimensional Gaussian function:

$$G(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2}-\frac{(y-\mu_y)^2}{2\sigma_y^2}\right)$$

where $\mu_x$ and $\mu_y$ are the means of the Gaussian kernel, and $\sigma_x^2$ and $\sigma_y^2$ are its variances.
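As an illustrative sketch (not part of the claimed method), the Gaussian smoothing step can be implemented as follows; the kernel size and sigma value are assumed example parameters:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized zero-mean 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    """Denoise a grayscale image by convolving with a Gaussian kernel."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i+size, j:j+size] * k).sum()
    return out
```

In practice a library convolution would be used for speed; the explicit loops here only make the operation transparent.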
Step S220 threshold segmentation:
Adaptive threshold segmentation is performed on the image using the maximum between-class variance (Otsu) method. The threshold $T$ is the value that maximizes the between-class variance:

$$T = \arg\max_{t}\;\sigma_b^2(t) = \arg\max_{t}\;\omega_0(t)\,\omega_1(t)\,\bigl(\mu_0(t)-\mu_1(t)\bigr)^2$$

where $\omega_0$ and $\omega_1$ are the proportions of pixels below and above the candidate threshold $t$, and $\mu_0$ and $\mu_1$ are the mean gray levels of the two classes.
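The maximum between-class variance search can be sketched directly from its definition (an illustrative sketch, assuming 8-bit gray levels):

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing the between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels below the returned threshold become 0 and the rest become 1, giving the binary image used in the following steps.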
step S230 contour finding:
For the binary image produced by threshold segmentation, small-area noise connected components are first filtered out, and contour searching is then performed on the remaining connected components. For example: a pixel with value 1 is defined as a contour point if any pixel in its 4-neighborhood or 8-neighborhood has value 0; the image is traversed to find all contours.
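The contour-point definition above can be sketched directly; the 4-neighborhood variant is shown (an illustrative sketch, with image-border pixels treated as adjacent to background):

```python
def contour_points(grid):
    """Return foreground pixels (value 1) that have a 4-neighbor of value 0."""
    h, w = len(grid), len(grid[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] != 1:
                continue
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                # off-image neighbors count as background
                if not (0 <= ni < h and 0 <= nj < w) or grid[ni][nj] == 0:
                    pts.append((i, j))
                    break
    return pts
```

Grouping the returned points into closed contours (e.g. by border following) then yields the per-contour point lists used for polygon fitting.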
Step S240 corner detection:
The inner and outer corner points of the black frame containing the marker are the key points for pose calculation, and they are searched for on the basis of the contour search.
First, polygon fitting is performed on each contour using the Douglas-Peucker algorithm:
Connect the first and last points of the contour curve with a straight line, compute the distance from every contour point to that line, and find the maximum distance d_max. Define a tolerance D. If d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the point at d_max is retained, the contour is split into two sub-contours at that point, and the procedure is repeated on each sub-contour. The coordinate points that remain at the end are the vertices of the fitted polygon.
The fitted polygons are then screened. Since the inner and outer contours of the black frame are squares, quadrilaterals are first selected from the polygons; and since each angle of a square is 90 degrees, the cosine of each angle of the quadrilateral is computed. Allowing for the influence of the shooting angle, if the largest cosine value among the four angles is below a preset threshold, the polygon is considered a rectangle, and its vertices are taken as the corner points.
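A minimal sketch of the Douglas-Peucker fit and the cosine-based rectangle check described above (the tolerance D and any cosine threshold applied to the result are assumed values, not taken from the patent):

```python
import math

def douglas_peucker(pts, D):
    """Simplify a polyline, keeping points farther than D from the chord."""
    if len(pts) < 3:
        return list(pts)
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    d_max, idx = -1.0, 0
    for i in range(1, len(pts) - 1):
        px, py = pts[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm  # point-line distance
        if d > d_max:
            d_max, idx = d, i
    if d_max <= D:
        return [pts[0], pts[-1]]          # drop all intermediate points
    left = douglas_peucker(pts[:idx + 1], D)
    right = douglas_peucker(pts[idx:], D)
    return left[:-1] + right              # split at the farthest point

def max_angle_cosine(quad):
    """Largest |cos| over the four interior angles of a quadrilateral."""
    worst = 0.0
    for i in range(4):
        a, b, c = quad[i - 1], quad[i], quad[(i + 1) % 4]
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        worst = max(worst, abs(cos))
    return worst
```

A quadrilateral whose largest |cos| is near 0 has all angles near 90 degrees and would be accepted as a frame candidate.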
Step S250 circular reference detection:
Besides the black frame, the two circular fiducials at the lower-right corner of the marker are equally important. For each detected polygon, its convexity is first evaluated. Convexity is defined as the degree to which the polygon approaches a convex polygon; a convex polygon has convexity 1. The convexity is computed as:

$$\text{convexity} = \frac{S}{H}$$

where S is the area of the region enclosed by the contour, and H is the area of the minimum convex polygon (convex hull) enclosing all vertices of the contour polygon. When convexity = 1, the contour is a convex polygon.
The inertia ratio (InertiaRatio) represents how far an ellipse deviates from an ideal circle, with range (0, 1]: the closer the inertia ratio is to 0, the flatter the figure; the closer to 1, the rounder. The inertia ratio i is computed as:

$$i = \frac{\sqrt{a^2 - c^2}}{a}$$

where c is the semi-focal length of the ellipse and a is its semi-major axis.
Circularity represents how closely a figure fills out a circle, with range (0, 1]: the closer the value is to 0, the closer the figure is to an infinitely elongated rectangle; the closer to 1, the closer to a circle. Circularity is computed as:

$$c_r = \frac{4\pi S}{C^2}$$

where S is the area of the figure and C is its perimeter.
By evaluating the three parameters of convexity, inertia ratio and circularity, the circular fiducials can be located accurately, yielding the center coordinates and radius of each fiducial.
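Two of these descriptors, convexity and circularity, can be computed directly for a polygonal contour; the sketch below uses the shoelace formula for area and a monotone-chain convex hull, and is illustrative only:

```python
import math

def area(poly):
    """Area magnitude via the shoelace formula."""
    s = sum(poly[i - 1][0] * poly[i][1] - poly[i][0] * poly[i - 1][1]
            for i in range(len(poly)))
    return abs(s) / 2.0

def perimeter(poly):
    return sum(math.dist(poly[i - 1], poly[i]) for i in range(len(poly)))

def convex_hull(pts):
    """Andrew's monotone-chain convex hull."""
    pts = sorted(set(pts))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def circularity(poly):
    return 4 * math.pi * area(poly) / perimeter(poly) ** 2

def convexity(poly):
    return area(poly) / area(convex_hull(poly))
```

A circle-like contour scores near 1 on both measures, while an axis-aligned square has circularity pi/4, which is why the three-way test separates the circular fiducials from the frame contours.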
Step S260 point location planning:
After the coordinates of the inner and outer corner points of the black frame and of the centers of the two circular fiducials are obtained, the points must be ordered. As shown in fig. 5, the sequence is: first the center of the larger circular fiducial, then the center of the smaller one, and finally the corner points of the inner and outer frames, where the corner nearest the two fiducials is taken as the starting point and the remaining corners follow clockwise. This yields an ordered sequence of ten image point locations and fixes the coordinate-system orientation of the corners.
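The ordering of one frame's four corners can be sketched as follows; the coordinates in the usage are made-up example values, and image coordinates (y pointing down) are assumed, so increasing polar angle sweeps clockwise on screen:

```python
import math

def order_corners(corners, fiducial_centers):
    """Order 4 corners clockwise, starting at the one nearest the fiducials."""
    fx = sum(c[0] for c in fiducial_centers) / len(fiducial_centers)
    fy = sum(c[1] for c in fiducial_centers) / len(fiducial_centers)
    cx = sum(c[0] for c in corners) / 4.0
    cy = sum(c[1] for c in corners) / 4.0
    # In image coordinates (y down), sorting by atan2 angle sweeps clockwise.
    ordered = sorted(corners, key=lambda c: math.atan2(c[1] - cy, c[0] - cx))
    start = min(range(4), key=lambda i: math.dist(ordered[i], (fx, fy)))
    return ordered[start:] + ordered[:start]
```

Applying this to the inner and outer frames, and prepending the two circle centers (larger radius first), gives the ten-point ordered sequence described above.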
Step S310 sub-pixelation:
Furthermore, corner detection only yields pixel-level coordinates: the resulting corner coordinate values are integers. For the high-precision positioning required by a composite robot, the error introduced by pixel-level corners is large, so the corner coordinates must be refined to sub-pixel precision to obtain more accurate corner positions.
Suppose q is the true sub-pixel corner, with unknown coordinates, and $p_i$ is a point in the neighborhood of q with known coordinates and gray-level gradient $G_i$. If $p_i$ lies on an edge, the gradient direction at $p_i$ is perpendicular to the edge direction, while the vector $(q - p_i)$ lies along the edge, so the dot product of the gradient vector and $(q - p_i)$ is zero:

$$G_i^{T}(q - p_i) = 0$$

Expanding the equation and moving terms gives:

$$G_i G_i^{T}\, q = G_i G_i^{T}\, p_i$$

Many points $p_i$ can be collected near the initial corner; each is given a weight $w_i$ according to its distance from the center, a system of equations is built from the formula above, and q is solved by least squares:

$$q = \Bigl(\sum_i w_i\, G_i G_i^{T}\Bigr)^{-1} \sum_i w_i\, G_i G_i^{T}\, p_i$$
Through sub-pixel refinement, the pose-calculation accuracy of the marker is effectively improved, allowing the operating accuracy of the composite robot to reach the millimeter level.
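The weighted least-squares refinement can be sketched on synthetic data; the point, gradient and weight values in the test are fabricated to exercise the solver:

```python
import numpy as np

def subpixel_corner(points, gradients, weights):
    """Solve (sum w G G^T) q = sum w G G^T p for the sub-pixel corner q."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, g, w in zip(points, gradients, weights):
        p, g = np.asarray(p, float), np.asarray(g, float)
        GG = np.outer(g, g)       # rank-1 constraint from one neighborhood point
        A += w * GG
        b += w * GG @ p
    return np.linalg.solve(A, b)
```

Each edge point contributes a constraint perpendicular to its gradient; with points from at least two non-parallel edges, the 2x2 normal system is well-posed and yields the corner.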
Step S320, pose solving:
Since the physical size of the marker is known, the marker is treated as a spatial plane, and a spatial coordinate system O_w is established with the marker as the XOY plane; the three-dimensional coordinates of the corner and center points in the O_w system are therefore known. Recognition yields the sub-pixel image coordinates of the 10 key points, and the pose of the marker in the camera coordinate system is computed from the corresponding 2D-3D point pairs: with the camera intrinsics known, the pose of the marker plane can be solved by a PnP (Perspective-n-Point) algorithm.
The PnP problem has various solution methods; common ones are the Direct Linear Transform (DLT), EPnP, and minimization of the reprojection error. In this scheme, minimization of the reprojection error is taken as the example for solving the marker pose.
According to the pinhole imaging principle, the projection from the world coordinate system to the pixel coordinate system is:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R \mid t] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = K\,T\,P_w$$

where (u, v) are pixel coordinates, K is the camera intrinsic matrix, [R | t] is the extrinsic transform from the marker to the camera coordinate system, i.e. the pose, T is the homogeneous matrix formed from the extrinsics, $P_w$ are the coordinates in the world coordinate system, and $Z_c$ is the depth of the feature point in the camera coordinate system.
Thereby obtaining the pose of the Marker plane.
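Since the marker is planar (Z_w = 0), its pose can be recovered from a homography, a standard way to obtain (or initialize) the PnP solution; the sketch below is illustrative only, and the intrinsic matrix K and marker points in the test are fabricated values:

```python
import numpy as np

def homography_dlt(obj_xy, img_uv):
    """DLT estimate of H (up to scale) mapping plane points (X, Y, 1) to pixels."""
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)           # null-space vector as 3x3 H

def pose_from_homography(H, K):
    """Recover (R, t) of the marker plane from H ~ K [r1 r2 t]."""
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:                        # pick the sign placing the marker in front
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)            # project onto the nearest rotation matrix
    if np.linalg.det(U @ Vt) < 0:
        U[:, -1] = -U[:, -1]
    return U @ Vt, t
```

A reprojection-error minimization, as named in the scheme, would then iteratively refine this (R, t) against the ten observed points.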
Step S410, teaching a conversion relation between the target object and the 2D Marker:
Recognition only yields the pose of the marker in the robot-arm base coordinate system, $T_{base \to marker}$, but what usually needs to be grasped is target material beside the marker, so the pose of the target object in the base coordinate system is ultimately required. A conversion relation between the target object and the marker, $T_{marker \to target}$, is therefore also needed, and it is obtained by teaching:

The robot-arm end is moved to the grasp pose of the target object; the end pose at that moment is the transform from the target point to the robot-arm base, $T_{base \to target}$. Since the relation between the target and the marker is fixed, the conversion from the target point to the marker can be computed:

$$T_{marker \to target} = T_{base \to marker}^{-1}\; T_{base \to target}$$
In this way, the pose of the target object in the base coordinate system can be obtained.
In actual use, as shown in fig. 6, when the composite robot performs a grasping task, it is first scheduled to the designated station, and the robot arm is then moved to the recognition position, ensuring that the marker is in the camera's field of view. From the recognized marker, $T_{base \to marker}$ is obtained, and from the teaching step, $T_{marker \to target}$; the pose of the grasp point in the robot-arm base frame then follows:

$$T_{base \to target} = T_{base \to marker}\; T_{marker \to target}$$
Thereby, the mechanical arm of the compound robot can accurately grab the target object.
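The teaching and replay steps reduce to homogeneous-matrix algebra; a sketch with made-up poses (4x4 matrices in the form [[R, t], [0, 1]]):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def teach_offset(T_base_marker, T_base_target):
    """Teaching: T_marker_target = inv(T_base_marker) @ T_base_target."""
    return np.linalg.inv(T_base_marker) @ T_base_target

def replay(T_base_marker_new, T_marker_target):
    """Replay: grasp pose in the base frame at a new marker observation."""
    return T_base_marker_new @ T_marker_target
```

The round trip replay(T_base_marker, teach_offset(T_base_marker, T_base_target)) reproduces the taught grasp pose, which is exactly why the marker can be mounted at any fixed offset from the object.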
Thus, the conversion relation between the marker and the object to be grasped is computed by teaching, so the offset is guaranteed without manual measurement or specially machined parts; marker installation is not constrained by the material, the machine or similar environmental factors, which improves the deployment flexibility of the composite robot and effectively shortens the deployment cycle and reduces deployment difficulty.
Moreover, the AMR gradually accumulates a certain positioning error while moving autonomously; the above scheme also effectively compensates for the error introduced by AMR positioning, greatly improving the accuracy with which the composite robot picks and places goods.
Compared with a 3D-marker recognition scheme, a composite robot using the 2D-marker recognition scheme has advantages such as faster cycle time, higher accuracy and lower cost. As noted above, AprilTag and ArUco identify a unique marker through a specific black-and-white coding pattern (similar to the black and white blocks of a QR code) representing a unique ID, whose extraction and decoding are time-consuming; in composite-robot applications the ID information is not needed, since only the marker corner coordinates are required to compute the pose, so that information is redundant. The scheme of the invention uses the black frame and circular fiducials, locating the marker quickly without any decoding step; experimental data show that, on the same device and at the same image resolution, AprilTag and ArUco recognition takes 200-300 milliseconds, while marker detection in this scheme takes only 40-50 milliseconds.
(II)
As shown in fig. 7, and corresponding to the first embodiment, a second aspect of the present invention further provides a 2D Marker-based visual positioning system for identifying a 2D Marker as described above, the visual positioning system comprising:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method of embodiment one, to be invoked and executed in due time by the control unit, the camera, the mechanical arm, the processing unit and the information output unit;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used for coordinating:
after the camera is calibrated by hands and eyes, the camera is driven by a mechanical arm to acquire a 2D Marker image;
the processing unit is used for denoising the 2D Marker image, performing threshold segmentation to obtain a binary image, then performing contour searching to obtain all contours in the image, performing corner detection and circular fiducial detection respectively to identify the coordinates of the inner and outer corner points of the frame feature and the radius and center coordinates of each circular feature, and ordering the point locations starting from the corner point nearest to the circle centers; each point is then refined to sub-pixel precision and the 2D Marker plane pose is calculated, the steps comprising: from the known physical size of the 2D Marker, establishing a spatial coordinate system O_w with the 2D Marker as the XOY plane, matching the previously obtained corner and circle-center points with their three-dimensional counterparts in the O_w coordinate system, and, with the camera intrinsics known, calculating the 2D Marker plane pose by a PnP algorithm, so as to teach the conversion relation between the target object and the 2D Marker and obtain the pose of the target object in the robot-arm base coordinate system;
and the information output unit is used for outputting the pose of the target object under the base coordinate system of the mechanical arm.
In a preferred embodiment, the processing unit performs noise reduction on the acquired 2D Marker image by using a gaussian smoothing algorithm, and performs adaptive threshold segmentation on the acquired 2D Marker image by using a maximum inter-class variance method to obtain a binary image.
Wherein in a preferred embodiment, the sub-pixelation processing step comprises:
let q be the sub-pixel corner point and $p_i$ a point in the neighborhood of q with known coordinates and gray gradient $G_i$; if $p_i$ lies on an edge, the gradient direction at $p_i$ is perpendicular to the edge direction, while the vector $(q - p_i)$ lies along the edge, so the dot product of the gradient vector and $(q - p_i)$ is zero:

$$G_i^{T}(q - p_i) = 0$$

expanding the equation and moving terms gives:

$$G_i G_i^{T}\, q = G_i G_i^{T}\, p_i$$

a number of points $p_i$ are collected, each weighted $w_i$ according to its distance from the center; a system of equations is built from the formula above and solved by least squares:

$$q = \Bigl(\sum_i w_i\, G_i G_i^{T}\Bigr)^{-1} \sum_i w_i\, G_i G_i^{T}\, p_i$$
In a preferred embodiment, the corner detection step comprises: performing polygon fitting on the contour by connecting the first and last points of the contour curve into a straight line, computing the distance from every contour point to that line, and finding the maximum distance value d_max; defining a tolerance D; if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the coordinate point corresponding to d_max is retained and the contour is split at it into two sub-contours, on which the procedure is repeated; the coordinate points finally retained are the vertices of the fitted polygon; quadrilaterals are then screened from the polygons, the deviation of each angle from a right angle is computed, and when the deviation meets a preset condition the quadrilateral is considered the rectangular frame and its vertices are the corner points.
Wherein in a preferred embodiment, the circular fiducial detection step comprises: performing convexity, inertia-ratio and circularity judgments on the polygon to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
Thus, the 2D Marker-based visual positioning system obtains the transform between the marker and the object to be grasped by teaching, so the offset is guaranteed without manual measurement or specially machined parts, and marker installation is not constrained by the material, the machine or similar environmental factors; the system is therefore particularly suited to deployment on a composite robot, and its deployment flexibility effectively shortens the deployment cycle and reduces deployment difficulty.
Moreover, the AMR gradually accumulates a certain positioning error while moving autonomously; this system also effectively compensates for the error introduced by AMR positioning, greatly improving the accuracy with which the composite robot picks and places goods.
(III)
With respect to the first and second embodiments described above, a third aspect of the present invention also provides a compound robot including: the system comprises a visual grabbing unit and an autonomous mobile robot, wherein the visual grabbing unit is the 2D Marker-based visual positioning system in the second embodiment.
In summary, the specially designed 2D Marker features and the corresponding recognition method provided by the 2D Marker-based visual positioning method and system and the composite robot of the invention significantly improve the pose-calculation accuracy of the 2D Marker, and thereby the accuracy with which the composite robot grasps and places the target object. In addition, by teaching, the conversion relation between the 2D Marker and the object to be grasped is computed, so the 2D Marker can be installed at any position and angle near the object to be grasped, solving the traditional problem that the 2D Marker mounting position is heavily restricted by the scene.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof, and any modification, equivalent replacement, or improvement made within the spirit and principle of the invention should be included in the protection scope of the invention.
It will be appreciated by those skilled in the art that, in addition to implementing the system, apparatus and individual modules thereof provided by the present invention in purely computer readable program code means, the system, apparatus and individual modules thereof provided by the present invention can be implemented in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like, all by logically programming the method steps. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
In addition, all or part of the steps of the method according to the above embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a single chip, a chip, or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.
Claims (10)
1. A visual positioning method based on a 2D Marker is characterized by comprising the following steps:
step S100, fixing a 2D Marker beside the position of a target object, wherein the marking features of the 2D Marker comprise: a rectangular border feature with distinct color contrast, and a plurality of circular features with different radii gathered near the same corner point inside the border feature frame;
step S200, acquiring a 2D Marker image by a camera after hand-eye calibration, carrying out threshold segmentation to obtain a binary image, then carrying out contour searching to obtain all contours in the image, respectively carrying out corner detection and circular reference detection to identify the coordinates of inner and outer corners of a frame feature and the corresponding radius and the coordinates of a central point of each circular feature, and carrying out point location sequencing by taking the corner point nearest to each central point as a starting point;
step S300, after each point is refined to sub-pixel precision, calculating the 2D Marker plane pose, the steps comprising: from the known physical size of the 2D Marker, establishing a spatial coordinate system O_w with the 2D Marker as the XOY plane, matching the corner and circle-center points obtained in step S200 with their three-dimensional counterparts in the O_w coordinate system, and calculating the 2D Marker plane pose by a PnP algorithm under the condition that the camera intrinsics are known;
and S400, teaching the conversion relation between the target object and the 2D Marker according to the plane pose acquired in the step S300 so as to acquire the pose of the target object in the base coordinate system.
2. The 2D Marker-based visual positioning method according to claim 1, wherein the step S200 further comprises an image denoising step: and carrying out noise reduction on the acquired 2D Marker image by adopting a Gaussian smoothing algorithm.
3. The 2D Marker-based visual positioning method according to claim 1, wherein in step S200, the acquired 2D Marker image is subjected to adaptive threshold segmentation by using a maximum inter-class variance method to obtain a binary image.
4. The 2D Marker-based visual positioning method according to claim 1, wherein the step of obtaining all the contours in the image in step S200 to perform corner detection comprises: performing polygon fitting on the contour by connecting the first and last points of the contour curve into a straight line, computing the distance from every contour point to that line, and finding the maximum distance value d_max; defining a tolerance D; if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the coordinate point corresponding to d_max is retained and the contour is split at it into two sub-contours, on which the procedure is repeated; the coordinate points finally retained are the vertices of the fitted polygon; quadrilaterals are then screened from the polygons, the deviation of each angle from a right angle is computed, and when the deviation meets a preset condition the quadrilateral is considered the rectangular frame and its vertices are the corner points.
5. The 2D Marker-based visual positioning method according to claim 4, wherein the circular fiducial detection step comprises: performing convexity, inertia-ratio and circularity judgments on the polygon to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
6. The 2D Marker-based visual positioning method according to claim 1, wherein the sub-pixelation processing step comprises:
let q be the sub-pixel corner point and $p_i$ a point in the neighborhood of q with known coordinates and gray gradient $G_i$; if $p_i$ lies on an edge, the gradient direction at $p_i$ is perpendicular to the edge direction, while the vector $(q - p_i)$ lies along the edge, so the dot product of the gradient vector and $(q - p_i)$ is zero:

$$G_i^{T}(q - p_i) = 0$$

expanding the equation and moving terms gives:

$$G_i G_i^{T}\, q = G_i G_i^{T}\, p_i$$

a plurality of points $p_i$ are collected near each corner and weighted $w_i$ according to their distance from the center; a system of equations is built from the formula above and solved by least squares:

$$q = \Bigl(\sum_i w_i\, G_i G_i^{T}\Bigr)^{-1} \sum_i w_i\, G_i G_i^{T}\, p_i$$
7. A 2D Marker based visual positioning system for identifying a 2D Marker as claimed in claim 1, comprising:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method according to any one of claims 1 to 6, to be invoked and executed in due time by the control unit, the camera, the mechanical arm, the processing unit and the information output unit;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used for coordinating:
after the camera is calibrated by hands and eyes, the camera is driven by a mechanical arm to acquire a 2D Marker image;
the processing unit is used for denoising the 2D Marker image, performing threshold segmentation to obtain a binary image, then performing contour searching to obtain all contours in the image, performing corner detection and circular fiducial detection respectively to identify the coordinates of the inner and outer corner points of the frame feature and the radius and center coordinates of each circular feature, and ordering the point locations starting from the corner point nearest to the circle centers; each point is then refined to sub-pixel precision and the 2D Marker plane pose is calculated, the steps comprising: from the known physical size of the 2D Marker, establishing a spatial coordinate system O_w with the 2D Marker as the XOY plane, matching the previously obtained corner and circle-center points with their three-dimensional counterparts in the O_w coordinate system, and calculating the 2D Marker plane pose by a PnP algorithm under the condition that the camera intrinsics are known, so as to teach the conversion relation between the target object and the 2D Marker and obtain the pose of the target object in the robot-arm base coordinate system;
and the information output unit is used for outputting the pose of the target object under the base coordinate system of the mechanical arm.
8. The 2D Marker-based visual positioning system of claim 7, wherein the step of obtaining all the contours in the image to perform corner detection comprises: performing polygon fitting on the contour by connecting the first and last points of the contour curve into a straight line, computing the distance from every contour point to that line, and finding the maximum distance value d_max; defining a tolerance D; if d_max < D, all intermediate contour points between the two endpoints are discarded; if d_max > D, the coordinate point corresponding to d_max is retained and the contour is split at it into two sub-contours, on which the procedure is repeated; the coordinate points finally retained are the vertices of the fitted polygon; quadrilaterals are then screened from the polygons, the deviation of each angle from a right angle is computed, and when the deviation meets a preset condition the quadrilateral is considered the rectangular frame and its vertices are the corner points.
9. The 2D Marker-based visual positioning system of claim 8, wherein the circular fiducial detection step comprises: performing convexity, inertia-ratio and circularity judgments on the polygon to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
10. A compound robot, comprising a visual grasping unit and an autonomous mobile robot, characterized in that the visual grasping unit is a 2D Marker-based visual positioning system according to any one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211463733.XA CN115609591B (en) | 2022-11-17 | 2022-11-17 | Visual positioning method and system based on 2D Marker and compound robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211463733.XA CN115609591B (en) | 2022-11-17 | 2022-11-17 | Visual positioning method and system based on 2D Marker and compound robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115609591A true CN115609591A (en) | 2023-01-17 |
CN115609591B CN115609591B (en) | 2023-04-28 |
Family
ID=84877747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211463733.XA Active CN115609591B (en) | 2022-11-17 | 2022-11-17 | Visual positioning method and system based on 2D Marker and compound robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115609591B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116000942A (en) * | 2023-03-22 | 2023-04-25 | 深圳市大族机器人有限公司 | Semiconductor manufacturing system based on multi-axis cooperative robot |
CN116060269A (en) * | 2022-12-08 | 2023-05-05 | 中晟华越(郑州)智能科技有限公司 | Spraying method for loop-shaped product |
CN116245877A (en) * | 2023-05-08 | 2023-06-09 | 济南达宝文汽车设备工程有限公司 | Material frame detection method and system based on machine vision |
CN116423526A (en) * | 2023-06-12 | 2023-07-14 | 上海仙工智能科技有限公司 | Automatic calibration method and system for mechanical arm tool coordinates and storage medium |
CN116766183A (en) * | 2023-06-15 | 2023-09-19 | 山东中清智能科技股份有限公司 | Mechanical arm control method and device based on visual image |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108562274A (en) * | 2018-04-20 | 2018-09-21 | 南京邮电大学 | A kind of noncooperative target pose measuring method based on marker |
US20180345483A1 (en) * | 2015-12-03 | 2018-12-06 | Abb Schweiz Ag | Method For Teaching An Industrial Robot To Pick Parts |
CN109363771A (en) * | 2018-12-06 | 2019-02-22 | 安徽埃克索医疗机器人有限公司 | The fracture of neck of femur Multiple tunnel of 2D planning information plants nail positioning system in a kind of fusion |
CN111612794A (en) * | 2020-04-15 | 2020-09-01 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts |
CN113084808A (en) * | 2021-04-02 | 2021-07-09 | 上海智能制造功能平台有限公司 | Monocular vision-based 2D plane grabbing method for mobile mechanical arm |
WO2022034032A1 (en) * | 2020-08-11 | 2022-02-17 | Ocado Innovation Limited | A selector for robot-retrievable items |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116060269A (en) * | 2022-12-08 | 2023-05-05 | 中晟华越(郑州)智能科技有限公司 | Spraying method for loop-shaped product |
CN116060269B (en) * | 2022-12-08 | 2024-06-14 | 中晟华越(郑州)智能科技有限公司 | Spraying method for loop-shaped product |
CN116000942A (en) * | 2023-03-22 | 2023-04-25 | 深圳市大族机器人有限公司 | Semiconductor manufacturing system based on multi-axis cooperative robot |
CN116245877A (en) * | 2023-05-08 | 2023-06-09 | 济南达宝文汽车设备工程有限公司 | Material frame detection method and system based on machine vision |
CN116245877B (en) * | 2023-05-08 | 2023-11-03 | 济南达宝文汽车设备工程有限公司 | Material frame detection method and system based on machine vision |
CN116423526A (en) * | 2023-06-12 | 2023-07-14 | 上海仙工智能科技有限公司 | Automatic calibration method and system for mechanical arm tool coordinates and storage medium |
CN116423526B (en) * | 2023-06-12 | 2023-09-19 | 上海仙工智能科技有限公司 | Automatic calibration method and system for mechanical arm tool coordinates and storage medium |
CN116766183A (en) * | 2023-06-15 | 2023-09-19 | 山东中清智能科技股份有限公司 | Mechanical arm control method and device based on visual image |
CN116766183B (en) * | 2023-06-15 | 2023-12-26 | 山东中清智能科技股份有限公司 | Mechanical arm control method and device based on visual image |
Also Published As
Publication number | Publication date |
---|---|
CN115609591B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115609591B (en) | Visual positioning method and system based on 2D Marker and compound robot | |
CN107507167B (en) | Cargo tray detection method and system based on point cloud plane contour matching | |
Geiger et al. | Automatic camera and range sensor calibration using a single shot | |
CN109903313B (en) | Real-time pose tracking method based on target three-dimensional model | |
Azad et al. | Stereo-based 6D object localization for grasping with humanoid robot systems
CN104835173B (en) | Localization method based on machine vision
Romero-Ramirez et al. | Fractal markers: A new approach for long-range marker pose estimation under occlusion
CN115791822A (en) | Visual detection algorithm and detection system for wafer surface defects | |
CN108007388A | High-precision online turntable-angle measurement method based on machine vision
CN109215016B (en) | Identification and positioning method for coding mark | |
CN111627072A (en) | Method and device for calibrating multiple sensors and storage medium | |
CN107292869B (en) | Image speckle detection method based on anisotropic Gaussian kernel and gradient search | |
CN107766859A | Method and device for positioning a mobile robot, and mobile robot
CN113705268B (en) | Two-dimensional code positioning method and system | |
CN108717709A (en) | Image processing system and image processing method | |
CN110096920A | High-precision, high-speed positioning marker and localization method for visual servoing
Urban et al. | Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds | |
CN109784250A | Localization method and device for an automated guided vehicle
CN106679671A (en) | Navigation marking graph recognition method based on laser data | |
CN103824275A (en) | System and method for finding saddle point-like structures in an image and determining information from the same | |
CN112257721A (en) | Image target region matching method based on Fast ICP | |
CN115685160A (en) | Target-based laser radar and camera calibration method, system and electronic equipment | |
CN116843748B (en) | Remote two-dimensional code and object space pose acquisition method and system thereof | |
Li et al. | Vision-based target detection and positioning approach for underwater robots | |
CN117496401A (en) | Full-automatic identification and tracking method for oval target points of video measurement image sequences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: A 2D Marker-based visual positioning method and system, and composite robot
Effective date of registration: 2023-08-28
Granted publication date: 2023-04-28
Pledgee: Bank of Communications Ltd. Shanghai New District Branch
Pledgor: Shanghai Xiangong Intelligent Technology Co.,Ltd.
Registration number: Y2023310000491