AU2019222802B2 - High-precision and high-speed positioning label and positioning method for visual servo - Google Patents


Info

Publication number
AU2019222802B2
Authority
AU
Australia
Prior art keywords
white
positioning label
label
black
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2019222802A
Other versions
AU2019222802A1 (en)
Inventor
Fulong Chen
Linpeng Peng
Shuailong Qu
Fang WENG
Yongsheng Zhao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Binhai Industrial Technology Research Institute of Zhejiang University
Original Assignee
Binhai Industrial Technology Research Institute of Zhejiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Binhai Industrial Technology Research Institute of Zhejiang University filed Critical Binhai Industrial Technology Research Institute of Zhejiang University
Publication of AU2019222802A1
Application granted
Publication of AU2019222802B2
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1452Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a high-precision and high-speed positioning method for visual servoing based on a square positioning label. The positioning label comprises an outer part that is a rectangular frame-shaped marking whose four straight side members are of a generally black colour and of equal width, with a white inner marking on each side member. The inner white parts comprise four white spacing bars respectively arranged inside the outer frame side members, so that each side member has a "black-white-black" width ratio of 3:2:3. A bottom end of the white spacing bar in the right frame side member and a right end of the white spacing bar in the lower frame side member are in communication. The rectangular frame-shaped marking encloses at least five circular labels. The disclosure realizes estimation of the pose of the positioning label by scanning, calculating and decoding the positioning label. The positioning label has both positioning and encoding capabilities, is more robust and accurate in visual detection and recognition, and has the advantages of accurate pose estimation, low time consumption and a wide application range.

Description

HIGH-PRECISION AND HIGH-SPEED POSITIONING METHOD FOR VISUAL SERVOING
TECHNICAL FIELD
[0001] The disclosure belongs to the technical field of positioning, and particularly relates to a high-precision and high-speed positioning method for visual servoing.
BACKGROUND OF THE PRESENT INVENTION
[0002] With the advancement of the Intelligent Manufacturing 2025 program, more and more industrial scenes and positions, such as warehousing, material handling, processing and assembly, replace humans with machines. In the process of movement and production work, a robot needs to accurately and quickly estimate the pose of itself and of a working object to satisfy the high-precision and high-quality production requirements of modern factories. A vision-based positioning method is generally realized by means of significant features on a target object or artificially arranged significant features. Firstly, a machine vision method is used to realize the accurate detection of the feature points. Then, the pose of the current camera is calculated by using the visual imaging principle and the physical sizes among the feature points. An optimization method is generally used to achieve the optimal estimation of the pose. In industrial production, the target object generally does not have significant features that can be used for pose estimation. Therefore, in practical application, pose estimation generally uses artificial mark points, and various artificial mark points have been produced, of which the QR code is currently the most widely used.
[0003] The QR code is a two-dimensional readable and encodable label extended from the one-dimensional barcode; it uses black and white rectangles to realize binary encoding, has powerful encoding capabilities and can carry a large amount of information. At the design stage, many significant features were added to the QR code to facilitate visual scanning and detection. For example, special nested black and white rectangular marks are arranged on three vertexes of the square QR code, with a width ratio of 1:3:1, to facilitate quick and robust retrieval by a visual algorithm. At present, mainstream QR code-based positioning methods work by finding these special mark points at the QR code vertexes. QR code positioning has the advantages of convenience, wide application, easily acquired material, and both positioning and encoding capabilities. However, it also has obvious disadvantages. Firstly, the special artificial marks at the vertexes of the QR code are designed for the convenience of robust detection by the visual algorithm; their positional accuracy is not high, and the internal black and white rectangles are out of order due to encoding needs and cannot be used for positioning, resulting in low positioning accuracy of the QR code. Secondly, the detection and decoding of the black and white rectangles inside the QR code consume a long time, so the speed of QR code positioning is limited and cannot satisfy real-time visual servoing needs in specific scenes. Thirdly, the vertex features used for detection account for only a small part of the QR code, so when the QR code is relatively small in the field of view of a camera, detection failure is frequent.
SUMMARY OF PRESENT INVENTION
[0004] In view of this, the disclosure aims at proposing a high-precision and high-speed positioning method for visual servoing, so as to solve the problems of low positioning accuracy, long time consumption, incapability of satisfying real-time visual servoing needs in specific scenes, and high detection failure of positioning labels in the prior art.
[0005] To achieve the above purpose, the technical solutions of the disclosure are as follows:
[0006] A high-precision and high-speed positioning method for visual servoing based on a square positioning label is proposed. The square positioning label comprises: an outer part that is a rectangular frame-shaped marking with straight outer side members of a generally black colour, and with a white inner marking on each straight side member; widths of the four sides of the rectangular frame-shaped marking are equal; the inner white parts comprise four white spacing bars that are respectively arranged inside the outer frame side members of the rectangular frame-shaped marking, wherein a "black-white-black" width ratio of 3:2:3 defines the white inner part with respect to the black outer parts of each of the outer frame side members; a bottom end of the white spacing bar in the right frame side member and a right end of the white spacing bar in the lower frame side member are in communication; the rectangular frame-shaped marking encloses at least five circular labels; the circular labels comprise a ring, a solid circle, and a crosshair; an inner circle radius of the ring is equal to the radius of the solid circle; the ring and the solid circle are concentric; the crosshair is at the center of the circle; in a visual servoing scene of a black and white camera, the circular label is a gray scale image, the gray scale value of the ring is 0, and the gray scale value of the solid circle is 255 or 128, wherein 255 represents 0 in the binary system and 128 represents 1 in the binary system; in a visual servoing scene of a color camera, the circular label is an RGB image, the RGB value of the ring is (72, 116, 193), and the RGB value of the solid circle is (114, 172, 77) or (255, 255, 255), wherein (114, 172, 77) represents 1 in the binary system and (255, 255, 255) represents 0 in the binary system; and the positioning method comprises the following steps:
[0007] S1: respectively downsampling the length and the width of an original picture to 1/2 of the original size, i.e., the sampled picture has 1/4 the pixels of the original picture;
[0008] S2: converting the picture from a three-channel RGB image to a single-channel Gray image;
[0009] S3: binarizing the image;
[0010] S4: conducting morphological processing on the image;
[0011] S5: searching for a suspected feature point of the frame through progressive scanning and verifying whether the suspected feature point is a target feature point;
[0012] S6: acquiring and optimizing a location of the positioning label;
[0013] S7: detecting corners of the positioning label;
[0014] S8: adjusting a sequence of the corners of the positioning label;
[0015] S9: acquiring a center point of the circular label;
[0016] S10: acquiring encoding information of the positioning label, and acquiring a code of the positioning label according to a binary definition of the circular label; and,
[0017] S11: estimating a pose of the positioning label.
[0018] Advantageously, in the step S3, image binarization is conducted by an adaptive threshold algorithm, and comprises the following steps:
[0019] S301: calculating a gray histogram of the Gray image;
[0020] S302: normalizing the histogram;
[0021] S303: finding the location of the maximum value in the histogram, expressed as index_hist-max; and finding the location of the first non-zero value in the histogram, expressed as index_non-zero;
[0022] S304: extracting a gray histogram in the right interval (index_hist-max + 4, 2^8 - index_non-zero) when (index_hist-max - index_non-zero) < 8; and extracting a gray histogram in the left interval (0, index_hist-max * 0.5) when (index_hist-max - index_non-zero) >= 8;
[0023] S305: recalculating the location of the maximum value in the histogram, expressed as index_new; and
[0024] S306: calculating binarization thresholds according to nonlinear formulas corresponding to the right interval and the left interval, wherein the nonlinear formulas are:
[0025] for the right interval:
threshold = index_hist-max * 6.0 + index_new * 2.0 - index_new^2 * 0.1 + (16.0 + 5 * index_hist-max)/(20.0 + index_new - index_hist-max) + 5.0/(index_hist-max + index_new);
[0026] for the left interval:
threshold = index_hist-max * 0.35 + index_new * 4.5 + index_hist-max^2 * 0.125 + 6.0/(index_hist-max - index_new) + (0.5 * index_new)/(index_hist-max + index_new).
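As an illustrative sketch only (not part of the claimed method), the threshold selection of steps S301-S306 can be written in Python roughly as follows. The two nonlinear formulas follow one plausible reading of the garbled equations in the text, so the exact coefficients and groupings should be treated as assumptions:

```python
import numpy as np

def adaptive_threshold(gray):
    """Histogram-based threshold selection, sketching steps S301-S306.

    The nonlinear formulas below are an assumed reconstruction of the
    patent's equations and are illustrative only.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))  # S301
    hist = hist / hist.sum()                                # S302: normalize
    idx_max = int(np.argmax(hist))                          # S303: peak bin
    idx_nonzero = int(np.flatnonzero(hist)[0])              # S303: first non-zero bin

    if idx_max - idx_nonzero < 8:                           # S304: right interval
        lo, hi = idx_max + 4, 256 - idx_nonzero
        idx_new = lo + int(np.argmax(hist[lo:hi]))          # S305
        threshold = (idx_max * 6.0 + idx_new * 2.0 - idx_new**2 * 0.1
                     + (16.0 + 5 * idx_max) / (20.0 + idx_new - idx_max)
                     + 5.0 / (idx_max + idx_new))           # S306 (right)
    else:                                                   # S304: left interval
        lo, hi = 0, int(idx_max * 0.5)
        idx_new = lo + int(np.argmax(hist[lo:hi]))          # S305
        threshold = (idx_max * 0.35 + idx_new * 4.5 + idx_max**2 * 0.125
                     + 6.0 / (idx_max - idx_new)
                     + 0.5 * idx_new / (idx_max + idx_new)) # S306 (left)
    return threshold
```

The returned scalar would then be used as the cut-off for binarizing the gray image in step S3.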
[0027] Advantageously, the progressive scanning in the step S5 comprises the following steps:
[0028] S501: determining that the current pixel is a point on the outer boundary of the black frame in the outer circle if the pixel value changes from 255 to 0 between the current pixel and the next adjacent pixel;
[0029] S502: determining that the current pixel is a point on a boundary between the black frame in the outer circle and the white frame in a middle circle if the pixel value of the current pixel and the next adjacent pixel is changed from 0 to 255 after the step S501 is satisfied;
[0030] S503: determining that the current pixel is a point on a boundary between the white frame in the middle circle and the black frame in an inner circle if the pixel value of the current pixel and the next adjacent pixel is changed from 255 to 0 after the step S502 is satisfied;
[0031] S504: determining that the current pixel is a point on an inner boundary of the black frame in the inner circle if the pixel value of the current pixel and the next adjacent pixel is changed from 0 to 255 after the step S503 is satisfied;
[0032] S505: calculating a width of the black-white-black frame after the step S504 is satisfied; and verifying whether the suspected feature point is the target feature point according to the width ratio 3:2:3 of the "black-white-black" of the positioning label;
[0033] S506: using the currently found point as the feature point of the left frame if the width ratio satisfies the condition, and repeating the steps S501-S505; using the currently found point as the feature point of the right frame if another feature point having a width ratio satisfying the condition is found; regarding the previously found feature point of the left frame as an interference point if no such feature point is found, continuing to repeat the steps S501-S505, and finding the feature point of the left frame again;
[0034] S507: determining that the feature points are interference points if the width ratio does not satisfy the condition, continuing to repeat the steps S501-S505; and finding the feature point of the left frame again;
[0035] S508: continuing to scan a next row of pixels according to a same method after a row of pixels is scanned until an entire image is scanned; and
[0036] S509: rotating the image by 90° and then scanning the pixels again.
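The run-width check at the heart of steps S501-S505 can be sketched as a run-length scan over one binarized row. This is an illustrative reconstruction, not the patented implementation; the tolerance parameter `tol` is an assumption:

```python
def find_candidate_features(row, tol=0.5):
    """Scan one binarized row (values 0/255) for black-white-black runs
    whose widths are close to the label's 3:2:3 ratio (steps S501-S505).

    Returns the start index of each matching black run.  Sketch only.
    """
    # Run-length encode the row as (value, width, start) triples.
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((row[start], i - start, start))
            start = i
    hits = []
    for k in range(len(runs) - 2):
        (v1, w1, s1), (v2, w2, _), (v3, w3, _) = runs[k:k + 3]
        if v1 == 0 and v2 == 255 and v3 == 0:     # black-white-black
            unit = (w1 + w2 + w3) / 8.0           # 3 + 2 + 3 units in total
            if (abs(w1 - 3 * unit) <= tol * unit and
                abs(w2 - 2 * unit) <= tol * unit and
                abs(w3 - 3 * unit) <= tol * unit):
                hits.append(s1)
    return hits
```

A full implementation would repeat this per row (and again after the 90° rotation of S509) and then verify left/right pairing as in S506-S508.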
[0037] Advantageously, the acquiring and optimizing the location of the positioning label in the step S6 comprises: respectively obtaining a minimum bounding outer rotating rectangle and a minimum bounding inner rotating rectangle according to the feature points of the black frame in the outer circle and the feature points of the black frame in the inner circle; dividing the corresponding feature points into four groups according to the four sides of the obtained rotating rectangle; linearly fitting each group of feature points; taking the four straight lines obtained by fitting as the four sides of a new rotating rectangle and solving their intersection points, that is, the vertexes of the rotating rectangle; and repeating the above process until the locations of the vertexes of the rotating rectangle are stable.
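One refinement pass of step S6 — fitting a line to each group of side points and intersecting adjacent lines to obtain new vertexes — can be sketched as follows. The total-least-squares fit via SVD is an assumed implementation detail, not stated in the text:

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line through 2-D points: returns (point, direction)."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)
    # Principal direction of the centred points is the line direction.
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def intersect(l1, l2):
    """Intersection of two (point, direction) lines: solve p1 + t*d1 = p2 + s*d2."""
    (p1, d1), (p2, d2) = l1, l2
    a = np.array([d1, -d2]).T
    t, _ = np.linalg.solve(a, p2 - p1)
    return p1 + t * d1

def rectangle_vertices(side_groups):
    """Fit a line to each of the four side-point groups (in cyclic order)
    and return the four corner intersections -- one refinement pass of S6."""
    lines = [fit_line(g) for g in side_groups]
    return [intersect(lines[i], lines[(i + 1) % 4]) for i in range(4)]
```

In the method of the patent this pass would be iterated, re-assigning feature points to the new sides each time, until the vertexes stop moving.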
[0038] Advantageously, the detecting corners of the positioning label in the step S7 comprises: detecting the corners of the positioning label between corresponding vertexes of the outer rotating rectangle and the inner rotating rectangle; extracting a region of interest of the corners of the positioning label according to the locations of the optimized inner rotating rectangle and outer rotating rectangle; extracting the corners in the region of interest; and further optimizing an accuracy of the corners of the positioning label.
[0039] Advantageously, the acquiring the center point of the circular label in the step S9 comprises:
[0040] S901: calculating an affine transformation matrix from a standard positioning label template to the current positioning label according to the coordinate P_template of an outer corner of the standard positioning label template and the coordinate P_current of the corresponding outer corner of the current positioning label, with a formula of P_current = M_affine * P_template, wherein M_affine is a 2x3 matrix;
[0041] S902: projecting the center point of the circular label of the standard positioning label template to the current positioning label as a seed pixel by using the affine transformation matrix;
[0042] S903: detecting the solid circle of the circular label by using a FloodFill algorithm, and calculating the circle center of the solid circle as the center point of the circular label; and
[0043] S904: recalculating the affine transformation matrix by using the obtained center point of the circular label as reference information, and repeating the above steps until a location of the center point of the circular label converges.
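Step S901's affine estimate can be sketched with a least-squares solve. The function names are illustrative, and using least squares rather than an exact three-point solve is an assumption that also accommodates more than three correspondences:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the 2x3 matrix M_affine with P_current = M_affine @ [P_template; 1]
    from (at least) three corner correspondences, as in step S901."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    a = np.hstack([src, np.ones((len(src), 1))])   # homogeneous template points
    m, *_ = np.linalg.lstsq(a, dst, rcond=None)    # least squares for >= 3 points
    return m.T                                     # shape (2, 3)

def project(m, pts):
    """Step S902: map template circle centres into the current image."""
    pts = np.asarray(pts, float)
    return (m[:, :2] @ pts.T + m[:, 2:3]).T
```

In the iteration of S903-S904, the projected centres would seed a flood fill on the solid circles, and the recovered centres would feed back into `affine_from_points` until convergence.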
[0044] Advantageously, the estimating the pose of the positioning label in the step S11 comprises:
[0045] S1101: loading the pre-measured pixel coordinate P_pixel of the corners of the positioning label and the center point of the circular label, and obtaining the three-dimensional coordinate P_real of the corners of the positioning label and the center point of the circular label in a world coordinate system according to their real physical sizes;
[0046] S1102: building a relationship between the pixel coordinate and the real three-dimensional physical coordinate according to a camera imaging model and projection transformation; and
[0047] S1103: optimally solving the pose of the positioning label which conforms to the current observation by using the solvePnP algorithm.
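The camera-model relationship of step S1102 can be made concrete with a small forward-projection sketch. The intrinsic matrix, pose and label size below are hypothetical values chosen only for illustration; solvePnP (step S1103) solves the inverse of this model to recover the pose:

```python
import numpy as np

def project_points(p_real, rot, trans, k):
    """Pinhole projection relationship of step S1102: p_pixel ~ K [R | t] P_real.

    Only the forward direction is sketched here; solvePnP inverts it.
    """
    cam = rot @ p_real.T + trans.reshape(3, 1)   # world -> camera frame
    uvw = k @ cam                                # camera -> image plane
    return (uvw[:2] / uvw[2]).T                  # perspective divide

# Hypothetical intrinsics and pose, for illustration only.
k = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
rot = np.eye(3)                    # label parallel to the image plane
trans = np.array([0., 0., 2.])     # 2 m in front of the camera
corners = np.array([[-0.05, -0.05, 0.],
                    [0.05, -0.05, 0.]])   # two corners of a 10 cm label
```

With these values the two corners land at pixels (300, 220) and (340, 220); feeding such pixel observations and the known physical coordinates into solvePnP recovers (R, t).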
[0048] Compared with the prior art, the high-precision and high-speed positioning method for visual servoing in the disclosure has the following advantages:
[0049] (1) Different from the QR code detection algorithm, the visual detection features appear not only at the vertexes of the positioning label but also all around its frame, which effectively increases the effective detection distance of the positioning label.
[0050] (2) The positioning label has both positioning and encoding capabilities, which effectively improves its accuracy. Especially in scenes with large distortion, the positioning accuracy of the positioning label is significantly improved compared with a two-dimensional code of the same size: it is measured that the positioning accuracy reaches 0.2 mm, while the positioning accuracy of the two-dimensional code is about 1 mm.
[0051] (3) Compared with a two-dimensional code type positioning label, the present positioning label has a significant improvement in detection speed, which better satisfies the need of high-speed visual servoing: it is measured that the time consumed for detection is 8 ms, while that of a two-dimensional code of the same size is 30 ms.
DESCRIPTION OF THE DRAWINGS
[0052] The drawings which form part of the disclosure are intended to provide further understanding for the disclosure. Schematic embodiments of the disclosure and the illustration thereof are intended to explain the disclosure and do not form improper limitation to the disclosure. In the drawings:
[0053] FIG. 1 is a schematic diagram of a positioning label according to an embodiment of the disclosure;
[0054] FIG. 2 is a flow chart of a positioning method according to an embodiment of the disclosure; and
[0055] FIG. 3 is an algorithm flow chart of progressive scanning according to an embodiment of the disclosure.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0056] It should be explained that if there is no conflict, the embodiments in the disclosure and the features in the embodiments can be mutually combined.
[0057] The disclosure will be described in detail below with reference to the drawings and in conjunction with the embodiments.
[0058] As shown in FIG. 1, a positioning label for high-precision and high-speed visual servoing is provided. The positioning label is square, an inner part is in a white color, and an outer part is a rectangular frame with a black color. The widths of the four sides of the rectangular frame are equal. Four white spacing bars are respectively arranged in an upper frame, a lower frame, a left frame and a right frame of the rectangular frame, and the width ratios of "black-white-black" of the upper frame, the lower frame, the left frame and the right frame are 3:2:3. The bottom end of the white spacing bar in the right frame and the right end of the white spacing bar in the lower frame are in communication. The rectangular frame comprises N circular labels, and N > 4. The circular labels comprise a ring, a solid circle, and a crosshair. The inner circle radius of the ring is equal to the radius of the solid circle, and the ring and the solid circle are concentric. It should be noted that the crosshair is used to test the accuracy of the visual detection method, and the positioning label may be provided without the crosshair in practical application. In a visual servoing scene of a black and white camera, the circular label is a gray scale image, and the gray scale value of the ring is 0. The gray scale value of the solid circle is 255 or 128; 255 represents 0 in the binary system, and 128 represents 1 in the binary system. In a visual servoing scene of a color camera, the circular label is an RGB image, and the RGB value of the ring is (72, 116, 193); the RGB value of the solid circle is (114, 172, 77) or (255, 255, 255), wherein (114, 172, 77) represents 1 in the binary system and (255, 255, 255) represents 0 in the binary system. In the present embodiment, the positioning label comprises 23 circular labels arranged in 5 rows. The number of the circular labels in each row is 5, 4, 5, 4, 5 in sequence, which can represent 2^23 pieces of encoding information, so as to satisfy the encoding needs of most visual servoing applications.
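The 23-circle code of this embodiment maps directly to a 23-bit integer. A minimal sketch follows; the row-major bit order is an assumption, since the text does not state the ordering:

```python
def decode_label(bits):
    """Pack the 23 circle bits (assumed row-major over rows of 5, 4, 5, 4, 5)
    into a single integer code -- 2**23 distinct codes, per the embodiment."""
    assert len(bits) == 23, "the embodiment uses exactly 23 circular labels"
    code = 0
    for b in bits:
        code = (code << 1) | b   # append the next circle's bit
    return code
```

For example, a label whose circles are all "1" decodes to 2^23 - 1 = 8388607, the largest of the 8388608 possible codes.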
[0059] As shown in FIG. 2, a high-precision and high-speed positioning method for visual servoing based on the above positioning label comprises the following steps:
[0060] S1: respectively downsampling the length and the width of an original picture to 1/2 of the original size, i.e., the sampled picture has 1/4 the pixels of the original picture;
[0061] S2: converting the picture from a three-channel RGB image to a single-channel Gray image;
[0062] S3: binarizing the image;
[0063] S4: conducting morphological processing on the image;
[0064] S5: searching for a suspected feature point of the frame through progressive scanning and verifying whether the suspected feature point is a target feature point;
[0065] S6: acquiring and optimizing a location of the positioning label;
[0066] S7: detecting corners of the positioning label;
[0067] S8: adjusting a sequence of the corners of the positioning label;
[0068] S9: acquiring a center point of the circular label;
[0069] S10: acquiring encoding information of the positioning label, and acquiring a code of the positioning label according to a binary definition of the circular label; and,
[0070] S11: estimating a pose of the positioning label.
[0071] Advantageously, in the step S3, image binarization is conducted by an adaptive threshold algorithm, and comprises the following steps:
[0072] S301: calculating a gray histogram of the Gray image;
[0073] S302: normalizing the histogram;
[0074] S303: finding the location of the maximum value in the histogram, expressed as index_hist-max; and finding the location of the first non-zero value in the histogram, expressed as index_non-zero;
[0075] S304: extracting a gray histogram in the right interval (index_hist-max + 4, 2^8 - index_non-zero) when (index_hist-max - index_non-zero) < 8; and extracting a gray histogram in the left interval (0, index_hist-max * 0.5) when (index_hist-max - index_non-zero) >= 8;
[0076] S305: recalculating the location of the maximum value in the histogram, expressed as index_new; and
[0077] S306: calculating binarization thresholds according to nonlinear formulas corresponding to the right interval and the left interval, wherein the nonlinear formulas are:
[0078] for the right interval:
threshold = index_hist-max * 6.0 + index_new * 2.0 - index_new^2 * 0.1 + (16.0 + 5 * index_hist-max)/(20.0 + index_new - index_hist-max) + 5.0/(index_hist-max + index_new);
[0079] for the left interval:
threshold = index_hist-max * 0.35 + index_new * 4.5 + index_hist-max^2 * 0.125 + 6.0/(index_hist-max - index_new) + (0.5 * index_new)/(index_hist-max + index_new).
[0080] In the step S4, the morphological processing comprises dilation and erosion of the binarized image. The size of the rectangular kernel used is 5 x 5, which further eliminates the influence of noise on the binarization and improves the binarization effect.
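Step S4's dilation and erosion with a 5 x 5 rectangular kernel can be sketched naively in NumPy. In practice an optimized library routine (e.g. OpenCV's morphology functions) would be used; this slow reference version is for illustration only:

```python
import numpy as np

def morph(img, size=5, op="erode"):
    """Naive erosion/dilation with a size x size rectangular kernel,
    mimicking step S4's noise removal on the binarized (0/255) image."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + size, x:x + size]     # kernel neighbourhood
            out[y, x] = win.min() if op == "erode" else win.max()
    return out
```

Erosion removes isolated white noise pixels; dilation restores the surviving regions, so an erode-then-dilate pass (an opening) cleans the binarized label image before scanning.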
[0081] The progressive scanning in the step S5 comprises the following steps:
[0082] S501: determining that the current pixel is a point on the outer boundary of the black frame in the outer circle if the pixel value changes from 255 to 0 between the current pixel and the next adjacent pixel;
[0083] S502: determining that the current pixel is a point on a boundary between the black frame in the outer circle and the white frame in a middle circle if the pixel value of the current pixel and the next adjacent pixel is changed from 0 to 255 after the step S501 is satisfied;
[0084] S503: determining that the current pixel is a point on a boundary between the white frame in the middle circle and the black frame in an inner circle if the pixel value of the current pixel and the next adjacent pixel is changed from 255 to 0 after the step S501 and the step S502 are satisfied;
[0085] S504: determining that the current pixel is a point on an inner boundary of the black frame in the inner circle if the pixel value of the current pixel and the next adjacent pixel is changed from 0 to 255 after the step S501 to the step S503 are satisfied;
[0086] S505: calculating the width of the black-white-black frame after the step S501 to the step S504 are satisfied; and verifying the width ratio;
[0087] S506: using the currently found point as the feature point of the left frame if the width ratio satisfies the condition, and repeating the steps S501-S505; using the currently found point as the feature point of the right frame if another feature point having a width ratio satisfying the condition is found; regarding the previously found feature point of the left frame as an interference point if no such feature point is found; continuing to repeat the steps S501-S505; and finding the feature point of the left frame again;
[0088] S507: determining that the feature points are interference points if the width ratio does not satisfy the condition; continuing to repeat the steps S501-S505; and finding the feature point of the left frame again;
[0089] S508: adding two groups of target feature points to a list after finding and confirming the two groups of target feature points on one row; and continuing to scan the next row of pixels according to the same method until the entire image is scanned; and
[0090] S509: considering that the positioning label may be tilted at a large angle on the image, rotating the image by 90° and then scanning the pixels again.
[0091] In order to accelerate the scanning speed, one row can be scanned at an interval of N rows within a reasonable range.
[0092] FIG. 3 is an algorithm flow chart of progressive scanning, wherein the initialization comprises:
[0093] setting the image height to H; setting the image width to W; setting flag Flag-rotate=0; setting flag Flag-black-outer=0; setting flag Flag-black-white=0; setting flag Flag-white-black=0; setting flag Flag-black-inner=0; setting flag Flag-left-feature=0; setting flag Flag-right-feature=0; defining the list of the feature points on the left as List-left; defining the list of the feature points on the right as List-right; defining the list of the feature points on the top as List-top; defining the list of the feature points on the bottom as List-bottom; defining the temporary feature point on the left as feature-tmp; and defining the pixel value of the image as I.
[0094] The acquiring the location of the positioning label in the step S6 comprises:
[0095] conducting progressive scanning to obtain the feature points on the black-white-black frame of the positioning label; dividing the feature points into feature points of the black frame in the outer circle and feature points of the black frame in the inner circle; taking the feature points of the black frame in the outer circle and the feature points of the black frame in the inner circle as inputs respectively; firstly using the method proposed by Sklansky in 1982 to find the minimum convex polygon bounding the feature points; then traversing the vertexes of the convex polygon, searching for the topmost, bottommost, leftmost and rightmost vertexes, and calculating the directions to their adjacent points; and finally determining the direction and size of a minimum bounding rectangle according to the extreme points in the four directions and the directions to the adjacent points, to obtain a minimum bounding outer rotating rectangle and a minimum bounding inner rotating rectangle.
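The hull-then-minimum-bounding-rectangle step can be sketched in a self-contained way as follows. Andrew's monotone chain is used here because it is compact; it produces the same hull as the Sklansky-style scan the text names, and in an OpenCV pipeline `cv2.convexHull` plus `cv2.minAreaRect` would do the same job. All function names below are illustrative:

```python
import math

def convex_hull(points):
    """Andrew's monotone-chain convex hull over (x, y) tuples,
    returned in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_area_rect(points):
    """Minimum bounding rotated rectangle: the optimal rectangle is
    aligned with some hull edge, so try each edge direction, project
    the hull onto it and its normal, and keep the smallest area."""
    hull = convex_hull(points)
    best = None
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        theta = math.atan2(y1 - y0, x1 - x0)
        c, s = math.cos(theta), math.sin(theta)
        us = [ x*c + y*s for x, y in hull]
        vs = [-x*s + y*c for x, y in hull]
        w, h = max(us) - min(us), max(vs) - min(vs)
        if best is None or w*h < best[0]:
            best = (w*h, theta, w, h)
    return best[1:]  # (angle, width, height)
```

For instance, the points `[(0,0), (4,0), (4,4), (0,4), (2,2)]` yield the hull `[(0,0), (4,0), (4,4), (0,4)]` and a minimum bounding rectangle of area 16.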
[0096] The optimizing the location of the positioning label comprises: dividing the corresponding feature points into four groups according to the four sides of the obtained rotating rectangle; linearly fitting each group of feature points; solving the intersection points of adjacent fitted straight lines, so that the four straight lines obtained by fitting become the four sides of a new rotating rectangle and the intersection points become its vertexes; and repeating the above process until the locations of the vertexes of the rotating rectangle are stable.
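One iteration of this fit-and-intersect refinement can be sketched with a total-least-squares line fit per side (SVD, so vertical sides pose no problem). The helper name and the input convention — four point groups ordered so that consecutive sides meet at a corner — are assumptions for illustration:

```python
import numpy as np

def refine_rect(side_points):
    """side_points: four groups of (x, y) feature points, one group per
    side of the rotating rectangle, ordered so consecutive sides share a
    corner.  Fit a total-least-squares line to each group and intersect
    adjacent lines to obtain the refined vertexes."""
    lines = []
    for pts in side_points:
        pts = np.asarray(pts, dtype=float)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)  # vt[0] = line direction
        d = vt[0]
        nrm = np.array([-d[1], d[0]])             # unit normal
        lines.append((nrm, nrm @ centroid))       # line: nrm . x = c
    vertexes = []
    for i in range(4):
        (n1, c1), (n2, c2) = lines[i], lines[(i + 1) % 4]
        vertexes.append(np.linalg.solve(np.stack([n1, n2]),
                                        np.array([c1, c2])))
    return np.array(vertexes)
```

Repeating this with the feature points regrouped against the new sides, until the vertexes stop moving, matches the iteration described above.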
[0097] The detecting the corners of the positioning label in the step S7 comprises: detecting the corners of the positioning label between corresponding vertexes of the outer rotating rectangle and the inner rotating rectangle; to accelerate the corner detection and eliminate interference as much as possible, firstly extracting the region of interest of each corner of the positioning label according to the locations of the optimized inner rotating rectangle and outer rotating rectangle, wherein if the coordinates of a corresponding group of vertexes of the inner rotating rectangle and the outer rotating rectangle are (u_inner, v_inner) and (u_outer, v_outer), then the upper left vertex of the region of interest is (min(u_inner, u_outer) − abs(u_inner − u_outer), min(v_inner, v_outer) − abs(v_inner − v_outer)) and the size is (abs(u_inner − u_outer), abs(v_inner − v_outer)); extracting the corners in the region of interest by using the Harris corner operator; and then further optimizing the accuracy of the corners of the positioning label by using a sub-pixel optimization algorithm.
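The Harris response applied inside each region of interest can be sketched in pure NumPy. This is a minimal illustration under assumed window size and k value; a production pipeline would typically call OpenCV's `cv2.cornerHarris` followed by `cv2.cornerSubPix` for the sub-pixel refinement step:

```python
import numpy as np

def harris_response(img, k=0.05, r=2):
    """Harris corner response for a float grayscale patch, as would be
    applied inside each corner region of interest.  Gradients come from
    np.gradient and the structure tensor is smoothed with a (2r+1)^2
    box sum."""
    Iy, Ix = np.gradient(img.astype(float))   # axis 0 is the row (v) axis

    def box(a):
        # Box-filter by summing shifted copies (wrap-around at borders,
        # acceptable for a small ROI sketch).
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

On a patch containing one corner of a black-white boundary, the maximum of this response lands near the corner, while straight edges score negatively.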
[0098] In the step S8, the corner that connects two white spacing bars is defined as a starting corner, and the sequence of the corners of the positioning label is adjusted in a counterclockwise order.
[0099] The acquiring the center point of the circular label in the step S9 comprises:
[00100] S901: calculating an affine transformation matrix of a standard positioning label template to the current positioning label according to the coordinate P_template of an outer corner of the standard positioning label template and the coordinate P_current of the corresponding outer corner of the current positioning label, with a formula of P_current = M_affine · P_template, wherein M_affine is a 2×3 matrix;
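The affine relation P_current = M_affine · P_template can be estimated in least squares from the matched outer corners. This is an illustrative sketch (in OpenCV, `cv2.getAffineTransform` or `cv2.estimateAffine2D` would be the usual calls); the function name is assumed:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix M with dst ~= M @ [x, y, 1]^T,
    estimated from matched template/current corner coordinates."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])      # N x 3 homogeneous
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)       # solves A @ M = dst
    return M.T                                        # 2 x 3
```

With four or more corner correspondences the least-squares fit also averages out small localization noise on individual corners.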
[00101] S902: projecting the center point of the circular label of the standard positioning label template to the current positioning label as a seed pixel by using the affine transformation matrix;
[00102] S903: detecting the solid circle of the circular label by using a FloodFill algorithm, and calculating the circle center of the solid circle as the center point of the circular label; and
[00103] S904: recalculating the affine transformation matrix by using the obtained center point of the circular label as reference information, and repeating the above steps until the location of the center point of the circular label converges.
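The fill-and-centre step S903 can be sketched without OpenCV as a BFS flood fill; the text names the FloodFill algorithm (`cv2.floodFill` in OpenCV), and for a solid circle the centroid of the filled region coincides with the circle centre. The function name and the plain nested-list image representation are assumptions of this sketch:

```python
from collections import deque

def flood_centroid(img, seed):
    """4-connected BFS flood fill from the projected seed pixel over the
    solid circle's value; the centroid of the filled region approximates
    the circle centre (steps S902-S903)."""
    H, W = len(img), len(img[0])
    target = img[seed[0]][seed[1]]
    seen, q = {seed}, deque([seed])
    sy = sx = n = 0
    while q:
        y, x = q.popleft()
        sy, sx, n = sy + y, sx + x, n + 1
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in seen \
                    and img[ny][nx] == target:
                seen.add((ny, nx))
                q.append((ny, nx))
    return sy / n, sx / n
```

Recomputing the affine matrix from the recovered centres and repeating, as step S904 describes, drives the seed projection and the centre estimate to convergence.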
[00104] The estimating the pose of the positioning label in the step S11 comprises:
[00105] S1101: loading the pre-measured pixel coordinate P_pixel of the corners of the positioning label and the center point of the circular label, and obtaining the three-dimensional coordinate P_real of the corners of the positioning label and the center point of the circular label in a world coordinate system according to the real physical sizes;
[00106] S1102: building a relationship between a pixel coordinate and a real three-dimensional physical coordinate according to a camera imaging model and projection transformation; and
[00107] S1103: optimally solving the pose of the positioning label which conforms to the current observation by using a SolvePNP algorithm.
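The camera-model relationship of step S1102 that the PnP solver inverts is the pinhole projection; a minimal sketch is below. The K, R, t values in the usage example are illustrative, and in OpenCV `cv2.solvePnP` recovers the pose (R, t) from such 2D-3D correspondences:

```python
import numpy as np

def project(K, R, t, P_world):
    """Pinhole projection u ~ K (R P + t): the pixel / 3D-point
    relationship of step S1102 that a PnP solver inverts to recover
    the pose (R, t) of the positioning label."""
    P_cam = R @ np.asarray(P_world, dtype=float) + t
    uvw = K @ P_cam
    return uvw[:2] / uvw[2]  # perspective divide -> (u, v)
```

For example, with focal length 100, principal point (64, 48), identity rotation and the point at 2 units depth on the optical axis, the projection lands on the principal point.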
[00108] The above only describes preferred embodiments of the disclosure and is not intended to limit the disclosure. Any modification, equivalent replacement, improvement, etc. made within the spirit and the principle of the disclosure shall be included within the protection scope of the disclosure.
EDITORIAL NOTE
2019222802
Subject: Clerical error - page numbering.
Claim pages should be considered as pages 14-18 instead of pages 1-5 as the description ends at page 13.

Claims (1)

We claim:
1. A high-precision and high-speed positioning method for visual servoing based on a square-shaped positioning label, wherein the square-shaped positioning label comprises: an outer part that is a rectangular frame-shaped marking with straight line outer side members of a generally black colour, and with an inner part marking on each straight line side member of a white colour; wherein widths of the four sides of the rectangular frame-shaped marking are equal; wherein the respective inner white parts comprise four white spacing bars that are respectively arranged inside the outer frame side members of the rectangular frame-shaped marking, wherein a 3:2:3 width ratio of "black-white-black" defines the white inner part with respect to the black outer parts of each of the outer frame side members; wherein a bottom end of the white spacing bar in the right frame side member and a right end of the white spacing bar in the lower frame side member are in communication; wherein the rectangular frame-shaped marking encloses at least five circular labels; wherein the circular labels comprise a ring, a solid circle, and a crosshair; wherein an inner circle radius of the ring is equal to the radius of the solid circle; wherein the ring and the solid circle are concentric; wherein the crosshair is at the center of the circle; wherein in a visual servoing scene of a black and white camera, the circular label is a gray scale image, a gray scale value of the ring is 0, a gray scale value of the solid circle is 255 or 128, 255 represents 0 in a binary system, and 128 represents 1 in the binary system; wherein in a visual servoing scene of a color camera, the circular label is an RGB image, an RGB value of the ring is (72, 116, 193), an RGB value of the solid circle is (114, 172, 77) or (255, 255, 255), (114, 172, 77) represents 1 in the binary system, and (255, 255, 255) represents 0 in the binary system; and wherein the positioning method comprises the following steps:
S1: respectively downsampling a length and a width of an original picture to 1/2 of an original size, i.e., a sampled picture pixel count is 1/4 of the original picture pixel count; S2: converting the picture from a three-channel RGB image to a single-channel Gray image; S3: binarizing the image; S4: conducting morphological processing on the image; S5: searching for a suspected feature point of the frame through progressive scanning
AU2019222802A 2019-04-22 2019-08-26 High-precision and high-speed positioning label and positioning method for visual servo Active AU2019222802B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910323897.4A CN110096920B (en) 2019-04-22 2019-04-22 High-precision high-speed positioning label and positioning method for visual servo
CN201910323897.4 2019-04-22

Publications (2)

Publication Number Publication Date
AU2019222802A1 AU2019222802A1 (en) 2020-11-05
AU2019222802B2 true AU2019222802B2 (en) 2020-11-12






Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)