WO2022012337A1 - Motion arm system and control method - Google Patents

Motion arm system and control method

Info

Publication number
WO2022012337A1
WO2022012337A1 · PCT/CN2021/103719 · CN2021103719W
Authority
WO
WIPO (PCT)
Prior art keywords
marker
corner
image
suspected
moving arm
Prior art date
Application number
PCT/CN2021/103719
Other languages
English (en)
French (fr)
Inventor
徐凯
吴百波
杨皓哲
刘旭
Original Assignee
北京术锐技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京术锐技术有限公司
Publication of WO2022012337A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques

Definitions

  • the present disclosure relates to the field of robots, and in particular, to a motion arm system and a control method.
  • in existing moving-arm control methods of robot systems, the closed-loop in-position correction mainly determines the compensation applied by closed-loop control from correction data acquired on the in-position state of the moving joints of the moving arm.
  • This compensation method cannot solve in-position accuracy problems caused by errors of the equipment itself, and is especially inadequate for precise closed-loop control of the moving arm.
  • Existing spatial positioning devices usually use the identification of existing positioning marks to determine the spatial position of the object where the positioning marks are located.
  • there are many schemes for realizing such identification and control methods, but each has its own limitations.
  • in one scheme, a binocular camera is required for the spatial positioning of a marker ball, which requires more equipment.
  • in another scheme, markers with light-dark or black-white contrast are used: a marker likelihood is calculated for each pixel in the image, or an existing template is matched against all pixels within a certain range. This method easily misidentifies non-marker regions as markers, or involves a very large amount of computation, and is therefore unsuitable for a high-precision real-time closed-loop correction algorithm.
  • the present disclosure provides a control method of a moving arm, comprising: obtaining an image captured by an image acquisition device, the image including an image of a marker disposed on the end of the moving arm, the marker comprising a plurality of marker corners; identifying marker corners in the captured image; and determining, based on the identified marker corners, the current relative pose of the end of the moving arm relative to the image acquisition device.
  • the present disclosure provides a motion arm system, comprising: an image acquisition device for acquiring an image; at least one motion arm, including a motion arm end with a marker disposed on it, the marker comprising a plurality of marker corners; and a control device configured to perform a control method according to some embodiments of the present disclosure.
  • the present disclosure provides a computer-readable storage medium comprising one or more computer-executable instructions stored thereon which, when executed by a processor, configure the processor to perform a control method according to some embodiments of the present disclosure.
  • FIG. 1 shows a schematic structural diagram of a moving arm system according to some embodiments of the present disclosure
  • Figure 2(a) shows a schematic structural diagram of a moving arm according to some embodiments of the present disclosure
  • Figure 2(b) shows a schematic structural diagram of a marker according to some embodiments of the present disclosure
  • FIG. 3 shows an enlarged schematic view of the end of the moving arm in a moving state according to some embodiments of the present disclosure
  • FIG. 4 shows a flowchart of a control method of a moving arm system according to some embodiments of the present disclosure
  • FIG. 5 shows a flowchart of a control method for identifying marker corners in an image according to some embodiments of the present disclosure
  • FIG. 6 shows a schematic diagram of corner search for suspected markers according to some embodiments of the present disclosure
  • Figure 7 shows a flowchart of a method for determining the position of an end of a moving arm in a world coordinate system in accordance with some embodiments of the present disclosure
  • FIG. 8 shows a schematic structural diagram of a continuum flexible arm according to some embodiments of the present disclosure
  • FIG. 9 shows a schematic structural diagram of a single continuum segment of a continuum flexible arm according to some embodiments of the present disclosure.
  • the terms "installed", "connected to", "connected" and "coupled" should be understood in a broad sense: for example, a connection may be a fixed connection or a detachable connection; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal connection between two components.
  • the specific meanings of the above terms in the present disclosure can be understood according to specific situations.
  • the end close to the user (such as a doctor) is defined as proximal, rear or posterior, and the end close to the surgical patient is defined as distal, front or anterior.
  • "position" refers to the location of an object or a portion of an object in three-dimensional space (e.g., three translational degrees of freedom, which can be described using changes in the Cartesian X, Y, and Z coordinates, such as translation along the X-axis, Y-axis, and Z-axis, respectively).
  • "orientation" refers to the rotational placement of an object or a portion of an object (e.g., three rotational degrees of freedom, which may be described using roll, pitch, and yaw).
  • the term "pose" refers to the combination of the position and orientation of an object or a part of an object, which can be described, for example, using the six parameters of the six degrees of freedom mentioned above.
  • the pose of the moving arm or a part thereof refers to the pose of the coordinate system defined by the moving arm or a part thereof relative to the coordinate system defined by the support, the base where the moving arm is located, or the world coordinate system.
  • FIG. 1 shows a structural block diagram of a motion arm system 100 according to some embodiments of the present disclosure.
  • the moving arm system 100 may include an image capturing device 10 , at least one moving arm 20 and a control device 30 .
  • the image capturing device 10 and the at least one moving arm 20 are respectively connected in communication with the control device 30 .
  • the control device 30 may be used to control the movement of the at least one moving arm 20, to adjust its pose, coordinate multiple arms with each other, and the like.
  • at least one kinematic arm 20 may include a kinematic arm tip 21 at the tip or distal end.
  • At least one movement arm 20 may further include a movement arm body 22 , and the movement arm end 21 may be a distal end portion of the movement arm body 22 or an end effector disposed at the distal end of the movement arm body 22 .
  • the control device 30 can control the movement of the at least one moving arm 20 to move the end 21 of the moving arm to a desired position and posture.
  • the motion arm system 100 can be applied to a surgical robotic system, such as a laparoscopic surgical robotic system. It should be understood that the kinematic arm system 100 may also be applied to special purpose or general purpose robotic systems in other fields (eg, manufacturing, machinery, etc.).
  • the control device 30 may be connected in communication with the motor of the at least one moving arm 20, so that the motor controls the at least one moving arm 20 to move to a corresponding target pose based on the driving signal.
  • the motor that controls the movement of the moving arm can be a servo motor, which can be instructed by the control device to control the movement of the moving arm.
  • the control device 30 can also be connected in communication with a sensor coupled to the motor, for example, through a communication interface, so as to receive movement data of the moving arm 20 and monitor the movement state of the moving arm 20 .
  • the communication interface may be a CAN (Controller Area Network) bus communication interface, which enables the control device 30 to communicate with the motor and the sensor through the CAN bus.
  • the moving arm 20 may comprise a continuum flexible arm, or a moving arm composed of multiple joints with multiple degrees of freedom, for example, a moving arm capable of motion with 6 degrees of freedom.
  • the image capture device 10 may include, but is not limited to, a dual-lens image capture device or a single-lens image capture device, such as a binocular or monocular camera.
  • FIG. 2( a ) shows a schematic structural diagram of a moving arm 20 according to some embodiments of the present disclosure.
  • the moving arm 20 may include a moving arm end 21 and a moving arm body 22 , and a marker 211 is fixed on the moving arm end 21 .
  • Figure 2(b) shows a schematic structural diagram of the marker 211 according to some embodiments of the present disclosure.
  • the marker 211 may include a plurality of marker corners 2111 .
  • the marker 211 can be in the shape of a cylinder, and is fixedly covered on the end 21 of the moving arm.
  • the marker 211 may include a plurality of marker corners 2111 distributed on the cylinder (e.g., regularly distributed). It should be understood that the markers may be graphics in a known arrangement with distinct differences in brightness, grayscale, or hue. The marker corners can be points with distinctive features on the marker, and the graphics or colors in the neighborhood of a marker corner can be point-symmetrically distributed, so that the image acquisition device can capture the marker corners. In some embodiments, the plurality of marker corners are regularly distributed, which facilitates determining parameters of the marker corners, such as the distribution angle and the separation distance between the marker corners. In some embodiments, as shown in FIG. 2(b), the markers 211 may include, but are not limited to, black-and-white checkerboard markers, and the marker corners 2111 may be, for example, the intersections of two line segments in the checkerboard pattern.
  • FIG. 3 shows an enlarged schematic view of the end of the moving arm in a moving state according to some embodiments of the present disclosure.
  • the at least one moving arm may include, but is not limited to, two moving arms, each of which is provided with a marker at the end.
  • FIG. 4 shows a flowchart of a control method 400 of a motion arm system (eg, motion arm system 100 ) according to some embodiments of the present disclosure.
  • the method 400 may be performed by a control device (eg, the control device 30 ) of the motion arm system 100 .
  • the control device 30 may be configured on a computing device.
  • Method 400 may be implemented by software, firmware, and/or hardware.
  • an image captured by an image capturing device is obtained.
  • an image of the end of the moving arm may be captured by a dual-lens image capturing device or a single-lens image capturing device, and the captured images include images of markers on the end of the moving arm.
  • marker corners in the acquired image are identified.
  • the acquired images may be pre-processed by the control device to identify marker corners in the images.
  • An exemplary method of identifying marker corners in an image is detailed in the method shown in FIG. 5 .
  • the current relative pose of the end of the moving arm relative to the image capture device is determined.
  • the current pose of the marker relative to the image acquisition device may be determined based on the identified marker corners, and the current relative pose of the end of the moving arm may then be determined based on the current pose of the marker and the relative pose of the marker relative to the end of the moving arm. It should be understood that, since the marker is fixedly arranged on the end of the moving arm, the relative pose of the marker with respect to the end of the moving arm is known.
  • the method 500 may be performed by a control device (eg, control device 30 ) of a moving arm system (eg, moving arm system 100 ).
  • the control device 30 may be configured on a computing device.
  • Method 500 may be implemented by software, firmware, and/or hardware.
  • a region of interest is determined in the acquired image.
  • the ROI is determined to be the full image, or is determined to be a partial image based on the positions of suspected marker corners in the previous frame of the image (e.g., the image processed in the previous motion-control loop), and the ROI is converted into a corresponding grayscale image. For example, the whole image or a partial image can be cropped from the collected image as the ROI, and the ROI converted into a grayscale image to quantify the grayscale information of each pixel.
  • it may be determined that the ROI of the first frame of image is the full image based on the captured image being the first frame of image.
  • the ROI may be determined as a partial image based on the position of the corner point of the suspected marker in the previous frame image.
  • suspected marker corners are identified in the ROI. For example, the possibility that each pixel is a corner of the marker can be determined based on the grayscale information of each pixel, and a pixel with a high probability is determined as a corner of the suspected marker.
  • the partial image may cover a set distance range centered on a virtual point formed by the average coordinates of the suspected marker corners in the previous frame of the image, and the set distance may be a predetermined multiple of the average separation distance between the multiple suspected marker corners.
  • the predetermined multiple may include, but is not limited to, a fixed multiple, such as twice, of the average separation distance between the corner points of a plurality of suspected markers. It should be understood that the predetermined multiple can also be a variable multiple of the average separation distance between the corner points of a plurality of suspected markers.
  • a corner likelihood (CL) for each pixel in the ROI can be determined. Divide the ROI into multiple sub-ROIs, and determine the pixel with the largest CL value in each sub-ROI. Based on a plurality of pixel points with the largest CL values of the plurality of sub-ROIs, a set of pixel points with a CL value greater than a first threshold is determined. Determine the pixel point with the largest CL value in the pixel point set as the corner point of the first suspected marker. Based on the first suspected marker corner points, the second suspected marker corner points are searched along a plurality of edge directions with a set step size.
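The sub-ROI division, per-cell maximum, and first-threshold filtering described above can be sketched as follows; the function name, the plain Python loop, and the return format are illustrative choices, not from the patent:

```python
import numpy as np

def candidate_corners(cl, cell=5, first_threshold=0.06):
    """Divide the CL map into cell x cell sub-ROIs, keep the pixel with the
    largest CL value in each sub-ROI, and retain those whose CL exceeds the
    first threshold. Returns (row, col, CL) tuples sorted by descending CL."""
    h, w = cl.shape
    cands = []
    for r0 in range(0, h, cell):
        for c0 in range(0, w, cell):
            sub = cl[r0:r0 + cell, c0:c0 + cell]
            r, c = np.unravel_index(sub.argmax(), sub.shape)
            v = sub[r, c]
            if v > first_threshold:
                cands.append((r0 + r, c0 + c, v))
    # Descending CL order: the first entry is the first suspected marker corner.
    return sorted(cands, key=lambda t: -t[2])
```

The first candidate in the returned list plays the role of the first suspected marker corner; later candidates seed the next search cycles.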
  • a corner likelihood (CL) is determined for each pixel in the ROI. For example, a convolution operation is performed on each pixel in the range of the ROI image to obtain the first-order and/or second-order derivative of each pixel.
  • the corner likelihood (CL) of each pixel is obtained by using the first and/or second derivative of each pixel in the aforementioned ROI image range.
  • the CL of each pixel in the ROI image can be calculated according to the following formula:

    CL = max(c_xy, c_45)
    c_xy = τ²·|I_xy| − 1.5·τ·(|I_x| + |I_y|)
    c_45 = τ²·|I_45_45| − 1.5·τ·(|I_45| + |I_n45|)

  • where τ is a set constant;
  • I_x, I_45, I_y and I_n45 are the first derivatives of each pixel in the four directions 0, π/4, π/2 and −π/4, respectively;
  • I_xy and I_45_45 are the second derivatives of each pixel in the (0, π/2) and (π/4, −π/4) direction pairs, respectively;
  • c_xy is the CL value of each pixel in the 0, π/2 directions;
  • c_45 is the CL value of each pixel in the π/4 direction.
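A minimal NumPy sketch of the corner-likelihood computation follows. The finite-difference derivative kernels (via `np.gradient`), the approximation of the diagonal second derivative, and the clipping of CL at zero are assumptions of this sketch, not details from the patent:

```python
import numpy as np

def corner_likelihood(gray, tau=2.0):
    """Per-pixel corner likelihood CL = max(c_xy, c_45), clipped at zero.

    tau is the set constant; derivatives are simple central differences."""
    gray = gray.astype(float)
    # First derivatives in the 0 and pi/2 directions.
    I_y, I_x = np.gradient(gray)
    # First derivatives in the pi/4 and -pi/4 (diagonal) directions.
    I_45 = (I_x + I_y) / np.sqrt(2.0)
    I_n45 = (I_x - I_y) / np.sqrt(2.0)
    # Cross second derivatives (diagonal one approximated by averaging).
    I_xy = np.gradient(I_x, axis=0)
    I_45_45 = (np.gradient(I_45, axis=1) + np.gradient(I_45, axis=0)) / np.sqrt(2.0)
    c_xy = tau**2 * np.abs(I_xy) - 1.5 * tau * (np.abs(I_x) + np.abs(I_y))
    c_45 = tau**2 * np.abs(I_45_45) - 1.5 * tau * (np.abs(I_45) + np.abs(I_n45))
    return np.maximum(0.0, np.maximum(c_xy, c_45))
```

On a synthetic checkerboard corner, the CL map peaks at the saddle point between the black and white quadrants, while pure edges are suppressed by the first-derivative penalty terms.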
  • the ROI is divided into multiple sub-ROIs.
  • the non-maximum suppression method can be used, evenly dividing the ROI image range into multiple sub-ROIs.
  • the ROI image may be equally divided into multiple sub-ROIs of 5×5 pixels.
  • the above-mentioned embodiments are exemplary and non-limiting, and it should be understood that the ROI image may also be divided into multiple sub-ROIs of other sizes.
  • the pixel point with the largest CL value in each sub-ROI may be determined, and the pixel point with the largest CL value in each sub-ROI is compared with the first threshold to determine a set of pixel points whose CL value is greater than the first threshold.
  • the first threshold may be set to 0.06. It should be understood that the first threshold value can also be set to other values.
  • the pixel point with the largest CL value in the pixel point set is determined as the corner point of the first suspected marker. For example, all the pixels in the pixel set may be sorted in descending order of CL value, and the pixel with the largest CL value may be used as the first suspected marker corner.
  • a second suspected marker corner point is searched along a plurality of edge directions with a set step size.
  • FIG. 6 shows a schematic diagram of the suspected-marker corner search according to some embodiments of the present disclosure. As shown in FIG. 6, starting from the first suspected marker corner, second suspected marker corners are sequentially searched for along the four edge directions with a set step size. In some embodiments, the set step size may be 10 pixels.
  • the pixel with the largest CL value is sequentially searched in the set range in the edge direction as the second corner of the suspected marker.
  • the setting range may be a 10 ⁇ 10 square area.
  • the set step size may also be other number of pixels, and the set range may also be an area of other size.
  • four edge directions of the first suspected marker corner can be determined, and based on each edge direction, a moving search is performed with the set step size to find the pixel with the largest CL value within the set range as the second suspected marker corner.
  • the first predetermined range may be a 10 ⁇ 10 square area. In a 10 ⁇ 10 square pixel neighborhood centered on the corner of the first suspected marker, determine the gradient direction and gradient weight of each pixel to determine multiple edge directions of the corner of the first suspected marker. It should be understood that the corner point of the second suspected marker can also determine the edge direction by a similar method. By adjusting the edge direction based on the corners of the suspected markers, the corners of the markers can be searched in a targeted manner to reduce the amount of computation.
  • the gradient direction and gradient weight of each pixel in the first predetermined range can be calculated by the following formulas:

    I_angle = arctan2(I_y, I_x)
    I_weight = √(I_x² + I_y²)

  • where I_angle is the gradient direction of the corresponding pixel;
  • I_weight is the gradient weight of the corresponding pixel;
  • I_x and I_y are the first derivatives of the corresponding pixel in the 0 and π/2 directions, respectively.
  • the I angle and I weight of each pixel in the first predetermined range are calculated by a clustering method to obtain the edge direction of the corresponding pixel.
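The gradient-direction/gradient-weight computation and the clustering into edge directions can be sketched as below. The weighted orientation histogram with simple peak picking stands in for the clustering method; the neighborhood size, bin count, and peak-suppression width are assumed values:

```python
import numpy as np

def edge_directions(gray, center, half=5, bins=32):
    """Estimate the two dominant edge directions around a suspected corner.

    Pools I_angle = arctan2(I_y, I_x) (mod pi) weighted by
    I_weight = sqrt(I_x^2 + I_y^2) over a (2*half+1)^2 neighborhood."""
    r, c = center
    patch = gray[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy, gx) % np.pi      # orientation, folded to [0, pi)
    weight = np.hypot(gx, gy)               # gradient magnitude
    hist = np.zeros(bins)
    idx = (angle / np.pi * bins).astype(int) % bins
    np.add.at(hist, idx, weight)            # weighted orientation histogram
    a = hist.argmax()
    hist2 = hist.copy()                     # suppress bins near the first peak
    for d in range(-2, 3):
        hist2[(a + d) % bins] = 0.0
    b = hist2.argmax()
    return sorted([a * np.pi / bins, b * np.pi / bins])
```

For a checkerboard-style corner the two returned directions are roughly perpendicular, which is what the directed search along edges relies on.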
  • the edge direction may be adjusted based on the second suspected marker corner, and the third suspected marker corner may be searched for along the adjusted edge direction. For example, similarly to the above, the updated edge direction is determined from the gradient direction and gradient weight of each pixel in a predetermined range centered on the second suspected marker corner, as shown in FIG. 6.
  • a correlation test is performed on the corners of the suspected markers in the ROI to determine whether the corners of the suspected markers in the ROI image are marker corners.
  • the pixels of the first suspected marker corner and each searched suspected marker corner (for example, the second suspected marker corner) can be substituted into the standard corner model for a correlation judgment, so as to determine whether the suspected marker corner is a marker corner.
  • the corner detection method can combine corner likelihood estimation and template matching to determine the marker corners in the image (for example, when the image acquisition device is a binocular image acquisition device, the acquired images include a left-lens image and a right-lens image).
  • a correlation coefficient between the grayscale distribution of the standard corner model and the grayscale distribution of pixels in a second predetermined range centered on the pixel of the suspected marker corner is determined; in response to the correlation coefficient being greater than a third threshold, the suspected marker corner is determined to be a marker corner.
  • the marker may be a black and white checkerboard pattern
  • the standard corner model may be a hyperbolic tangent model (HTM).
  • whether a suspected marker corner is a marker corner can be determined based on the correlation coefficient CC between the grayscale distribution of a standard corner model (e.g., HTM) and the grayscale distribution of a predetermined-range pixel neighborhood centered on the pixel of the suspected marker corner.
  • the second predetermined range centered on the pixel point of the suspected marker corner point may include, but is not limited to, a 10 ⁇ 10 square pixel neighborhood.
  • the correlation coefficient CC can be defined as follows:

    CC = Cov(G_image, G_HTM) / √(Var(G_image)·Var(G_HTM))

  • where G_image is the grayscale distribution of the pixel neighborhood in the predetermined range centered on the pixel of the suspected marker corner;
  • G_HTM is the grayscale distribution of the standard corner model;
  • Var is the variance function;
  • Cov is the covariance function.
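The correlation test above can be sketched directly from its definition. The separable `tanh(k·x)·tanh(k·y)` template used here is only a plausible stand-in for the hyperbolic tangent model of an ideal checkerboard corner; its name and the sharpness parameter `k` are assumptions:

```python
import numpy as np

def correlation_coefficient(g_image, g_htm):
    """CC = Cov(G_image, G_HTM) / sqrt(Var(G_image) * Var(G_HTM))."""
    a = np.asarray(g_image, float).ravel()
    b = np.asarray(g_htm, float).ravel()
    cov = np.mean((a - a.mean()) * (b - b.mean()))
    return cov / np.sqrt(a.var() * b.var())

def htm_template(size=10, k=1.0):
    """Hypothetical hyperbolic-tangent model of an ideal corner:
    tanh(k*(x - c)) * tanh(k*(y - c)) over a size x size grid."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    return np.tanh(k * (x - c)) * np.tanh(k * (y - c))
```

A neighborhood whose CC against the template exceeds the third threshold (0.8 in the example below the formula) passes the correlation test.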
  • the third threshold may be set to 0.8. It should be understood that the threshold may also be set to other values. For example, taking the first suspected marker corner as the starting point, the second suspected marker corner is searched for and correlation verification is performed on it; if the value of the correlation coefficient CC is greater than 0.8, the second suspected marker corner is determined to be a marker corner.
  • the corresponding pixels may be substituted into the standard corner model for correlation judgment, to determine whether each identified suspected marker corner is a marker corner.
  • searching in opposite edge directions may be performed to determine other suspected marker corner points.
  • searching in opposite edge directions may be performed in two opposite edge directions respectively.
  • the first suspected marker corner point is used as a starting point to search in a direction away from the second suspected marker corner point
  • the second suspected marker corner point is used as a starting point to search in a direction away from the first suspected marker corner point.
  • the search may be stopped in response to the number of suspected marker corners being greater than or equal to the second threshold.
  • the second threshold may be set to four. It should be understood that the above embodiment is an example, not a limitation, and the second threshold may also be set to 5 or other values. For example, it is determined that the total number of suspected marker corner points is greater than or equal to 4 and the correlation verification is passed, indicating that the marker is successfully found, and the determined multiple suspected marker corner points are used as marker corner points.
  • in response to the searched distance being greater than a predetermined multiple of the distance between the first suspected marker corner and the second suspected marker corner, or in response to the determined number of suspected marker corners being less than the second threshold, the next pixel in the pixel set (in descending order of CL value) is taken as the first suspected marker corner, and suspected marker corners are searched for along the edge directions with the set step size, until the number of suspected marker corners is greater than or equal to the second threshold.
  • for example, the predetermined multiple may be twice the distance between the first suspected marker corner and the second suspected marker corner; when the search distance exceeds this multiple, the current search cycle ends.
  • the next search cycle can then be restarted with the next pixel in the pixel set (in descending order of CL value) as the first suspected marker corner. Alternatively, if the above search determines that the total number of suspected marker corners is less than 4, indicating that the marker has not been successfully identified, the next pixel in the pixel set is taken as the first suspected marker corner and the next search cycle is started, searching for other suspected marker corners, until the total number of suspected marker corners is greater than or equal to the second threshold (e.g., 4) and the correlation is verified.
  • sub-pixel localization of the marker corner is performed in response to the suspected marker corner in the ROI being the marker corner.
  • the coordinate accuracy of marker corners can be optimized by sub-pixel positioning.
  • the CL value of each pixel in each sub-ROI can be fitted based on a model to determine the coordinates of the sub-pixel positioned marker corners.
  • the fitting function of the CL value of each pixel point in each sub-ROI may be a quadratic surface function, and the extreme point of the function is a sub-pixel point.
  • the fitting function can be as follows:

    S(x, y) = a·x² + b·y² + c·x·y + d·x + e·y + f
    x_c = (c·e − 2·b·d) / (4·a·b − c²)
    y_c = (c·d − 2·a·e) / (4·a·b − c²)

  • where S(x, y) is the CL-value fitting function over all pixels in each sub-ROI, and a, b, c, d, e and f are coefficients;
  • x_c is the sub-pixel x coordinate of the marker corner;
  • y_c is the sub-pixel y coordinate of the marker corner.
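The sub-pixel localization step can be sketched as a least-squares fit of the quadratic surface to the CL values of a sub-ROI, followed by solving for the stationary point; the least-squares solver and patch handling are implementation choices of this sketch:

```python
import numpy as np

def subpixel_corner(cl_patch):
    """Fit S(x, y) = a x^2 + b y^2 + c x y + d x + e y + f to a CL patch
    by least squares and return the extremum (x_c, y_c) in patch coords."""
    h, w = cl_patch.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, s = x.ravel(), y.ravel(), cl_patch.ravel().astype(float)
    M = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(s)])
    a, b, c, d, e, f = np.linalg.lstsq(M, s, rcond=None)[0]
    # Stationary point of S: 2a x + c y + d = 0 and c x + 2b y + e = 0.
    xc, yc = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return xc, yc
```

For an exactly quadratic patch the fit is exact, so the recovered extremum matches the true peak to numerical precision.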
  • the three-dimensional coordinates of the marker corner points relative to the marker coordinate system may be determined, the two-dimensional coordinates of the marker corner points in the acquired image may be determined, and the homography matrix of the two-dimensional coordinates and the three-dimensional coordinates may be determined.
  • based on the analytical solution of the homography matrix, the current relative pose of the marker relative to the image acquisition device is determined. It will be appreciated that the three-dimensional coordinates of the identified marker corners may be determined based on known parameters of the marker corners. It should be understood that, for a dual-lens image acquisition device, the following processing may be performed after aligning the marker corners in the left-lens image and the right-lens image. For example, the three-dimensional coordinates of the marker corners relative to the marker coordinate system can be determined from the parameters r_wm and θ.
  • r_wm and θ may be parameters determined based on the known characteristics of the marker. For example, for black-and-white checkerboard markers arranged as a cylinder at the end of the moving arm, the checkerboard corners are evenly distributed around the circumference of the cylindrical marker (as shown in Figure 2(b)), where r_wm is the radius of the cylinder formed by the marker, and θ is the distribution angle of the marker corners.
  • the three-dimensional coordinates of the marker corners in the marker coordinate system can be calculated directly from the parameters determined by the known characteristics of the marker (such as the radius of the formed cylinder and the distribution angle of the marker corners).
  • for a dual-lens image acquisition device, the marker corners detected in the left and right images can be aligned so that the y coordinates of the same spatial point are consistent after binocular stereo rectification; the three-dimensional coordinates of the marker corners in the marker coordinate system are then calculated from the parameters determined by the known features of the marker.
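A sketch of computing the corner coordinates from r_wm and θ follows. It assumes, as an illustration only, that the corners lie on a single circle in the z = 0 plane of the marker coordinate system, with the x-axis through the first corner and θ = 2π/n:

```python
import numpy as np

def marker_corner_points(r_wm, n_corners):
    """3-D coordinates of marker corners in the marker coordinate system,
    assuming corners evenly distributed around the cylinder circumference:
    P_i = (r_wm cos(i*theta), r_wm sin(i*theta), 0), theta = 2*pi/n."""
    theta = 2 * np.pi / n_corners
    i = np.arange(n_corners)
    return np.column_stack([r_wm * np.cos(i * theta),
                            r_wm * np.sin(i * theta),
                            np.zeros(n_corners)])
```

Every returned point lies at distance r_wm from the cylinder axis, consistent with the radius and distribution-angle parameters described above.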
  • the marker coordinate system can be established as follows: the origin of the marker coordinate system coincides with the marker center, and the x-axis may point from the marker center to the first marker corner (for example, the first determined marker corner).
  • determining the analytical solution of the homography matrix of the two-dimensional and three-dimensional coordinates may include calculating based on the following formula:

    n·H = A·[r1 r2 t]

  • where A is the known internal parameter matrix of the image acquisition device;
  • r1, r2 are the first two columns of the ll_R_wm matrix;
  • t is the position of the origin of the marker coordinate system in the image acquisition device coordinate system;
  • ll represents the image acquisition device coordinate system {ll} (here, the left lens coordinate system);
  • n is any non-zero scalar.
  • The analytical solution of the homography matrix may also include calculation based on the following formula:
  • L·x H = 0
  • x H = [h 1 T h 2 T h 3 T] T
  • L is a 2n×9 matrix
  • n is the number of marker corners determined in the acquired image
  • x H is the right singular vector corresponding to the smallest singular value after singular value decomposition (SVD) of the matrix L.
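The SVD step described above can be sketched as a generic direct linear transform (DLT): stack the 2n×9 matrix from the point correspondences and take the right singular vector of the smallest singular value. The row layout of L below is the standard DLT construction, offered as an illustration rather than the patent's exact formulation:

```python
import numpy as np

def homography_dlt(pts_plane, pts_image):
    """Analytical homography from n >= 4 correspondences:
    ([x, y] marker-plane coordinates -> [u, v] image coordinates).
    x_H = [h1^T h2^T h3^T]^T is the right singular vector of the
    smallest singular value of the 2n x 9 matrix L."""
    rows = []
    for (x, y), (u, v) in zip(pts_plane, pts_image):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    L = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(L)
    return vt[-1].reshape(3, 3)   # H, determined only up to scale
```

The returned H is determined only up to a non-zero scale, matching the arbitrary scalar in the analytical solution.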
  • The current relative pose of the marker relative to the image acquisition device can be determined based on the following formulas:
  • r 1 = κ·A −1·h 1
  • r 2 = κ·A −1·h 2
  • ll R wm = [r 1 r 2 r 1×r 2]
  • ll p wm = κ·A −1·h 3
  • where κ = 1/||A −1·h 1|| = 1/||A −1·h 2||
  • Based on the analytical solution of the homography matrix between the two-dimensional and three-dimensional coordinates of each marker corner point, a nonlinear optimization process may be performed to obtain the current relative pose of the marker with respect to the image acquisition device.
  • a nonlinear optimization process is established to calculate the current relative pose of the marker with respect to the image acquisition device by algebraic methods.
  • the nonlinear optimization process may include minimizing the average geometric error between the projection of the three-dimensional coordinates on the acquired image and the marker corners.
  • The image acquisition device may be a single-lens image acquisition device, and the captured image is a single-lens image (herein, the left lens is taken as an example, denoted as "left").
  • the nonlinear optimization process may include optimization based on the following formula:
  • The image acquisition device may be a dual-lens image acquisition device, and the captured images include a left-lens (denoted as "left") image and a right-lens (denoted as "right") image; the nonlinear optimization process may include optimization based on the following formula:
  • ll R wm and ll p wm are, respectively, the posture and position of the marker in the image acquisition device coordinate system (or the left-lens coordinate system) {ll}; H left and H right are the homography matrices related to the left-lens image and the right-lens image, respectively; left s i and right s i are the scale factors associated with the analytical solutions of the homography matrices related to the left-lens image and the right-lens image, respectively; and the extended coordinates of the i-th marker corner point related to the left-lens image and the right-lens image, respectively, are used in the formula.
  • H left is the homography matrix corresponding to the two-dimensional coordinates and three-dimensional coordinates of the marker corners captured by the left camera image
  • H right is the homography matrix corresponding to the two-dimensional and three-dimensional coordinates of the marker corners captured in the right camera image
  • left s i is a non-zero scalar associated with the analytical solution of H left
  • right s i is a non-zero scalar associated with the analytical solution of H right.
  • a nonlinear optimization process can be established through the correspondence between the two-dimensional coordinates and the three-dimensional coordinates of the marker corners in the left-lens image and the right-lens image.
  • The pose of the marker relative to the coordinate system of the image acquisition device, [ ll R wm ll p wm ], is the optimization variable, and the minimum average geometric error between the marker corners obtained by reprojecting the three-dimensional marker corners onto the left-lens image and the right-lens image and the actually detected marker corners is taken as the optimization objective, to obtain the optimal left-lens extrinsic parameters (such as the relative pose of the marker and the dual-lens image acquisition device). It should be understood that when the right lens of the dual-lens image acquisition device is used as the main lens, the left and right roles are mirrored; the algorithm is essentially similar to the foregoing process and will not be described in detail in this disclosure.
  • The pose matrix [ ll R wm ll p wm ] of the marker relative to the image acquisition device is estimated by optimization, the pose of the end of the moving arm relative to the world coordinate system is calculated based on the formula, and real-time closed-loop control of the pose of the end of the moving arm is performed based on this pose information, to improve the motion accuracy of the moving arm of the system.
  • Figure 7 shows a flowchart of a method 700 for determining the position of the end of a moving arm in a world coordinate system, according to some embodiments of the present disclosure.
  • the method 700 may be performed by a control device (eg, control device 30 ) of a moving arm system (eg, moving arm system 100 ).
  • the control device 30 may be configured on a computing device.
  • Method 700 may be implemented by software, firmware, and/or hardware.
  • the current pose of the moving arm in the world coordinate system is determined based on the current relative pose of the end of the moving arm.
  • Based on the identified marker corners, the current relative pose of the marker with respect to the image acquisition device (e.g., the posture and position ll R wm, ll p wm) may be determined.
  • The position, in the world coordinate system, of the end of the moving arm on which the marker is located can be determined. For example, determining the position of the end of the moving arm in the world coordinate system can be based on the following formula:
  • W p tip = W R ll ( ll R wm wm p tip + ll p wm ) + W p ll
  • W p tip is the position of the end of the moving arm in the world coordinate system {W}
  • wm p tip is the position of the end of the moving arm in the marker coordinate system {wm}
  • W R ll and W p ll are, respectively, the posture and position of the image acquisition device in the world coordinate system {W}
  • ll R wm and ll p wm are, respectively, the posture and position of the marker in the image acquisition device coordinate system {ll}.
  • With the pose of the image acquisition device determined, W R ll and W p ll may be known matrices.
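The position formula above maps directly to code. A minimal sketch (function and variable names are illustrative; rotations and positions are NumPy arrays):

```python
import numpy as np

def tip_position_world(W_R_ll, W_p_ll, ll_R_wm, ll_p_wm, wm_p_tip):
    """W_p_tip = W_R_ll (ll_R_wm wm_p_tip + ll_p_wm) + W_p_ll:
    chain the marker-frame tip position through the camera frame {ll}
    into the world frame {W}."""
    return W_R_ll @ (ll_R_wm @ wm_p_tip + ll_p_wm) + W_p_ll
```

With all rotations set to the identity, the result reduces to the sum of the three translation vectors, which is a quick sanity check.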
  • World coordinate system {W}: its origin can be located at the center of the fixed base (eg, sheath) (as shown in Figure 2(a) or Figure 8).
  • Marker coordinate system {wm}: its origin coincides with the center of the moving arm marker, and the x-axis points from the center of the marker to the first marker corner (as shown in Figure 2(a) and Figure 2(b)).
  • Left-lens coordinate system {ll}: its origin coincides with the center of the left lens of the dual-lens image acquisition device, and the x-axis points from the center of the left lens to the center of the right lens of the dual-lens image acquisition device.
  • Right-lens coordinate system: its origin coincides with the center of the right lens of the dual-lens image acquisition device, and its x-axis coincides with, and points in the same direction as, the x-axis of the left-lens coordinate system of the dual-lens image acquisition device.
  • In step 703, based on the target pose and the current pose of the end of the moving arm in the world coordinate system, the difference between the target pose and the current pose of the end of the moving arm is determined.
  • the target pose of the end of the moving arm in the world coordinate system may be input by the user through an input device.
  • The difference between the target pose and the current pose of the end of the moving arm can be determined. For example, based on the target position and the current position of the end of the moving arm in the world coordinate system, the difference Δp between the target position and the current position can be calculated as Δp = p d − p c, where p d is the target position of the end of the moving arm in the world coordinate system and p c is the current position of the end of the moving arm in the world coordinate system (e.g., W p tip calculated in step 701).
  • v xlim is the set maximum allowable space velocity
  • J + is the Moore–Penrose pseudo-inverse matrix of J
  • is the angular velocity of the end of the moving arm
  • This can represent the bending-angle parameter vector of the moving arm or the rotation-angle parameter vector of the joints of the moving arm; its time derivative is used in the velocity-level update.
  • is the bending angle parameter vector of the arm body of the flexible moving arm.
  • is the rotation angle parameter vector of the moving arm joint.
  • the parameter space vector at the end of the motion arm can be updated:
  • ⁇ t is the period of the motion control loop.
  • A drive signal for the moving arm is determined based on the difference value and an inverse-kinematics numerical iterative algorithm of the moving arm. For example, based on the difference between the target pose and the current pose of the end of the moving arm in the world coordinate system, the drive values of the joints included in the moving arm within the current motion control loop, or the drive values of the corresponding multiple motors that control the motion of the moving arm, can be determined through a numerical iterative inverse-kinematics algorithm based on the kinematic model of the moving arm.
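The velocity-clamped pseudo-inverse update described in the surrounding steps can be sketched as follows. This is a generic resolved-rate scheme offered as an illustration, not the patent's exact iteration; the error-clamping rule and the function interface are assumptions:

```python
import numpy as np

def ik_step(J, p_target, p_current, v_max, dt):
    """One motion-control-loop update: clamp the Cartesian position error
    to the maximum allowed space velocity v_max, map it through the
    Moore-Penrose pseudo-inverse of the Jacobian J, and integrate over
    the loop period dt to get the parameter-vector increment."""
    dp = p_target - p_current
    norm = np.linalg.norm(dp)
    if norm == 0.0:
        return np.zeros(J.shape[1])
    v = dp / norm * min(norm / dt, v_max)   # clamped task-space velocity
    return np.linalg.pinv(J) @ v * dt       # joint/bending-angle increment
```

When the clamp is inactive (small error, generous v_max), a single step with an identity Jacobian removes the whole position error, which is the expected limiting behaviour.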
  • The kinematic model may represent a mathematical model of the kinematic relationship between the joint space and the task space of the moving arm. For example, the kinematic model can be established by methods such as the DH parameter method or the product-of-exponentials formulation.
  • The method 700 may further include determining the drive signal for the moving arm at predetermined periods to implement a plurality of motion control loops. For example, multiple motion control loops are executed iteratively, and in each motion control loop, the control method according to some embodiments of the present disclosure may be executed to control the motion of the moving arm toward the target pose. By iteratively executing multiple motion control loops, real-time closed-loop control of the end position of the moving arm can be achieved, which can improve the position control accuracy of the moving arm and avoid the kinematic-model inaccuracy that an open-loop control algorithm suffers when the end of the moving arm carries a load.
  • The trajectory tracking error of the moving arm (eg, the continuum flexible arm) can thus be reduced.
  • the trajectory tracking error of the moving arm can be reduced to 25.23% of the open loop control error.
  • Consider, as an example, the case where the moving arm is a flexible moving arm (eg, a continuum flexible arm).
  • Figures 8 and 9 respectively show a structural schematic diagram of a continuum flexible arm and a structural schematic diagram of a single continuum segment of the continuum flexible arm, according to some embodiments of the present disclosure.
  • The continuum flexible arm may include two continuum segments and two straight rod segments, each continuum segment (as shown in Figure 9) comprising a 2-degree-of-freedom structure; one straight rod segment is located between the two continuum segments, and the other straight rod segment provides 1 degree of freedom for feeding and 1 degree of freedom for the whole arm to rotate around its own axis.
  • Each continuum segment may include a base ring, an end ring, and a plurality of juxtaposed structural bones extending through the base ring and the end ring; the plurality of structural bones may be fixedly connected to the end ring and slidably connected to the base ring.
  • the motion process of the continuum flexible arm is plane bending.
  • the kinematics of the continuum flexible arm and the continuum segments it contains are described as follows:
  • Base ring coordinate system: attached to the base ring of the t-th continuum segment, with its origin at the center of the base ring, the XY plane coincident with the base ring plane, and the x-axis pointing from the center of the base ring to the first structural bone.
  • Bending plane coordinate system (at the base): its origin coincides with the origin of the base ring coordinate system, the XZ plane coincides with the bending plane, and the corresponding axes coincide.
  • Bending plane coordinate system (at the end): its origin is at the center of the end ring, the XY plane coincides with the bending plane, and the corresponding axes coincide.
  • End ring coordinate system: attached to the end ring of the t-th continuum segment, with its origin at the center of the end ring, the XY plane coincident with the end ring plane, and the x-axis pointing from the center of the end ring to the first structural bone.
  • The symbols used for kinematic modeling of the continuum flexible arm and the continuum segments may be as shown in Table 1.
  • The kinematic description of the entire continuum flexible arm is established as follows:
  • W T tip = W T 1b · 1b T 1e · 1e T 2b · 2b T 2e · 2e T tip
  • W T tip represents the homogeneous transformation matrix of the end of the continuum relative to the world coordinate system
  • W T 1b represents the homogeneous transformation matrix of the base ring of the first continuum relative to the world coordinate system
  • 1b T 1e represents the homogeneous transformation matrix of the end ring of the first continuum segment relative to the base ring of the first continuum segment
  • 1e T 2b denotes the homogeneous transformation matrix of the base ring of the second continuum segment relative to the end ring of the first continuum segment
  • 2b T 2e represents the homogeneous transformation matrix of the end ring of the second continuum segment relative to the base ring of the second continuum segment
  • 2e T tip represents the homogeneous transformation matrix of the end of the continuum relative to the end ring of the second continuum segment
  • W 1 , W 2 and W 3 in the matrix can be expressed as the following formulas respectively:
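The transform chain above is a plain product of 4×4 homogeneous matrices; a small sketch (helper names are illustrative, not from the patent):

```python
import numpy as np

def homogeneous(R, p):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and
    translation p (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def chain(*transforms):
    """Compose the segment transforms in order, e.g.
    W_T_tip = W_T_1b @ 1b_T_1e @ 1e_T_2b @ 2b_T_2e @ 2e_T_tip."""
    T = np.eye(4)
    for Ti in transforms:
        T = T @ Ti
    return T
```

For pure translations the chained transform simply accumulates the offsets, which makes the composition easy to sanity-check.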
  • The present disclosure provides a computer-readable storage medium that can include at least one instruction which, when executed by a processor, configures the processor to perform the control method in any of the above embodiments.
  • the present disclosure provides a computer system that can include a non-volatile storage medium and at least one processor.
  • the non-volatile storage medium may include at least one instruction.
  • the processor is configured to execute at least one instruction to configure the processor to perform the control method in any of the above embodiments.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to, portable computer disks, hard disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, digital versatile disc (DVD), HD-DVD, Blu-ray or other optical storage devices, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing the required information and accessible by a computer, having stored thereon computer-executable instructions that, when executed in a machine (eg, a computer device), cause the machine to execute the control method of the present disclosure.
  • computer devices may include personal computers, servers, or network devices, among others.
  • Some embodiments of the present disclosure can detect the pose of the end of the moving arm in real time based on a visual-tracking closed-loop control method and the provided markers, which can effectively reduce the trajectory tracking error of the moving arm and the motion control error. Furthermore, the control method according to some embodiments of the present disclosure may execute a motion control loop at a predetermined period and, in each motion control loop, determine the pose of the end of the moving arm and control the motion of the moving arm toward the target pose.
  • Some embodiments of the present disclosure, based on a targeted marker-corner search strategy, greatly reduce the amount of computation and can iteratively implement trajectory tracking and error correction for the end of the moving arm.
  • The technical solution of the present disclosure has wide adaptability: it can be applied to a variety of markers and marker-identification equipment, and can also be widely used in robots, for example, for the tracking and correction of the moving ends of surgical robots, industrial robots, and the like.
  • the present disclosure also discloses the following:
  • a control method for a moving arm comprising:
  • an image captured by an image capture device the image including an image of a marker disposed on the end of the motion arm, the marker including a plurality of marker corners;
  • the current relative pose of the end of the moving arm relative to the image acquisition device is determined.
  • a region of interest (ROI) is determined in the acquired image
  • a correlation test is performed on the corners of the suspected markers in the ROI to determine whether the corners of the suspected markers in the ROI image are marker corners.
  • a second suspected marker corner point is searched along a plurality of edge directions with a set step size.
  • the plurality of edge directions are determined based on gradient directions and gradient weights of each pixel in the first predetermined range.
  • an edge direction is adjusted; and along the adjusted edge direction, a third suspected marker corner point is searched.
  • the search is performed in the opposite edge direction.
  • the suspected marker corner is determined to be a marker corner.
  • sub-pixel localization of the marker corner is performed.
  • the coordinates of the marker corners are calculated based on the following formula:
  • S(x, y) is the CL value fitting function of all pixels in each sub-ROI, and a, b, c, d, e, and f are coefficients;
  • x c is the x coordinate of the corner of the marker
  • y c is the y coordinate of the corner of the marker
  • the current relative pose of the marker with respect to the image acquisition device is determined.
  • determining the three-dimensional coordinates of the marker corner points relative to the marker coordinate system comprises determining the three-dimensional coordinates of the marker corner points relative to the marker coordinate system based on the following formula:
  • r wm is the radius of the cylinder formed by the marker
  • β is the distribution angle of the marker corner points.
  • determining the analytical solution of the homography matrix of the two-dimensional coordinates and the three-dimensional coordinates comprises:
  • A is the known internal parameter matrix of the image acquisition device
  • r 1 , r 2 are the first two columns of the ll R wm matrix
  • η is an arbitrary non-zero scalar.
  • determining the analytical solution of the homography matrix of the two-dimensional coordinates and the three-dimensional coordinates further comprises:
  • x H = [h 1 T h 2 T h 3 T] T
  • L is a 2n ⁇ 9 matrix
  • n is the number of marker corner points determined in the acquired image
  • x H is the right singular vector corresponding to the smallest singular value after singular value decomposition (SVD) of the matrix L.
  • control method of item 16 further comprising:
  • the current relative pose of the marker relative to the image acquisition device is determined based on the following formula:
  • a nonlinear optimization process is performed to obtain the current relative pose of the marker with respect to the image acquisition device.
  • the image acquisition device is a single-lens image acquisition device, the acquired image is a single-lens image, and performing the nonlinear optimization process includes:
  • the image acquisition device is a dual-lens image acquisition device, the acquired images include a left-lens image and a right-lens image, and performing the nonlinear optimization process includes:
  • H left and H right are the homography matrices related to the left and right lens images, respectively; left s i and right s i are the analytical solutions of the homography matrices related to the left and right lens images, respectively the associated scale factor, and are the extended coordinates of the i-th marker corner point related to the left-lens image and the right-lens image, respectively.
  • a drive signal for the moving arm is determined based on the difference and an inverse kinematics numerical iterative algorithm of the moving arm.
  • the drive signal of the motion arm is determined to implement a plurality of motion control cycles.
  • the current pose of the end of the moving arm in the world coordinate system is determined:
  • W p tip = W R ll ( ll R wm wm p tip + ll p wm ) + W p ll
  • W p tip is the position of the end of the moving arm in the world coordinate system {W}
  • wm p tip is the position of the end of the moving arm in the marker coordinate system {wm}
  • W R ll and W p ll are, respectively, the posture and position of the image acquisition device in the world coordinate system {W}
  • ll R wm and ll p wm are, respectively, the posture and position of the marker in the image acquisition device coordinate system {ll};
  • A moving arm system, comprising:
  • Image acquisition equipment for acquiring images
  • At least one moving arm including the end of the moving arm, the end of the moving arm is provided with a marker, the marker includes a plurality of marker corners;
  • a control device configured to perform the control method of any of items 1-24.
  • A computer-readable storage medium comprising one or more computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor, configuring the processor to perform the control method described in any of items 1-24.


Abstract

The present disclosure relates to the field of robotic instruments and discloses a control method for a moving arm, including: obtaining an image acquired by an image acquisition device, the image including an image of a marker disposed on the end of the moving arm, the marker including a plurality of marker corner points; identifying the marker corner points in the acquired image; and determining, based on the identified marker corner points, the current relative pose of the end of the moving arm relative to the image acquisition device. Based on the visual-tracking closed-loop control method and the provided markers, the pose of the end of the moving arm is detected in real time, which effectively reduces the trajectory tracking error of the moving arm and can reduce the motion control error.

Description

Moving Arm System and Control Method

Cross-Reference to Related Applications

This application claims priority to Chinese Patent Application No. 202010665741.7, filed on July 11, 2020 and entitled "Moving arm system control method and apparatus, robot system, and storage medium", the entirety of which is incorporated herein by reference.

Technical Field

The present disclosure relates to the field of robotics, and in particular to a moving arm system and a control method.

Background

Existing closed-loop in-position correction methods for the moving arms of robot systems mainly acquire correction data from the in-position status of the motion joints of the moving arm, to determine the compensation applied in closed-loop compensation. This compensation approach cannot solve the in-position accuracy problems caused by errors of the equipment itself, especially for precise closed-loop control of moving arms.

Existing spatial positioning equipment usually determines the spatial position of the object bearing a positioning marker by recognizing that existing positioning marker. There are many schemes for implementing such recognition and control, but each has limitations. For example, the spatial positioning of marker balls requires a binocular camera, which places more requirements on the equipment. Another scheme uses light-dark or black-white contrast patterns as markers, and either computes a marker likelihood for every pixel in the image or performs template matching over all pixels within a certain range using an existing template. Such approaches either easily misidentify non-marker regions as markers or involve a very large amount of computation, and are not suitable for high-precision real-time closed-loop correction algorithms.

In summary, many problems and techniques in the field of closed-loop control of moving arms remain to be addressed.

Summary

In some embodiments, the present disclosure provides a control method for a moving arm, including: obtaining an image acquired by an image acquisition device, the image including an image of a marker disposed on the end of the moving arm, the marker including a plurality of marker corner points; identifying the marker corner points in the acquired image; and determining, based on the identified marker corner points, the current relative pose of the end of the moving arm relative to the image acquisition device.

In some embodiments, the present disclosure provides a moving arm system, including: an image acquisition device for acquiring images; at least one moving arm including a moving arm end, the moving arm end being provided with a marker, the marker including a plurality of marker corner points; and a control device configured to execute the control method according to some embodiments of the present disclosure.

In some embodiments, the present disclosure provides a computer-readable storage medium including one or more computer-executable instructions stored thereon, the computer-executable instructions, when executed by a processor, configuring the processor to execute the control method according to some embodiments of the present disclosure.
Brief Description of the Drawings

To clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present disclosure; for those of ordinary skill in the art, other embodiments can be obtained from the content of the embodiments of the present disclosure and these drawings without creative effort.

Figure 1 shows a structural schematic diagram of a moving arm system according to some embodiments of the present disclosure;

Figure 2(a) shows a structural schematic diagram of a moving arm according to some embodiments of the present disclosure;

Figure 2(b) shows a structural schematic diagram of a marker according to some embodiments of the present disclosure;

Figure 3 shows an enlarged schematic diagram of the end of a moving arm in a motion state according to some embodiments of the present disclosure;

Figure 4 shows a flowchart of a control method for a moving arm system according to some embodiments of the present disclosure;

Figure 5 shows a flowchart of a control method for identifying marker corner points in an image according to some embodiments of the present disclosure;

Figure 6 shows a schematic diagram of searching for suspected marker corner points according to some embodiments of the present disclosure;

Figure 7 shows a flowchart of a method for determining the position of the end of a moving arm in the world coordinate system according to some embodiments of the present disclosure;

Figure 8 shows a structural schematic diagram of a continuum flexible arm according to some embodiments of the present disclosure;

Figure 9 shows a structural schematic diagram of a single continuum segment of a continuum flexible arm according to some embodiments of the present disclosure.
Detailed Description

To make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are further described in detail below with reference to the drawings. Obviously, the described embodiments are merely exemplary embodiments of the present disclosure, not all of the embodiments.

In the description of the present disclosure, it should be noted that orientations or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate the description of the present disclosure and to simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the present disclosure. In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance. In the description of the present disclosure, it should also be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", "connection", and "coupled" should be understood broadly; for example, a connection may be a fixed connection or a detachable connection; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or an internal communication between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present disclosure can be understood according to the specific circumstances. In the present disclosure, for a surgical robot system, the end close to the user (e.g., a doctor) is defined as the proximal end, proximal portion, rear end, or rear portion, and the end close to the surgical patient is defined as the distal end, distal portion, front end, or front portion.

In the present disclosure, the term "position" refers to the location of an object or a part of an object in three-dimensional space (e.g., three translational degrees of freedom that can be described using changes in Cartesian X, Y, and Z coordinates, such as three translational degrees of freedom along the Cartesian X, Y, and Z axes, respectively). In the present disclosure, the term "posture" refers to the rotational configuration of an object or a part of an object (e.g., three rotational degrees of freedom, which can be described using roll, pitch, and yaw). In the present disclosure, the term "pose" refers to the combination of the position and posture of an object or a part of an object, which can be described, for example, using six parameters of the six degrees of freedom mentioned above. In the present disclosure, the pose of a moving arm or a part thereof refers to the pose of the coordinate system defined by the moving arm or the part thereof relative to a coordinate system defined by the support or base on which the moving arm is located, or relative to the world coordinate system.
Figure 1 shows a structural block diagram of a moving arm system 100 according to some embodiments of the present disclosure. As shown in Figure 1, the moving arm system 100 may include an image acquisition device 10, at least one moving arm 20, and a control device 30. The image acquisition device 10 and the at least one moving arm 20 are each communicatively connected to the control device 30. In some embodiments, as shown in Figure 1, the control device 30 may be used to control the motion of the at least one moving arm 20, to adjust the pose of the at least one moving arm 20, coordinate the arms with one another, and so on. In some embodiments, the at least one moving arm 20 may include a moving arm end 21 at its end or distal end. In some embodiments, the at least one moving arm 20 may further include a moving arm body 22, and the moving arm end 21 may be a distal portion of the moving arm body 22 or an end effector disposed at the distal end of the moving arm body 22. The control device 30 may control the motion of the at least one moving arm 20 so that the moving arm end 21 moves to a desired position and posture. Those skilled in the art should understand that the moving arm system 100 may be applied to surgical robot systems, such as endoscopic surgical robot systems. It should be understood that the moving arm system 100 may also be applied to dedicated or general-purpose robot systems in other fields (e.g., manufacturing, machinery, and the like).

In the present disclosure, the control device 30 may be communicatively connected to the motors of the at least one moving arm 20, so that the motors control the at least one moving arm 20 to move to the corresponding target pose based on drive signals. For example, the motors controlling the motion of the moving arm may be servo motors that can receive instructions from the control device to control the motion of the moving arm. The control device 30 may also be communicatively connected, for example through a communication interface, to sensors coupled to the motors, to receive motion data of the moving arm 20 and monitor the motion state of the moving arm 20. In one example of the present disclosure, the communication interface may be a CAN (Controller Area Network) bus communication interface, which enables the control device 30 to connect and communicate with the motors and the sensors via a CAN bus.

In some embodiments, the moving arm 20 may include a continuum flexible arm or a multi-degree-of-freedom moving arm composed of multiple joints, for example, a moving arm capable of motion with 6 degrees of freedom. The image acquisition device 10 may include, but is not limited to, a dual-lens image acquisition device or a single-lens image acquisition device, such as a binocular or monocular camera.

Figure 2(a) shows a structural schematic diagram of a moving arm 20 according to some embodiments of the present disclosure. As shown in Figure 2(a), the moving arm 20 may include a moving arm end 21 and a moving arm body 22, and a marker 211 is fixedly provided on the moving arm end 21. Figure 2(b) shows a structural schematic diagram of the marker 211 according to some embodiments of the present disclosure. As shown in Figure 2(b), the marker 211 may include a plurality of marker corner points 2111. In some embodiments, the marker 211 may be cylindrical and fixedly wrap around the moving arm end 21. The marker 211 may include a plurality of marker corner points 2111 distributed (e.g., regularly distributed) on the cylinder. It should be understood that the marker may be a pattern with a known arrangement that is clearly distinguishable in brightness, greyscale, or hue. A marker corner point may be a point on the marker with special distinguishing features, and the pattern or color within the area surrounding a marker corner point may be point-symmetrically distributed, so that the image acquisition device can capture the marker corner points. In some embodiments, the plurality of marker corner points are regularly distributed, which facilitates determining the parameters of the marker corner points, such as the distribution angles and spacing distances between the plurality of marker corner points. In some embodiments, as shown in Figure 2(b), the marker 211 may include, but is not limited to, a black-and-white checkerboard marker, and a marker corner point 2111 may be, for example, the intersection of two line segments in the checkerboard marker.

Figure 3 shows an enlarged schematic diagram of the end of a moving arm in a motion state according to some embodiments of the present disclosure. As shown in Figure 3, the at least one moving arm may include, but is not limited to, two moving arms, and a marker is provided on the end of each moving arm.
Some embodiments of the present disclosure provide a control method for a moving arm. Figure 4 shows a flowchart of a control method 400 for a moving arm system (e.g., the moving arm system 100) according to some embodiments of the present disclosure. As shown in Figure 4, the method 400 may be executed by a control device (e.g., the control device 30) of the moving arm system 100. The control device 30 may be configured on a computing device. The method 400 may be implemented by software, firmware, and/or hardware.

In step 401, an image acquired by an image acquisition device is obtained. For example, an image of the end of the moving arm may be acquired by a dual-lens or single-lens image acquisition device, and the acquired image includes an image of the marker on the end of the moving arm.

In step 403, marker corner points in the acquired image are identified. In some embodiments, the acquired image may be preprocessed by the control device to identify the marker corner points in the image. An exemplary method for identifying marker corner points in an image is detailed in the method shown in Figure 5.

In step 405, the current relative pose of the end of the moving arm relative to the image acquisition device is determined based on the identified marker corner points. In some embodiments, the current pose of the marker relative to the image acquisition device may be determined based on the identified marker corner points, and the current relative pose of the end of the moving arm may then be determined based on the current pose of the marker and the relative pose of the marker with respect to the end of the moving arm. It should be understood that, since the marker is fixedly disposed on the end of the moving arm, the relative pose of the marker with respect to the end of the moving arm is known.

Figure 5 shows a flowchart of a method 500 for identifying marker corner points in an acquired image according to some embodiments of the present disclosure. As shown in Figure 5, the method 500 may be executed by a control device (e.g., the control device 30) of a moving arm system (e.g., the moving arm system 100). The control device 30 may be configured on a computing device. The method 500 may be implemented by software, firmware, and/or hardware.

In step 501, a region of interest (ROI) is determined in the acquired image. In some embodiments, the ROI is determined to be the full image, or the ROI is determined to be a partial image based on the positions of suspected marker corner points in the previous frame (e.g., the image processed in the previous motion control loop), and the ROI is converted into a corresponding greyscale image. For example, based on the acquired image, the full image or a partial image may be cropped as the ROI and converted into a corresponding greyscale image, to quantize the greyscale information of each pixel. In some embodiments, when the acquired image is the first frame, the ROI of the first frame is determined to be the full image. In some embodiments, when the acquired image is not the first frame, the ROI may be determined to be a partial image based on the positions of suspected marker corner points in the previous frame.

In step 503, suspected marker corner points are identified in the ROI. For example, the likelihood that each pixel is a marker corner point may be determined based on the greyscale information of each pixel, and pixels with high likelihood are determined to be suspected marker corner points.

In some embodiments, the partial image may include a set distance range centered on a virtual point formed by the average coordinates of the suspected marker corner points in the previous frame, and the set distance may include a predetermined multiple of the average spacing distance between the plurality of suspected marker corner points. It should be understood that, for example, the predetermined multiple may include, but is not limited to, a fixed multiple of the average spacing distance of the plurality of suspected marker corner points, such as twice. It should be understood that the predetermined multiple may also be a variable multiple of the average spacing distance of the plurality of suspected marker corner points.
In some embodiments, the corner likelihood (CL) value of each pixel in the ROI may be determined. The ROI is divided into multiple sub-ROIs, and the pixel with the largest CL value in each sub-ROI is determined. Based on the pixels with the largest CL values in the multiple sub-ROIs, a set of pixels whose CL values are greater than a first threshold is determined. The pixel with the largest CL value in the pixel set is determined as the first suspected marker corner point. Based on the first suspected marker corner point, a second suspected marker corner point is searched for along multiple edge directions with a set step size.

In some embodiments, the corner likelihood (CL) value of each pixel in the ROI is determined. For example, a convolution operation is performed on every pixel within the ROI image to obtain the first and/or second derivatives of each pixel. The corner likelihood (CL) value of each pixel is then computed from the first and/or second derivatives of each pixel within the ROI image.

In some embodiments, the CL of each pixel in the ROI image may be calculated according to the following formulas:

CL = max(c xy, c 45)

c xy = ε²·|I xy| − 1.5·ε·(|I 45| + |I n45|)

c 45 = ε²·|I 45_45| − 1.5·ε·(|I x| + |I y|)      (1)

where ε is a set constant; I x, I 45, I y, and I n45 are the first derivatives of each pixel in the four directions 0, π/4, π/2, and −π/4, respectively; I xy and I 45_45 are the second derivatives of each pixel in the (0, π/2) and (π/4, −π/4) directions, respectively; c xy is the CL value of each pixel in the 0, π/2 directions, and c 45 is the CL value of each pixel in the π/4 direction.
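Formula (1) can be evaluated vectorized over the derivative images; a minimal NumPy sketch (the function name and the pass-in-derivatives interface are assumptions, not the patent's code):

```python
import numpy as np

def corner_likelihood(Ixy, I45, In45, I45_45, Ix, Iy, eps=1.0):
    """Per-pixel corner likelihood of formula (1), computed from the
    first derivatives (I45, In45, Ix, Iy) and second derivatives
    (Ixy, I45_45) along the 0, pi/4, pi/2 and -pi/4 directions."""
    c_xy = eps**2 * np.abs(Ixy) - 1.5 * eps * (np.abs(I45) + np.abs(In45))
    c_45 = eps**2 * np.abs(I45_45) - 1.5 * eps * (np.abs(Ix) + np.abs(Iy))
    return np.maximum(c_xy, c_45)
```

The two candidate scores reward a strong "saddle" second derivative in one orientation while penalizing first-derivative (edge-like) responses in the rotated orientation, and the larger of the two is kept per pixel.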
In some embodiments, the ROI is divided into multiple sub-ROIs. For example, non-maximum suppression may be used to evenly partition an ROI image into multiple sub-ROIs. In some embodiments, the ROI image may be evenly divided into multiple 5×5-pixel sub-ROIs. The above embodiment is exemplary and not limiting; it should be understood that the ROI image may also be divided into multiple sub-ROIs of other sizes. The pixel with the largest CL value in each sub-ROI may be determined, compared with a first threshold, and the set of pixels whose CL values are greater than the first threshold is determined. In some embodiments, the first threshold may be set to 0.06. It should be understood that the first threshold may also be set to other values.

In some embodiments, the pixel with the largest CL value in the pixel set is determined as the first suspected marker corner point. For example, all pixels in the pixel set may be sorted in descending order of CL value, and the pixel with the largest CL value is taken as the first suspected marker corner point. Based on the first suspected marker corner point, a second suspected marker corner point is searched for along multiple edge directions with a set step size. Figure 6 shows a schematic diagram of searching for suspected marker corner points according to some embodiments of the present disclosure. As shown in Figure 6, starting from the first suspected marker corner point, the second suspected marker corner point is searched for sequentially in four edge directions with a set step size. In some embodiments, the set step size may be 10 pixels. For example, starting from the first suspected marker corner point, with a step size of 10 pixels, the pixel with the largest CL value within a set range is searched for sequentially along the edge direction and taken as the second suspected marker corner point. For example, the set range may be a 10×10 square region. It should be understood that the above embodiment is an example and not a limitation; the set step size may be another number of pixels, and the set range may be a region of another size. For example, the four edge directions of the first suspected marker corner point may be determined, and based on each edge direction, a moving search with the set step size is performed to find the pixel with the largest CL value within the set range as the second suspected marker corner point.

In some embodiments, the gradient direction and gradient weight of each pixel in a first predetermined range centered on the first suspected marker corner point may be determined, and multiple edge directions may be determined based on the gradient directions and gradient weights of the pixels in the first predetermined range. For example, the first predetermined range may be a 10×10 square region. Within a 10×10 square pixel neighbourhood centered on the first suspected marker corner point, the gradient direction and gradient weight of each pixel are determined, so as to determine the multiple edge directions of the first suspected marker corner point. It should be understood that the edge directions of the second suspected marker corner point may also be determined by a similar method. By adjusting the edge directions based on suspected marker corner points, a targeted search for marker corner points can be performed, reducing the amount of computation.
In some embodiments, the gradient direction and gradient weight of each pixel in the first predetermined range may be calculated by the following formulas:

I angle = arctan(I y / I x),  I weight = (I x² + I y²)^(1/2)      (2)

where I angle is the gradient direction of the corresponding pixel and I weight is the gradient weight of the corresponding pixel; I x and I y are the first derivatives of the corresponding pixel in the 0 and π/2 directions, respectively. The I angle and I weight of each pixel within the first predetermined range are processed by a clustering method to obtain the edge directions of the corresponding pixels.

In some embodiments, the edge direction may be adjusted based on the second suspected marker corner point, and a third suspected marker corner point may be searched for along the adjusted edge direction. For example, similarly to the above, the updated edge directions may be determined by determining the gradient direction and gradient weight of each pixel in a predetermined range centered on the second suspected marker corner point, based on the gradient directions and gradient weights of the pixels in that predetermined range, as shown in Figure 6.
In step 505, a correlation test is performed on the suspected marker corner points in the ROI based on a standard corner model, to determine whether the suspected marker corner points in the ROI image are marker corner points. For example, the pixels of the first suspected marker corner point and of the searched suspected marker corner points (e.g., the second suspected marker corner point) may be substituted into the standard corner model for correlation judgment, to determine whether a suspected marker corner point is a marker corner point. A corner detection method combining corner likelihood estimation and template matching may be used to determine the marker corner points in the image (for example, when the image acquisition device is a binocular image acquisition device, the extracted images include a left-lens image and a right-lens image).

In some embodiments, a correlation coefficient between the greyscale distribution of the standard corner model and the pixel greyscale distribution within a second predetermined range centered on the pixel of the suspected marker corner point is determined; in response to the correlation coefficient being greater than a third threshold, the suspected marker corner point is determined to be a marker corner point. In some embodiments, the marker may be a black-and-white checkerboard pattern, and the standard corner model may be a hyperbolic tangent model (HTM). For example, whether a suspected marker corner point is a marker corner point may be determined based on the correlation coefficient CC between the greyscale distribution of the standard corner model (e.g., HTM) and the greyscale distribution of a predetermined-range pixel neighbourhood centered on the pixel of the suspected marker corner point. For example, the second predetermined range centered on the pixel of the suspected marker corner point may include, but is not limited to, a 10×10 square pixel neighbourhood. The correlation coefficient CC may be defined as follows:

CC = Cov(G image, G HTM) / ( Var(G image)·Var(G HTM) )^(1/2)      (3)

where G image is the greyscale distribution of the predetermined-range pixel neighbourhood centered on the pixel of the suspected marker corner point, G HTM is the greyscale distribution of the standard corner model, Var is the variance function, and Cov is the covariance function.

If the correlation coefficient CC is judged to be greater than the threshold, the greyscale distribution within the predetermined range centered on that pixel is highly correlated with the hyperbolic tangent model, and the suspected marker corner point can be determined to be a marker corner point. Otherwise, the suspected marker corner point is determined not to be a marker corner point. In some embodiments, the threshold may be set to 0.8. It should be understood that the threshold may also be another set value. For example, starting from the first suspected marker corner point, the second suspected marker corner point is searched for, correlation verification is performed on the searched second suspected marker corner point, and if the correlation coefficient CC is judged to be greater than 0.8, the second suspected marker corner point is determined to be a marker corner point.
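The correlation test with the 0.8 threshold is a standard normalized correlation (formula (3)); a minimal sketch, with the function name and interface assumed:

```python
import numpy as np

def corner_correlation(G_image, G_htm):
    """Correlation coefficient CC between the grey-level distribution of
    the pixel neighbourhood G_image and that of the standard corner
    model G_htm; a suspected corner with CC above the threshold
    (e.g. 0.8) is accepted as a marker corner."""
    g1 = np.ravel(G_image).astype(float)
    g2 = np.ravel(G_htm).astype(float)
    cov = np.cov(g1, g2)[0, 1]          # sample covariance (ddof=1)
    return cov / np.sqrt(g1.var(ddof=1) * g2.var(ddof=1))
```

Identical distributions give CC = 1 and inverted ones give CC = −1, so the test is invariant to overall brightness and contrast of the neighbourhood.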
It should be understood that, after each suspected marker corner point is found, or after multiple suspected marker corner points are found, the corresponding pixels may be substituted into the standard corner model for correlation judgment, to determine whether the identified suspected marker corner points are marker corner points.

In some embodiments, based on the first and second suspected marker corner points, searching may be performed in the opposite edge directions to determine other suspected marker corner points. As shown in Figure 6, starting from the first and second suspected marker corner points, searches are performed in the two opposite edge directions, respectively. For example, the search proceeds from the first suspected marker corner point in the direction away from the second suspected marker corner point, and from the second suspected marker corner point in the direction away from the first suspected marker corner point.

In some embodiments, the search may be stopped in response to the number of suspected marker corner points being greater than or equal to a second threshold. In some embodiments, the second threshold may be set to 4. It should be understood that the above embodiment is an example and not a limitation; the second threshold may also be set to 5 or another value. For example, when the total number of suspected marker corner points is determined to be greater than or equal to 4 and passes the correlation verification, the marker is considered successfully found, and the determined suspected marker corner points are taken as marker corner points.

In some embodiments, in response to the search distance being greater than a predetermined multiple of the distance between the first and second suspected marker corner points, or in response to the number of determined suspected marker corner points being less than the second threshold, the next pixel in the pixel set sorted in descending order of CL value is taken as the first suspected marker corner point, and suspected marker corner points are searched for along the edge directions with the set step size until the number of suspected marker corner points is greater than or equal to the second threshold. In some embodiments, the predetermined multiple may be twice the distance between the first and second suspected marker corner points; when the search distance exceeds this predetermined multiple, the current search loop ends. The next pixel in the pixel set sorted in descending order of CL value may then be taken as the first suspected marker corner point, and the next search loop restarted. Alternatively, if, after searching in the above manner, the total number of suspected marker corner points is determined to be less than 4, the marker has not been successfully identified; the next pixel in the pixel set sorted in descending order of CL value is taken as the first suspected marker corner point, and the next search loop is started to search for other suspected marker corner points, until the total number of determined suspected marker corner points is greater than or equal to the second threshold (e.g., 4) and passes the correlation verification.
In some embodiments, in response to a suspected marker corner point in the ROI being a marker corner point, sub-pixel localization is performed on the marker corner point. Sub-pixel localization can optimize the coordinate accuracy of the marker corner point. In some embodiments, the CL values of the pixels in each sub-ROI may be fitted with a model to determine the coordinates of the marker corner point after sub-pixel localization. For example, the fitting function of the CL values of the pixels in each sub-ROI may be a quadratic surface function whose extremum point is the sub-pixel point. The fitting function may be as follows:

S(x, y) = ax² + by² + cx + dy + exy + f      (4)

where S(x, y) is the fitting function of the CL values of all pixels in each sub-ROI, and a, b, c, d, e, f are coefficients;

x c = (e·d − 2·b·c)/(4·a·b − e²),  y c = (e·c − 2·a·d)/(4·a·b − e²)      (5)

where x c is the x coordinate of the marker corner point and y c is the y coordinate of the marker corner point.
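The sub-pixel step amounts to a least-squares fit of the quadratic surface over a window of CL values followed by taking its extremum; a sketch under the assumption that the CL values are given as a small 2-D window indexed [y, x] (function name assumed):

```python
import numpy as np

def subpixel_corner(window_cl):
    """Fit S(x,y) = a x^2 + b y^2 + c x + d y + e xy + f to the CL
    values of a window by least squares and return the extremum
    (x_c, y_c) as the sub-pixel corner location."""
    h, w = window_cl.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel(), ys.ravel(), window_cl.ravel()
    M = np.column_stack([x**2, y**2, x, y, x * y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(M, z, rcond=None)[0]
    det = 4 * a * b - e**2                     # Hessian determinant
    return (e * d - 2 * b * c) / det, (e * c - 2 * a * d) / det
```

For a window generated from an exact paraboloid the fit is exact, so the returned extremum coincides with the paraboloid's peak.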
在一些实施例中,可以确定标记物角点相对于标记物坐标系的三维坐标,确定标记物角点在采集的图像中的二维坐标,确定二维坐标和三维坐标的单应性矩阵的解析解,基于单应性矩阵的解析解,确定标记物相对于图像采集设备的当前相对位姿。应当理解,可以基于标记物角点的已知参数,确定图像识别的标记物角点的三维坐标。应当理解,基于双镜头图像采集设备,可以将左镜头图像、右镜头图像中的标记物角点对齐后,进行以下处理。例如,标记物角点相对于标记物坐标系的三维坐标可以如下:
Figure PCTCN2021103719-appb-000004
其中,
Figure PCTCN2021103719-appb-000005
为第i个标记物角点对应的三维坐标，r wm和β可以是基于标记物的已知特征确定的参数。例如，黑白相间的棋盘格标记物呈圆筒状设置在运动臂末端，棋盘格标记物角点均匀分布在圆筒标记物的周向(如图2(b)所示)，其中，r wm为标记物形成的圆筒的半径，β为标记物角点的分布角。
在一些实施例中,基于单镜头图像采集设备,可以直接基于标记物的已知特征确定的参数(例如形成的圆筒的半径以及标记物角点的分布角),计算标记物角点在标记物坐标系的三维坐标。基于双镜头图像采集设备,可以对齐左、右图像中各自检测到的标记物角点,以使双目立体矫正后同一空间点在y方向坐标一致,然后基于标记物的已知特征确定的参数,计算标记物角点在标记物坐标系的三维坐标。标记物坐标系可以如下建立:
Figure PCTCN2021103719-appb-000006
其中,标记物坐标系的原点与标记物中心重合,x轴可以从标记物中心指向第一个标记物角点(例如最先确定的标记物角点)。
在一些实施例中,确定二维坐标和三维坐标的单应性矩阵的解析解可以包括基于如下公式计算:
Figure PCTCN2021103719-appb-000007
H left=[h 1 h 2 h 3]=η·A·[r 1 r 2  llp wm]       (8)
式(7)中,
Figure PCTCN2021103719-appb-000008
为第i个标记物角点的扩展坐标，[u i,v i]为第i个标记物角点的二维坐标， lefts i为任意的非零标量（比例因子），以使式(7)左右两边最后一列数字相同，H left为单应性矩阵；
式(8)中,A为图像采集设备的已知内部参数矩阵,r 1,r 2llR wm矩阵的前两列,ll表示图像采集设备坐标系{ll}(此处以左镜头坐标系为例),η为任意的非零标量。应当理解,通过引入扩展坐标,可以简化计算过程。
在一些实施例中,单应性矩阵的解析解还可以包括基于如下公式计算:
L·x H=0        (9)
其中,x H=[h 1 T h 2 T h 3 T] T,L是2n×9矩阵,n为采集的图像中确定的标记物角点数量,x H为矩阵L的奇异值分解(SVD)后最小奇异值对应的右奇异向量。
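该解析解的求取即标准的直接线性变换(DLT)做法，可以用如下 Python 片段示意（此处按 H 的行向量堆叠构造 2n×9 的 L 矩阵，为常见实现方式之一；函数名与点的排列均为假设）：

```python
import numpy as np

def homography_dlt(marker_pts, image_pts):
    """由平面标记物角点坐标 (X, Y) 与图像二维坐标 (u, v) 求单应性矩阵。
    构造 2n x 9 矩阵 L，取其 SVD 最小奇异值对应的右奇异向量（标准 DLT）。"""
    rows = []
    for (X, Y), (u, v) in zip(marker_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    L = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(L)
    x_h = vt[-1]                 # 最小奇异值对应的右奇异向量
    H = x_h.reshape(3, 3)
    return H / H[2, 2]           # 归一化，便于比较
```

对于无噪声的对应点（至少 4 个且其中 4 点处于一般位置），可以精确恢复真值单应性矩阵。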
在一些实施例中,可以基于以下公式,确定标记物相对于图像采集设备的当前相对位姿:
r 1=κ·A -1·h 1
r 2=κ·A -1·h 2
llR wm=[r 1 r 2 r 1×r 2]
llp wm=κ·A -1h 3         (10)
其中,κ=1/||A -1·h 1||=1/||A -1·h 2||。
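式(10)的位姿恢复可以用如下 Python 片段示意（假设 η 已被 κ 吸收，且不对 R 作额外的正交化修正；函数名为假设）：

```python
import numpy as np

def pose_from_homography(H, A):
    """由单应性矩阵 H=[h1 h2 h3] 与相机内参矩阵 A 恢复标记物位姿 (R, p)。
    对应式(10): r1 = k*A^-1*h1, r2 = k*A^-1*h2, R = [r1 r2 r1xr2], p = k*A^-1*h3。"""
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    kappa = 1.0 / np.linalg.norm(A_inv @ h1)   # k = 1/||A^-1 h1||
    r1 = kappa * (A_inv @ h1)
    r2 = kappa * (A_inv @ h2)
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    p = kappa * (A_inv @ h3)
    return R, p
```

以已知位姿正向构造 H = A·[r1 r2 p] 再反解，可验证恢复结果与真值一致。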
在一些实施例中,可以基于每个标记物角点的二维坐标和三维坐标之间的单应性矩阵的解析解,执行非线性优化处理以获得标记物相对于图像采集设备的当前相对位姿。例如,建立非线性优化处理,通过代数法计算标记物相对于图像采集设备的当前相对位姿。例如,非线性优化处理可以包括使三维坐标在采集的图像上的投影与标记物角点之间的平均几何误差最小化。
在一些实施例中,图像采集设备是单镜头图像采集设备,采集的图像是单镜头(此处以左镜头为例,表示为“left”)图像,非线性优化处理可以包括基于如下公式优化:
Figure PCTCN2021103719-appb-000010
在一些实施例中,图像采集设备为双镜头图像采集设备,采集的图像包括左镜头(表示为“left”)图像和右镜头(表示为“right”)图像,非线性优化处理可以包括基于如下公式优化:
Figure PCTCN2021103719-appb-000011
其中, llR wmllp wm,分别为标记物在图像采集设备坐标系(或者左镜头坐标系){ll}中的姿态和位置;H left和H right分别为与左镜头图像、右镜头图像相关的单应性矩阵; lefts irights i,分别为与左镜头图像、右镜头图像相关的单应性矩阵的解析解相关的比例因子,
Figure PCTCN2021103719-appb-000012
Figure PCTCN2021103719-appb-000013
分别为与左镜头图像、右镜头图像相关的第i个标记物角点的扩展坐标。应当理解，H left为左镜头图像采集的标记物角点的二维坐标与三维坐标对应的单应性矩阵，H right为右镜头图像采集的标记物角点的二维坐标与三维坐标对应的单应性矩阵， lefts i为与H left的解析解相关的非零标量， rights i为与H right的解析解相关的非零标量。例如，在图像采集设备为双镜头图像采集设备时，可以通过左镜头图像、右镜头图像中标记物角点的二维坐标及三维坐标的对应关系，建立非线性优化处理。其中标记物相对于图像采集设备坐标系的位姿[ llR wm  llp wm]为优化变量，以三维标记物角点重投影到左镜头图像、右镜头图像上的投影点与实际检测到的标记物角点之间的平均几何误差最小为优化目标，以获得最优化的左镜头外参(例如标记物与双镜头图像采集设备的相对位姿)。应当理解，以双镜头图像采集设备中的右镜头作为主镜头时，将左右镜头的角色对调，算法与前述流程基本类似，本公开不再详述。
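非线性优化的目标量，即三维角点经当前位姿投影后与检测角点之间的平均几何误差，可以示意如下（仅给出单镜头情形的目标函数计算，不含优化迭代本身；函数名为假设）：

```python
import numpy as np

def mean_reprojection_error(R, p, A, pts3d, pts2d):
    """非线性优化的目标: 三维角点经 [R p] 投影到图像后与检测角点间的平均几何误差。"""
    err = 0.0
    for P, (u, v) in zip(pts3d, pts2d):
        q = A @ (R @ np.asarray(P, dtype=float) + p)
        u_hat, v_hat = q[0] / q[2], q[1] / q[2]   # 齐次归一化
        err += np.hypot(u_hat - u, v_hat - v)
    return err / len(pts2d)
```

在真值位姿处该误差为零，位姿偏离时误差增大，优化即在位姿变量上最小化该目标。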
通过识别图像采集设备采集的图像中的标记物角点,优化估计标记物相对于图像采集设备的位姿矩阵[ llR wm  llp wm],并基于公式计算运动臂末端相对于世界坐标系的位姿,并基于运动臂末端的位姿信息,进行运动臂末端位姿的实时闭环控制,以提高***的运动臂运动精度。
图7示出了根据本公开一些实施例的用于确定运动臂末端在世界坐标系中的位置的方法700的流程图。如图7所示，该方法700可以由运动臂***(例如运动臂***100)的控制装置(例如控制装置10)来执行。控制装置10可以配置在计算设备上。方法700可以由软件、固件和/或硬件来实现。
在步骤701，基于运动臂末端的当前相对位姿，确定运动臂在世界坐标系中的当前位姿。在一些实施例中，可以基于识别的标记物角点，确定标记物相对于图像采集设备的当前相对位姿(例如姿态 llR wm和位置 llp wm)。基于标记物与运动臂末端的已知相对位姿关系，以及图像采集设备在世界坐标系中的位姿，可以确定标记物所在的运动臂末端在世界坐标系中的位置。例如，确定运动臂末端在世界坐标系中的位置可以基于如下公式：
Wp tipWR ll( llR wm wmp tip+ llp wm)+ Wp ll
                                (13)
其中, Wp tip为运动臂末端在世界坐标系{W}中的位置, wmp tip为运动臂末端在标记物坐标系{wm}中的位置; WR llWp ll分别为图像采集设备在世界坐标系{W}中的姿态和位置; llR wmllp wm分别为标记物在图像采集设备坐标系{ll}中的姿态和位置。应当理解,图像采集设备的位姿确定, WR llWp ll可以为已知矩阵。
其中,
Figure PCTCN2021103719-appb-000014
为运动臂末端在标记物坐标系{wm}中的位置坐标。
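式(13)的坐标变换可以直接用矩阵运算示意如下（函数名为假设）：

```python
import numpy as np

def tip_position_world(W_R_ll, W_p_ll, ll_R_wm, ll_p_wm, wm_p_tip):
    """式(13): Wp_tip = WR_ll * (llR_wm * wmp_tip + llp_wm) + Wp_ll。
    先将末端由标记物坐标系变换到图像采集设备坐标系，再变换到世界坐标系。"""
    return W_R_ll @ (ll_R_wm @ wm_p_tip + ll_p_wm) + W_p_ll
```

当两级旋转均为单位阵时，结果退化为三段平移量的叠加，可据此验证实现。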
在本公开中,世界坐标系
Figure PCTCN2021103719-appb-000015
其原点可以位于固定基座(例如鞘套)的中心处(如图2(a)或图8所示)。标记物坐标系
Figure PCTCN2021103719-appb-000016
其原点与运动臂标记物中心重合,x轴从标记物中心指向第一个标记物角点(如图2(a)、图2(b)所示)。左镜头坐标系
Figure PCTCN2021103719-appb-000017
其原点与双镜头图像采集设备左镜头的中心重合,x轴从双镜头图像采集设备左镜头的中心指向双镜头图像采集设备右镜头的中心。右镜头坐标系
Figure PCTCN2021103719-appb-000018
其原点与双镜头图像采集设备右镜头的中心重合,x轴与双镜头图像采集设备左镜头坐标系的x轴重合,指向相同。
在步骤703，基于运动臂末端在世界坐标系中的目标位姿和当前位姿，确定运动臂末端的目标位姿和当前位姿的差值。在一些实施例中，运动臂末端在世界坐标系中的目标位姿可以由用户通过输入装置输入。通过比较计算，可以确定运动臂末端的目标位姿和当前位姿的差值。例如，可以基于运动臂末端在世界坐标系中的目标位置和当前位置，计算目标位置和当前位置的差值Δp，Δp可以如下：
Δp=p d-p c
                       (14)
其中,p d为运动臂末端在世界坐标系的目标位置,p c为运动臂末端在世界坐标系的当前位置(例如前述方法步骤701计算得到的 Wp tip)。
基于差值Δp得到运动臂末端的空间速度v及参数
ψ̇
分别为:
v=v xlimΔp/||Δp||        (15)
ψ̇=J +·[v T ω T] T        (16)
其中,v xlim为设定的最大允许空间速度;J +是J的Moore–Penrose伪逆矩阵;ω为运动臂末端角速度;ψ可以表示运动臂的弯转角度或者运动臂关节的旋转角度参数向量,
ψ̇
可以为ψ的导数。例如在运动臂为柔性运动臂时,ψ为柔性运动臂臂体的弯转角度参数向量,在运动臂为普通关节运动臂结构时,ψ为运动臂关节的旋转角度参数向量。
在每个运动控制循环,可以更新运动臂末端的参数空间向量:
ψ←ψ+ψ̇·Δt        (17)
其中,Δt为运动控制循环的周期。
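式(14)至(17)所述的单个运动控制循环可以示意如下（仅考虑位置部分，未包含末端角速度 ω；函数名与参数均为假设）：

```python
import numpy as np

def control_step(p_d, p_c, J, psi, v_lim, dt):
    """单个运动控制循环: 由位置差 Δp 得到限幅速度 v，经 J 的伪逆映射为参数速度，
    并按 ψ <- ψ + ψ_dot * Δt 更新（式(14)-(17)的位置部分示意）。"""
    dp = np.asarray(p_d, float) - np.asarray(p_c, float)   # Δp = p_d - p_c
    norm = np.linalg.norm(dp)
    if norm < 1e-12:
        return psi                                         # 已到达目标
    v = v_lim * dp / norm                                  # v = v_lim * Δp / ||Δp||
    psi_dot = np.linalg.pinv(J) @ v                        # ψ_dot = J^+ v
    return psi + psi_dot * dt
```

以单位雅可比矩阵为例，一个周期内参数向量沿目标方向前进 v_lim·Δt 的步长。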
在步骤705,基于差值和运动臂的逆运动学数值迭代算法,确定运动臂的驱动信号。例如,基于运动臂末端在世界坐标系中的目标位姿和当前位姿的差值,通过运动臂运动学模型的逆运动学数值迭代算法,可以确定运动臂所包括的多个关节在当前运动控制循环内的驱动值(或者控制运动臂运动的对应多个电机的驱动值)。应当理解,运动学模型可以表示运动臂的关节空间和任务空间的运动关系的数学模型。例如,运动学模型可以通过DH参数法和指数积表示法等方法建立。
在一些实施例中，方法700还可以包括：以预定周期，确定运动臂的驱动信号以实现多个运动控制循环。例如，迭代地执行多个运动控制循环，在每个运动控制循环，可以执行根据本公开一些实施例的控制方法，以控制运动臂运动到目标位姿。通过迭代地执行多个运动控制循环，可以实现运动臂末端位置的实时闭环控制，可以提高运动臂的位置控制精度，避免运动臂末端存在负载时因开环控制算法的运动学建模不准确而引入的误差。应理解，经本公开的方法实现运动臂的位置控制，能减小运动臂(例如连续体柔性臂)的轨迹跟踪误差。例如，在一些实施例中，运动臂的轨迹跟踪误差可以降低至开环控制误差的25.23%。
在一些实施例中,以运动臂为柔性运动臂(例如连续体柔性臂)作为示例。图8和图9分别示出根据本公开一些实施例中的连续体柔性臂的结构示意图、连续体柔性臂的单个连续体段的结构示意图。如图8和图9所示,连续体柔性臂可包括两段连续体段和两个直杆段,每个连续体段(如图9)包括2个自由度结构,其中一个直杆段位于两个连续体之间,另一个直杆段含1个进给自由度和1个整体绕自身轴线旋转自由度。
如图9所示,每个连续体段可以包括基座环、末端环以及贯穿基座环和末端环的多根并列的结构骨,多根结构骨可以与末端环固定连接,与基座环滑动连接。连续体柔性臂的运动过程为平面弯转,连续体柔性臂及其包含的连续体段运动学描述如下:
在本公开中,应当理解,基座环坐标系
Figure PCTCN2021103719-appb-000023
附着在第t节连续体段的基座环上,其原点位于基座环中心,XY平面与基座环平面重合,
Figure PCTCN2021103719-appb-000024
从基座环中心指向第一根结构骨。
弯曲平面坐标系
Figure PCTCN2021103719-appb-000025
其原点与基座环坐标系原点重合,XZ平面和弯曲平面重合,
Figure PCTCN2021103719-appb-000026
Figure PCTCN2021103719-appb-000027
重合。
弯曲平面坐标系
Figure PCTCN2021103719-appb-000028
其原点位于末端环中心，XZ平面和弯曲平面重合，
Figure PCTCN2021103719-appb-000029
Figure PCTCN2021103719-appb-000030
重合。
末端环坐标系
Figure PCTCN2021103719-appb-000031
附着在第t节连续体段的末端环上,其原点位于末端环中心,XY平面与末端环平面重合,
Figure PCTCN2021103719-appb-000032
从末端环中心指向第一根结构骨。
在一些实施例中,连续柔性臂以及连续体段运动学建模的各符号的定义可以如表1所示。
表1
Figure PCTCN2021103719-appb-000033
如图9所示,单个连续体段的运动学如下:
单个连续体段末端的位置 tbp tc、姿态 tbR te可以如以下公式所示:
Figure PCTCN2021103719-appb-000034
tbR tetbR t1 t1R t2 t2R te
                                   (19)
单个连续体段末端的角速度ω t、线速度v t和θ t、δ t速度的关系,可以如以下公式所示:
Figure PCTCN2021103719-appb-000035
在一些实施例中,整个连续体柔性臂的运动学描述建立如下:
根据图8中所示的各坐标系间的变换关系,连续体柔性臂的末端位置姿态在世界坐标系{w}中可表示为:
WT tipWT 1b 1bT 1e 1eT 2b 2bT 2e 2eT tip
                    (21)
其中, WT tip表示连续体的末端相对于世界坐标系的齐次变换矩阵; WT 1b表示第一个连续体的基座环相对于世界坐标系的齐次变换矩阵; 1bT 1e表示第一个连续体的末端环相对于第一个连续体的基座环的齐次变换矩阵; 1eT 2b表示第二个连续体的基座环相对于第一个连续体的末端环的齐次变换矩阵; 2bT 2e表示第二个连续体的末端环相对于第二个连续体的基座环的齐次变换矩阵; 2eT tip表示连续体的末端相对于第二个连续体的末端环的齐次变换矩阵。
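式(21)的齐次变换链式乘法可以示意如下（函数名为假设）：

```python
import numpy as np

def make_T(R, p):
    """由旋转矩阵 R 和平移向量 p 构造 4x4 齐次变换矩阵。"""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def chain(*Ts):
    """式(21)的链式乘法: WT_tip = WT_1b @ 1bT_1e @ 1eT_2b @ 2bT_2e @ 2eT_tip。"""
    out = np.eye(4)
    for T in Ts:
        out = out @ T
    return out
```

两个纯平移变换的链乘结果即平移量叠加，可据此快速验证。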
应理解,连续体柔性臂的末端位置在世界坐标系{w}中的齐次变换矩阵可表示为:
WT tipWT wm wmT tip
                                (22)
应理解,整个连续体柔性臂的末端的速度,可以如以下公式所示:
Figure PCTCN2021103719-appb-000036
其中,
Figure PCTCN2021103719-appb-000037
为速度雅可比矩阵;矩阵中W 1,W 2和W 3分别可以如以下公式所示:
Figure PCTCN2021103719-appb-000038
Figure PCTCN2021103719-appb-000039
W 3=J 2v
                            (26)
Figure PCTCN2021103719-appb-000040
为在坐标系A下,从坐标系B到坐标系C的向量的表达;
Figure PCTCN2021103719-appb-000041
为所述向量的反对称矩阵。
该雅可比矩阵中：
Figure PCTCN2021103719-appb-000042
本领域技术人员可以理解，基于连续体运动臂运动学模型的逆运动学解算，例如基于公式(15)、(16)和(23)，可以确定控制连续体运动臂运动的对应多个电机在当前运动控制循环内的驱动值。
在一些实施例中,本公开提供了一种计算机可读存储介质,计算机可读存储介质可以包括至少一个指令,至少一个指令由处理器执行以将处理器配置为执行以上任何实施例中的控制方法。
在一些实施例中,本公开提供了一种计算机***,可以包括非易失性存储介质和至少一个处理器。非易失性存储介质可以包括至少一个指令。处理器被配置为执行至少一个指令以将处理器配置为执行以上任何实施例中的控制方法。
在一些实施例中,计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是但不限于电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意组合。
在一些实施例中，计算机可读取存储介质可以包括但不限于：便携式计算机盘、硬盘、只读存储器(ROM)、随机存取存储器(RAM)、可擦除可编程只读存储器(EPROM)、电可擦可编程只读存储器(EEPROM)、闪存或其他固态存储器技术、CD-ROM、数字多功能盘(DVD)、HD-DVD、蓝光(Blu-ray)或其他光存储设备、磁带、磁盘存储或其他磁性存储设备、或能用于存储所需信息且可以由计算机访问的任何其他介质，其上存储有计算机可执行指令，计算机可执行指令在机器(例如计算机设备)中运行时，使得机器执行本公开的控制方法。应当理解，计算机设备可以包括个人计算机、服务器或者网络设备等。
应理解,可由计算机可执行指令实现流程图和方框图中的每一流程和方框、以及流程图和方框图中的流程和方框的结合。可提供这些计算机可执行指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和方框图一个方框或多个方框中指定的功能的装置。
本公开的一些实施例,能够基于视觉跟踪的闭环控制方法及设置的标记物,实时检测运动臂末端的姿态,有效降低运动臂的轨迹跟踪误差,可降低运动控制误差。而且,根据本公开一些实施例的控制方法可以以预定周期执行运动控制循环,每次运动控制循环中,确定运动臂末端的姿态,并对运动臂的运动进行控制,以实现目标位姿。
本公开的一些实施例,基于有针对性的标记物角点搜索策略,大大降低了运算量,可以迭代实现对于运动臂末端的轨迹跟踪和误差校正。
本公开的一些实施例，通过在不同步骤使用不同的方法，以对标记物角点进行识别和确定，可以避免由于各种因素导致的标记物角点的误识别和漏识别，可以在短时间内实现高精度的运动臂末端的跟踪和实时校正。且本公开的技术方案具有广泛的适应性，可以适用于多种标记物和标记物识别设备，也可以广泛地应用于机器人，例如可以用于对手术机器人、工业机器人等的运动末端的跟踪及校正。
本公开还公开了以下:
1.一种运动臂的控制方法,包括:
获得由图像采集设备采集的图像,所述图像包括设置在所述运动臂末端上的标记物的图像,所述标记物包括多个标记物角点;
识别所述采集的图像中的标记物角点;以及
基于所述识别的标记物角点,确定所述运动臂末端相对于所述图像采集设备的当前相对位姿。
2.如第1项所述的控制方法,还包括:
在所述采集的图像中确定感兴趣区域(ROI);
在所述ROI中识别疑似标记物角点;以及
基于标准角点模型,对所述ROI中的疑似标记物角点进行相关性检验,以判断所述ROI图像中的疑似标记物角点是否为标记物角点。
3.如第2项所述的控制方法,还包括:
确定所述ROI为全图像,或者基于上一帧图像中的疑似标记物角点位置,确定所述ROI为局部图像;以及
将所述ROI转为相应的灰度图像。
4.如第3项所述的控制方法,所述局部图像包括以上一帧图像中的疑似标记物角点的平均坐标构成的虚点为中心的设定距离范围,所述设定距离包括多个所述疑似标记物角点平均间隔距离的预定倍数。
5.如第2-4中任一项所述的控制方法,在所述ROI中识别疑似标记物角点包括:
确定所述ROI中的每个像素点的角点似然值(CL);
将所述ROI划分成多个子ROI;
确定每个所述子ROI中CL值最大的像素点;
基于所述多个子ROI的多个所述CL值最大的像素点,确定CL值大于第一阈值的像素点集合;
确定所述像素点集合中CL值最大的像素点,作为第一疑似标记物角点;以及
基于所述第一疑似标记物角点,以设定步长沿多个边缘方向搜索第二疑似标记物角点。
6.如第5项所述的控制方法,还包括:
确定以第一疑似标记物角点为中心的第一预定范围中的各像素点的梯度方向和梯度权重;
基于所述第一预定范围中的各像素点的梯度方向和梯度权重,确定所述多个边缘方向。
7.如第5-6中任一项所述的控制方法,还包括:
基于所述第一疑似标记物角点和所述第二疑似标记物角点,调整边缘方向;以及沿经调整的边缘方向,搜索第三疑似标记物角点。
8.如第5-7项中任一项所述的控制方法,还包括:
基于所述第一疑似标记物角点和所述第二疑似标记物角点,向相反边缘方向搜索。
9.如第5-7项中任一项所述的控制方法,还包括:
响应于疑似标记物角点数量大于或等于第二阈值,停止所述搜索;或者
响应于搜索的距离大于所述第一疑似标记物角点和所述第二疑似标记物角点之间的距离的预定倍数或者响应于确定的疑似标记物角点数量小于所述第二阈值，基于所述像素点集合中CL值从大到小排序的下一个像素点作为第一疑似标记物角点，以所述设定步长沿边缘方向搜索疑似标记物角点，直到疑似标记物角点数量大于或等于第二阈值。
10.如第2-9项中任一项所述的控制方法,对所述疑似标记物角点进行相关性检验包括:
确定所述标准角点模型的灰度分布与以所述疑似标记物角点的像素点为中心的第二预定范围内像素灰度分布之间的相关性系数;
响应于所述相关性系数大于第三阈值,确定所述疑似标记物角点为标记物角点。
11.如第2-9项中任一项所述的控制方法,还包括:
响应于所述ROI中的疑似标记物角点为标记物角点,对所述标记物角点进行亚像素定位。
12.如第11项所述的控制方法,对所述标记物角点进行亚像素定位包括:
基于以下公式,计算所述标记物角点的坐标:
S(x,y)=ax 2+by 2+cx+dy+exy+f
其中,S(x,y)为每个子ROI中的所有像素点的CL值拟合函数,a、b、c、d、e、f为系数;
x c=(de-2bc)/(4ab-e 2)，y c=(ce-2ad)/(4ab-e 2)
其中,x c为所述标记物角点的x坐标,y c为所述标记物角点的y坐标。
13.如第1-12项中任一项所述的控制方法,还包括:
确定所述标记物角点相对于标记物坐标系的三维坐标;
确定所述标记物角点在所述采集的图像中的二维坐标;
确定所述二维坐标和所述三维坐标的单应性矩阵的解析解;以及
基于所述单应性矩阵的解析解，确定所述标记物相对于所述图像采集设备的当前相对位姿。
14.如第13项所述的控制方法,确定所述标记物角点相对于标记物坐标系的三维坐标包括基于以下公式确定所述标记物角点相对于所述标记物坐标系的三维坐标:
Figure PCTCN2021103719-appb-000044
其中,
Figure PCTCN2021103719-appb-000045
为第i个标记物角点对应的三维坐标,r wm为所述标记物形成的圆筒的半径,β为标记物角点的分布角。
15.如第14项所述的控制方法,确定所述二维坐标和所述三维坐标的单应性矩阵的解析解包括:
基于以下公式计算所述解析解:
Figure PCTCN2021103719-appb-000046
H left=[h 1 h 2 h 3]=η·A·[r 1 r 2  llp wm]       (2)
式(1)中,
Figure PCTCN2021103719-appb-000047
为第i个标记物角点的扩展坐标，[u i,v i]为第i个标记物角点的二维坐标， lefts i为任意的非零标量（比例因子），以使式(1)左右两边最后一列数字相同，H left为单应性矩阵；以及
式(2)中,A为图像采集设备的已知内部参数矩阵,r 1,r 2llR wm矩阵的前两列,η为任意的非零标量。
16.如第15项所述的控制方法,确定所述二维坐标和所述三维坐标的单应性矩阵的解析解还包括:
基于以下公式计算所述解析解:
L·x H=0
其中,x H=[h 1 T h 2 T h 3 T] T,L是2n×9矩阵,n为所述采集的图像中确定的标记物角点数量,x H为矩阵L的奇异值分解(SVD)后最小奇异值对应的右奇异向量。
17.如第16项所述的控制方法,还包括:
基于以下公式,确定所述标记物相对于所述图像采集设备的当前相对位姿:
r 1=κ·A -1·h 1
r 2=κ·A -1·h 2
llR wm=[r 1 r 2 r 1×r 2]
llp wm=κ·A -1h 3
其中,κ=1/||A -1·h 1||=1/||A -1·h 2||。
18.如第14-17项中任一项所述的控制方法,还包括:
基于所述单应性矩阵的解析解,执行非线性优化处理以获得所述标记物相对于所述图像采集设备的当前相对位姿。
19.如第18项所述的控制方法,执行所述非线性优化处理包括使所述三维坐标在所述采集的图像上的投影与所述标记物角点之间的平均几何误差最小化。
20.如第19项所述的控制方法,
所述图像采集设备是单镜头图像采集设备,所述采集的图像是单镜头图像,执行所述非线性优化处理包括:
基于以下公式执行优化处理:
Figure PCTCN2021103719-appb-000049
或者
所述图像采集设备为双镜头图像采集设备,所述采集的图像包括左镜头图像和右镜头图像,执行所述非线性优化处理包括:
基于以下公式执行优化处理:
Figure PCTCN2021103719-appb-000050
其中,H left和H right分别为与左镜头图像、右镜头图像相关的单应性矩阵; lefts irights i分别为与左镜头图像、右镜头图像相关的单应性矩阵的解析解相关的比例因子,
Figure PCTCN2021103719-appb-000051
Figure PCTCN2021103719-appb-000052
分别为与左镜头图像、右镜头图像相关的第i个标记物角点的扩展坐标。
21.如第1-20项中任一项所述的控制方法,还包括:
基于所述运动臂末端的当前相对位姿,确定所述运动臂在世界坐标系中的当前位姿;
基于所述运动臂末端在世界坐标系中的目标位姿和当前位姿,确定所述运动臂末端的目标位姿和当前位姿的差值;以及
基于所述差值和所述运动臂的逆运动学数值迭代算法,确定所述运动臂的驱动信号。
22.如第21项所述的控制方法,还包括:
以预定周期,确定所述运动臂的所述驱动信号以实现多个运动控制循环。
23.如第1-22项中任一项所述的控制方法,还包括:
基于所述识别的标记物角点,确定所述标记物相对于所述图像采集设备的当前相对位姿;以及
基于所述标记物的当前相对位姿和以下对应关系,确定所述运动臂末端在世界坐标系中的当前位姿:
Wp tipWR ll( llR wm wmp tip+ llp wm)+ Wp ll
其中, Wp tip为所述运动臂末端在世界坐标系{W}中的位置, wmp tip为所述运动臂末端在标记物坐标系{wm}中的位置; WR llWp ll分别为所述图像采集设备在世界坐标系{W}中的姿态和位置; llR wmllp wm分别为所述标记物在图像采集设备坐标系{ll}中的姿态和位置;
Figure PCTCN2021103719-appb-000053
其中,
Figure PCTCN2021103719-appb-000054
为所述运动臂末端在所述标记物坐标系{wm}中的位置坐标。
24.如第1-23项中任一项所述的控制方法,所述标记物呈筒状,包括多个分布在筒状上的标记物角点。
25.一种运动臂***,包括:
图像采集设备,用于采集图像;
至少一个运动臂,包括运动臂末端,所述运动臂末端上设有标记物,所述标记物包括多个标记物角点;
控制装置,被配置成执行如第1-24项中任一项所述的控制方法。
26.一种计算机可读存储介质，其上存储有一个或多个计算机可执行指令，所述计算机可执行指令由处理器执行以将处理器配置为执行如第1-24项中任一项所述的控制方法。
注意,上述仅为本公开的示例性实施例及所运用技术原理。本领域技术人员会理解,本公开不限于这里的特定实施例,对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本公开的保护范围。因此,虽然通过以上实施例对本公开进行了较为详细的说明,但是本公开不仅仅限于以上实施例,在不脱离本公开构思的情况下,还可以包括更多其他等效实施例,而本公开的范围由所附的权利要求范围决定。

Claims (20)

  1. 一种运动臂的控制方法,包括:
    获得由图像采集设备采集的图像,所述图像包括设置在所述运动臂末端上的标记物的图像,所述标记物包括多个标记物角点;
    识别所述采集的图像中的标记物角点;以及
    基于所述识别的标记物角点,确定所述运动臂末端相对于所述图像采集设备的当前相对位姿。
  2. 根据权利要求1所述的控制方法,其特征在于,还包括:
    在所述采集的图像中确定感兴趣区域(ROI);
    在所述ROI中识别疑似标记物角点;以及
    基于标准角点模型,对所述ROI中的疑似标记物角点进行相关性检验,以判断所述ROI图像中的疑似标记物角点是否为标记物角点。
  3. 根据权利要求2所述的控制方法,其特征在于,还包括:
    确定所述ROI为全图像,或者基于上一帧图像中的疑似标记物角点位置,确定所述ROI为局部图像;以及
    将所述ROI转为相应的灰度图像;
    所述局部图像包括以上一帧图像中的疑似标记物角点的平均坐标构成的虚点为中心的设定距离范围,所述设定距离包括多个所述疑似标记物角点平均间隔距离的预定倍数。
  4. 根据权利要求2所述的控制方法,其特征在于,在所述ROI中识别疑似标记物角点包括:
    确定所述ROI中的每个像素点的角点似然值(CL);
    将所述ROI划分成多个子ROI;
    确定每个所述子ROI中CL值最大的像素点;
    基于所述多个子ROI的多个所述CL值最大的像素点,确定CL值大于第一阈值的像素点集合;
    确定所述像素点集合中CL值最大的像素点,作为第一疑似标记物角点;以及
    基于所述第一疑似标记物角点,以设定步长沿多个边缘方向搜索第二疑似标记物角点。
  5. 根据权利要求4所述的控制方法,其特征在于,还包括:
    确定以第一疑似标记物角点为中心的第一预定范围中的各像素点的梯度方向和梯度权重;
    基于所述第一预定范围中的各像素点的梯度方向和梯度权重,确定所述多个边缘方向。
  6. 根据权利要求4所述的控制方法,其特征在于,还包括:
    基于所述第一疑似标记物角点和所述第二疑似标记物角点,调整边缘方向;以及
    沿经调整的边缘方向,搜索第三疑似标记物角点;
    基于所述第一疑似标记物角点和所述第二疑似标记物角点,向相反边缘方向搜索。
  7. 根据权利要求4所述的控制方法,其特征在于,还包括:
    响应于疑似标记物角点数量大于或等于第二阈值,停止所述搜索;或者
    响应于搜索的距离大于所述第一疑似标记物角点和所述第二疑似标记物角点之间的距离的预定倍数或者响应于确定的疑似标记物角点数量小于所述第二阈值,基于所述像素点集合中CL值从大到小排序的下一个像素点作为第一疑似标记物角点,以所述设定步长沿边缘方向搜索疑似标记物角点,直到疑似标记物角点数量大于或等于第二阈值。
  8. 根据权利要求2所述的控制方法,其特征在于,对所述疑似标记物角点进行相关性检验包括:
    确定所述标准角点模型的灰度分布与以所述疑似标记物角点的像素点为中心的第二预定范围内像素灰度分布之间的相关性系数;
    响应于所述相关性系数大于第三阈值,确定所述疑似标记物角点为标记物角点。
  9. 根据权利要求2所述的控制方法,其特征在于,还包括:
    响应于所述ROI中的疑似标记物角点为标记物角点,对所述标记物角点进行亚像素定位;
    对所述标记物角点进行亚像素定位包括:
    基于以下公式,计算所述标记物角点的坐标:
    S(x,y)=ax 2+by 2+cx+dy+exy+f
    其中,S(x,y)为每个子ROI中的所有像素点的CL值拟合函数,a、b、c、d、e、f为系数;
    x c=(de-2bc)/(4ab-e 2)，y c=(ce-2ad)/(4ab-e 2)
    其中,x c为所述标记物角点的x坐标,y c为所述标记物角点的y坐标。
  10. 根据权利要求1所述的控制方法,其特征在于,还包括:
    确定所述标记物角点相对于标记物坐标系的三维坐标;
    确定所述标记物角点在所述采集的图像中的二维坐标;
    确定所述二维坐标和所述三维坐标的单应性矩阵的解析解;以及
    基于所述单应性矩阵的解析解,确定所述标记物相对于所述图像采集设备的当前相对位姿。
  11. 根据权利要求10所述的控制方法，其特征在于，确定所述标记物角点相对于标记物坐标系的三维坐标包括基于以下公式确定所述标记物角点相对于所述标记物坐标系的三维坐标：
    Figure PCTCN2021103719-appb-100002
    其中,
    Figure PCTCN2021103719-appb-100003
    为第i个标记物角点对应的三维坐标,r wm为所述标记物形成的圆筒的半径,β为标记物角点的分布角。
  12. 根据权利要求11所述的控制方法,其特征在于,确定所述二维坐标和所述三维坐标的单应性矩阵的解析解包括:
    基于以下公式计算所述解析解:
    Figure PCTCN2021103719-appb-100004
    H left=[h 1 h 2 h 3]=η·A·[r 1 r 2  llp wm]    (2)
    式(1)中,
    Figure PCTCN2021103719-appb-100005
    为第i个标记物角点的扩展坐标，[u i,v i]为第i个标记物角点的二维坐标， lefts i为任意的非零标量（比例因子），以使式(1)左右两边最后一列数字相同，H left为单应性矩阵；以及
    式(2)中,A为图像采集设备的已知内部参数矩阵,r 1,r 2llR wm矩阵的前两列,η为任意的非零标量;
    确定所述二维坐标和所述三维坐标的单应性矩阵的解析解还包括:
    基于以下公式计算所述解析解:
    L·x H=0
    其中,x H=[h 1 T h 2 T h 3 T] T,L是2n×9矩阵,n为所述采集的图像中确定的标记物角点数量,x H为矩阵L的奇异值分解(SVD)后最小奇异值对应的右奇异向量;
    还包括基于以下公式,确定所述标记物相对于所述图像采集设备的当前相对位姿:
    r 1=κ·A -1·h 1
    r 2=κ·A -1·h 2
    llR wm=[r 1 r 2 r 1×r 2]
    llp wm=κ·A -1h 3
    其中,κ=1/||A -1·h 1||=1/||A -1·h 2||。
  13. 根据权利要求10所述的控制方法,还包括:
    基于所述单应性矩阵的解析解,执行非线性优化处理以获得所述标记物相对于所述图像采集设备的当前相对位姿;
    执行所述非线性优化处理包括使所述三维坐标在所述采集的图像上的投影与所述标记物角点之间的平均几何误差最小化。
  14. 根据权利要求13所述的控制方法,其特征在于,
    所述图像采集设备是单镜头图像采集设备,所述采集的图像是单镜头图像,执行所述非线性优化处理包括:
    基于以下公式执行优化处理:
    Figure PCTCN2021103719-appb-100007
    或者
    所述图像采集设备为双镜头图像采集设备,所述采集的图像包括左镜头图像和右镜头图像,执行所述非线性优化处理包括:
    基于以下公式执行优化处理:
    Figure PCTCN2021103719-appb-100008
    其中,H left和H right分别为与左镜头图像、右镜头图像相关的单应性矩阵; lefts irights i分别为与左镜头图像、右镜头图像相关的单应性矩阵的解析解相关的比例因子,
    Figure PCTCN2021103719-appb-100009
    Figure PCTCN2021103719-appb-100010
    分别为与左镜头图像、右镜头图像相关的第i个标记物角点的扩展坐标。
  15. 根据权利要求1所述的控制方法,还包括:
    基于所述运动臂末端的当前相对位姿,确定所述运动臂在世界坐标系中的当前位姿;
    基于所述运动臂末端在世界坐标系中的目标位姿和当前位姿,确定所述运动臂末端的目标位姿和当前位姿的差值;以及
    基于所述差值和所述运动臂的逆运动学数值迭代算法,确定所述运动臂的驱动信号。
  16. 根据权利要求15所述的控制方法,还包括:
    以预定周期,确定所述运动臂的所述驱动信号以实现多个运动控制循环。
  17. 根据权利要求1所述的控制方法,其特征在于,还包括:
    基于所述识别的标记物角点,确定所述标记物相对于所述图像采集设备的当前相对位姿;以及
    基于所述标记物的当前相对位姿和以下对应关系,确定所述运动臂末端在世界坐标系中的当前位姿:
    Wp tipWR ll( llR wm wmp tip+ llp wm)+ Wp ll
    其中, Wp tip为所述运动臂末端在世界坐标系{W}中的位置, wmp tip为所述运动臂末端在标记物坐标系{wm}中的位置; WR llWp ll分别为所述图像采集设备在世界坐标系{W}中的姿态和位置; llR wmllp wm分别为所述标记物在图像采集设备坐标系{ll}中的姿态和位置;
    Figure PCTCN2021103719-appb-100011
    其中,
    Figure PCTCN2021103719-appb-100012
    为所述运动臂末端在所述标记物坐标系{wm}中的位置坐标。
  18. 根据权利要求1所述的控制方法,其特征在于,所述标记物呈筒状,包括多个分布在筒状上的标记物角点。
  19. 一种运动臂***,包括:
    图像采集设备,用于采集图像;
    至少一个运动臂,包括运动臂末端,所述运动臂末端上设有标记物,所述标记物包括多个标记物角点;
    控制装置,被配置成获得由所述图像采集设备采集的图像,所述图像包括设置在所述运动臂末端上的标记物的图像;所述控制装置还被配置成识别所述采集的图像中的标记物角点,以及基于所述识别的标记物角点,确定所述运动臂末端相对于所述图像采集设备的当前相对位姿。
  20. 一种计算机可读存储介质，其上存储有一个或多个计算机可执行指令，所述计算机可执行指令由处理器执行以将处理器配置为执行控制方法，所述方法包括：
    获得由图像采集设备采集的图像,所述图像包括设置在运动臂末端上的标记物的图像,所述标记物包括多个标记物角点;
    识别所述采集的图像中的标记物角点;以及
    基于所述识别的标记物角点,确定所述运动臂末端相对于所述图像采集设备的当前相对位姿。
PCT/CN2021/103719 2020-07-11 2021-06-30 运动臂***以及控制方法 WO2022012337A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010665741 2020-07-11
CN202010665741.7 2020-07-11

Publications (1)

Publication Number Publication Date
WO2022012337A1 true WO2022012337A1 (zh) 2022-01-20

Family

ID=79232854

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103719 WO2022012337A1 (zh) 2020-07-11 2021-06-30 运动臂***以及控制方法

Country Status (2)

Country Link
CN (1) CN113910219B (zh)
WO (1) WO2022012337A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114536329A (zh) * 2022-02-16 2022-05-27 中国医学科学院北京协和医院 基于复合标识确定可形变机械臂的外部受力的方法及机器人***
CN114711968A (zh) * 2022-03-31 2022-07-08 广东工业大学 一种基于手术机器人***的无标定靶区定位跟踪方法

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN114536331B (zh) * 2022-02-16 2023-10-20 中国医学科学院北京协和医院 基于关联标识确定可形变机械臂的外部受力的方法及机器人***
CN114347037B (zh) * 2022-02-16 2024-03-29 中国医学科学院北京协和医院 基于复合标识的机器人***故障检测处理方法及机器人***
CN114536330B (zh) * 2022-02-16 2023-10-20 中国医学科学院北京协和医院 基于多个位姿标识确定可形变机械臂的外部受力的方法及机器人***
CN114536401B (zh) * 2022-02-16 2024-03-29 中国医学科学院北京协和医院 基于多个位姿标识的机器人***故障检测处理方法及机器人***
CN114536402B (zh) * 2022-02-16 2024-04-09 中国医学科学院北京协和医院 基于关联标识的机器人***故障检测处理方法及机器人***
CN114742785A (zh) * 2022-03-31 2022-07-12 启东普力马机械有限公司 基于图像处理的液压接头清洁度控制方法
CN115761602B (zh) * 2023-01-07 2023-04-18 深圳市蓝鲸智联科技有限公司 用于车窗控制***的视频智能识别方法
CN117245651A (zh) * 2023-09-12 2023-12-19 北京小米机器人技术有限公司 机械臂插拔控制方法、装置、设备及存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2009285778A (ja) * 2008-05-29 2009-12-10 Toyota Industries Corp ロボットハンドの姿勢検知システム
JP2010172986A (ja) * 2009-01-28 2010-08-12 Fuji Electric Holdings Co Ltd ロボットビジョンシステムおよび自動キャリブレーション方法
CN102922521A (zh) * 2012-08-07 2013-02-13 中国科学技术大学 一种基于立体视觉伺服的机械臂***及其实时校准方法
CN109949366A (zh) * 2019-03-08 2019-06-28 鲁班嫡系机器人(深圳)有限公司 一种定位设备及其方法
CN110959099A (zh) * 2017-06-20 2020-04-03 卡尔蔡司Smt有限责任公司 确定可移动物体在空间中的位置的***、方法和标记物

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN107784672B (zh) * 2016-08-26 2021-07-20 百度在线网络技术(北京)有限公司 用于获取车载相机的外部参数的方法和装置
CN108717709B (zh) * 2018-05-24 2022-01-28 东北大学 图像处理***及图像处理方法
CN108827316B (zh) * 2018-08-20 2021-12-28 南京理工大学 基于改进的Apriltag标签的移动机器人视觉定位方法
CN109597067B (zh) * 2018-12-21 2023-05-09 创意银航(山东)技术有限公司 毫米波辐射计线列扫描低识别度目标的分析方法和***

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP2009285778A (ja) * 2008-05-29 2009-12-10 Toyota Industries Corp ロボットハンドの姿勢検知システム
JP2010172986A (ja) * 2009-01-28 2010-08-12 Fuji Electric Holdings Co Ltd ロボットビジョンシステムおよび自動キャリブレーション方法
CN102922521A (zh) * 2012-08-07 2013-02-13 中国科学技术大学 一种基于立体视觉伺服的机械臂***及其实时校准方法
CN110959099A (zh) * 2017-06-20 2020-04-03 卡尔蔡司Smt有限责任公司 确定可移动物体在空间中的位置的***、方法和标记物
CN109949366A (zh) * 2019-03-08 2019-06-28 鲁班嫡系机器人(深圳)有限公司 一种定位设备及其方法

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN114536329A (zh) * 2022-02-16 2022-05-27 中国医学科学院北京协和医院 基于复合标识确定可形变机械臂的外部受力的方法及机器人***
CN114536329B (zh) * 2022-02-16 2024-05-17 中国医学科学院北京协和医院 基于复合标识确定可形变机械臂的外部受力的方法及机器人***
CN114711968A (zh) * 2022-03-31 2022-07-08 广东工业大学 一种基于手术机器人***的无标定靶区定位跟踪方法

Also Published As

Publication number Publication date
CN113910219A (zh) 2022-01-11
CN113910219B (zh) 2024-07-05

Similar Documents

Publication Publication Date Title
WO2022012337A1 (zh) 运动臂***以及控制方法
CN110116407B (zh) 柔性机器人位姿测量方法及装置
CN109308693B (zh) 由一台ptz相机构建的目标检测和位姿测量单双目视觉***
JP6271953B2 (ja) 画像処理装置、画像処理方法
US20180066934A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
CN109658457B (zh) 一种激光与相机任意相对位姿关系的标定方法
Doignon et al. Segmentation and guidance of multiple rigid objects for intra-operative endoscopic vision
WO2016193781A1 (en) Motion control system for a direct drive robot through visual servoing
CN109785373B (zh) 一种基于散斑的六自由度位姿估计***及方法
CN111360821A (zh) 一种采摘控制方法、装置、设备及计算机刻度存储介质
WO2022217794A1 (zh) 一种动态环境移动机器人的定位方法
Gratal et al. Visual servoing on unknown objects
US20230219221A1 (en) Error detection method and robot system based on a plurality of pose identifications
US20230219220A1 (en) Error detection method and robot system based on association identification
CN113510700A (zh) 一种机器人抓取任务的触觉感知方法
CN116766194A (zh) 基于双目视觉的盘类工件定位与抓取***和方法
JP5698815B2 (ja) 情報処理装置、情報処理装置の制御方法及びプログラム
Baek et al. Full state visual forceps tracking under a microscope using projective contour models
Mair et al. Efficient camera-based pose estimation for real-time applications
CN211028657U (zh) 一种智能焊接机器人***
JP2014238687A (ja) 画像処理装置、ロボット制御システム、ロボット、画像処理方法及び画像処理プログラム
Yu et al. Vision-based method of kinematic calibration and image tracking of position and posture for 3-RPS parallel robot
CN116051630B (zh) 高频率6DoF姿态估计方法及***
Wu et al. Robot motion visual measurement based on RANSAC and weighted constraints method
Kahmen et al. Orientation of point clouds for complex surfaces in medical surgery using trinocular visual odometry and stereo ORB-SLAM2

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21843241

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21843241

Country of ref document: EP

Kind code of ref document: A1