WO2023231098A1 - Target tracking method, system and robot - Google Patents

Target tracking method, system and robot Download PDF

Info

Publication number
WO2023231098A1
Authority
WO
WIPO (PCT)
Prior art keywords
code
coordinates
checkerboard
marker
visible light
Prior art date
Application number
PCT/CN2022/101290
Other languages
English (en)
French (fr)
Inventor
祝世杰
佘丰客
郑钢铁
潘勇卫
赵喆
宋飞
Original Assignee
清华大学 (Tsinghua University)
北京清华长庚医院 (Beijing Tsinghua Changgung Hospital)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University) and 北京清华长庚医院 (Beijing Tsinghua Changgung Hospital)
Publication of WO2023231098A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/90 Identification means for patients or instruments, e.g. tags
    • A61B90/94 Identification means for patients or instruments, e.g. tags coded with symbols, e.g. text
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/90 Identification means for patients or instruments, e.g. tags
    • A61B90/94 Identification means for patients or instruments, e.g. tags coded with symbols, e.g. text
    • A61B90/96 Identification means for patients or instruments, e.g. tags coded with symbols, e.g. text using barcodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images


Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A target tracking method, a robot and a target tracking system, relating to the technical field of image recognition. The method includes: acquiring a visible light image and a depth image of a marker attached to the surface of a tracked target, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square; performing QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker; obtaining the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs; and obtaining the position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points. After the position information is obtained continuously, the robot is controlled, through its execution mechanism, to follow the motion of the tracked target.

Description

Target tracking method, system and robot
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims priority to Chinese Patent Application No. 202210597366.6, entitled "Target Tracking Method, System and Robot", filed on May 30, 2022, the entire content of which is incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of image recognition, and in particular to a target tracking method, a target tracking system and a robot.
BACKGROUND
In general, when a robot tracks and positions a specific part of the human body, it moves an instrument that interacts with the person along a pre-planned target path, and the robot's inherent stability and reliability assist other operators in performing targeted operations. However, the tracked subject's respiratory motion and the flexibility of the human body cause movement and posture changes during the operation, which in turn alter the target path. If the robot cannot adjust its own position in response to such changes in the target path, operation accuracy degrades and, in severe cases, the operation may even fail.
To address this, the related art proposes tracking the person's respiratory motion, movement and posture changes during the operation by means of a marker attached to the human body. To make the tracking reliable, that technique implants the marker into the patient's bone so that the marker keeps a stable, unchanging pose relative to the body. However, the implantation requires an incision and damages the bone, causing secondary trauma and, in severe cases, even a later secondary fracture of the operated subject, so it is not suitable for wide adoption.
SUMMARY
One object of the present disclosure is to propose a target tracking method in which the marker does not need to intrude into the tracked target during tracking, offering good safety and highly accurate tracking results.
A second object of the present disclosure is to propose a robot.
A third object of the present disclosure is to propose a target tracking system.
To achieve the above objects, a first aspect of the embodiments of the present disclosure proposes a target tracking method. The method includes: acquiring a visible light image and a depth image of a marker attached to the surface of a tracked target, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square; performing QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker; obtaining the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs; and obtaining position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points, wherein the position information is used to track the tracked target.
To achieve the above objects, a second aspect of the embodiments of the present disclosure proposes a robot. The robot includes: a visible light image acquisition module configured to acquire a visible light image of a marker attached to the surface of a tracked target, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square; a depth image acquisition module configured to acquire a depth image of the marker; an image processing module configured to perform QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker, to obtain the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs, and to obtain position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points, wherein the position information is used to track the tracked target; and an execution module configured to generate motion instructions for the robot according to the continuously obtained position information of the tracked target in 3D space, and to control the robot to follow the motion of the tracked target in 3D space.
To achieve the above objects, a third aspect of the embodiments of the present disclosure proposes a target tracking system. The system includes: a marker attached to the surface of the tracked object, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square; and the robot according to the second aspect of the embodiments of the present disclosure.
According to the target tracking method, system and robot of the embodiments of the present disclosure, attaching a marker bearing a black-and-white checkerboard pattern to the surface of the tracked target avoids the need in the related art to insert the marker into the interior of the tracked target and the resulting secondary injury, so safety is good; and since what is ultimately obtained during tracking is the 3D coordinates of the checkerboard corner points, tracking accuracy can be guaranteed.
Additional aspects and advantages of the present disclosure will be set forth in part in the following description; in part they will become apparent from the following description, or may be learned by practice of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present disclosure and, together with the description, serve to explain the principles of the embodiments.
Figure 1 is a schematic flowchart of a target tracking method according to an embodiment of the present disclosure;
Figure 2 is a schematic diagram of a checkerboard according to a first example of the present disclosure;
Figure 3 is a schematic diagram of a checkerboard according to a second example of the present disclosure;
Figure 4 is a schematic diagram of a checkerboard according to a third example of the present disclosure;
Figure 5 is a schematic diagram of a single QR code template according to an example of the present disclosure;
Figure 6 is a schematic flowchart of step S102 of the target tracking method according to an embodiment of the present disclosure;
Figure 7 is a schematic diagram of matching a QR code template against a visible light image according to an example of the present disclosure;
Figure 8 is a schematic flowchart of a target tracking method according to another embodiment of the present disclosure;
Figure 9 is a schematic flowchart of step S103 of the target tracking method according to an embodiment of the present disclosure;
Figure 10 is a schematic flowchart of obtaining the regions of interest in the visible light image from the 2D coordinates of the QR code corner points and the QR code IDs, according to an example of the present disclosure;
Figure 11 is a schematic diagram of the homography transformation from the marker's standard image to the visible light image according to an example of the present disclosure;
Figure 12(a) is a schematic diagram of the position of a tracked target in the visible light image according to an example of the present disclosure;
Figure 12(b) is a schematic diagram of the position of the tracked target in the depth image according to an example of the present disclosure;
Figure 12(c) is a schematic diagram of the position of the tracked target in 3D space according to an example of the present disclosure;
Figure 13 is a schematic structural diagram of a robot according to an embodiment of the present disclosure;
Figure 14 is a schematic structural diagram of a target tracking system according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure are described in detail below, with examples of the embodiments shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present disclosure and shall not be construed as limiting it.
The target tracking method, system and robot of the embodiments of the present disclosure are described below with reference to Figures 1-14 and specific implementations.
Figure 1 is a schematic flowchart of a target tracking method according to an embodiment of the present disclosure. As shown in Figure 1, the target tracking method provided by this embodiment includes the following steps:
S101: Acquire a visible light image and a depth image of a marker attached to the surface of the tracked target, where the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square.
In some embodiments, the marker can use a flexible planar substrate and can be cut into any shape. The marker is provided with a black-and-white checkerboard pattern, and a QR code is provided inside each white square, as shown in Figures 2 and 3. In some examples, to guarantee the best field of view during tracking, the checkerboard carrying the QR codes can be cut into any shape, as shown in Figure 4, and the checkerboards can be combined in any manner according to actual needs to obtain the checkerboard pattern on the final marker.
It should be noted that in the checkerboard patterns shown in Figures 2, 3 and 4, only the white squares (half of the squares) carry QR codes. This design guarantees the detection rate of the QR codes in subsequent QR code detection, free from interference by adjacent codes. Moreover, as shown in Figure 5, the QR code inside each white square is unique and oriented; when designing the marker, the different QR codes must be maximally dissimilar to one another, which lowers the probability that a code is misrecognized, or recognized as another code, during detection.
As an example, in practical applications the required number of QR codes can be calculated from the actual usage scenario. Typically, a long strip layout, e.g. 5×20, is chosen for scenarios that require cutting and recombination, while a square layout, e.g. 5×5, is chosen for scenarios that do not. It should be noted that the above designs are merely exemplary and do not limit the embodiments of the present disclosure.
S102: Perform QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker.
As a feasible implementation, a corresponding number of information dictionaries can be generated in advance according to the number of selected QR codes, where each information dictionary corresponds to one information-bearing QR code and its ID. Later, when a QR code is detected in the visible light image, the 2D coordinates of its corner points and the ID of the information dictionary from which the code was generated are obtained.
It should be noted that when performing QR code detection on the visible light image, at least 2 non-collinear QR codes must be detected, and each detected QR code contributes 4 QR code corner points. From 2 detected QR codes, the remaining QR codes can be inferred globally (i.e., on the marker). A sketch of this detection step follows.
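As an illustration of this step (not part of the original disclosure), the sketch below uses OpenCV's ArUco markers as a stand-in for the disclosure's custom QR-style codes: both schemes rely on a pre-generated ID dictionary and return, per detected code, an ID and 4 corner 2D coordinates. The file name is hypothetical.

```python
import cv2

# ArUco stands in for the disclosure's custom codes; both yield a code ID
# plus 4 corner 2D coordinates, looked up against a pre-generated dictionary.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

image = cv2.imread("marker_visible_light.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
corners, ids, _ = detector.detectMarkers(image)

# At least 2 non-collinear codes must be detected before tracking can proceed.
if ids is None or len(ids) < 2:
    raise RuntimeError("fewer than 2 codes detected: cannot localize the marker")

for marker_corners, marker_id in zip(corners, ids.flatten()):
    print(marker_id, marker_corners.reshape(4, 2))  # ID and its 4 corner points
```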
S103: Obtain the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs.
S104: Obtain the position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points, where the position information is used to track the tracked target.
Specifically, once the 3D coordinates of the checkerboard corner points in the marker have been acquired through step S103, the position information in 3D space of the tracked target to which the marker is attached can be obtained; target tracking is achieved by continuously acquiring the marker's 3D coordinates in 3D space across consecutive frames.
As a possible implementation, as shown in Figure 6, in the target tracking method of the embodiments of the present disclosure, performing QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker may include the following steps:
S201: For each QR code template of the marker, match the template against the visible light image to obtain the similarity between the QR code template and the QR codes in all white checkerboard squares of the visible light image.
Illustratively, in some embodiments, the visible light image of the marker can first be rescaled at various scales, after which the QR code template is matched against it. As shown in Figure 7, during matching an image recognition algorithm yields the similarity between the QR code template and the QR codes in the white squares of the rescaled visible light image.
S202: Determine, based on the similarity, whether a QR code matching the QR code template is detected.
It should be noted that a matching QR code is considered detected only when the similarity is above a preset threshold, where the preset threshold can be set according to the actual situation, e.g. 95%.
S203: If detected, obtain the QR code corner 2D coordinates and the QR code ID of the detected code.
Specifically, as proposed in the above embodiments, each QR code has a corresponding information dictionary, and the information dictionary includes the QR code ID; therefore, when a QR code is detected, its ID can be obtained by retrieving the relevant data from its information dictionary.
Thus, through steps S201-S203, the corner 2D coordinates and IDs of the QR codes detected in the marker's visible light image are obtained. A sketch of the matching in S201-S202 follows.
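A minimal Python sketch of the matching step, under assumptions the disclosure does not fix: normalized cross-correlation as the similarity measure and an illustrative scale set; only the multi-scale matching itself and the 95% threshold example come from the text above.

```python
import cv2
import numpy as np

def match_qr_template(visible_image, template, scales=(0.5, 0.75, 1.0, 1.5, 2.0),
                      threshold=0.95):
    # Rescale the visible light image at several scales, then slide the QR
    # code template over each rescaled image and score the similarity.
    detections = []
    th, tw = template.shape[:2]
    for s in scales:
        resized = cv2.resize(visible_image, None, fx=s, fy=s)
        if resized.shape[0] < th or resized.shape[1] < tw:
            continue  # template no longer fits at this scale
        scores = cv2.matchTemplate(resized, template, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(scores >= threshold)  # keep matches above the threshold
        for x, y in zip(xs, ys):
            # Map the match location back into original image coordinates.
            detections.append((x / s, y / s, float(scores[y, x])))
    return detections
```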
In some embodiments of the present disclosure, to guarantee the stability of the target tracking method during operation, the acquired QR code corner 2D coordinates and QR code IDs of the marker also need to be verified. Figure 8 is a schematic flowchart of a target tracking method according to another embodiment of the present disclosure. As shown in Figure 8, the target tracking method may include the following steps:
S301: Acquire a visible light image and a depth image of the marker attached to the surface of the tracked target, where the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white square.
S302: Perform QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker.
S303: Obtain the actual position distribution of the QR codes in the marker from the QR code corner 2D coordinates and the QR code IDs.
S304: Compare the standard position distribution of the QR codes in the marker image with the actual position distribution, and verify the QR code corner 2D coordinates and the QR code IDs.
S305: Discard, or alternatively adjust, any QR code corner 2D coordinates and QR code IDs whose verification is abnormal.
Specifically, in this embodiment, QR code detection is performed on the marker's visible light image; when an abnormality appears in the verification results, the affected QR code corner 2D coordinates and QR code IDs can simply be discarded. If, in some embodiments, too many abnormalities appear in the verification results, the situation must be reported promptly, indicating that the quality of the acquired visible light image is poor. In practice, such situations may be caused by external illumination (e.g., changing light, changing reflection angles), occlusion and similar issues; they can then be corrected by external intervention, such as re-acquiring the visible light image of the marker, so as to ensure the stability and reliability of the subsequent tracking work. A sketch of one way to implement this verification follows.
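A minimal sketch of S303-S305 under assumed details: a RANSAC-fitted homography from the standard layout to the detections stands in for the comparison of standard versus actual position distributions, and a pixel-error threshold decides which detections are abnormal; the disclosure does not specify the comparison rule.

```python
import cv2
import numpy as np

def verify_detections(detected, standard_layout, max_err_px=3.0):
    # detected / standard_layout: dicts mapping QR code ID -> (4, 2) arrays of
    # corner coordinates in the visible image / the marker's standard image.
    ids = [i for i in detected if i in standard_layout]
    if len(ids) < 2:
        return {}  # fewer than 2 known codes: nothing can be verified
    src = np.float32([standard_layout[i] for i in ids]).reshape(-1, 1, 2)
    dst = np.float32([detected[i] for i in ids]).reshape(-1, 1, 2)
    # Robustly fit the layout-to-image mapping; outlier codes do not drag it.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return {}
    proj = cv2.perspectiveTransform(src, H).reshape(len(ids), 4, 2)
    verified = {}
    for k, i in enumerate(ids):
        err = np.linalg.norm(proj[k] - detected[i], axis=1).max()
        if err <= max_err_px:
            verified[i] = detected[i]  # consistent with the standard layout
    return verified  # codes with abnormal positions are discarded
```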
S306: Obtain the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the QR code corner 2D coordinates and the QR code IDs.
S307: Obtain the position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points, where the position information is used to track the tracked target.
It should be noted that for the specific implementation of steps S301, S302, S306 and S307 in this embodiment, reference may be made to the implementation of S101-S104 in the above embodiments of the present disclosure, which is not repeated here.
In this embodiment, by verifying the acquired QR code corner 2D coordinates and QR code IDs of the marker, the result of QR code detection serves as a criterion for the stability of the tracking process; when too many abnormalities occur, prompt feedback and external intervention are used to correct them, improving the reliability of the subsequent tracking work.
At this point, the QR code corner 2D coordinates and QR code IDs detected from the marker have been obtained and verified. With a correct verification result, the 3D coordinates of the checkerboard corner points in the marker can be computed from the corner 2D coordinates and IDs of the normally verified QR codes together with the already acquired depth image of the marker.
As a possible implementation, as shown in Figure 9, in the target tracking method of the embodiments of the present disclosure, obtaining the checkerboard corner point coordinates in the marker according to the depth image, the QR code corner 2D coordinates and the QR code IDs may include the following steps:
S401: Obtain the regions of interest in the visible light image from the QR code corner 2D coordinates and the QR code IDs, where each region of interest corresponds to one checkerboard corner point.
S402: For each region of interest, obtain the corresponding checkerboard corner 3D coordinates from that region of interest and the depth image.
In this implementation, as an example, as shown in Figure 10, obtaining the regions of interest in the visible light image from the QR code corner 2D coordinates and the QR code IDs may include the following steps:
S501: Perform checkerboard corner detection according to the QR code IDs.
S502: For each detected checkerboard corner point, compute the homography transformation matrix from the marker's standard image to the visible light image using the 8 QR code corner 2D coordinates of its two adjacent QR codes, and obtain the region of interest of the checkerboard corner in the visible light image from the homography matrix and the corner's preset area in the marker's standard image. The preset area is a square centered on the checkerboard corner point whose diagonal vertices are the two adjacent QR code corner points.
Specifically, as shown in Figure 11, which is a schematic diagram of the homography transformation from the marker's standard image to the visible light image in an example of the present disclosure, every checkerboard corner point has, besides itself, two adjacent QR codes, and each QR code has 4 corner points. In this embodiment, the 2D coordinates of the 8 corner points of the two QR codes adjacent to each detected checkerboard corner can therefore be used to establish the homography transformation matrix from the marker's standard image to the visible light image. The preset area in the standard image is the square centered on the detected checkerboard corner whose diagonal is determined by the corner points of its two adjacent QR codes. Once the preset area is determined, the region of interest of the checkerboard corner in the visible light image follows from the preset area and the homography transformation; see the ROI (Region Of Interest) part shown in Figure 11. A sketch of this mapping follows.
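A minimal sketch of S502 for a single checkerboard corner, assuming the 8 QR corner correspondences and the preset square are already available; all argument names are illustrative.

```python
import cv2
import numpy as np

def roi_for_corner(std_qr_corners, img_qr_corners, preset_square):
    # std_qr_corners / img_qr_corners: (8, 2) arrays holding the corner 2D
    # coordinates of the two QR codes adjacent to one checkerboard corner,
    # in the marker's standard image and in the visible light image.
    src = np.float32(std_qr_corners).reshape(-1, 1, 2)
    dst = np.float32(img_qr_corners).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst)  # standard image -> visible image
    # preset_square: (4, 2) vertices of the preset square centered on the
    # checkerboard corner in the standard image; mapping it through H gives
    # the region of interest in the visible light image.
    square = np.float32(preset_square).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(square, H).reshape(-1, 2)
```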
As a possible implementation, after the region of interest of a checkerboard corner in the visible light image has been obtained, the corresponding checkerboard corner 3D coordinates can be obtained from that region of interest and the depth image. This implementation may include the following computational steps.
As an example, the 3D coordinates of every pixel in the region of interest are computed by
$X_i = f(u_i, v_i), \quad i \in \{1, \dots, N\},$
where $i \in \{1, \dots, N\}$ indexes the $N$ pixels in the region of interest, $X_i$ denotes the 3D coordinates of the $i$-th pixel, $u_i$ denotes its 2D coordinates in the depth image, $v_i$ denotes its 2D coordinates in the visible light image, and the mapping $f$ is determined from the camera parameters corresponding to the depth image and the visible light image.
As an example, the checkerboard corner 3D coordinates are then computed as the weighted average
$X_c = \sum_{i=1}^{N} w_i X_i \Big/ \sum_{i=1}^{N} w_i,$
where $X_c$ denotes the checkerboard corner 3D coordinates and $w_i$ the averaging weights.
That is to say, in this implementation the 3D coordinates of the $i$-th pixel in the region of interest can first be computed from its 2D coordinates in the depth image and its 2D coordinates in the visible light image. After the 3D coordinates of every pixel in the region of interest have been acquired, the 3D coordinates (precise coordinates) of the checkerboard corner are obtained by taking the weighted average of the 3D coordinates of all pixels in the region. Subsequently, the method proposed in step S104 of the embodiments of the present disclosure can be continued, deriving the position information of the tracked target in 3D space from the obtained 3D coordinates to achieve target tracking. A sketch of these steps follows.
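A sketch of this first implementation under stated assumptions: a depth map registered to the visible light image and a pinhole model with intrinsics fx, fy, cx, cy realize the mapping f, and uniform weights serve as a placeholder; the disclosure says only that f follows from the camera parameters and that a weighted average is taken.

```python
import numpy as np

def corner_3d_weighted(depth_roi, roi_origin, fx, fy, cx, cy, weights=None):
    # depth_roi: (H, W) depth values (meters) for the region of interest,
    # registered to the visible light image; roi_origin: (u0, v0) pixel
    # coordinates of the ROI's top-left corner in the full image.
    h, w = depth_roi.shape
    u0, v0 = roi_origin
    us, vs = np.meshgrid(u0 + np.arange(w), v0 + np.arange(h))
    z = depth_roi
    x = (us - cx) * z / fx  # pinhole back-projection plays the role of f
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    if weights is None:
        weights = np.ones(len(pts))  # uniform weights as a placeholder
    weights = weights / weights.sum()
    return (pts * weights[:, None]).sum(axis=0)  # weighted mean = corner 3D
```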
As another possible implementation, after the region of interest of a checkerboard corner in the visible light image has been obtained, the corresponding checkerboard corner 3D coordinates can be obtained from that region of interest and the depth image through the following computational steps.
The 3D coordinates of the 4 region corner points of the region of interest are computed by
$X_i = f(u_i, v_i), \quad i \in \{1, \dots, 4\},$
where $i \in \{1, \dots, 4\}$ indexes the 4 region corner points of the region of interest, $X_i$ denotes the 3D coordinates of the $i$-th region corner point, $u_i$ denotes its 2D coordinates in the depth image, $v_i$ denotes its 2D coordinates in the visible light image, and $f$ is determined from the camera parameters corresponding to the depth image and the visible light image.
The center point coordinates $X_0$ are interpolated from the 3D coordinates of the 4 region corner points:
$X_0 = \frac{1}{4} \sum_{i=1}^{4} X_i.$
A plane $P: k \cdot x + l \cdot y + m \cdot z = 0$ is fitted, in coordinates relative to $X_0$, such that the squared residuals of the 4 region corner points,
$\sum_{i=1}^{4} \left(k x_i + l y_i + m z_i\right)^2,$
are minimized.
The checkerboard corner 3D coordinates $X_c$ are then obtained from $X_0$, $k$, $l$, $m$ and the plane equation $P$.
That is to say, in this implementation the 3D coordinates of the $i$-th region corner point can first be computed from its 2D coordinates in the depth image and its 2D coordinates in the visible light image. After the 3D coordinates of the 4 region corner points have been acquired, a plane is fitted in the 3D coordinate system to the corner 3D coordinates of the whole region of interest, and the center point coordinates are obtained on the fitting plane by interpolating the 3D coordinates of the 4 region corner points. The checkerboard corner 3D coordinates are finally obtained from the resulting center point coordinates and the fitting plane.
Optionally, in this embodiment a depth camera is selected as the camera. In this implementation, to guarantee the accuracy of the acquired checkerboard corner 3D coordinates, the plane fitting avoids the errors that the depth camera produces at region corners or image edges. A sketch of these steps follows.
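A sketch of this second implementation under stated assumptions: the center X0 is taken as the mean of the 4 region corner points, the plane is fitted by a least-squares SVD to 3D points sampled across the region of interest (an assumed detail), and the corner is X0 projected onto the fitted plane; the disclosure names the interpolation, the fit and the final step without fixing their exact form.

```python
import numpy as np

def corner_3d_plane_fit(region_corners_3d, roi_points_3d):
    # region_corners_3d: (4, 3) 3D coordinates of the 4 region corner points;
    # roi_points_3d: (N, 3) 3D points inside the region of interest used for
    # the plane fit (assumed input; see the note above).
    corners = np.asarray(region_corners_3d, dtype=float)
    x0 = corners.mean(axis=0)  # center interpolated from the 4 corners
    pts = np.asarray(roi_points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    # Least-squares plane fit: the normal (k, l, m) is the right-singular
    # vector of the centered points with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1] / np.linalg.norm(vt[-1])
    # Project the interpolated center onto the fitted plane; this suppresses
    # the depth errors that depth cameras show at region corners and edges.
    return x0 - np.dot(x0 - centroid, n) * n
```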
As an example, after the checkerboard corner 3D coordinates have been acquired as in the above embodiments, the change of the checkerboard's 3D coordinates across consecutive frames is obtained through the transformation relationship between the corner points in those frames; that is, the 3D coordinates of the tracked target to which the marker is attached are acquired, achieving target tracking. Figure 12 is a schematic diagram of the position of the tracked target in an example of the present disclosure: Figure 12(a) shows the position of the tracked target in the visible light image, Figure 12(b) its position in the depth image, and Figure 12(c) its position in 3D space; the tracked target is obtained from the transformation relationships among Figures 12(a), 12(b) and 12(c).
It should be noted that the QR codes in the marker of the embodiments of the present disclosure must both be quickly detectable and carry enough positional data for the subsequent computation of the marker's checkerboard corner 3D coordinates; illustratively, in the 3D coordinate system the marker may encode at least 6 degrees of freedom, including 3 translational and 3 rotational degrees of freedom. A sketch of recovering this 6-degree-of-freedom motion from corner correspondences follows.
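One common way to recover these 6 degrees of freedom from matched checkerboard corner 3D coordinates in consecutive frames is the Kabsch (SVD) rigid alignment sketched below; this is an assumption consistent with the text, not necessarily the exact method of the disclosure.

```python
import numpy as np

def rigid_transform(prev_pts, curr_pts):
    # prev_pts / curr_pts: (N, 3) matched checkerboard corner 3D coordinates
    # in frame t-1 and frame t (N >= 3, not all collinear).
    P = np.asarray(prev_pts, dtype=float)
    Q = np.asarray(curr_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets.
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation, det(R) = +1
    t = cq - R @ cp                          # translation
    return R, t  # 3 rotational + 3 translational degrees of freedom
```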
In summary, in the target tracking method of the embodiments of the present disclosure, attaching a marker bearing a black-and-white checkerboard pattern to the surface of the tracked target avoids the related art's need to insert the marker into the tracked target's interior and the resulting secondary injury. Meanwhile, during tracking, the 2D coordinates and IDs of the QR code corner points in the marker are first obtained by QR code detection and then verified; only QR codes with normal verification results take part in the subsequent tracking work, which greatly improves the stability and reliability of the tracking process. At the same time, in deriving the checkerboard corner 3D coordinates of the marker, the 3D coordinates of the tracked target to which the marker is attached are determined through the transformation relationship between corner points in consecutive frames, so the target's position and its change over time can be acquired in real time, guaranteeing that tracking is real-time; and since it is the 3D coordinates of the checkerboard corner points that are acquired, tracking accuracy can be guaranteed.
An embodiment of the present disclosure proposes a robot 10. As shown in Figure 13, the robot 10 includes: a visible light image acquisition module 101, a depth image acquisition module 102, an image processing module 103 and an execution module 104.
The visible light image acquisition module 101 is configured to acquire a visible light image of the marker attached to the surface of the tracked target, where the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white square. The depth image acquisition module 102 is configured to acquire the depth image of the marker. The image processing module 103 is configured to perform QR code detection on the visible light image to obtain the QR code corner 2D coordinates and QR code IDs in the marker, to obtain the checkerboard corner 3D coordinates in the marker according to the depth image, the QR code corner 2D coordinates and the QR code IDs, and to obtain the position information of the tracked target in 3D space according to the checkerboard corner 3D coordinates, where the position information is used to track the tracked target. The execution module 104 is configured to generate motion instructions for the robot according to the continuously obtained position information of the tracked target in 3D space and to control the robot to follow the tracked target's motion in 3D space.
In addition, it should be noted that other components and functions of the robot 10 of this embodiment are known to those skilled in the art; to reduce redundancy, they are not described here.
An embodiment of the present disclosure further proposes a target tracking system. As shown in Figure 14, the target tracking system 1 includes: a marker 20 and a robot 10.
The marker 20 is attached to the surface of the tracked object, where the marker 20 is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white square.
It should be noted that for other specific implementations of the target tracking system of the embodiments of the present disclosure, reference may be made to the specific implementations of the target tracking method of the above embodiments.
The above embodiments merely illustrate the technical solutions of the present disclosure and do not limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure.

Claims (10)

  1. A target tracking method, characterized in that the method comprises:
    acquiring a visible light image and a depth image of a marker attached to the surface of a tracked target, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square;
    performing QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker;
    obtaining the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs;
    obtaining position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points, wherein the position information is used to track the tracked target.
  2. The target tracking method according to claim 1, characterized in that performing QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker comprises:
    for each QR code template of the marker, matching the template against the visible light image to obtain the similarity between the QR code template and the QR codes in all white checkerboard squares of the visible light image;
    determining, based on the similarity, whether a QR code matching the QR code template is detected;
    if detected, obtaining the QR code corner 2D coordinates and the QR code ID of the detected QR code.
  3. The target tracking method according to claim 1, characterized in that, before obtaining the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs, the method further comprises:
    obtaining the actual position distribution of the QR codes in the marker from the 2D coordinates of the QR code corner points and the QR code IDs;
    comparing the standard position distribution of the QR codes in the marker image with the actual position distribution, and verifying the 2D coordinates of the QR code corner points and the QR code IDs;
    discarding, or alternatively adjusting, any QR code corner 2D coordinates and QR code IDs whose verification is abnormal.
  4. The target tracking method according to claim 1, characterized in that obtaining the checkerboard corner point coordinates in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs comprises:
    obtaining regions of interest in the visible light image from the 2D coordinates of the QR code corner points and the QR code IDs, wherein each region of interest corresponds to one checkerboard corner point;
    for each region of interest, obtaining the corresponding checkerboard corner 3D coordinates from that region of interest and the depth image.
  5. The target tracking method according to claim 4, characterized in that obtaining the regions of interest in the visible light image from the 2D coordinates of the QR code corner points and the QR code IDs comprises:
    performing checkerboard corner detection according to the QR code IDs;
    for each detected checkerboard corner point, computing the homography transformation matrix from the marker's standard image to the visible light image using the 8 QR code corner 2D coordinates of its two adjacent QR codes, and obtaining the region of interest of the checkerboard corner in the visible light image from the homography transformation matrix and the corner's preset area in the marker's standard image, wherein the preset area is a square area centered on the checkerboard corner point with the two adjacent QR code corner points as diagonal vertices.
  6. The target tracking method according to claim 5, characterized in that obtaining the corresponding checkerboard corner 3D coordinates from the region of interest and the depth image comprises:
    computing the 3D coordinates of every pixel in the region of interest by
    $X_i = f(u_i, v_i), \quad i \in \{1, \dots, N\},$
    where $i \in \{1, \dots, N\}$ indexes the $N$ pixels in the region of interest, $X_i$ denotes the 3D coordinates of the $i$-th pixel, $u_i$ denotes the 2D coordinates of the $i$-th pixel in the depth image, $v_i$ denotes the 2D coordinates of the $i$-th pixel in the visible light image, and $f$ is determined from the camera parameters corresponding to the depth image and the visible light image;
    computing the checkerboard corner 3D coordinates as the weighted average
    $X_c = \sum_{i=1}^{N} w_i X_i \Big/ \sum_{i=1}^{N} w_i,$
    where $X_c$ denotes the checkerboard corner 3D coordinates and $w_i$ the averaging weights.
  7. The target tracking method according to claim 5, characterized in that obtaining the corresponding checkerboard corner 3D coordinates from the region of interest and the depth image comprises:
    computing the 3D coordinates of the 4 region corner points of the region of interest by
    $X_i = f(u_i, v_i), \quad i \in \{1, \dots, 4\},$
    where $i \in \{1, \dots, 4\}$ indexes the 4 region corner points of the region of interest, $X_i$ denotes the 3D coordinates of the $i$-th region corner point, $u_i$ denotes the 2D coordinates of the $i$-th region corner point in the depth image, $v_i$ denotes the 2D coordinates of the $i$-th region corner point in the visible light image, and $f$ is determined from the camera parameters corresponding to the depth image and the visible light image;
    interpolating the center point coordinates $X_0$ from the 3D coordinates of the 4 region corner points:
    $X_0 = \frac{1}{4} \sum_{i=1}^{4} X_i;$
    fitting a plane $P: k \cdot x + l \cdot y + m \cdot z = 0$, in coordinates relative to $X_0$, such that the squared residuals of the 4 region corner points,
    $\sum_{i=1}^{4} \left(k x_i + l y_i + m z_i\right)^2,$
    are minimized;
    obtaining the checkerboard corner 3D coordinates $X_c$ from $X_0$, $k$, $l$, $m$ and the plane equation $P$.
  8. The target tracking method according to claim 1, characterized in that the marker uses a flexible planar substrate and can be cut into any shape.
  9. A robot, characterized in that the robot comprises:
    a visible light image acquisition module configured to acquire a visible light image of a marker attached to the surface of a tracked target, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square;
    a depth image acquisition module configured to acquire a depth image of the marker;
    an image processing module configured to perform QR code detection on the visible light image to obtain the 2D coordinates of the QR code corner points and the QR code IDs in the marker, to obtain the 3D coordinates of the checkerboard corner points in the marker according to the depth image, the 2D coordinates of the QR code corner points and the QR code IDs, and to obtain position information of the tracked target in 3D space according to the 3D coordinates of the checkerboard corner points, wherein the position information is used to track the tracked target;
    an execution module configured to generate motion instructions for the robot according to the continuously obtained position information of the tracked target in 3D space, and to control the robot to follow the motion of the tracked target in 3D space.
  10. A target tracking system, characterized in that the system comprises:
    a marker attached to the surface of a tracked object, wherein the marker is provided with a black-and-white checkerboard pattern and a QR code is provided inside each white checkerboard square; and
    the robot according to claim 9.
PCT/CN2022/101290 2022-05-30 2022-06-24 Target tracking method, system and robot WO2023231098A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210597366.6 2022-05-30
CN202210597366.6A CN115170646A (zh) 2022-05-30 2022-05-30 Target tracking method, system and robot

Publications (1)

Publication Number Publication Date
WO2023231098A1 (zh) 2023-12-07

Family

ID=83483677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/101290 WO2023231098A1 (zh) 2022-05-30 2022-06-24 Target tracking method, system and robot

Country Status (3)

Country Link
US (1) US20230310090A1 (en)
CN (1) CN115170646A (zh)
WO (1) WO2023231098A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830604B (zh) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Anomaly detection method and medium for positioning QR codes

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110146030A (zh) * 2019-06-21 2019-08-20 招商局重庆交通科研设计院有限公司 Slope surface deformation monitoring system and method based on checkerboard markers
CN111179356A (zh) * 2019-12-25 2020-05-19 北京中科慧眼科技有限公司 Binocular camera calibration method, apparatus and system, and calibration board, based on ArUco codes
CN111243032A (zh) * 2020-01-10 2020-06-05 大连理工大学 Fully automatic checkerboard corner detection method
CN112132906A (zh) * 2020-09-22 2020-12-25 西安电子科技大学 Extrinsic parameter calibration method and system between a depth camera and a visible light camera
KR102206108B1 (ko) * 2019-09-20 2021-01-21 광운대학교 산학협력단 Multi-RGB-D-camera-based point cloud registration method for volumetric object capture
CN114224489A (zh) * 2021-12-12 2022-03-25 浙江德尚韵兴医疗科技有限公司 Trajectory tracking system for a surgical robot and tracking method using the system

Also Published As

Publication number Publication date
US20230310090A1 (en) 2023-10-05
CN115170646A (zh) 2022-10-11

Similar Documents

Publication Publication Date Title
US11123144B2 (en) Registration of frames of reference
US11039121B2 (en) Calibration apparatus, chart for calibration, chart pattern generation apparatus, and calibration method
US8508527B2 (en) Apparatus and method of building map for mobile robot
CN106650682B (zh) Face tracking method and device
CN104408732B (zh) Large-field-of-view depth measurement system and method based on omnidirectional structured light
US20150085072A1 (en) Ambiguity-free optical tracking system
US20150125035A1 (en) Image processing apparatus, image processing method, and storage medium for position and orientation measurement of a measurement target object
WO2023231098A1 (zh) Target tracking method, system and robot
US20180345040A1 (en) A target surface
JP6559377B2 (ja) Superimposition position correction device and superimposition position correction method
WO2017187694A1 (ja) Region-of-interest image generation device
Kang et al. Robustness and accuracy of feature-based single image 2-D–3-D registration without correspondences for image-guided intervention
US20220054103A1 (en) X-ray ripple markers for x-ray calibration
CN112541973A (zh) Virtual-real fusion method and system
JP2018173882A (ja) Information processing device, method, and program
CN112998856B (zh) Three-dimensional real-time positioning method
JP2009301181A (ja) Image processing apparatus, image processing program, image processing method, and electronic device
JP6566420B2 (ja) Surgical navigation system, surgical navigation method, and program
CN106580471A (zh) Image-guided navigation and positioning system and method
US10832422B2 (en) Alignment system for liver surgery
JP2004056230A (ja) Projection image registration system for three-dimensional objects
US11250593B2 (en) System and method for detecting and correcting defective image output from radiation-damaged video cameras
US11830184B2 (en) Medical image processing device, medical image processing method, and storage medium
WO2021056452A1 (zh) Patient position detection method and device, radiotherapy medical equipment, and readable storage medium
CN111437034A (zh) Positioning scale and marker point positioning method

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22944416

Country of ref document: EP

Kind code of ref document: A1