WO2022206436A1 - Dynamic position recognition prompting system and method thereof - Google Patents

Dynamic position recognition prompting system and method thereof

Info

Publication number
WO2022206436A1
WO2022206436A1 (PCT/CN2022/081729)
Authority
WO
WIPO (PCT)
Prior art keywords
recognition
range
specific object
preset
recognition range
Prior art date
Application number
PCT/CN2022/081729
Other languages
English (en)
French (fr)
Inventor
孙非
朱奕
郭晓杰
崔芙粒
单莹
Original Assignee
上海复拓知达医疗科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海复拓知达医疗科技有限公司 filed Critical 上海复拓知达医疗科技有限公司
Priority to US18/553,351 priority Critical patent/US20240185448A1/en
Publication of WO2022206436A1 publication Critical patent/WO2022206436A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/25User interfaces for surgical systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/107Visualisation of planned trajectories or target regions
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/108Computer aided selection or customisation of medical implants or cutting guides
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2065Tracking using image or pattern recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2068Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Definitions

  • the invention relates to the technical field of image processing, and in particular, to a dynamic position recognition prompting system and a method thereof.
  • when an image acquisition device recognizes the position of an object, algorithmic limitations cause the positioning accuracy to change dynamically as the distance between the recognized object and the device changes. For example, when the recognized object is far away from the image acquisition device, the number of distinguishable feature points decreases significantly, thus affecting the positioning accuracy. For another example, if the object to be recognized is very close to the image acquisition device, the image may be distorted due to lens distortion at different positions, thereby reducing the positioning accuracy. This phenomenon results in a specific area in front of the image acquisition device that is the best range for identifying and positioning surgical instruments. How to let the operator intuitively and conveniently keep the identifiable instrument within this range, ensuring that the positioning accuracy continues to meet design expectations, is one of the difficulties in realizing augmented reality navigation technology.
  • in augmented reality-based puncture surgery navigation, for example, the optical image acquisition device used to obtain the position information of the recognized object is worn on the operator's head, while the operator holds a puncture instrument recognizable by the image acquisition device and operates it to perform different surgical actions; this causes the distance between the image acquisition device and the recognized object to change dynamically at any time, thereby affecting the operation of the surgical process.
  • the purpose of the present invention is to provide a dynamic position recognition prompting system and method thereof.
  • a dynamic position recognition prompt system comprising: a recognition module, a judgment module, and a prompt module; wherein,
  • the recognition module is used to acquire the current scene image through the image acquisition device, and identify the current position of the specific object in the current scene image;
  • the judging module is used to judge whether the current position of the specific object is within the preset recognition range
  • the prompt module is configured to display prompt information in an augmented reality manner to prompt the user to adjust the specific object to the preset recognition range if the current position of the specific object is not within the preset recognition range.
  • the preset recognition range is a preset recognition distance range and/or a recognition angle range; the judgment module is specifically configured to judge whether the specific object is within the recognition distance range; and/or judge whether the specific object is within the recognition angle range.
  • the preset recognition range is set as a radially increasing spatial shape range along the photographing direction of the image acquisition device.
  • the judging module includes an optimal recognition range judging sub-module for judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
  • the judging module further includes a recognition range setting module for setting the preset recognition range and/or the optimal recognition range.
  • the optimal recognition range includes: an optimal recognition range preset according to a user instruction, and/or a historical optimal recognition range obtained according to data statistics.
  • the prompt module is further configured to display prompt information in an augmented reality manner, the prompt information at least includes the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device.
  • the prompt information is one or more combinations of text, color, pattern, and animation superimposed and displayed on the current scene in an augmented reality manner.
  • the prompting module further includes a movement prompting sub-module, which is used for displaying position movement information and prompting the user to move the position of the specific object or the image acquisition device, so that the current position of the specific object is within the preset recognition range;
  • the position movement information includes a combination of one or more kinds of information among text, auxiliary lines, or animation superimposed and displayed on the current scene in an augmented reality manner.
  • the specific object is a specific object with recognizable features.
  • a method for prompting dynamic location recognition comprising: acquiring a current scene image; identifying the current position of a specific object in the current scene image; judging whether the current position of the specific object is within a preset recognition range; and
  • if the current position of the specific object is not within the preset recognition range, prompt information is displayed in an augmented reality manner to prompt the user to adjust the specific object to be within the preset recognition range.
  • the preset recognition range is a preset recognition distance range and/or a recognition angle range
  • the judging whether the current position of the specific object is within the preset recognition range includes:
  • the preset recognition range is set as a radially increasing spatial shape range along the photographing direction of the image acquisition device.
  • it also includes judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
  • the optimal recognition range can be dynamically adjusted according to the operation requirements: when fine manipulation of the specific object is required, the optimal recognition range is narrowed; when large-range manipulation is required, the optimal recognition range is expanded.
  • the optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained according to data statistics.
  • the prompt information includes at least the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device.
  • the prompt information is one or more combinations of text, color, pattern, and animation superimposed and displayed on the current scene in an augmented reality manner.
  • the displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object within the preset recognition range includes: displaying position movement information, prompting the user to move the specific object or the position of the image acquisition device, to The specific object is made to be within the preset recognition range; the position movement information includes any combination of one or more kinds of information among text, auxiliary lines, or animation superimposed and displayed on the current scene in an augmented reality manner.
  • the specific object is a specific object with recognizable features.
  • the present invention provides a dynamic position recognition prompting system and a method thereof.
  • positional-relationship adjustment information is displayed in an augmented reality manner, prompting the user to move the specific object or the image acquisition device so that the positional relationship between the specific object and the image acquisition device falls within the preset recognition range. The present invention can thus navigate the specific object and help the user bring it into the preset recognition range; for example, in augmented reality puncture surgery navigation it ensures that the user can position quickly and accurately and master this skill.
  • the present invention can also define the three-dimensional optimum recognition range and the three-dimensional optimum recognition angle relative to the image acquisition device according to the expected positioning accuracy. By calculating the relative relationship between the current position of the specific object and the best recognition area in real time, it prompts the operator whether the recognition position should be adjusted at present by means of color, graphics, animation, etc., and adjusts the specific object to the appropriate position, and then continues the subsequent surgical steps.
  • Fig. 1 is the structure diagram of the dynamic position recognition prompting system of the present invention
  • Fig. 2 is the flow chart of the dynamic position recognition prompting method of the present invention
  • FIG. 3 is a specific structural diagram of a specific object in the dynamic position recognition prompting method of the present invention.
  • Fig. 4 is the preset recognition range diagram in the dynamic position recognition prompting method of the present invention.
  • Fig. 5 is the optimal recognition distance range diagram in the dynamic position recognition prompting method of the present invention.
  • FIG. 6 is an angle prompting diagram in a preset recognition range in the dynamic position recognition prompting method of the present invention.
  • an image acquisition device is required to obtain images of the surgical scene, and then to locate and identify the surgical instruments in the scene.
  • Surgical navigation is performed during the operation, and surgical operations are then performed on the patient to achieve the purpose of treatment; during the entire navigation and positioning process, the surgical instruments need to remain within a preset recognition range, and during the operation it is necessary to continuously confirm whether the navigated instrument stays within the optimal recognition zone.
  • training alone cannot guarantee that operators can quickly and accurately master this skill. Therefore, a continuous, interactive method of prompting the optimal recognition range becomes necessary.
  • the setting of the preset recognition range usually needs to be based on the position of the image acquisition device.
  • in some embodiments, the image acquisition device is set at a fixed position in the surgical scene to capture the scene image in real time and identify the position of the surgical instrument; the preset recognition range is then a fixed position relative to the surgical scene. In other embodiments, the image acquisition device is not fixed; for example, the image acquisition device on the head-mounted device worn by the operator captures the scene image in real time to identify the position of the surgical instrument, in which case the preset recognition range is fixed relative to the position of the image acquisition device but not fixed relative to the surgical scene.
  • the present invention provides the user with positioning of the hand-held instrument.
  • the user is the observer of the whole navigation process and is also the operator who inserts the hand-held instrument into the subject's body.
  • the subject can be a person or another animal on which the user needs to operate.
  • the specific object in the present invention is a hand-held instrument, such as a medical device with identification features.
  • the medical device can be any tool that can be inserted into the subject's body, such as: a puncture needle, a biopsy needle, a radiofrequency or microwave ablation needle, an ultrasound probe, a rigid endoscope, oval forceps used in endoscopic surgery, an electrosurgical knife, or a stapler.
  • the present invention also provides a dynamic position recognition prompting system, comprising: a recognition module 1, a judgment module 2, and a prompting module 3; wherein,
  • the recognition module 1 is used to acquire the current scene image through the image acquisition device, and identify the current position of the specific object in the current scene image;
  • the judging module 2 is used for judging whether the current position of the specific object is within the preset recognition range
  • the prompt module 3 is used to display prompt information in an augmented reality manner to prompt the user to adjust the specific object to the preset recognition range if the current position of the specific object is not within the preset recognition range .
  • the preset recognition range is a preset recognition distance range and/or a recognition angle range
  • the judgment module 2 is specifically configured to judge whether the specific object is within the recognition distance range; and/or judge Whether the specific object is within the recognition angle range.
  • the preset recognition range is set as a spatial shape range that increases radially along the shooting direction of the image acquisition device, including: a spatial shape that takes the image acquisition device as the proximal end and widens radially toward the distal end along the image acquisition direction.
  • Exemplarily, the spatial shape is a cone-shaped structure.
  • the judging module 2 also includes an optimal recognition range judging sub-module 21 for judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
  • the judging module further includes a recognition range setting module 22 for setting the preset recognition range and/or the optimal recognition range.
  • the optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained according to data statistics.
  • the prompting module is further configured to display prompting information in an augmented reality manner, and the prompting information at least includes the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device.
  • the prompt information is one or more combinations of text, color, pattern, and animation superimposed and displayed on the current scene in an augmented reality manner.
  • the prompting module further includes a movement prompting sub-module, which is used for displaying position movement information and prompting the user to move the position of the specific object or the image acquisition device, so that the current position of the specific object is within the preset recognition range;
  • the position movement information includes: any combination of one or more kinds of information among text, auxiliary lines or animations superimposed and displayed on the current scene in an augmented reality manner.
  • the specific object in the present invention is a specific object with identifiable features, for example, the specific object is a surgical instrument with identification features.
  • the surgical instrument can be identified and positioned by identifying the features.
  • the present invention also provides a dynamic location recognition prompting method, including:
  • the identification features include at least the morphological characteristics of the object itself and/or the identification features of the object marking.
  • the morphological characteristics of the object body at least include the structure, shape, or color of the object body
  • the object mark identification features at least include patterns, graphics or two-dimensional codes set on the second object.
  • a two-dimensional code is a black and white plane figure distributed on a plane, and the points on it are very easy to identify. By identifying at least three points among them, the two-dimensional code can be positioned.
  • the object marking identification feature may also be other plane graphics such as checkerboard.
  • QR code or checkerboard as an identification makes positioning objects or instruments more accurate and fast. As a result, fast-moving instruments can be navigated more precisely.
  • the marker fixed on the surface of the medical device can also be a three-dimensional figure.
  • during device design and production, the marker can be the handle of the device, or a structure fixed on the side of the handle.
  • although the calculation time required to recognize a three-dimensional marker is longer than that for a planar pattern, the spatial positioning accuracy for stationary or slow-moving objects is relatively high.
  • the specific object in the present invention is a surgical instrument with identification features.
  • it is a puncture needle used in a puncture operation.
  • the end of the puncture needle is provided with an easily identifiable bowl-shaped structure, and a two-dimensional code is printed on the surface of the bowl-shaped structure.
  • the user needs to use the image acquisition device to obtain the current scene image, and the system recognizes the positions of multiple feature points on the two-dimensional code according to the current scene image to obtain the position of the surgical instrument in the scene.
  • however, the surgical instrument moves continuously throughout the procedure, so during the operation it is not always in a static position, nor is it always within the ideal position area while moving; this affects the outcome of the entire surgical process and makes operation difficult, especially for users who are not very skilled. Therefore, in the present invention, the surgical instrument can be identified through its identification features during the operation, and its position acquired in real time.
  • during the operation, the specific object is not always within the position area that can easily be photographed and identified, and this problem cannot be noticed in real time; therefore, in the present invention, the user's operation process needs to be monitored and identified in real time by technical means.
  • the preset recognition range is a preset recognition distance range and/or a recognition angle range
  • the judgment module is specifically configured to judge whether the specific object is within the recognition distance range; and/or judge the Whether a specific object is within the identified angle range.
  • the image acquisition device is a device capable of acquiring images.
  • the image acquisition device may be an image acquisition device on a head-mounted device worn by the user, and its shooting angle is consistent with the user's viewing direction.
  • the image acquisition device is a head-mounted optical image acquisition device. This not only ensures that the shooting angle matches the user's viewing angle, maintaining accuracy, but also avoids interference with the user's operations during use, thereby significantly improving the user experience.
  • the position of a specific object needs to be acquired in real time.
  • the image of the surgical instrument is acquired through the image acquisition device, and whether the current position of the surgical instrument is within the preset recognition range is judged in real time. If the position or operating angle of the surgical instrument is offset, the user is prompted and assisted in performing the surgical operation.
  • the preset recognition range is set as a spatial shape range that increases radially along the shooting direction of the image acquisition device, and may be a spatial range formed by a conical structure.
  • the image acquisition device is taken as the proximal end, and the shape widens radially toward the distal end along the image acquisition direction.
  • the image acquisition device end is the proximal end, whose structure is set as a rectangle; the end away from the image acquisition device is the distal end, whose structure is a rectangle with an area larger than that of the proximal end.
  • connecting the distal structure and the proximal structure forms a closed three-dimensional space, which constitutes the preset recognition area. If the surgical instrument has not entered the preset recognition area, the user is prompted in time, so that the user can move the surgical instrument into the preset recognition area and, further, can be prompted until it finally reaches the optimal recognition area.
  • the recognition zones demarcated in front of the image acquisition device may include:
  • Unrecognizable range: completely outside the imaging area of the image acquisition device supported by the algorithm;
  • Transition area: the pattern can be recognized and tracked by the algorithm, but is not within the optimal range;
  • Optimal area: the area within which the algorithm can maintain the recognition accuracy;
  • Transition angle: the pattern can be recognized and tracked by the algorithm, but is no longer within the optimal angle;
  • Optimal angle: the angular range within which the algorithm can maintain the recognition accuracy.
  • the preset recognition range is an operable implementation range in the scene, but under some special requirements more accurate positioning and recognition are needed. Therefore, the present invention sets an optimal recognition range; the optimal recognition range is located within the preset recognition range, generally at its center, and the preset recognition range is larger than the optimal recognition range.
  • the optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained according to data statistics.
  • the abscissa is the distance between the specific object and the image acquisition device, and the ordinate is the number of identifiable feature points.
  • the selected area is the selected optimal recognition distance, and based on the projected outer contour formed by the best recognition angle of all recognizable patterns in the optimal recognition area, an area prompt of the optimal recognition range in the field of view is formed.
  • when the preset recognition range is set, it can also be set by angle.
  • taking the surgical instrument in Figure 3 as an example, the surgical instrument is viewed from the handle end and the bowl-shaped edge of the handle is projected.
  • when the surgical instrument is at any identifiable angle and within the identifiable range of the image acquisition device, the set of all positions of the projected pattern of the handle's bowl-shaped edge forms the preset recognition range.
  • the preset recognition range is displayed at the corresponding position of the real scene in the form of augmented reality. Under the manipulation of the user, as long as the bowl-shaped edge of the handle is within the preset recognition range, the instrument is within the identifiable range, which makes it convenient for the operator to quickly adjust its position.
  • when the orientation of the surgical instrument is aligned with the shooting direction of the image acquisition device, the projection of the bowl-shaped edge of the handle forms a standard circle, such as the first circle in Figure 6.
  • moving the surgical instrument along the shooting direction causes the size of the projected ring to change, and the surgical instrument can still be identified as long as its bowl-shaped edge remains within the preset identification range displayed in augmented reality.
  • if the surgical instrument is deflected by a certain angle, its edge contour becomes an ellipse, such as the second circle in Figure 6; as long as the deflection angle is within an acceptable range, that is, the bowl-shaped edge of the surgical instrument still lies within the preset identification range displayed in augmented reality, the surgical instrument can still be identified.
  • the optimal recognition range can be dynamically adjusted according to accuracy requirements or operational requirements; when fine manipulation of the specific object is required, the optimal recognition range shrinks; when large-range manipulation is required, the optimal recognition range expands.
  • for example, when the puncture needle is outside the patient's body, a stricter identification range can be used to obtain more accurate spatial positioning, and once the puncture needle has been inserted into the human body, the identifiable range can be appropriately relaxed to help doctors complete the surgical steps more quickly and reduce surgical risk.
  • the augmented reality navigation device records the historical operation information of different user (doctor) accounts. According to data statistics, the optimal identification range is larger for users with a good historical record, because these users depend less on navigation accuracy; for users whose history shows poorer stability, the optimal recognition interval is smaller.
  • the preset recognition range can also be divided into multiple prompt levels according to the needs and the level of the operation.
  • different levels of reminders and prompts are provided to help users perform surgical operations.
  • displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object within the preset recognition range includes: the prompt information at least includes the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust The location of a specific object or image acquisition device.
  • the prompt information is a combination of one or more of text, color, pattern, and animation superimposed and displayed on the current scene in an augmented reality manner. Since a quiet environment is required during the operation, for the convenience of the user, the voice mode is not used in the present invention, but in different scenarios, the voice mode can also be used for prompting.
  • in augmented reality, the prompts are presented to the user through text, color, or patterns, so that upon seeing them the user obtains the prompt information and then moves the surgical instrument, or the display device, during the operation.
  • the user can also be assisted to move the position of the specific object or the image acquisition device, so that the current position of the specific object is within the preset recognition range;
  • the position movement information includes: a combination of one or more kinds of information among text, auxiliary lines, or animation superimposed and displayed on the current scene in an augmented reality manner. Users can move as guided by the auxiliary lines and animations and thus reach the optimal position quickly and accurately.
  • the mechanism for dynamically adjusting the optimal recognition range takes into account the surgical stage, the existing stability record of the specific object, and the operator's existing operation records. Voice prompts can also be used to indicate where to move.
  • the laboratory measurement of a specific object can define its three-dimensional optimal recognition range and three-dimensional optimal recognition angle relative to the image acquisition device according to the expected positioning accuracy.
  • the specific object can be adjusted to the appropriate position, and then the subsequent surgical steps can be continued.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Gynecology & Obstetrics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

A dynamic position recognition prompting method and system, comprising: a recognition module (1), a judgment module (2), and a prompt module (3); wherein the recognition module (1) is used to acquire a current scene image through an image acquisition device and identify the current position of a specific object in the scene image; the judgment module (2) is used to judge whether the current position of the specific object is within a preset recognition range; and the prompt module (3) is used to, if the current position of the specific object is not within the preset recognition range of the image acquisition device, navigate the specific object and help the user bring the specific object into the preset recognition range, making imaging and navigation more accurate and precise.

Description

Dynamic position recognition prompting system and method thereof
Technical field
The present invention relates to the technical field of image processing, and in particular to a dynamic position recognition prompting system and a method thereof.
Background art
When an image acquisition device recognizes the position of an object, algorithmic limitations cause the positioning accuracy to change dynamically as the distance between the recognized object and the image acquisition device changes. For example, when the recognized object is far from the image acquisition device, the number of distinguishable feature points drops significantly, which affects the positioning accuracy. Conversely, if the recognized object is very close to the image acquisition device, the image may be distorted by lens distortion at different positions, again reducing the positioning accuracy. This phenomenon creates a specific region in front of the image acquisition device that is the optimal range for recognizing and locating surgical instruments. How to let the operator intuitively and conveniently keep the recognizable instrument within this range, so that the positioning accuracy continues to meet design expectations, is one of the difficulties in realizing augmented reality navigation technology.
For example, in augmented reality-based puncture surgery navigation, the optical image acquisition device used to obtain the position information of the recognized object is worn on the operator's head, while the operator holds a puncture instrument recognizable by the image acquisition device and operates it to perform different surgical actions. This mode of operation causes the distance between the image acquisition device and the recognized object to change dynamically at any time, which in turn affects the surgical workflow.
Summary of the invention
In view of the above defects or deficiencies, the purpose of the present invention is to provide a dynamic position recognition prompting system and a method thereof.
To achieve the above purpose, the technical solution of the present invention is as follows:
A dynamic position recognition prompting system, comprising: a recognition module, a judgment module, and a prompt module; wherein,
the recognition module is used to acquire a current scene image through an image acquisition device and identify the current position of a specific object in the current scene image;
the judgment module is used to judge whether the current position of the specific object is within a preset recognition range;
the prompt module is used to, if the current position of the specific object is not within the preset recognition range, display prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range.
The preset recognition range is a preset recognition distance range and/or recognition angle range; the judgment module is specifically configured to judge whether the specific object is within the recognition distance range, and/or to judge whether the specific object is within the recognition angle range.
The preset recognition range is set as a spatial shape that widens radially along the shooting direction of the image acquisition device.
The judgment module includes an optimal recognition range judgment sub-module for judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
The judgment module further includes a recognition range setting module for setting the preset recognition range and/or the optimal recognition range.
The optimal recognition range includes: an optimal recognition range preset according to a user instruction, and/or a historical optimal recognition range obtained from data statistics.
The prompt module is further configured to display prompt information in an augmented reality manner; the prompt information includes at least the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device.
The prompt information is a combination of one or more of text, color, patterns, and animation superimposed on the current scene in an augmented reality manner.
The prompt module further includes a movement prompting sub-module for displaying position movement information and prompting the user to move the specific object or the image acquisition device so that the current position of the specific object falls within the preset recognition range; the position movement information includes a combination of one or more of text, auxiliary lines, or animation superimposed on the current scene in an augmented reality manner.
The specific object is a specific object with recognizable features.
A dynamic position recognition prompting method, comprising:
acquiring a current scene image;
identifying the current position of a specific object in the current scene image;
judging whether the current position of the specific object is within a preset recognition range;
if the current position of the specific object is not within the preset recognition range, displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range.
The preset recognition range is a preset recognition distance range and/or recognition angle range;
judging whether the current position of the specific object is within the preset recognition range includes:
judging whether the specific object is within the recognition distance range, and/or judging whether the specific object is within the recognition angle range.
The preset recognition range is set as a spatial shape that widens radially along the shooting direction of the image acquisition device.
Further, the method also includes judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
The optimal recognition range can be dynamically adjusted according to operation requirements: when fine manipulation of the specific object is required, the optimal recognition range contracts; when large-range manipulation is required, the optimal recognition range expands.
The optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained from data statistics.
The prompt information includes at least the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device.
The prompt information is a combination of one or more of text, color, patterns, and animation superimposed on the current scene in an augmented reality manner.
Displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range includes: displaying position movement information and prompting the user to move the specific object or the image acquisition device so that the specific object falls within the preset recognition range; the position movement information includes any combination of one or more of text, auxiliary lines, or animation superimposed on the current scene in an augmented reality manner.
The specific object is a specific object with recognizable features.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention provides a dynamic position recognition prompting system and a method thereof. By judging in real time the position of a specific object in the acquired scene and the relationship between its current position and the preset recognition range, positional-relationship adjustment information is displayed for the specific object in an augmented reality manner, prompting the user to move the specific object or the image acquisition device so that the positional relationship between the specific object and the image acquisition device falls within the preset recognition range. The present invention can thus navigate the specific object and help the user bring it into the preset recognition range; for example, in augmented reality puncture surgery navigation, it ensures that the user can position quickly and accurately and master this skill.
In addition, the present invention can define, according to the expected positioning accuracy, a three-dimensional optimal recognition range and a three-dimensional optimal recognition angle relative to the image acquisition device. By calculating in real time the relative relationship between the current position of the specific object and the optimal recognition area, the operator is prompted by color, graphics, animation, etc. whether the recognition position should currently be adjusted, so that the specific object is adjusted to an appropriate position and the subsequent surgical steps can continue.
Brief description of the drawings
Fig. 1 is a structural diagram of the dynamic position recognition prompting system of the present invention;
Fig. 2 is a flowchart of the dynamic position recognition prompting method of the present invention;
Fig. 3 is a structural diagram of a specific object in the dynamic position recognition prompting method of the present invention;
Fig. 4 is a diagram of the preset recognition range in the dynamic position recognition prompting method of the present invention;
Fig. 5 is a diagram of the optimal recognition distance range in the dynamic position recognition prompting method of the present invention;
Fig. 6 is a diagram of angle prompting within the preset recognition range in the dynamic position recognition prompting method of the present invention.
Detailed description of the embodiments
The present invention will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In some clinical operations, especially in scenarios requiring precise manipulation, an image acquisition device is needed to acquire images of the surgical scene and then locate and recognize the surgical instruments in the scene. Through the displayed augmented reality images, the user can navigate the whole procedure and then operate on the patient to achieve treatment. Throughout the navigation and positioning process, the surgical instrument must remain within a preset recognition range, and during the operation it is necessary to continuously confirm whether the navigated instrument remains within the optimal recognition area. Training alone cannot guarantee that operators will master this skill quickly and accurately, so a continuous, interactive way of prompting the optimal recognition range becomes necessary. The preset recognition range is usually set with reference to the position of the image acquisition device. In some embodiments, an image acquisition device is set at a fixed position in the surgical scene to capture scene images in real time and recognize the position of the surgical instrument; the preset recognition range is then fixed relative to the surgical scene. In other embodiments, the image acquisition device is not fixed; for example, the image acquisition device on a head-mounted device worn by the operator captures scene images in real time to recognize the position of the surgical instrument, in which case the preset recognition range is fixed relative to the position of the image acquisition device but not fixed relative to the surgical scene.
Taking a surgical scene as an example, the present invention provides the user with positioning of a hand-held instrument. The user is the observer of the whole navigation process and is also the operator who inserts the hand-held instrument into the subject's body. The subject may be a person or another animal on which the user needs to operate. In the present invention, the specific object is a hand-held instrument, for example a medical device with recognition features; the medical device may be any tool that can be inserted into the subject's body, such as a puncture needle, a biopsy needle, a radiofrequency or microwave ablation needle, an ultrasound probe, a rigid endoscope, oval forceps used in endoscopic surgery, an electrosurgical knife, or a stapler.
As shown in Fig. 1, the present invention also provides a dynamic position recognition prompting system, comprising: a recognition module 1, a judgment module 2, and a prompt module 3; wherein,
the recognition module 1 is used to acquire a current scene image through an image acquisition device and identify the current position of a specific object in the current scene image;
the judgment module 2 is used to judge whether the current position of the specific object is within a preset recognition range;
the prompt module 3 is used to, if the current position of the specific object is not within the preset recognition range, display prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range.
In the present invention, the preset recognition range is a preset recognition distance range and/or recognition angle range; the judgment module 2 is specifically configured to judge whether the specific object is within the recognition distance range, and/or to judge whether the specific object is within the recognition angle range.
The preset recognition range is set as a spatial shape that widens radially along the shooting direction of the image acquisition device, i.e., a spatial shape that takes the image acquisition device as the proximal end and widens radially toward the distal end along the image acquisition direction. Exemplarily, the spatial shape is a cone-shaped structure.
The judgment module 2 further includes an optimal recognition range judgment sub-module 21 for judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
The judgment module further includes a recognition range setting module 22 for setting the preset recognition range and/or the optimal recognition range. Preferably, the optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained from data statistics.
The prompt module is further configured to display prompt information in an augmented reality manner; the prompt information includes at least the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device. The prompt information is a combination of one or more of text, color, patterns, and animation superimposed on the current scene in an augmented reality manner.
The prompt module further includes a movement prompting sub-module for displaying position movement information and prompting the user to move the specific object or the image acquisition device so that the current position of the specific object falls within the preset recognition range; the position movement information includes any combination of one or more of text, auxiliary lines, or animation superimposed on the current scene in an augmented reality manner.
It should be noted that in the present invention the specific object is a specific object with recognizable features, for example a surgical instrument with recognition features. In the present invention, the surgical instrument can be recognized and located through these recognition features.
In addition, as shown in Fig. 2, the present invention also provides a dynamic position recognition prompting method, comprising:
S1. Acquiring a current scene image;
In a specific surgical scene, surgical instruments are used for the operation. The specific object is the above-mentioned movable medical device, on which recognition features are provided. The recognition features include at least morphological characteristics of the object body and/or object-marking recognition features. The morphological characteristics of the object body include at least the structure, shape, or color of the object body, and the object-marking recognition features include at least a pattern, graphic, or two-dimensional code provided on the second object. For example, a two-dimensional code is a planar black-and-white pattern whose points are very easy to recognize; by recognizing at least three of these points, the two-dimensional code can be located. Because the two-dimensional code is fixed to an object or instrument, the object or instrument carrying it can be located. Optionally, the object-marking recognition feature may also be another planar pattern such as a checkerboard. Using a two-dimensional code or checkerboard as the marker makes locating the object or instrument more accurate and faster, so that fast-moving instruments can be navigated more precisely. In specific implementations, however, the recognition features are not limited to these and may be other recognizable characteristics of the object.
Optionally, the marker fixed on the surface of the medical device may also be a three-dimensional figure; for example, during device design and production, the marker may be the handle of the device or a structure fixed to the side of the handle. Although spatial positioning with a three-dimensional figure requires more computation time than with a planar pattern, it offers higher spatial positioning accuracy for stationary or slowly moving targets.
S2. Identifying the current position of the specific object in the current scene image;
Exemplarily, in the present invention the specific object is a surgical instrument with recognition features. In one embodiment, as shown in Fig. 3, it is a puncture needle used in a puncture operation; the end of the puncture needle is provided with an easily recognizable bowl-shaped structure, and a two-dimensional code is printed on the surface of the bowl-shaped structure. During the operation, the user holds the puncture needle to operate on the subject, the current scene image is acquired through the image acquisition device, and the system recognizes the positions of multiple feature points on the two-dimensional code from the current scene image to obtain the current position of the surgical instrument in the scene. However, the surgical instrument moves continuously throughout the procedure, so it is not always in a static position, nor is it always within the ideal position area while moving; this affects the outcome of the whole procedure and makes operation difficult, especially for less skilled users. Therefore, in the present invention the surgical instrument is recognized during the operation through its recognition features, and its position is acquired in real time.
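The patent does not prescribe a particular pose-estimation algorithm; as a rough sketch of how the positions of several code feature points could yield the instrument's current position, the following assumes an OpenCV-style pipeline, a square code of known printed size, and a hypothetical detect_code_corners helper that returns the code's four corner pixels.

```python
# Minimal sketch (not the patent's method): estimating the pose of a planar
# two-dimensional code of known physical size with OpenCV's solvePnP.
import numpy as np
import cv2

CODE_SIDE_M = 0.03  # assumed printed side length of the code, in metres

# 3D corner coordinates of the code in its own frame (z = 0 plane), clockwise from top-left
OBJECT_POINTS = np.array([
    [-CODE_SIDE_M / 2,  CODE_SIDE_M / 2, 0.0],
    [ CODE_SIDE_M / 2,  CODE_SIDE_M / 2, 0.0],
    [ CODE_SIDE_M / 2, -CODE_SIDE_M / 2, 0.0],
    [-CODE_SIDE_M / 2, -CODE_SIDE_M / 2, 0.0],
], dtype=np.float64)

def estimate_instrument_pose(image, camera_matrix, dist_coeffs, detect_code_corners):
    """Return (distance_m, deflection_deg, tvec) of the code relative to the camera,
    or None if the code is not found in the current scene image."""
    corners = detect_code_corners(image)            # hypothetical detector, shape (4, 2) pixels
    if corners is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, corners.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    distance = float(np.linalg.norm(tvec))          # metres from camera to code centre
    rot, _ = cv2.Rodrigues(rvec)                    # rotation matrix of the code
    code_normal_cam = rot[:, 2]                     # code's z axis expressed in camera coordinates
    view_dir = tvec.ravel() / distance              # unit vector from camera to code
    # deflection between the code's normal and the viewing direction
    deflection = np.degrees(np.arccos(np.clip(abs(code_normal_cam @ view_dir), -1.0, 1.0)))
    return distance, float(deflection), tvec.ravel()
```

The returned distance and deflection are the two quantities that the distance-range and angle-range judgments below operate on.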
S3. Judging whether the current position of the specific object is within the preset recognition range;
Because during the operation the specific object is not always within the position area that can easily be photographed and recognized, and this problem cannot be noticed in real time during the operation, the present invention monitors and recognizes the user's operation process in real time by technical means.
Specifically, the preset recognition range is a preset recognition distance range and/or recognition angle range; the judgment module is specifically configured to judge whether the specific object is within the recognition distance range, and/or to judge whether the specific object is within the recognition angle range.
In this embodiment, the image acquisition device is a device capable of acquiring images. In some embodiments, it may be the image acquisition device on a head-mounted device worn by the user, whose shooting angle is consistent with the user's viewing direction. Optionally, the image acquisition device is a head-mounted optical image acquisition device. This not only ensures that the shooting angle matches the user's viewing angle and maintains accuracy, but also avoids interfering with the user's operations during use, thereby significantly improving the user experience.
During the operation, the position of the specific object needs to be acquired in real time: for example, the image of the surgical instrument is acquired through the image acquisition device, and whether the current position of the surgical instrument is within the preset recognition range is judged in real time; if the position or operating angle of the surgical instrument is offset, the user is prompted and assisted in performing the surgical operation.
It should be noted that the preset recognition range is set as a spatial shape that widens radially along the shooting direction of the image acquisition device, and may be a spatial range formed by a cone-like structure, for example a shape that takes the image acquisition device as the proximal end and widens radially toward the distal end along the image acquisition direction. As shown in Fig. 4, the image acquisition device end is the proximal end, whose cross-section is set as a rectangle, and the end away from the image acquisition device is the distal end, whose cross-section is a rectangle with a larger area than the proximal one. Connecting the distal and proximal structures forms a closed three-dimensional space, which constitutes the preset recognition area. In Fig. 4, the preset recognition area is the region bounded by dashed lines and the optimal recognition range is the region bounded by solid lines. If the surgical instrument has not entered the preset recognition area, the user is prompted in time so that they can move the surgical instrument into the preset recognition area and, further, can be prompted until it finally reaches the optimal recognition range.
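One way to represent the rectangle-to-rectangle space of Fig. 4 in software is a simple frustum test in camera coordinates; the sketch below is an illustration only, and every dimension in it is an assumed placeholder rather than a value taken from the patent.

```python
# Illustrative sketch: test whether a point (camera coordinates, metres) lies inside
# a frustum-shaped preset recognition range whose rectangular cross-section widens
# linearly from a near plane to a far plane, as in Fig. 4. Dimensions are examples.
from dataclasses import dataclass

@dataclass
class FrustumRange:
    z_near: float = 0.25    # near plane distance along the optical axis (m)
    z_far: float = 0.80     # far plane distance (m)
    near_half_w: float = 0.10
    near_half_h: float = 0.08
    far_half_w: float = 0.30
    far_half_h: float = 0.24

    def contains(self, x: float, y: float, z: float) -> bool:
        if not (self.z_near <= z <= self.z_far):
            return False
        # interpolate the rectangle's half-width/height at depth z
        t = (z - self.z_near) / (self.z_far - self.z_near)
        half_w = self.near_half_w + t * (self.far_half_w - self.near_half_w)
        half_h = self.near_half_h + t * (self.far_half_h - self.near_half_h)
        return abs(x) <= half_w and abs(y) <= half_h

# usage: preset = FrustumRange(); inside = preset.contains(*tvec)
```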
In one embodiment, the recognition zones demarcated in front of the image acquisition device may include:
Unrecognizable range: completely outside the imaging area of the image acquisition device supported by the algorithm;
Transition area: the pattern can be recognized and tracked by the algorithm, but is not within the optimal range;
Optimal area: the area within which the algorithm can maintain the recognition accuracy;
Transition angle: the pattern can be recognized and tracked by the algorithm, but is no longer within the optimal angle;
Optimal angle: the angular range within which the algorithm can maintain the recognition accuracy.
Different prompts can then be given according to the different zones, as sketched below.
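A minimal sketch of how such zones could be told apart from the estimated distance and deflection angle; the zone names mirror the list above, while all thresholds are invented placeholders rather than values given in the patent.

```python
# Sketch of one possible zone classification combining distance and deflection angle.
def classify_zone(distance_m, deflection_deg,
                  optimal_dist=(0.30, 0.60), max_dist=0.90,
                  optimal_angle=25.0, max_angle=50.0):
    if distance_m > max_dist or deflection_deg > max_angle:
        return "unrecognizable"
    if not (optimal_dist[0] <= distance_m <= optimal_dist[1]):
        return "transition_area"       # trackable, but outside the optimal distance band
    if deflection_deg > optimal_angle:
        return "transition_angle"      # trackable, but outside the optimal angle
    return "optimal"

# e.g. classify_zone(0.45, 12.0) -> "optimal"; classify_zone(0.75, 12.0) -> "transition_area"
```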
Further preferably, the preset recognition range is an operable implementation range in the scene; under some special requirements, however, more precise positioning and recognition are needed. Therefore, the present invention sets an optimal recognition range, which is located within the preset recognition range (generally at its center), and the preset recognition range is larger than the optimal recognition range. The optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained from data statistics.
As shown in Fig. 5, the accuracy of feature recognition of the specific object from images is analyzed statistically: the abscissa is the distance between the specific object and the image acquisition device, and the ordinate is the number of identifiable feature points. The selected region is the chosen optimal recognition distance; based on the projected outer contour formed by all recognizable patterns at the optimal recognition angle within the optimal recognition area, an area prompt of the optimal recognition range in the field of view is formed.
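As one possible reading of the Fig. 5 statistics, the optimal recognition distance band could be chosen as the distances at which the measured feature-point count stays near its peak; the measurement data and the 0.8 ratio below are assumptions for illustration.

```python
# Derive an optimal recognition distance band from bench measurements of
# (distance, identifiable feature point count), keeping distances whose count
# stays above a chosen fraction of the peak count.
def optimal_distance_band(samples, keep_ratio=0.8):
    """samples: list of (distance_m, feature_point_count) measurements."""
    if not samples:
        return None
    peak = max(count for _, count in samples)
    good = sorted(d for d, count in samples if count >= keep_ratio * peak)
    return (good[0], good[-1]) if good else None

measurements = [(0.2, 14), (0.3, 22), (0.4, 24), (0.5, 23), (0.6, 18), (0.8, 9)]
print(optimal_distance_band(measurements))   # -> (0.3, 0.5)
```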
As shown in Fig. 6, the preset recognition range can also be set by angle. Taking the surgical instrument of Fig. 3 as an example, the instrument is viewed from the handle end and the bowl-shaped edge of the handle is projected. When the surgical instrument is at any recognizable angle and within the recognizable range of the image acquisition device, the set of all positions of the projected pattern of the handle's bowl-shaped edge forms the preset recognition range. During the operation, the preset recognition range is displayed at the corresponding position of the real scene in an augmented reality manner; as long as the bowl-shaped edge of the handle stays within the preset recognition range while the user manipulates the instrument, the instrument is within the recognizable range, which allows the operator to adjust its position quickly.
It can be understood that when the orientation of the surgical instrument is aligned with the shooting direction of the image acquisition device, the projection of the handle's bowl-shaped edge forms a standard circle, such as the first ring in Fig. 6; moving the instrument along the shooting direction then changes the size of the projected ring, and the instrument can still be recognized as long as its bowl-shaped edge remains within the preset recognition range displayed in augmented reality. If the surgical instrument is deflected by a certain angle, the edge contour becomes an ellipse, such as the second ring in Fig. 6; as long as the deflection angle is within the acceptable range, i.e., the bowl-shaped edge of the instrument still lies within the preset recognition range displayed in augmented reality, the instrument can still be recognized.
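The circle-to-ellipse observation also admits a simple geometric estimate of the deflection angle, since a circle viewed at angle theta projects to an ellipse whose minor/major axis ratio is roughly cos(theta); the sketch below assumes the bowl-edge contour has already been extracted and uses cv2.fitEllipse, which is an assumption rather than the patent's stated method.

```python
# Estimate the instrument's deflection angle from the fitted ellipse of its
# projected bowl-shaped edge: theta ~= arccos(minor_axis / major_axis).
import math
import cv2

def deflection_from_edge_contour(contour, max_deflection_deg=30.0):
    # contour: detected edge points of the bowl-shaped rim (at least 5 points)
    (cx, cy), (axis1, axis2), _angle = cv2.fitEllipse(contour)
    minor, major = sorted((axis1, axis2))
    theta = math.degrees(math.acos(min(1.0, minor / major)))
    return theta, theta <= max_deflection_deg   # (deflection in degrees, still acceptable?)
```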
In this embodiment, the optimal recognition range can be dynamically adjusted according to accuracy or operational requirements: when fine manipulation of the specific object is required, the optimal recognition range contracts; when large-range manipulation is required, it expands. For example, while the puncture needle is outside the patient's body, a stricter recognition range can be used to obtain more accurate spatial positioning; after the needle has been inserted into the body, the recognizable range can be appropriately relaxed to help the doctor complete the surgical steps more quickly and reduce surgical risk. In combination with the zones above, when the puncture needle is outside the body, only the optimal area and optimal angle are considered to satisfy the preset positional relationship between the needle and the image acquisition device; after the needle has entered the body, the transition area and transition angle may also be considered to satisfy the preset recognition relationship. The optimal recognition range can also be adjusted dynamically according to the operator's hand-held stability. In one embodiment, the augmented reality navigation device records the historical operation information of different user (doctor) accounts; according to data statistics, the optimal recognition interval is larger for users with a good history, because such users depend less on navigation accuracy, while for users whose history shows poorer stability the optimal recognition interval is smaller.
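A hedged sketch of this dynamic adjustment: a stricter band outside the body, a relaxed band after insertion, further scaled by the operator's historical stability score. The scale factors below are placeholders, since the patent only states the qualitative behaviour.

```python
# Scale the optimal recognition distance band by surgical phase and operator history.
def adjusted_optimal_range(base_range, needle_in_body, operator_stability):
    """base_range: (min_dist, max_dist) in metres; operator_stability in [0, 1]."""
    lo, hi = base_range
    centre, half = (lo + hi) / 2.0, (hi - lo) / 2.0
    scale = 1.3 if needle_in_body else 0.9          # relax once the needle is inserted
    scale *= 0.8 + 0.4 * operator_stability         # steadier operators get a wider band
    return (centre - half * scale, centre + half * scale)

print(adjusted_optimal_range((0.30, 0.60), needle_in_body=False, operator_stability=0.9))
```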
In addition, according to requirements and the level of the operation, the preset recognition range can be divided into multiple prompt levels; when the specific object is at different positions, reminders and prompts of different levels are given to help the user perform the surgical operation.
S4. If the current position of the specific object is not within the preset recognition range, displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range.
Displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range includes: the prompt information includes at least the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device. The prompt information is a combination of one or more of text, color, patterns, and animation superimposed on the current scene in an augmented reality manner. Because a quiet environment is required during surgery, voice prompts are not used in the present invention for the user's convenience, but in other scenarios voice prompts may also be used. In augmented reality, the prompts are presented to the user through text, color, or patterns, so that upon seeing them the user obtains the prompt information and then moves the surgical instrument, or the display device, during the operation.
In addition, after the prompt is given, the user can also be assisted in moving the specific object or the image acquisition device so that the current position of the specific object falls within the preset recognition range; the position movement information includes a combination of one or more of text, auxiliary lines, or animation superimposed on the current scene in an augmented reality manner. The user can move as guided by the auxiliary lines and animations and thus reach the optimal position quickly and accurately. The mechanism for dynamically adjusting the optimal recognition range takes into account the surgical stage, the existing stability record of the specific object, and the operator's existing operation records; voice prompts may also be used to indicate where to move.
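As a purely illustrative example of overlaying such prompts, the following draws the projected preset range, an auxiliary arrow, and a text hint on a camera frame with OpenCV; all colours, wording, and geometry are assumptions, and a real AR headset would render into its own display pipeline rather than onto the camera image.

```python
# Overlay sketch: highlight the projected preset recognition range and, when the
# instrument is outside it, draw an arrow and a text hint toward the range centre.
import cv2

def draw_prompt(frame, range_polygon, instrument_px, inside):
    # range_polygon: int32 numpy array of projected 2-D pixel points of the range
    # instrument_px: (x, y) integer pixel position of the recognized instrument
    colour = (0, 200, 0) if inside else (0, 0, 255)           # green inside, red outside
    cv2.polylines(frame, [range_polygon], isClosed=True, color=colour, thickness=2)
    if not inside:
        target = tuple(int(v) for v in range_polygon.reshape(-1, 2).mean(axis=0))
        cv2.arrowedLine(frame, tuple(instrument_px), target, colour, 2)
        cv2.putText(frame, "Move instrument into the highlighted range",
                    (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 0.7, colour, 2)
    return frame
```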
Through the dynamic position recognition prompting method of the present invention, laboratory measurement of a specific object can define, according to the expected positioning accuracy, its three-dimensional optimal recognition range and three-dimensional optimal recognition angle relative to the image acquisition device. By calculating in real time the relative relationship between the current position of the specific object and the optimal recognition area, and prompting the operator by color, graphics, animation, etc. whether the recognition position should currently be adjusted, the specific object can be adjusted to an appropriate position and the subsequent surgical steps can continue.
It is obvious to those skilled in the art that the above specific embodiments are merely preferred solutions of the present invention; therefore, improvements and modifications that those skilled in the art may make to certain parts of the present invention still embody the principles of the present invention and achieve its purpose, and all fall within the protection scope of the present invention.

Claims (13)

  1. A dynamic position recognition prompting system, characterized by comprising: a recognition module, a judgment module, and a prompt module; wherein,
    the recognition module is used to acquire a current scene image through an image acquisition device and identify the current position of a specific object in the current scene image;
    the judgment module is used to judge whether the current position of the specific object is within a preset recognition range;
    the prompt module is used to, if the current position of the specific object is not within the preset recognition range, display prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range.
  2. The dynamic position recognition prompting system according to claim 1, characterized in that the preset recognition range is a preset recognition distance range and/or recognition angle range; the judgment module is specifically configured to judge whether the specific object is within the recognition distance range and/or to judge whether the specific object is within the recognition angle range.
  3. The dynamic position recognition prompting system according to claim 1 or 2, characterized in that the preset recognition range is set as a spatial shape that widens radially along the shooting direction of the image acquisition device.
  4. The dynamic position recognition prompting system according to claim 3, characterized in that the judgment module includes an optimal recognition range judgment sub-module for judging whether the current position of the specific object is within the optimal recognition range of the image acquisition device; the optimal recognition range is located within the preset recognition range, and the preset recognition range is larger than the optimal recognition range.
  5. The dynamic position recognition prompting system according to claim 4, characterized in that the optimal recognition range is adjusted according to operation requirements: when fine manipulation of the specific object is required, the optimal recognition range contracts; when large-range manipulation is required, the optimal recognition range expands.
  6. The dynamic position recognition prompting system according to claim 4, characterized in that the judgment module further includes a recognition range setting module for setting the preset recognition range and/or the optimal recognition range.
  7. The dynamic position recognition prompting system according to claim 6, characterized in that the optimal recognition range includes: an optimal recognition range preset according to a user instruction and/or a historical optimal recognition range obtained from data statistics.
  8. The dynamic position recognition prompting system according to claim 1, characterized in that the prompt information includes at least the positional relationship between the specific object and the preset recognition range, so as to prompt the user to adjust the position of the specific object or the image acquisition device.
  9. The dynamic position recognition prompting system according to claim 1, characterized in that the prompt information is a combination of one or more of text, color, patterns, and animation superimposed on the current scene in an augmented reality manner.
  10. The dynamic position recognition prompting system according to claim 1, characterized in that the prompt module further includes a movement prompting sub-module for displaying position movement information and prompting the user to move the specific object or the image acquisition device so that the specific object falls within the preset recognition range; the position movement information includes a combination of one or more of text, auxiliary lines, or animation superimposed on the current scene in an augmented reality manner.
  11. The dynamic position recognition prompting system according to any one of claims 1-10, characterized in that the specific object is a specific object with recognizable features.
  12. A dynamic position recognition prompting method, characterized by comprising:
    acquiring a current scene image;
    identifying the current position of a specific object in the current scene image;
    judging whether the current position of the specific object is within a preset recognition range;
    if the current position of the specific object is not within the preset recognition range, displaying prompt information in an augmented reality manner to prompt the user to adjust the specific object into the preset recognition range.
  13. The dynamic position recognition prompting method according to claim 12, characterized in that the preset recognition range is a preset recognition distance range and/or a recognition angle range.
PCT/CN2022/081729 2021-04-01 2022-03-18 Dynamic position recognition prompting system and method thereof WO2022206436A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/553,351 US20240185448A1 (en) 2021-04-01 2022-03-18 Dynamic position recognition and prompt system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110358154.8A CN113509265A (zh) 2021-04-01 2021-04-01 一种动态位置识别提示***及其方法
CN202110358154.8 2021-04-01

Publications (1)

Publication Number Publication Date
WO2022206436A1 true WO2022206436A1 (zh) 2022-10-06

Family

ID=78062191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081729 WO2022206436A1 (zh) 2021-04-01 2022-03-18 一种动态位置识别提示***及其方法

Country Status (3)

Country Link
US (1) US20240185448A1 (zh)
CN (1) CN113509265A (zh)
WO (1) WO2022206436A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113509265A (zh) Dynamic position recognition prompting system and method thereof

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208005A1 (en) * 2012-02-10 2013-08-15 Sony Corporation Image processing device, image processing method, and program
US20140081659A1 (en) * 2012-09-17 2014-03-20 Depuy Orthopaedics, Inc. Systems and methods for surgical and interventional planning, support, post-operative follow-up, and functional recovery tracking
US20170172696A1 (en) * 2015-12-18 2017-06-22 MediLux Capitol Holdings, S.A.R.L. Mixed Reality Imaging System, Apparatus and Surgical Suite
US20170367771A1 (en) * 2015-10-14 2017-12-28 Surgical Theater LLC Surgical Navigation Inside A Body
CN108319274A (zh) * 2017-01-16 2018-07-24 吕佩剑 一种无人飞行器位置的图形显示方法
US20190053855A1 (en) * 2017-08-15 2019-02-21 Holo Surgical Inc. Graphical user interface for a surgical navigation system and method for providing an augmented reality image during operation
US20200085511A1 (en) * 2017-05-05 2020-03-19 Scopis Gmbh Surgical Navigation System And Method
CN111595349A (zh) * 2020-06-28 2020-08-28 浙江商汤科技开发有限公司 导航方法及装置、电子设备和存储介质
WO2020231654A1 (en) * 2019-05-14 2020-11-19 Tornier, Inc. Bone wall tracking and guidance for orthopedic implant placement
CN113509265A (zh) Dynamic position recognition prompting system and method thereof

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103458219A (zh) * 2013-09-02 2013-12-18 小米科技有限责任公司 一种视频通话面部调整方法、装置及终端设备
JP6281947B2 (ja) * 2014-04-21 2018-02-21 Kddi株式会社 情報提示システム、方法及びプログラム
CN104125395A (zh) * 2014-05-30 2014-10-29 深圳市中兴移动通信有限公司 一种实现自动拍摄的方法及装置
CN104182741A (zh) * 2014-09-15 2014-12-03 联想(北京)有限公司 一种图像采集提示方法、装置及电子设备
CN109559453A (zh) * 2017-09-27 2019-04-02 缤果可为(北京)科技有限公司 用于自动结算的人机交互装置及其应用
US10806525B2 (en) * 2017-10-02 2020-10-20 Mcginley Engineered Solutions, Llc Surgical instrument with real time navigation assistance
WO2019245869A1 (en) * 2018-06-19 2019-12-26 Tornier, Inc. Closed-loop tool control for orthopedic surgical procedures
CN112190331A (zh) * 2020-10-15 2021-01-08 北京爱康宜诚医疗器材有限公司 手术导航信息的确定方法、装置及***、电子装置

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208005A1 (en) * 2012-02-10 2013-08-15 Sony Corporation Image processing device, image processing method, and program
US20140081659A1 (en) * 2012-09-17 2014-03-20 Depuy Orthopaedics, Inc. Systems and methods for surgical and interventional planning, support, post-operative follow-up, and functional recovery tracking
US20150282796A1 (en) * 2012-09-17 2015-10-08 DePuy Synthes Products, Inc. Systems And Methods For Surgical And Interventional Planning, Support, Post-Operative Follow-Up, And, Functional Recovery Tracking
US20170367771A1 (en) * 2015-10-14 2017-12-28 Surgical Theater LLC Surgical Navigation Inside A Body
US20170172696A1 (en) * 2015-12-18 2017-06-22 MediLux Capitol Holdings, S.A.R.L. Mixed Reality Imaging System, Apparatus and Surgical Suite
CN108319274A (zh) * 2017-01-16 2018-07-24 吕佩剑 一种无人飞行器位置的图形显示方法
US20200085511A1 (en) * 2017-05-05 2020-03-19 Scopis Gmbh Surgical Navigation System And Method
US20190053855A1 (en) * 2017-08-15 2019-02-21 Holo Surgical Inc. Graphical user interface for a surgical navigation system and method for providing an augmented reality image during operation
WO2020231654A1 (en) * 2019-05-14 2020-11-19 Tornier, Inc. Bone wall tracking and guidance for orthopedic implant placement
CN111595349A (zh) * 2020-06-28 2020-08-28 浙江商汤科技开发有限公司 导航方法及装置、电子设备和存储介质
CN113509265A (zh) Dynamic position recognition prompting system and method thereof

Also Published As

Publication number Publication date
US20240185448A1 (en) 2024-06-06
CN113509265A (zh) 2021-10-19

Similar Documents

Publication Publication Date Title
US11717376B2 (en) System and method for dynamic validation, correction of registration misalignment for surgical navigation between the real and virtual images
JP6302070B2 (ja) マーカを用いた器具追跡
CN109998678A (zh) 在医学规程期间使用增强现实辅助导航
US20140160264A1 (en) Augmented field of view imaging system
CN109875685B (zh) 骨手术导航***
EP3753507A1 (en) Surgical navigation method and system
US20020010384A1 (en) Apparatus and method for calibrating an endoscope
US20010051761A1 (en) Apparatus and method for calibrating an endoscope
CN107689045B (zh) 内窥镜微创手术导航的图像显示方法、装置及***
US10973585B2 (en) Systems and methods for tracking the orientation of surgical tools
US10846883B2 (en) Method for calibrating objects in a reference coordinate system and method for tracking objects
WO2022206436A1 (zh) 一种动态位置识别提示***及其方法
CN109833092A (zh) 体内导航***和方法
EP3075342B1 (en) Microscope image processing device and medical microscope system
WO2022206417A1 (zh) 一种物体空间校准定位方法
CN113133829B (zh) 一种手术导航***、方法、电子设备和可读存储介质
US20220031394A1 (en) Method and System for Providing Real Time Surgical Site Measurements
CN117481756A (zh) 穿刺引导方法、设备及机器人***
WO2022206406A1 (zh) 一种基于校正物体在空间中位置的增强现实***、方法及计算机可读存储介质
CN216535498U (zh) 一种基于物体在空间中的定位装置
US20220020166A1 (en) Method and System for Providing Real Time Surgical Site Measurements
US20230166120A1 (en) Device and method for image-based support of a user
WO2020181498A1 (zh) 体内导航***和方法
CN116807617A (zh) 一种手术机器人***及手术导航方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778619

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18553351

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22778619

Country of ref document: EP

Kind code of ref document: A1