WO2022206406A1 - Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium - Google Patents

Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium

Info

Publication number: WO2022206406A1
Application number: PCT/CN2022/081469
Authority: WO (WIPO, PCT)
Prior art keywords: position information, space, correcting, augmented reality, image
Priority date: 2021-04-01
Filing date: 2022-03-17
Other languages: English (en), French (fr)
Inventors: 孙非, 朱奕, 郭晓杰, 崔芙粒, 单莹
Original assignee: 上海复拓知达医疗科技有限公司
Application filed by 上海复拓知达医疗科技有限公司
Publication of WO2022206406A1 (zh)

Classifications

    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/25: User interfaces for surgical systems
    • G06T 19/006: Mixed reality (manipulating 3D models or images for computer graphics)
    • G16H 20/40: ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G16H 50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for simulation or modelling of medical disorders
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2046: Surgical navigation tracking techniques
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2034/2068: Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points

Definitions

  • The present invention relates to the technical field of image processing, and in particular to an augmented reality system and method based on correcting the position of an object in space.
  • Augmented reality technology typically captures images of a real scene through a camera, analyzes and processes the captured images, and adds supplementary information on top of the real scene for display to the user, i.e. it augments reality.
  • Analyzing and processing images of the real scene often involves localizing objects in the scene. Under certain specific requirements, the demanded accuracy of object localization in the scene is extremely high, and the accuracy achieved by the prior art cannot meet it.
  • For example, when augmented reality technology is applied to surgical navigation scenes, the positional relationships between medical devices, the patient and the scene must be determined very accurately to ensure that accurate navigation information is provided to the user.
  • Puncture navigation based on augmented reality technology can achieve fast and precise surgical navigation with the simplest, most convenient, easy-to-learn and easy-to-use equipment.
  • One core of precise navigation, namely the accurate visible-light-pattern-based spatial positioning of surgical instruments and the registration of virtual organs to the real human body, depends on accurate spatial localization of the recognizable patterns on the objects to be positioned. Owing to constraints of device design, recognizable patterns of different sizes and shapes differ in their characteristic spatial positioning accuracy, because of the inherent regularities of the spatial distribution of their own pattern feature points or the characteristics of their production processes.
  • The purpose of the present invention is to provide an augmented reality system and method based on correcting the position of an object in space.
  • An augmented reality system based on correcting the position of an object in space, comprising a first acquisition unit, a second acquisition unit, a correction unit and a display unit, wherein:
  • the first acquisition unit is configured to capture an image of a first object in space and to recognize first-object identification characteristics in the image, obtaining first-object spatial position information;
  • the second acquisition unit is configured to capture, when a second object is at a specific position, a second-object image of the second object in space, and to recognize second-object identification characteristics in the second-object image, obtaining second-object spatial position information;
  • the correction unit includes a first correction unit and/or a second correction unit, wherein:
  • the first correction unit is configured to correct the second-object spatial position information according to the first-object spatial position information and the specific position;
  • the second correction unit is configured to correct the first-object spatial position information according to the second-object spatial position information;
  • the display unit is configured to display augmented reality information related to the position of the first object or the second object.
  • The first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic; the first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object.
  • The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.
  • The first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation; the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.
  • The specific position is the position at which the second object has a specific positional relationship with the first object, the specific positional relationship including full or partial coincidence between the second object and a preset point, line or surface on the first object.
  • The first correction unit is specifically configured to calculate second-object theoretical position information according to the first-object spatial position information and the specific positional relationship, and to correct the second object's spatial position information according to the theoretical position information.
  • The first correction unit is used to correct the x and y coordinates of the second object.
  • The second correction unit is specifically configured to calculate first-object theoretical position information according to the second-object spatial position information and the specific positional relationship, and to correct the first object's spatial position information according to the theoretical position information.
  • The second correction unit is used to correct the z coordinate of the first object.
  • The first object is a fixture in a surgical scene; the second object is an operating instrument in the surgical scene.
  • An augmented reality method based on correcting the position of an object in space, including: capturing an image of a first object in space and recognizing first-object identification characteristics in the image, obtaining first-object spatial position information; when a second object is at a specific position, capturing a second-object image of the second object in space and recognizing second-object identification characteristics in the image, obtaining second-object spatial position information; correcting the second-object spatial position information according to the first-object spatial position information and the specific position, and/or correcting the first-object spatial position information according to the second-object spatial position information; and displaying augmented reality information related to the position of the first object or the second object.
  • The first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic; the first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object.
  • The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.
  • The first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation; the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.
  • The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object, the specific positional relationship including full or partial coincidence of points, lines or surfaces.
  • Correcting the second-object spatial position information according to the first-object spatial position information and the specific position includes: calculating second-object theoretical position information according to the first-object spatial position information and the specific positional relationship; and correcting the second object's spatial position information according to the second-object theoretical position information.
  • Correcting the spatial position information of the second object includes correcting the x and y coordinates of the second object.
  • Correcting the first-object spatial position information according to the second-object spatial position information includes: calculating first-object theoretical position information according to the second-object spatial position information and the specific positional relationship; and correcting the first object's spatial position information according to the first-object theoretical position information.
  • Correcting the spatial position information of the first object includes correcting the z coordinate of the first object.
  • The first object is a fixture in a surgical scene; the second object is an operating instrument in the surgical scene.
  • The present invention also provides a computer-readable storage medium storing a non-transitory computer-executable program for instructing a computer to execute the method described in the present invention.
  • The present invention provides an augmented reality system and method based on correcting the position of an object in space.
  • By using, in the same scene, identification characteristics of objects with different error characteristics, and by exploiting the spatial association between the two corresponding objects, two different objects can undergo mutual correction of image acquisition and position, improving the optical positioning accuracy of one or both.
  • The method and system can be applied in many settings, such as the positioning of medical-device operations during surgery, applications in teaching simulations, and applications in game activities; accurate positioning and position-related augmented reality can help users perform precise and complete operations.
  • FIG. 1 is a structural block diagram of the augmented reality system based on correcting the position of an object in space according to the present invention;
  • FIG. 2 is an example diagram of an embodiment in the specific implementation of the present invention;
  • FIG. 3 is a flow chart of the augmented reality method based on correcting the position of an object in space according to the present invention;
  • FIG. 4 is a schematic diagram of mutual calibration based on the identification plate according to the present invention.
  • The present invention provides an augmented reality method based on correcting the position of an object in space, which can be applied to surgical scenes, to operating scenes in simulated teaching, or to positioning during games.
  • The embodiment of the present invention provides the user with localization of tissue inside a subject's body and/or of instruments placed inside the subject's body.
  • The user is the observer of the whole in-vivo navigation process and is also the operator who advances the instrument into the subject's body.
  • The subject can be a person or another animal on which the user needs to operate.
  • The instrument can be any tool that can be advanced into the subject's body.
  • The instrument may be, for example, a puncture needle, biopsy needle, radiofrequency or microwave ablation needle, ultrasound probe, rigid endoscope, endoscopic oval forceps, electric scalpel, stapler or another medical instrument.
  • The first object is a fixture in a surgical scene; the second object is an operating instrument in the surgical scene.
  • As shown in FIG. 1, an augmented reality system based on correcting the position of an object in space, applicable to surgical operations, simulated teaching operations or games, specifically includes a first acquisition unit 1, a second acquisition unit 2, a correction unit 3 and a display unit 4, wherein:
  • the first acquisition unit 1 is configured to capture an image of a first object in space and to recognize first-object identification characteristics in the first-object image, obtaining first-object spatial position information;
  • the second acquisition unit 2 is configured to capture, when a second object is at a specific position, a second-object image of the second object in space, and to recognize second-object identification characteristics in the second-object image, obtaining second-object spatial position information;
  • the correction unit 3 includes a first correction unit 31 and/or a second correction unit 32, wherein:
  • the first correction unit 31 is configured to correct the second-object spatial position information according to the first-object spatial position information and the specific position;
  • the second correction unit 32 is configured to correct the first-object spatial position information according to the second-object spatial position information;
  • the display unit 4 is configured to display augmented reality information related to the position of the first object or the second object.
  • In order to perform positioning calibration on the second object, the spatial position information of a fixed object is obtained first; the first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation, enabling specific spatial localization of the fixed first object.
  • The first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic.
  • The first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; in specific implementations, however, it is not limited to these and may be any other recognizable characteristic of the object.
  • For example, an object of fixed shape can be fixedly placed in the present invention; before calibration, the shape of the object's structure is recognized, and during recognition different display modes can prompt the user whether the capture and recognition processes succeeded. The object is then localized and identified, and its accurate spatial position information is obtained.
  • The first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object.
  • The pattern, graphic or QR code can be applied to the first object by a printing process; recognizable patterns possess different spatial accuracy depending on the regularities of the patterns themselves and their production characteristics. Combinations of recognizable patterns with different characteristics are fully exploited to achieve rapid spatial calibration of the navigation instrument.
  • The device for capturing the first-object image is an image acquisition device whose capture angle is kept consistent with the user's viewing direction.
  • The user may wear the image acquisition device on the body, for example on the head.
  • Optionally, the image acquisition device is a head-mounted optical camera; whatever posture the user adopts, the camera's capture angle can be kept well aligned with the viewing direction.
  • The first-object image is acquired through the image acquisition device; the first structure information corresponding to the first object is looked up in the database according to the first-object image; the first object's position and orientation are recognized; and current spatial coordinates, denoted X1, Y1, Z1, are set for the first object.
  • The second object is a moving instrument.
  • The second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.
  • The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the specific positional relationship may be that the second object coincides with a preset point, line or surface on the first object, or partially coincides with it within a preset range.
  • The first correction unit 31 is specifically configured to calculate second-object theoretical position information according to the first-object spatial position information and the specific positional relationship, and to correct the second object's spatial position information accordingly; for example, the first correction unit 31 corrects the x and y coordinates of the second object.
  • The display unit 4 is configured to display the second-object image, information content associated with the second object's position, or position prompt information associated with the second object's position.
  • The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.
  • A QR code is a planar black-and-white figure whose points are very easy to identify; by recognizing at least three of its points, the QR code can be localized. Because the QR code is fixed to an object or instrument, the object or instrument carrying it can in turn be localized.
  • The second-object marker identification characteristic may also be another planar figure such as a checkerboard.
  • Using a QR code or checkerboard as the marker makes localizing the object or instrument more accurate and faster, so fast-moving instruments can be navigated more precisely.
  • The marker fixed on the instrument's surface can also be a three-dimensional figure.
  • The marker can be the instrument's handle, or a structure fixed to the side of the handle.
  • In the present invention, the second object is a surgical puncture needle whose end is provided with an identification structure printed with a QR code.
  • The second acquisition unit 2 is specifically configured as follows: the first object is fixed in space and the second object is a moving object; when the second object moves to the specific position, the second object is recognized according to the second-object marker identification characteristics, obtaining the second object's orientation and/or setting current second-object spatial coordinates for it.
  • The second correction unit 32 is specifically configured to calculate first-object theoretical position information according to the second-object spatial position information and the specific position, and to correct the first object's spatial position information accordingly; for example, the second correction unit 32 corrects the z coordinate of the first object.
  • The display unit 4 is configured to display the first-object image, information content associated with the first object's position, or position prompt information associated with the first object's position.
  • The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the second object may coincide with, or partially coincide within a preset range with, a preset point, line or surface on the first object.
  • When in use, the user can three-dimensionally display, at the corresponding positions in the actual surgical scene, the subject's internal organs, lesions, and the parts of instruments inside the body that are not actually visible.
  • The invisible internal organs, lesions and in-body instrument parts are aligned with the human body and the actual instrument to guide the user through the surgical procedure.
  • Recognition can be performed based on the first object and the second object, and optical identifiers with different error characteristics can be used in the same scene.
  • Through the spatial association of the corresponding objects, the optical positioning accuracy of one or both can be improved.
  • For identifiers with different error characteristics, instruments spatially associated with them are matched by geometric structure to determine the correlation of the coordinates of the different identification patterns in the same space; by using known trusted values, the spatially recognized positions of the different identification patterns are calibrated.
  • The present invention also provides an augmented reality method based on correcting the position of an object in space, including:
  • S1: capture an image of a first object in space and recognize first-object identification characteristics in the first-object image, obtaining first-object spatial position information.
  • In order to perform positioning calibration on the second object, the specific spatial position information of a fixed object is obtained first; this spatial position information includes at least the first-object spatial coordinates and/or first-object orientation, enabling specific spatial localization of the fixed first object.
  • The first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic.
  • The first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; in specific implementations it is not limited to these and may be any other recognizable characteristic of the object.
  • For example, an object of fixed shape can be fixedly placed in the present invention; before calibration, the shape of the object's structure is recognized.
  • During recognition, different display modes can prompt the user whether the capture and recognition processes succeeded. The object is then localized and identified, and its accurate spatial position information is obtained.
  • The first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object.
  • The pattern, graphic or QR code can be applied to the first object by a printing process; recognizable patterns possess different spatial accuracy depending on the regularities of the patterns themselves and their production characteristics. Combinations of recognizable patterns with different characteristics are fully exploited to achieve rapid spatial calibration of the navigation instrument.
  • The device for capturing the first-object image is an image acquisition device whose capture angle is consistent with the user's viewing direction; the user may wear the device on the body, for example on the head. Optionally, it is a head-mounted optical camera whose capture angle can be kept well aligned with the viewing direction regardless of the user's posture, which both ensures that the displayed angle is the user's viewing angle, guaranteeing the accuracy of the instrument display, and avoids interfering with the user's operations during use.
  • The first-object image is acquired by the image acquisition device; the first-object marker identification characteristics are recognized; the first-object body morphological characteristics are obtained from them; the first object's orientation is determined; and current first-object spatial coordinates, denoted X1, Y1, Z1, are set.
  • S2: when the second object is at a specific position, capture a second-object image of the second object in space and recognize second-object identification characteristics, obtaining second-object spatial position information. The second object is a moving instrument.
  • The second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.
  • The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.
  • A QR code is a planar black-and-white figure whose points are very easy to identify; by recognizing at least three of its points, the QR code can be localized. Because the QR code is fixed to an object or instrument, the object or instrument carrying it can in turn be localized.
  • The second-object marker identification characteristic may also be another planar figure such as a checkerboard.
  • Using a QR code or checkerboard as the marker makes localizing the object or instrument more accurate and faster, so fast-moving instruments can be navigated more precisely.
  • The marker fixed on the instrument's surface can also be a three-dimensional figure.
  • The marker can be the instrument's handle, or a structure fixed to the side of the handle.
  • In the present invention, the second object is a surgical puncture needle whose end is provided with an identification structure printed with a QR code.
  • Capturing the second-object image of the second object in space specifically includes:
  • the first object is fixed in space and the second object is a moving object; when the second object moves to the specific position, the second-object image of the second object in space is captured.
  • The specific position can be set as the second object moving into the preset coincidence with the first object; alternatively, according to the needs of the actual operation, localization can be performed when some position on the second object reaches a fixed position or a prescribed action is completed.
  • Specifically: when the second object moves to the specific position, it is recognized according to the second-object marker identification characteristics; from the second-object body morphological characteristics its orientation is obtained, and current second-object spatial coordinates are set for it.
  • The specific position is the position at which the second object has a specific positional relationship with a preset associated point, line or surface on the first object, the specific positional relationship including full or partial coincidence of points, lines or surfaces.
  • For example, with the information board as the first object and the puncture needle as the second object, when the user holds the needle so that tip point B coincides with point A on the information board, both objects are localized and mutually calibrated.
  • S3: correct the second-object spatial position information according to the first-object spatial position information and the specific position; and/or correct the first-object spatial position information according to the second-object spatial position information. S4: display augmented reality information related to the position of the first object or the second object.
  • This can comprise two procedures in which, depending on the actual situation, the two objects are corrected relative to each other: for example, second-object theoretical position information is calculated from the first-object spatial position information and the specific position, and the second object's spatial position information is corrected according to it; and/or first-object theoretical position information is calculated from the second-object spatial position information and the specific position, and the first object's spatial position information is corrected according to it.
  • The position of the first object in space is computed from the captured first-object image; the coordinates of point A are computed from the captured features of the first object (mainly the pattern features on the board).
  • When the doctor places needle tip point B at point A of the identification board, the coordinates of tip point B can be computed from the easily recognizable features at the needle's end.
  • Points A and B are known to coincide at this moment, but the coordinates of A and B obtained through step 1 and step 2 respectively are not necessarily identical.
  • From the spatial geometry of the two objects, the accuracy of point A's x and y coordinates on the first object is high while its z-coordinate accuracy is relatively low, whereas the z-coordinate accuracy of point B on the second object is relatively high.
  • Therefore the second object's X2 and Y2 coordinates are corrected by the first object's X1 and Y1 coordinates, and the first object's Z1 coordinate is corrected by the second object's Z2 coordinate. The corresponding positions of the two structures in the database are then adjusted as follows: X2 = X1; Y2 = Y1; Z1 = Z2.
  • The specific mutual calibration method consists of the following two parts.
  • A schematic of the mutual calibration is shown in FIG. 4.
  • The first object is the identification plate and the second object is the puncture needle:
  • (1) the coordinates of the needle tip point in the needle-identifier coordinate system are determined manually in advance; (2) a hole is machined in the identification plate, parallel to the z-axis, with the point at the bottom of the hole as the calibration point; the calibration point's coordinates p_Q in the plate coordinate system are known from the plate phantom design, and during calibration the needle is inserted into the hole with the tip at the calibration point, so that T_{C←Q} p_Q = T_{C←N} p_N.
  • The calibration point then has the following two expressions in the needle-identifier coordinate system: (a) p_N^{(a)}, determined directly by manual point calibration of the needle identifier; and (b) p_N^{(b)} = T_{C←N}^{-1} T_{C←Q} p_Q, obtained from the plate recognition via coordinate transformation.
  • Both coordinates above are representations of the calibration point in the needle-identifier coordinate system. Assuming that expression (a) is more accurate for the z component and expression (b) is more accurate for the x and y components, the mutually calibrated result is p_N = (x_N^{(b)}, y_N^{(b)}, z_N^{(a)})^T.
  • Notation: C: camera coordinate system; Q: identification-plate coordinate system; N: puncture-needle (needle-identifier) coordinate system; T_{B←A}: the coordinate transformation matrix from coordinate system A to coordinate system B.
  • In the plate point-calibration method, the camera recognizes the positioning plate and the puncture needle, so T_{C←Q} and T_{C←N} can be obtained. The needle tip is placed on a fixed point p on the identification plate. From the plate's machining model, the fixed point's coordinates in the plate coordinate system, p_Q, can be determined. Since this point's coordinates in the camera coordinate system remain unchanged, the coordinate relationship T_{C←Q} p_Q = T_{C←N} p_N holds, giving p_N = T_{C←N}^{-1} T_{C←Q} p_Q.
  • The present invention can also calibrate by direction calibration, which specifically includes:
  • the direction vector v_N of the puncture needle in the needle-identifier coordinate system is determined manually in advance; a hole is machined in the plate with known direction v_Q in the plate coordinate system (the calibration direction), and during calibration the needle is inserted into the hole, so that T_{C←Q} v_Q = T_{C←N} v_N.
  • The calibration direction then has two expressions in the needle-identifier coordinate system: (a) v_N^{(a)}, determined directly by manual direction calibration; and (b) v_N^{(b)} = T_{C←N}^{-1} T_{C←Q} v_Q, obtained from the plate recognition via coordinate transformation.
  • Both vectors above are representations of the calibration direction in the needle-identifier coordinate system. Assuming that expression (a) is more accurate for the w component and expression (b) is more accurate for the u and v components, the mutually calibrated result is v_N = (u_N^{(b)}, v_N^{(b)}, w_N^{(a)})^T.
  • The plate direction-calibration method is shown in FIG. 4.
  • The camera recognizes the identification plate and the puncture needle, giving T_{C←Q} and R_{C←N}. The needle tip is inserted into a fixed hole on the identification plate. From the plate's machining model, the hole's direction vector in the plate coordinate system, v_Q, can be determined. Since the direction vector does not change in the camera coordinate system, T_{C←Q} v_Q = T_{C←N} v_N, giving v_N = T_{C←N}^{-1} T_{C←Q} v_Q.
  • After direction calibration, the needle tip direction can be computed in real time by v_C = T_{C←N} v_N, where T_{C←N} is given by the camera after recognizing the needle identifier and v_N is the calibration result computed by mutual calibration or plate direction calibration.
  • When calibration is complete, the calibrated spatial position information of the first and/or second object is displayed, together with position-related augmented reality information; either the content of the information is related to the object's position, or the display position of the information is related to the object's position.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Urology & Nephrology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses an augmented reality system and method based on correcting the position of an object in space, comprising: capturing an image of a first object in space and recognizing first-object identification characteristics in the image to obtain first-object spatial position information; when a second object is at a specific position, capturing a second-object image of the second object in space and recognizing second-object identification characteristics in the image to obtain second-object spatial position information; correcting the second-object spatial position information according to the first-object spatial position information, and correcting the first-object spatial position information according to the second-object spatial position information; and, according to the first-object spatial position information and/or the second-object spatial position information, displaying augmented reality information related to the position of the first object and/or the second object. The method can help users perform accurate and complete operations.

Description

Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium

Technical Field

The present invention relates to the technical field of image processing, and in particular to an augmented reality system and method based on correcting the position of an object in space.

Background

Augmented reality technology typically captures images of a real scene through a camera, analyzes and processes the captured images, and adds supplementary information on top of the real scene for display to the user, i.e. it augments reality. Analyzing and processing images of the real scene often involves localizing objects in the scene. Under certain specific requirements, the demanded accuracy of object localization in the scene is extremely high, and the accuracy achieved by the prior art cannot meet it.

For example, when augmented reality technology is applied to surgical navigation scenes, the positional relationships between the medical instruments, the patient and the scene must be determined very accurately in order to ensure that accurate navigation information is provided to the user. Puncture navigation based on augmented reality, for instance, can achieve fast and precise surgical navigation with the simplest, most convenient, easiest-to-learn and easiest-to-use equipment. Within this whole workflow, one core of precise navigation, namely the accurate visible-light-pattern-based spatial positioning of surgical instruments and the registration of virtual organs to the real human body, depends on accurate spatial localization of the recognizable patterns on the objects to be positioned. Owing to constraints of instrument design, recognizable patterns of different sizes and shapes differ in their characteristic spatial positioning accuracy, because of the inherent regularities of the spatial distribution of their own pattern feature points or the characteristics of their production processes. A reusable identifier's recognition accuracy can be raised in advance by factory calibration before first clinical use, but for single-use identifiers whose error distributions also vary from product to product, such an opportunity for prior calibration hardly exists. How to quickly improve pattern recognition accuracy at the site of use is a major difficulty in the practical application of this technology.

Summary of the Invention

In view of the above defects or deficiencies, the purpose of the present invention is to provide an augmented reality system and method based on correcting the position of an object in space.

To achieve the above purpose, the technical solution of the present invention is as follows:
An augmented reality system based on correcting the position of an object in space, comprising a first acquisition unit, a second acquisition unit, a correction unit and a display unit, wherein:

the first acquisition unit is configured to capture an image of a first object in space and to recognize first-object identification characteristics in the first-object image, obtaining first-object spatial position information;

the second acquisition unit is configured to capture, when a second object is at a specific position, a second-object image of the second object in space, and to recognize second-object identification characteristics in the second-object image, obtaining second-object spatial position information;

the correction unit comprises a first correction unit and/or a second correction unit, wherein:

the first correction unit is configured to correct the second-object spatial position information according to the first-object spatial position information and the specific position;

the second correction unit is configured to correct the first-object spatial position information according to the second-object spatial position information;

the display unit is configured to display augmented reality information related to the position of the first object or the second object.

The first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic; the first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object.

The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.

The first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation; the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.

The specific position is the position at which the second object has a specific positional relationship with the first object, the specific positional relationship including full or partial coincidence between the second object and a preset point, line or surface on the first object.
The first correction unit is specifically configured to: calculate second-object theoretical position information according to the first-object spatial position information and the specific positional relationship; and correct the second object's spatial position information according to the second-object theoretical position information.

The first correction unit is used to correct the x and y coordinates of the second object.

The second correction unit is specifically configured to: calculate first-object theoretical position information according to the second-object spatial position information and the specific positional relationship; and correct the first object's spatial position information according to the first-object theoretical position information.

The second correction unit is used to correct the z coordinate of the first object.

The first object is a fixture in the surgical scene; the second object is an operating instrument in the surgical scene.
An augmented reality method based on correcting the position of an object in space, comprising:

capturing an image of a first object in space and recognizing first-object identification characteristics in the first-object image, obtaining first-object spatial position information;

when a second object is at a specific position, capturing a second-object image of the second object in space and recognizing second-object identification characteristics in the second-object image, obtaining second-object spatial position information;

correcting the second-object spatial position information according to the first-object spatial position information and the specific position, and/or correcting the first-object spatial position information according to the second-object spatial position information;

displaying augmented reality information related to the position of the first object or the second object.
The first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic; the first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object.

The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.

The first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation; the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.

The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object, the specific positional relationship including full or partial coincidence of points, lines or surfaces.

Correcting the second-object spatial position information according to the first-object spatial position information and the specific position includes: calculating second-object theoretical position information according to the first-object spatial position information and the specific positional relationship; and correcting the second object's spatial position information according to the second-object theoretical position information.

Preferably, correcting the second object's spatial position information includes correcting the second object's x and y coordinates.

Correcting the first-object spatial position information according to the second-object spatial position information includes: calculating first-object theoretical position information according to the second-object spatial position information and the specific positional relationship; and correcting the first object's spatial position information according to the first-object theoretical position information.

Preferably, correcting the first object's spatial position information includes correcting the first object's z coordinate.

The first object is a fixture in the surgical scene; the second object is an operating instrument in the surgical scene.

The present invention also provides a computer-readable storage medium storing a non-transitory computer-executable program for instructing a computer to execute the method described in the present invention.
Compared with the prior art, the beneficial effects of the present invention are as follows:

The present invention provides an augmented reality system and method based on correcting the position of an object in space. By using, in the same scene, identification characteristics of objects with different error characteristics, and by exploiting the spatial association between the two corresponding objects, the image acquisition and positions of two different objects are mutually corrected, improving the optical positioning accuracy of one or both. The method and system can be applied in many settings, for example positioning of medical-device operations during surgery, teaching simulations, and game activities; accurate positioning and position-related augmented reality can help users perform precise and complete operations.
Brief Description of the Drawings

FIG. 1 is a structural block diagram of the augmented reality system based on correcting the position of an object in space according to the present invention;

FIG. 2 is an example diagram of an embodiment in the specific implementation of the present invention;

FIG. 3 is a flow chart of the augmented reality method based on correcting the position of an object in space according to the present invention;

FIG. 4 is a schematic diagram of mutual calibration based on the identification plate according to the present invention.
Detailed Description

The present invention will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.

In scenes requiring precise operation, it is often necessary to accurately obtain an object's actual position and its position in the image. Under certain specific requirements, the demanded accuracy of object localization in the scene is extremely high; in medical procedures, for instance, the positional relationships between the medical instruments, the patient and the scene must be determined very accurately to ensure that accurate navigation information is provided to the user, helping medical personnel accurately find the correspondence between the operating position and the body. Based on this requirement, the present invention provides an augmented reality method based on correcting the position of an object in space, which can be applied to surgical scenes, to operating scenes in simulated teaching, or to positioning during games.

Taking a surgical scene as an example, the embodiment of the present invention provides the user with localization of tissue inside the subject's body and/or of instruments placed inside the subject's body. The user is the observer of the whole in-vivo navigation process and is also the operator who advances the instrument into the subject's body. The subject can be a person or another animal on which the user needs to operate. The instrument can be any tool that can be advanced into the subject's body, for example a puncture needle, biopsy needle, radiofrequency or microwave ablation needle, ultrasound probe, rigid endoscope, endoscopic oval forceps, electric scalpel, stapler or another medical instrument. Preferably, the first object is a fixture in the surgical scene and the second object is an operating instrument in the surgical scene.
As shown in FIG. 1, an augmented reality system based on correcting the position of an object in space, applicable to surgical operations, simulated teaching operations or games, specifically comprises a first acquisition unit 1, a second acquisition unit 2, a correction unit 3 and a display unit 4, wherein:

the first acquisition unit 1 is configured to capture an image of a first object in space and to recognize first-object identification characteristics in the first-object image, obtaining first-object spatial position information;

the second acquisition unit 2 is configured to capture, when a second object is at a specific position, a second-object image of the second object in space, and to recognize second-object identification characteristics in the second-object image, obtaining second-object spatial position information;

the correction unit 3 comprises a first correction unit 31 and/or a second correction unit 32, wherein:

the first correction unit 31 is configured to correct the second-object spatial position information according to the first-object spatial position information and the specific position;

the second correction unit 32 is configured to correct the first-object spatial position information according to the second-object spatial position information;

the display unit 4 is configured to display augmented reality information related to the position of the first object or the second object.

In order to perform positioning calibration on the second object, the spatial position information of a fixed object is obtained first. The first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation, enabling specific spatial localization of the fixed first object.

In the present invention, the first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic. The first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; in specific implementations, however, it is not limited to these and may be any other recognizable characteristic of the object. For example, an object of fixed shape can be fixedly placed; before calibration, the shape of the object's structure is recognized, and during recognition different display modes can prompt the user whether the capture and recognition processes succeeded. The object is then localized and identified, and its accurate spatial position information is obtained.

In addition, in the present invention, the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object. The pattern, graphic or QR code can be applied to the first object by a printing process; recognizable patterns possess different spatial accuracy depending on the regularities of the patterns themselves and their production characteristics. Combinations of recognizable patterns with different characteristics are fully exploited to achieve rapid spatial calibration of the navigation instrument.

For example, in the present invention, as shown in FIG. 2, a rectangular information board printed with a QR code can be used. The device capturing the first-object image is an image acquisition device whose capture angle is kept consistent with the user's viewing direction. In use, the user may wear the image acquisition device on the body, for example on the head. Optionally, the image acquisition device is a head-mounted optical camera; whatever posture the user adopts, the camera's capture angle can be kept well aligned with the viewing direction. This not only guarantees that the angle at which augmented reality information is displayed is the angle from which the user views, ensuring accuracy, but also avoids interfering with the user's operations during use, significantly improving the user experience. Objects in space are localized from the images captured by the camera, giving each object's position in an xyz spatial coordinate system, where the z coordinate lies along the depth direction of the camera's view and the x and y coordinates are perpendicular to the z axis. The first-object image is acquired through the image acquisition device, the first structure information corresponding to the first object is looked up in the database according to the first-object image, the first object's position and orientation are recognized, and current spatial coordinates, denoted X1, Y1, Z1, are set for the first object.
In a specific surgical scene, instruments are needed to operate. In the present invention the second object is a moving instrument, and the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.

The specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the specific positional relationship may be that the second object coincides with a preset point, line or surface on the first object, or partially coincides with it within a preset range.

The first correction unit 31 is specifically configured to: calculate second-object theoretical position information according to the first-object spatial position information and the specific positional relationship; and correct the second object's spatial position information according to the second-object theoretical position information. For example, the first correction unit 31 corrects the x and y coordinates of the second object.

The display unit 4 is configured to display the second-object image, information content associated with the second object's position, or position prompt information associated with the second object's position.

The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object. The second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.

A QR code is a planar black-and-white figure whose points are very easy to identify; by recognizing at least three of its points, the QR code can be localized. Because the QR code is fixed to an object or instrument, the object or instrument carrying it can in turn be localized.

Optionally, the second-object marker identification characteristic may also be another planar figure such as a checkerboard. Using a QR code or checkerboard as the marker makes localizing the object or instrument more accurate and faster, so that fast-moving instruments can be navigated more precisely.

Optionally, the marker fixed on the instrument's surface may also be a three-dimensional figure; for example, in the instrument's design and production, the marker figure may be the instrument's handle or a structure fixed to the side of the handle. Spatial localization with three-dimensional figures requires longer computation than with planar figures, but achieves higher spatial positioning accuracy for stationary or slow-moving targets.

For example, as shown in FIG. 2, the second object in the present invention is a surgical puncture needle whose end is provided with an identification structure printed with a QR code.
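The patent does not specify an implementation for recognizing such planar markers. Purely as an illustrative sketch, the pose of a planar marker (such as the QR code on the information board or on the needle end) could be estimated from a single camera image with OpenCV, assuming calibrated camera intrinsics and a known marker size; all function and variable names below are illustrative, not from the patent:

```python
# Minimal sketch: pose of a planar marker from detected corner points.
# Assumes camera_matrix / dist_coeffs from prior camera calibration and
# image_points as a (4, 2) float array of detected marker corners.
import cv2
import numpy as np

def marker_pose(image_points: np.ndarray,
                marker_size_mm: float,
                camera_matrix: np.ndarray,
                dist_coeffs: np.ndarray) -> np.ndarray:
    """Return T_{C<-M}: a 4x4 transform from marker frame M to camera frame C."""
    s = marker_size_mm / 2.0
    # Marker corners in the marker's own coordinate system (z = 0 plane),
    # ordered to match the detected image corners.
    object_points = np.array([[-s,  s, 0.0],
                              [ s,  s, 0.0],
                              [ s, -s, 0.0],
                              [-s, -s, 0.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)   # rotation: marker -> camera
    T[:3, 3] = tvec.ravel()              # translation in the camera frame
    return T
```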
Based on the above, the second acquisition unit 2 is specifically configured as follows:

the first object is fixed in space and the second object is a moving object; when the second object moves to the specific position, the second object is recognized according to the second-object marker identification characteristics, obtaining the second object's orientation and/or setting current second-object spatial coordinates for the second object.

The second correction unit 32 is specifically configured to: calculate first-object theoretical position information according to the second-object spatial position information and the specific position; and correct the first object's spatial position information according to the first-object theoretical position information. For example, the second correction unit 32 corrects the z coordinate of the first object.

The display unit 4 is configured to display the first-object image, information content associated with the first object's position, or position prompt information associated with the first object's position.

In the present invention, the specific position is the position at which the second object has a specific positional relationship with a preset point, line or surface on the first object; for example, the specific positional relationship may be that the second object coincides with a preset point, line or surface on the first object, or partially coincides with it within a preset range.

In use, the user can three-dimensionally display, at the corresponding positions in the actual surgical scene, the subject's internal organs, lesions, and the parts of instruments inside the body that are not actually visible. In other words, the invisible internal organs, lesions and in-body instrument parts are aligned with the human body and the actual instrument, thereby guiding the user through the surgical operation.

In this embodiment, recognition can be performed based on the first object and the second object: optical identifiers with different error characteristics are used in the same scene, and through the spatial association of the two corresponding objects, the optical positioning accuracy of one or both is improved. For identifiers with different error characteristics, the instruments spatially associated with them are matched by geometric structure to determine the correlation of the different identification patterns' coordinates in the same space; by using known trusted values, the spatially recognized positions of the different identification patterns are calibrated.
In addition, as shown in FIG. 3, the present invention also provides an augmented reality method based on correcting the position of an object in space, comprising:

S1: capture an image of a first object in space and recognize first-object identification characteristics in the first-object image, obtaining first-object spatial position information.

In order to perform positioning calibration on the second object, the specific spatial position information of a fixed object is obtained first; this spatial position information includes at least the first-object spatial coordinates and/or first-object orientation, enabling specific spatial localization of the fixed first object.

In the present invention, the first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic. The first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; in specific implementations it is not limited to these and may be any other recognizable characteristic of the object. For example, an object of fixed shape can be fixedly placed; before calibration, the shape of the object's structure is recognized, and during recognition different display modes can prompt the user whether the capture and recognition processes succeeded. The object is then localized and identified, and its accurate spatial position information is obtained.

In addition, the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object. The pattern, graphic or QR code can be applied to the first object by a printing process; recognizable patterns possess different spatial accuracy depending on the regularities of the patterns themselves and their production characteristics. Combinations of recognizable patterns with different characteristics are fully exploited to achieve rapid spatial calibration of the navigation instrument.

For example, as shown in FIG. 2, a rectangular information board printed with a QR code can be used. The device capturing the first-object image is an image acquisition device whose capture angle is kept consistent with the user's viewing direction. In use, the user may wear the image acquisition device on the body, for example on the head. Optionally, the image acquisition device is a head-mounted optical camera; whatever posture the user adopts, the camera's capture angle can be kept well aligned with the viewing direction. This not only guarantees that the displayed angle is the user's viewing angle, ensuring the accuracy of the instrument display, but also avoids interfering with the user's operations during use, significantly improving the user experience. The first-object image is acquired through the image acquisition device, the first-object marker identification characteristics are recognized, the first-object body morphological characteristics are obtained from them, the first object's orientation is determined, and current first-object spatial coordinates, denoted X1, Y1, Z1, are set for the first object.
S2: when the second object is at a specific position, capture a second-object image of the second object in space and recognize second-object identification characteristics in the second-object image, obtaining second-object spatial position information.

In a specific surgical scene, instruments are needed to operate. In the present invention the second object is a moving instrument, and the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.

The second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.

A QR code is a planar black-and-white figure whose points are very easy to identify; by recognizing at least three of its points, the QR code can be localized. Because the QR code is fixed to an object or instrument, the object or instrument carrying it can in turn be localized.

Optionally, the second-object marker identification characteristic may also be another planar figure such as a checkerboard. Using a QR code or checkerboard as the marker makes localizing the object or instrument more accurate and faster, so that fast-moving instruments can be navigated more precisely.

Optionally, the marker fixed on the instrument's surface may also be a three-dimensional figure; for example, in the instrument's design and production, the marker figure may be the instrument's handle or a structure fixed to the side of the handle. Spatial localization with three-dimensional figures requires longer computation than with planar figures, but achieves higher spatial positioning accuracy for stationary or slow-moving targets.

For example, as shown in FIG. 2, the second object in the present invention is a surgical puncture needle whose end is provided with an identification structure printed with a QR code.
When the second object is at the specific position, capturing the second-object image of the second object in space specifically includes:

the first object is fixed in space and the second object is a moving object; when the second object moves to the specific position, the second-object image of the second object in space is captured. In this process, the specific position can be set as the second object moving into the preset coincidence with the first object; alternatively, according to the needs of the actual operation, localization can be performed when some position on the second object reaches a fixed position or a prescribed action is completed.

Specifically: the first object is fixed in space and the second object is a moving object; when the second object moves to the specific position, the second object is recognized according to the second-object marker identification characteristics; from the second-object body morphological characteristics, the second object's orientation is obtained, and current second-object spatial coordinates, denoted X2, Y2, Z2, are set for it. The specific position is the position at which the second object has a specific positional relationship with a preset associated point, line or surface on the first object, the specific positional relationship including full or partial coincidence of points, lines or surfaces.

For example, with the information board as the first object and the puncture needle as the second object, when the user holds the puncture needle so that tip point B coincides with point A on the information board, both objects are localized and mutually calibrated.
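As a hedged illustration of this trigger (an assumed implementation detail, not spelled out in the patent), the coincidence of tip point B with board point A can be checked in the shared camera frame against a preset tolerance; the names and the 2 mm tolerance below are assumptions:

```python
# Minimal sketch: fire the mutual-calibration step when the tracked needle
# tip B comes within a preset tolerance of the board's preset point A,
# both expressed in the camera frame C via 4x4 homogeneous transforms.
import numpy as np

def to_camera(T_C_from_X: np.ndarray, p_X: np.ndarray) -> np.ndarray:
    """Map a 3D point from frame X into the camera frame."""
    return (T_C_from_X @ np.append(p_X, 1.0))[:3]

def at_specific_position(T_C_Q: np.ndarray, p_A_Q: np.ndarray,
                         T_C_N: np.ndarray, p_B_N: np.ndarray,
                         tol_mm: float = 2.0) -> bool:
    a = to_camera(T_C_Q, p_A_Q)   # board point A in the camera frame
    b = to_camera(T_C_N, p_B_N)   # needle tip B in the camera frame
    return bool(np.linalg.norm(a - b) <= tol_mm)
```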
S3: correct the second-object spatial position information according to the first-object spatial position information and the specific position; and/or correct the first-object spatial position information according to the second-object spatial position information.

S4: display augmented reality information related to the position of the first object or the second object.

This can comprise two procedures in which, depending on the actual situation, the two objects are corrected relative to each other: for example, second-object theoretical position information is calculated from the first-object spatial position information and the specific position, and the second object's spatial position information is corrected according to it; and/or first-object theoretical position information is calculated from the second-object spatial position information and the specific position, and the first object's spatial position information is corrected according to it.

As an illustration, as shown in FIG. 2, the position of the first object in space is computed from the captured first-object image; at this point the coordinates of point A are computed from the captured features of the first object (mainly the pattern features on the board).

When the doctor, holding the second object (the puncture needle), places tip point B at point A of the first object (the identification board), the coordinates of needle tip point B can be computed from the easily recognizable features provided at the needle's end.

It is known that points A and B coincide at this moment, but the coordinates of A and B obtained in step 1 and step 2 respectively are not necessarily identical. From the spatial geometry of the two objects, the accuracy of point A's x and y coordinates on the first object is high while its z-coordinate accuracy is relatively low, whereas the z-coordinate accuracy of point B on the second object is relatively high. Therefore the second object's X2 and Y2 coordinates are corrected by the first object's X1 and Y1 coordinates, and the first object's Z1 coordinate is corrected by the second object's Z2 coordinate. The corresponding positions of the two structures in the database are then adjusted as follows:

X2 = X1; Y2 = Y1; Z1 = Z2.
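The adjustment above amounts to a component-wise fusion of the two coincident points. A minimal sketch, assuming both points are already expressed in the same camera-aligned xyz frame (names illustrative):

```python
# Component-wise fusion of coincident points A (from the board) and
# B (from the needle): x and y are trusted from A, z is trusted from B.
import numpy as np

def fuse_coincident_points(A: np.ndarray, B: np.ndarray):
    """A = [X1, Y1, Z1], B = [X2, Y2, Z2]; returns corrected (A, B) with
    X2 = X1, Y2 = Y1, Z1 = Z2, so both objects agree at the contact point."""
    fused = np.array([A[0], A[1], B[2]], dtype=float)
    return fused.copy(), fused.copy()

# Example: A = [10.0, 20.0, 31.0], B = [10.4, 19.7, 30.0]
# -> both are corrected to [10.0, 20.0, 30.0].
```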
The specific mutual calibration method consists of the following two parts; a schematic of mutual calibration is shown in FIG. 4. In this specific implementation, the first object is the identification plate and the second object is the puncture needle.

(1) The coordinates of the puncture needle's tip point in the needle-identifier coordinate system are determined manually in advance.

(2) A hole is machined in the identification plate, parallel to the z-axis and perpendicular to the Oxy plane; the point at the bottom of the hole is the calibration point. By designing the identification-plate phantom, the coordinates p_Q of the calibration point in the identification-plate coordinate system are determined. During calibration, the puncture needle is inserted into the hole, ensuring that the tip point is at the calibration point. Since the calibration point's coordinates in the camera coordinate system remain unchanged, coordinate transformation gives the relationship T_{C←Q} p_Q = T_{C←N} p_N.

The calibration point then has the following two expressions in the needle-identifier coordinate system:

(a) the coordinates recognized from the needle identifier and determined directly by manual point calibration:

p_N^{(a)} = (x_N^{(a)}, y_N^{(a)}, z_N^{(a)})^T;

(b) the coordinates recognized from the identification plate and obtained by coordinate transformation:

p_N^{(b)} = T_{C←N}^{-1} T_{C←Q} p_Q = (x_N^{(b)}, y_N^{(b)}, z_N^{(b)})^T.

Both of the above are representations of the calibration point in the needle-identifier coordinate system. Assuming that expression (a) is more accurate for the z component and expression (b) is more accurate for the x and y components, the mutually calibrated result is

p_N = (x_N^{(b)}, y_N^{(b)}, z_N^{(a)})^T.

Notation: C: camera coordinate system; Q: identification-plate coordinate system; N: puncture-needle (needle-identifier) coordinate system; T_{B←A}: the coordinate transformation matrix from coordinate system A to coordinate system B; p_A: point p expressed in coordinate system A; v_A: vector v expressed in coordinate system A.
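To make the notation concrete, a minimal sketch follows, assuming one standard representation (not mandated by the patent): each T_{B←A} is a 4x4 homogeneous matrix, under which points transform as p_B = T_{B←A} p_A:

```python
# Minimal sketch: 4x4 homogeneous rigid transforms for the T_{B<-A} notation.
import numpy as np

def make_T(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build T_{B<-A} from a 3x3 rotation R and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_T(T: np.ndarray) -> np.ndarray:
    """Invert a rigid transform: returns T_{A<-B} given T_{B<-A}."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti
```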
Plate point-calibration method: the camera recognizes the positioning plate and the puncture needle, giving T_{C←Q} and T_{C←N}. The puncture needle tip is placed on a fixed point p on the identification plate. From the identification plate's machining model, the fixed point's coordinates in the plate coordinate system, p_Q, can be determined. Since this point's coordinates in the camera coordinate system are unchanged, the coordinate relationship

T_{C←Q} p_Q = T_{C←N} p_N

holds, and therefore the point's coordinates in the puncture-needle coordinate system are

p_N = T_{C←N}^{-1} T_{C←Q} p_Q.
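A minimal sketch of this point calibration, under the assumed 4x4 homogeneous representation above; T_C_Q and T_C_N would come from the camera's recognition of the plate and the needle identifier, and p_Q from the plate's machining model (names illustrative):

```python
# Minimal sketch: plate point calibration and the component-wise fusion.
import numpy as np

def point_calibration(T_C_Q: np.ndarray, T_C_N: np.ndarray,
                      p_Q: np.ndarray) -> np.ndarray:
    """p_N = T_{C<-N}^{-1} T_{C<-Q} p_Q: calibration point in needle frame."""
    p_N_h = np.linalg.inv(T_C_N) @ T_C_Q @ np.append(p_Q, 1.0)
    return p_N_h[:3]

def mutual_point_calibration(p_N_a: np.ndarray, p_N_b: np.ndarray) -> np.ndarray:
    """Fuse per the text: z from manual expression (a), x and y from (b)."""
    return np.array([p_N_b[0], p_N_b[1], p_N_a[2]])
```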
In addition, the present invention can also calibrate by means of direction calibration, which specifically includes:

(1) The direction vector v_N of the puncture needle in the needle-identifier coordinate system is determined manually in advance.

(2) A hole is machined in the identification plate, parallel to the z-axis and perpendicular to the Oxy plane; the point at the bottom of the hole is the calibration point, and the direction of the hole is called the calibration direction. By designing the identification-plate phantom, the direction vector v_Q of the hole in the identification-plate coordinate system is determined. During calibration, the needle is inserted into the hole, ensuring that the tip point is at the calibration point. Since the calibration direction remains unchanged in the camera coordinate system, coordinate transformation gives the relationship T_{C←Q} v_Q = T_{C←N} v_N.

The calibration direction then has two expressions in the needle-identifier coordinate system:

(a) the puncture needle's direction vector recognized from the needle identifier and determined directly by manual direction calibration:

v_N^{(a)} = (u_N^{(a)}, v_N^{(a)}, w_N^{(a)})^T;

(b) the puncture needle's direction vector recognized from the identification plate and obtained by coordinate transformation:

v_N^{(b)} = T_{C←N}^{-1} T_{C←Q} v_Q = (u_N^{(b)}, v_N^{(b)}, w_N^{(b)})^T.

Both of the above are representations of the calibration direction in the needle-identifier coordinate system. Assuming that expression (a) is more accurate for the w component and expression (b) is more accurate for the u and v components, the mutually calibrated result is

v_N = (u_N^{(b)}, v_N^{(b)}, w_N^{(a)})^T.

The plate direction-calibration method is shown in FIG. 4. The camera recognizes the identification plate and the puncture needle, giving T_{C←Q} and R_{C←N}. The puncture needle tip is inserted into a fixed hole on the identification plate. From the identification plate's machining model, the hole's direction vector in the plate coordinate system, v_Q, can be determined. Since this direction vector is unchanged in the camera coordinate system, the conversion relationship

T_{C←Q} v_Q = T_{C←N} v_N

holds, and therefore the direction vector's representation in the puncture-needle coordinate system is

v_N = T_{C←N}^{-1} T_{C←Q} v_Q.

After direction calibration, when the camera recognizes the needle identifier in real time, the needle tip direction can be computed in real time by the formula

v_C = T_{C←N} v_N,

where T_{C←N} is given by the camera after recognizing the needle identifier, and v_N is the calibration result computed by mutual calibration or by plate direction calibration.
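A corresponding sketch of direction calibration and the real-time tip-direction computation, under the same assumed 4x4 representation; only the rotational part of each transform acts on direction vectors (names illustrative):

```python
# Minimal sketch: plate direction calibration and real-time tip direction.
import numpy as np

def direction_calibration(T_C_Q: np.ndarray, T_C_N: np.ndarray,
                          v_Q: np.ndarray) -> np.ndarray:
    """v_N = T_{C<-N}^{-1} T_{C<-Q} v_Q, applied to the rotational parts only."""
    R = np.linalg.inv(T_C_N)[:3, :3] @ T_C_Q[:3, :3]
    v_N = R @ v_Q
    return v_N / np.linalg.norm(v_N)   # keep it a unit direction vector

def tip_direction_in_camera(T_C_N: np.ndarray, v_N: np.ndarray) -> np.ndarray:
    """Real-time needle direction in the camera frame: v_C = T_{C<-N} v_N."""
    return T_C_N[:3, :3] @ v_N
```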
When calibration is complete, the calibrated spatial position information of the first object and/or the second object is displayed, together with position-related augmented reality information; either the content of the information is related to the object's position, or the display position of the information is related to the object's position.

It will be apparent to those skilled in the art that the above specific examples are merely preferred solutions of the present invention. Improvements and variations that those skilled in the art may make to certain parts of the present invention still embody the principle of the present invention and still achieve its purpose, and all fall within the protection scope of the present invention.

Claims (10)

  1. An augmented reality system based on correcting the position of an object in space, characterized by comprising: a first acquisition unit, a second acquisition unit, a correction unit and a display unit, wherein:
    the first acquisition unit is configured to capture an image of a first object in space and to recognize first-object identification characteristics in the first-object image, obtaining first-object spatial position information;
    the second acquisition unit is configured to capture, when a second object is at a specific position, a second-object image of the second object in space, and to recognize second-object identification characteristics in the second-object image, obtaining second-object spatial position information;
    the correction unit comprises a first correction unit and/or a second correction unit, wherein:
    the first correction unit is configured to correct the second-object spatial position information according to the first-object spatial position information and the specific position;
    the second correction unit is configured to correct the first-object spatial position information according to the second-object spatial position information;
    the display unit is configured to display augmented reality information related to the position of the first object or the second object.
  2. The augmented reality system based on correcting the position of an object in space according to claim 1, characterized in that the first-object identification characteristic includes at least a first-object body morphological characteristic and/or a first-object marker identification characteristic; the first-object body morphological characteristic includes at least the structure, shape or color of the first object's body; the first-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the first object;
    the second-object identification characteristic includes at least a second-object body morphological characteristic and/or a second-object marker identification characteristic; the second-object body morphological characteristic includes at least the structure, shape or color of the second object's body; the second-object marker identification characteristic includes at least a pattern, graphic or QR code provided on the second object.
  3. The augmented reality system based on correcting the position of an object in space according to claim 1, characterized in that the first-object spatial position information includes at least first-object spatial coordinates and/or a first-object orientation; the second-object spatial position information includes at least second-object spatial coordinates and/or a second-object orientation.
  4. The augmented reality system based on correcting the position of an object in space according to claim 1, characterized in that the specific position is the position at which the second object has a specific positional relationship with the first object.
  5. The augmented reality system based on correcting the position of an object in space according to claim 4, characterized in that the first correction unit is specifically configured to: calculate second-object theoretical position information according to the first-object spatial position information and the specific positional relationship, and correct the second object's spatial position information according to the second-object theoretical position information;
    the second correction unit is specifically configured to: calculate first-object theoretical position information according to the second-object spatial position information and the specific positional relationship, and correct the first object's spatial position information according to the first-object theoretical position information.
  6. The augmented reality system based on correcting the position of an object in space according to claim 4, characterized in that the first correction unit is used to correct the x and y coordinates of the second object, and the second correction unit is used to correct the z coordinate of the first object.
  7. The augmented reality system based on correcting the position of an object in space according to any one of claims 1 to 6, characterized in that the first object is a fixture in a surgical scene and the second object is an operating instrument in the surgical scene.
  8. An augmented reality method based on correcting the position of an object in space, characterized by comprising:
    capturing an image of a first object in space and recognizing first-object identification characteristics in the first-object image, obtaining first-object spatial position information;
    when a second object is at a specific position, capturing a second-object image of the second object in space and recognizing second-object identification characteristics in the second-object image, obtaining second-object spatial position information;
    correcting the second-object spatial position information according to the first-object spatial position information and the specific position, and/or correcting the first-object spatial position information according to the second-object spatial position information;
    displaying augmented reality information related to the position of the first object or the second object.
  9. The augmented reality method based on correcting the position of an object in space according to claim 8, characterized in that the first object is a fixture in a surgical scene and the second object is an operating instrument in the surgical scene.
  10. A computer-readable storage medium storing a non-transitory computer-executable program, characterized in that the computer-executable program is used to instruct a computer to execute the method according to any one of claims 8 to 9.
PCT/CN2022/081469 2021-04-01 2022-03-17 Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium WO2022206406A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110357372.X 2021-04-01
CN202110357372.XA CN113509264A (zh) 2021-04-01 Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022206406A1 (zh)

Family

ID=78061350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081469 WO2022206406A1 (zh) 2021-04-01 2022-03-17 Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN113509264A (zh)
WO (1) WO2022206406A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113509264A (zh) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 一种基于校正物体在空间中位置的增强现实***、方法及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090312629A1 (en) * 2008-06-13 2009-12-17 Inneroptic Technology Inc. Correction of relative tracking errors based on a fiducial
US20110082467A1 (en) * 2009-10-02 2011-04-07 Accumis Inc. Surgical tool calibrating device
US20200078133A1 (en) * 2017-05-09 2020-03-12 Brainlab Ag Generation of augmented reality image of a medical device
CN113509263A (zh) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Object space calibration and positioning method
CN113509264A (zh) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium
CN216535498U (zh) * 2021-04-01 2022-05-17 上海复拓知达医疗科技有限公司 Positioning device based on the localization of an object in space

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10137914B4 (de) * 2000-08-31 2006-05-04 Siemens Ag Method for determining a coordinate transformation for the navigation of an object
EP1349114A3 (en) * 2002-03-19 2011-06-15 Canon Kabushiki Kaisha Sensor calibration apparatus, sensor calibration method, program, storage medium, information processing method, and information processing apparatus
DE102007013407B4 (de) * 2007-03-20 2014-12-04 Siemens Aktiengesellschaft Method and device for providing correction information
CN101904770B (zh) * 2009-06-05 2012-11-14 复旦大学 Surgical navigation system and method based on optical augmented reality technology
US9572539B2 (en) * 2011-04-08 2017-02-21 Imactis Device and method for determining the position of an instrument in relation to medical images
KR101367366B1 (ko) * 2012-12-13 2014-02-27 주식회사 사이버메드 Method and tool for calibrating a surgical tool for image-guided surgery
US11534243B2 (en) * 2016-11-23 2022-12-27 Clear Guide Medical, Inc. System and methods for navigating interventional instrumentation
CA3050177A1 (en) * 2017-03-10 2018-09-13 Brainlab Ag Medical augmented reality navigation
US20210121237A1 (en) * 2017-03-17 2021-04-29 Intellijoint Surgical Inc. Systems and methods for augmented reality display in navigated surgeries
CN110506297B (zh) * 2017-04-17 2023-08-11 康耐视公司 High-accuracy calibration system and method
TWI678181B (zh) * 2018-04-30 2019-12-01 長庚大學 Surgical guidance system
CN110769245A (zh) * 2018-07-27 2020-02-07 华为技术有限公司 Calibration method and related device
CN110353806B (zh) * 2019-06-18 2021-03-12 北京航空航天大学 Augmented reality navigation method and system for minimally invasive total knee arthroplasty
EP3760157A1 (en) * 2019-07-04 2021-01-06 Scopis GmbH Technique for calibrating a registration of an augmented reality device
CN111540060B (zh) * 2020-03-25 2024-03-08 深圳奇迹智慧网络有限公司 Display calibration method and apparatus for augmented reality device, and electronic device


Also Published As

Publication number Publication date
CN113509264A (zh) 2021-10-19

Similar Documents

Publication Publication Date Title
JP6889703B2 (ja) Method and apparatus for intraoperatively observing a 3D surface image of a patient
EP2637593B1 (en) Visualization of anatomical data by augmented reality
EP3254621A1 (en) 3D image special calibrator, surgical localizing system and method
CN113940755B (zh) Surgical planning and navigation method integrating surgery and imaging
KR102105974B1 (ko) Medical imaging system
CN109998678A (zh) Augmented-reality-assisted navigation during medical procedures
US20160000518A1 (en) Tracking apparatus for tracking an object with respect to a body
WO2022206417A1 (zh) Object space calibration and positioning method
CN105078573B (zh) Neuronavigation spatial registration method based on a handheld scanner
US20080123910A1 (en) Method and system for providing accuracy evaluation of image guided surgery
Lathrop et al. Minimally invasive holographic surface scanning for soft-tissue image registration
CN103948432A (zh) Augmented reality algorithm for intraoperative stereoscopic endoscopic video and ultrasound images
Zeng et al. A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation
Agustinos et al. Visual servoing of a robotic endoscope holder based on surgical instrument tracking
Shao et al. Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation
WO2022206406A1 (zh) Augmented reality system and method based on correcting the position of an object in space, and computer-readable storage medium
CN116327079A (zh) Endoscope measurement system and tool
CN109833092A (zh) In-vivo navigation system and method
Jiang et al. Optical positioning technology of an assisted puncture robot based on binocular vision
Meng et al. An automatic markerless registration method for neurosurgical robotics based on an optical camera
CN113100941B (zh) Image registration method and system based on an SS-OCT surgical navigation system
CN216535498U (zh) Positioning device based on the localization of an object in space
Wang et al. Real-time marker-free patient registration and image-based navigation using stereovision for dental surgery
CN112971996A (zh) Computer-readable storage medium, electronic device, and surgical robot system
CN113616293A (zh) Attitude-angle-based ultrasound-guided puncture navigation system and method

Legal Events

Date Code Title Description
121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22778589; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122: EP: PCT application non-entry in European phase (Ref document number: 22778589; Country of ref document: EP; Kind code of ref document: A1)