WO2020151119A1 - An augmented reality method and device for dental surgery - Google Patents

An augmented reality method and device for dental surgery

Info

Publication number
WO2020151119A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
coordinate
visual marker
transformation matrix
pixel
Prior art date
Application number
PCT/CN2019/084455
Other languages
English (en)
French (fr)
Inventor
王利峰
Original Assignee
雅客智慧(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 雅客智慧(北京)科技有限公司
Publication of WO2020151119A1

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61CDENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C8/00Means to be fixed to the jaw-bone for consolidating natural teeth or for fixing dental prostheses thereon; Dental implants; Implanting tools

Definitions

  • the embodiments of the application relate to the technical field of medical robots, and in particular to an augmented reality method and device for dental surgery.
  • Dental implant surgery requires precise operations in a confined space. Because the oral cavity offers little room to work, the implant site is difficult to observe and manipulate directly, so implant positioning depends on the surgeon's experience. As a result, the positioning of the implant is often not accurate enough, which can easily lead to failure of the implant surgery.
  • the embodiments of the present application provide an augmented reality method and device for dental surgery.
  • an embodiment of the present application proposes an augmented reality method for dental surgery, including:
  • acquiring the position and posture of a first visual marker and the position and posture of a second visual marker, where the first visual marker corresponds to an implant handpiece and the second visual marker corresponds to the patient's jaw;
  • obtaining, according to the position and posture of the first visual marker and the position and posture of the second visual marker, a transformation matrix between a second coordinate system established based on the second visual marker and a first coordinate system established based on the first visual marker;
  • obtaining the coordinate set of the object to be positioned in the first coordinate system according to the coordinate set of the object to be positioned in a virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system, where the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance;
  • obtaining the pixel coordinate set of the object to be positioned in the pixel coordinate system according to its coordinate set in the first coordinate system and a projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece, and displaying the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set; the projection matrix is obtained in advance.
  • an embodiment of the present application provides an augmented reality device for dental surgery, including:
  • an acquiring unit, configured to acquire the position and posture of the first visual marker and the position and posture of the second visual marker, where the first visual marker corresponds to the implant handpiece and the second visual marker corresponds to the patient's jaw;
  • a first obtaining unit, configured to obtain, according to the position and posture of the first visual marker and the position and posture of the second visual marker, the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker;
  • a second obtaining unit, configured to obtain the coordinate set of the object to be positioned in the first coordinate system according to its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system, where the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance;
  • a display unit, configured to obtain the pixel coordinate set of the object to be positioned in the pixel coordinate system according to its coordinate set in the first coordinate system and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece, and to display the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set; the projection matrix is obtained in advance.
  • an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the program, implements the steps of the augmented reality method for dental surgery described in any of the above embodiments.
  • an embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the augmented reality method for dental surgery described in any of the above embodiments are implemented.
  • the augmented reality method and device for dental surgery can acquire the position and posture of the first visual marker and the position and posture of the second visual marker; obtain, from these poses, the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker; obtain the coordinate set of the object to be positioned in the first coordinate system according to its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system; then obtain the pixel coordinate set of the object to be positioned according to its coordinate set in the first coordinate system and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece; and finally display the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set. This improves the positioning accuracy of the object to be positioned in dental surgery.
  • FIG. 1 is a schematic flowchart of an augmented reality method for dental surgery provided by an embodiment of the application
  • Figure 2 is a schematic diagram of camera calibration provided by an embodiment of the application.
  • FIG. 3 is a schematic flowchart of an augmented reality method for dental surgery provided by another embodiment of the application.
  • FIG. 4 is a schematic structural diagram of an augmented reality device for dental surgery provided by an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of an augmented reality device for dental surgery provided by another embodiment of the application.
  • FIG. 6 is a schematic diagram of the physical structure of an electronic device provided by an embodiment of the application.
  • Augmented Reality (AR) technology has received increasing attention and has played an important role in many industries, showing great potential.
  • Augmented reality is a technology that computes, in real time, the position and angle of the images captured by a camera and superimposes corresponding virtual images on them. It presents real-world information and virtual-world information at the same time, so that the two complement and overlay each other.
  • the augmented reality method for dental surgery provided by the embodiments of the present application can expand the observation range in dental surgery and improve the positioning accuracy of the part that needs dental surgery.
  • FIG. 1 is a schematic flowchart of an augmented reality method for dental surgery provided by an embodiment of the application.
  • the augmented reality method for dental surgery provided by an embodiment of the present application includes:
  • in the embodiments of the present application, an implant handpiece equipped with an endoscope is used.
  • the endoscope is mounted on the head of the implant handpiece and can extend into the patient's mouth.
  • the camera installed in the endoscope can photograph the patient's oral cavity and obtain a two-dimensional image of it.
  • during dental surgery, the augmented reality device for dental surgery (hereinafter referred to as the augmented reality device) can track the first visual marker and the second visual marker in real time through optical or electromagnetic navigation instruments commonly used in navigated surgery, thereby obtaining the position and posture of the first visual marker and the position and posture of the second visual marker.
  • the first visual marker corresponds to the implant handpiece; a first visual marker can be set on the implant handpiece, the first visual marker includes at least three first visual marker points, and the at least three first visual marker points are not on a straight line; the first visual marker can be set on the motor end of the implant handpiece;
  • the second visual marker corresponds to the patient's jaw; the second visual marker includes at least three second visual marker points, the at least three second visual marker points are not on a straight line, and the second visual marker may be set on a dental tray or implanted in the patient's alveolar process or jawbone.
  • the specific installation positions and installation methods of the first visual marker and the second visual marker are selected based on actual experience and are not limited in the embodiments of the present application.
  • after obtaining the position and posture of the first visual marker and the position and posture of the second visual marker, the augmented reality device may use an optical or electromagnetic navigation instrument to obtain, from these poses, the transformation matrix between the second coordinate system and the first coordinate system.
  • the second coordinate system is a three-dimensional coordinate system established by the optical or electromagnetic navigation instrument based on the second visual marker, and the first coordinate system is a three-dimensional coordinate system established by the optical or electromagnetic navigation instrument based on the first visual marker.
  • the transformation matrix between the second coordinate system and the first coordinate system is configured to convert coordinates in the second coordinate system into coordinates in the first coordinate system.
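  • The extract does not show how this transformation is derived from the tracked poses; the following is a minimal sketch, assuming the navigation instrument reports each marker's pose as a 4x4 homogeneous matrix in the tracker's own frame (function and variable names are illustrative, not from the patent).

```python
import numpy as np

def transform_second_to_first(T_tracker_first, T_tracker_second):
    """Return the 4x4 matrix that converts second-coordinate-system points
    into first-coordinate-system points.

    T_tracker_first:  pose of the first visual marker (handpiece) in the tracker frame.
    T_tracker_second: pose of the second visual marker (jaw) in the tracker frame.
    Both are assumed to be 4x4 homogeneous matrices built from the reported
    position and posture of each marker.
    """
    # A point expressed in the second system is first taken into the tracker
    # frame, then into the first system: T_21 = inv(T_tracker_first) @ T_tracker_second.
    return np.linalg.inv(T_tracker_first) @ T_tracker_second
```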
  • before the dental operation, a three-dimensional model of the patient's mouth can be reconstructed from CT scan data, a three-dimensional model of the object to be positioned can be built in computer-aided design software, and the position, angle, and depth of the model of the object to be positioned within the model of the patient's mouth can be planned in advance; the object to be positioned includes, but is not limited to, implants, temporary crowns, and the like.
  • the virtual coordinate system is the three-dimensional coordinate system to which the three-dimensional model of the object to be positioned belongs.
  • to obtain the position of the object to be positioned in the two-dimensional field of view of the endoscope, its three-dimensional model needs to be mapped into that field of view; the coordinate set of the object to be positioned in the virtual coordinate system is the set of coordinates corresponding to its three-dimensional model in the virtual coordinate system, and it can be chosen according to actual needs.
  • for example, the coordinate set of the object to be positioned in the virtual coordinate system may be the coordinates corresponding to the outer contour of its three-dimensional model, and this coordinate set can be preset.
  • the augmented reality device may obtain the coordinate set of the object to be positioned in the first coordinate system based on its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system.
  • the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance.
  • for example, for each coordinate a in the coordinate set of the object to be positioned in the virtual coordinate system, the augmented reality device may use coordinate a and the transformation matrix between the virtual coordinate system and the second coordinate system to obtain the corresponding coordinate b in the second coordinate system, and then use coordinate b and the transformation matrix between the second coordinate system and the first coordinate system to obtain the corresponding coordinate c in the first coordinate system; in this way, the coordinate set of the object to be positioned in the first coordinate system is obtained.
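  • As an illustration of this two-step mapping, the sketch below applies the two transformation matrices (assumed to be 4x4 homogeneous matrices) to every coordinate in the set; all names are illustrative.

```python
import numpy as np

def map_virtual_to_first(points_virtual, T_virtual_to_second, T_second_to_first):
    """Map an (N, 3) coordinate set from the virtual system to the first system.

    Each point a is converted to its coordinate b in the second system and then
    to its coordinate c in the first system, as described above.
    """
    pts = np.asarray(points_virtual, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])   # (N, 4) homogeneous points
    in_second = (T_virtual_to_second @ homog.T).T          # coordinates b
    in_first = (T_second_to_first @ in_second.T).T         # coordinates c
    return in_first[:, :3]
```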
  • before the dental operation, at least three feature points are selected on the tooth model included in the three-dimensional model of the patient's mouth, and the coordinates of these feature points in the virtual coordinate system are obtained.
  • after the second visual marker has been set up, the probe of the optical navigation instrument is used to acquire, in the second coordinate system, the coordinates of the real points on the patient's teeth corresponding to the at least three feature points.
  • from the coordinates of the feature points in the virtual coordinate system and the coordinates of the corresponding real points in the second coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system can be obtained.
  • the feature points are selected based on actual experience, which is not limited in the embodiments of the present application.
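  • The extract does not say how the transformation matrix is computed from these point pairs; one standard choice is a least-squares rigid registration (Kabsch/Umeyama style), sketched below under that assumption.

```python
import numpy as np

def rigid_transform_from_points(points_virtual, points_second):
    """Estimate the 4x4 rigid transform mapping virtual-system points onto the
    corresponding probe-measured points in the second coordinate system.

    points_virtual, points_second: (N, 3) arrays of matched points, N >= 3,
    not collinear.
    """
    P = np.asarray(points_virtual, dtype=float)
    Q = np.asarray(points_second, dtype=float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                               # virtual -> second coordinate transform
```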
  • after the augmented reality device obtains the coordinate set of the object to be positioned in the first coordinate system, it can obtain the pixel coordinate set of the object to be positioned in the pixel coordinate system based on that coordinate set and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope; each coordinate in the coordinate set of the object to be positioned in the first coordinate system corresponds to one pixel coordinate in the pixel coordinate set.
  • after obtaining the pixel coordinate set, the augmented reality device may display the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set, thereby mapping the three-dimensional model of the object to be positioned into the two-dimensional field of view of the endoscope; the two-dimensional image corresponding to the endoscope is the image captured by the endoscope's camera.
  • the pixel coordinate system is the two-dimensional coordinate system corresponding to the image captured by the endoscope's camera; the projection matrix is obtained in advance.
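  • A minimal sketch of this display step, assuming the pixel coordinate set traces the outer contour of the planned object and that OpenCV is used to draw the overlay on the endoscope frame (names and drawing style are illustrative, not specified by the patent).

```python
import cv2
import numpy as np

def draw_overlay(frame, pixel_coords, color=(0, 255, 0)):
    """Draw the projected contour of the object to be positioned on the
    endoscope's two-dimensional image.

    frame:        BGR image captured by the endoscope camera.
    pixel_coords: (N, 2) array of (u, v) pixel coordinates of the contour.
    """
    pts = np.round(np.asarray(pixel_coords)).astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=True, color=color, thickness=2)
    for u, v in np.round(np.asarray(pixel_coords)).astype(int):
        cv2.circle(frame, (int(u), int(v)), 2, color, -1)   # mark individual points
    return frame
```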
  • in summary, the augmented reality method for dental surgery provided by the embodiments of the application acquires the position and posture of the first and second visual markers; obtains from them the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker; obtains the coordinate set of the object to be positioned in the first coordinate system from its coordinate set in the virtual coordinate system and the two transformation matrices; obtains the pixel coordinate set of the object to be positioned from that coordinate set and the projection matrix from the first coordinate system to the pixel coordinate system of the endoscope; and displays the object to be positioned on the corresponding two-dimensional image. This improves the positioning accuracy of the object to be positioned in dental surgery.
  • obtaining the projection matrix includes:
  • the projection matrix is obtained according to the internal parameter matrix of the endoscope's camera, the transformation matrix between a third coordinate system and the first coordinate system, and the external parameter matrix of the endoscope's camera; the transformation matrix between the third coordinate system and the first coordinate system is obtained in advance.
  • the internal parameter matrix of the endoscope's camera is M_1;
  • the external parameter matrix of the endoscope's camera is M_2;
  • the transformation matrix between the third coordinate system and the first coordinate system is obtained in advance and may be configured to convert coordinates in the third coordinate system into coordinates in the first coordinate system; the projection matrix is obtained by combining M_1, M_2, and this transformation matrix.
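  • The extract names the three ingredients but does not spell out how they are combined; the sketch below shows one plausible composition, purely as an assumption: the extrinsics M_2 are taken to map third-system (calibration-board) coordinates to camera coordinates, so first-system coordinates are first converted back to the third system with the inverse of the third-to-first transform.

```python
import numpy as np

def projection_first_to_pixel(M1, M2, T_third_to_first):
    """Compose a 3x4 projection matrix from the first coordinate system to pixels.

    Assumptions (not stated explicitly in this extract):
      * M1 is the 3x3 intrinsic matrix of the endoscope camera.
      * M2 is the 3x4 extrinsic matrix mapping third-system coordinates to camera coordinates.
      * T_third_to_first is the 4x4 transform converting third-system coordinates
        into first-system coordinates, as defined in the text.
    """
    return M1 @ M2 @ np.linalg.inv(T_third_to_first)   # 3x4 projection matrix
```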
  • the internal parameter matrix and the external parameter matrix of the camera can be obtained by Zhang Zhengyou's calibration method.
  • in the embodiments of the present application, the internal parameter matrix of the camera refers to the matrix formed by the camera's intrinsic parameters, and the external parameter matrix of the camera refers to the matrix formed by the camera's extrinsic parameters relative to the third coordinate system.
  • the coordinates (u, v) in the camera's pixel coordinate system and the coordinates (X_1, Y_1, Z_1) in the first coordinate system satisfy a relationship of the form Z_c·[u, v, 1]^T = M_1·M_2^(1)·[X_1, Y_1, Z_1, 1]^T, where Z_c is the coordinate value corresponding to the pixel coordinate (u, v) in the direction perpendicular to the pixel coordinate system, M_1 is the internal parameter matrix of the camera, and M_2^(1) denotes the matrix formed by the camera's external parameters relative to the first coordinate system.
  • on the basis of the above embodiments, further, the internal parameter matrix and the external parameter matrix of the camera are obtained according to Zhang Zhengyou's calibration method.
  • FIG. 2 is a schematic diagram of camera calibration provided by an embodiment of the application.
  • as shown in FIG. 2, the head of the implant handpiece 1 is provided with an endoscope, and the motor end of the implant handpiece 1 is provided with the first visual marker, which includes three first visual marker points 2.
  • the calibration board 3 is a flat plate on which a black-and-white checkerboard pattern is set.
  • the third visual marker is set on the calibration board 3; the third visual marker includes three third visual marker points 4, and the coordinate system established based on the third visual marker is the third coordinate system.
  • the calibration board 3 faces the endoscope of the implant handpiece 1 so that the endoscope's camera can capture the checkerboard pattern; every square of the checkerboard pattern has the same size, and the coordinates of each feature point of the pattern in the third coordinate system can be obtained by accurate measurement, a feature point being a corner where black and white squares meet.
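  • The patent cites Zhang Zhengyou's method for obtaining the intrinsic and extrinsic matrices; one widely used implementation of that method is OpenCV's calibrateCamera, sketched here under illustrative assumptions (board geometry, square size, and file paths are not from the patent).

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)   # assumed number of inner corners of the checkerboard
square = 2.0       # assumed square size in millimetres

# Board corner coordinates in the board (third) coordinate system, Z = 0 plane.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):   # hypothetical endoscope images of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

assert obj_points, "no calibration images with a detectable checkerboard"

# Zhang-style calibration: M1 is the intrinsic matrix; rvecs/tvecs give, per view,
# the board-to-camera rotation and translation (the extrinsics M2).
rms, M1, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
R, _ = cv2.Rodrigues(rvecs[0])
M2 = np.hstack([R, tvecs[0]])   # 3x4 extrinsic matrix for the first view
```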
  • the coordinates (u, v) in the camera's pixel coordinate system and the coordinates (X_3, Y_3, Z_3) in the third coordinate system satisfy the relationship Z_c·[u, v, 1]^T = M_1·M_2·[X_3, Y_3, Z_3, 1]^T (formula (2)), where Z_c is the coordinate value corresponding to the pixel coordinate (u, v) in the direction perpendicular to the pixel coordinate system, M_1 is the internal parameter matrix of the camera, and M_2 is the external parameter matrix of the camera.
  • the product M_3 = M_1·M_2 is called the projection matrix from the third coordinate system to the pixel coordinate system; it is a 3×4 matrix whose elements are m_ij, where i and j are positive integers with i ≤ 3 and j ≤ 4.
  • substituting M_3 into formula (2) gives formula (3): Z_c·[u, v, 1]^T = M_3·[X_3, Y_3, Z_3, 1]^T.
  • the coordinates of six or more feature points in the third coordinate system are obtained, and the corresponding coordinates of these feature points in the pixel coordinate system can be obtained accurately by image-processing techniques such as the Harris corner detection algorithm or the Shi-Tomasi corner detection algorithm.
  • each correspondence substituted into formula (3) yields three linear equations; after eliminating Z_c, two linear equations in the elements m_ij remain.
  • using the coordinates of the six or more feature points in the third coordinate system, their corresponding pixel coordinates, and the least-squares method, these linear equations can be solved to compute the value of every element of M_3; decomposing M_3 then yields M_1 and M_2, that is, the internal and external parameter matrices of the camera.
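  • A minimal numpy sketch of this direct-linear-transform step, assuming matched board points and pixel coordinates are already available; the decomposition of M_3 into intrinsics and extrinsics (for example by RQ factorization) is omitted. Names are illustrative.

```python
import numpy as np

def solve_projection_matrix(pts3d, pts2d):
    """Estimate the 3x4 projection matrix M3 from >= 6 point correspondences.

    pts3d: (N, 3) coordinates in the third (calibration-board) coordinate system.
    pts2d: (N, 2) matching pixel coordinates (u, v).
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # Two equations per point after eliminating Z_c from formula (3).
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    M3 = Vt[-1].reshape(3, 4)
    return M3 / np.linalg.norm(M3[2, :3])   # fix the scale of the solution
```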
  • obtaining the transformation matrix between the third coordinate system and the first coordinate system includes:
  • acquiring the position and posture of the third visual marker and the position and posture of the first visual marker, where the third visual marker corresponds to the calibration board, and the calibration board is configured to calibrate the camera;
  • obtaining the transformation matrix between the third coordinate system and the first coordinate system according to the position and posture of the third visual marker and the position and posture of the first visual marker, where the coordinate system established based on the third visual marker is the third coordinate system.
  • specifically, as shown in FIG. 2, when the calibration board 3 is used to calibrate the endoscope's camera, the position and posture of the third visual marker and the position and posture of the first visual marker can be obtained through the optical or electromagnetic navigation instrument.
  • the third visual marker corresponds to the calibration board and can be set on the calibration board.
  • based on the position and posture of the third visual marker and the position and posture of the first visual marker, the transformation matrix between the third coordinate system and the first coordinate system can be obtained.
  • the coordinate system established based on the third visual marker is the third coordinate system.
  • FIG. 3 is a schematic flow chart of an augmented reality method for dental surgery provided by another embodiment of the application.
  • as shown in FIG. 3, obtaining the coordinate set of the object to be positioned in the first coordinate system, according to its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system, includes:
  • first, for each coordinate in the coordinate set of the object to be positioned in the virtual coordinate system, the augmented reality device uses that coordinate and the transformation matrix between the virtual coordinate system and the second coordinate system to obtain the corresponding coordinate in the second coordinate system; these corresponding coordinates together constitute the coordinate set of the object to be positioned in the second coordinate system.
  • then, for each coordinate in the coordinate set of the object to be positioned in the second coordinate system, the augmented reality device uses that coordinate and the transformation matrix between the second coordinate system and the first coordinate system to obtain the corresponding coordinate in the first coordinate system; these corresponding coordinates together constitute the coordinate set of the object to be positioned in the first coordinate system.
  • for example, if one coordinate in the coordinate set of the object to be positioned in the second coordinate system is (X_2, Y_2, Z_2) and the transformation matrix between the second coordinate system and the first coordinate system is M_21, then the corresponding coordinate in the first coordinate system is (X_1, Y_1, Z_1)^T = M_21·(X_2, Y_2, Z_2)^T.
  • obtaining the pixel coordinate set of the object to be positioned in the pixel coordinate system, according to its coordinate set in the first coordinate system and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece, includes:
  • computing each pixel coordinate (u, v) from the formula Z_c·[u, v, 1]^T = M·[X_1, Y_1, Z_1, 1]^T, where Z_c is the coordinate value corresponding to (u, v) in the direction perpendicular to the pixel coordinate system, M is the projection matrix, and (X_1, Y_1, Z_1) is a coordinate in the coordinate set of the object to be positioned in the first coordinate system.
  • M is a 3×4 matrix, so the formula yields three linear equations; since (X_1, Y_1, Z_1) and every element of the projection matrix M are known, Z_c can be eliminated and u and v solved for, giving the pixel coordinate (u, v).
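  • Eliminating Z_c as described above amounts to dividing the first two rows of the product by the third; a small sketch with illustrative names:

```python
import numpy as np

def project_points(M, points_first):
    """Project (N, 3) first-coordinate-system points to (N, 2) pixel coordinates
    using the 3x4 projection matrix M."""
    pts = np.asarray(points_first, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])   # (N, 4) homogeneous points
    x = (M @ homog.T).T                                     # rows of Z_c * [u, v, 1]
    return x[:, :2] / x[:, 2:3]                             # (u, v) after eliminating Z_c
```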
  • FIG. 4 is a schematic structural diagram of an augmented reality device for dental surgery provided by an embodiment of the application.
  • as shown in FIG. 4, the augmented reality device for dental surgery provided by an embodiment of the present application includes an acquiring unit 401, a first obtaining unit 402, a second obtaining unit 403, and a display unit 404.
  • the acquiring unit 401 is configured to acquire the position and posture of the first visual marker and the position and posture of the second visual marker, where the first visual marker corresponds to the implant handpiece and the second visual marker corresponds to the patient's jaw.
  • the first obtaining unit 402 is configured to obtain, according to the position and posture of the first visual marker and the position and posture of the second visual marker, the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker.
  • the second obtaining unit 403 is configured to obtain the coordinate set of the object to be positioned in the first coordinate system according to its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system, where the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance.
  • the display unit 404 is configured to obtain the pixel coordinate set of the object to be positioned in the pixel coordinate system according to its coordinate set in the first coordinate system and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope, and to display the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set, where the projection matrix is obtained in advance.
  • in the embodiments of the present application, an implant handpiece equipped with an endoscope is used.
  • the endoscope is mounted on the head of the implant handpiece and can extend into the patient's mouth.
  • the camera installed in the endoscope can photograph the patient's oral cavity and obtain a two-dimensional image of it.
  • during dental surgery, the acquiring unit 401 can track the first visual marker and the second visual marker in real time through optical or electromagnetic navigation instruments commonly used in navigated surgery, so as to obtain the position and posture of the first visual marker and the position and posture of the second visual marker.
  • the first visual marker corresponds to the implant handpiece; a first visual marker can be set on the implant handpiece, the first visual marker includes at least three first visual marker points, and the at least three first visual marker points are not on a straight line; the first visual marker can be set on the motor end of the implant handpiece;
  • the second visual marker corresponds to the patient's jaw; the second visual marker includes at least three second visual marker points, the at least three second visual marker points are not on a straight line, and the second visual marker may be set on a dental tray or implanted in the patient's alveolar process or jawbone.
  • the specific installation positions and installation methods of the first visual marker and the second visual marker are selected based on actual experience and are not limited in the embodiments of the present application.
  • after the position and posture of the first visual marker and the position and posture of the second visual marker are obtained, the first obtaining unit 402 may use an optical or electromagnetic navigation instrument to obtain, from these poses, the transformation matrix between the second coordinate system and the first coordinate system.
  • the second coordinate system is a three-dimensional coordinate system established by the optical or electromagnetic navigation instrument based on the second visual marker, and the first coordinate system is a three-dimensional coordinate system established by the optical or electromagnetic navigation instrument based on the first visual marker.
  • the transformation matrix between the second coordinate system and the first coordinate system is configured to convert coordinates in the second coordinate system into coordinates in the first coordinate system.
  • before the dental operation, the three-dimensional model of the patient's mouth can be reconstructed from CT scan data, the three-dimensional model of the object to be positioned can be built in computer-aided design software, and the position, angle, and depth of that model within the model of the patient's mouth can be planned in advance; the object to be positioned includes, but is not limited to, implants, temporary crowns, and the like.
  • the virtual coordinate system is the three-dimensional coordinate system to which the three-dimensional model of the object to be positioned belongs.
  • to obtain the position of the object to be positioned in the two-dimensional field of view of the endoscope, its three-dimensional model needs to be mapped into that field of view; the coordinate set of the object to be positioned in the virtual coordinate system is the set of coordinates corresponding to its three-dimensional model in the virtual coordinate system, and it can be chosen according to actual needs.
  • for example, this coordinate set may be the coordinates corresponding to the outer contour of the three-dimensional model of the object to be positioned, and it can be preset.
  • the second obtaining unit 403 may obtain the coordinate set of the object to be positioned in the first coordinate system based on its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system.
  • the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance.
  • after the coordinate set of the object to be positioned in the first coordinate system is obtained, the display unit 404 may use that coordinate set and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope to obtain the pixel coordinate set of the object to be positioned in the pixel coordinate system; each coordinate in the coordinate set in the first coordinate system corresponds to one pixel coordinate in the pixel coordinate set.
  • after obtaining the pixel coordinate set, the display unit 404 may display the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set, thereby mapping the three-dimensional model of the object to be positioned into the two-dimensional field of view of the endoscope; the two-dimensional image corresponding to the endoscope is the image captured by the endoscope's camera.
  • the pixel coordinate system is the two-dimensional coordinate system corresponding to the image captured by the endoscope's camera; the projection matrix is obtained in advance.
  • in summary, the augmented reality device for dental surgery provided by the embodiments of the application acquires the position and posture of the first and second visual markers; obtains from them the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker; obtains the coordinate set of the object to be positioned in the first coordinate system from its coordinate set in the virtual coordinate system and the two transformation matrices; obtains the pixel coordinate set of the object to be positioned from that coordinate set and the projection matrix from the first coordinate system to the pixel coordinate system of the endoscope; and displays the object to be positioned on the corresponding two-dimensional image. This improves the positioning accuracy of the object to be positioned in dental surgery.
  • FIG. 5 is a schematic structural diagram of an augmented reality device for dental surgery provided by another embodiment of the application. As shown in FIG. 5, on the basis of the foregoing embodiments, the augmented reality device for dental surgery provided by this embodiment further includes a third obtaining unit 405.
  • the third obtaining unit 405 is configured to obtain the projection matrix based on the internal parameter matrix of the endoscope's camera, the transformation matrix between the third coordinate system and the first coordinate system, and the external parameter matrix of the endoscope's camera, where the transformation matrix between the third coordinate system and the first coordinate system is obtained in advance.
  • specifically, the third obtaining unit 405 obtains the internal parameter matrix M_1 of the endoscope's camera, the external parameter matrix M_2 of the endoscope's camera, and the transformation matrix between the third coordinate system and the first coordinate system, and from these the projection matrix can be obtained.
  • the transformation matrix between the third coordinate system and the first coordinate system is obtained in advance and may be configured to convert coordinates in the third coordinate system into coordinates in the first coordinate system.
  • the internal parameter matrix and the external parameter matrix of the camera can be obtained by Zhang Zhengyou's calibration method.
  • in the embodiments of the present application, the internal parameter matrix of the camera refers to the matrix formed by the camera's intrinsic parameters, and the external parameter matrix of the camera refers to the matrix formed by the camera's extrinsic parameters relative to the first coordinate system.
  • FIG. 6 is a schematic diagram of the physical structure of an electronic device provided by an embodiment of the application.
  • as shown in FIG. 6, the electronic device may include a processor 610, a communication interface 620, a memory 630, and a communication bus 640, where the processor 610, the communication interface 620, and the memory 630 communicate with each other through the communication bus 640.
  • the processor 610 may call the logic instructions in the memory 630 to execute the following method: acquiring the position and posture of the first visual marker and the position and posture of the second visual marker, where the first visual marker corresponds to the implant handpiece and the second visual marker corresponds to the patient's jaw; obtaining, according to these positions and postures, the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker; obtaining the coordinate set of the object to be positioned in the first coordinate system according to its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system, where the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance; obtaining the pixel coordinate set of the object to be positioned in the pixel coordinate system according to its coordinate set in the first coordinate system and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece, and displaying the object to be positioned on the two-dimensional image corresponding to the endoscope according to the pixel coordinate set; the projection matrix is obtained in advance.
  • in addition, the above-mentioned logic instructions in the memory 630 can be implemented in the form of software functional units and, when sold or used as independent products, can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • an embodiment discloses a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions.
  • when the program instructions are executed by a computer, the computer can execute the methods provided by the foregoing method embodiments, for example: acquiring the position and posture of the first visual marker and the position and posture of the second visual marker, where the first visual marker corresponds to the implant handpiece and the second visual marker corresponds to the patient's jaw; obtaining, according to these positions and postures, the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker; obtaining the coordinate set of the object to be positioned in the first coordinate system according to its coordinate set in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system, where the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance; and obtaining the pixel coordinate set of the object to be positioned and displaying it on the two-dimensional image corresponding to the endoscope, as described above.
  • this embodiment provides a non-transitory computer-readable storage medium that stores a computer program; the computer program causes the computer to execute the methods provided by the foregoing method embodiments, for example: acquiring the position and posture of the first visual marker and the position and posture of the second visual marker, where the first visual marker corresponds to the implant handpiece and the second visual marker corresponds to the patient's jaw; obtaining the transformation matrix between the second coordinate system established based on the second visual marker and the first coordinate system established based on the first visual marker; obtaining the coordinate set of the object to be positioned in the first coordinate system according to its coordinate set in the virtual coordinate system and the two transformation matrices, where the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance; and obtaining the pixel coordinate set of the object to be positioned according to the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece, and displaying the object to be positioned on the two-dimensional image corresponding to the endoscope; the projection matrix is obtained in advance.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
  • each implementation manner can be implemented by software plus a necessary general hardware platform, and of course, it can also be implemented by hardware.
  • based on this understanding, the above technical solutions, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in some parts of the embodiments.

Landscapes

  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

An augmented reality method for dental surgery and a corresponding device. The method includes the following steps: acquiring the positions and postures of a first visual marker and a second visual marker (S101); obtaining a transformation matrix between a second coordinate system and a first coordinate system according to the positions and postures of the first visual marker and the second visual marker (S102); obtaining a coordinate set of an object to be positioned in the first coordinate system according to the coordinate set of the object to be positioned in a virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system (S103); and obtaining a pixel coordinate set of the object to be positioned according to its coordinate set in the first coordinate system and a projection matrix from the first coordinate system to a pixel coordinate system, and displaying the object to be positioned on a two-dimensional image according to the pixel coordinate set (S104). The augmented reality method and device for dental surgery improve the positioning accuracy of the object to be positioned.

Description

一种用于牙科手术的增强现实方法及装置
交叉引用
本申请引用于2019年01月22日提交的专利名称为“一种用于牙科手术的增强现实方法及装置”的第2019100604606号中国专利申请,其通过引用被全部并入本申请。
技术领域
本申请实施例涉及医疗机器人技术领域,具体涉及一种用于牙科手术的增强现实方法及装置。
背景技术
近年来,随着口腔种植技术在临床中的推广,以及人们对口腔健康要求的提高,越来越多的患者选择对缺失牙进行种植修复治疗。
口腔种牙手术需要在狭小空间内进行精密操作,由于口腔的操作空间狭小,对于种牙的位置很难直接观察进行操作,对于种牙位置的定位依赖于医生的经验,导致对种植体的定位不够准确,容易造成种植手术的失败。
因此,如何提出一种用于牙科手术的增强现实方法,能够提高种植体的定位精度,成为业界亟待解决的重要课题。
发明内容
针对现有技术中的缺陷,本申请实施例提供一种用于牙科手术的增强现实方法及装置。
一方面,本申请实施例提出一种用于牙科手术的增强现实方法,包括:
获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;
根据所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视觉标记建立的第一坐标系之间的变换矩阵;
根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述 第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;
根据所述待定位对象在所述第一坐标系下的坐标集合和所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;所述投影矩阵是预先获得的。
另一方面,本申请实施例提供一种用于牙科手术的增强现实装置,包括:
获取单元,被配置为获取第一视觉标记的位置和姿态和第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;
第一获得单元,被配置为根据所述第一视觉标记的位置和姿态和所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视觉标记建立的第一坐标系之间的变换矩阵;
第二获得单元,被配置为根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;
显示单元,被配置为根据所述待定位对象在所述第一坐标系下的坐标集合和所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;所述投影矩阵是预先获得的。
再一方面,本申请实施例提供一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现上述任一实施例所述的用于牙科手术的增强现实方法的步骤。
又一方面,本申请实施例提供一种非暂态计算机可读存储介质,其上 存储有计算机程序,该计算机程序被处理器执行时实现上述任一实施例所述的用于牙科手术的增强现实方法的步骤。
本申请实施例提供的用于牙科手术的增强现实方法及装置,由于能够获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态,然后根据第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态,获得基于第二视觉标记建立的第二坐标系与基于第一视觉标记建立的第一坐标系之间的变换矩阵,根据待定位对象在虚拟坐标系下的坐标集合、虚拟坐标系与第二坐标系之间的变换矩阵以及第二坐标系与第一坐标系之间的变换矩阵,获得待定位对象在第一坐标系下的坐标集合,再根据待定位对象在第一坐标系下的坐标集合、第一坐标系到种植手机的内窥镜对应的像素坐标系的投影矩阵,获得待定位对象在像素坐标系下的像素坐标集合,并根据像素坐标集合在内窥镜对应的二维图像上显示待定位对象,提高了牙科手术中对待定位对象的定位精度。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请一实施例提供的用于牙科手术的增强现实方法的流程示意图;
图2为本申请一实施例提供的摄像机标定的示意图;
图3为本申请另一实施例提供的用于牙科手术的增强现实方法的流程示意图;
图4为本申请一实施例提供的用于牙科手术的增强现实装置的结构示意图;
图5为本申请另一实施例提供的用于牙科手术的增强现实装置的结构示意图;
图6为本申请一实施例提供的电子设备的实体结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
随着计算机软硬件技术的不断发展,增强现实技术(Augmented Reality,简称AR)受到日益广泛的关注,并且已经在许多行业发挥了重要作用,显示出巨大的潜力。增强现实技术是一种可以实时地计算摄像机拍摄的影像的位置及角度并加上相应图像的技术,不仅展现了真实世界的信息,而且将虚拟世界的信息同时显示出来,能够将真实世界的信息和虚拟世界的信息相互补充和叠加。本申请实施例提供的用于牙科手术的增强现实方法,能够扩展牙科手术中的观察范围,提高了对需要进行牙科手术的部位的定位准确性。
图1为本申请一实施例提供的用于牙科手术的增强现实方法的流程示意图,如图1所示,本申请实施例提供的用于牙科手术的增强现实方法,包括:
S101、获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;
具体地,在本申请实施例中采用带内窥镜的种植手机,所述内窥镜设置在种植手机的头部,可以伸入患者口腔内部,在所述内窥镜的光源照射下,所述内窥镜安装的摄像机可以拍摄患者的口腔,获得患者口腔的二维图像。在进行牙科手术时,用于牙科手术的增强现实装置(以下简称增强现实装置)可以通过导航手术中常用的光学或电磁导航仪器实时追踪第一视觉标记和第二视觉标记,从而获得所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态。其中,所述第一视觉标记与种植手机对应,可以在所述种植手机上设置第一视觉标记,所述第一视觉标记包括至少三个第一视觉标记点,所述至少三个第一视觉标记点不在一条直线上,所述第一视觉标记可以设置在所述种植手机的马达端;所述第二视觉标记与口腔颌骨对应,所述第二视觉标记包括至少三个第二视觉标记点,所述 至少三个第二视觉标记点不在一条直线上,可以将所述第二视觉标记设置在牙托板上或者植入所述患者的牙槽突或颌骨。所述第一视觉标记和所述第二视觉标记的具体安装位置和安装方式,根据实际经验进行选择,本申请实施例不做限定。
S102、根据所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视觉标记建立的第一坐标系之间的变换矩阵;
具体地,所述增强现实装置在获得所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态之后,可以通过光学或电磁导航仪器基于所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得第二坐标系与第一坐标系之间的变换矩阵,所述第二坐标系是通过光学或电磁导航仪器基于所述第二视觉标记建立的三维坐标系,所述第一坐标系是通过光学或电磁导航仪器基于所述第一视觉标记建立的三维坐标系。所述第二坐标系与所述第一坐标系之间的变换矩阵被配置为将所述第二坐标系下的坐标转换成所述第一坐标系下的坐标。
S103、根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;
具体地,在牙科手术之前可以基于CT扫描数据重建患者口腔的三维模型,并在辅助设计软件中建立待定位对象的三维模型,然后预先规划所述待定位对象的三维模型在患者口腔的三维模型中的位置、角度和深度,所述待定位对象包括但不限于种植体、临时牙冠等。虚拟坐标系为所述待定位对象的三维模型所属的三维坐标系,为了能够在所述内窥镜的二维视野中获得所述待定位对象的位置,需要将所述待定位对象的三维模型映射到所述内窥镜的二维视野中,待定位对象在虚拟坐标系下的坐标集合即所述待定位对象的三维模型在所述虚拟坐标系下对应的坐标,根据实际需要可以设置所述待定位对象在虚拟坐标系下的坐标集合为所述待定位对象的三维模型的外轮廓对应的坐标,所述待定位对象在所述虚拟坐标系下的坐标集合可以预先设定。所述增强现实装置可以根据所述待定位对象在虚 拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合。其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的。
例如,对于所述待定位对象在虚拟坐标系下的坐标集合中的每一个坐标a,所述增强现实装置可以根据坐标a和所述虚拟坐标系与所述第二坐标系之间的变换矩阵,获得坐标a在所述第二坐标系下的坐标b,再根据坐标b和所述第二坐标系与所述第一坐标系之间的变换矩阵,获得坐标b在所述第一坐标系下的坐标c,从而可以得到所述待定位对象在所述第一坐标系下的坐标集合。
在进行牙科手术前,在所述患者口腔的三维模型中包括的牙齿模型上选择至少三个特征点,并获取所述至少三个特征点在所述虚拟坐标系下的坐标。在设置完成所述第二视觉标记之后,在现实中利用光学导航仪器的探针获取与所述至少三个特征点在所述患者牙齿上各自对应的真实点在所述第二坐标系下的坐标,根据所述至少三个特征点在虚拟坐标系下的坐标和所述至少三个真实点的在第二坐标系下的坐标,可以获得所述虚拟坐标系与所述第二坐标系之间的变换矩阵。其中,所述特征点根据实际经验进行选取,本申请实施例不做限定。
S104、根据所述待定位对象在所述第一坐标系下的坐标集合和所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;其中,所述内窥镜设置在所述种植手机上;所述投影矩阵是预先获得的。
具体地,所述增强现实装置获得所述待定位对象在所述第一坐标系下的坐标集合之后,可以根据所述待定位对象在所述第一坐标系下的坐标集合、所述第一坐标系到内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,所述待定位对象在所述第一坐标系下的坐标集合中的每一个坐标对应一个所述像素坐标集合中的一个像素坐标。所述增强现实装置在获得所述像素坐标集合之后,可以根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对 象,从而将所述待定位对象的三维模型映射到所述内窥镜的二维视野中,所述内窥镜对应的二维图像即通过所述内窥镜的摄像机拍摄获得二维图像。其中,所述像素坐标系是所述内窥镜的摄像机拍摄获得二维图像对应的二维坐标系;所述投影矩阵是预先获得的。
本申请实施例提供的用于牙科手术的增强现实方法,由于能够获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态,然后根据第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态,获得基于第二视觉标记建立的第二坐标系与基于第一视觉标记建立的第一坐标系之间的变换矩阵,根据待定位对象在虚拟坐标系下的坐标集合、虚拟坐标系与第二坐标系之间的变换矩阵以及第二坐标系与第一坐标系之间的变换矩阵,获得待定位对象在第一坐标系下的坐标集合,再根据待定位对象在第一坐标系下的坐标集合、第一坐标系到种植手机的内窥镜对应的像素坐标系的投影矩阵,获得待定位对象在像素坐标系下的像素坐标集合,并根据像素坐标集合在内窥镜对应的二维图像上显示待定位对象,提高了牙科手术中对待定位对象的定位精度。
在上述各实施例的基础上,进一步地,获得所述投影矩阵包括:
根据所述内窥镜的摄像机的内参矩阵、第三坐标系与所述第一坐标之间的变换矩阵以及所述内窥镜的摄像机的外参矩阵,获得所述投影矩阵;其中,所述第三坐标系与所述第一坐标系之间的变换矩阵是预先获得的。
具体地,所述内窥镜的摄像机的内参矩阵为M 1,所述内窥镜的摄像机的外参矩阵为M 2,第三坐标系与所述第一坐标之间的变换矩阵为
Figure PCTCN2019084455-appb-000001
那么所述投影矩阵
Figure PCTCN2019084455-appb-000002
其中,所述第三坐标系与所述第一坐标系之间的变换矩阵是预先获得的,可以被配置为将所述第三坐标系下的坐标转换成所述第一坐标系下的坐标;所述摄像机的内参矩阵和外参矩阵可以通过张正友标定法获得。在本申请实施例中,所述摄像机的内参矩阵是指所述摄像机的内部参数所构成的矩阵,所述摄像机的外参矩阵是指所述摄像机相对于所述第三坐标系的外部参数构成的矩阵。
所述摄像机的像素坐标系下的坐标(u,v)与所述第一坐标系下的坐标(X 1,Y 1,Z 1)存在如下关系:
Figure PCTCN2019084455-appb-000003
其中,Z c为所述像素坐标(u,v)在垂直于像素坐标系方向上对应的坐标值,M 1为所述摄像机的内参矩阵,
Figure PCTCN2019084455-appb-000004
为所述摄像机相对于所述第一坐标系的外部参数构成的矩阵。基于坐标变换可以获得
Figure PCTCN2019084455-appb-000005
所以
Figure PCTCN2019084455-appb-000006
在上述各实施例的基础上,进一步地,根据张正友标定法获得所述摄像机的内参矩阵和外参矩阵。
具体地,所述摄像机的内参矩阵和外参矩阵可以根据张正友标定法获得。
例如,图2为本申请一实施例提供的摄像机标定的示意图,如图2所示,种植手机1的头部设置内窥镜,种植手机1的马达端上设置所述第一视觉标记,所述第一视觉标记包括三个第一视觉标记点2,标定板3为平板,标定板3上设置黑白相间的棋盘格图案,在标定板3上设置所述第三视觉标记,所述第三视觉标记包括三个第三视觉标记点4,基于所述第三视觉标记建立的坐标系为第三坐标系。标定板3与种植手机1的内窥镜相对,所述内窥镜的摄像机能够拍摄到所述棋盘格图案,所述棋盘格图案上的每个方格的尺寸都相等,并且所述棋盘格图案上的每个特征点在所述第三坐标系下的坐标可以通过精确测量获得,所述特征点是指所述棋盘格图案上黑白相间的角点。所述摄像机的像素坐标系下的坐标(u,v)与所述第三坐标系下的坐标(X 3,Y 3,Z 3)存在如下关系:
Z_c·[u, v, 1]^T = M_1·M_2·[X_3, Y_3, Z_3, 1]^T    （式(2)）
其中,Z c为所述像素坐标(u,v)在垂直于像素坐标系方向上对应的坐标值,M 1为所述摄像机的内参矩阵,M 2为所述摄像机的外参矩阵。
M_3 = M_1·M_2 = [m_ij]（3×4矩阵）
M 3称为所述第三坐标系到所述像素坐标系的投影矩阵,m ij为投影矩阵M中的元素,i和j为正整数,i小于或者等于3,j小于或者等于4。将M 3带入到式(2)中,可以获得如下表达式:
Z_c·u = m_11·X_3 + m_12·Y_3 + m_13·Z_3 + m_14，Z_c·v = m_21·X_3 + m_22·Y_3 + m_23·Z_3 + m_24，Z_c = m_31·X_3 + m_32·Y_3 + m_33·Z_3 + m_34    （式(3)）
获取6个以上所述特征点在所述第三坐标系下的坐标,6个以上所述特征点在所述像素坐标系下对应的坐标可以通过图像处理领域中的技术方法准确获得,如Harris角点检测算法或Shi-Tomasi角点检测算法等。通过式(3)可以获得三个线性方程,将Z c消除之后,可以获得两个关于m ij的线性方程,基于6个以上所述特征点在所述第二坐标系下的坐标,6个以上所述特征点在所述像素坐标系下对应的坐标以及最小二乘法,求解两个关于m ij的线性方程可以计算出M 3中每个元素的值,再将M 3分解即可获得M 1和M 2,即获得了所述摄像机的内参矩阵和外参矩阵。
在上述各实施例的基础上,进一步地,获得所述第三坐标系与所述第一坐标系之间的变换矩阵包括:
获取第三视觉标记的位置和姿态,以及所述第一视觉标记的位置和姿态;其中,所述第三视觉标记与标定板对应,所述标定板被配置为对所述摄像机进行标定;
根据所述第三视觉标记的位置和姿态,以及所述第一视觉标记的位置和姿态,获得所述第三坐标系与所述第一坐标系之间的变换矩阵;其中,基于所述第三视觉标记建立的坐标系为所述第三坐标系。
具体地,如图2所示,在用标定板3对所述内窥镜的摄像机进行标定的时候,可以通过所述光学或电磁导航仪器获取第三视觉标记的位置和姿态,以及所述第一视觉标记的位置和姿态。其中,所述第三视觉标记与标定板对应,可以将所述第三视觉标记设置在所述标定板上。通过所述光学或电磁导航仪器基于所述第三视觉标记的位置和姿态以及所述第一视觉标记的位置和姿态,可以获得所述第三坐标系与所述第一坐标系之间的变换矩阵。其中,基于所述第三视觉标记建立的坐标系为所述第三坐标系。
图3为本申请另一实施例提供的用于牙科手术的增强现实方法的流程示意图,如图3所示,在上述各实施例的基础上,进一步地,所述根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合包括:
S1031、根据所述待定位对象在所述虚拟坐标系下的坐标集合以及所述虚拟坐标系与所述第二坐标系之间的变换矩阵,获得所述待定位对象在所述第二坐标系下的坐标集合;
具体地,对于所述待定位对象在所述虚拟坐标系下的坐标集合中的每个坐标,所述增强现实装置可以根据所述待定位对象在所述虚拟坐标系下的坐标集合中的每个坐标以及所述虚拟坐标系与所述第二坐标系之间的变换矩阵,获得所述待定位对象在所述虚拟坐标系下的坐标集合中的每个坐标在所述第二坐标系下的对应的坐标,所述待定位对象在所述虚拟坐标系下的坐标集合中的各个坐标在所述第二坐标系下的对应的坐标构成了所述待定位对象在所述第二坐标系下的坐标集合。
例如,所述待定位对象在所述虚拟坐标系下的坐标集合中的一个坐标为(X 0,Y 0,Z 0),所述虚拟坐标系与所述第二坐标系之间的变换矩阵为M 02,那么所述待定位对象在所述虚拟坐标系下的坐标集合中的一个坐标(X 0,Y 0,Z 0)在所述第二坐标系下对应的坐标(X 2,Y 2,Z 2) T=M 02(X 0,Y 0,Z 0) T
S1032、根据所述待定位对象在所述第二坐标系下的坐标集合以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合。
具体地,对于所述待定位对象在所述第二坐标系下的坐标集合中的每个坐标,所述增强现实装置可以根据所述待定位对象在所述第二坐标系下的坐标集合中的每个坐标以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第二坐标系下的坐标集合中的每个坐标在所述第一坐标系下的对应的坐标,所述待定位对象在所述第二坐标系下的坐标集合中的各个坐标在所述第一坐标系下的对应的坐标构成了所述待定位对象在所述第一坐标系下的坐标集合。
例如,所述待定位对象在所述第二坐标系下的坐标集合中的一个坐标为(X 2,Y 2,Z 2),所述第二坐标系与所述第一坐标系之间的变换矩阵为M 21,那么所述待定位对象在所述第二坐标系下的坐标集合中的一个坐标(X 2,Y 2,Z 2)在所述第一坐标系下对应的坐标(X 1,Y 1,Z 1) T=M 21(X 2,Y 2,Z 2) T
在上述各实施例的基础上,进一步地,所述根据所述待定位对象在所述第一坐标系下的坐标集合、所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合包括:
根据公式
Z_c·[u, v, 1]^T = M·[X_1, Y_1, Z_1, 1]^T
计算获得所述待定位对象在所述像素坐标系下的像素坐标集合中的像素坐标(u,v),其中,Z c为所述像素坐标(u,v)在垂直于像素坐标系方向上对应的坐标值,M为所述投影矩阵,(X 1,Y 1,Z 1)为所述待定位对象在所述第一坐标系下的坐标集合中的坐标。
具体地,M为一个3x4的矩阵,由公式
Z_c·[u, v, 1]^T = M·[X_1, Y_1, Z_1, 1]^T
可以得到三个线性方程,由于(X 1,Y 1,Z 1)和投影矩阵M中每个元素的值都可以获得,利用三个线性方程将Z c消去,即可求解出u和v,从而得到像素坐标(u,v)。
图4为本申请一实施例提供的用于牙科手术的增强现实装置的结构示意图,如图4所示,本申请实施例提供的牙科手术的增强现实装置,包括获取单元401、第一获得单元402、第二获得单元403和显示单元404,其中:
获取单元401被配置为获取第一视觉标记的位置和姿态和第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;第一获得单元402被配置为根据所述第一视觉标记的位置和姿态和所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视觉标记建立的第一坐标系之间的变换矩阵;第二获得单元403被配置为根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;显示单元404被配置为根据所述待定位对象在所述第一坐标系下的坐标集合、所述第一坐标系到内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标 集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;其中,所述投影矩阵是预先获得的。
具体地,在本申请实施例中采用带内窥镜的种植手机,所述内窥镜设置在种植手机的头部,可以伸入患者口腔内部,在所述内窥镜的光源照射下,所述内窥镜安装的摄像机可以拍摄患者的口腔,获得患者口腔的二维图像。在进行牙科手术时,获取单元401可以通过导航手术中常用的光学或电磁导航仪器实时追踪第一视觉标记和第二视觉标记,从而获得所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态。其中,所述第一视觉标记与种植手机对应,可以在所述种植手机上设置第一视觉标记,所述第一视觉标记包括至少三个第一视觉标记点,所述至少三个第一视觉标记点不在一条直线上,所述第一视觉标记可以设置在所述种植手机的马达端;所述第二视觉标记与口腔颌骨对应,所述第二视觉标记包括至少三个第二视觉标记点,所述至少三个第二视觉标记点不在一条直线上,可以将所述第二视觉标记设置在牙托板上或者植入所述患者的牙槽突或颌骨。所述第一视觉标记和所述第二视觉标记的具体安装位置和安装方式,根据实际经验进行选择,本申请实施例不做限定。
在获得所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态之后,第一获得单元402可以通过光学或电磁导航仪器基于所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得第二坐标系与第一坐标系之间的变换矩阵,所述第二坐标系是通过光学或电磁导航仪器基于所述第二视觉标记建立的三维坐标系,所述第一坐标系是通过光学或电磁导航仪器基于所述第一视觉标记建立的三维坐标系。所述第二坐标系与所述第一坐标系之间的变换矩阵被配置为将所述第二坐标系下的坐标转换成所述第一坐标系下的坐标。
在牙科手术之前可以基于CT扫描数据重建患者口腔的三维模型,并在辅助设计软件中建立待定位对象的三维模型,然后预先规划所述待定位对象的三维模型在患者口腔的三维模型中的位置、角度和深度,所述待定位对象包括但不限于种植体、临时牙冠等。虚拟坐标系为所述待定位对象的三维模型所属的三维坐标系,为了能够在所述内窥镜的二维视野中获得所述待定位对象的位置,需要将所述待定位对象的三维模型映射到所述内 窥镜的二维视野中,待定位对象在虚拟坐标系下的坐标集合即所述待定位对象的三维模型在所述虚拟坐标系下对应的坐标,根据实际需要可以设置所述待定位对象在虚拟坐标系下的坐标集合为所述待定位对象的三维模型的外轮廓对应的坐标,所述待定位对象在所述虚拟坐标系下的坐标集合可以预先设定。第二获得单元403可以根据所述待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合。其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的。
获得所述待定位对象在所述第一坐标系下的坐标集合之后,显示单元404可以根据所述待定位对象在所述第一坐标系下的坐标集合、所述第一坐标系到内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,所述待定位对象在所述第一坐标系下的坐标集合中的每一个坐标对应一个所述像素坐标集合中的一个像素坐标。显示单元404在获得所述像素坐标集合之后,可以根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象,从而将所述待定位对象的三维模型映射到所述内窥镜的二维视野中,所述内窥镜对应的二维图像即通过所述内窥镜的摄像机拍摄获得二维图像。其中,所述像素坐标系是所述内窥镜的摄像机拍摄获得二维图像对应的二维坐标系;所述投影矩阵是预先获得的。
本申请实施例提供的用于牙科手术的增强现实装置,由于能够获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态,然后根据第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态,获得基于第二视觉标记建立的第二坐标系与基于第一视觉标记建立的第一坐标系之间的变换矩阵,根据待定位对象在虚拟坐标系下的坐标集合、虚拟坐标系与第二坐标系之间的变换矩阵以及第二坐标系与第一坐标系之间的变换矩阵,获得待定位对象在第一坐标系下的坐标集合,再根据待定位对象在第一坐标系下的坐标集合、第一坐标系到种植手机的内窥镜对应的像素坐标系的投影矩阵,获得待定位对象在像素坐标系下的像素坐标集合,并根据像素坐标集合在内窥镜对应的二维图像上显示待定位对象,提高了牙科手术中 对待定位对象的定位精度。
图5为本申请另一实施例提供的用于牙科手术的增强现实装置的结构示意图,如图5所示,在上述各实施例的基础上,进一步地,在上述各实施例的基础上,进一步地,本申请实施例提供的用于牙科手术的增强现实装置还包括第三获得单元405,其中:
第三获得单元405被配置为根据所述内窥镜的摄像机的内参矩阵、所述第二坐标系与所述第一坐标之间的变换矩阵以及所述内窥镜的摄像机的外参矩阵,获得所述投影矩阵;其中,所述第二坐标系与所述第一坐标系之间的变换矩阵是预先获得的。
具体地,第三获得单元405获得所述内窥镜的摄像机的内参矩阵为M 1,所述内窥镜的摄像机的外参矩阵为M 2,第三坐标系与所述第一坐标之间的变换矩阵为
Figure PCTCN2019084455-appb-000012
那么可以得到所述投影矩阵
Figure PCTCN2019084455-appb-000013
其中,所述第三坐标系与所述第一坐标系之间的变换矩阵是预先获得的,可以被配置为将所述第三坐标系下的坐标转换成所述第一坐标系下的坐标;所述摄像机的内参矩阵和外参矩阵可以通过张正友标定法获得。在本申请实施例中,所述摄像机的内参矩阵是指所述摄像机的内部参数所构成的矩阵,所述摄像机的外参矩阵是指所述摄像机相对于所述第一坐标系的外部参数构成的矩阵。
本申请实施例提供的装置的实施例具体可以用于执行上述各方法实施例的处理流程,其功能在此不再赘述,可以参照上述方法实施例的详细描述。
图6为本申请一实施例提供的电子设备的实体结构示意图,如图6所示,该电子设备可以包括:处理器(processor)610、通信接口(Communications Interface)620、存储器(memory)630和通信总线640,其中,处理器610,通信接口620,存储器630通过通信总线640完成相互间的通信。处理器610可以调用存储器630中的逻辑指令,以执行如下方法:获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;根据所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视 觉标记建立的第一坐标系之间的变换矩阵;根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;根据所述待定位对象在所述第一坐标系下的坐标集合和所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;所述投影矩阵是预先获得的。
此外,上述的存储器630中的逻辑指令可以通过软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
本实施例公开一种计算机程序产品,所述计算机程序产品包括存储在非暂态计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,计算机能够执行上述各方法实施例所提供的方法,例如包括:获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;根据所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视觉标记建立的第一坐标系之间的变换矩阵;根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;根据所述待定位对象 在所述第一坐标系下的坐标集合和所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;所述投影矩阵是预先获得的。
本实施例提供一种非暂态计算机可读存储介质,所述非暂态计算机可读存储介质存储计算机程序,所述计算机程序使所述计算机执行上述各方法实施例所提供的方法,例如包括:获取第一视觉标记的位置和姿态以及第二视觉标记的位置和姿态;其中,所述第一视觉标记与种植手机对应,所述第二视觉标记与口腔颌骨对应;根据所述第一视觉标记的位置和姿态以及所述第二视觉标记的位置和姿态,获得基于所述第二视觉标记建立的第二坐标系与基于所述第一视觉标记建立的第一坐标系之间的变换矩阵;根据待定位对象在虚拟坐标系下的坐标集合、所述虚拟坐标系与所述第二坐标系之间的变换矩阵以及所述第二坐标系与所述第一坐标系之间的变换矩阵,获得所述待定位对象在所述第一坐标系下的坐标集合;其中,所述虚拟坐标系与所述第二坐标系之间的变换矩阵是预先获得的;根据所述待定位对象在所述第一坐标系下的坐标集合和所述第一坐标系到所述种植手机的内窥镜对应的像素坐标系的投影矩阵,获得所述待定位对象在所述像素坐标系下的像素坐标集合,并根据所述像素坐标集合在所述内窥镜对应的二维图像上显示所述待定位对象;所述投影矩阵是预先获得的。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
From the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by means of software plus a necessary general hardware platform, and of course also by hardware. Based on such an understanding, the above technical solution in essence, or the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. An augmented reality method for dental surgery, characterized by comprising:
    acquiring the position and posture of a first visual marker and the position and posture of a second visual marker; wherein the first visual marker corresponds to an implant handpiece, and the second visual marker corresponds to the oral jawbone;
    obtaining, according to the position and posture of the first visual marker and the position and posture of the second visual marker, a transformation matrix between a second coordinate system established based on the second visual marker and a first coordinate system established based on the first visual marker;
    obtaining a coordinate set of an object to be positioned in the first coordinate system according to a coordinate set of the object to be positioned in a virtual coordinate system, a transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system; wherein the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance;
    obtaining a pixel coordinate set of the object to be positioned in a pixel coordinate system according to the coordinate set of the object to be positioned in the first coordinate system and a projection matrix from the first coordinate system to the pixel coordinate system corresponding to an endoscope of the implant handpiece, and displaying the object to be positioned on a two-dimensional image corresponding to the endoscope according to the pixel coordinate set; the projection matrix being obtained in advance.
  2. The method according to claim 1, characterized in that obtaining the projection matrix comprises:
    obtaining the projection matrix according to an intrinsic parameter matrix of a camera of the endoscope, a transformation matrix between a third coordinate system and the first coordinate system, and an extrinsic parameter matrix of the camera of the endoscope; wherein the transformation matrix between the third coordinate system and the first coordinate system is obtained in advance.
  3. The method according to claim 2, characterized in that the intrinsic parameter matrix and the extrinsic parameter matrix of the camera are obtained according to Zhang Zhengyou's calibration method.
  4. The method according to claim 2, characterized in that obtaining the transformation matrix between the third coordinate system and the first coordinate system comprises:
    acquiring the position of a third visual marker and the position and posture of the first visual marker; wherein the third visual marker corresponds to a calibration board, and the calibration board is configured to calibrate the camera;
    obtaining the transformation matrix between the third coordinate system and the first coordinate system according to the position of the third visual marker and the position and posture of the first visual marker; wherein the coordinate system established based on the third visual marker is the third coordinate system.
  5. The method according to claim 1, characterized in that obtaining the coordinate set of the object to be positioned in the first coordinate system according to the coordinate set of the object to be positioned in the virtual coordinate system, the transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system comprises:
    obtaining a coordinate set of the object to be positioned in the second coordinate system according to the coordinate set of the object to be positioned in the virtual coordinate system and the transformation matrix between the virtual coordinate system and the second coordinate system;
    obtaining the coordinate set of the object to be positioned in the first coordinate system according to the coordinate set of the object to be positioned in the second coordinate system and the transformation matrix between the second coordinate system and the first coordinate system.
  6. The method according to any one of claims 1 to 5, characterized in that obtaining the pixel coordinate set of the object to be positioned in the pixel coordinate system according to the coordinate set of the object to be positioned in the first coordinate system and the projection matrix from the first coordinate system to the pixel coordinate system corresponding to the endoscope of the implant handpiece comprises:
    calculating, according to the formula
    Z_c · [u, v, 1]^T = M · [X_1, Y_1, Z_1, 1]^T
    the pixel coordinate (u, v) in the pixel coordinate set of the object to be positioned in the pixel coordinate system, wherein Z_c is the coordinate value corresponding to the pixel coordinate (u, v) in the direction perpendicular to the pixel coordinate system, M is the projection matrix, and (X_1, Y_1, Z_1) is a coordinate in the coordinate set of the object to be positioned in the first coordinate system.
  7. An augmented reality device for dental surgery, characterized by comprising:
    an acquiring unit, configured to acquire the position and posture of a first visual marker and the position and posture of a second visual marker; wherein the first visual marker corresponds to an implant handpiece, and the second visual marker corresponds to the oral jawbone;
    a first obtaining unit, configured to obtain, according to the position and posture of the first visual marker and the position and posture of the second visual marker, a transformation matrix between a second coordinate system established based on the second visual marker and a first coordinate system established based on the first visual marker;
    a second obtaining unit, configured to obtain a coordinate set of an object to be positioned in the first coordinate system according to a coordinate set of the object to be positioned in a virtual coordinate system, a transformation matrix between the virtual coordinate system and the second coordinate system, and the transformation matrix between the second coordinate system and the first coordinate system; wherein the transformation matrix between the virtual coordinate system and the second coordinate system is obtained in advance;
    a display unit, configured to obtain a pixel coordinate set of the object to be positioned in a pixel coordinate system according to the coordinate set of the object to be positioned in the first coordinate system and a projection matrix from the first coordinate system to the pixel coordinate system corresponding to an endoscope of the implant handpiece, and to display the object to be positioned on a two-dimensional image corresponding to the endoscope according to the pixel coordinate set; the projection matrix being obtained in advance.
  8. The device according to claim 7, characterized by further comprising a third obtaining unit, wherein:
    the third obtaining unit is configured to obtain the projection matrix according to an intrinsic parameter matrix of a camera of the endoscope, the transformation matrix between the second coordinate system and the first coordinate system, and an extrinsic parameter matrix of the camera of the endoscope; wherein the transformation matrix between the second coordinate system and the first coordinate system is obtained in advance.
  9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that when the processor executes the program, the steps of the augmented reality method for dental surgery according to any one of claims 1 to 6 are implemented.
  10. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that when the computer program is executed by a processor, the steps of the augmented reality method for dental surgery according to any one of claims 1 to 6 are implemented.
PCT/CN2019/084455 2019-01-22 2019-04-26 Augmented reality method and device for dental surgery WO2020151119A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910060460.6 2019-01-22
CN201910060460.6A CN109700550B (zh) 2019-01-22 2019-01-22 Augmented reality method and device for dental surgery

Publications (1)

Publication Number Publication Date
WO2020151119A1 (zh)

Family

ID=66262538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084455 WO2020151119A1 (zh) 2019-01-22 2019-04-26 Augmented reality method and device for dental surgery

Country Status (2)

Country Link
CN (1) CN109700550B (zh)
WO (1) WO2020151119A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112972027A (zh) * 2021-03-15 2021-06-18 四川大学 Orthodontic micro-implant placement positioning method using mixed reality technology

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110664483A (zh) * 2019-07-09 2020-01-10 苏州迪凯尔医疗科技有限公司 Navigation method and apparatus for apical surgery, electronic device and storage medium
CN110459083B (zh) 2019-08-22 2020-08-04 北京众绘虚拟现实技术研究院有限公司 Augmented reality oral surgery skill training simulator with visual-haptic fusion
CN111297501B (zh) * 2020-02-17 2021-07-30 北京牡丹电子集团有限责任公司 Augmented reality navigation method and system for oral implant surgery
CN111445453B (zh) * 2020-03-25 2023-04-25 森兰信息科技(上海)有限公司 Method, system, medium and apparatus for judging the offset of piano-key images captured by a camera
CN111162840B (zh) * 2020-04-02 2020-09-29 北京外号信息技术有限公司 Method and system for setting virtual objects around an optical communication apparatus
CN112037314A (zh) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 Image display method and apparatus, display device and computer-readable storage medium
CN112168392A (zh) * 2020-10-21 2021-01-05 雅客智慧(北京)科技有限公司 Registration method and system for dental navigation surgery
CN112885436B (zh) * 2021-02-25 2021-11-30 刘春煦 Real-time dental surgery assistance system based on augmented reality three-dimensional imaging
CN113440281B (zh) * 2021-07-21 2022-07-26 雅客智慧(北京)科技有限公司 Surgical path planning method and apparatus, and automatic dental implantation system
CN114521962B (zh) * 2022-04-24 2022-12-16 杭州柳叶刀机器人有限公司 Surgical robot trajectory tracking method and apparatus, robot and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106560163A (zh) * 2015-09-30 2017-04-12 合肥美亚光电技术股份有限公司 Surgical navigation system and registration method for the surgical navigation system
CN108292175A (zh) * 2015-11-25 2018-07-17 特里纳米克斯股份有限公司 Detector for optically detecting at least one object
CN108433834A (zh) * 2018-04-09 2018-08-24 上海术凯机器人有限公司 Dental implant drill registration apparatus and method
CN108742876A (zh) * 2018-08-02 2018-11-06 雅客智慧(北京)科技有限公司 Surgical navigation apparatus
CN108784832A (zh) * 2017-04-26 2018-11-13 中国科学院沈阳自动化研究所 Augmented reality navigation method for minimally invasive spine surgery
US20180368930A1 (en) * 2017-06-22 2018-12-27 NavLab, Inc. Systems and methods of providing assistance to a surgeon for minimizing errors during a surgical procedure
US20190000570A1 (en) * 2017-06-29 2019-01-03 NavLab, Inc. Guiding a robotic surgical system to perform a surgical procedure

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5476036B2 (ja) * 2009-04-30 2014-04-23 国立大学法人大阪大学 Surgical navigation system using a retinal-projection head-mounted display device and method of superimposing simulation images
CN107822720A (zh) * 2017-10-26 2018-03-23 上海杰达齿科制作有限公司 Dental drill, implant guide plate and method of using the same
CN208114666U (zh) * 2018-01-16 2018-11-20 浙江工业大学 Augmented-reality-based human-robot collaborative dental implantation system
CN108399638B (zh) * 2018-02-08 2021-07-20 重庆爱奇艺智能科技有限公司 Marker-based augmented reality interaction method and apparatus, and electronic device
CN108742898B (zh) * 2018-06-12 2021-06-01 中国人民解放军总医院 Mixed-reality-based oral implant navigation system
CN109035414A (zh) * 2018-06-20 2018-12-18 深圳大学 Method, apparatus, device and storage medium for generating augmented reality surgical images
CN109077822B (zh) * 2018-06-22 2020-11-03 雅客智慧(北京)科技有限公司 Dental implant handpiece calibration system and method based on visual measurement



Also Published As

Publication number Publication date
CN109700550B (zh) 2020-06-26
CN109700550A (zh) 2019-05-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19911258

Country of ref document: EP

Kind code of ref document: A1