WO2022068341A1 - Readable storage medium, bone modeling registration system and orthopedic surgery system - Google Patents

Readable storage medium, bone modeling registration system and orthopedic surgery system

Info

Publication number
WO2022068341A1
Authority
WO
WIPO (PCT)
Prior art keywords
bone
scanning device
bone model
cartilage
virtual
Prior art date
Application number
PCT/CN2021/108603
Other languages
English (en)
French (fr)
Inventor
曾好
陆天威
邵辉
何超
Original Assignee
苏州微创畅行机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州微创畅行机器人有限公司 filed Critical 苏州微创畅行机器人有限公司
Priority to EP21874003.3A (published as EP4223246A4)
Priority to US18/445,062 (published as US20230410453A1)
Priority to BR112023005691A (published as BR112023005691A2)
Priority to AU2021354680A (published as AU2021354680A1)
Publication of WO2022068341A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G06T 2207/30204 Marker
    • G06T 2207/30244 Camera pose
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/41 Medical
    • G06T 2210/56 Particle system, point based geometry or rendering
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2004 Aligning objects, relative positioning of parts
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/25 User interfaces for surgical systems
    • A61B 34/30 Surgical robots
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A61B 2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2048 Tracking techniques using an accelerometer or inertia sensor
    • A61B 2034/2051 Electromagnetic tracking systems
    • A61B 2034/2055 Optical tracking systems
    • A61B 2034/2057 Details of tracking cameras
    • A61B 2034/2065 Tracking using image or pattern recognition
    • A61B 2034/2068 Surgical navigation using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2090/3937 Visible markers
    • A61B 2090/3983 Reference marker arrangements for use with image guided surgery

Definitions

  • The present invention relates to the field of robot-assisted surgery systems, and in particular to a readable storage medium, a bone modeling registration system and an orthopedic surgery system.
  • The purpose of the present invention is to provide a readable storage medium, a bone modeling registration system and an orthopedic surgery system, so as to solve the problems existing in current bone registration.
  • a readable storage medium is provided, on which a program is stored; when the program is executed, it implements: acquiring bone surface image information of a predetermined object, the bone surface image information coming from a stereo vision scanning device; performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model; registering the first virtual bone model with a preset second virtual bone model of the predetermined object; and obtaining the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device.
  • the bone surface image information includes at least two image data from different viewing angles.
  • the step of performing three-dimensional reconstruction according to the bone surface image information of the at least two different viewing angles to obtain the first virtual bone model includes: segmenting the bone surface image information to obtain bone contour point cloud data; and performing three-dimensional reconstruction based on the bone contour point cloud data to obtain the first virtual bone model.
  • the step of obtaining the coordinates of the predetermined object in the navigation image coordinate system includes: obtaining a first coordinate transformation relationship between the stereo vision scanning device coordinate system and a target coordinate system based on a predetermined connection between the stereo vision scanning device and a target; obtaining a second coordinate transformation relationship between the target coordinate system and the navigation device coordinate system based on the navigation device's localization of the target; obtaining a third coordinate transformation relationship between the stereo vision scanning device coordinate system and the navigation image coordinate system based on the registration of the first virtual bone model with the second virtual bone model; and performing coordinate transformation on the first virtual bone model based on these three relationships to obtain the coordinates of the predetermined object in the navigation image coordinate system.
  • optionally, the preset second virtual bone model is established from an MRI scan of the bone surface of the predetermined object.
  • alternatively, the preset second virtual bone model is established from a CT scan of the bone surface of the predetermined object, and the step of registering the first virtual bone model with the preset second virtual bone model of the predetermined object includes: obtaining cartilage compensation data according to a cartilage compensation algorithm; and correcting the first virtual bone model based on the cartilage compensation data, the corrected first virtual bone model being used for registration with the second virtual bone model.
  • the cartilage compensation algorithm includes: using the Canny operator to obtain the edge of the cartilage removal area of the predetermined object, calculating the gradient change along the edge, fitting the depth change of the non-removed area adjacent to the cartilage removal area according to the gradient change, and iteratively covering the entire cartilage area to obtain the cartilage compensation data.
  • alternatively, the cartilage compensation algorithm includes: using the Canny operator to obtain the edges of multiple cartilage removal areas of the predetermined object, calculating the gradient changes of the multiple edges, and fitting the removal depth by expanding outward from each cartilage removal area according to the gradient changes; the removal range is gradually expanded until the entire cartilage area is covered, yielding the cartilage compensation data. When two expanded areas intersect, the removal depth at the junction is fitted according to the gradient change of the center point of the cartilage removal area closest to the junction.
  • alternatively, the cartilage compensation algorithm includes: computing on the first virtual bone model with a neural network trained on a training set to obtain the cartilage compensation data.
  • the step of registering the first virtual bone model with the preset second virtual bone model includes: coarsely registering the first virtual bone model with the second virtual bone model; removing possible erroneous points using the RANSAC consensus algorithm; and iterating with the ICP algorithm until a preset convergence condition is met, fitting the registration matrix.
  • when the program on the readable storage medium is executed, it also implements: acquiring real-time bone surface image information of a predetermined object, the real-time bone surface image information coming from a stereo vision scanning device with a fixed pose; performing three-dimensional reconstruction according to the real-time bone surface image information to obtain a real-time first virtual bone model; registering the first virtual bone model with the preset second virtual bone model of the predetermined object in real time; and obtaining the real-time coordinates of the predetermined object in the navigation image coordinate system.
  • a bone modeling registration system is also provided, which includes a processor and a stereo vision scanning device; the stereo vision scanning device is used to acquire the bone surface image information of a predetermined object and feed it back to the processor; the processor is communicatively connected with the stereo vision scanning device and is configured to obtain the pose information of the stereo vision scanning device and the bone surface image information of the predetermined object obtained by the stereo vision scanning device, perform three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model, register the first virtual bone model with a preset second virtual bone model, and obtain the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device.
  • the pose of the stereo vision scanning device is fixed, and the pose information of the stereo vision scanning device is obtained through calibration.
  • the bone modeling registration system may further include a navigation device and a target: the target is connected to the stereo vision scanning device, and the navigation device is adapted to the target to acquire the real-time coordinate information of the target and transmit it to the processor; the processor is communicatively connected with the navigation device and obtains the pose information of the stereo vision scanning device based on the real-time coordinate information of the target fed back by the navigation device.
  • an orthopedic surgery system is also provided, which includes the above-mentioned bone modeling registration system and at least one of a robotic arm, an operating trolley and a navigation trolley.
  • in summary, the readable storage medium stores a program which, when executed, implements: acquiring bone surface image information of a predetermined object, the bone surface image information coming from a stereo vision scanning device; performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model; registering the first virtual bone model with the preset second virtual bone model of the predetermined object; and obtaining the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device.
  • with this configuration, the bone surface image information is acquired by the stereo vision scanning device and the first virtual bone model is obtained by reconstruction; based on the registration of the two virtual bone models and the transformations between coordinate systems, the coordinates of the predetermined object in the navigation image coordinate system can be obtained. That is, registration of the preoperative image model with the actual intraoperative bone model is achieved. There is no need to mount a target on the bone, so the patient suffers no secondary injury; contact between instruments and the body is reduced, the probability of contamination is lowered, and instrument cleaning is facilitated.
  • in addition, the bone modeling registration system is simple to set up, which simplifies the installation of registration tools during surgery, streamlines the registration process, reduces the complexity of the operation, and shortens the operation time.
  • moreover, scanning with the stereo vision scanning device is more accurate than the traditional point-picking method, better reflects the bone surface structure, and improves registration accuracy.
  • FIG. 1 is a schematic diagram of an operation scene of the orthopedic surgery system according to the present invention;
  • FIG. 2 is a flowchart of the orthopedic surgery process according to the present invention;
  • FIG. 3 is a schematic diagram of the bone modeling registration system of Embodiment 1 of the present invention;
  • FIG. 4 is a schematic diagram of the cartilage removal area of Embodiment 1 of the present invention;
  • FIG. 5 is a schematic diagram of scanning after cartilage removal in Embodiment 1 of the present invention;
  • FIG. 6 is a schematic diagram of coordinate transformation in Embodiment 1 of the present invention;
  • FIG. 7 is a schematic diagram of the cartilage removal area of Embodiment 2 of the present invention;
  • FIG. 8 is a schematic diagram of scanning after cartilage removal in Embodiment 2 of the present invention;
  • FIG. 9 is a schematic diagram of the cartilage removal area of Embodiment 3 of the present invention;
  • FIG. 10 is a schematic diagram of scanning after cartilage removal in Embodiment 3 of the present invention;
  • FIG. 11 is a schematic diagram of the bone modeling registration system of Embodiment 4 of the present invention.
  • 1 - operating trolley; 2 - robotic arm; 3 - tool target; 4 - osteotomy guide tool; 5 - surgical tool; 6 - tracker; 7 - auxiliary display; 8 - main display; 9 - navigation trolley; 10 - keyboard; 12 - femur; 14 - tibia; 17 - patient; 18 - operator; 100 - stereo vision scanning device; 200 - navigation device; 300 - target; 400 - cartilage removal area;
  • T1 - navigation device coordinate system; T2 - target coordinate system; T3 - stereo vision scanning device coordinate system; T4 - navigation image coordinate system.
  • features defined as "first", "second" or "third" may explicitly or implicitly include one or at least two of those features;
  • the term "proximal" generally refers to the end close to the operator, and the term "distal" generally refers to the end close to the object being operated on; "one end" and "the other end", like "proximal end" and "distal end", usually refer to corresponding parts that include more than just the endpoints;
  • the terms "mounted", "connected" and "coupled" should be understood broadly: a connection may, for example, be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; or internal communication between two elements or an interaction between the two elements;
  • the arrangement of one element on another generally only means that there is a connection, coupling, fit or transmission relationship between the two elements, which may be direct or indirect through intermediate elements, and should not be construed as indicating or implying a spatial positional relationship between the two elements; that is, one element may be in any orientation inside, outside, above, below or to one side of the other element, unless the content clearly indicates otherwise;
  • those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific circumstances.
  • the core idea of the present invention is to provide a readable storage medium, a bone modeling registration system and an orthopedic surgery system, so as to solve the problems existing in current bone registration.
  • FIG. 1 is a schematic diagram of an orthopedic surgery system involved in the present invention
  • FIG. 2 is a flowchart of an orthopedic surgery process of the orthopaedic surgery system involved in the present invention.
  • FIG. 1 shows an application scenario of using the orthopaedic surgery system for knee replacement in an exemplary embodiment.
  • the orthopedic surgery system of the present invention has no particular limitation on its application environment and can also be applied to other orthopedic operations.
  • the orthopedic surgery system is described by taking knee joint replacement as an example, but this should not be taken as a limitation of the present invention.
  • the orthopedic surgery system includes a control device, a bone modeling and registration system, a robotic arm 2 and an osteotomy guide tool 4 .
  • the robotic arm 2 is arranged on the operating trolley 1. In some embodiments the control device is a computer configured with a processor, a main display 8 and a keyboard 10, and more preferably also an auxiliary display 7, although the present invention is not limited to this. The contents displayed on the auxiliary display 7 and the main display 8 may be the same or different.
  • the osteotomy guide tool 4 is installed at the end of the robotic arm 2, so that the osteotomy guide tool 4 is supported by the robotic arm 2, which adjusts its spatial position and posture.
  • the orthopaedic surgical system further includes a navigation device, which may be an electromagnetic positioning navigation device, an optical positioning navigation device, or an inertial navigation device.
  • preferably, the navigation device is an optical positioning and navigation device, which offers higher measurement accuracy than other navigation methods and can effectively improve the positioning accuracy of the osteotomy guide tool 4.
  • an optical positioning and navigation device is used as an example for description, but it is not limited thereto.
  • when the computer program stored in the memory is executed by the processor, the robotic arm 2 is controlled to move according to the current pose and desired pose of the tool target 3; the robotic arm 2 drives the osteotomy guide tool 4 and the tool target 3 so that the tool target 3 reaches its desired pose, the desired pose of the tool target 3 corresponding to the desired pose of the osteotomy guide tool 4.
  • in this way, automatic positioning of the osteotomy guide tool 4 can be realized: the real-time pose (including position and posture) of the osteotomy guide tool 4 is tracked and fed back through the tool target 3 during the operation, and the movement of the robotic arm is controlled to adjust the position and posture of the osteotomy guide tool 4. Not only is the positioning accuracy of the osteotomy guide tool 4 high, but the osteotomy guide tool 4 is supported by the robotic arm 2 without fixing the guide tool to the human body, which avoids secondary damage to the human body.
  • the orthopedic surgery system further includes an operating trolley 1 and a navigation trolley 9 .
  • the control device and a part of the navigation device are installed on the navigation trolley 9; for example, the processor is installed inside the navigation trolley 9, the keyboard 10 is placed outside the navigation trolley 9 for operation, and the main display 8, the auxiliary display 7 and the tracker 6 are all mounted on a bracket fixed vertically on the navigation trolley 9, while the robotic arm 2 is mounted on the operating trolley 1.
  • the use of the operating trolley 1 and the navigation trolley 9 makes the entire surgical operation more convenient.
  • the use process of the orthopaedic surgery system of this embodiment generally includes the following operations:
  • Step SK1: Move the operating trolley 1 and the navigation trolley 9 to a suitable position beside the hospital bed;
  • Step SK2: Install the navigation markers, osteotomy guide tool 4 and other related components (such as sterile bags);
  • Step SK3: Preoperative planning. Specifically, the operator 18 imports the bone CT/MRI scan image data of the patient 17 into the computer for preoperative planning and obtains an osteotomy plan, which includes, for example, the model of the prosthesis and its installation orientation. More specifically, the computer creates a three-dimensional virtual model of the knee joint from the image data of the patient's knee joint obtained by CT/MRI scanning, and then creates an osteotomy plan from this virtual model so that the operator can perform a preoperative evaluation; the osteotomy plan is determined based on the three-dimensional virtual model of the knee joint, combined with the obtained size specifications of the prosthesis and the installation position of the osteotomy plate.
  • the plan is finally output in the form of a surgical report, which records a series of reference data such as the coordinates of the osteotomy plane, the amount and angle of the osteotomy, the prosthesis specification, the installation position of the prosthesis and surgical aids, together with theoretical explanations, such as the reason for selecting the osteotomy angle, to provide a reference for the operator. The three-dimensional virtual model of the knee joint can be displayed on the main display 8, and the operator can input surgical parameters through the keyboard 10 for preoperative planning;
  • Step SK4: Real-time bone registration. After the preoperative evaluation, the positions of the bone feature points are acquired in real time, so that the processor can obtain the actual orientations of the femur 12 and the tibia 14 through a feature-matching algorithm and make them correspond to the image orientations of the femur 12 and the tibia 14.
  • specifically, the actual orientations of the femur 12 and the tibia 14 are registered, through the bone modeling registration system, with the three-dimensional virtual model of the knee joint created in step SK3 from the preoperative CT/MRI scan data, so that the actual orientations are unified with the robotic arm's coordinates under the navigation device and the robotic arm 2 can operate according to the planned bone position in the navigation device;
  • Step SK5: Drive the robotic arm into position and perform the operation. The coordinates of the preoperatively planned osteotomy plane are sent to the robotic arm 2 through the navigation device; the robotic arm 2 locates the osteotomy plane through the tool target 3, moves to the predetermined position and enters the holding state (i.e., does not move). The operator can then use a surgical tool 5, such as an oscillating saw or an electric drill, to perform the osteotomy and/or drilling operation through the osteotomy guide tool 4. After the osteotomy and drilling operations are completed, the operator can install the prosthesis and perform other surgical operations.
  • to this end, the present invention provides a bone modeling registration system, which includes a processor and a stereo vision scanning device 100; the processor here may share the computer described above, or be a separately provided processor;
  • the stereo vision scanning device 100 is used to acquire the bone surface image information of a predetermined object, that is, three-dimensional data of the bone surface, and feed the bone surface image information back to the processor;
  • the processor is communicatively connected with the stereo vision scanning device 100 and is configured to obtain the pose information of the stereo vision scanning device 100 and the bone surface image information of the predetermined object obtained by the scanning device 100; three-dimensional reconstruction is performed according to the bone surface image information to obtain a first virtual bone model, the first virtual bone model is registered with the preset second virtual bone model, and, based on the pose information of the stereo vision scanning device 100, the coordinates of the predetermined object in the navigation image coordinate system are obtained.
  • the orthopedic surgery system of the present invention includes the above-mentioned bone modeling and registration system, and preferably further includes at least one of a robotic arm 2 , an operating trolley 1 and a navigation trolley 9 .
  • the bone location can be registered preoperatively or intraoperatively using the bone modeling registration system.
  • in addition, the bone modeling registration system is simple to set up, which simplifies the installation of registration tools during surgery, streamlines the registration process, reduces the complexity of the operation, and shortens the operation time.
  • moreover, scanning with the stereo vision scanning device is more accurate than the traditional point-picking method, better reflects the bone surface structure, and improves registration accuracy.
  • FIG. 3 is a schematic diagram of the bone modeling registration system of Embodiment 1 of the present invention;
  • FIG. 4 is a schematic diagram of the cartilage removal area of Embodiment 1 of the present invention;
  • FIG. 5 is a schematic diagram of scanning after cartilage removal in Embodiment 1 of the present invention;
  • FIG. 6 is a schematic diagram of coordinate transformation in Embodiment 1 of the present invention.
  • FIG. 3 shows the bone modeling and registration system provided in the first embodiment, which includes a processor (not shown) and a stereo vision scanning device 100.
  • the bone modeling registration system further includes a navigation device 200 and a target 300; the navigation device 200 can use the tracker 6 of the aforementioned orthopedic surgery system, or can be provided independently.
  • the following description takes the tracker 6 as an example of the navigation device 200 .
  • the target 300 is an optical target adapted to the tracker 6 .
  • the target 300 is connected to the stereo vision scanning device 100, and the navigation device 200 is adapted to the target 300 to obtain the real-time coordinate information of the target 300 and transmit it to the processor;
  • the processor is communicatively connected with the navigation device 200 and obtains the pose information of the stereo vision scanning device 100 based on the real-time coordinate information of the target 300 fed back by the navigation device 200.
  • the stereo vision scanning device 100 uses the principle of binocular stereo vision to scan a predetermined object (such as a knee joint) to obtain large-scale, high-precision three-dimensional depth information, that is, three-dimensional data of the knee joint surface.
  • binocular stereo vision is the basis of the 3D imaging effect: based on the parallax principle, an imaging device acquires two images of the measured object from different positions, and the 3D geometric information of the object is obtained by calculating the positional deviation between corresponding points in the images.
  • the stereo vision scanning device 100 can be selected from the mature binocular or trinocular scanners currently on the market.
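  • As an aside on the parallax principle described above, the following is a minimal sketch of recovering depth from a rectified stereo pair; the use of OpenCV's StereoSGBM matcher and the focal-length/baseline values are illustrative assumptions, not details of the scanning device 100.

```python
# Parallax principle: depth Z = f * B / d, with focal length f (pixels),
# camera baseline B (meters) and disparity d (pixels).
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=1200.0, baseline_m=0.12):
    """Dense depth map (meters) from a rectified grayscale stereo pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,  # multiple of 16
                                    blockSize=7)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan       # invalid / occluded pixels
    return focal_px * baseline_m / disparity
```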
  • the processor can perform three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model; alternatively, the three-dimensional reconstruction step can be performed directly by the stereo vision scanning device, which then sends the reconstructed information to the processor.
  • the bone surface image information includes at least two image data from different viewing angles.
  • the processor may acquire a preset second virtual bone model, which may be established based on a preoperative CT scan of the bone surface of the predetermined object, or on an MRI scan of that bone surface.
  • the stereo vision scanning device 100 is fixedly connected to the target 300, and the two have a known relative positional relationship, so the navigation device 200 can learn the pose information of the stereo vision scanning device 100 through the target 300.
  • the navigation device 200 then transmits the pose information of the stereo vision scanning device 100 to the processor; the processor registers the first virtual bone model with the preset second virtual bone model and, based on the pose information of the stereo vision scanning device 100, obtains the coordinates of the predetermined object in the navigation image coordinate system. This achieves registration of the actual bone position with the navigation image.
  • the relative positional relationship between the stereo vision scanning device 100 and the target 300 is fixed and may be obtained, for example, from mechanical design files or through calibration.
  • the first embodiment also provides a readable storage medium, for example a computer-readable storage medium, on which a program such as a computer program is stored; when the program is executed, it implements:
  • Step SA1: acquiring bone surface image information of a predetermined object, the bone surface image information coming from the stereo vision scanning device 100;
  • Step SA2: performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model;
  • Step SA3: registering the first virtual bone model with the preset second virtual bone model;
  • Step SA4: obtaining the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device 100.
  • step SA2 specifically includes:
  • Step SA21: segmenting the bone surface image information to obtain bone contour point cloud data;
  • Step SA22: performing three-dimensional reconstruction based on the bone contour point cloud data to obtain the first virtual bone model, that is, the reconstructed virtual model of the femur.
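  • As a rough illustration of steps SA21/SA22 (a sketch under assumptions, not the patent's implementation): given a depth map from the scanner and a bone mask from an upstream segmentation step, the bone pixels can be back-projected into contour point cloud data using pinhole intrinsics fx, fy, cx, cy; a surface-reconstruction step would then produce the first virtual bone model.

```python
import numpy as np

def depth_to_bone_point_cloud(depth, bone_mask, fx, fy, cx, cy):
    """Back-project segmented bone pixels of a depth map (meters) into an
    N x 3 point cloud expressed in the scanner (camera) frame."""
    v, u = np.nonzero(bone_mask)          # pixel coordinates of bone surface
    z = depth[v, u]
    valid = np.isfinite(z) & (z > 0)
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))
```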
  • step SA3 specifically includes: coarsely registering the first virtual bone model with the second virtual bone model; removing possible erroneous points using the RANSAC consensus algorithm; and iterating with the ICP algorithm until a preset convergence condition is met, fitting the registration matrix.
  • point cloud registration converts point cloud coordinates expressed in two different coordinate systems into a common coordinate system.
  • the algorithm provided in step SA3 is an improved version of the traditional ICP (Iterative Closest Point) algorithm.
  • the traditional ICP algorithm can be summarized as follows: for each point of the source cloud, find the closest point in the target cloud; estimate the rigid transformation (rotation and translation) that best aligns the matched pairs in the least-squares sense; apply the transformation to the source cloud; and repeat until the alignment error converges.
  • in step SA3, the first virtual bone model established from the scanned bone surface image information and the preset second virtual bone model are first coarsely registered to bring the two models as close as possible; RANSAC is then used to remove mismatches, and the ICP algorithm is run iteratively until the convergence conditions are met, fitting the final registration matrix.
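  • As a concrete illustration of the loop just summarized, below is a minimal NumPy/SciPy sketch of point-to-point ICP. A distance-quantile trim stands in for the RANSAC-style removal of erroneous points, and the two clouds are assumed to have been coarsely registered already; this is a sketch, not the patent's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, max_iter=50, tol=1e-6, keep_frac=0.9):
    """Classical point-to-point ICP, run after coarse registration.
    keep_frac keeps the closest fraction of point pairs each iteration,
    a simple stand-in for RANSAC-style outlier removal."""
    T = np.eye(4)                     # accumulated registration matrix
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                  # closest-point pairing
        keep = dist <= np.quantile(dist, keep_frac)  # drop likely outliers
        R, t = best_fit_transform(src[keep], target[idx[keep]])
        src = src @ R.T + t                          # apply the increment
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist[keep].mean()
        if abs(prev_err - err) < tol:                # convergence condition
            break
        prev_err = err
    return T
```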
  • the coordinate systems are unified in step SA4: the coordinate system of the navigation device 200 is the navigation device coordinate system T1; the coordinate system of the target is the target coordinate system T2; the coordinate system of the stereo vision scanning device 100 is the stereo vision scanning device coordinate system T3; and the coordinate system of the second virtual bone model is the navigation image coordinate system T4.
  • Step SA4 includes: obtaining a first coordinate transformation relationship between the stereo vision scanning device coordinate system T3 and the target coordinate system T2 based on the predetermined connection between the stereo vision scanning device 100 and the target 300; obtaining a second coordinate transformation relationship between the target coordinate system T2 and the navigation device coordinate system T1 based on the navigation device's localization of the target 300; obtaining a third coordinate transformation relationship between the stereo vision scanning device coordinate system T3 and the navigation image coordinate system T4 based on the registration of the first virtual bone model with the second virtual bone model in step SA3; and performing coordinate transformation on the first virtual bone model based on the first, second and third coordinate transformation relationships to obtain the coordinates of the predetermined object in the navigation image coordinate system T4.
  • the relative position between the stereo vision scanning device 100 and the target 300 is predetermined, so the first coordinate transformation relationship between the stereo vision scanning device coordinate system T3 and the target coordinate system T2 is known, obtained for example from mechanical design files or through calibration. Since the navigation device 200 and the target 300 are adapted to each other, the navigation device 200 knows the position of the target 300 in real time, so the transformation relationship between the target coordinate system T2 and the navigation device coordinate system T1 is learned from the navigation device 200 tracking the target 300. Based on the registration of the first virtual bone model with the second virtual bone model in step SA3, the transformation relationship between the navigation device coordinate system T1 and the navigation image coordinate system T4 can be obtained.
  • the specific process is as follows: the first virtual bone model established from the bone surface image information obtained by the stereo vision scanning device 100 is converted from the stereo vision scanning device coordinate system T3 to the target coordinate system T2 according to the transformation matrix of the first coordinate transformation relationship; the navigation device 200 tracks the target 300 to obtain the transformation matrix of the second coordinate transformation relationship, which converts the bone surface image information from the target coordinate system T2 to the navigation device coordinate system T1; and the transformation relationship between the navigation device coordinate system T1 and the navigation image coordinate system T4 is then obtained from the registration of the first virtual bone model with the second virtual bone model in step SA3.
  • in this way, the coordinates in the navigation image coordinate system T4 of the first virtual bone model established from the bone surface image information obtained by the stereo vision scanning device 100 can be obtained; that is, registration of the bone position is achieved.
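  • A compact sketch of this chain of transformations, using 4x4 homogeneous matrices named after the coordinate systems T1-T4; the matrices themselves are assumed to come from calibration, target tracking and the model registration as described above.

```python
import numpy as np

def to_navigation_image(points_T3, T2_from_T3, T1_from_T2, T4_from_T1):
    """Map scanner-frame points (N x 3, in T3) into the navigation image
    frame T4 by composing the three coordinate transformation relationships:
    T2_from_T3 - scanner to target (mechanical design file or calibration)
    T1_from_T2 - target to navigation device (real-time tracking)
    T4_from_T1 - navigation device to navigation image (registration)."""
    T4_from_T3 = T4_from_T1 @ T1_from_T2 @ T2_from_T3   # compose the chain
    homogeneous = np.hstack([points_T3, np.ones((len(points_T3), 1))])
    return (homogeneous @ T4_from_T3.T)[:, :3]
```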
  • when the stereo vision scanning device 100 is held by the operator, its position can move relative to the patient, and the resulting registration matrix cannot support navigation in an unstable coordinate system; therefore the navigation device coordinate system T1 is set as the world coordinate system, and the first virtual bone model is converted into the navigation device coordinate system T1.
  • the navigation device 200 remains fixed after the scanning is completed; as long as the patient remains still, the navigation device 200 also remains stationary relative to the patient.
  • with the navigation device 200 fixed, the stereo vision scanning device 100 can be removed after scanning without hindering the subsequent operation.
  • the readable storage medium, bone modeling registration system and orthopedic surgery system provided in this embodiment are not limited to preoperative use: if registration is required during the operation, for example because the position of the patient's leg has changed, the stereo vision scanning device 100 can also be used to scan the changed bone position to achieve re-registration.
  • the bone surface image information directly scanned by the stereo vision scanning device 100 is generally point cloud data that includes the cartilage on the bone surface.
  • if the preset second virtual bone model is established from a nuclear magnetic resonance (i.e., MRI) scan image, it contains the bone's cartilage data; since the bone surface image information scanned by the stereo vision scanning device 100 also contains cartilage data, the two can be registered directly using the aforementioned method.
  • if, however, the preset second virtual bone model is established from a CT scan, the above step SA3 includes: obtaining cartilage compensation data according to a cartilage compensation algorithm; correcting the first virtual bone model based on the cartilage compensation data; and registering the corrected first virtual bone model with the second virtual bone model.
  • in this embodiment, the cartilage is removed by grinding with a milling cutter.
  • the location of the cartilage removal area is shown in Figure 4: the cartilage removal area 400 is roughly a point-like area whose specific location is decided by the doctor during preoperative planning.
  • Figure 5 shows scanning after cartilage removal by the milling cutter: after the milling cutter grinds away the soft tissue in the cartilage removal area, the stereo vision scanning device 100 scans the bone surface, and the single-point cartilage compensation algorithm is then run to obtain the cartilage compensation data.
  • the single-point cartilage compensation algorithm includes the following steps: first, the Canny operator is used to obtain the edge of the cartilage removal area of the predetermined object; the gradient change along the edge is calculated; the depth change of the non-removed area adjacent to the cartilage removal area is fitted according to the gradient change; and the fit is iteratively extended to cover the entire cartilage area, finally yielding the cartilage compensation data for the subsequent registration.
  • the Canny operator is a multi-stage edge detection algorithm. Specific calculations such as computing the gradient change along the edge, fitting adjacent regions and iterative coverage are well known to those skilled in the art and are not described here.
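  • The sketch below illustrates the idea of the single-point compensation, not the patent's exact computation: a Canny edge of the cleared spot seeds a per-pixel cartilage-thickness estimate, which is then propagated outward by iterative dilation until the cartilage region is covered. The thresholds, kernel and iteration count are illustrative assumptions.

```python
import cv2
import numpy as np

def single_point_cartilage_compensation(depth, cleared_mask, n_iter=200):
    """`depth` is the scanned surface height map; `cleared_mask` marks the
    milled spot where bare bone is exposed. Returns a per-pixel thickness
    map usable as cartilage compensation data."""
    mask8 = cleared_mask.astype(np.uint8) * 255
    edge = cv2.Canny(mask8, 50, 150) > 0        # edge of the cleared area
    gy, gx = np.gradient(depth)                 # depth step across the edge
    thickness = np.zeros_like(depth)
    thickness[edge] = np.hypot(gx[edge], gy[edge])
    known = edge.copy()
    kernel = np.ones((3, 3), np.uint8)
    for _ in range(n_iter):                     # iterative outward fitting
        grown = cv2.dilate(known.astype(np.uint8), kernel) > 0
        ring = grown & ~known
        if not ring.any():
            break                               # whole area covered
        # Box filters; their ratio is the mean over already-fitted neighbors.
        num = cv2.blur(thickness * known, (3, 3))
        den = cv2.blur(known.astype(np.float32), (3, 3))
        thickness[ring] = (num / np.maximum(den, 1e-6))[ring]
        known = grown
    return thickness
```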
  • in this way, this embodiment provides two different processing methods that match the preoperative image model with the actual intraoperative bone model without placing a target on the bone: registration is accurate, causes no secondary trauma to the patient, reduces contact between instruments and the body, lowers the probability of contamination, and facilitates instrument cleaning.
  • in addition, the bone modeling registration system is simple to set up, which simplifies the installation of registration tools during surgery, streamlines the registration process, reduces the complexity of the operation, and shortens the operation time.
  • moreover, scanning with the stereo vision scanning device is more accurate than the traditional point-picking method, better reflects the bone surface structure, and improves registration accuracy.
  • FIG. 7 is a schematic diagram of the cartilage removal area according to the second embodiment of the present invention
  • FIG. 8 is a schematic diagram of scanning after the cartilage removal according to the second embodiment of the present invention.
  • the readable storage medium, bone modeling registration system and orthopedic surgery system provided in the second embodiment of the present invention are basically the same as those provided in the first embodiment; the same parts are not described again, and only the differences are described below.
  • the acquisition method of the cartilage compensation data is different from that in the first embodiment.
  • in the second embodiment, the cartilage is likewise removed by grinding with a milling cutter.
  • the locations of the cartilage removal areas are shown in Figure 7: the removal areas are roughly a plurality of point-like areas whose specific locations are determined by the doctor during preoperative planning.
  • Figure 8 illustrates scanning after cartilage removal by the milling cutter. The cartilage at the different positions need not be removed in any particular order; after removal, the stereo vision scanning device 100 scans the bone surface, and the multi-point cartilage compensation algorithm is then run to obtain the cartilage compensation data.
  • the multi-point cartilage compensation algorithm includes the following steps: first, the Canny operator is used to obtain the edges of the multiple cartilage removal areas of the predetermined object; the gradient changes of the multiple edges are calculated; the removal depth is fitted by expanding outward from each cartilage removal area according to the gradient changes; and the removal range is gradually expanded until the entire cartilage area is covered, yielding the cartilage compensation data.
  • when two expanded areas intersect, the removal depth at the junction is fitted according to the gradient change of the center point of the cartilage removal area closest to the junction, as sketched below.
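  • An illustrative sketch of that junction rule, under the assumption that each cleared spot has already been given a scalar thickness estimate from its edge-gradient fit; each pixel simply takes the value of the spot whose center is nearest.

```python
import numpy as np

def multi_point_compensation(shape, seed_masks, seed_thickness):
    """seed_masks: list of boolean masks, one per cleared spot;
    seed_thickness: list of fitted thickness estimates, one per spot.
    Returns a thickness map in which expanding regions meet along the
    nearest-center boundary, mirroring the junction rule above."""
    centers = [np.argwhere(m).mean(axis=0) for m in seed_masks]
    rows, cols = np.indices(shape)
    # Distance from every pixel to every spot center.
    d = np.stack([np.hypot(rows - c[0], cols - c[1]) for c in centers])
    nearest = d.argmin(axis=0)            # index of the closest cleared spot
    return np.asarray(seed_thickness, dtype=float)[nearest]
```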
  • because this embodiment adopts a multi-point cartilage compensation algorithm, its fitting accuracy is higher than that of the single-point algorithm in the first embodiment; however, multiple cartilage removal areas of the bone must be ground away.
  • FIG. 9 is a schematic diagram of a cartilage removal area according to Embodiment 3 of the present invention
  • FIG. 10 is a schematic diagram of scanning after cartilage removal according to Embodiment 3 of the present invention.
  • the readable storage medium, bone modeling registration system and orthopedic surgery system provided by the third embodiment of the present invention are basically the same as those provided in the first embodiment; the same parts are not described again, and only the differences are described below.
  • the acquisition method of the cartilage compensation data differs from that in the first embodiment.
  • in the third embodiment, the cartilage is removed by scoring with a knife.
  • the locations of the cartilage removal areas are shown in Figure 9: the removal areas are roughly strip-shaped, and their specific locations are determined by the doctor during preoperative planning.
  • Figure 10 shows scanning after cartilage removal by scoring: after the knife scores away the soft tissue in the cartilage removal areas, the stereo vision scanning device 100 scans the bone surface, and the scored-area cartilage compensation algorithm is then run to obtain the cartilage compensation data.
  • the scored-area cartilage compensation algorithm is basically the same as the single-point compensation algorithm of the first embodiment: the removal depth of the surrounding area is fitted from the gradient change along the edge of the scored area, expanding until the entire cartilage area is covered, to obtain the cartilage compensation data.
  • for the remaining details, please refer to Embodiment 1; they are not repeated here.
  • FIG. 11 is a schematic diagram of a bone modeling registration system according to Embodiment 4 of the present invention.
  • the readable storage medium, bone modeling registration system and orthopedic surgery system provided in the fourth embodiment of the present invention are basically the same as those provided in the first embodiment; the same parts are not described again, and only the differences are described below.
  • in the fourth embodiment, the navigation device 200 and the target 300 are omitted; instead, the pose of the stereo vision scanning device 100 is fixed, and its pose information is obtained through calibration.
  • for example, the stereo vision scanning device 100 is held by a robotic arm; after the pose information of the stereo vision scanning device 100 is calibrated, the robotic arm is fixed so that the pose of the stereo vision scanning device 100 remains unchanged.
  • since the pose of the stereo vision scanning device 100 is fixed, the coordinate transformation relationship of the first virtual bone model, established from the acquired bone surface image information, relative to the world coordinate system is known; therefore, after the first virtual bone model is registered with the second virtual bone model, the registration matrix between the two can be obtained directly.
  • when the program in the readable storage medium is executed, it also implements: acquiring real-time bone surface image information of a predetermined object, the real-time bone surface image information coming from the fixed-pose stereo vision scanning device; performing three-dimensional reconstruction according to the real-time bone surface image information to obtain a real-time first virtual bone model; registering the first virtual bone model with the preset second virtual bone model of the predetermined object in real time; and obtaining the real-time coordinates of the predetermined object in the navigation image coordinate system.
  • further, the pose of the stereo vision scanning device 100 is kept unchanged and the device keeps scanning the bone surface, so a real-time registration matrix is obtained by running the registration algorithm continuously, achieving real-time tracking.
  • this configuration enables real-time registration even if the position of the bone changes during surgery, as in the loop sketched below.
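  • A hypothetical sketch of such a real-time loop. `grab_point_cloud` stands in for the scanner's acquisition call and is not a real API; `icp` is a registration routine such as the one sketched in Embodiment 1.

```python
import numpy as np

def realtime_registration(grab_point_cloud, model_points, icp, steps=1000):
    """Fixed-pose scanner: register each fresh scan against the preset
    second virtual bone model, warm-started from the previous registration
    matrix, so the bone is tracked even if it moves during surgery."""
    T = np.eye(4)                         # latest registration matrix
    for _ in range(steps):
        scan = grab_point_cloud()         # fresh bone-surface point cloud
        homog = np.hstack([scan, np.ones((len(scan), 1))])
        warm = (homog @ T.T)[:, :3]       # apply previous result as init
        T = icp(warm, model_points) @ T   # refine and accumulate
        yield T                           # real-time coordinates for navigation
```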
  • the readable storage medium, bone modeling registration system and orthopedic surgery system provided by the fifth embodiment of the present invention are basically the same as those provided in the first embodiment; the same parts are not described again, and only the differences are described below.
  • the acquisition method of the cartilage compensation data differs from that in the first embodiment.
  • in the fifth embodiment, cartilage compensation is not accomplished by mechanical means such as a milling cutter or scoring knife; instead, a neural network trained by a machine learning algorithm automatically removes the cartilage data.
  • that is, the cartilage compensation algorithm includes: computing on the first virtual bone model with a neural network trained on a training set to obtain the cartilage compensation data. The method of training a neural network on a training set is well known to those skilled in the art and is not described here; an illustrative sketch follows.
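  • Purely as an illustration (the patent specifies neither architecture, input features nor training data), a per-point regressor of this kind might look as follows; the choice of position-plus-normal features and the MLP layout are assumptions.

```python
import torch
import torch.nn as nn

class CartilageCompensationNet(nn.Module):
    """Hypothetical per-point regressor: given local features of a point on
    the first virtual bone model (here, position + surface normal), predict
    the cartilage thickness to subtract along the normal."""
    def __init__(self, in_features=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, point_features):    # (N, 6) -> (N,)
        return self.net(point_features).squeeze(-1)

# Correcting the model (points, normals are N x 3 tensors):
#   t = CartilageCompensationNet()(torch.cat([points, normals], dim=1))
#   corrected_points = points - normals * t.unsqueeze(-1)
```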
  • the readable storage medium, bone modeling registration system and orthopedic surgery system of the above embodiments are described in a progressive manner, and the differing parts of the embodiments can be combined with one another; the present invention places no limitation on this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Surgical Instruments (AREA)
  • Prostheses (AREA)

Abstract

The present invention provides a readable storage medium, a bone modeling registration system and an orthopedic surgery system. A program is stored on the readable storage medium and, when executed, implements: acquiring bone surface image information of a predetermined object, the bone surface image information coming from a stereo vision scanning device; performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model; registering the first virtual bone model with a preset second virtual bone model of the predetermined object; and obtaining the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device. With this configuration, there is no need to mount a target on the bone, so the patient suffers no secondary trauma; contact between instruments and the body is reduced, the probability of contamination is lowered, and instrument cleaning is facilitated. In addition, the system is simple to set up, simplifying the installation of registration tools during surgery and streamlining the registration process.

Description

Readable storage medium, bone modeling registration system and orthopedic surgery system
Technical Field
The present invention relates to the field of robot-assisted surgery systems, and in particular to a readable storage medium, a bone modeling registration system and an orthopedic surgery system.
Background
In recent years, surgical navigation systems have been used more and more in surgery, especially orthopedic surgery. Systems such as the MAKO and Robodoc orthopedic surgical navigation systems combine a robotic arm with infrared optical navigation equipment and, based on the surgeon's preoperative plan together with intraoperative registration technology, use a robot to assist the surgeon in completing the operation. Here, bone registration technology obtains the coordinate transformation relationship between the first virtual bone model in the navigation system and the actual bone, but current general-purpose registration tools and methods have the following problems:
(1) The process is cumbersome and time-consuming. Traditional registration methods require a target to be mounted on the bone at the surgical site. The target must be mounted reliably so that it does not loosen; otherwise the registration must be redone. In particular, when registering the surgical site, a probe must be used to pick a certain number of registration points to complete the registration. Since site registration has strict accuracy requirements, the registration success rate is low, registration may have to be repeated to meet the requirements, the process is cumbersome, and the operation time is increased.
(2) Poor reliability and low fault tolerance. Because registration accuracy directly affects the accuracy of the entire system and the outcome of the entire operation, bone registration demands extremely high accuracy. For traditional registration algorithms, the degree of match between the actual bone and the CT data, and the number and placement of the registration points, all affect registration accuracy. In particular, the bone surface is covered with a layer of soft tissue that the probe must pierce to select suitable registration points, so the registration accuracy often falls short and re-registration is needed. Moreover, accuracy is best exactly where the registration points were picked, while the matching accuracy of the remaining regions cannot be guaranteed to stay high, which affects the reliability of the entire system.
(3) Secondary injury to the patient. As noted above, current surgical registration methods require bone targets to be mounted on the body, for example on the femur or tibia, which causes new damage to the bone and lengthens the patient's recovery period. If the operator is inexperienced, the bone screws may not be mounted in the proper position, making it necessary to remount the target and further increasing the surgical risk.
Summary of the Invention
The purpose of the present invention is to provide a readable storage medium, a bone modeling registration system and an orthopedic surgery system, so as to solve the problems existing in current bone registration.
To solve the above technical problems, according to a first aspect of the present invention, a readable storage medium is provided, on which a program is stored; when the program is executed, it implements:
acquiring bone surface image information of a predetermined object, the bone surface image information coming from a stereo vision scanning device;
performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model;
registering the first virtual bone model with a preset second virtual bone model of the predetermined object;
obtaining the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device.
可选的,所述骨表面图像信息包括至少两幅不同视角的图像数据。
可选的,根据所述至少两幅不同视角的骨表面图像信息进行三维重建,得到第一虚拟骨模型的步骤包括:
对所述骨表面图像信息进行分割处理,获取骨轮廓点云数据;
基于所述骨轮廓点云数据进行三维重建,得到所述第一虚拟骨模型。
可选的,所述可读存储介质上的程序被执行时,基于所述立体视觉扫描装置的位姿信息,得到所述预定对象于导航影像坐标系下的坐标的步骤包括:
基于所述立体视觉扫描装置与一靶标的预定连接关系,得到立体视觉扫描装置坐标系与靶标坐标系之间的第一坐标转换关系;
基于导航装置对所述靶标的定位,得到所述靶标坐标系与导航装置坐标系之间的第二坐标转换关系;
基于所述第一虚拟骨模型与所述第二虚拟骨模型的配准,获得立体视觉扫描装置坐标系与导航影像坐标系之间的第三坐标转换关系;
基于所述第一坐标转换关系、所述第二坐标转换关系和所述第三坐标转换关系,对所述第一虚拟骨模型进行坐标转换,得到所述预定对象于所述导 航影像坐标系下的坐标。
Optionally, the preset second virtual bone model is built from an MRI scan of the bone surface of the predetermined object.
Optionally, the preset second virtual bone model is built from a CT scan of the bone surface of the predetermined object, and the step of registering the first virtual bone model with the preset second virtual bone model of the predetermined object includes:
obtaining cartilage compensation data according to a cartilage compensation algorithm;
correcting the first virtual bone model based on the cartilage compensation data, and registering the corrected first virtual bone model with the second virtual bone model.
Optionally, when the program on the readable storage medium is executed, the cartilage compensation algorithm includes: using the Canny operator to obtain the edge of a cartilage-cleared region of the predetermined object, computing the gradient change along the edge, fitting the depth change of the uncleared region adjacent to the cartilage-cleared region according to the gradient change, and iterating until the whole cartilage region is covered, thereby obtaining the cartilage compensation data.
Optionally, when the program on the readable storage medium is executed, the cartilage compensation algorithm includes: using the Canny operator to obtain the edges of a plurality of cartilage-cleared regions of the predetermined object, computing the gradient changes along the edges, and fitting a clearance depth by expanding outward from each cartilage-cleared region according to the gradient changes; the cleared extent is expanded progressively until the whole cartilage region is covered, yielding the cartilage compensation data.
Optionally, when the program on the readable storage medium is executed, when two expanded regions intersect, the clearance depth at the boundary is fitted from the gradient change of the center point of the cartilage-cleared region closest to the boundary.
Optionally, when the program on the readable storage medium is executed, the cartilage compensation algorithm includes: computing on the first virtual bone model with a neural network trained on a training set to obtain the cartilage compensation data.
Optionally, when the program on the readable storage medium is executed, the step of registering the first virtual bone model with the preset second virtual bone model includes:
coarsely registering the first virtual bone model with the second virtual bone model;
removing probable outlier points with the RANSAC consensus algorithm;
iterating with the ICP algorithm until a preset convergence condition is met, fitting the registration matrix.
Optionally, when executed, the program on the readable storage medium further implements: acquiring real-time bone surface image information of the predetermined object, the real-time bone surface image information coming from a stereo vision scanning device with a fixed pose;
performing three-dimensional reconstruction according to the real-time bone surface image information to obtain a real-time first virtual bone model;
registering the first virtual bone model with the preset second virtual bone model of the predetermined object in real time, thereby obtaining real-time coordinates of the predetermined object in the navigation image coordinate system.
To solve the above technical problem, according to a second aspect of the present invention, there is also provided a bone modeling registration system, including a processor and a stereo vision scanning device; the stereo vision scanning device is configured to acquire bone surface image information of a predetermined object and feed the bone surface image information back to the processor;
the processor is communicatively connected to the stereo vision scanning device; the processor is configured to acquire pose information of the stereo vision scanning device and the bone surface image information of the predetermined object obtained by the stereo vision scanning device; perform three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model; register the first virtual bone model with a preset second virtual bone model; and obtain, based on the pose information of the stereo vision scanning device, the coordinates of the predetermined object in the navigation image coordinate system.
Optionally, in the bone modeling registration system, the pose of the stereo vision scanning device is fixed, and the pose information of the stereo vision scanning device is obtained by calibration.
Optionally, the bone modeling registration system further includes a navigation device and a target. The target is connected to the stereo vision scanning device, and the navigation device is adapted to the target so as to acquire real-time coordinate information of the target and transmit the real-time coordinate information to the processor; the processor is communicatively connected to the navigation device and obtains the pose information of the stereo vision scanning device based on the real-time coordinate information of the target fed back by the navigation device.
To solve the above technical problem, according to a third aspect of the present invention, there is also provided an orthopedic surgery system, including the bone modeling registration system described above and at least one of a robotic arm, a surgical trolley and a navigation trolley.
In summary, in the readable storage medium, bone modeling registration system and orthopedic surgery system provided by the present invention, a program is stored on the readable storage medium which, when executed, implements: acquiring bone surface image information of a predetermined object, the bone surface image information coming from a stereo vision scanning device; performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model; registering the first virtual bone model with a preset second virtual bone model of the predetermined object; and obtaining, based on the pose information of the stereo vision scanning device, the coordinates of the predetermined object in the navigation image coordinate system.
With this configuration, the stereo vision scanning device acquires the bone surface image information, from which the first virtual bone model is reconstructed; based on the registration of the first virtual bone model with the preset second virtual bone model and the transformations between the coordinate systems, the coordinates of the predetermined object in the navigation image coordinate system are obtained. In other words, the preoperative image model is registered to the actual intraoperative bone model. No target needs to be mounted on the bone, so the patient suffers no secondary injury; contact between instruments and the body is reduced, lowering the risk of contamination and facilitating instrument cleaning. In addition, the bone modeling registration system is simple to set up, simplifying the installation of registration tools during surgery, streamlining the registration workflow, reducing the complexity of the operation and shortening the operating time. Furthermore, scanning with a stereo vision scanning device is more accurate than traditional point picking, better reflects the bone surface structure, and improves registration accuracy.
Brief Description of the Drawings
Those of ordinary skill in the art will appreciate that the drawings are provided for a better understanding of the present invention and do not limit its scope in any way. In the drawings:
FIG. 1 is a schematic view of a surgical scene of the orthopedic surgery system according to the present invention;
FIG. 2 is a flowchart of the orthopedic surgical procedure according to the present invention;
FIG. 3 is a schematic view of the bone modeling registration system of Embodiment 1 of the present invention;
FIG. 4 is a schematic view of the cartilage-cleared region of Embodiment 1 of the present invention;
FIG. 5 is a schematic view of scanning after cartilage clearance in Embodiment 1 of the present invention;
FIG. 6 is a schematic view of the coordinate transformations of Embodiment 1 of the present invention;
FIG. 7 is a schematic view of the cartilage-cleared regions of Embodiment 2 of the present invention;
FIG. 8 is a schematic view of scanning after cartilage clearance in Embodiment 2 of the present invention;
FIG. 9 is a schematic view of the cartilage-cleared regions of Embodiment 3 of the present invention;
FIG. 10 is a schematic view of scanning after cartilage clearance in Embodiment 3 of the present invention;
FIG. 11 is a schematic view of the bone modeling registration system of Embodiment 4 of the present invention.
In the drawings:
1 - surgical trolley; 2 - robotic arm; 3 - tool target; 4 - osteotomy guide tool; 5 - surgical tool; 6 - tracker; 7 - auxiliary display; 8 - main display; 9 - navigation trolley; 10 - keyboard; 12 - femur; 14 - tibia; 17 - patient; 18 - operator;
T1 - navigation device coordinate system; T2 - target coordinate system; T3 - stereo vision scanning device coordinate system; T4 - navigation image coordinate system;
100 - stereo vision scanning device; 200 - navigation device; 300 - target; 400 - cartilage-cleared region.
Detailed Description of the Embodiments
To make the objects, advantages and features of the present invention clearer, the present invention is described in further detail below with reference to the drawings and specific embodiments. It should be noted that the drawings are highly simplified and not drawn to scale; they are intended only to assist in explaining the embodiments conveniently and clearly. Moreover, the structures shown in the drawings are often only part of the actual structures. In particular, different drawings emphasize different aspects and sometimes use different scales.
As used herein, the singular forms "a", "an" and "the" include plural referents; the term "or" is generally used in a sense including "and/or"; the term "several" is generally used in a sense including "at least one"; and the term "at least two" is generally used in a sense including "two or more". In addition, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated; thus a feature qualified by "first", "second" or "third" may explicitly or implicitly include one or at least two such features. The term "proximal" generally refers to the end closer to the operator, and the term "distal" to the end closer to the object being operated on; "one end" and "the other end", like "proximal" and "distal", generally refer to two corresponding parts and are not limited to end points. The terms "mounted", "connected" and "coupled" are to be understood broadly: for example, the connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediary; or an internal communication between two elements or an interaction between two elements. Furthermore, as used herein, one element being disposed on another generally only means that there is a connection, coupling, fit or transmission relationship between the two elements, which may be direct or indirect through an intermediate element, and is not to be understood as indicating or implying a spatial positional relationship between the two elements; that is, one element may be inside, outside, above, below or to one side of the other, unless the context clearly indicates otherwise. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
The central idea of the present invention is to provide a readable storage medium, a bone modeling registration system and an orthopedic surgery system, so as to solve the problems of existing bone registration.
Please refer to FIGS. 1 and 2. FIG. 1 is a schematic view of a surgical scene of the orthopedic surgery system according to the present invention; FIG. 2 is a flowchart of the orthopedic surgical procedure of the orthopedic surgery system according to the present invention.
FIG. 1 shows an exemplary application scenario in which the orthopedic surgery system is used for knee replacement; however, the orthopedic surgery system of the present invention is not particularly limited in its application environment and may also be applied to other orthopedic operations. In the following description, the orthopedic surgery system is explained with knee replacement as an example, but this should not be construed as limiting the present invention.
As shown in FIG. 1, the orthopedic surgery system includes a control device, the bone modeling registration system, a robotic arm 2 and an osteotomy guide tool 4. The robotic arm 2 is mounted on a surgical trolley 1. In some embodiments the control device is a computer, although the present invention is not limited thereto; the computer is provided with a processor, a main display 8 and a keyboard 10, and more preferably also an auxiliary display 7. The content shown on the auxiliary display 7 and the main display 8 may be the same or different. The osteotomy guide tool 4 is mounted at the distal end of the robotic arm 2, so that the robotic arm 2 supports the osteotomy guide tool 4 and adjusts its spatial position and attitude.
In some embodiments, the orthopedic surgery system further includes a navigation device, which may be an electromagnetic, optical or inertial navigation device. Preferably, the navigation device is an optical navigation device which, compared with other navigation approaches, offers high measurement accuracy and can effectively improve the positioning accuracy of the osteotomy guide tool 4. In the following description an optical navigation device is taken as an example, but the invention is not limited thereto.
Optionally, a tracker 6 captures the signal (preferably an optical signal) reflected by the tool target 3 and records the pose of the tool target 3 (that is, the position and attitude of the tool target 3 in the base coordinate system). When the computer program stored in the memory of the control device is executed by the processor, the robotic arm 2 is controlled to move according to the current pose and the desired pose of the tool target 3; the robotic arm 2 drives the osteotomy guide tool 4 and the tool target 3 so that the tool target 3 reaches its desired pose, which corresponds to the desired pose of the osteotomy guide tool 4.
Thus, in the application of the orthopedic surgery system, automatic positioning of the osteotomy guide tool 4 is achieved: during the operation, the tool target 3 tracks and feeds back the real-time pose (position and attitude) of the osteotomy guide tool 4, and the position and attitude of the osteotomy guide tool 4 are adjusted by controlling the motion of the robotic arm. Not only is the positioning accuracy of the osteotomy guide tool 4 high, but the osteotomy guide tool 4 is supported by the robotic arm 2 rather than being fixed to the body, avoiding secondary injury to the patient.
Generally, the orthopedic surgery system further includes a surgical trolley 1 and a navigation trolley 9. The control device and part of the navigation device are mounted on the navigation trolley 9: for example, the processor is mounted inside the navigation trolley 9 and the keyboard 10 is placed outside the navigation trolley 9 for operation; the main display 8, the auxiliary display 7 and the tracker 6 are all mounted on a stand fixed vertically on the navigation trolley 9, while the robotic arm 2 is mounted on the surgical trolley 1. The surgical trolley 1 and the navigation trolley 9 make the whole operation more convenient.
Referring to FIG. 2, when performing knee replacement surgery, use of the orthopedic surgery system of this embodiment roughly includes the following operations:
Step SK1: move the surgical trolley 1 and the navigation trolley 9 to suitable positions beside the operating table;
Step SK2: install the navigation markers, the osteotomy guide tool 4 and other related components (such as sterile bags);
Step SK3: preoperative planning. Specifically, the operator 18 imports the CT/MRI bone scan image data of the patient 17 into the computer for preoperative planning, obtaining an osteotomy plan that includes, for example, the osteotomy plane coordinates, the prosthesis model and the prosthesis installation orientation. More specifically, the computer creates a three-dimensional virtual knee joint model from the patient's knee image data obtained by CT/MRI scanning, and then creates the osteotomy plan from the three-dimensional virtual model so that the operator can perform a preoperative assessment. The osteotomy plan is determined based on the three-dimensional virtual knee model together with the obtained prosthesis dimensions and the installation position of the osteotomy plate, and is finally output as a surgical report recording a series of reference data such as the osteotomy plane coordinates, the amount of bone to be resected, the osteotomy angle, the prosthesis specification, the prosthesis installation position and the surgical auxiliary tools, and in particular a series of explanatory notes, such as the reason for selecting the given osteotomy angle, for the operator's reference. The three-dimensional virtual knee model can be displayed on the main display 8, and the operator can enter surgical parameters through the keyboard 10 for preoperative planning;
Step SK4: real-time bone registration. After the preoperative assessment, the positions of bone feature points are acquired in real time, after which the processor can obtain the actual orientations of the femur 12 and tibia 14 through a feature matching algorithm and associate them with the image orientations of the femur 12 and tibia 14. The bone modeling registration system registers the actual orientations of the femur 12 and tibia 14 with the three-dimensional virtual knee model created in step SK3 from the patient's preoperative CT/MRI image data, so that the actual orientations of the femur 12 and tibia 14 are unified with the coordinates of the robotic arm under the navigation device, and the robotic arm 2 can operate according to the bone position planned in the navigation device;
Step SK5: drive the robotic arm into position and perform the operation. The osteotomy plane coordinates from the preoperative plan are sent to the robotic arm 2 through the navigation device; the robotic arm 2 locates the osteotomy plane via the tool target 3, moves to the predetermined position, and then enters a holding state (i.e., remains stationary). The operator can then use a surgical tool 5 such as an oscillating saw or an electric drill, guided by the osteotomy guide tool 4, to perform the osteotomy and/or drilling. After completing the osteotomy and drilling, the operator can install the prosthesis and perform other surgical procedures.
In traditional surgery and in navigation systems without a robotic arm for positioning, the osteotomy guide tool must be adjusted manually, with poor accuracy and low efficiency. When a robotic arm positions the osteotomy guide tool, the operator does not need extra bone pins to fix the tool to the bone, which reduces the patient's wound surface and shortens the operating time.
Based on the above orthopedic surgery system, robot-assisted surgery can be performed to help the operator locate the position to be resected and facilitate the osteotomy. To solve the problems of existing bone registration, the present invention provides a bone modeling registration system including a processor and a stereo vision scanning device 100. The processor here may be a shared processor in the computer on the surgical trolley 1 or a separately provided processor. The stereo vision scanning device 100 is used to acquire bone surface image information of a predetermined object, i.e., three-dimensional data of the bone surface, and feed it back to the processor. The processor is communicatively connected to the stereo vision scanning device 100 and is configured to acquire the pose information of the stereo vision scanning device 100 and the bone surface image information of the predetermined object obtained by it, perform three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model, register the first virtual bone model with a preset second virtual bone model, and obtain, based on the pose information of the stereo vision scanning device 100, the coordinates of the predetermined object in the navigation image coordinate system. The orthopedic surgery system of the present invention includes the bone modeling registration system described above, and preferably also at least one of the robotic arm 2, the surgical trolley 1 and the navigation trolley 9. With this configuration, the bone modeling registration system can be used to register the bone position preoperatively or intraoperatively. No target needs to be mounted on the bone during the procedure, so the patient suffers no secondary injury; contact between instruments and the body is reduced, lowering the risk of contamination and facilitating instrument cleaning. In addition, the bone modeling registration system is simple to set up, simplifying the installation of the registration tools during surgery, streamlining the registration workflow, reducing the complexity of the operation and shortening the operating time. Furthermore, scanning with the stereo vision scanning device is more accurate than traditional point picking, better reflects the bone surface structure, and improves registration accuracy.
The readable storage medium, bone modeling registration system and orthopedic surgery system provided by the present invention are described in detail below through several embodiments with reference to the drawings.
[Embodiment 1]
Please refer to FIGS. 3 to 6. FIG. 3 is a schematic view of the bone modeling registration system of Embodiment 1; FIG. 4 is a schematic view of the cartilage-cleared region of Embodiment 1; FIG. 5 is a schematic view of scanning after cartilage clearance in Embodiment 1; FIG. 6 is a schematic view of the coordinate transformations of Embodiment 1.
FIG. 3 shows the bone modeling registration system provided by Embodiment 1, which includes a processor (not shown) and a stereo vision scanning device 100. Preferably, the bone modeling registration system further includes a navigation device 200 and a target 300. The navigation device 200 may reuse the tracker 6 of the orthopedic surgery system described above or be provided separately. In the following, the tracker 6 is taken as an example of the navigation device 200, and correspondingly the target 300 is an optical target adapted to the tracker 6. The target 300 is connected to the stereo vision scanning device 100; the navigation device 200 is adapted to the target 300 so as to acquire real-time coordinate information of the target 300 and transmit it to the processor; the processor is communicatively connected to the navigation device 200 and obtains the pose information of the stereo vision scanning device 100 from the real-time coordinate information of the target 300 fed back by the navigation device 200.
Specifically, the stereo vision scanning device 100 scans the predetermined object (such as the knee joint) using the principle of binocular stereo vision to obtain wide-range, high-accuracy three-dimensional depth information, i.e., three-dimensional data of the knee joint surface. Binocular stereo vision is the basis of three-dimensional imaging: based on the parallax principle, an imaging device captures two images of the measured object from different positions, and the three-dimensional geometric information of the object is obtained by computing the positional offset between corresponding points in the two images. The stereo vision scanning device 100 may be a mature binocular or trinocular scanner available on the market. After obtaining the bone surface image information or depth information scanned by the stereo vision scanning device 100, the processor performs three-dimensional reconstruction according to the bone surface image information to obtain the first virtual bone model; the three-dimensional reconstruction step may also be performed directly by the stereo scanning device, which then sends the reconstructed information to the processor. Preferably, the bone surface image information includes at least two images from different viewing angles. Further, the processor can obtain the preset second virtual bone model, which may be built from a preoperative CT scan of the bone surface of the predetermined object or from an MRI scan of the bone surface of the predetermined object.
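To make the parallax principle above concrete, the following sketch (not part of the patent disclosure; the focal length, baseline and disparity values are hypothetical placeholders) recovers per-pixel depth from a rectified binocular disparity map via Z = f·B/d:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Recover per-pixel depth from a rectified stereo disparity map.

    Z = f * B / d, where f is the focal length in pixels, B is the baseline
    between the two cameras in meters, and d is the disparity in pixels.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.inf)
    valid = d > 0                      # zero disparity means a point at infinity
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Hypothetical numbers: a scanner head with a 70 mm baseline, f = 1400 px.
disparity = np.array([[12.5, 13.0], [0.0, 14.2]])
print(depth_from_disparity(disparity, focal_px=1400.0, baseline_m=0.07))
```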
Optionally, the stereo vision scanning device 100 is fixedly connected to the target 300, the two having a known relative positional relationship. The navigation device 200 can therefore obtain the pose information of the stereo vision scanning device 100 through the target 300 and transmit it to the processor; the processor registers the first virtual bone model with the preset second virtual bone model and, based on the pose information of the stereo vision scanning device 100, obtains the coordinates of the predetermined object in the navigation image coordinate system. The actual bone position is thereby registered to the navigation image. In some embodiments, the relative positional relationship between the stereo vision scanning device 100 and the target 300 may be fixed and obtained, for example, from the mechanical design file or by calibration.
Accordingly, Embodiment 1 also provides a readable storage medium, for example a computer-readable storage medium, on which a program such as a computer program is stored; when executed, the program implements:
Step SA1: acquire bone surface image information of the predetermined object, the bone surface image information coming from a stereo vision scanning device 100;
Step SA2: perform three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model;
Step SA3: register the first virtual bone model with a preset second virtual bone model;
Step SA4: obtain, based on the pose information of the stereo vision scanning device 100, the coordinates of the predetermined object in the navigation image coordinate system.
Further, step SA2 specifically includes:
Step SA21: segment the bone surface image information to obtain bone contour point cloud data;
Step SA22: perform three-dimensional reconstruction based on the bone contour point cloud data to obtain the first virtual bone model, i.e., the reconstructed virtual model of the femur.
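As an illustration of steps SA21 and SA22, the sketch below (assuming a pinhole camera model; the intrinsics and mask are hypothetical inputs, and the patent does not prescribe a specific reconstruction routine) back-projects the segmented depth pixels into the bone contour point cloud from which the model is reconstructed:

```python
import numpy as np

def mask_to_point_cloud(depth_m, bone_mask, fx, fy, cx, cy):
    """Back-project segmented depth pixels into a 3D bone contour point cloud.

    depth_m    : HxW depth map in meters (from the stereo scanner)
    bone_mask  : HxW boolean segmentation of bone pixels
    fx,fy,cx,cy: pinhole intrinsics of the (rectified) reference camera
    """
    v, u = np.nonzero(bone_mask)          # pixel rows/cols inside the bone mask
    z = depth_m[v, u]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))     # Nx3 points in camera coordinates
```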
Preferably, step SA3 specifically includes: coarsely registering the first virtual bone model with the second virtual bone model; removing probable outlier points with the RANSAC consensus algorithm; and iterating with the ICP algorithm until a preset convergence condition is met, fitting the registration matrix. Specifically, point cloud registration transforms the point cloud coordinates of two different coordinate systems into each other's coordinate system. The algorithm of step SA3 is an improved version of the classical ICP (Iterative Closest Point) algorithm, whose steps can be summarized as follows:
a) determine the corresponding point set q_i in the target point cloud;
b) fit the rotation R and translation T that minimize the distance between the source points p_i and the corresponding q_i; the distance error E (reconstructed here in the standard ICP form, as the original formula image is not reproduced) is

E = \frac{1}{N} \sum_{i=1}^{N} \left\| q_i - (R\,p_i + T) \right\|^2

c) iterate until E satisfies the convergence condition.
In the algorithm of step SA3, the first virtual bone model built from the scanned bone surface image information is first coarsely registered with the preset second virtual bone model so that the two models are as close as possible; the RANSAC consensus algorithm then removes probable outlier points, and the ICP algorithm iterates until the convergence condition is met, fitting the final registration matrix.
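A minimal sketch of this coarse-to-fine pipeline, assuming the Open3D library (the patent names no library, and the voxel size and distance thresholds below are hypothetical): a feature-based RANSAC alignment supplies the coarse pose and rejects mismatched correspondences, after which ICP iterates to the final registration matrix.

```python
import open3d as o3d

def register_bone_models(source_pcd, target_pcd, voxel=0.003):
    """Coarse FPFH+RANSAC alignment followed by ICP refinement."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source_pcd)
    tgt_down, tgt_fpfh = preprocess(target_pcd)

    # Coarse registration; RANSAC rejects probable outlier correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # ICP refinement, iterating until the preset convergence criteria are met.
    fine = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```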
Preferably, the coordinate systems are unified in step SA4. Specifically, the coordinate system of the navigation device 200 is the navigation device coordinate system T1; the coordinate system of the target is the target coordinate system T2; the coordinate system of the stereo vision scanning device 100 is the stereo vision scanning device coordinate system T3; and the coordinate system of the preset second virtual bone model is the navigation image coordinate system T4.
Step SA4 includes: obtaining the first coordinate transformation between the stereo vision scanning device coordinate system T3 and the target coordinate system T2 based on the predetermined connection between the stereo vision scanning device 100 and a target 300; obtaining the second coordinate transformation between the target coordinate system T2 and the navigation device coordinate system T1 based on the navigation device 200's localization of the target 300; obtaining the third coordinate transformation between the stereo vision scanning device coordinate system T3 and the navigation image coordinate system T4 based on the registration of the first and second virtual bone models in step SA3; and performing coordinate transformation on the first virtual bone model based on the first, second and third coordinate transformations to obtain the coordinates of the predetermined object in the navigation image coordinate system T4.
Specifically, referring to FIG. 6: since the stereo vision scanning device 100 and the target 300 have a predetermined connection, their relative position is known in advance, so the first coordinate transformation between the stereo vision scanning device coordinate system T3 and the target coordinate system T2 is known, obtained for example from the mechanical design file or by calibration. Since the navigation device 200 and the target 300 are adapted to each other, the navigation device 200 can acquire the position of the target 300 in real time, so the transformation between the target coordinate system T2 and the navigation device coordinate system T1 is obtained by the navigation device 200 tracking the target 300. From the registration of the first and second virtual bone models in step SA3, the transformation between the navigation device coordinate system T1 and the navigation image coordinate system T4 can be obtained. The specific process is as follows (the original formula images are not reproduced; the relations are reconstructed here in standard homogeneous-transform notation consistent with the surrounding text):

{}^{T1}P = {}^{T1}M_{T2}\,{}^{T2}M_{T3}\,{}^{T3}P
{}^{T4}P = {}^{T4}M_{T3}\,{}^{T3}P
{}^{T4}M_{T1} = {}^{T4}M_{T3}\,\left({}^{T1}M_{T2}\,{}^{T2}M_{T3}\right)^{-1}

where {}^{T2}M_{T3}, {}^{T1}M_{T2} and {}^{T4}M_{T3} are the transformation matrices of the first, second and third coordinate transformations, respectively. Through the above transformations, the first virtual bone model built from the bone surface image information acquired by the stereo vision scanning device 100 is transferred from the stereo vision scanning device coordinate system T3 into the target coordinate system T2 by the first transformation matrix {}^{T2}M_{T3}; the navigation device 200 tracks the target 300 to obtain the second transformation matrix {}^{T1}M_{T2}, converting the bone surface image information from the target coordinate system T2 into the navigation device coordinate system T1; and then, from the registration of the first and second virtual bone models in step SA3, the transformation between the navigation device coordinate system T1 and the navigation image coordinate system T4 is obtained. Thus, based on the navigation device 200's tracking of the target 300, the coordinates in the navigation image coordinate system T4 of the first virtual bone model built from the bone surface image information acquired by the stereo vision scanning device 100 are obtained; that is, the bone position is registered.
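The chain of transforms can be composed with ordinary homogeneous matrices; the sketch below (with hypothetical placeholder rotations and translations) mirrors the three relations above and derives the tracker-to-image relation used once the scanner is removed:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous matrix from rotation R (3x3) and translation t (3,)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Hypothetical transforms standing in for the patent's three relations:
M_T2_T3 = make_transform(np.eye(3), [0.00, 0.05, 0.00])  # scanner -> target (calibrated)
M_T1_T2 = make_transform(np.eye(3), [0.30, 0.00, 1.20])  # target  -> tracker (tracked live)
M_T4_T3 = make_transform(np.eye(3), [0.01, 0.02, 0.90])  # scanner -> image (from registration)

# Map a scanned point from scanner coordinates T3 into the navigation image frame T4,
# and derive the tracker-to-image relation that remains valid after the scanner leaves.
p_T3 = np.array([0.02, 0.01, 0.45, 1.0])       # homogeneous point
p_T4 = M_T4_T3 @ p_T3
M_T4_T1 = M_T4_T3 @ np.linalg.inv(M_T1_T2 @ M_T2_T3)
```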
In this embodiment, since the stereo vision scanning device 100 may move relative to the patient while held by the operator, and the generated registration matrix cannot support navigation in an unstable coordinate system, the navigation device coordinate system T1 is set as the world coordinate system and the first virtual bone model is transformed into the navigation device coordinate system T1. The navigation device 200 is kept fixed after scanning; as long as the patient remains still, the navigation device 200 likewise remains stationary relative to the patient, and the fixed navigation device 200 allows the stereo vision scanning device 100 to be removed after scanning without interfering with the subsequent operation.
It will be appreciated that the readable storage medium, bone modeling registration system and orthopedic surgery system provided by this embodiment are not limited to preoperative use. If registration is needed intraoperatively, for example because the position of the patient's leg has changed, the stereo vision scanning device 100 can be used to scan the repositioned bone and register it again.
It should be noted that, since the bone surface is covered by a layer of soft tissue, the bone surface image information obtained directly by the stereo vision scanning device 100 generally contains point cloud data of the cartilage on the bone surface. Depending on the source of the second virtual bone model, two processing paths follow. When the preset second virtual bone model is built from magnetic resonance (MRI) scan images, it contains the cartilage data of the bone; since the bone surface image information obtained by the stereo vision scanning device 100 likewise contains cartilage data, the two can be registered directly by the method described above.
When the preset second virtual bone model is built from CT scan images, it consists of hard-bone point cloud data that excludes cartilage, so a compensation algorithm must be applied to the bone surface image information scanned by the stereo vision scanning device 100 to extract the hard-bone surface point cloud for registration. Accordingly, the above step SA3 includes: obtaining cartilage compensation data according to a cartilage compensation algorithm; correcting the first virtual bone model based on the cartilage compensation data; and registering the corrected first virtual bone model with the second virtual bone model.
Optionally, in this embodiment, the cartilage is cleared by grinding with a milling cutter. The location of the cartilage-cleared region is shown in FIG. 4; the cartilage-cleared region 400 is roughly a point-like region whose specific location is decided by the surgeon during preoperative planning.
FIG. 5 illustrates scanning after the milling cutter has cleared the cartilage: after the milling cutter grinds away the soft tissue of the cartilage-cleared region, the stereo vision scanning device 100 scans the bone surface and the single-point cartilage compensation algorithm is run to obtain the cartilage compensation data.
Further, the single-point cartilage compensation algorithm includes the following steps: first, use the Canny operator to obtain the edge of the cartilage-cleared region of the predetermined object; compute the gradient change along the edge; fit the depth change of the uncleared region adjacent to the cartilage-cleared region from the gradient change; and iterate until the whole cartilage region is covered, finally fitting the cartilage compensation data for the subsequent registration. It should be noted that the Canny operator is a multi-stage edge detection algorithm; computing the gradient change along an edge, fitting adjacent regions and iterative coverage are computation methods well known to those skilled in the art and are not elaborated here.
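The following sketch illustrates the single-point compensation loop using OpenCV's Canny operator. It is a simplified stand-in for the patent's method: in particular, the per-ring fit is reduced here to a constant thickness estimated from the edge gradient, whereas the patent fits the depth change of each adjacent region; the thresholds and growth parameters are hypothetical.

```python
import cv2
import numpy as np

def single_point_compensation(depth_map, seed, grow_px=5, iters=50):
    """Sketch of the single-region cartilage compensation loop: estimate a
    clearance depth at the milled region's edge, then propagate it outward."""
    # Edge of the cleared region via Canny on a normalized 8-bit depth image.
    depth_u8 = cv2.normalize(depth_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(depth_u8, 50, 150)

    # The depth gradient across the cleared edge approximates cartilage thickness.
    gy, gx = np.gradient(depth_map)
    grad_mag = np.hypot(gx, gy)
    edge_step = grad_mag[edges > 0].mean() if (edges > 0).any() else 0.0

    thickness = np.zeros_like(depth_map)
    region = np.zeros(depth_map.shape, np.uint8)
    region[seed] = 1                            # seed pixel inside the milled spot
    kernel = np.ones((2 * grow_px + 1, 2 * grow_px + 1), np.uint8)
    for _ in range(iters):                      # iterative coverage of the cartilage
        grown = cv2.dilate(region, kernel)
        ring = (grown == 1) & (region == 0)
        if not ring.any():
            break
        thickness[ring] = edge_step             # constant fit; the patent fits a
        region = grown                          # depth change per expanded ring
    return thickness
```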
In summary, depending on the source of the preset second virtual bone model, this embodiment provides two different processing paths. Without mounting a target on the bone, the preoperative image model is registered to the actual intraoperative bone model, causing no secondary injury to the patient, reducing contact between instruments and the body, lowering the risk of contamination and facilitating instrument cleaning. In addition, the bone modeling registration system is simple to set up, simplifying the installation of the registration tools during surgery, streamlining the registration workflow, reducing the complexity of the operation and shortening the operating time. Furthermore, scanning with the stereo vision scanning device is more accurate than traditional point picking, better reflects the bone surface structure, and improves registration accuracy.
[Embodiment 2]
Please refer to FIGS. 7 and 8. FIG. 7 is a schematic view of the cartilage-cleared regions of Embodiment 2; FIG. 8 is a schematic view of scanning after cartilage clearance in Embodiment 2.
The readable storage medium, bone modeling registration system and orthopedic surgery system provided by Embodiment 2 are basically the same as those provided by Embodiment 1; the parts they share are not described again, and only the differences are described below.
In Embodiment 2, for a second virtual bone model built from CT scan images, the cartilage compensation data are acquired differently than in Embodiment 1. Specifically, Embodiment 2 likewise clears the cartilage by grinding with a milling cutter; the locations of the cartilage-cleared regions are shown in FIG. 7. The cleared regions are roughly a plurality of point-like regions whose specific locations are decided by the surgeon during preoperative planning.
FIG. 8 illustrates scanning after the milling cutter has cleared the cartilage. The cartilage at the different locations may be ground away in any order; after the milling cutter grinds away the soft tissue of the several cartilage-cleared regions, the stereo vision scanning device 100 can scan the bone surface directly, and the multi-point cartilage compensation algorithm is run to obtain the cartilage compensation data.
Further, the multi-point cartilage compensation algorithm includes the following steps: first, use the Canny operator to obtain the edges of the several cartilage-cleared regions of the predetermined object; compute the gradient changes along the edges; fit a clearance depth by expanding outward from each cartilage-cleared region according to the gradient changes; and expand the cleared extent progressively until the whole cartilage region is covered, yielding the cartilage compensation data.
Furthermore, when two expanded regions intersect, the clearance depth at the boundary is fitted from the gradient change of the center point of the cartilage-cleared region closest to the boundary.
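A sketch of the multi-point expansion and the boundary rule above, assuming SciPy (the grid size, centers and fitted depths are hypothetical): assigning each pixel the depth of its nearest cleared region reproduces the rule that, where two expanding regions meet, the region whose center is closest governs the boundary.

```python
import numpy as np
from scipy import ndimage

def multi_point_compensation(shape, centers, depths):
    """Each pixel takes the clearance depth fitted for the nearest cleared
    region, so at the boundary between two expanding regions the closer
    region's center wins.

    shape:   (H, W) of the scanned surface grid
    centers: list of (row, col) centers of the milled regions
    depths:  per-region clearance depth fitted from that region's edge gradients
    """
    seeds = np.ones(shape, dtype=bool)
    for r, c in centers:
        seeds[r, c] = False                     # zeros mark the region centers
    # For every pixel, indices of the nearest center (Euclidean distance).
    _, (nr, nc) = ndimage.distance_transform_edt(seeds, return_indices=True)
    depth_of = {center: d for center, d in zip(centers, depths)}
    out = np.empty(shape)
    for r, c in centers:
        out[(nr == r) & (nc == c)] = depth_of[(r, c)]
    return out

# Hypothetical example: two milled spots with fitted depths of 1.8 mm and 2.2 mm.
thick = multi_point_compensation((200, 200), [(50, 60), (140, 150)], [0.0018, 0.0022])
```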
Because this embodiment uses the multi-point cartilage compensation algorithm, its fitting accuracy is higher than that of the single-point algorithm of Embodiment 1, at the cost of milling several cartilage-cleared regions on the bone.
[Embodiment 3]
Please refer to FIGS. 9 and 10. FIG. 9 is a schematic view of the cartilage-cleared regions of Embodiment 3; FIG. 10 is a schematic view of scanning after cartilage clearance in Embodiment 3.
The readable storage medium, bone modeling registration system and orthopedic surgery system provided by Embodiment 3 are basically the same as those provided by Embodiment 1; the parts they share are not described again, and only the differences are described below.
In Embodiment 3, for a second virtual bone model built from CT scan images, the cartilage compensation data are acquired differently than in Embodiment 1. Specifically, Embodiment 3 clears the cartilage by scoring lines with a scribing knife; the locations of the cartilage-cleared regions are shown in FIG. 9. The cleared regions are roughly strip-shaped, and their specific locations are decided by the surgeon during preoperative planning.
FIG. 10 illustrates scanning after the scribing knife has cleared the cartilage: after the knife scores away the soft tissue of the cartilage-cleared regions, the stereo vision scanning device 100 scans the bone surface, and the scored-region cartilage compensation algorithm is run to obtain the cartilage compensation data.
The scored-region cartilage compensation algorithm is basically the same as the single-point compensation algorithm of Embodiment 1: the clearance depth of the surrounding area is fitted from the gradient change along the edges of the scored regions until the whole cartilage region is covered, yielding the cartilage compensation data. For details, refer to Embodiment 1; they are not repeated here.
[Embodiment 4]
Please refer to FIG. 11, which is a schematic view of the bone modeling registration system of Embodiment 4.
The readable storage medium, bone modeling registration system and orthopedic surgery system provided by Embodiment 4 are basically the same as those provided by Embodiment 1; the parts they share are not described again, and only the differences are described below.
As shown in FIG. 11, compared with the bone modeling registration system of Embodiment 1, the system of Embodiment 4 dispenses with the navigation device 200 and the target 300; instead, the pose of the stereo vision scanning device 100 is fixed, and its pose information is obtained by calibration. In an exemplary embodiment, the stereo vision scanning device 100 is held by a robotic arm; after the pose information of the stereo vision scanning device 100 has been calibrated, the robotic arm is fixed and the pose of the stereo vision scanning device 100 is kept unchanged.
Since the pose of the stereo vision scanning device 100 is fixed, the coordinate transformation, relative to the world coordinate system, of the first virtual bone model built from the bone surface image information it acquires is known; therefore, once the first virtual bone model has been registered with the second virtual bone model, the registration matrix between the two models is directly available.
Accordingly, when the program in the readable storage medium is executed it further implements: acquiring real-time bone surface image information of the predetermined object, the real-time bone surface image information coming from the stereo vision scanning device with a fixed pose; performing three-dimensional reconstruction according to the real-time bone surface image information to obtain a real-time first virtual bone model; registering the first virtual bone model with the preset second virtual bone model of the predetermined object in real time; and obtaining real-time coordinates of the predetermined object in the navigation image coordinate system.
Further, after the registration matrix is obtained, the pose of the stereo vision scanning device 100 is kept unchanged and scanning of the bone surface continues, so that the registration algorithm runs continuously and yields a real-time registration matrix, achieving real-time tracking. With this configuration, even if the position of the bone changes during the operation, registration can be performed in real time.
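A sketch of this fixed-pose real-time loop follows; grab_scan, reconstruct, register and publish are hypothetical callables standing in for the scanner driver, the three-dimensional reconstruction, the coarse+RANSAC+ICP registration of Embodiment 1, and the consumer of the resulting pose (e.g., the robotic-arm controller):

```python
import numpy as np

def realtime_registration_loop(grab_scan, reconstruct, register, publish, stop):
    """Continuously re-register the live scan against the preset bone model."""
    T_prev = np.eye(4)                 # warm-start each registration from the last result
    while not stop():
        frames = grab_scan()           # bone surface images from the fixed scanner
        live_model = reconstruct(frames)
        T_prev = register(live_model, init=T_prev)
        publish(T_prev)                # real-time coordinates in the image frame
```

Warm-starting each registration from the previous matrix is a common design choice for such loops, since the bone moves little between frames and ICP then converges in a few iterations.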
[Embodiment 5]
The readable storage medium, bone modeling registration system and orthopedic surgery system provided by Embodiment 5 are basically the same as those provided by Embodiment 1; the parts they share are not described again, and only the differences are described below.
In Embodiment 5, for a second virtual bone model built from CT scan images, the cartilage compensation data are acquired differently than in Embodiment 1. Specifically, Embodiment 5 does not perform cartilage compensation by mechanical means such as a milling cutter or a scribing knife; instead, a neural network trained by a machine learning algorithm automatically removes the cartilage data. Specifically, the cartilage compensation algorithm includes: computing on the first virtual bone model with a neural network trained on a training set to obtain the cartilage compensation data. It should be noted that the method of training a neural network on a training set is well known to those skilled in the art and is not elaborated here.
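The patent does not disclose a network architecture or training procedure, so the following is purely a hypothetical sketch: a small per-point regressor (PyTorch) that maps each scanned surface point and its normal to an estimated cartilage thickness, which would then be subtracted along the normal to correct the first virtual bone model before registration; the paired MRI (with cartilage) and CT (bone-only) supervision suggested in the comments is likewise an assumption.

```python
import torch
import torch.nn as nn

class CartilageThicknessNet(nn.Module):
    """Hypothetical per-point regressor: maps a surface point (x, y, z) plus
    its normal to an estimated cartilage thickness (the compensation data)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, pts_with_normals):       # (N, 6) -> (N, 1)
        return self.mlp(pts_with_normals)

# Training sketch: supervision could come from paired MRI/CT models of the
# same joints; the stand-in tensors below are random placeholders.
net = CartilageThicknessNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
features = torch.randn(1024, 6)                # scanned points + normals
target_thickness = torch.rand(1024, 1) * 3e-3  # ground-truth thickness (meters)
loss = nn.functional.mse_loss(net(features), target_thickness)
opt.zero_grad(); loss.backward(); opt.step()
```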
It should be noted that the readable storage medium, bone modeling registration system and orthopedic surgery system of the above embodiments are described in a progressive manner; the differing parts of the embodiments may be combined with one another, and the present invention is not limited in this respect.
The above description is only a description of the preferred embodiments of the present invention and does not limit the scope of the present invention in any way. Any changes or modifications made by those of ordinary skill in the art based on the above disclosure fall within the protection scope of the claims.

Claims (16)

  1. A readable storage medium, characterized in that a program is stored on the readable storage medium, and the program, when executed, implements:
    acquiring bone surface image information of a predetermined object, the bone surface image information coming from a stereo vision scanning device;
    performing three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model;
    registering the first virtual bone model with a preset second virtual bone model of the predetermined object;
    obtaining coordinates of the predetermined object in a navigation image coordinate system based on pose information of the stereo vision scanning device.
  2. The readable storage medium according to claim 1, characterized in that the bone surface image information includes at least two images captured from different viewing angles.
  3. The readable storage medium according to claim 1, characterized in that the step of performing three-dimensional reconstruction according to the bone surface image information to obtain the first virtual bone model includes:
    segmenting the bone surface image information to obtain bone contour point cloud data;
    performing three-dimensional reconstruction based on the bone contour point cloud data to obtain the first virtual bone model.
  4. The readable storage medium according to claim 1, characterized in that the step of obtaining the coordinates of the predetermined object in the navigation image coordinate system based on the pose information of the stereo vision scanning device includes:
    obtaining a first coordinate transformation between a stereo vision scanning device coordinate system and a target coordinate system based on a predetermined connection between the stereo vision scanning device and a target;
    obtaining a second coordinate transformation between the target coordinate system and a navigation device coordinate system based on a navigation device's localization of the target;
    obtaining a third coordinate transformation between the stereo vision scanning device coordinate system and the navigation image coordinate system based on the registration of the first virtual bone model with the second virtual bone model;
    performing coordinate transformation on the first virtual bone model based on the first coordinate transformation, the second coordinate transformation and the third coordinate transformation to obtain the coordinates of the predetermined object in the navigation image coordinate system.
  5. The readable storage medium according to claim 1, characterized in that the preset second virtual bone model is built from an MRI scan of the bone surface of the predetermined object.
  6. The readable storage medium according to claim 1, characterized in that the preset second virtual bone model is built from a CT scan of the bone surface of the predetermined object, and the step of registering the first virtual bone model with the preset second virtual bone model of the predetermined object includes:
    obtaining cartilage compensation data according to a cartilage compensation algorithm;
    correcting the first virtual bone model based on the cartilage compensation data, and registering the corrected first virtual bone model with the second virtual bone model.
  7. The readable storage medium according to claim 6, characterized in that the cartilage compensation algorithm includes: using the Canny operator to obtain the edge of a cartilage-cleared region of the predetermined object, computing the gradient change along the edge, fitting the depth change of the uncleared region adjacent to the cartilage-cleared region according to the gradient change, and iterating until the whole cartilage region is covered, thereby obtaining the cartilage compensation data.
  8. The readable storage medium according to claim 6, characterized in that the cartilage compensation algorithm includes: using the Canny operator to obtain the edges of a plurality of cartilage-cleared regions of the predetermined object, computing the gradient changes along the edges, fitting a clearance depth by expanding outward from each cartilage-cleared region according to the gradient changes, and progressively expanding the cleared extent until the whole cartilage region is covered, thereby obtaining the cartilage compensation data.
  9. The readable storage medium according to claim 8, characterized in that, when two expanded regions intersect, the clearance depth at the boundary is fitted from the gradient change of the center point of the cartilage-cleared region closest to the boundary.
  10. The readable storage medium according to claim 6, characterized in that the cartilage compensation algorithm includes: computing on the first virtual bone model with a neural network trained on a training set to obtain the cartilage compensation data.
  11. The readable storage medium according to claim 1, characterized in that the step of registering the first virtual bone model with the preset second virtual bone model includes:
    coarsely registering the first virtual bone model with the second virtual bone model;
    removing probable outlier points with the RANSAC consensus algorithm;
    iterating with the ICP algorithm until a preset convergence condition is met, fitting the registration matrix.
  12. The readable storage medium according to claim 1, characterized in that the program, when executed, further implements:
    acquiring real-time bone surface image information of the predetermined object, the real-time bone surface image information coming from a stereo vision scanning device with a fixed pose;
    performing three-dimensional reconstruction according to the real-time bone surface image information to obtain a real-time first virtual bone model;
    registering the first virtual bone model with the preset second virtual bone model of the predetermined object in real time, thereby obtaining real-time coordinates of the predetermined object in the navigation image coordinate system.
  13. A bone modeling registration system, characterized by including a processor and a stereo vision scanning device; the stereo vision scanning device is configured to acquire bone surface image information of a predetermined object and feed the bone surface image information back to the processor;
    the processor is communicatively connected to the stereo vision scanning device; the processor is configured to acquire pose information of the stereo vision scanning device and the bone surface image information of the predetermined object obtained by the stereo vision scanning device, perform three-dimensional reconstruction according to the bone surface image information to obtain a first virtual bone model, register the first virtual bone model with a preset second virtual bone model, and obtain, based on the pose information of the stereo vision scanning device, coordinates of the predetermined object in a navigation image coordinate system.
  14. The bone modeling registration system according to claim 13, characterized in that the pose of the stereo vision scanning device is fixed, and the pose information of the stereo vision scanning device is obtained by calibration.
  15. The bone modeling registration system according to claim 13, characterized in that the bone modeling registration system further includes a navigation device and a target; the target is connected to the stereo vision scanning device; the navigation device is adapted to the target so as to acquire real-time coordinate information of the target and transmit the real-time coordinate information to the processor; the processor is communicatively connected to the navigation device and obtains the pose information of the stereo vision scanning device based on the real-time coordinate information of the target fed back by the navigation device.
  16. An orthopedic surgery system, characterized by including the bone modeling registration system according to any one of claims 13 to 15, and further including at least one of a robotic arm, a surgical trolley and a navigation trolley.
PCT/CN2021/108603 2020-09-29 2021-07-27 Readable storage medium, bone modeling registration system and orthopedic surgery system WO2022068341A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP21874003.3A EP4223246A4 (en) 2020-09-29 2021-07-27 READABLE STORAGE MEDIUM, BONE MODELING REGISTRATION SYSTEM AND ORTHOPEDIC SURGICAL SYSTEM
US18/445,062 US20230410453A1 (en) 2020-09-29 2021-07-27 Readable storage medium, bone modeling registration system and orthopedic surgical system
BR112023005691A BR112023005691A2 (pt) 2020-09-29 2021-07-27 Meio de armazenamento legível, sistema de registro de modelagem óssea e sistema cirúrgico ortopédico
AU2021354680A AU2021354680A1 (en) 2020-09-29 2021-07-27 Readable storage medium, bone modeling registration system and orthopedic surgical system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011052618.4 2020-09-29
CN202011052618.4A CN112155732B (zh) 2020-09-29 2020-09-29 Readable storage medium, bone modeling registration system and orthopedic surgery system

Publications (1)

Publication Number Publication Date
WO2022068341A1 true WO2022068341A1 (zh) 2022-04-07

Family

ID=73860765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/108603 WO2022068341A1 (zh) 2020-09-29 2021-07-27 Readable storage medium, bone modeling registration system and orthopedic surgery system

Country Status (6)

Country Link
US (1) US20230410453A1 (zh)
EP (1) EP4223246A4 (zh)
CN (1) CN112155732B (zh)
AU (1) AU2021354680A1 (zh)
BR (1) BR112023005691A2 (zh)
WO (1) WO2022068341A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115624385A (zh) * 2022-09-19 2023-01-20 重庆生物智能制造研究院 Preoperative spatial registration method and apparatus, computer device, and storage medium
CN117420917A (zh) * 2023-12-19 2024-01-19 烟台大学 Virtual reality control method, system, device and medium based on hand skeleton

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112155732B (zh) * 2020-09-29 2022-05-17 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling registration system and orthopedic surgery system
CN112826590A (zh) * 2021-02-02 2021-05-25 复旦大学 Knee replacement spatial registration system based on multimodal fusion and point cloud registration
CN113539444B (zh) * 2021-08-30 2024-04-19 上海联影医疗科技股份有限公司 Medical image reconstruction method and apparatus, electronic device, and storage medium
CN113524201B (zh) * 2021-09-07 2022-04-08 杭州柳叶刀机器人有限公司 Method and apparatus for actively adjusting the pose of a robotic arm, robotic arm, and readable storage medium
CN114224508A (zh) * 2021-11-12 2022-03-25 苏州微创畅行机器人有限公司 Medical image processing method, system, computer device, and storage medium
TWI790181B (zh) * 2022-08-03 2023-01-11 國立陽明交通大學 Surgical robot system
CN115356840A (zh) * 2022-09-01 2022-11-18 广东粤港澳大湾区黄埔材料研究院 Focus-locking method and apparatus for a microscope
CN115830247B (zh) * 2023-02-14 2023-07-14 北京壹点灵动科技有限公司 Method and apparatus for fitting the hip joint center of rotation, processor, and electronic device
CN116563379B (zh) * 2023-07-06 2023-09-29 湖南卓世创思科技有限公司 Marker localization method, apparatus and system based on model fusion

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130102893A1 (en) * 2010-06-30 2013-04-25 Fritz Vollmer Medical image registration using a rigid inner body surface
CN106344152A (zh) * 2015-07-13 2017-01-25 中国科学院深圳先进技术研究院 Abdominal surgery navigation registration method and system
CN109692041A (zh) * 2019-01-30 2019-04-30 杭州键嘉机器人有限公司 Bone surface registration method for bone covered with cartilage
CN109785374A (zh) * 2019-01-23 2019-05-21 北京航空航天大学 Automatic real-time markerless image registration method for dental augmented reality surgical navigation
CN112155732A (zh) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling registration system and orthopedic surgery system
CN112155733A (zh) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling registration system and orthopedic surgery system
CN112155734A (zh) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling registration system and orthopedic surgery system

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10568535B2 (en) * 2008-05-22 2020-02-25 The Trustees Of Dartmouth College Surgical navigation with stereovision and associated methods
WO2011134083A1 (en) * 2010-04-28 2011-11-03 Ryerson University System and methods for intraoperative guidance feedback
CN102155940B (zh) * 2011-03-17 2012-10-17 北京信息科技大学 Stereo target for a binocular vision positioning and tracking system
US9192445B2 (en) * 2012-12-13 2015-11-24 Mako Surgical Corp. Registration and navigation using a three-dimensional tracking sensor
CN105139442A (zh) * 2015-07-23 2015-12-09 昆明医科大学第一附属医院 Method for building a three-dimensional simulation model of the human knee joint by combining two-dimensional CT and MRI images
CN106560163B (zh) * 2015-09-30 2019-11-29 合肥美亚光电技术股份有限公司 Surgical navigation system and registration method of the surgical navigation system
WO2017185170A1 (en) * 2016-04-28 2017-11-02 Intellijoint Surgical Inc. Systems, methods and devices to scan 3d surfaces for intra-operative localization
CN105852970B (zh) * 2016-04-29 2019-06-14 北京柏惠维康科技有限公司 Neurosurgical robot navigation and positioning system and method
EP4331513A2 (en) * 2016-05-27 2024-03-06 Mako Surgical Corp. Preoperative planning and associated intraoperative registration for a surgical system
US11129681B2 (en) * 2017-02-22 2021-09-28 Orthosoft Ulc Bone and tool tracking in robotized computer-assisted surgery
CN107510504A (zh) * 2017-06-23 2017-12-26 中南大学湘雅三医院 Radiation-free fluoroscopic visual navigation method and system for assisting orthopedic surgery
CN107970060A (zh) * 2018-01-11 2018-05-01 上海联影医疗科技有限公司 Surgical robot system and control method thereof
CN111105385A (zh) * 2018-10-09 2020-05-05 武汉大学中南医院 Method for processing human joint data provided by tomographic imaging
CN109512514A (zh) * 2018-12-07 2019-03-26 陈玩君 Mixed-reality minimally invasive orthopedic surgery navigation system and method of use
CN109925055B (zh) * 2019-03-04 2021-04-30 北京和华瑞博医疗科技有限公司 Fully digital total knee replacement surgical robot system and simulated surgery method thereof
CN110169820A (zh) * 2019-04-24 2019-08-27 艾瑞迈迪科技石家庄有限公司 Pose calibration method and apparatus for joint replacement surgery
CN110443839A (zh) * 2019-07-22 2019-11-12 艾瑞迈迪科技石家庄有限公司 Skeleton model spatial registration method and apparatus
CN111297475B (zh) * 2020-02-19 2021-06-18 苏州微创畅行机器人有限公司 Bone registration method, bone registration system, bone registration control apparatus, and trackable element
CN111493878A (zh) * 2020-03-17 2020-08-07 北京天智航医疗科技股份有限公司 Optical three-dimensional scanning device for orthopedic surgery and method for measuring a bone surface

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130102893A1 (en) * 2010-06-30 2013-04-25 Fritz Vollmer Medical image registration using a rigid inner body surface
CN106344152A (zh) * 2015-07-13 2017-01-25 中国科学院深圳先进技术研究院 腹部外科手术导航配准方法及***
CN109785374A (zh) * 2019-01-23 2019-05-21 北京航空航天大学 一种牙科增强现实手术导航的自动实时无标记图像配准方法
CN109692041A (zh) * 2019-01-30 2019-04-30 杭州键嘉机器人有限公司 一种针对覆盖软骨的骨表面配准方法
CN112155732A (zh) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 可读存储介质、骨建模配准***及骨科手术***
CN112155733A (zh) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 可读存储介质、骨建模配准***及骨科手术***
CN112155734A (zh) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 可读存储介质、骨建模配准***及骨科手术***

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4223246A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115624385A (zh) * 2022-09-19 2023-01-20 重庆生物智能制造研究院 Preoperative spatial registration method and apparatus, computer device, and storage medium
CN115624385B (zh) * 2022-09-19 2024-05-10 重庆生物智能制造研究院 Preoperative spatial registration method and apparatus, computer device, and storage medium
CN117420917A (zh) * 2023-12-19 2024-01-19 烟台大学 Virtual reality control method, system, device and medium based on hand skeleton
CN117420917B (zh) * 2023-12-19 2024-03-08 烟台大学 Virtual reality control method, system, device and medium based on hand skeleton

Also Published As

Publication number Publication date
EP4223246A1 (en) 2023-08-09
CN112155732B (zh) 2022-05-17
CN112155732A (zh) 2021-01-01
AU2021354680A1 (en) 2023-05-11
BR112023005691A2 (pt) 2023-04-25
EP4223246A4 (en) 2024-04-03
US20230410453A1 (en) 2023-12-21

Similar Documents

Publication Publication Date Title
WO2022068341A1 (zh) Readable storage medium, bone modeling registration system and orthopedic surgery system
CN111467036B (zh) Surgical navigation system, surgical robot system for acetabular osteotomy, and control method thereof
US20230355312A1 (en) Method and system for computer guided surgery
WO2021063023A1 (zh) Position correction method for an osteotomy guide tool, and orthopedic surgery system
EP3397188B1 (en) System and methods for preparing surgery on a patient at a target site defined by a virtual object
EP1855607B1 (en) Image-guided robotic system for keyhole neurosurgery
WO2022068340A1 (zh) Readable storage medium, bone modeling registration system and orthopedic surgery system
US8010177B2 (en) Intraoperative image registration
US8150494B2 (en) Apparatus for registering a physical space to image space
AU2018316801B2 (en) Ultrasound bone registration with learning-based segmentation and sound speed calibration
CN112220557B (zh) Surgical navigation and robotic-arm apparatus for craniocerebral puncture, and positioning method
US9052384B2 (en) System and method for calibration for image-guided surgery
WO2022083453A1 (zh) Verification method and verification system for surgical operation tools
CN112006776A (zh) Surgical navigation system and registration method of the surgical navigation system
WO2023279825A1 (zh) Navigation and reduction operation control system and method
JP2000163558A (ja) 位置合わせ装置
CN114224508A (zh) 医学图像处理方法、***、计算机设备和存储介质
JP2023502727A (ja) 骨切り術校正方法、校正装置、読み取り可能な記憶媒体、および整形外科手術システム
US20240024035A1 (en) Preoperative imaging combined with intraoperative navigation before and after alteration of a surgical site to create a composite surgical three dimensional structural dataset
CN214857401U (zh) Integrated system structure apparatus
Chen et al. Surgical Navigation System Design for Surgery Aided Diagnosis Platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21874003

Country of ref document: EP

Kind code of ref document: A1

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112023005691

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 112023005691

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20230328

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021874003

Country of ref document: EP

Effective date: 20230502

ENP Entry into the national phase

Ref document number: 2021354680

Country of ref document: AU

Date of ref document: 20210727

Kind code of ref document: A