CN111035458A - Intelligent auxiliary system for operation comprehensive vision and image processing method - Google Patents

Intelligent auxiliary system for operation comprehensive vision and image processing method

Info

Publication number
CN111035458A
Authority
CN
China
Prior art keywords
image
fusion
images
image data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911408910.2A
Other languages
Chinese (zh)
Inventor
贾欢
吴皓
汪照炎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine
Original Assignee
Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine
Priority to CN201911408910.2A
Publication of CN111035458A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37 Surgical systems with images on a monitor during operation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2055 Optical tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37 Surgical systems with images on a monitor during operation
    • A61B2090/373 Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Robotics (AREA)

Abstract

The invention discloses an intelligent auxiliary system for operation comprehensive vision and an image processing method. The system comprises: glasses having a frame and two temples; a VR/AR display screen arranged in front of the frame; two infrared light sources arranged on the frame; two optical tracking video cameras, respectively arranged on the two temples, for receiving signals reflected by the operative-field marker points so as to track the markers and obtain relative position information, and for capturing images within the user's line of sight to obtain camera images; a data receiving and processing device comprising a data receiving port for receiving external information, a voice instruction processing component and an image processing component; a miniature microphone; and a power supply element. The system realizes virtual-real fusion imaging of the navigation image, so that the user no longer needs to construct the overlay mentally; display contents can be switched and projected through voice instructions, assisting the operation and improving surgical efficiency.

Description

Intelligent auxiliary system for operation comprehensive vision and image processing method
Technical Field
The invention relates to a medical instrument, and in particular to an intelligent auxiliary system for operation comprehensive vision and an image processing method.
Background
An operating room contains numerous instruments and devices, and during a procedure the surgeon needs different kinds of patient information. The surgeon therefore often has to shift gaze back and forth between the displays of different devices, or ask other operating-room personnel to switch the displayed content, and fully acquiring the target information in this way is time-consuming. Taking navigation as an example, navigation assistance is needed when the anatomy of part of the operative field is complex or anatomical variation is present.
With a traditional navigation system, the surgeon must read the relative position between the preoperatively reconstructed image and the instrument in use from a separate display screen, then move the gaze back to the operative field and construct the overlay mentally in order to judge the positions of the lesion and the anatomical structures. Because the reconstructed image is shown on an independent display and cannot be superimposed on the view of the operative field, the surgeon's gaze must switch back and forth between the display and the operative field, which is very inconvenient and makes the optimal assistive effect difficult to achieve.
In practice such scenarios consume operative time, prolong the overall duration of surgery, and may adversely affect the patient.
Disclosure of Invention
The invention aims to overcome the poor assistive effect of traditional navigation systems and provides an intelligent auxiliary system for operation comprehensive vision.
In order to achieve the above object, the present invention provides an intelligent auxiliary system for operation comprehensive vision, comprising:
glasses having a frame and two temples;
a VR/AR display screen arranged in front of the frame;
two infrared light sources arranged on the frame for assisting the optical tracking video cameras in ranging;
two optical tracking video cameras, respectively arranged on the two temples, for receiving signals reflected by the operative-field marker points so as to track the markers and acquire relative position information, and for capturing images within the user's line of sight for subsequent image fusion;
a data receiving and processing device for receiving external information, processing voice instructions and processing images; it is arranged behind the cameras on the two temples and, acting as the CPU, performs the various computations and instruction processing.
The external information includes, but is not limited to, original image data reconstruction results, endoscope imaging content, microscope imaging content, patient medical records, examination results and basic vital sign readings, all of which can be called up and presented by voice instruction.
A voice instruction is a spoken command of the chief surgeon collected by the microphone. Voice instruction processing comprises recognition of and feedback on the instruction. The recognizable instructions are preset, so the chief surgeon can adjust the display content simply by speaking; the data receiving and processing device recognizes the instruction, gives feedback, and routes data and information from the various sources to the display screen accordingly.
The image processing comprises image registration, fusing the camera images with the preoperative image data and outputting the result to the display screen according to the voice instruction.
a miniature microphone arranged at the end of a temple for collecting voice instructions; and
a power supply element for supplying power.
Optionally, the data receiving and processing device obtains data signals through wireless or wired transmission.
Optionally, the system further comprises a position sensor for assisting in monitoring orientation changes of the glasses so as to adjust the angle of the displayed content; because the fusion must account for varying distances and angles, only point-to-point fusion can further guarantee the accuracy of the fused image.
Optionally, the position sensor comprises a gyroscope sensor.
Optionally, the position sensor is provided on a temple, e.g. integrated into the data receiving and processing device or the power supply element.
Optionally, the preoperative image data is obtained by registering the original image data with the patient pose.
The invention also provides a method for processing images by the image processing component of the intelligent auxiliary system for operation comprehensive vision, comprising the following steps:
three-dimensionally reconstructing two-dimensional image data to obtain original image data;
registering the original image data with the patient pose to obtain registered preoperative image data;
acquiring camera image data through the optical tracking video cameras;
emitting infrared light from the infrared light sources and detecting, with the optical tracking video cameras, the signals reflected by the operative-field marker points to obtain the relative position between the glasses and the markers;
registering the camera image data with the preoperative image data according to the relative position; and
performing virtual-real fusion on the registered images to obtain a virtual-real fused image, and outputting the fused image to the display screen.
The intelligent auxiliary system for operation comprehensive vision provided by the invention uses the image processing component to register the images twice, realizing virtual-real fusion imaging of the navigation image with high definition; the user no longer needs to construct the overlay mentally and can judge the positions of the lesion and the anatomical structures more accurately. The voice instruction processing component recognizes and responds to the user's spoken commands, so display contents can be switched and projected conveniently, assisting the operation and improving surgical efficiency.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent auxiliary system for comprehensive vision in surgery according to the present invention.
Fig. 2 is a schematic structural diagram of an image processing assembly according to the present invention.
Fig. 3 is a flow chart of the image processing performed by the image processing component of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
The "operating area mark point" refers to a mark point attached to the skin of the operating area of a patient. The optical surgical navigation system needs to register mark points by using the mark points (capable of reflecting infrared light) attached to the skin of a patient before operation, namely, the mapping relation between an operation space and a medical image space is determined.
The "original image data" described herein refers to three-dimensional reconstructed image data obtained by performing a three-dimensional reconstruction of a target anatomical region according to a result obtained by performing a related image examination on a preoperative surgical region by using an image examination device after a patient is admitted, such as CT and MRI. And a three-dimensional model of the target area is established by utilizing the two-dimensional image data to obtain a reconstructed image, so that the imaging is more visual.
The "preoperative image data" as described herein is obtained by registering raw image data with the patient pose. The original image shot by the patient and the specific body position used in the operation are not completely consistent, and the fusion imaging needs to adjust the image fusion angle, the image size and the like according to the body position of the patient so as to ensure that the imaging content accurately corresponds to the actual position of the patient one by one. Therefore, the above-described registration is required. The purpose of registration is to have a one-to-one correspondence between different anatomical locations in the pre-operative image data and the location of the patient in the operating room. The registration needs to use a camera on the glasses and a mark point arranged near the operation area, and data processing is carried out in a CPU based on the relative spatial position relationship of the camera and the mark point to obtain preoperative image data.
Image fusion merges the preoperative image data with the real surgical scene captured by the cameras; it helps localize the lesion and important anatomical structures and thereby assists the operation. With a traditional navigation system, the surgeon reads the relative position between the preoperatively reconstructed image and the instrument in use from another display, moves the gaze back to the operative field, and constructs the overlay mentally to judge the positions of the lesion and the anatomy; this assists some complex procedures to a degree, but does not achieve the optimal presentation.
Fig. 1 shows a schematic structural diagram of the intelligent auxiliary system for operation comprehensive vision, which comprises: glasses 10, a VR/AR display screen 20, two infrared light sources 30, two optical tracking video cameras 40, a data receiving and processing device 50, a miniature microphone 60 and a power supply element 70.
The glasses 10 have a frame 11 and two temples 12.
The VR/AR display screen 20 is arranged in front of the frame; the screen is transparent, similar to a spectacle lens.
The infrared light sources 30 are arranged on the frame and emit infrared light to assist the optical tracking video cameras in ranging, i.e. measuring the distance, angle and other information between the glasses, as worn, and the locators placed in the operative field (the operative-field marker points). This information is transmitted to the processor, which, according to it, superimposes and fuses the preoperatively reconstructed image with what is actually seen during surgery (the camera content) into one image: the two scenes are blended with different transparencies, forming an augmented-reality environment.
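The "different transparencies" blend described above can be illustrated with a short sketch; OpenCV's addWeighted and the 0.4 opacity are assumptions for illustration, not the patent's prescribed implementation.

import cv2
import numpy as np

def fuse_views(camera_frame: np.ndarray, rendered_preop: np.ndarray,
               opacity: float = 0.4) -> np.ndarray:
    """Blend the virtual render over the real frame (both HxWx3 BGR)."""
    # Match the render to the live frame's resolution before blending.
    rendered_preop = cv2.resize(
        rendered_preop, (camera_frame.shape[1], camera_frame.shape[0]))
    return cv2.addWeighted(camera_frame, 1.0 - opacity,
                           rendered_preop, opacity, 0.0)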
The optical tracking video cameras 40 are respectively arranged on the two temples; they receive the signals reflected by the operative-field marker points and also provide conventional camera functions. They track the markers, detect the reflected signals, and derive the distance, angle and so on to obtain the relative position information, i.e. the relative position of the glasses with respect to the markers. They also capture images within the user's line of sight for subsequent image fusion.
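One way to turn the detected marker reflections into the distance and angle mentioned above is a perspective-n-point solve against the known marker geometry. The sketch below assumes at least four markers, a calibrated camera, and OpenCV's solvePnP; none of these specifics come from the patent.

import cv2
import numpy as np

def relative_pose(marker_px: np.ndarray, marker_model: np.ndarray,
                  K: np.ndarray):
    """marker_px: N x 2 detected marker centers in the image;
    marker_model: N x 3 known marker geometry (N >= 4, e.g. in mm);
    K: 3 x 3 camera intrinsic matrix.
    Returns the markers' rotation vector and translation relative to
    the camera, from which distance and angle are read off directly."""
    ok, rvec, tvec = cv2.solvePnP(marker_model.astype(np.float64),
                                  marker_px.astype(np.float64),
                                  K.astype(np.float64), None)
    if not ok:
        raise RuntimeError("pose solve failed")
    distance = float(np.linalg.norm(tvec))  # glasses-to-marker distance
    return rvec, tvec, distance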
The data receiving and processing device 50 is arranged behind the cameras on the two temples and, acting as the CPU, performs the various computations and instruction processing. It includes a data receiving port for receiving external information, a voice instruction processing component for voice instruction recognition and feedback, and an image processing component for image registration and fusion.
The data receiving and processing device is connected with the VR/AR display screen 20, the infrared light sources 30, the optical tracking video cameras 40 and the miniature microphone 60 through data transmission wires.
The external information includes, but is not limited to, original image data reconstruction results, endoscope imaging content (from the endoscopic equipment used in the operation; its imaging does not require the glasses), microscope imaging content (likewise from the surgical microscope), patient medical records, examination results and basic vital sign readings, all of which can be called up and presented by voice instruction.
In the embodiment of the present application, as an example, external data may be transmitted to the data receiving and processing device 50 wirelessly, e.g. via Bluetooth or Wi-Fi.
A voice instruction is a spoken command of the chief surgeon collected by the microphone. Voice instruction processing comprises recognition of and feedback on the instruction. The recognizable instructions are preset, so the chief surgeon can adjust the display content simply by speaking; the data receiving and processing device recognizes the instruction, gives feedback, and routes data and information from the various sources to the display screen accordingly.
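A minimal sketch of such a preset command table and its dispatch logic follows; the command phrases and display actions are hypothetical examples, since the patent only requires that recognizable instructions are preset and mapped to display changes.

# Preset vocabulary mapping recognized phrases to display actions.
COMMANDS = {
    "show navigation fusion image": lambda d: d.show_layer("fusion"),
    "split screen":                 lambda d: d.set_split(True),
    "cancel split screen":          lambda d: d.set_split(False),
    "show endoscope":               lambda d: d.show_source("endoscope"),
}

def dispatch(transcript: str, display) -> bool:
    """Match a recognized utterance against the preset commands and,
    on a hit, adjust the display; returns True if handled."""
    action = COMMANDS.get(transcript.strip().lower())
    if action is None:
        return False  # unrecognized: leave the display unchanged
    action(display)
    return True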
As shown in fig. 2, the data receiving and processing device 50 includes an image processing component comprising a data importing unit 511 and an image fusion unit 512; the image processed by the image fusion unit 512 can be output to the VR/AR display screen 20.
The data importing unit imports data, including the original image data and the camera images.
The image fusion unit registers the camera images with the preoperative image data and performs image fusion.
As shown in fig. 3, the image processing performed by the image processing component includes the following steps:
A CT or MRI image is reconstructed with three-dimensional reconstruction software to obtain the original image data.
The original image data is registered with the patient pose to obtain registered preoperative image data; this registration guarantees the accuracy of the mapping between the medical image space and the surgical space.
Camera image data is acquired through the optical tracking video cameras 40.
The infrared light sources emit infrared light; the optical tracking video cameras detect the signals reflected by the operative-field marker points, and the distance, angle and so on are derived to obtain the relative position between the glasses 10 and the markers.
The camera image data and the preoperative image data are registered according to this relative position, ensuring that the final fused image is accurate: positional accuracy is achieved through this secondary registration, and the size, angle and so on of the fused content are adjusted according to the relative position.
Virtual-real fusion is performed on the registered images to obtain the virtual-real fused image, which is output to the display screen according to the voice instruction.
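To illustrate how the secondary registration can drive the size and angle of the fused content, the sketch below projects the registered model into the current camera view using the solved relative pose. cv2.projectPoints and the point-splat rendering are assumptions, and the mesh is assumed to be expressed in the marker (operating-room) frame whose pose rvec, tvec was solved.

import cv2
import numpy as np

def render_overlay(verts_room: np.ndarray, rvec, tvec, K,
                   frame_shape) -> np.ndarray:
    """Project mesh vertices (already in operating-room coordinates)
    into the camera image and rasterize them as an overlay image.
    frame_shape is (H, W, 3), matching the live camera frame."""
    pts, _ = cv2.projectPoints(verts_room.astype(np.float64),
                               rvec, tvec, K.astype(np.float64), None)
    overlay = np.zeros(frame_shape, dtype=np.uint8)
    for x, y in pts.reshape(-1, 2).astype(int):
        # Keep only the vertices that fall inside the frame.
        if 0 <= x < frame_shape[1] and 0 <= y < frame_shape[0]:
            cv2.circle(overlay, (x, y), 1, (0, 255, 0), -1)
    return overlay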
The miniature microphone 60 is arranged at the end of a temple; it collects voice instructions and outputs them to the data receiving and processing device 50.
The power supply element 70 supplies power to the VR/AR display screen 20, the infrared light sources 30, the optical tracking video cameras 40, the data receiving and processing device 50 and the miniature microphone 60.
In the embodiment of the present application, as an example, the data receiving and processing device 50 may obtain external data signals through wireless or wired transmission.
In the embodiment of the present application, as an example, the system further comprises at least one position sensor (not shown; it may be integrated with the data receiving and processing device 50 or the power supply element 70 as required) for monitoring position changes after registration, transmitting the change information to the CPU (the data receiving and processing device 50) and adjusting the imaging accordingly. Specifically, it assists in monitoring orientation changes of the glasses so as to adjust the displayed content and angle; because fusion must account for varying distances and angles, only point-to-point fusion can further guarantee the accuracy of the fused image.
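A minimal sketch of the orientation adjustment follows: the gyroscope's angular velocity is integrated into an incremental rotation that updates the current pose between optical registrations. The sensor interface, frame convention and small-angle update are all assumptions.

import cv2
import numpy as np

def update_pose_with_gyro(rvec: np.ndarray, omega: np.ndarray,
                          dt: float) -> np.ndarray:
    """omega: angular velocity (rad/s) measured by the gyroscope;
    dt: time step (s). Returns the rotation vector adjusted by the
    measured head motion, assuming omega in the reference frame."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64).reshape(3, 1))
    # Small-angle incremental rotation over this time step.
    dR, _ = cv2.Rodrigues(
        (np.asarray(omega, dtype=np.float64) * dt).reshape(3, 1))
    rvec_new, _ = cv2.Rodrigues(dR @ R)
    return rvec_new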
In the embodiment of the present application, as an example, the position sensor is a gyroscope sensor.
In the embodiment of the present application, as an example, the position sensor is provided on the temple.
The intelligent auxiliary system for operation comprehensive vision is used as follows:
1. wear the glasses and acquire the registration of the original image data with the patient pose (the conventional intraoperative navigation registration) as well as the glasses camera images;
2. monitor the markers near the operative field (e.g. optical calibration balls) with the glasses' optical tracking cameras to obtain the relative position of the glasses and the markers;
3. register the pose-registered image data with the camera images according to the relative position;
4. perform virtual-real fusion on the registered images and output the result to the glasses' display screen.
The specific registration is handled by the processor and may need to be repeated. During registration the user must hold the viewing angle for some time, and may need to move the gaze over several markers in the operative field and wait briefly at each.
The voice instructions are used as follows (the speech recognition module may reuse an existing voice assistant or the like):
1. the chief surgeon gives a voice instruction (instructions include on, off, show navigation fusion image, split screen, cancel split screen, show content of device X, and so on);
2. the voice instruction is collected;
3. the processing module performs voice instruction recognition;
4. the corresponding content is projected to the display screen according to the instruction.
If voice operation is unavailable or inconvenient, the displayed content can also be operated from external computer equipment.
In summary, the intelligent auxiliary system for operation comprehensive vision provided by the invention uses the image processing component to first register the original image data, guaranteeing the accuracy of the mapping between the medical image space and the surgical space, and then registers the camera image data with the preoperative image data according to the relative position, guaranteeing the accuracy of the final fused image. Positional accuracy is thus achieved through secondary registration, and virtual-real fusion of the registered images yields a fused image of high definition and good accuracy; the user no longer needs to construct the overlay mentally and is effectively assisted in accurately judging the positions of the lesion and the anatomical structures. Display contents can also be switched and projected conveniently through voice instructions, assisting the operation and improving surgical efficiency.
While the present invention has been described in detail with reference to preferred embodiments, the above description should not be taken as limiting. Various modifications and alternatives will become apparent to those skilled in the art upon reading the foregoing, and the scope of the invention should therefore be determined by the appended claims.

Claims (10)

1. An intelligent auxiliary system for operation comprehensive vision, characterized in that the system comprises:
glasses having a frame and two temples;
a VR/AR display screen arranged in front of the frame;
two infrared light sources arranged on the frame for assisting the optical tracking video cameras in ranging;
two optical tracking video cameras, respectively arranged on the two temples, for receiving signals reflected by the operative-field marker points so as to track the markers and obtain relative position information, and for capturing images within the user's line of sight to obtain camera images for subsequent image fusion;
a data receiving and processing device comprising a data receiving port for receiving external information, a voice instruction processing component and an image processing component;
a miniature microphone arranged at the end of a temple for collecting voice instructions; and
a power supply element for supplying power.
2. The intelligent auxiliary system for operation comprehensive vision according to claim 1, wherein the data receiving and processing device receives external information through wireless or wired transmission.
3. The intelligent auxiliary system for operation comprehensive vision according to claim 1, further comprising: at least one position sensor for assisting in monitoring orientation changes of the glasses.
4. The intelligent auxiliary system for operation comprehensive vision according to claim 3, wherein the position sensor comprises a gyroscope sensor.
5. The intelligent auxiliary system for operation comprehensive vision according to claim 3, wherein the position sensor is arranged on a temple.
6. The intelligent auxiliary system for operation comprehensive vision according to claim 1, wherein the external information includes, but is not limited to, three-dimensional reconstruction results of original image data, endoscope imaging content, microscope imaging content, patient medical records, examination results and basic vital sign readings.
7. The intelligent auxiliary system for operation comprehensive vision according to claim 1, wherein the voice instruction processing component presets the recognizable voice instruction contents and recognizes and gives feedback on collected voice instructions.
8. The intelligent auxiliary system for operation comprehensive vision according to claim 1, wherein the image processing component is used for image processing and comprises a data importing unit and an image fusion unit.
9. The intelligent auxiliary system for operation comprehensive vision according to claim 8, wherein the image fusion unit performs image registration and image fusion, the image fusion being virtual-real fusion of the camera images and the image data registered according to the relative position.
10. A method for processing images by the image processing component of the intelligent auxiliary system for operation comprehensive vision according to claim 1, characterized in that the method comprises:
three-dimensionally reconstructing two-dimensional image data to obtain original image data;
registering the original image data with the patient pose to obtain registered preoperative image data;
acquiring camera image data through the optical tracking video cameras;
emitting infrared light from the infrared light sources and detecting, with the optical tracking video cameras, the signals reflected by the operative-field marker points to obtain the relative position between the glasses and the markers;
registering the camera image data with the preoperative image data according to the relative position; and
performing virtual-real fusion on the registered images to obtain a virtual-real fused image, and outputting the fused image to the display screen.
CN201911408910.2A 2019-12-31 2019-12-31 Intelligent auxiliary system for operation comprehensive vision and image processing method Pending CN111035458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911408910.2A CN111035458A (en) 2019-12-31 2019-12-31 Intelligent auxiliary system for operation comprehensive vision and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911408910.2A CN111035458A (en) 2019-12-31 2019-12-31 Intelligent auxiliary system for operation comprehensive vision and image processing method

Publications (1)

Publication Number Publication Date
CN111035458A true CN111035458A (en) 2020-04-21

Family

ID=70242255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911408910.2A Pending CN111035458A (en) 2019-12-31 2019-12-31 Intelligent auxiliary system for operation comprehensive vision and image processing method

Country Status (1)

Country Link
CN (1) CN111035458A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991333A (en) * 2021-04-21 2021-06-18 强基(上海)医疗器械有限公司 Image processing method and system based on voice analysis in endoscopic surgery
CN113100967A (en) * 2021-04-09 2021-07-13 哈尔滨工业大学(深圳) Wearable surgical tool positioning device and positioning method
CN113893034A (en) * 2021-09-23 2022-01-07 上海交通大学医学院附属第九人民医院 Integrated operation navigation method, system and storage medium based on augmented reality
CN114078102A (en) * 2020-08-11 2022-02-22 北京芯海视界三维科技有限公司 Image processing apparatus and virtual reality device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination