CN113424130A - Virtual kit for radiologists - Google Patents


Info

Publication number
CN113424130A
Authority
CN
China
Prior art keywords
virtual
volume
dimensional image
image volume
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980062928.3A
Other languages
Chinese (zh)
Inventor
David Byron Douglas
Robert Edwin Douglas
Kathleen Mary Douglas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kathleen Mary Douglas
Robert Edwin Douglas
David Byron Douglas
Original Assignee
Kathleen Mary Douglas
Robert Edwin Douglas
David Byron Douglas
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kathleen Mary Douglas, Robert Edwin Douglas, and David Byron Douglas
Publication of CN113424130A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Virtual tools are used to manipulate aspects of a three-dimensional medical image volume. The virtual tools are geo-registered with the image volume, and the rendering of the image volume is manipulated by the image processor in response to use of the virtual tools. The virtual tools may be used to facilitate analysis of the image volume, and may include: a virtual focus pen; a virtual 3D cursor; a virtual transport viewer; a virtual base; a virtual knife; a virtual catheter; a virtual signpost; a virtual ablation instrument; a virtual table; a virtual comparison tool; and a virtual icon.

Description

Virtual kit for radiologists
Technical Field
Aspects of the present disclosure generally relate to viewing of volumetric medical images.
Background
Traditionally, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans are viewed by a radiologist in a slice-by-slice fashion. Recent advances in diagnostic radiology have provided true 3D viewing of medical images through the use of virtual reality, mixed reality, or augmented reality headsets, with volume-by-volume methods and interactive, volume-subtending 3D cursors (see U.S. patent 8,384,771, "Methods and apparatus for three-dimensional image viewing," incorporated herein by reference; U.S. patent 9,980,691, "Methods and apparatus for three-dimensional image viewing," incorporated herein by reference; and Douglas, D.B., Wilke, C.A., Gibson, J.D., Boone, J.M., Wintermark, M. (2017), "Augmented Reality: Advances in Diagnostic Imaging"). Interactive, volume-subtending 3D cursors offer greater potential when scrutinizing sub-volumes within an imaging data set. Other recent advances include U.S. patent application 16/195,251, "Interactive voxel manipulation in volumetric medical imaging for virtual motion, deformable tissue and virtual radiological anatomy," incorporated herein by reference, which provides the ability to perform various tissue manipulations to improve visualization and understanding of complex structures. Still further, a set of geo-registered medical imaging tools has been developed to manipulate volumes and enhance viewing (see U.S. patent application 16/524,275, "Using geo-registered tools to manipulate three-dimensional medical images"). However, there are inherent limitations to the ability of radiologists to interact with volumetric medical images.
Disclosure of Invention
All examples, aspects, and features mentioned in this document can be combined in any technically possible way.
According to some aspects, a method comprises: selecting a virtual tool suite from a set of available virtual tools, in response to user input, for a selected three-dimensional image volume loaded in the image processing system; geo-registering each virtual tool of the selected suite with the three-dimensional image volume; and manipulating the three-dimensional image volume in response to manipulation of ones of the virtual tools of the suite. Some implementations include selecting the virtual tool suite from a set of available virtual tools including: a virtual focus pen; a virtual 3D cursor; a virtual transport viewer; a virtual base; a virtual knife; a virtual catheter; a virtual signpost; a virtual ablation instrument; a virtual table; a virtual comparison tool; and a virtual icon. Some implementations include manipulating the three-dimensional image volume in response to the virtual focus pen by highlighting a portion of the three-dimensional image volume and adding written notes. Some implementations include altering three-dimensional image volume voxels adjacent to a tip of the virtual focus pen. Some implementations include manipulating the three-dimensional image volume by separating voxels from the three-dimensional image volume in response to the virtual knife. Some implementations include manipulating the three-dimensional image volume in response to the virtual transport viewer by moving the virtual transport viewer within a hollow structure of the three-dimensional image volume and presenting an image from the perspective of the virtual transport viewer. Some implementations include performing a virtual colonoscopy using the virtual transport viewer. Some implementations include manipulating the three-dimensional image volume by inserting visible moving voxels into the three-dimensional image volume in response to a virtual contrast material.
Some implementations include manipulating the three-dimensional image volume by removing voxels of an organ shell, which may be performed in a repetitive shell-by-shell manner. Some implementations include manipulating the three-dimensional image volume by adjusting coordinates of voxels of tissues of interest to separate closely distributed tissues of interest. Some implementations include manipulating the three-dimensional image volume in response to the virtual table by placing a tissue of interest in a virtual bin of the virtual table. Some implementations include manipulating the three-dimensional image volume in response to the virtual catheter by limiting movement of the virtual catheter to a list of blood voxels within a selected blood vessel. Some implementations include automatically displaying information associated with a selected sub-volume of the three-dimensional image volume. Some implementations include displaying patient metadata, the current condition prompting acquisition of the medical image volume, the medical history of the patient, laboratory results, and pathology results. Some implementations include displaying the information with a virtual windshield. Some implementations include displaying distances to key metrics using virtual signposts. Some implementations include displaying a visual-aid icon indicating a viewing perspective. Some implementations include displaying a visual-aid icon indicative of findings detected by an artificial intelligence algorithm. Some implementations include displaying a visual-aid icon indicative of orientation relative to the three-dimensional image volume or patient body. Some implementations include selecting a sub-volume subtended by a volume-subtending three-dimensional cursor.
Some implementations include selecting the sub-volume from a plurality of sub-volumes of a predetermined list of sub-volumes. Some implementations include sequentially displaying each of the sub-volumes of the list. Some implementations include selecting the sub-volume from a plurality of sub-volumes defined by sequential search pattern coordinates. Some implementations include selecting the sub-volume from a plurality of sub-volumes defined by random search pattern coordinates. In some implementations, manipulating the three-dimensional image volume includes at least one of: changing voxel size; changing voxel shape; changing voxel position; changing voxel orientation; changing voxel internal parameters; creating voxels; and eliminating voxels. In some implementations, manipulating the three-dimensional image volume includes dividing a sub-volume of interest into a plurality of portions based on common characteristics. In some implementations, manipulating the three-dimensional image volume includes generating an exploded view by creating a plurality of enlarged cubes, each cube contacting a center point. Some implementations include employing a virtual eye tracker symbol to assist human viewing. Some implementations include causing the virtual eye tracker symbol to appear and disappear at spatially separated locations so that the human eye can perform saccades, jumping from one location to another. Some implementations include smoothly moving the virtual eye tracker symbol along a path so that the human eye can perform smooth pursuit.
According to some aspects, an apparatus comprises an image processing system comprising: an interface via which, in response to user input, a suite of virtual tools is selected from a set of available virtual tools for a selected three-dimensional image volume loaded in the image processing system, each virtual tool of the selected suite being geo-registered with the three-dimensional image volume; and an image processor that manipulates the three-dimensional image volume in response to manipulation of ones of the virtual tools of the suite. In some implementations, the virtual tool suite is selected from a set of available virtual tools including: a virtual focus pen; a virtual 3D cursor; a virtual transport viewer; a virtual base; a virtual knife; a virtual catheter; a virtual signpost; a virtual ablation instrument; a virtual table; a virtual comparison tool; and a virtual icon. In some implementations, the virtual tool suite includes a virtual focus pen, and the image processor manipulates the three-dimensional image volume in response to the virtual focus pen by highlighting a portion of the three-dimensional image volume and adding written notes. In some implementations, the image processor alters three-dimensional image volume voxels adjacent to a tip of the virtual focus pen. In some implementations, the virtual tool suite includes a virtual knife, and the image processor manipulates the three-dimensional image volume in response to the virtual knife by separating or manipulating (e.g., altering the position of) voxels (e.g., voxels of a tissue type) of the three-dimensional image volume. In some implementations, the virtual tool suite includes a virtual transport viewer, and the image processor manipulates the three-dimensional image volume in response to the virtual transport viewer by moving the virtual transport viewer within a hollow structure of the three-dimensional image volume and presenting images from the perspective of the virtual transport viewer.
In some implementations, the virtual transport viewer is used to perform a virtual colonoscopy via the interface. In some implementations, the virtual tool suite includes a virtual contrast material, and the image processor manipulates the three-dimensional image volume by inserting visible moving voxels into the three-dimensional image volume in response to the virtual contrast material. In some implementations, the image processor assigns different density values to different ones of the moving voxels. In some implementations, the image processor manipulates the three-dimensional image volume by removing voxels of an organ shell. In some implementations, the image processor manipulates the three-dimensional image volume by adjusting coordinates of voxels of tissues of interest to separate closely distributed tissues of interest. In some implementations, the virtual tool suite includes a virtual table, and the image processor manipulates the three-dimensional image volume in response to the virtual table by placing a tissue of interest in a virtual bin of the virtual table. In some implementations, the virtual tool suite includes a virtual catheter, and the image processor manipulates the three-dimensional image volume in response to the virtual catheter by limiting movement of the virtual catheter to a list of blood voxels within a selected vessel. In some implementations, the interface automatically displays information associated with the selected sub-volume of the three-dimensional image volume. In some implementations, the interface displays patient metadata, the current condition prompting acquisition of the medical image volume, the medical history of the patient, laboratory results, and pathology results. Some implementations include the interface displaying the information with a virtual windshield.
Some implementations include the interface displaying distances to key metrics using virtual signposts. Some implementations include the interface displaying a visual-aid icon indicating a viewing perspective. Some implementations include the interface displaying a visual-aid icon indicating findings detected by an artificial intelligence algorithm. Some implementations include the interface displaying a visual-aid icon indicating orientation relative to the three-dimensional image volume or patient body. Some implementations include the interface receiving selection of a sub-volume subtended by a volume-subtending three-dimensional cursor. In some implementations, the selected sub-volume is one of a plurality of sub-volumes of a predetermined list of sub-volumes. Some implementations include the interface sequentially displaying each of the sub-volumes of the list. In some implementations, the selected sub-volume is one of a plurality of sub-volumes defined by sequential search pattern coordinates. In some implementations, the selected sub-volume is one of a plurality of sub-volumes defined by random search pattern coordinates. In some implementations, the image processor manipulating the three-dimensional image volume includes at least one of: changing voxel size; changing voxel shape; changing voxel position; changing voxel orientation; changing voxel internal parameters; creating voxels; and eliminating voxels. In some implementations, the image processor manipulates the three-dimensional image volume by creating a plurality of enlarged cubes, each cube contacting a center point, to generate an exploded view. In some implementations, the interface includes a virtual eye tracker symbol. In some implementations, the virtual eye tracker symbol appears and disappears at spatially separated locations so that the human eye can perform saccades, jumping from one location to another. In some implementations, the virtual eye tracker symbol moves smoothly along a path so that the human eye can perform smooth pursuit.
In some implementations, a method and apparatus for preparing software to implement a virtual toolkit for enhancing medical image analysis comprises the following steps: loading a volumetric medical imaging data set according to the examination list, converting the medical images into a 3D volume, and importing the 3D volume into the virtual tool suite; performing registration and calibration of each virtual tool with the geo-registered volumetric medical image; performing filtering, auto-binning, and voxel manipulation (e.g., as described in U.S. patent application 15/904,092, "Processing 3D medical images to enhance visualization," and U.S. patent application 16/195,251, both incorporated herein by reference, and the advanced viewing options taught in the present disclosure); for each time step, providing a display in accordance with the movements and operations of the virtual tool suite listed in the steps above; reaching a decision point at which either the checklist is complete, in which case the next step is entered, or the checklist is incomplete, in which case the process returns to the previous step; and finally, when review of the entire medical examination is complete, terminating the examination and preparing and archiving a report.
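The checklist-driven review loop above (load, register tools, display per time step, decide, report) can be sketched in outline form. This is an illustrative skeleton only; all function names (`review_examination`, `register_tools`, `prepare_report`) and the dictionary shapes are assumptions, not part of the patent.

```python
# Hypothetical sketch of the review-loop steps described above; the function
# names and data shapes are illustrative, not from the patent.

def register_tools(volume):
    # Placeholder geo-registration: each tool shares the volume's coordinate
    # system, represented here as an origin position per tool.
    return {"focus_pen": (0, 0, 0), "knife": (0, 0, 0)}

def prepare_report(findings):
    # Archive a minimal summary once the checklist is complete.
    return {"items_reviewed": len(findings)}

def review_examination(checklist, load_volume, render_frame):
    """Run the per-item loop: load volume, register tools, display, report."""
    findings = []
    for item in checklist:
        volume = load_volume(item)       # load dataset, build the 3D volume
        tools = register_tools(volume)   # geo-register each virtual tool
        render_frame(volume, tools)      # provide a display per time step
        findings.append((item, tools))
    return prepare_report(findings)      # terminate and archive the report
```

In a real system, `load_volume` and `render_frame` would wrap the imaging and headset pipelines; here they are injected so the control flow can be exercised in isolation.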
An apparatus, comprising: an IO device; and an image processor in communication with the IO device, the image processor comprising a program stored on a computer-readable non-transitory medium, the program comprising: instructions to subdivide voxels into different tissue types; instructions to perform voxel manipulation; instructions to create and insert at least one other voxel into the medical imaging dataset; instructions to eliminate voxels from the medical imaging dataset; instructions to perform voxel annotation on the user-modified radiological images described above; and instructions to record the steps of the review.
In some implementations, virtual tool manipulation may also be accomplished through controller/joystick input. In some implementations, the controller/joystick input can direct the 3D cursor to change size, shape, or orientation (roll, pitch, and yaw). In some implementations, the controller/joystick input can direct the left and right eye viewing perspectives to zoom in toward the 3D cursor or zoom out away from the cursor. In some implementations, the controller/joystick input may direct convergence to a focal point. In some implementations, the controller/joystick input may direct the 3D cursor to be raised or lowered within the head display unit, or move the 3D cursor from side to side. In some implementations, the controller/joystick input can change the color of the 3D cursor. In some implementations, the controller/joystick input may invoke filtering, segmentation, sorting, statistical analysis, and reporting, which are discussed in U.S. patent application 15/904,092, incorporated herein by reference. In some implementations, the controller/joystick input can direct the virtual focus pen to move in the volume of interest. In some implementations, the controller/joystick/keyboard input may direct the annotation of one or more 3D cursors within the volume of interest. In some implementations, the controller/joystick input may present icon options related to volumetric medical imaging to enable a radiologist to organize complex examinations within his or her examination checklist method. In some implementations, presenting the 3D medical image includes an improved user controller interface, composed of a joystick and function buttons, for medical personnel viewing the 3D medical image.
When interfacing with a 3D cursor, the functions will include, but are not limited to, the following: changing the orientation of the 3D cursor (roll, pitch, and yaw); zooming the viewing perspective in toward the 3D cursor and out away from the cursor; invoking convergence; raising and lowering the display position of the 3D cursor on the headset; changing the size, shape, and color of the 3D cursor; invoking filtering, segmentation, sorting, statistical analysis, and reporting operations; invoking the virtual focus pen and its movement controls; annotating one or more 3D cursors within a volume of interest; invoking icon options; and invoking advanced viewing options of interest (e.g., explosion, ablation, slice-type viewing). Representative examples of virtual tools include, but are not limited to, the following: a virtual focus pen; a virtual 3D cursor; a virtual transport viewer; a virtual base; a virtual knife; a virtual catheter; a virtual signpost; a virtual ablation instrument; a virtual table; a virtual comparison tool; and a virtual icon. Many other virtual tools are possible but not shown, including but not limited to the following: a drill bit; a cup; a string; a mirror; a lens; a metallic device; a non-metallic device; and other tools commonly used by medical personnel, construction workers, or engineers.
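The roll/pitch/yaw control of the 3D cursor described above amounts to composing three axis rotations. The sketch below, a hypothetical illustration rather than the patent's method, builds a rotation matrix from controller angles; the Z-Y-X Euler convention is an assumption.

```python
import numpy as np

# Illustrative mapping of controller roll/pitch/yaw input to a 3D-cursor
# rotation matrix. The Euler order (yaw about Z, then pitch about Y, then
# roll about X) is an assumed convention.

def cursor_rotation(roll, pitch, yaw):
    """Rotation matrix applying yaw (Z), then pitch (Y), then roll (X), in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    return rx @ ry @ rz
```

Applying the matrix to each corner of the cursor re-orients it about its own origin; translation (raising, lowering, side-to-side motion) would be a separate vector addition.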
In some implementations, a medical person viewing a volumetric medical image may invoke a process by which a virtual radiology-assistant-type icon is displayed. The purpose of the virtual radiology-assistant-type icon is to depict all relevant/important information, including: the position of the medical person performing the examination on the medical institution's checklist, and the next step; the patient's metadata and the current condition that triggered acquisition of the medical image; the medical history of the patient; laboratory results; and results from Artificial Intelligence (AI) routines and application of condition indicators, if any. In this implementation, the reviewing medical person may command display of the virtual windshield at any time, and may also modify the items to be displayed on the windshield.
In some implementations, the virtual tools can direct the modification of voxels. Examples of voxel manipulation include altering the size, shape, location, orientation, or internal parameters of a voxel. Further, voxels may be created or eliminated at the direction of a virtual tool.
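As a concrete illustration of the position-change, creation, and elimination operations above, the sketch below moves a masked set of voxels in a toy numpy volume. The array representation and the function name `translate_voxels` are assumptions for demonstration; the patent describes the operations abstractly.

```python
import numpy as np

# Illustrative voxel edits on a toy volume. Moving a voxel is modeled as
# eliminating it at its source position and creating it at the destination.

def translate_voxels(volume, mask, offset):
    """Move the voxels selected by `mask` by an integer (dz, dy, dx) offset,
    clearing their old positions (a simple 'change voxel position')."""
    out = volume.copy()
    values = volume[mask]
    out[mask] = 0                                  # eliminate voxels at the source
    src = np.argwhere(mask)                        # coordinates of selected voxels
    dst = src + np.asarray(offset)
    keep = np.all((dst >= 0) & (dst < volume.shape), axis=1)  # stay in bounds
    out[tuple(dst[keep].T)] = values[keep]         # create voxels at the destination
    return out
```

Changing internal parameters (e.g., assigned density) would instead write new values at the masked positions without moving them.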
In some implementations, a virtual focus pen is used to enhance visualization of a structure of interest within a medical imaging volume. In some implementations, the focus pen may be used to point to areas of the structure that may contain abnormalities. In some implementations, the focus pen can use a symbol (e.g., an arrow) to point to the region of interest. In some implementations, annotations may be written into the volumetric data. In some implementations, voxels near the tip of the focus pen may be highlighted while the transparency of tissue at a specified distance from the tip is modified. In some implementations, the focus pen can be used together with a 3D cursor.
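The proximity effect described above (highlight near the tip, transparency farther away) can be modeled as a per-voxel opacity map. The linear falloff and the two radius parameters below are assumptions for illustration; the patent does not specify a falloff function.

```python
import numpy as np

# Sketch of a focus-pen proximity effect: full opacity within
# highlight_radius of the pen tip, fading linearly to fully transparent
# at fade_radius. All parameters are illustrative assumptions.

def focus_pen_alpha(shape, tip, highlight_radius, fade_radius):
    """Per-voxel opacity: 1.0 near the tip, 0.0 beyond fade_radius."""
    grid = np.indices(shape).reshape(3, -1).T          # (N, 3) voxel coordinates
    dist = np.linalg.norm(grid - np.asarray(tip), axis=1).reshape(shape)
    alpha = np.clip((fade_radius - dist) / (fade_radius - highlight_radius),
                    0.0, 1.0)
    return alpha
```

The resulting map would be multiplied into the renderer's per-voxel opacity so that tissue away from the pen tip fades out while the pointed-at region stays highlighted.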
In some embodiments, a virtual eye tracker symbol is used with a 3D medical imaging data set to facilitate human viewing of a structure. In some embodiments, the virtual eye tracker symbol appears and disappears at spatially separated locations so that the human eye can perform saccades, jumping from one location to another. In another embodiment, the virtual eye tracker symbol is continuously visible and moves smoothly along a path so that the human eye can perform smooth pursuit. The virtual eye tracker symbol may be controlled by, but is not limited to: a virtual tool (e.g., a virtual focus pen); a geo-registration tool; or a pre-programmed sequence.
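The two movement modes above can be sketched as frame-by-frame symbol positions: discrete jumps for saccades and linear interpolation for smooth pursuit. Both functions and their parameters are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Illustrative symbol trajectories for the two eye-movement modes.

def saccade_positions(targets, dwell_frames):
    """Hold the symbol at each target for dwell_frames frames, then jump
    instantly to the next target (driving a saccade)."""
    return [p for p in targets for _ in range(dwell_frames)]

def smooth_pursuit_positions(start, end, frames):
    """Move the symbol smoothly from start to end over `frames` frames,
    giving the eye a continuously visible target to pursue."""
    t = np.linspace(0.0, 1.0, frames)[:, None]
    return (1 - t) * np.asarray(start) + t * np.asarray(end)
```

A pre-programmed sequence would simply concatenate segments produced by either function.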
In some implementations, a virtual knife can be co-registered with the volumetric medical image and used to cut virtual tissue in the medical imaging volume. In a further implementation, the virtual knife has a movable geo-registration point (e.g., the tip of the virtual knife), additional geo-registration points indicate the cutting surface of the knife, and a control unit provides changes in the X, Y, Z coordinates and in the roll, pitch, and yaw of the knife. As one example of use, the medical person viewing the medical image may pick up the virtual knife and move it into the 3D digital structure of current interest, then pass it through the geo-registered 3D structure; as the knife passes through, tissue on one side of the surface created by the virtual knife (the side being pre-selected by the medical person viewing the medical image) can be deleted or displaced. In some implementations, tactile or auditory feedback can be provided to the user.
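A minimal version of the cut described above treats the knife's cutting surface as a plane through the tip with a normal vector, deleting voxels on the pre-selected side. Representing the swept surface as a single plane is a simplification for illustration; the function name and signature are assumptions.

```python
import numpy as np

# Sketch of a virtual-knife cut: the tip and a normal define the cutting
# plane, and voxels on the non-kept side are zeroed ("deleted").

def knife_cut(volume, tip, normal, keep_positive_side=True):
    """Zero out voxels on one side of the plane through `tip` with `normal`."""
    grid = np.indices(volume.shape).reshape(3, -1).T      # (N, 3) coordinates
    side = (grid - np.asarray(tip)) @ np.asarray(normal, dtype=float)
    keep = side >= 0 if keep_positive_side else side <= 0
    out = volume.copy().reshape(-1)
    out[~keep] = 0                     # tissue on the pre-selected side is deleted
    return out.reshape(volume.shape)
```

A displaced (rather than deleted) cut would instead offset the coordinates of the non-kept voxels, as in the voxel-translation sketch earlier.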
In some implementations, presenting the 3D medical image includes a method of facilitating viewing of the medical image using a visual conveyance to effect a traversal through a blood vessel. In another implementation, a virtual catheter may be used in conjunction with the visual conveyance to optimize viewing of vascular structures within a patient. Example procedures include, but are not limited to, the following: traversal within the vessel "tunnel" may be used in assessing the vascular structure and the potential need for insertion of one or more stents. In this case, the traversal into the blood vessel may proceed via the groin into the common femoral artery, external iliac artery, common iliac artery, abdominal aorta, thoracic aorta, aortic arch, and finally the coronary artery of interest. The medical person viewing the medical image can then visualize movement within the blood vessel as viewed from the 3D headset. Blood within the vessel may be digitally subtracted, and virtual light may be shone on the vessel wall. A constriction within the vascular structure will appear as a narrowing of the tunnel, and the X, Y, Z coordinates of the constriction are recorded. At any time, the medical person viewing the medical image can view the entire vascular structure together with the current position of the traversal in the displayed blood vessel. In some implementations, the diameter of the vessel may be enlarged and the voxels manipulated to achieve optimal viewing of the internal structure. In some implementations, presenting the 3D medical image includes a method of facilitating viewing of the medical image by 3D geo-registered vessel traversal. In further implementations, the 3D geo-registered traversal within the vessel is used in conjunction with geo-registration of the patient as described in U.S. patent application 15/949,202, "Smart operating room equipped with smart surgical devices," and U.S. patent application 16/509,592, "Implantable markers for surgery." The interventionalist can switch back and forth between a geo-registered 3D system using a 3D head-mounted display and a standard display currently available in interventional procedures. This allows near-real-time viewing of constrictions identified in the pre-operative plan. Further, alerts can be issued in near real time as critical junctions are approached.
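The narrowing-of-the-tunnel observation above can be expressed numerically: given a vessel radius sampled along the centerline, flag points whose radius drops well below the typical radius and record their coordinates. The median-based threshold and the 0.5 ratio below are illustrative assumptions.

```python
import numpy as np

# Sketch of constriction detection along a traversed vessel: a point is
# flagged when its radius falls below narrowing_ratio times the median
# radius of the sampled centerline. Parameters are assumptions.

def find_constrictions(radii_mm, coords, narrowing_ratio=0.5):
    """Return the X, Y, Z coordinates of centerline points whose vessel
    radius indicates a constriction (tunnel narrowing)."""
    radii = np.asarray(radii_mm, dtype=float)
    threshold = narrowing_ratio * float(np.median(radii))
    flagged = radii < threshold
    return [tuple(c) for c, f in zip(coords, flagged) if f]
```

The recorded coordinates could then drive the near-real-time alerts mentioned above as the interventionalist approaches a flagged location.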
In some implementations, presenting the 3D medical image includes a method of facilitating viewing of the medical image using a virtual catheter. In further implementations, the virtual catheter may be used in conjunction with a 3D digital image of the vascular structure within a patient. In further implementations, the catheter may continuously calculate the total distance traveled, which may be displayed, and may also be marked and recorded over time for later review. The virtual catheter may be used during pre-operative planning of an interventional procedure such as, but not limited to, treatment of a cerebral aneurysm. The use of a virtual catheter in the treatment of an aneurysm may proceed as follows: the virtual catheter is inserted at a predetermined point in the 3D digital vascular structure, such as at the groin of the patient into the common femoral artery, then the external iliac artery, then the common iliac artery, then the abdominal aorta, then the thoracic aorta, then the head and neck arteries, then the common carotid artery, then the middle cerebral artery, and finally into the aneurysm. At each junction where the interventionalist needs to take care and prepare to switch from one vessel to another, an augmented reality distance marker may be added to the 3D virtual catheter; screenshots of all key vessel junctions may be labeled with the angular changes of the current path in the X-Y, X-Z, and Y-Z planes of the coordinate system.
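The continuously accumulated distance described above is a polyline length over the catheter's recorded positions. The sketch below assumes positions are 3D points in millimeters; the function name is illustrative.

```python
import numpy as np

# Sketch of the virtual catheter's running distance: sum the Euclidean
# lengths of successive segments of the recorded 3D path (assumed mm).

def catheter_path_length(points):
    """Total distance traveled along a polyline of 3D catheter positions."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
```

Distance markers at vessel junctions would store the running total at each junction point so the remaining distance to the next junction can be displayed.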
In some implementations, presenting the 3D medical image includes a method of facilitating viewing of the medical image using an explosion within the 3D digital image. In further implementations, the medical person viewing the medical image may divide a 3D digital volume of interest into multiple portions based on common characteristics (e.g., similar Hounsfield units), for example using the segmentation techniques outlined in U.S. patent application 15/904,092. The medical person viewing the medical image may then select a point within the 3D digital volume (ideally near the center of the 3D digital volume and between the subdivided sub-volumes) that will act as the origin of the explosion. The 3D digital sub-volumes can then be separated in a number of ways, as if an explosion had occurred. One way (but not the only way) is as follows: eight large cubes are created, each touching the center point and each parallel to the X, Y, Z axes (e.g., a first cube positive in X, positive in Y, and positive in Z; a second cube positive in X, negative in Y, and positive in Z; and so on). The medical person viewing the medical image then establishes a distance factor for sub-volumes near the center point and a larger distance factor for sub-volumes farther away. These factors are then applied to all voxels within each particular sub-volume of the 3D digital image, depending on the cube in which the center voxel of the sub-volume resides. (For example, for the first cube described above, the X, Y, Z coordinates of the voxels of every sub-volume whose central voxel falls within that cube will be increased by the specified factor in the positive X, positive Y, and positive Z directions.) The medical person viewing the medical image can modify the factors to change the separation between the sub-volumes during the examination process.
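The octant rule above reduces to taking the sign, per axis, of the sub-volume's center voxel relative to the explosion origin and scaling it by the distance factor. The sketch below computes that per-sub-volume offset; the tie-breaking of on-axis centers is an assumption.

```python
import numpy as np

# Sketch of the eight-octant explosion: every voxel of a sub-volume gets the
# same offset, whose sign on each axis is set by the octant containing the
# sub-volume's center voxel. Tie-breaking toward the positive octant is an
# illustrative assumption.

def explode_offset(center_voxel, explosion_center, distance_factor):
    """Offset (dx, dy, dz) applied to every voxel of one sub-volume."""
    signs = np.sign(np.asarray(center_voxel, dtype=float)
                    - np.asarray(explosion_center, dtype=float))
    signs[signs == 0] = 1.0            # centers on an axis go to the positive octant
    return signs * distance_factor
```

A larger `distance_factor` for sub-volumes farther from the origin, as the text describes, would simply pass a per-sub-volume factor here instead of a constant.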
In some implementations, a virtual transport viewer procedure can be used prior to a colonoscopy. Under the virtual transport viewer procedure, the patient would: first receive a CT scan of the colon; a 3D volume of the colon would be created from the CT 2D slices (U.S. patent 8,384,771); segmentation (U.S. patent application 15/904,092) would identify the colon, and subtraction would extract the contents of the colon (e.g., air, feces). In this way, the colon retains its original shape and any polyps remain visible; a virtual transport can then be inserted and moved back and forth within the colon for examination. This examination avoids the problem of polyps hidden within folds, as folding may occur during insertion of a forward-only camera. For ease of 3D viewing, the diameter of the colon may be increased from its center point to an optimal diameter by voxel manipulation (U.S. patent application 16/195,251). If no polyp is found, the patient can go home with confidence in his/her continued health, avoiding both the discomfort and the cost of a colonoscopy. Otherwise, treatment will be required. Patients who would avoid a colonoscopy may not avoid the virtual transport viewer procedure, so the proportion of cases detected at an early stage may increase. Thus, the general health of the public will be improved.
In some implementations, medical personnel performing a colonography medical image review may invoke a process for performing a virtual colonoscopy. In this implementation, a CT scan with/without contrast is performed. Then, a 3D virtual image is constructed from the CT 2D slices (U.S. patent 8,384,771). Segmentation is performed (U.S. patent application 15/904,092) and tissue outside the colon is subtracted. Likewise, non-tissue contents within the colon are subtracted. The colon is then "stretched" to elongate the folds that may obscure polyps, thereby eliminating polyp obscuration by folded colon tissue. As described in U.S. patent application 16/195,251, the stretching process involves voxel manipulation. The elongated, straightened virtual colon is divided into two parts along its length axis, so that the internal structure can be viewed through the head display unit.
In some implementations, rendering the 3D medical image includes a method of facilitating viewing of the medical image that includes inserting virtual contrast material within the vascular system. In a further implementation, the virtual contrast may be used in conjunction with a 3D digital image of the vascular structure in the patient. Example procedures include, but are not limited to, looking for an obstruction in suspected pulmonary embolism. In this example, a blood vessel is selected to receive the virtual contrast material in a time-stepped manner. The insertion of the virtual contrast may be done in a manner and at a speed as if it had actually been injected into the blood vessel. The duration of the time step will be under the control of the medical personnel viewing the medical image; a "freeze frame" will be available, as well as replay of the contrast flow. Nearby and possibly even overlapping vessels, other than the one receiving the virtual contrast, may be subtracted from the 3D image displayed to the medical staff viewing the medical image.
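A minimal sketch of the time-stepped contrast flow, assuming the target vessel is represented by an ordered list of centerline points (an assumption for illustration, not the patent's representation); storing the front position at every step makes freeze-frame and replay trivial.

```python
def contrast_history(n_points, points_per_second, dt, n_steps):
    """Index of the leading edge of the virtual contrast bolus along the
    vessel centerline at each time step. The stored history supports
    freeze-frame (pick one entry) and replay (iterate the list again)."""
    history = []
    front = 0.0
    for _ in range(n_steps + 1):
        history.append(min(int(front), n_points - 1))
        front += points_per_second * dt
    return history
```

The time step `dt` corresponds to the duration under the control of the viewing medical personnel; the bolus front simply stops advancing once it reaches the end of the modeled vessel.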
In some implementations, presenting the 3D medical image includes a method of facilitating viewing of a medical image by means of ablation techniques. In further implementations, ablation techniques may be used in conjunction with 3D digital structures delivered by a 3D cursor as described above. The basic methods of the ablation technique include, but are not limited to, the following processes: first determining the outer "shell" of the organ of interest to the medical personnel viewing the medical image (e.g., using the segmentation technique outlined in U.S. patent application 15/904,092); then eliminating a one-voxel-thick layer from the outer surface. This step is repeated multiple times on the remaining outer layer of tissue under the direction of the medical staff viewing the medical image. Alternatively, a layer is selected in the X, Y, Z coordinate system (e.g., the X-Y layer with the highest Z coordinate is selected and eliminated). This step is likewise repeated multiple times over the remaining 3D digital volume as directed by the medical personnel viewing the medical image.
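The shell-removal step can be sketched as a morphological erosion over a binary organ mask. This is an illustrative numpy-only sketch, not the patent's own algorithm; it treats any voxel missing one of its six face-neighbours as part of the outer shell.

```python
import numpy as np

def ablate_shell(mask):
    """Remove the one-voxel-thick outer shell of a 3D binary organ mask.
    A voxel survives only if it and all six face-neighbours are inside
    the mask; repeated calls peel away successive layers."""
    mask = mask.astype(bool)
    core = mask.copy()
    # Voxels on the array boundary always belong to the shell.
    core[0, :, :] = core[-1, :, :] = False
    core[:, 0, :] = core[:, -1, :] = False
    core[:, :, 0] = core[:, :, -1] = False
    # Require each of the six face-neighbours to be inside the mask.
    core[1:, :, :] &= mask[:-1, :, :]
    core[:-1, :, :] &= mask[1:, :, :]
    core[:, 1:, :] &= mask[:, :-1, :]
    core[:, :-1, :] &= mask[:, 1:, :]
    core[:, :, 1:] &= mask[:, :, :-1]
    core[:, :, :-1] &= mask[:, :, 1:]
    return core
```

Calling the function repeatedly, under the direction of the reviewing medical personnel, mirrors the layer-by-layer ablation described above.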
In some implementations, the location of closely spaced tissues of interest can be modified with a virtual tool such that the separation distance of these tissues can be increased by voxel manipulation (e.g., inserting other voxels of different transparency) and simultaneously adjusting the coordinates of the tissues of interest.
In some embodiments, a sub-volume-by-sub-volume viewing method is enabled. The sub-volumes may be made of different numbers or combinations of voxels.
In some implementations, presenting the 3D medical image includes a method of ordering the movement of a 3D cursor through the volume of interest. The sequence may, for example and without limitation, initiate the 3D cursor at the (0, 0, 0) coordinate, increment X to move the 3D cursor in the X direction until the maximum value of X is reached, then increment Y and decrement X back to the 0 X coordinate. This continues through the X-Y plane until completion, then Z is incremented, and the process continues until all volumes of interest have been reviewed. The change from one increment to the next will be controlled by the medical personnel reviewing the medical images. Furthermore, during review of the content of the 3D cursor, if suspicious tissue is detected, medical personnel may annotate the 3D cursor for further review. Furthermore, the medical staff can at any time choose to display the position of the 3D cursor within the volume of interest and the portion of the volume of interest that has already been examined. If suspicious tissue appears in multiple 3D cursors, all of these 3D cursors may be displayed simultaneously. In some implementations, medical personnel reviewing the 3D virtual medical image may invoke a search mode. In some implementations, an example of a sequential search is, but is not limited to, a virtual windshield-wiper search. This type of search pattern helps to ensure that a thorough search is performed. In some implementations, the method is instead based on a random pattern of items that may be of interest for examination. Image processing tools such as changing transparency and using false color can help identify potential items of interest (U.S. patent 8,384,771). This type of search pattern may speed the review process.
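The raster-style ordering described above can be sketched as a serpentine (boustrophedon) traversal; the grid dimensions in cursor steps are assumptions, and a real implementation would map each step to the cursor's world coordinates within the volume of interest.

```python
def serpentine(nx, ny, nz):
    """Yield 3D cursor grid positions so that consecutive steps are always
    adjacent: X sweeps back and forth, Y advances after each X sweep, and
    Z advances after each completed X-Y plane."""
    for z in range(nz):
        ys = range(ny) if z % 2 == 0 else range(ny - 1, -1, -1)
        for j, y in enumerate(ys):
            xs = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)
            for x in xs:
                yield (x, y, z)
```

Because the generator covers every grid cell exactly once, stepping the 3D cursor through its output guarantees the thorough, windshield-wiper style coverage the passage describes.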
Both types of search modes employ a 3D cursor (U.S. patent 9,980,691, "Method and apparatus for three-dimensional image browsing," and U.S. patent application 15/878,463, "Interactive 3D cursor for medical imaging"). Note: when reviewing medical images in the original 2D format, the eye may jump from one point to another following the reviewer's saccade path and may not see a large portion of the slice, and thus may miss small abnormalities. When using a 3D cursor, these small abnormalities subtend a larger portion of the displayed image, and the probability of detection increases proportionally.
In some implementations, a medical professional reviewing a volumetric medical image may invoke a process whereby the volumetric medical image is examined using a step-by-step process of selecting sub-volumes of the total volume wrapped within a 3D cursor (U.S. patent 9,980,691 and U.S. patent application 15/878,463). The content of each 3D cursor will be reviewed independently. After the stepping process is completed, the question may arise as to whether the entire volume has been inspected. In this implementation, the examined volumes contained in each 3D cursor may be summed and subtracted from the total original volume. This may reveal that some portions of the original volume intended for review were missed. In this implementation, these missing portions will be highlighted to the medical personnel performing the review as a reminder to continue reviewing and inspecting them. In this implementation, the detection rate of small abnormalities will be improved compared to the 2D slice inspection process, and a more thorough check will be performed.
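The completeness check described above can be sketched with boolean voxel masks; the mask shapes are assumptions for illustration, with one mask per examined 3D-cursor position.

```python
import numpy as np

def missed_regions(total_mask, cursor_masks):
    """Subtract the union of all examined 3D-cursor masks from the total
    volume of interest; any voxels that remain were never reviewed and
    can be highlighted for the reviewer."""
    examined = np.zeros_like(total_mask, dtype=bool)
    for m in cursor_masks:
        examined |= m
    return total_mask.astype(bool) & ~examined
```

The nonzero voxels of the returned mask are exactly the portions that should be highlighted for continued review.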
In some implementations, a virtual icon can be used in conjunction with the viewing of the medical imaging volume to facilitate orientation. As an example, when a small region of an organ (e.g., the liver) is scrutinized, the viewer may become somewhat disoriented as to the exact position of the 3D cursor within the organ. Thus, the icon may aid orientation at all times during the viewing exam. In another embodiment, arrow annotation markers may display the path from the initial viewing perspective to the current viewing perspective. In an alternative embodiment, an automatic re-centering technique may be used to quickly re-orient the user.
In some implementations, a virtual table can be added to the toolkit, with virtual storage boxes on the table. A sub-volume of the virtual medical image currently being examined and containing tissue of interest/concern may be placed in a virtual storage box. The storage box for the sub-volume may correspond to a checklist item of the medical facility. In some implementations, an emergency box may be added that is accessible to both reviewing and treating personnel, thereby facilitating and expediting collaboration among these personnel. In some implementations, the preparation of the report may be expedited by automatically ordering and extracting items from the storage boxes and adding those items to the report. Added items (e.g., annotated graphics containing the tissue in question) will increase the quality and completeness of the report. Whenever a radiologist finds an abnormality, he/she places it on a report virtual table. The items placed on the report virtual table may include 2D slices or 3D volumes containing the abnormal findings. The radiologist has the option of determining the size of the virtual table and the virtual boxes. One radiologist may pass an item to another radiologist's table or box for collaboration.
In some implementations, the radiological report may include images processed via a virtual tool.
Drawings
This patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
FIG. 1 illustrates a flow chart for optimizing a medical imaging examination display using a virtual tool;
FIG. 2 illustrates an apparatus for implementing the process of FIG. 1;
FIG. 3 illustrates a virtual toolkit option to view volumetric medical images;
fig. 4 shows a flow chart and an exemplary heads-up display type icon conveying relevant information to a radiologist during interpretation of an imaging exam. We refer to this heads-up display as a virtual windshield (similar to a "heads-up" display in an airplane); it can be recalled at any time during an exam, and can display on a single virtual windshield the specific items to be reviewed in the patient's entire case and/or the medical facility checklist. Alternatively, it may be referred to as a virtual radiology assistant type icon;
FIG. 5 shows a flowchart and a diagram of virtual toolkit inputs resulting in voxel modification;
FIG. 6 illustrates voxel manipulation based on interaction with a virtual tool;
FIG. 7 illustrates a virtual eye tracker symbol having a variable display mode;
FIG. 8 shows a virtual knife that may be used by medical personnel to "cut tissue" from an existing 3D medical imaging volume to allow enhanced visibility of internal structures;
FIG. 9 illustrates a virtual traversal of a virtual blood vessel using a virtual transport;
FIG. 10 illustrates that a virtual catheter may be used in conjunction with a volumetric medical image of a vascular structure within a patient with the assistance of a virtual icon;
FIG. 11 illustrates the general concept of a 3D medical image and an example technique behind exploding the 3D medical image into multiple separate organs;
FIG. 12 illustrates performing a more accurate virtual colonoscopy review using a virtual transport viewer;
fig. 13 shows a portion of a virtual 3D volumetric medical image containing the colon portion of the large intestine, stretched by voxel manipulation into a long straight tube. The contents of the tube are then subtracted/purged from the tube. Finally, the tube is split along its length axis and opened to allow viewing of the internal colon structure;
FIG. 14 shows the insertion of a virtual contrast and its flow in the vascular system;
fig. 15 illustrates an ablation technique that may be used in conjunction with a 3D digital structure within a 3D cursor. This allows careful inspection of the internal structure and elements of the organ;
FIG. 16 illustrates virtual focus pen guided voxel manipulation;
FIG. 17 illustrates an overall imaging volume and multiple types of subvolumes;
FIG. 18 shows an ordering of 3D cursor movements in a random pattern in a volume of interest;
FIG. 19 shows an example of a system mode for viewing medical images (e.g., a sequential virtual windshield wiper type mode);
fig. 20 illustrates a volume of interest to be reviewed and a process by which any areas that are missed by the intended review can be highlighted to the medical personnel performing the review. These identified sub-volumes may be subsequently reviewed, thereby ensuring the integrity of the review;
FIG. 21 shows a human icon with a position of a 3D virtual cursor included at an approximate location within the body;
FIG. 22 illustrates a virtual movable table for storing virtual images of suspicious tissue stored by checklist categories;
FIG. 23 shows a sample radiological report including images processed using a virtual tool.
Detailed Description
Certain aspects, features, and implementations described herein may include machines such as computers, electronic components, radiological components, optical components, and processes such as computer-implemented steps. It will be apparent to one of ordinary skill in the art that computer-implemented steps may be stored as computer-executable instructions on a non-transitory computer-readable medium. Further, those of ordinary skill in the art will appreciate that computer-executable instructions may be executed on a variety of tangible processor devices. For ease of explanation, not every step, device, or component that may be part of a computer or data storage system is described herein. Such steps, devices, and components will be recognized by those of ordinary skill in the art in view of the teachings of the present disclosure and the knowledge generally available to those of ordinary skill in the art. Accordingly, corresponding machines and processes are enabled and are included within the scope of the present disclosure.
FIG. 1 illustrates a flow chart for optimizing the display of a medical imaging examination using a geo-registration tool. In step A100, a volumetric medical imaging data set is loaded from a checklist, the medical image is converted to a 3D volume, and it is imported into the virtual toolkit. In step B102, registration and calibration are performed for each geo-registered virtual tool within the volumetric medical image. In step C104, filtering, segmentation, and voxel manipulation are performed according to U.S. patent application 15/904,092 and U.S. patent application 16/195,251, which are incorporated herein by reference, and the advanced viewing options discussed in this disclosure. In step D106, for each time step, a display is provided according to the movements and operations of the virtual toolkit listed in the above step. In step E108, the user must answer the question "is the check of this element of the checklist complete?". If the answer is no 110, the user should return to step D106. If the answer is yes 114, the radiologist should proceed to step F116 and review the next set of medical images according to the checklist. The radiologist should then proceed to step G118 and answer the question "is the review complete?". If the answer is no 120, the radiologist should return 122 to step A100. If the answer is yes 124, the radiologist should stop 126.
Fig. 2 shows an apparatus for implementing the process shown in fig. 1. A radiological imaging system 200, e.g., X-ray, ultrasound, CT (computed tomography), PET (positron emission tomography), or MRI (magnetic resonance imaging), is used to generate 2D medical images 202 of an anatomical structure of interest 204. The 2D medical images 202 are provided to an image processor 206, which includes processors 208 (e.g., CPUs and GPUs), volatile memory 210 (e.g., RAM), and non-volatile storage 212 (e.g., HDDs and SSDs). A program 214 running on the image processor performs one or more of the steps described in fig. 1, generating a 3D medical image from the 2D medical images and displaying it on the IO device 216. The IO device 216 may comprise a virtual reality headset, a mixed reality headset, an augmented reality headset, a monitor, a tablet computer, a PDA (personal digital assistant), a mobile phone, or any of a variety of devices. The IO device 216 may include a touch screen and may accept input from external devices (represented by 218), such as a keyboard, a mouse, and any of a variety of devices for receiving various inputs. However, some or all of the inputs may be automated, e.g., by the program 214. Finally, as further discussed in fig. 3 and the remainder of this patent, a series of virtual tools 220 are implemented, which assist medical personnel in viewing medical images.
Fig. 3 illustrates the virtual toolkit options for viewing volumetric medical images. In this figure, a representative example of the viewing options available through the use of virtual tools is shown. Options for guiding/selecting a virtual tool may be presented on the display, and the user clicks on the desired option. At the center of the illustration, a virtual tool 300 (i.e., a virtual focus pen) is geo-registered within the medical imaging volume. The virtual focus pen is superimposed within a region containing the virtual 3D medical image 302 that is inside the 3D cursor 304. Buttons (e.g., on the keyboard) plus movements of the virtual tool may be coupled together to adjust the 3D cursor 304 (e.g., select the center of the 3D cursor 304 and then move the virtual focus pen 300 a distance corresponding to the radius). A user views the virtual tool using a headset 306 (e.g., augmented reality, mixed reality, or virtual reality glasses) having a left-eye display 308 and a right-eye display 310. The virtual focus pen may be registered within the virtual image by touching a particular point (e.g., a corner) of the medical imaging volume 302. For display purposes, the medical professional may choose to show only the tip of the focus pen in the display, expand the tip of the focus pen as needed, and/or display a virtual image of the focus pen as it is oriented within the volume. The movement of the virtual focus pen 300 will be controlled by the medical personnel viewing the medical image. The virtual focus pen 300 is useful when it is desired to smoothly track eye movement. For example, when examining an artery for occlusion, eye movement must be tracked smoothly, with the virtual focus pen tracking along the artery to look for the occlusion.
Saccadic eye movements can cause portions of the artery to be skipped, so that a severe occlusion goes undetected; accordingly, the virtual focus pen 300 may assist in this search mode. Multiple colored/shaped virtual focus pens 300 may be used to track the different courses of the arteries and veins. In the top image, the position and orientation of the virtual tool may change relative to the volume of interest. The virtual focus pen is shown having an initial position and orientation 312 relative to the volume of interest 314. The user may then move the virtual focus pen to a subsequent position and orientation 316 relative to the volume of interest 314. Proceeding in a clockwise direction, next, the virtual focus pen 318 performs a grab of the volume of interest 320 at an initial distance from the head display unit 322. The virtual focus pen 324 then pulls the volume of interest 326 closer to the head display unit 322 to improve visualization. Alternatively, the volume of interest 320 may be moved to other positions or in other directions by the focus pen 318. Next, a virtual point may be placed on or next to a portion of the virtual image 330 being examined (e.g., the carotid artery) in a fixed or dynamic manner. For example, the point may appear and disappear at multiple points along the vascular structure to facilitate saccade-by-saccade viewing, in which the eye jumps a short distance to view the most important portion of the vascular structure. At time point #1, a first virtual point 332 appears, and no other virtual points are shown in the field of view at this time. At time point #2, a second virtual point 334 appears, and no other virtual points are shown in the field of view at this time. At time point #3, a third virtual point 336 appears, and no other virtual points are shown in the field of view at this time.
At time point #4, a fourth virtual point 338 appears, and no other virtual points are shown in the field of view at this time. Optionally, the virtual point 342 may be moved along a portion of the virtual image 340 (e.g., the carotid artery) to help the human eye perform smooth tracking and enhance the view of the vascular structure. Next, the virtual focus pen 344 is used to perform convergence on the focus point 346. A left-eye viewpoint 348 is shown, along with lines showing the viewing angle of the left eye 350. A right-eye viewpoint 352 is shown, along with a line illustrating the viewing angle of the right eye 354. Note that the viewing angle 350 from the left-eye viewpoint 348 and the viewing angle 354 from the right-eye viewpoint 352 intersect at the convergence point 346. Next, virtual dissection is performed using the virtual knife 356, and the aorta 358 and pulmonary artery 360 are dissected away from the rest of the heart 364. Note that a cutting plane 362 is shown. Next, a virtual catheter 366 is placed within the medical imaging volume through the aorta 368. A virtual road sign 370 is shown to guide medical personnel. A focus pen 372 is shown. The dashed blue line 374 is a desired catheter trajectory, which may be set at different times. The virtual catheter 366 may be pulled through the vascular system. 376 shows the view through the blood vessel and highlights the desired path within the dotted red circle 378. The last three examples illustrate advanced viewing options enabled by virtual tools. An exploded view of organ separation is shown, in which the organs are separated. For example, the amount of spacing between various abdominal organs, including the aorta 380, left kidney 382, pancreas 384, and liver 386, is increased. Next, virtual ablation is performed, wherein shells 390 of virtual tissue are sequentially removed at multiple points in time.
The anatomy on which the virtual ablation may be performed is placed in a 3D cursor 388 to help guide the ablation. Finally, a structure such as the colon 392 may be sliced (e.g., using a virtual knife) and opened so that the interior of the hollow viscus, including the polyp 394, may be more carefully examined. To achieve this, voxel manipulation is required.
Fig. 4 shows a flowchart and an exemplary heads-up display type icon that conveys relevant information to the radiologist during interpretation of the imaging exam. First, a flowchart shows how the virtual windshield 406 can be used to assist in the interpretation of radiographic images. The first step 400 is for the radiologist (or other medical personnel) to move to the next checklist item on the report. The second step 402 is for the virtual assistant type icon (also referred to as the virtual windshield 406) to display relevant information. The third step 404 is for the radiologist to review the virtual assistant type icon display, review the image, and enter that portion of the report. The heads-up display, which we refer to as a virtual windshield 406 (similar to a "heads-up" display in an airplane), can be recalled at any time during the exam, and will display on a single virtual windshield the specific items to be reviewed in the patient's entire case and/or the medical facility checklist. During examination of the virtual volumetric medical image, the virtual windshield is invoked to assist the medical personnel performing the examination. Items relevant to the review may include: what is the next step on the checklist; what was the motivation for acquiring the medical images; whether there is a prior history of the current condition; what the laboratory results are; and, if an artificial intelligence program has been applied, what the result is and what indicators are listed. In this figure, an exemplary virtual windshield 406 is shown: an example checklist in which some items have been checked and others remain unchecked; age, gender, current condition, and any relevant medical history related to the current condition; the result and status indicator of an applied AI routine, if any; and pathology, imaging, and laboratory results data, if any.
Note that the person performing the examination may be interested in other items; an example selection is shown in this figure. It should also be noted that having all relevant information on the virtual windshield 406 saves medical personnel time, as they will not have to switch from the radiology PACS system to the electronic medical record system; all the critical information is there and can be displayed on the display screen at any time under the command of the person conducting the examination.
Fig. 5 shows a flowchart and a diagram of virtual toolkit inputs resulting in voxel changes. Note that voxels may be manipulated in size, shape, position, orientation, or internal parameters. Further, voxels may be created or eliminated at the direction of the virtual tool. The original voxel 500 is shown. The original voxel 500 may be reduced in size to produce a smaller voxel 502. The size of the original voxel 500 may be increased to produce a larger voxel 504. The original cube-like voxel 500 may be modified such that eight smaller voxels 506 are created, each of the eight smaller voxels 506 having one-eighth of the volume of the original voxel 500. The original voxel 500 may also be eliminated 508. The internal data elements (e.g., gray values) of the original voxel 500 may be modified 510. Additional internal data elements (e.g., textures, tissue-type attributes, etc.) may be added 512. The orientation of the original voxel may be altered 514. The position of the original voxel 516 may be moved; this may be performed by changing the original x, y, z coordinates 518 of voxel 516 to the new x, y, z coordinates 522 of the shifted voxel 520, so that it has been moved by a particular x-distance 524, y-distance 526, and z-distance 528. The shape of the original voxel 500 may be changed, for example, from a cube to an octahedron 530 or to the shape of a cylinder 532.
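Two of the voxel manipulations above, translation by given x/y/z distances and subdivision of one cubic voxel into eight half-size voxels, can be sketched as follows; the coordinate representation is an assumption for illustration.

```python
import numpy as np

def translate_voxels(coords, dx, dy, dz):
    """Shift an (N, 3) array of voxel (x, y, z) coordinates by the given
    x-distance, y-distance, and z-distance."""
    return np.asarray(coords, dtype=float) + np.array([dx, dy, dz])

def subdivide_voxel(origin, size):
    """Split one cubic voxel into eight voxels of half the edge length,
    each with one-eighth of the original volume; returns their origins."""
    h = size / 2.0
    x, y, z = origin
    return [(x + i * h, y + j * h, z + k * h)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]
```

The other manipulations listed (resizing, reorientation, elimination, internal-parameter changes) would act on the voxel record in analogous ways.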
Fig. 6 illustrates voxel manipulation based on interaction with a virtual tool. Diagram A shows a 3D cursor 600 containing a volume of interest 602. Note that the volume of interest is a uniform medium gray color. Further, note that the tip 604 of the virtual tool (i.e., in this case, a virtual focus pen) 606 is located outside the volume of interest 602. Diagram B shows a 3D cursor 608 containing a volume of interest 610 with a change in the position and orientation of the virtual tool (i.e., in this case, the focus pen) 612, such that the tip 614 of the virtual tool now enters the virtual 3D cursor 608 and the volume of interest (e.g., containing tissue selected from a volumetric medical image) 610. Note that a number of voxels 616 immediately adjacent to the tip 614 of the virtual tool 612 have been changed/highlighted to light gray. Further, note that the transparency of the tissue 610 in the 3D cursor 608 has been changed to better visualize the tissue 616 highlighted by the virtual focus pen 612, as well as the virtual focus pen 612 itself. Diagram C shows another change in the position and orientation of the virtual focus pen 618 and a corresponding change in the visual appearance of the nearby voxels 620. The 3D cursor 622 contains a volume of interest 624 with further changes (as compared to fig. 6B) in the position and orientation of the virtual tool (i.e., in this case, the focus pen) 618, such that the tip 626 of the virtual tool now enters the virtual 3D cursor 622 and the volume of interest (e.g., containing tissue selected from the volumetric medical image) 624. Note that a number of voxels 620 in close proximity to the tip 626 of the virtual tool 618 have been changed/highlighted to light gray. Further, note that the transparency of the tissue 624 within the 3D cursor 622 has been changed (as compared to fig. 6A) to better visualize the tissue 624 highlighted by the virtual focus pen 618, as well as the virtual focus pen 618 itself. This helps the radiologist determine the exact location of the virtual tool within the volume of interest.
Fig. 7 shows a virtual eye tracker symbol with variable display modes. The human eye can perform a saccade to quickly switch from one stationary object to another. Fig. 7A shows a carotid bifurcation 700 with a virtual eye tracker symbol 701 (e.g., a blue dot) at multiple locations over multiple points in time. At a first point in time 702, the virtual eye tracker symbol 701 is positioned over the inferior portion of the common carotid artery 704 of the carotid bifurcation 700. At a second point in time 706, the virtual eye tracker symbol 701 is located over the carotid sinus 708 portion of the carotid bifurcation 700. At a third point in time 710, the virtual eye tracker symbol 701 is positioned over the middle of the internal carotid artery 712 portion of the carotid bifurcation 700. At a fourth time point 714, the virtual eye tracker symbol 701 is located over the external carotid artery 716 portion of the carotid bifurcation 700. This aids the human eye because the eye detects the appearance of each new point and performs a saccade to each new location. Such a system may be coupled with a virtual eye tracking system to ensure that the person is looking at the eye tracker symbol. The user may control the disappearance of one virtual eye tracker symbol and the appearance of another via the IO device. Fig. 7B shows a virtual eye tracker symbol 718 located over the inferior common carotid artery 720 portion of the carotid bifurcation 700. Over multiple time steps, the virtual eye tracker point 718 moves smoothly toward the internal carotid artery until, at time point #N, it reaches its final destination (e.g., the middle of the internal carotid artery 722). For example, the user may select a frame rate of 60 frames per second and a movement rate of 2 centimeters per second.
The distance traveled by the virtual eye tracker point determines the total time for a particular segment. A high frame rate facilitates smooth pursuit by the human eye and helps avoid areas being skipped by the user, resulting in a more comprehensive review.
The virtual eye tracker point 718 may take many shapes, sizes, colors, movement speeds, etc.
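The frame-rate and movement-rate example above implies a simple calculation: the segment's travel time is its length divided by the movement speed, and the number of rendered positions follows from the frame rate. The sketch below assumes linear interpolation between start and end points; the function name and the 2D coordinates are illustrative, not from the patent.

```python
def tracker_path(start, end, length_cm, frame_rate=60, speed_cm_s=2.0):
    """Return the sequence of eye-tracker symbol positions for one segment.

    total_time = length / speed; n_frames = total_time * frame_rate.
    Positions are linearly interpolated from start to end.
    """
    total_time = length_cm / speed_cm_s          # seconds for this segment
    n_frames = round(total_time * frame_rate)    # frames rendered along the way
    return [tuple(s + (e - s) * i / n_frames for s, e in zip(start, end))
            for i in range(n_frames + 1)]

path = tracker_path(start=(0.0, 0.0), end=(0.0, 10.0), length_cm=10.0)
# 10 cm at 2 cm/s takes 5 s; at 60 fps that is 300 steps (301 positions).
```

A shorter segment at the same settings simply yields proportionally fewer frames, which is how the distance traveled determines the total time.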
Fig. 8 shows a virtual knife that medical personnel can use to "cut through tissue" in an existing 3D medical imaging volume to enhance visibility of internal structures. In this example, a virtual knife is used to investigate the patient's heart. This task is performed in conjunction with a 3D cursor that wraps the 3D medical volume of the heart. Fig. 8A shows a virtual knife 800 having a virtual cutting surface 802 and an associated registration point 804. Medical personnel viewing the medical image may pick up the virtual knife 800 and move it to the volume of interest; as shown, the heart 806 is wrapped in a 3D cursor 808, with the tissue outside the heart subtracted. Fig. 8B shows passing the knife 800, equipped with the cutting surface 802 and the registration point 804, through the 3D volume of interest 806 such that a portion of tissue 810 (i.e., the aorta and pulmonary artery) is cut and then displaced. Fig. 8C shows removal of the aorta and pulmonary artery to allow the medical personnel to look into the aortic valve 812 and the pulmonary valve 814. Further sculpting may allow inspection of the tricuspid valve (not shown). Finally, a 4D data set can be viewed along with the virtual toolkit to provide enhanced cardiac visualization.
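One way to model the cut-and-displace step above is to classify each voxel by which side of the knife's cutting plane it lies on, then shift the cut-away voxels aside so the structures behind them become visible. This is a hedged sketch under assumed names; the plane-and-offset representation is an illustration, not the patent's method.

```python
def cut_and_displace(voxels, plane_point, plane_normal, offset=(30, 0, 0)):
    """Return voxel coordinates, shifting those on the normal side of the plane.

    The cutting plane is defined by a point on the plane and its normal;
    the sign of the dot product (v - plane_point) . normal picks the side.
    """
    result = []
    for v in voxels:
        side = sum((a - b) * n for a, b, n in zip(v, plane_point, plane_normal))
        if side > 0:  # voxel lies on the cut-away side of the plane
            result.append(tuple(a + d for a, d in zip(v, offset)))
        else:         # voxel stays in place
            result.append(v)
    return result

moved = cut_and_displace([(1, 0, 0), (-1, 0, 0)],
                         plane_point=(0, 0, 0), plane_normal=(1, 0, 0))
# the voxel at x = 1 is displaced by the offset; the voxel at x = -1 stays
```

Repeating the classification with a repositioned plane corresponds to the "further sculpting" mentioned above.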
Fig. 9 illustrates a virtual transvascular procedure using a virtual vehicle. A virtual vehicle is a means/pathway by which medical personnel can move within a hollow structure and visualize conditions therein. The virtual vehicle may be used alone or in combination with a virtual catheter. Used in conjunction with a virtual catheter, it can generate 3D digital images of the interior of a vascular structure within a patient's body, or train an interventionalist to treat a vascular condition. The virtual vehicle provides forward vision within the vessel. In this example, a blood vessel is shown. The blood in the vessel may be subtracted digitally, virtual light may be projected forward a distance from the current viewing position in the tunnel, and the X, Y, Z coordinates of the viewing position may be recorded. Note that during the segmentation process, blood is removed from the vessel, but the vessel structure remains. This allows the virtual vehicle to visualize the internal structure of the blood vessel unobstructed. Typically, the virtual vehicle will be centered in the vessel and looking forward. Note that the center of the viewpoint lies at the center of the blood vessel, but the actual viewing angle differs slightly between the left and right eyes. Medical personnel viewing the medical image can visualize movement within the blood vessel as seen through a 3D headset (e.g., augmented reality). Note that the diameter of the virtual vessel may be enlarged by voxel manipulation to enhance viewing (U.S. patent application 16/195,251, which is incorporated herein by reference). For example, it may be difficult to view small vessels, or to view all portions of a large vessel from the inside at one time, so the ability to resize the vessel provides great viewing flexibility.
If the user identifies an abnormal condition, the user may assume a different location and orientation to investigate the condition. The distance of displayed blood vessel and the illumination intensity of the structure will be selected by the medical staff. Common clinical applications contemplated for these techniques include measuring carotid atherosclerotic plaques (e.g., measuring the lumen at the stenotic region and the length of the stenosis), which may prove a better indicator for determining stent type and placement than current methods, such as the North American Symptomatic Carotid Endarterectomy Trial (NASCET) measurement technique. For example, a smaller volume within a lumen of a given length would be a better indicator for disease state and intervention than current methods. A rolling calculation of the lumen of each vessel will be performed, with the metrics provided to the medical professional. This ride-through can be used during assessment of the vascular structure and the potential need for stent insertion. At any time, the medical staff viewing the medical image can view the entire vessel structure, with the current viewing position within the vessel displayed by an icon. Panel A shows the interior surface of a normal blood vessel 900, with blood removed and no plaque present. The largest circle 900 represents the interior of the blood vessel at the current viewing position. The texture of the inner mucosal surface 901 is shown. The medium-sized dotted circle 902 represents the interior of the blood vessel at an intermediate distance from the current viewing position, e.g., 5 centimeters away. The smallest circle 904 represents the farthest distance the user can see from the current viewing position.
A virtual marker (e.g., double-headed arrow 906) may indicate a length within the vessel being actively viewed, such as 10 centimeters from the current location 900 to the farthest visible location 904 within the vessel. A virtual landmark 908 indicates the distance to a critical intersection, e.g., "30.0 centimeters from brachial artery". Panel B shows narrowing of the vessel lumen due to atherosclerotic plaque, together with a roadmap describing the measurements. The largest circle 910 represents the interior of the vessel at the current viewing position. The texture of the inner mucosal surface 911 is shown. The medium-sized dotted circle 912 represents the interior shape of the vessel at an intermediate distance from the current viewing position, e.g., 5 centimeters away. Note that a portion of the middle circle 912 at the 2 o'clock position bulges inward 916. The circular portion of the middle circle 912 and the inward-bulging portion 916 are both located 5 centimeters from the current viewing position. Thus, the entire dashed line including 912 and 916 is located 5 centimeters from the current viewing position and therefore represents an "iso-distance line". The smallest circle 914 represents the farthest distance the user can see from the current viewing position, e.g., 10 centimeters away. A virtual marker (e.g., large double-headed arrow 918) may indicate the length within the vessel being actively viewed, e.g., 10 centimeters from the current location 910 to the farthest visible location 914 within the vessel. A smaller double-headed arrow 920 is shown extending from the expected position of the dashed line (which would mark the 5-centimeter distance assuming no plaque/narrowing) to the actual position 916 at 5 centimeters, bulging further inward. Note that where the radius of a particular "iso-distance line" decreases, this indicates a narrowed region.
Additionally, note that where the radius of a particular "iso-distance line" increases, this indicates a region of enlargement/dilation/aneurysm. In addition, note another virtual landmark 920, which reads, e.g., "30% atherosclerotic narrowing centered at the 2 o'clock position, 5.0 centimeters away". A clock-face system is one example of how to describe the location of a narrowing. Panel C shows the virtual vehicle approaching a junction where three blood vessels branch. During pre-planning, the medical professional can select which vessel the catheter should enter, and that vessel can be highlighted with a false color to verify the correct path of the catheter. The largest circle 922 represents the interior of the vessel at the current viewing position. The texture of the inner mucosal surface 923 is shown. The medium-sized half circle at the 3 o'clock position 924 represents a branch vessel (e.g., the internal carotid artery) that would be the desired option to enter. The medium-sized half circle at the 9 o'clock position 926 represents an additional branch vessel (e.g., the external carotid artery), which would be the second option to enter, but is not ideal in this example case. A red dashed line 928 is shown as an example of a virtual tool displayed on the image as a visual cue to help indicate to medical personnel the branch they wish to enter. A virtual landmark 930 is shown, indicating "5.0 centimeters from the carotid bifurcation. Enter the internal carotid artery toward the 3 o'clock position."
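The rolling lumen calculation and iso-distance-line interpretation above can be sketched as follows: given lumen cross-sectional areas sampled along the vessel, a shrinking area flags stenosis and a growing area flags dilation/aneurysm. The threshold ratios and function name below are illustrative assumptions, not values from the patent.

```python
def classify_lumen(areas_cm2, reference_cm2, narrow=0.7, dilate=1.3):
    """Label each sampled lumen cross-section relative to a reference area.

    A ratio below `narrow` corresponds to a shrinking iso-distance line
    (stenosis); a ratio above `dilate` to a growing one (dilation/aneurysm).
    """
    labels = []
    for a in areas_cm2:
        ratio = a / reference_cm2
        if ratio < narrow:
            labels.append('stenosis')    # iso-distance line radius decreased
        elif ratio > dilate:
            labels.append('dilation')    # iso-distance line radius increased
        else:
            labels.append('normal')
    return labels

labels = classify_lumen([0.80, 0.50, 0.78, 1.10], reference_cm2=0.80)
```

Reporting the minimum-ratio location together with the length of the sub-threshold run would give the stenosis position and length measurements discussed above.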
Fig. 10 shows a virtual catheter, which can be used in conjunction with a volumetric medical image of a patient's vascular structure with the aid of virtual icons. For example, a 3D virtual catheter may be used during preoperative planning of an interventional procedure (e.g., acquiring important distance or angle measurements). In this figure, a 3D virtual catheter is used for an interventional procedure to treat an aneurysm. Panel A shows a solid blue line 1000, which is a catheter inserted in the right groin area, entering the common femoral artery and extending through the right external iliac artery into the aorta. The tip of the catheter 1002 is a small solid black circle. Note that the traversed path is shown as a solid line, while the planned path is shown as a dashed line 1004. The planned route may be established by placing location markers in the particular blood vessels that the medical professional wishes to target. Such location markers may be placed at waypoints along the desired path or at the final target lesion 1006 (e.g., a cerebral aneurysm). After placing these position markers, the path connecting the markers may be marked (e.g., as a blue dashed line). Measurements along the blue dashed line may then be performed. From these measurements, a landmark 1008 may be displayed to inform medical personnel of the distance to the vascular boundary to be used in the actual interventional medical procedure. Note that a virtual icon 1010, such as a 2D or 3D object, is also shown. Panel B shows a virtual catheter 1012 extending into the thoracic aorta. As shown, the dashed line 1016 represents the desired path of the catheter through the brachiocephalic artery, then the common carotid artery, then the internal carotid artery, then the middle cerebral artery, and finally into the aneurysm 1018. A landmark may be displayed to inform medical personnel of the distance to the vascular boundary to be used in the actual interventional medical procedure.
Augmented reality distance markers may be added to the 3D virtual catheter for each junction at which the interventionalist needs to be prepared to switch from one vessel to another. Screenshots of key vessel junctions can be annotated with the angular changes of the path in the X-Y, X-Z, and Y-Z planes of the coordinate system. Panel C shows an enlarged view of a vascular junction where multiple routing options occur and the medical personnel must carefully move the catheter into the correct vessel. The descending thoracic aorta 1022, the brachiocephalic artery 1024, the left common carotid artery 1026, the left subclavian artery 1028, and the ascending thoracic aorta 1030 are shown. A virtual catheter 1032 is shown, along with its tip 1034. The blue dashed line 1036 represents the desired catheter path.
Fig. 11 illustrates the general concept of exploding a 3D medical image into a plurality of individual organs, and an example technique for doing so. Medical personnel viewing medical images may divide a 3D digital volume of interest into multiple portions according to common characteristics (e.g., similar Hounsfield units, an anatomical atlas, etc.), for example using the segmentation technique outlined in U.S. patent application 15/904,092, which is incorporated herein by reference. In this figure, a general procedure is shown in which it is desired to examine critical organs individually. Such a process may be driven, for example, by items on an image review checklist. Diagram A shows a general illustration of organs within the abdomen. The liver 1100, right adrenal gland 1102, right kidney 1104, inferior vena cava 1106, right iliac vein 1108, spleen 1110, aorta 1112, pancreas 1114, left adrenal gland 1116, left kidney 1118, gastrointestinal tract 1120, and left iliac artery 1122 are shown. The procedure spreads these organs outward in the X, Y, Z directions from an approximate center point in the torso to facilitate individual examination without visual interference from adjacent organs. Diagram B shows the organs after segmentation has been applied, with dashed lines annotated around the organs to illustrate the segmentation process. The liver 1124, right adrenal gland 1126, right kidney 1128, inferior vena cava 1130, right iliac vein 1132, spleen 1134, aorta 1136, pancreas 1138, left adrenal gland 1140, left kidney 1142, gastrointestinal tract 1144, and left iliac artery 1146 are shown. Note that the dashed lines are shown to better illustrate the segmentation. Diagram C shows an exploded view. The coordinates (X, Y, Z) of each organ are modified to a new position shown in dashed lines. Software for implementing this concept includes, but is not limited to, the following processes.
A medical person viewing the medical image may select a point within the 3D digital volume (ideally near the center of the 3D digital volume and between the segmented sub-volumes) that will act as the starting point of the explosion. The liver 1148, right adrenal gland 1150, right kidney 1152, inferior vena cava and iliac veins 1154, pancreas 1156, gastrointestinal tract 1158, spleen 1160, left kidney 1162, left adrenal gland 1164, and aorta and iliac arteries 1166 are shown. Diagram D illustrates one of the ways in which the 3D digital sub-volumes may be separated as if an explosion had occurred. One way (but not the only way) is as follows: eight large cubes 1168 are created, each touching the center point and each parallel to the X, Y, Z axes (e.g., a first cube would be positive in X, positive in Y, and positive in Z; a second cube could be positive in X, negative in Y, and positive in Z; etc.). Medical personnel viewing the medical image then establish a distance factor for sub-volumes near the center point and a larger distance factor for sub-volumes farther away. These factors are then applied to all voxels within each particular sub-volume of the 3D digital image, depending on the cube in which the center voxel of the sub-volume resides. (Note that, for the first cube described above, the X, Y, Z coordinates of the voxels will be increased by the specified factor in the positive X, positive Y, and positive Z directions for all sub-volumes whose center voxels fall within that cube.) A medical person viewing the medical image may modify the factor to change the distribution between the sub-volumes during the examination process. For example, 1170 shows a medium distribution; alternatively, 1172 shows a larger distribution.
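The octant-based explosion above can be sketched as follows: the sign of each axis of a sub-volume's centroid relative to the chosen center point determines which of the eight cubes it occupies, and thus the direction in which all of its voxels are displaced. The dictionary representation, function name, and factor value are illustrative assumptions.

```python
def explode(subvolumes, center, factor=10):
    """subvolumes: dict of name -> centroid (x, y, z); returns shifted centroids.

    Each centroid moves away from `center` along the signs of its own
    coordinates, scaled by a user-controlled distribution factor.
    """
    exploded = {}
    for name, centroid in subvolumes.items():
        # The sign of each axis picks which of the eight cubes the centroid is in.
        signs = [1 if c >= ctr else -1 for c, ctr in zip(centroid, center)]
        exploded[name] = tuple(c + s * factor for c, s in zip(centroid, signs))
    return exploded

out = explode({'liver': (5, 5, 5), 'spleen': (-5, 5, -5)}, center=(0, 0, 0))
```

Increasing `factor` corresponds to switching from the medium distribution 1170 to the larger distribution 1172; in a full implementation the same offset would be applied to every voxel of the sub-volume, not just its centroid.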
FIG. 12 illustrates the use of a virtual transport viewer to perform a more accurate virtual colonoscopy. The general public tends to avoid colonoscopy examinations due to the unpleasant preparation (e.g., drinking large amounts of liquid) and discomfort during the procedure. One alternative is virtual colonography, in which a CT scan is performed and the inner mucosal surface of the colon is viewed. If no polyps are found, no treatment stage is required. However, if a polyp is found, the preparation phase is repeated at some later date and a treatment phase (i.e., colonoscopy) is performed to remove the polyp. In this figure, a virtual transport viewer is used to view the interior of the colon and determine whether a polyp is present. If not, all is well and no preparation for colonoscopy is required. If a polyp is present, it can be detected by the virtual transport viewer, and the required preparation and subsequent treatment can then be performed. Under the virtual transport viewer procedure, the patient first receives a CT scan of the colon; a 3D volume of the colon is generated from the CT 2D slices (U.S. patent 8,384,771, incorporated by reference); segmentation (U.S. patent application 15/904,092, which is incorporated by reference) identifies the colon, and subtraction extracts the contents of the colon (e.g., air, fecal matter). In this way, the colon retains its original shape; the virtual transport viewer can then be inserted and the colon examined back and forth. This inspection avoids the problem of polyps hidden by folds, which can occur during insertion of a forward-only camera.
If no polyp is found, the patient can go home reassured of continued health, having avoided the unpleasantness and discomfort of the preparation phase, the air insertion, and the colonoscopy itself. Fig. 12A shows a view of the inner surface of the colon, with air and stool removed and no polyps present. The largest circle 1200 represents the interior of the colon at the current viewing position. The texture of the inner mucosal surface 1201 is shown. The medium-sized dotted circle 1202 represents the interior of the colon at an intermediate distance from the current viewing position (e.g., 5 centimeters away). The smallest circle 1204 represents the farthest distance that the user can see from the current viewing position. A virtual marker (e.g., double-headed arrow 1206) may indicate the length of colon being actively viewed, e.g., 10 centimeters from the current location 1200 to the farthest visible location 1204. A virtual landmark 1208 indicates the distance to a critical landmark, e.g., "20 centimeters from the cecum boundary". Fig. 12B shows a view of the inner surface of the colon, with air and stool removed, containing three polyps. The largest circle 1210 represents the interior of the colon at the current viewing position. The texture of the inner mucosal surface 1211 is shown. The medium-sized dotted circle 1212 represents the interior of the colon at an intermediate distance from the current viewing position, e.g., 5 centimeters away. The smallest circle 1214 represents the farthest distance the user can see from the current viewing position.
A virtual marker (e.g., double-headed arrow 1216) may indicate the length of colon being actively viewed, e.g., 10 centimeters from the current location 1210 to the farthest visible location 1214. A villous polyp 1218 is shown, with a virtual landmark 1220 giving its distance and position, e.g., "villous polyp 3 centimeters away at 10 o'clock". A sessile polyp 1222 is shown, with a virtual landmark 1224 giving its distance and position, e.g., "sessile polyp 7 centimeters away at 4 o'clock".
Fig. 13 shows a portion of a virtual 3D volumetric medical image containing the colon portion of the large intestine, which has been stretched by voxel manipulation into a long straight tube. The contents of the tube are then segmented and subsequently subtracted/removed from the tube. Finally, the tube is split along its length axis and opened to allow viewing of the internal structure. Existing methods for physically examining the internal structure of the colon comprise: preparation; insertion of air to fill and dilate the colon; insertion of a camera with a light; and movement of the system along the length of the colon to view and record the internal structures. The resulting video recording may then be presented to medical personnel and the patient. A limitation of such video recordings is that polyps can be obscured from the camera's view by folds along the colon. Furthermore, if a polyp is found, the patient must later return for colonoscopy, which entails another preparation and subsequent resection of the polyp tissue. The virtual procedure described here does not require an unpleasant preparation phase for the preliminary examination. In this process, a CT scan with/without contrast is performed. Then, a 3D virtual image is constructed from the CT 2D slices (U.S. patent 8,384,771, incorporated by reference). Segmentation is performed (U.S. patent application 15/904,092) and tissue outside the colon is subtracted. Likewise, non-tissue contents within the colon are subtracted. The colon is then "stretched" to elongate the folds that may obscure polyps, thereby eliminating polyp obscuration by folded colon tissue. As described in U.S. patent application 16/195,251, the stretching process involves voxel manipulation. As shown, the elongated, straight virtual colon is divided into two parts along the length axis so that the internal structure can be viewed through the head display unit. The hollow viscus of the colon 1300 is straightened.
After straightening, the colon may be opened like a book 1302 and the mucosal surface viewed from the top looking inward. Once open, a first polyp is shown cut in half, with a first half 1304 and a second half 1305. A second polyp 1306 is shown intact. Alternatively, the colon may be opened like a book and pulled apart to flatten it 1308, with the mucosal surface viewed from the top looking inward. Once open, the first polyp is shown cut in half, with a first half 1309 and a second half 1310. A second polyp 1312 is shown intact. When the colon is flattened, the polyps stand out more in the 3D view on the headset.
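The "open and flatten" step above can be modeled, for the idealized case of a straightened tube, as unrolling a cylinder: each mucosal-surface voxel's angular position around the tube axis becomes an arc-length coordinate on a flat sheet. The radius value and function name are illustrative assumptions; a real colon would need a locally estimated radius per slice.

```python
from math import atan2, pi

def flatten_cylinder(surface_voxels, radius=2.0):
    """surface_voxels: (x, y, z) points with the tube axis along z.

    Returns flat-sheet (u, v) coordinates: u is arc length around the
    wall (radius * angle), v is the position along the tube axis.
    """
    flat = []
    for x, y, z in surface_voxels:
        theta = atan2(y, x)                 # angle around the tube axis
        flat.append((radius * theta, z))    # arc length becomes the u axis
    return flat

flat = flatten_cylinder([(2.0, 0.0, 5.0), (-2.0, 0.0, 5.0)])
# (2, 0, 5) unrolls to u = 0; (-2, 0, 5), half a turn away, unrolls to u = 2*pi
```

Because every wall point maps to the sheet, a polyp sitting inside a fold is no longer hidden behind neighboring wall tissue, which is the motivation given above.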
Fig. 14 shows the insertion of a virtual contrast and its flow through the vascular system. Initially, blood within the affected vessel has been removed. In the top row, the vessel is in a normal, non-pathological state, and normal blood flow is shown by placement of a virtual contrast. A proximal portion of a blood vessel 1400, an intermediate portion of blood vessels 1401a and 1401b, and a distal portion of blood vessels 1402a, 1402b, and 1402c are shown. Thus, when the virtual contrast is inserted, it will mimic normal blood flow to be imaged. Three time points are shown, including: an initial time point 1404; a subsequent point in time 1406; and a final point in time 1408. At initial time point 1404, all native blood voxels have been removed and no virtual contrast has been inserted. At subsequent time point 1406, some of the virtual contrast 1410 displayed in gray has been inserted into the proximal portion of the vessel 1400 and the middle portions of the vessels 1401a and 1401b, but no virtual contrast (displayed in white in the absence of virtual contrast) has been inserted into the distal portions of the vessels 1402a, 1402b, and 1402 c. At the final point in time 1408, the gray displayed virtual contrast 1412 has been inserted into the proximal portion of the vessel 1400, the middle portion of the vessels 1401a and 1401b, and the distal portion of the vessels 1402a, 1402b, and 1402 c. In the bottom row, the vessel is in a pathological state (i.e., a blood clot 1413 is disposed in one of the distal arterial segments). Again, the proximal portion of vessel 1400, the middle portion of vessels 1401a and 1401b, and the distal portion of vessels 1402a, 1402b, and 1402c are shown. Thus, due to the presence of the blood clot 1413, when inserted, the virtual contrast will mimic the changing blood flow pattern. Three time points are shown, including: initial time point 1414; a subsequent point in time 1416; and a final time point 1418. 
At initial time point 1414, all native blood voxels have been removed and no virtual contrast has been inserted. At a subsequent point in time 1416, some of the virtual contrast 1410, displayed in gray, has been inserted into the proximal portion of the vessel 1400 and the middle portions of the vessels 1401a and 1401b, but no virtual contrast has reached the distal portions of the vessels 1402a, 1402b, and 1402c (displayed in white in the absence of virtual contrast). At the final point in time 1418, the virtual contrast 1412, displayed in gray, has been inserted into the proximal portion of the vessel 1400, the middle portions of the vessels 1401a and 1401b, and two of the distal branches, 1402b and 1402c. However, the distal portion of the blood vessel 1402a is not filled by the virtual contrast 1412 because it is occluded by the blood clot 1413. Thus, when a blood clot is present, it is assigned an occlusion-type interactive voxel parameter. In this illustration, the insertion of the virtual contrast is shown in both the normal vessel setting and the altered vessel setting (i.e., with a blood clot). Note that the virtual contrast flows from proximal to distal up to the point of the clot, but not beyond it. The remaining branches receive the virtual contrast. Thus, assigning a blocking-type interactive voxel parameter stops the flow of virtual contrast. Alternatively, a surgical-clip blocking-type interactive voxel parameter may be used.
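The contrast propagation described above behaves like a breadth-first flood fill over the vessel tree: contrast advances from the proximal segment through connected segments, but never past a segment carrying the blocking ("clot" or "surgical clip") interactive voxel parameter. The graph representation and segment names below are illustrative assumptions.

```python
from collections import deque

def inject_contrast(graph, start, blocked):
    """graph: segment -> list of downstream segments; returns filled segments.

    Breadth-first propagation from `start`; segments in `blocked` carry the
    occlusion-type interactive voxel parameter and stop the contrast front.
    """
    filled, queue = set(), deque([start])
    while queue:
        seg = queue.popleft()
        if seg in filled or seg in blocked:
            continue                       # a clot/clip voxel stops the contrast
        filled.add(seg)
        queue.extend(graph.get(seg, []))
    return filled

vessels = {'proximal': ['mid_a', 'mid_b'],
           'mid_a': ['distal_a'],
           'mid_b': ['distal_b', 'distal_c']}
filled = inject_contrast(vessels, 'proximal', blocked={'distal_a'})
```

With `blocked` empty the fill reaches every segment, matching the top row of Fig. 14; placing the clot in `distal_a` reproduces the bottom row, where that branch stays unfilled.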
Fig. 15 illustrates an ablation technique that may be used, among other things, in conjunction with 3D digital structures within a 3D cursor. This allows close inspection of the interior of an organ. Methods based on the ablation technique include, but are not limited to, the following process. Fig. 15A shows the first step: determining the outer "shell" of an organ of interest to the medical personnel viewing the medical image (e.g., using the segmentation technique outlined in U.S. patent application 15/904,092). In this figure, the liver 1500 is shown wrapped in a 3D cursor 1502 (U.S. patent 9,980,691). To determine the outer shell of the liver, the process subtracts tissue outside the liver, working from the center voxel 1504 within the 3D cursor in an outward direction 1506 to the segmented surface of the liver tissue; alternatively, it works from the sides of the 3D cursor in an inward direction 1508 until the segmented surface of the liver tissue is reached. Fig. 15B shows the next step: eliminating the outermost layer of voxels, one voxel deep, from the outer surface; e.g., the outer shell of voxels is shown in black 1510. This step is then repeated a number of times over the remaining outer layers of tissue in the direction in which the medical personnel view the medical image. Alternatively, a layer may be selected in the X, Y, Z coordinate system (e.g., the X-Y layer with the highest Z coordinate is selected and eliminated), and this step repeated multiple times over the remaining 3D digital volume in the direction in which the medical personnel view the medical image. Fig. 15C shows the next step: as layers are subtracted, the internal tissue types are revealed. The original shell 1512 is shown, along with the shell after N ablation steps 1514. This process would be a more useful method of viewing a tumor inside a solid organ than viewing the organ from the outside alone.
Note that the volume of liver tissue has been reduced from the original volume due to the repeated ablation steps.
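The layer-by-layer ablation above amounts to repeated binary erosion: each pass removes every organ voxel that touches a non-organ neighbor (here with 6-connectivity). A pure-Python set stands in for a real voxel volume; the names and connectivity choice are illustrative assumptions.

```python
# Offsets of the six face-adjacent neighbors of a voxel.
NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def ablate(organ, steps=1):
    """organ: set of (x, y, z) voxels; peel `steps` outer layers.

    A voxel survives a pass only if all six face neighbors are also organ
    voxels, i.e., it is not on the current outer shell.
    """
    for _ in range(steps):
        organ = {v for v in organ
                 if all(tuple(a + d for a, d in zip(v, n)) in organ
                        for n in NEIGHBORS)}
    return organ

cube = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
core = ablate(cube, steps=1)   # one pass on a 3x3x3 cube leaves only the center voxel
```

The shrinking volume noted above falls directly out of this definition: every pass strictly removes the current shell, so after N steps the remaining volume is smaller than the original.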
Fig. 16 shows a virtual focus pen guiding voxel manipulation. The movement of the virtual focus pen is controlled by the medical personnel viewing the medical image. The figure shows the process of expanding the distance between closely spaced, overlapping, and otherwise indistinguishable blood vessels. If an arteriovenous malformation occurs in an area of the brain where multiple blood vessels lie close together, it is difficult to determine into which of these blood vessels to inject the therapeutic material. Digitally expanding the distance may help identify a suitable vessel for injection. Two conditions will be discussed. First, non-vascular tissue separates the vessels. Second, several vessels are clustered together with little or no intervening non-vascular tissue. If different types of tissue separate the vessels: a) segment to determine the types of tissue present in the volume of interest; b) for all non-blood and non-vessel tissue types, expand their volume by a multiplicative or additive factor; c) adjust the coordinates of the blood and blood vessels to account for the expansion of the non-blood and non-vessel tissue types. Next, for the clustered case: d) perform segmentation to determine which voxels are primarily blood and which voxels are vessel-wall tissue; e) temporarily eliminate the tissue voxels; f) apply a multiplicative (or additive) factor to all coordinates of the blood voxels; g) apply a smoothing routine to the vessels (optional); h) re-wrap the blood voxels with the vessel-wall tissue. The display of the medical personnel viewing the medical image then shows the dilated vascular structure, thereby facilitating treatment. One of the problems encountered in radiology is the difficulty of understanding the relationships between multiple complex anatomical structures. One example is a cerebral arteriovenous malformation (AVM).
A complex cerebral AVM may consist of multiple tortuous feeding arteries, an entangled nidus that may harbor aneurysms, and multiple draining veins. The precise anatomy of such complex structures is difficult to understand. The procedure is as follows. In a first step 1600, the structures of interest that the user wants to separate are identified. In this case, the structures that need to be separated are two pink blood vessels that are very close to each other. The user moves the tip of the virtual focus pen to the space between the two blood vessels. Note that the process creates a virtual red dot showing the location of the voxels that will subsequently be copied and inserted as a manipulatable tissue type. In a second step 1602, the tissue properties between the two structures of interest (e.g., cerebrospinal fluid) are characterized. In a third step 1604, a voxel manipulation is performed (e.g., inserting additional cerebrospinal-fluid-type voxels) and the positions of the two vessels are changed simultaneously such that the distance between them increases. The first vessel 1606 and the second vessel 1608 are closely spaced, with only a small portion of intervening cerebrospinal-fluid-type voxels 1610. A virtual pointer 1612 is shown, along with its tip 1614. A virtual symbol (e.g., red dot 1616) is also shown to mark the location in the imaged volume to be manipulated. The tissue between the two structures of interest (e.g., cerebrospinal fluid) can then be assigned a specific tissue property. To illustrate this, the boundaries of these voxels have been changed to light blue 1618. Note that at this point, the first blood vessel 1606 and the second blood vessel 1608 are still closely spaced.
Then, to separate the first blood vessel 1606 and the second blood vessel 1608, three additional columns of cerebrospinal fluid voxels 1620, 1622, and 1624 are inserted. Note that the separation between first blood vessel 1606 and second blood vessel 1608 has increased. This is useful because 3D viewing may now be improved, allowing a better view and understanding of the relationships between closely packed structures.
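The column-insertion step above can be sketched as a coordinate shift: everything at or beyond a marked x position moves outward by the number of inserted columns, and the gap is filled with cerebrospinal-fluid-type voxels. The dictionary representation, labels, and function name are illustrative assumptions, not the patent's data structures.

```python
def insert_csf_columns(voxels, at_x, n_columns=3, csf_label='csf'):
    """voxels: dict (x, y, z) -> tissue label.

    Shifts every voxel with x >= at_x outward by n_columns, then fills the
    resulting gap with CSF-type voxels, widening the vessel separation.
    """
    shifted = {((x + n_columns, y, z) if x >= at_x else (x, y, z)): lab
               for (x, y, z), lab in voxels.items()}
    ys = {y for (_, y, _) in voxels}
    zs = {z for (_, _, z) in voxels}
    for dx in range(n_columns):            # new CSF columns occupy the gap
        for y in ys:
            for z in zs:
                shifted[(at_x + dx, y, z)] = csf_label
    return shifted

vol = {(0, 0, 0): 'vessel_1', (1, 0, 0): 'csf', (2, 0, 0): 'vessel_2'}
out = insert_csf_columns(vol, at_x=2)
```

After the call, the second vessel sits three voxels farther from the first, with only CSF-type voxels in between, mirroring columns 1620, 1622, and 1624 in Fig. 16.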
Fig. 17 illustrates the overall imaging volume and the various types of subvolumes. In contrast to the slice-by-slice approach traditionally used in radiology, a sub-volume-by-sub-volume approach is shown here. Note that the slices may be arranged to create a volume as described in U.S. Patent 8,384,771. A medical professional (or computer program) may first select a particular point 1700 to mark the boundary between the first set of 2D slices 1702 and the second set of 2D slices 1704. The medical professional (or computer program) may then select a second particular point 1706 to mark the boundary between the second set of 2D slices 1704 and the third set of 2D slices 1708. Once the slices of a particular sub-volume are determined, the sub-volume can be pulled out of the 2D slice stack and viewed and analyzed. In this illustration, the first subvolume 1710, created from the first set of 2D slices 1702, is pulled up to one side of that set. The second subvolume 1712, created from the second set of 2D slices 1704, is pulled to one side of that set. The third subvolume 1714, created from the third set of 2D slices 1708, is pulled down to the opposite side of that set. Note that a variety of methods of assigning stack boundaries may be used. For example, a convenient subvolume size (e.g., 20 2D slices) is selected and the stack is divided accordingly (e.g., slices 1-20 assigned to subvolume #1, slices 21-40 assigned to subvolume #2, etc.). The number of slices in each stack may vary. Alternatively, the user may touch and select the boundaries, such as points 1700 and 1706, using a virtual pen. The total image volume can be divided into many different combinations of subvolumes of different sizes, which can be viewed from many different angles (U.S. Patent 8,384,771). In this example, a sub-volume is made by arranging a set of slices.
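The division of a 2D slice stack into sub-volumes at user-chosen boundaries can be sketched as follows. This is a minimal illustration under the assumption that the stack is held as a 3D array with the slice index on the first axis; the function name is hypothetical:

```python
import numpy as np

def partition_slices(volume: np.ndarray, boundaries: list) -> list:
    """Split a stack of 2D slices (first axis) into sub-volumes at the
    chosen boundary indices. boundaries=[20, 45] yields slices 0-19,
    20-44, and 45-end, so stacks need not be of equal size."""
    return np.split(volume, boundaries, axis=0)

stack = np.zeros((60, 4, 4))          # 60 hypothetical 2D slices
subs = partition_slices(stack, [20, 45])
# subs[0], subs[1], subs[2] hold 20, 25, and 15 slices respectively.
```

With uniform boundaries (e.g., every 20 slices), the same call reproduces the "slices 1-20 to subvolume #1, slices 21-40 to subvolume #2" scheme described above.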
Fig. 18 shows a sequence of 3D cursor movements through the volume of interest in a random pattern. Using a random pattern driven by items of possible interest, the reviewing medical personnel may view the entire virtual medical image volume at once and apply techniques such as changing transparency and applying false color to structures whose densities differ from the nominal density of the organ being examined. The reviewer may then move and resize the 3D cursor to visit the various objects that may merit detailed review. In this illustration, the total scan volume 1800 is shown. A first subvolume displayed in the 3D cursor is shown at a first point in time 1802. The 3D cursor is then moved in direction 1804. The subvolume displayed in the resized 3D cursor is shown at a subsequent point in time 1806. The 3D cursor is then moved in direction 1808. The subvolume displayed in the again-resized 3D cursor is shown at a subsequent point in time 1810. The 3D cursor is then moved in direction 1812. The subvolume displayed in the resized 3D cursor is shown at a subsequent point in time 1814. The figure shows a number of movements and resizings of the 3D cursor to view example tissues of interest. This type of search pattern may speed the review process. The search mode employs a 3D cursor (U.S. Patent 9,980,691 and U.S. Patent Application 15/878,463). Note: when reviewing medical images in the original 2D format, the eye may jump from one point to another according to the viewer's scan path, may not see a large portion of the slice, and may therefore miss small findings. When using a 3D cursor, these findings subtend a larger portion of the displayed image and the probability of detection increases proportionally.
If such a random search pattern is used, the computer program will track which portions of the total volume have been displayed and which have not. If some portion of the total volume is never displayed in the 3D cursor, the program will prompt the user to view those areas. In some embodiments, the sub-volumes are displayed to medical personnel in an automatic mode including, but not limited to, a windshield-wiper mode or a layer-by-layer mode. At any point in time, the 3D cursor and/or the subvolume within the 3D cursor can be copied and pasted onto the virtual movable table for later viewing. For example, a radiologist may wish to first locate all potential or definite abnormalities, and then investigate each abnormal finding in detail at a later time. Each time an abnormal imaging finding is identified, the radiologist may place the finding in a 3D cursor and ensure that the entire finding is included in the subvolume (e.g., the entirety of the liver is included in the 3D cursor, which defines the boundaries of the subvolume). The subvolume is then placed in a virtual bucket or virtual 3D clipboard. The remaining total imaging volume is then examined. Once the entire total image volume has been reviewed and all abnormal subvolumes have been placed in the virtual bucket or virtual 3D clipboard, the radiologist will work through the volumes in the virtual bucket or virtual 3D clipboard.
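The two-pass workflow above — set abnormal sub-volumes aside, finish the total volume, then revisit each finding — can be sketched as a simple first-in-first-out store. The class name, method names, and labels are hypothetical illustrations, not terminology from the disclosure:

```python
import numpy as np
from collections import deque

class VirtualClipboard:
    """Hypothetical virtual 3D clipboard: abnormal sub-volumes are set
    aside during the first pass and revisited in detail afterwards."""
    def __init__(self):
        self._queue = deque()

    def paste(self, label: str, subvolume: np.ndarray):
        # Copy so that later edits to the live volume do not alter the stored finding.
        self._queue.append((label, subvolume.copy()))

    def review(self):
        """Yield stored findings in the order they were pasted."""
        while self._queue:
            yield self._queue.popleft()

clip = VirtualClipboard()
clip.paste("liver finding", np.zeros((8, 8, 8)))
clip.paste("lung finding", np.zeros((4, 4, 4)))
labels = [label for label, _ in clip.review()]
```

After the first pass completes, iterating over `review()` drains the clipboard in order, mirroring the radiologist stepping through each queued abnormality.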
Fig. 19 shows an example of a systematic mode (e.g., a sequential virtual windshield-wiper-type mode) of viewing a medical image. An x-direction 1900, a y-direction 1902, and a z-direction 1904 are shown, along with the total imaging volume 1906. The virtual windshield wiper may have multiple implementations in which the cursor is moved in a variety of systematic ways. As shown, the first subvolume 1908 is examined in the 3D cursor at an initial point in time, where one corner 1910 of the 3D cursor is at position (0, 0, 0). The 3D cursor is first moved so that the x-coordinate increases 1912 while the y- and z-coordinates are unchanged (as shown by the dashed arrow), and the subvolume along the direction of movement can be examined. Thus, the pattern is to sequentially increase the x-coordinate while keeping the y- and z-coordinates constant. Once the corner of the 3D cursor reaches the maximum x value of the total imaged volume 1906, the cursor moves in the increasing y-direction 1916 while the x- and z-coordinates are unchanged (as shown by the dashed arrow), and the subvolume along that direction of movement can be examined. Thus, when the maximum x-coordinate is reached, the y-coordinate is increased by one increment and the x-coordinate is sequentially decreased until the minimum x-coordinate is reached, whereupon the y-coordinate is increased again. The process of moving 1918 the 3D cursor in the x-direction 1900 and y-direction 1902 is then repeated until the bottom layer of the total imaged volume 1906 has been fully inspected by the 3D cursor. Upon completion of a layer, the z-coordinate is increased: the 3D cursor moves 1920 upward in the z-direction 1904. Note that during this systematic search mode, an anomaly 1922 may be found at a particular 3D cursor position 1924.
Such anomalies may be placed into a virtual bucket or virtual movable table for further analysis. As shown by arrows 1926, multiple other systematic movements of the 3D cursor may be performed through the total imaged volume 1906 until all subvolumes within it have been examined and the 3D cursor reaches its final point 1928. One variation of the windshield-wiper mode is "fly-back", in which, after each row is completed, the pattern resumes at an incremented y-coordinate and the minimum x-coordinate, scanning every row in the same x-direction. This type of search pattern helps to ensure that a thorough check has been performed. The search mode employs a 3D cursor (U.S. Patent 9,980,691 and U.S. Patent Application 15/878,463). Note: when reviewing medical images in the original 2D format, the eye may jump from one point to another according to the viewer's scan path, may not see a large portion of the slice, and small findings may therefore be missed. When using a 3D cursor, these findings subtend a larger portion of the displayed image and the probability of detection increases proportionally. In some embodiments, the sub-volumes are displayed to medical personnel in an automatic mode including, but not limited to, a windshield-wiper mode or a layer-by-layer mode. An automatic search mode over the volume of interest may improve the probability of detection. In this illustration, an automatic search mode is shown as the 3D cursor moves through the volume of interest. Note that a mass is identified in a later sub-volume. In the volume of interest, the automatic search mode proceeds back and forth across each layer (similar to a windshield wiper) and then back and forth across the next layer.
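The back-and-forth traversal described for Fig. 19 — sweep x, reverse direction on each y step, then advance z per layer — can be sketched as a coordinate generator. The function name and grid granularity are illustrative assumptions:

```python
def wiper_positions(nx: int, ny: int, nz: int):
    """Yield 3D-cursor corner coordinates in a windshield-wiper pattern:
    sweep x back and forth, step y after each sweep, and step z when a
    full layer has been covered."""
    for z in range(nz):
        for y in range(ny):
            # Reverse the x sweep on alternate rows (the 'wiper' motion).
            xs = range(nx) if y % 2 == 0 else range(nx - 1, -1, -1)
            for x in xs:
                yield (x, y, z)

path = list(wiper_positions(3, 2, 1))
# Row y=0 runs left-to-right, row y=1 right-to-left.
```

The fly-back variant mentioned above would instead always use `range(nx)`, returning to the minimum x-coordinate at the start of each row.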
Fig. 20 shows a volume of interest to be reviewed and a process in which any areas missed by the intended review can be highlighted to the medical personnel performing the review. These identified sub-volumes may then be reviewed to ensure completeness. The process calls for sequential selection of sub-volumes of the volume of interest by using a 3D cursor (U.S. Patent 9,980,691 and U.S. Patent Application 15/878,463) to step through the volume to be examined (e.g., following the medical facility's checklist). After the stepwise process is completed, the question may arise whether the entire volume has actually been inspected. In this embodiment, the examined volumes contained in each 3D cursor placement are summed and subtracted from the total original volume. This can reveal that some parts of the original volume intended for review were missed. In this implementation, these missing portions are highlighted to the medical personnel performing the review, reminding him/her to continue reviewing and inspecting them. Note: when reviewing medical images in the original 2D format, the eye may jump from one point to another according to the viewer's scan path, may not see a large portion of the slice, and small findings may therefore be missed. When using a 3D cursor, these findings subtend a larger portion of the displayed image and the probability of detection increases proportionally. The figure shows the sequence in which the 3D cursor is moved through the organ of interest. A volume of interest (i.e., liver) 2000 is shown. The subvolume 2002 at time point #1 is shown. The 3D cursor is moved 2004 in a systematic manner within the volume of interest 2000, such as that depicted in Fig. 19. The final sub-volume 2006 is shown at time point #N. The medical personnel operate the controls (e.g., stepping from one increment to the next) to view the medical image.
Alternatively, the user may control the movement of the 3D cursor through a joystick or other geo-registered tool, as discussed in U.S. Patent Application 16/524,275. Finally, the volume displayed in the 3D cursor may be tracked and then reviewed at a later time (i.e., before the examination is complete). The size of the sub-volume may be varied based on indications of cancer from conventional screening. Furthermore, recording the position of the 3D cursor over time and comparing the displayed subvolumes to the total volume allows identification of subvolumes 2008 that have not yet been displayed to the medical professional. Alternatively, by subtracting the already-displayed sub-volumes from the entire volume, it can be determined which sub-volumes have not been displayed. Note that several regions of the structure are missed (i.e., not included in any 3D cursor volume) 2008; these regions may be tracked, and the radiologist may choose to review them before the examination is complete. These missed sub-volumes 2008 can be moved to a new location and examined. In another implementation, the user selects the size of the 3D cursor and the rate of cursor movement, and the computer performs the automatic movement by stepping through the volume of interest according to the items on the checklist. If desired, a sub-volume within the cube can be altered so that an imaged structure is no longer apparent (e.g., deleted, changed in Hounsfield units, etc.).
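The subtraction of the displayed sub-volumes from the total volume can be sketched with a boolean coverage mask. This is a minimal illustration assuming axis-aligned cursor placements given as (corner, size) pairs; the function name is hypothetical:

```python
import numpy as np

def missed_regions(total_shape, displayed_cursors):
    """Sum the volumes contained in each 3D-cursor placement and subtract
    from the total volume; True voxels in the result were never displayed
    and should be highlighted to the reviewer."""
    shown = np.zeros(total_shape, dtype=bool)
    for (x, y, z), (dx, dy, dz) in displayed_cursors:
        shown[x:x + dx, y:y + dy, z:z + dz] = True
    return ~shown  # the complement: voxels not covered by any cursor

# One cursor placement covering only the lower half of a 4x4x4 volume.
missed = missed_regions((4, 4, 4), [((0, 0, 0), (4, 4, 2))])
```

Any `True` voxels remaining in `missed` mark regions, like 2008 in Fig. 20, that the reviewer should be prompted to revisit before the examination is complete.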
Fig. 21 shows an icon of a human that includes the approximate position of the 3D virtual cursor within the human body. The icon may be used in conjunction with the display of a 3D medical image. When medical personnel are reviewing a medical image examination volume, a quick glance at the icon can re-establish where the tissue of interest lies within the human body. The icon is also useful in discussions between medical personnel. In Fig. 21A, an icon of a human body in a vertical position, facing forward 2100, is shown. The icon outlines the sub-volume being examined. Markings include, but are not limited to: marking of a region of interest indicated by the referring physician; marking of the subdivided volume the radiologist is actively working on (e.g., the radiologist is actively working on the liver item on the checklist, so the liver subdivision is marked on the icon); marking of the subvolume the radiologist is examining (e.g., a subvolume within the liver corresponding to the 3D cursor); and marking of viewing angles associated with the icon. For example, the referring physician can indicate an area of interest (e.g., by creating a computer-generated patient-specific image and sending it to the radiologist, as described in U.S. Provisional Patent Application 62/843,612), and the area can be marked on the virtual icon 2102. Next, the subdivided volume 2104 (e.g., liver) that the radiologist is actively working on may be marked. Next, the subvolume within the 3D cursor 2106 can be marked. Additional symbols may be displayed outside the icon. For example, initial viewing perspective symbol 2108 represents the initial viewing perspective.
The movement symbol 2110 represents a change in position from the initial viewing perspective, represented by initial viewing perspective symbol 2108, to the subsequent viewing perspective, represented by subsequent viewing perspective symbol 2112. In Fig. 21B, an augmented reality headset 2114 is shown with a left-eye display 2116 showing a left-eye view of the 3D cursor 2118 and a right-eye display 2120 showing a right-eye view of the 3D cursor 2122. Note that a left-eye view of the marked 3D icon 2124 is shown in the left-eye display 2116 and a right-eye view of the marked 3D icon 2126 in the right-eye display. Thus, the outline of the sub-volume being examined may be one of the icon's markings, and the approximate position of the 3D cursor within the human icon is another. The orientation of the body, as well as whether the icon is displayed at all, is under the control of the medical personnel viewing the medical image. For example, the icon may be rotated, translated, deformed (by the corresponding voxel operation if desired), or otherwise altered as directed by the radiologist. Further, the marker icon of the 3D cursor may be added to the diagnostic 2D radiology monitor. When a medical person viewing a medical image rotates, tilts, and zooms the tissue contained in the 3D cursor, it can be useful to see the position of the current viewpoint relative to the initial viewpoint (e.g., the voxel positions have been changed from an initial orientation to a new orientation by roll, pitch, and/or yaw commands). The figure shows a solid arrow originating at the initial viewpoint and terminating at the current viewpoint. Whether the arrow is displayed is likewise under the control of the medical personnel viewing the medical image.
As the contents of the 3D cursor are rotated and viewed from different viewpoints, the icon makes it possible to see the current viewpoint's position relative to the original position.
FIG. 22 illustrates a virtual movable table for storing virtual images of suspicious tissue, organized by checklist category. The figure depicts a virtual movable table 2200 with virtual bins corresponding to items on a healthcare facility checklist 2202, as well as a bin for emergency items 2204 and a general/miscellaneous bin 2206 (e.g., image artifacts, teaching cases, quality improvement, etc.). Emergency bin 2204 may be used for findings that constitute critical, time-sensitive information. The virtual table is movable: the user can view it on an augmented reality headset, off to one side of the imaging volume the radiologist is currently processing, and can move or resize it to fit the workspace. Items deemed important by medical personnel are "dragged and dropped" into the corresponding virtual bin according to the checklist item under review. For a bin without important items, the corresponding checklist item in the report will show an "unremarkable" statement; this statement disappears when an item is added, and the radiologist replaces it with the appropriate description. In addition, medical personnel beyond the reviewer will be alerted and given access to the "emergency bin" containing critical items, which can then be jointly examined by the treating and reviewing staff as soon as possible. A table with bins aids in the preparation of reports and improves their quality and completeness. Current reports are nominally limited to word descriptions; with this process, annotated graphics containing the tissue in question may be added.
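The bin-to-report mapping described above — empty bins default to an "unremarkable" statement, populated bins carry the dragged-in findings — can be sketched as follows. The checklist items, function name, and report structure are hypothetical illustrations:

```python
# Hypothetical checklist items for this sketch.
CHECKLIST = ["liver", "kidneys", "lungs"]

def build_report(bins: dict) -> dict:
    """Assemble a report section per checklist item: bins with no
    findings produce the default 'Unremarkable.' statement, while
    populated bins list their dragged-and-dropped findings."""
    report = {}
    for item in CHECKLIST:
        findings = bins.get(item, [])
        report[item] = "; ".join(findings) if findings else "Unremarkable."
    return report

# Only the liver bin has received a finding.
bins = {"liver": ["hypodense lesion placed in 3D cursor"]}
report = build_report(bins)
```

Here `report["kidneys"]` and `report["lungs"]` carry the default statement, which the radiologist would replace once an item is dropped into those bins.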
Fig. 23 shows a radiology report sample that includes images processed using a virtual tool. Note that the radiology report sample 2300 includes a 3D cursor and an abnormal image finding.

Claims (62)

1. A method, comprising:
selecting a virtual tool suite from a set of available virtual tools in response to user input for a selected three-dimensional image volume loaded in the image processing system;
geographically registering each virtual tool of the selected suite with the three-dimensional image volume; and
manipulating the three-dimensional image volume in response to manipulation of some of the virtual toolkits.
2. The method of claim 1, comprising selecting the virtual tool suite from a set of available virtual tools, the virtual tool suite comprising: a virtual focus pen; a virtual 3D cursor; a virtual transport viewer; a virtual base; a virtual knife; a virtual catheter; a virtual signpost; a virtual ablation instrument; a virtual table; a virtual comparison tool; and a virtual icon.
3. The method of claim 1, wherein the virtual tool kit includes a virtual focal pen, and including manipulating the three-dimensional image volume by highlighting a portion of the three-dimensional image volume and adding annotations in response to the virtual focal pen.
4. A method according to claim 3 comprising altering a three-dimensional image volume voxel adjacent the tip of the virtual focus pen.
5. The method of claim 1, wherein the virtual toolkit includes a virtual knife, and comprising manipulating the three-dimensional image volume in response to the virtual knife, the virtual knife including at least one of the group consisting of separating voxels from the three-dimensional image volume and changing locations of voxels.
6. The method of claim 1, wherein the virtual toolkit includes a virtual transport viewer, and including manipulating the three-dimensional image volume in response to the virtual transport viewer by moving the virtual transport viewer within a hollow structure of the three-dimensional image volume and presenting images from a perspective of the virtual transport viewer.
7. The method of claim 6, comprising performing a virtual colonoscopy using the virtual transport viewer.
8. The method of claim 1, wherein the virtual kit includes a virtual contrast material, and including manipulating the three-dimensional image volume by inserting visible moving voxels into the three-dimensional image volume in response to the virtual contrast material.
9. The method of claim 8, comprising assigning different data unit (e.g., Hounsfield) values to different ones of the moving voxels.
10. The method of claim 1, wherein manipulating the three-dimensional image volume in response to manipulation of some of the virtual toolkits comprises removing voxels of an organ shell in a repetitive shell-by-shell manner.
11. The method of claim 1, wherein manipulating the three-dimensional image volume in response to manipulation of some of the virtual toolkits comprises separating the closely distributed tissues of interest by adjusting coordinates of voxels of the tissues of interest.
12. The method of claim 1, wherein the virtual toolkit includes a virtual table, and including manipulating the three-dimensional image volume by placing portions of the three-dimensional image volume in a virtual bin of the virtual table in response to the virtual table.
13. The method of claim 1, wherein the virtual kit includes a virtual catheter and including manipulating the three-dimensional image volume in response to the virtual catheter by limiting movement of the virtual catheter to a list of blood voxels within the selected vessel.
14. The method of claim 1, comprising automatically displaying information associated with the selected sub-volume of the three-dimensional image volume.
15. The method of claim 14, comprising displaying metadata and current conditions of a patient prompted to acquire the medical image volume, medical history of the patient, laboratory results, and pathology results.
16. The method of claim 1, comprising displaying the information using a virtual windshield.
17. The method of claim 1, comprising displaying the distance to the key metric using a virtual signpost.
18. The method of claim 1, comprising displaying a visually-assisted icon indicating a viewing perspective.
19. The method of claim 1, comprising displaying a visual aid icon indicating a finding detected by an artificial intelligence algorithm.
20. The method of claim 1, comprising displaying a visually assisted icon modified in relation to the imaging volume (e.g., indicating a position of a displayed sub-volume relative to the three-dimensional image volume).
21. The method of claim 1, comprising selecting at least one sub-volume with a volume-subtending three-dimensional cursor.
22. The method of claim 21, comprising selecting the sub-volume from a plurality of sub-volumes of a predetermined list of sub-volumes.
23. The method of claim 22, comprising sequentially displaying each of said sub-volumes of said list.
24. The method of claim 21, comprising selecting the sub-volume from a plurality of sub-volumes defined by sequential search pattern coordinates.
25. The method of claim 21, comprising selecting the sub-volume from a plurality of sub-volumes defined by random search pattern coordinates.
26. The method of claim 1, wherein manipulating the three-dimensional image volume comprises at least one of: changing the voxel size; changing a voxel shape; changing the voxel position; changing the voxel direction; changing voxel internal parameters; creating a voxel; and eliminating voxels.
27. The method of claim 1, wherein manipulating the three-dimensional image volume comprises dividing a volume of a sub-volume of interest into a plurality of portions based on a common characteristic.
28. The method of claim 1, wherein manipulating the three-dimensional image volume comprises generating an exploded view in which a subdivision of the three-dimensional image is moved away from a point in the three-dimensional image volume.
29. The method of claim 1, comprising employing a virtual eye tracker symbol to assist human eye viewing.
30. The method of claim 29 comprising causing the virtual eye tracker symbol to appear and disappear at spatially separated locations such that the human eye can perform saccades and jump from one location to another.
31. The method of claim 29, comprising smoothly moving the virtual eye tracker symbol along a path such that the human eye can perform smooth tracking.
32. An apparatus, comprising:
an image processing system comprising an interface to select a suite of virtual tools from a set of available virtual tools for a selected three-dimensional image volume display loaded in the image processing system to geographically register each virtual tool of the selected suite with the three-dimensional image volume in response to user input, and an image processor to manipulate the three-dimensional image volume in response to manipulation of some of the suite of virtual tools.
33. The apparatus of claim 32, wherein the virtual tool suite is selected from a set of available virtual tools comprising: a virtual focus pen; a virtual 3D cursor; a virtual transport viewer; a virtual base; a virtual knife; a virtual catheter; a virtual signpost; a virtual ablation instrument; a virtual table; a virtual comparison tool; and a virtual icon.
34. The apparatus of claim 32, wherein the virtual tool kit includes a virtual focus pen, and the image processor manipulates the three-dimensional image volume in response to the virtual focus pen by highlighting a portion of the three-dimensional image volume and adding annotations.
35. The apparatus of claim 34, wherein the image processor alters a three-dimensional image volume voxel adjacent to a tip of the virtual focus pen.
36. The apparatus of claim 32, wherein the virtual kit of tools includes a virtual knife and the image processor manipulates the three-dimensional image volume in response to the virtual knife, the virtual knife including at least one of the group consisting of separating voxels from the three-dimensional image volume and changing locations of voxels.
37. The apparatus of claim 32, wherein the virtual toolkit comprises a virtual transport viewer, and wherein the image processor manipulates the three-dimensional image volume in response to the virtual transport viewer by moving the virtual transport viewer within a hollow structure of the three-dimensional image volume and rendering images from a perspective of the virtual transport viewer.
38. The apparatus of claim 37, wherein the virtual transport viewer is configured to perform a virtual colonoscopy via the interface.
39. The apparatus of claim 32, the virtual kit comprising a virtual contrast material, and wherein the image processor manipulates the three-dimensional image volume by inserting visible moving voxels into the three-dimensional image volume in response to the virtual contrast material.
40. The apparatus according to claim 39, wherein the image processor assigns different data unit (e.g., Hounsfield) values to different ones of the moving voxels.
41. The apparatus of claim 32, wherein the image processor manipulates the three-dimensional image volume by repeatedly removing voxels of organ shells on a shell-by-shell basis in response to manipulation of some of the virtual toolkits.
42. The apparatus of claim 32, wherein the image processor manipulates the three-dimensional image volume in response to manipulation of some of the virtual toolkits by separating the closely distributed tissue of interest by adjusting coordinates of voxels of the tissue of interest.
43. The apparatus of claim 32, wherein the virtual toolkit comprises a virtual table, and comprising the image processor to manipulate the three-dimensional image volume in response to the virtual table by placing portions of the three-dimensional image volume in a virtual bin of the virtual table.
44. The apparatus of claim 32, wherein the virtual tool kit includes a virtual catheter and includes the image processor to manipulate the three-dimensional image volume in response to the virtual catheter by limiting movement of the virtual catheter to a list of blood voxels within a selected vessel.
45. The apparatus of claim 32, including said interface for automatically displaying information associated with a selected sub-volume of said three-dimensional image volume.
46. The apparatus of claim 45, comprising said interface to display metadata and current conditions of a patient prompting acquisition of said medical image volume, medical history of the patient, laboratory results, and pathology results.
47. The apparatus of claim 45, including said interface for displaying said information using a virtual windshield.
48. The apparatus of claim 45 including said interface for displaying distances to key measures using virtual signposts.
49. The device of claim 32, comprising the interface displaying a visually-assisted icon indicative of a viewing perspective.
50. The apparatus of claim 32, comprising the interface displaying a visually-aided icon indicative of a finding detected by an artificial intelligence algorithm.
51. The apparatus of claim 32, comprising the interface displaying a visually assisted icon modified in relation to the imaging volume (e.g., indicating a position of a displayed sub-volume relative to the three-dimensional image volume).
52. The apparatus of claim 32, including said interface for receiving a selection of a sub-volume with a volume-subtending three-dimensional cursor.
53. The apparatus of claim 52, wherein the selected sub-volume is one of a plurality of sub-volumes of a predetermined list of sub-volumes.
54. The apparatus of claim 53, comprising said interface to sequentially display each of said sub-volumes of said list.
55. The apparatus of claim 52, wherein the selected sub-volume is one of a plurality of sub-volumes defined by sequential search pattern coordinates.
56. The apparatus of claim 52, wherein the selected sub-volume is one of a plurality of sub-volumes defined by random search pattern coordinates.
57. The apparatus of claim 32, wherein the image processor manipulating the three-dimensional image volume comprises at least one of: changing the voxel size; changing a voxel shape; changing the voxel position; changing the voxel direction; changing voxel internal parameters; creating a voxel; and eliminating voxels.
58. The apparatus of claim 32, wherein the image processor manipulates the three-dimensional image volume by dividing a volume of a sub-volume of interest into a plurality of portions based on a common characteristic.
59. The apparatus of claim 32, wherein the image processor manipulates the three-dimensional image volume by generating an exploded view in which a subdivision of the three-dimensional image is moved away from a point in the three-dimensional image volume.
60. The apparatus of claim 32, wherein the interface comprises a virtual eye tracker symbol.
61. The apparatus of claim 60 wherein said virtual eye tracker symbol appears and disappears at spatially separated locations so that the human eye can perform saccades and jump from one location to another.
62. The apparatus as recited in claim 60, wherein said virtual eye tracker symbol moves smoothly along a path.
CN201980062928.3A 2018-08-24 2019-08-23 Virtual kit for radiologists Pending CN113424130A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201862722513P 2018-08-24 2018-08-24
US62/722,513 2018-08-24
PCT/US2019/047891 WO2020041693A1 (en) 2018-08-24 2019-08-23 A virtual tool kit for radiologists

Publications (1)

Publication Number Publication Date
CN113424130A true CN113424130A (en) 2021-09-21

Family

ID=69591478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980062928.3A Pending CN113424130A (en) 2018-08-24 2019-08-23 Virtual kit for radiologists

Country Status (6)

Country Link
EP (1) EP3841450A1 (en)
JP (1) JP2021533940A (en)
CN (1) CN113424130A (en)
AU (1) AU2019325414A1 (en)
CA (1) CA3109234A1 (en)
WO (1) WO2020041693A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11207133B1 (en) * 2018-09-10 2021-12-28 David Byron Douglas Method and apparatus for the interaction of virtual tools and geo-registered tools
EP4052232A2 (en) 2019-12-31 2022-09-07 Novocure GmbH Methods, systems, and apparatuses for image segmentation
EP4134971A1 (en) * 2021-08-09 2023-02-15 Ai Medical AG Method and devices for supporting the observation of an abnormality in a body portion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9900669B2 (en) * 2004-11-02 2018-02-20 Pierre Touma Wireless motion sensor system and method
US9563266B2 (en) * 2012-09-27 2017-02-07 Immersivetouch, Inc. Haptic augmented and virtual reality system for simulation of surgical procedures
AU2017301435B2 (en) * 2016-07-25 2022-07-14 Magic Leap, Inc. Imaging modification, display and visualization using augmented and virtual reality eyewear

Also Published As

Publication number Publication date
AU2019325414A1 (en) 2021-03-25
JP2021533940A (en) 2021-12-09
CA3109234A1 (en) 2020-02-27
WO2020041693A1 (en) 2020-02-27
EP3841450A1 (en) 2021-06-30

Similar Documents

Publication Publication Date Title
US11666385B2 (en) Systems and methods for augmented reality guidance
US10878639B2 (en) Interactive voxel manipulation in volumetric medical imaging for virtual motion, deformable tissue, and virtual radiological dissection
US11547499B2 (en) Dynamic and interactive navigation in a surgical environment
US11036311B2 (en) Method and apparatus for 3D viewing of images on a head display unit
US11594002B2 (en) Overlay and manipulation of medical images in a virtual environment
US11183296B1 (en) Method and apparatus for simulated contrast for CT and MRI examinations
US20190021677A1 (en) Methods and systems for classification and assessment using machine learning
JP6080248B2 (en) Three-dimensional image display apparatus and method, and program
KR20210104715A (en) Augmented reality display using optical code
JP2020191130A (en) Systems and methods for validating and correcting automated medical image annotations
US11798249B1 (en) Using tangible tools to manipulate 3D virtual objects
JP2017525418A (en) Intelligent display
US20070237369A1 (en) Method for displaying a number of images as well as an imaging system for executing the method
GB2395880A (en) Curved multi-planar reformatting of 3D volume data sets
CN113424130A (en) Virtual kit for radiologists
US10712837B1 (en) Using geo-registered tools to manipulate three-dimensional medical images
CN113645896A (en) System for surgical planning, surgical navigation and imaging
US11207133B1 (en) Method and apparatus for the interaction of virtual tools and geo-registered tools
CN116313028A (en) Medical assistance device, method, and computer-readable storage medium
Cecotti et al. Serious game for medical imaging in fully immersive virtual reality
US11763934B1 (en) Method and apparatus for a simulated physiologic change for CT and MRI examinations
JP2014526301A (en) Notes to tubular structures in medical images
CN115868998A (en) Computer-implemented method for performing at least one measurement using medical imaging
CN118251732A (en) Processing image data for evaluating clinical problems
MONDINO Integration of a virtual reality environment for percutaneous renal puncture in the routine clinical practice of a tertiary department of interventional urology: a feasibility study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination