MXPA99002340A - System and method for performing a three-dimensional virtual examination - Google Patents

System and method for performing a three-dimensional virtual examination

Info

Publication number
MXPA99002340A
MXPA99002340A MXPA/A/1999/002340A MX9902340A
Authority
MX
Mexico
Prior art keywords
clause
volume
organ
dimensional
data
Prior art date
Application number
MXPA/A/1999/002340A
Other languages
Spanish (es)
Inventor
Arie E. Kaufman
Lichan Hong
Zhengrong Liang
Mark R. Wax
Ajay Viswambharan
Original Assignee
The Research Foundation Of State University Of New York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Research Foundation Of State University Of New York filed Critical The Research Foundation Of State University Of New York
Publication of MXPA99002340A


Abstract

The invention is a system and method for generating a three-dimensional visualization image of an object, such as an organ (1301, 1303), using volume visualization techniques, and for exploring the image using a guided navigation system which allows the operator to travel along a flight path and to adjust the view to a particular portion of the image of interest in order, for example, to identify polyps (1305), cysts or other abnormal features in the visualized organ. An electronic biopsy can also be performed on an identified growth or mass in the visualized object.

Description

SYSTEM AND METHOD FOR PERFORMING A THREE-DIMENSIONAL VIRTUAL EXAMINATION

Technical Field

The present invention relates to a system and method for performing a volume-based three-dimensional virtual examination using planned and guided navigation techniques. One such application is virtual endoscopy.
Background of the Invention

Colon cancer continues to be a leading cause of death throughout the world. Early detection of cancerous growths, which in the human colon initially manifest themselves as polyps, can greatly improve a patient's chance of recovery. Currently, there are two conventional ways of detecting polyps or other masses in the colon of a patient. The first method is a colonoscopy procedure, which uses a flexible fiber-optic tube called a colonoscope to visually examine the colon by way of physical rectal entry with the device. The doctor can manipulate the tube to search for abnormal growths in the colon. Colonoscopy, although reliable, is relatively costly in both money and time, and is an invasive, uncomfortable and painful procedure for the patient.
The second detection technique is the use of a barium enema and two-dimensional imaging of the colon. The barium enema is used to coat the colon with barium, and a two-dimensional X-ray image is taken to capture an image of the colon. However, barium enemas may not always provide a view of the entire colon, they require extensive pretreatment of the patient, they are often operator-dependent, they expose the patient to excessive radiation and they can be less sensitive than a colonoscopy. Due to the deficiencies in the conventional practices described above, a more reliable, less intrusive and less expensive way to screen the colon for polyps is desirable. A method for examining other human organs, such as the lungs, for masses in a reliable and cost-effective way and with less patient discomfort is also desirable.
Two-dimensional ("2D") visualization of human organs using currently available medical imaging devices, such as computed tomography and MRI (magnetic resonance imaging), has been widely used for patient diagnosis.
Three-dimensional images can be formed by stacking and interpolating between the two-dimensional pictures produced by the scanning machines. Imaging an organ and visualizing its volume in three-dimensional space would be beneficial due to the absence of physical intrusion and the ease of data manipulation. However, the exploration of the three-dimensional volume image must be carried out properly in order to fully exploit the advantages of virtually viewing an organ from the inside.
When viewing a three-dimensional ("3D") volume virtual image of an environment, a functional model must be used to explore the virtual space. One possible model is a virtual camera which serves as a reference point for the observer exploring the virtual space. Camera control in the context of navigation within a general 3D virtual environment has been studied previously. There are two conventional types of camera control offered for navigating a virtual space. The first gives the operator full control of the camera, allowing the operator to manipulate the camera into different positions and orientations to achieve the desired view. The operator will in effect pilot the camera. This allows the operator to explore a particular section of interest while ignoring other sections. However, complete control of a camera in a large domain would be tedious and tiring, and an operator might not view all the important features between the start and finish points of the exploration. The camera could also easily get "lost" in remote areas or "crash" into one of the walls because of an inattentive operator or numerous unexpected obstacles.
The second technique of camera control is a planned navigation method, which assigns the camera a predetermined path to take which cannot be changed by the operator. This is akin to having an "engaged autopilot". It allows the operator to concentrate on the virtual space being observed without having to worry about steering into the walls of the environment being examined. However, it does not give the observer the flexibility to alter the course or investigate an interesting area viewed along the flight path. It would be desirable to use a combination of the two navigation techniques described above to realize the advantages of both while minimizing their respective drawbacks. It would be desirable to apply a flexible navigation technique to the examination of human or animal organs represented in virtual three-dimensional space in order to perform a thorough, painless and non-intrusive examination. The desired navigation technique would additionally allow for a complete examination of a virtual organ in three-dimensional space by an operator, allowing flexibility while ensuring a smooth path and a complete examination through and around the organ. It would additionally be desirable to be able to display the exploration of the organ in real time using a technique which minimizes the computations necessary to view the organ. The desired technique should also be equally applicable to exploring any virtual object.
Summary of the Invention

The invention generates a three-dimensional visualization image of an object, such as a human organ, using volume visualization techniques, and explores the virtual image using a guided navigation system which allows the operator to travel along a predefined flight path and to adjust both the position and the viewing angle to a particular part of interest in the image away from the predefined path in order to identify polyps, cysts or other abnormal features in the organ.
The inventive technique for a three-dimensional virtual examination of an object includes producing a discrete representation of the object in volume elements, defining the part of the object which is to be examined, performing a navigation operation in the virtual object, and displaying the virtual object in real time during the navigation.
The inventive technique for a three-dimensional virtual examination as applied to an organ of a patient includes preparing the organ for scanning, if necessary, scanning the organ and converting the data into volume elements, defining the part of the organ which is to be examined, performing a guided navigation operation in the virtual organ, and displaying the virtual organ in real time during the guided navigation.
It is an object of the invention to use a system and method for performing a relatively painless, inexpensive and non-intrusive examination of an organ, where the actual analysis of the scanned organ can be performed without the patient being present. The organ can be scanned and visualized in real time, or the stored data can be visualized at a later time.
Another object of the invention is to generate three-dimensional volume representations of an object, such as an organ, wherein regions of the object can be peeled back layer by layer in order to provide a sub-surface analysis of an imaged region of the object. A surface of an object (such as an organ) can be rendered transparent or translucent in order to view further objects within or behind the object's wall. The object can also be sliced in order to examine a particular cross-section of the object.
It is another object of the invention to provide a system and method for guided navigation through a three-dimensional volume representation of an object, such as an organ, using potential fields.
It is a further object of the invention to calculate the centerline of an object, such as an organ, for a virtual flythrough, using a layer-peeling technique as described herein.
It is a further object of the invention to use a modified Z-buffer technique to minimize the number of computations required to generate the visible display.
Another object of the invention is to assign opacity coefficients to each volume element in the representation in order to make particular volume elements transparent or translucent in varying degrees, so as to customize the visualization of the part of the object being viewed. A section of the object can also be composited using the opacity coefficients.
Brief Description of the Drawings

Further objects, features and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying figures showing a preferred embodiment of the invention, in which:
Figure 1 is a flow chart of the steps for performing a virtual examination of an object, specifically a colon, in accordance with the invention;
Figure 2 is an illustration of a "submarine" camera model which performs the guided navigation in the virtual organ;
Figure 3 is an illustration of a pendulum used to model the tilt and roll of the "submarine" camera;
Figure 4 is a diagram illustrating a two-dimensional cross-section of a volumetric colon with two blocking walls identified;
Figure 5 is a diagram illustrating a two-dimensional cross-section of a volumetric colon on which start and finish volume elements are selected;
Figure 6 is a diagram illustrating a two-dimensional cross-section of a volumetric colon showing a discrete sub-volume enclosed by the blocking walls and the colon surface;
Figure 7 is a diagram illustrating a two-dimensional cross-section of a volumetric colon which has had multiple layers peeled away;
Figure 8 is a diagram illustrating a two-dimensional cross-section of a volumetric colon which contains the remaining flight path;
Figure 9 is a flow chart of the steps for generating a volume visualization of the scanned organ;
Figure 10 is an illustration of a virtual colon which has been subdivided into cells;
Figure 11A is a graphical depiction of an organ which is being virtually examined;
Figure 11B is a graphical depiction of a stab tree generated while displaying the organ of Figure 11A;
Figure 11C is a further graphical depiction of a stab tree generated while displaying the organ of Figure 11A;
Figure 12A is a graphical depiction of a scene to be rendered with objects within certain cells of the scene;
Figure 12B is a graphical depiction of a stab tree generated while rendering the scene of Figure 12A;
Figures 12C-12E are further graphical depictions of stab trees generated while rendering the image of Figure 12A.
Figure 13 is a two-dimensional representation of a virtual colon containing a polyp whose layers can be removed; and Figure 14 is a diagram of a system used to perform a virtual examination of a human organ according to the invention.
Detailed Description

Although the methods and systems described in this application can be applied to any object to be examined, the preferred embodiment to be described is the examination of an organ in the human body, specifically the colon. The colon is long and twisted, which makes it especially suitable for a virtual examination, sparing the patient the money, discomfort and danger of a physical probe. Other examples of organs which can be examined include the lungs, the stomach and parts of the gastrointestinal system, the heart and blood vessels.
Figure 1 illustrates the steps necessary to perform a virtual colonoscopy using volume visualization techniques. Step 101 prepares the colon to be scanned in order to be viewed for examination, if required by either the doctor or the particular scanning instrument. This preparation may include cleansing the colon with a "cocktail" or liquid which enters the colon after being orally ingested and passing through the stomach. The cocktail forces the patient to expel waste material present in the colon. One example of such a substance is Golytely. Additionally, in the case of the colon, air or CO2 can be forced into the colon in order to expand it and make the colon easier to scan and examine. This is accomplished with a small tube placed in the rectum, with approximately 1,000 cubic centimeters of air pumped in to distend the colon. Depending on the type of scanner used, it may be necessary for the patient to drink a contrast substance such as barium to coat any unexpelled stool, in order to distinguish the waste in the colon from the colon walls themselves. Alternatively, the method for virtually examining the colon can remove the virtual waste before or during the virtual examination, as explained later in this description. Step 101 does not need to be performed in all examinations, as indicated by the dashed line in Figure 1.

Step 103 scans the organ which is to be examined. The scanner can be an apparatus well known in the art, such as a spiral CT scanner for scanning a colon, or a Zenita MRI machine for scanning a lung labeled, for example, with xenon gas. The scanner must be able to take multiple images from different positions around the body during suspended respiration, in order to produce the data necessary for the volume visualization. An example of a single CT image would use an X-ray beam 5 millimeters in width, a pitch of 1:1 or 2:1, with a 40-centimeter field of view, performed from the top of the splenic flexure of the colon to the rectum.
Discrete data representations of the object can be produced by methods other than scanning. Voxel data representing an object can be derived from a geometric model by the techniques described in U.S. Patent No. 5,038,302 entitled "Method of Converting Continuous Three-Dimensional Geometrical Representations into Discrete Three-Dimensional Voxel-Based Representations Within a Three-Dimensional Voxel-Based System" by Kaufman, issued August 8, 1991, filed July 26, 1988, which is hereby incorporated by reference. Additionally, data can be produced by a computer model of an image, which can be converted into three-dimensional voxels and explored in accordance with the invention. One example of this type of data is a computer simulation of the turbulence surrounding a spacecraft.
Step 104 converts the scanned images into three-dimensional volume elements (voxels). In the preferred embodiment for examining a colon, the scan data is reformatted into 5-millimeter-thick slices at increments of 1 millimeter or 2.5 millimeters, with each slice represented as a matrix of 512 by 512 pixels. Thus a large number of 2D slices are generated, depending upon the length of the scan. The set of 2D slices is then reconstructed into 3D voxels. The process of converting the scanner's 2D images into 3D voxels can be performed either by the scanning machine itself or by a separate machine such as a computer, using techniques which are well known in the art (for example, see U.S. Patent No. 4,985,856 entitled "Method and Apparatus for Storing, Accessing and Processing Voxel-Based Data" by Kaufman et al., issued January 15, 1991, filed November 11, 1988, which is hereby incorporated by reference).
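As a minimal sketch of this reconstruction step (not the patent's implementation), the slices can be stacked and linearly interpolated along the scan axis; the function name and the upsampling factor are illustrative assumptions:

```python
import numpy as np

def slices_to_volume(slices, z_upsample=2):
    """Stack 2D scan slices into a 3D voxel volume.

    `slices` is a list of 512x512 arrays ordered along the scan axis.
    `z_upsample` linearly interpolates extra slices between the
    originals so voxels become roughly isotropic (the text reformats
    5 mm slices at 1 mm or 2.5 mm increments).
    """
    stack = np.stack(slices, axis=0).astype(np.float32)
    if z_upsample <= 1:
        return stack
    n = stack.shape[0]
    # Resample along the z axis at the finer spacing.
    z_new = np.linspace(0, n - 1, (n - 1) * z_upsample + 1)
    lo = np.floor(z_new).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    t = (z_new - lo)[:, None, None]
    return (1 - t) * stack[lo] + t * stack[hi]
```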
Step 105 allows the operator to define the part of the selected organ which is to be examined. A physician may be interested in a particular section of the colon likely to develop polyps. The physician can view a two-dimensional slice overview map to indicate the section to be examined. A start point and a finish point of the path to be viewed can be indicated by the physician/operator. A conventional computer and computer interface (e.g., keyboard, mouse or spaceball) can be used to designate the part of the colon which is to be inspected. A grid system with coordinates can be used for keyboard entry, or the physician/operator can "click" on the desired points. The entire image of the colon can also be viewed if desired.
Step 107 performs the planned or guided navigation operation of the virtual organ being examined. Performing a guided navigation operation is defined as navigating through an environment along a predefined or automatically predetermined flight path which can be manually adjusted by an operator at any time. After the scan data has been converted to 3D voxels, the inside of the organ must be traversed from the selected start point to the selected finish point. The virtual examination is modeled as a tiny camera traveling through the virtual space with a lens pointing toward the finish point. The guided navigation technique provides a level of interaction with the camera, so that the camera can navigate through a virtual environment automatically in the absence of operator interaction, while at the same time allowing the operator to manipulate the camera when necessary. The preferred embodiment for achieving guided navigation is to use a physically based camera model which employs potential fields to control the movement of the camera, as described in detail with respect to Figures 2 and 3.
Step 109, which can be performed concurrently with step 107, displays the inside of the organ from the viewpoint of the camera model along the selected path of the guided navigation operation. Three-dimensional displays can be generated using techniques well known in the art, such as the marching cubes technique. However, in order to produce a real-time display of the colon, a technique is required which reduces the vast number of data computations necessary to display the virtual organ. Figure 9 describes this display step in greater detail.
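As a concrete illustration of the surface-extraction approach named here, the following sketch builds a triangle mesh of the colon wall with scikit-image's marching cubes routine. This is a modern stand-in for the classic technique the text cites, not the patent's own renderer; `iso_level` is an assumed intensity threshold separating the air-filled lumen from tissue:

```python
from skimage import measure

def extract_colon_surface(volume, iso_level):
    """Extract a triangle mesh of the colon wall from the voxel volume.

    `volume` is a 3D intensity array; `iso_level` is the isosurface
    value. Returns vertices, triangle faces and per-vertex normals,
    which a conventional polygon renderer can then display.
    """
    verts, faces, normals, _ = measure.marching_cubes(volume, level=iso_level)
    return verts, faces, normals
```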
The method described in Figure 1 can also be applied to scanning multiple organs in a body at the same time. For example, a patient may be examined for cancerous growths in both the colon and the lungs. The method of Figure 1 would be modified to scan all the areas of interest in step 103 and to select the current organ to be examined in step 105. For example, the physician/operator may initially select the colon for virtual examination and later explore the lung. Alternatively, two different physicians with different specialties may virtually explore different scanned organs relating to their respective specialties. Following step 109, the next organ to be examined is selected and its portion is defined and explored. This continues until all organs which need examination have been processed.
The steps described in conjunction with Figure 1 can also be applied to the exploration of any object which can be represented by volume elements. For example, an architectural structure or an inanimate object can be represented and explored in the same manner.
Figure 2 shows the "submarine" camera control model which performs the guided navigation technique of step 107. When there is no operator control during the guided navigation, the default navigation is similar to planned navigation, which automatically directs the camera along a flight path from one selected end of the colon to the other. During the planned navigation phase, the camera stays at the center of the colon to obtain better views of the colonic surface. When an interesting region is encountered, the operator of the virtual camera using guided navigation can interactively bring the camera close to the specific region and direct the movement and angle of the camera to study the interesting area in detail, without unintentionally colliding with the walls of the colon. The operator can control the camera with a standard interface device such as a keyboard or mouse, or with a non-standard device such as a spaceball. In order to fully operate a camera in a virtual environment, six degrees of freedom are required. The camera must be able to move in the horizontal, vertical and depth directions (axes 217), as well as being able to rotate in another three degrees of freedom (axes 219), to allow the camera to move and scan all sides and angles of the virtual environment. The camera model for guided navigation includes an inextensible, weightless rod 201 connecting two particles x1 203 and x2 205, both particles being subjected to a potential field 215. The potential field is defined to be highest at the walls of the organ in order to push the camera away from the walls.
The positions of the particles are given by x1 and x2, and they are assumed to have the same mass m. A camera is attached at the head of the submarine, x1 203, whose viewing direction coincides with x2x1. The submarine can perform translation and rotation about the center of mass x of the model as the two particles are affected by forces from the potential field V(x), which is defined below, by any friction forces, and by any simulated external force. The relations among x1, x2 and x are as follows:

$$\mathbf{x} = (x, y, z), \qquad \mathbf{r} = (r\sin\theta\cos\phi,\; r\sin\theta\sin\phi,\; r\cos\theta), \qquad \mathbf{x}_1 = \mathbf{x} + \mathbf{r}, \qquad \mathbf{x}_2 = \mathbf{x} - \mathbf{r}, \tag{1}$$

where r, θ and φ are the polar coordinates of the vector r. The kinetic energy of the model, T, is defined as the sum of the kinetic energies of the movements of x1 and x2:

$$T = \frac{m}{2}\left(\dot{\mathbf{x}}_1^2 + \dot{\mathbf{x}}_2^2\right) = m\dot{\mathbf{x}}^2 + m\dot{\mathbf{r}}^2 = m\left(\dot{x}^2 + \dot{y}^2 + \dot{z}^2\right) + m r^2\left(\dot{\theta}^2 + \dot{\phi}^2\sin^2\theta\right). \tag{2}$$
Therefore, the equations of motion of the submarine model are obtained by using the Lagrange equation:

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_j}\right) - \frac{\partial T}{\partial q_j} = \sum_{i=1}^{2}\mathbf{F}_i \cdot \frac{\partial \mathbf{x}_i}{\partial q_j}, \tag{3}$$

where the q_j are the generalized coordinates of the model and can be considered as functions of time t:

$$(q_1, q_2, q_3, q_4, q_5, q_6) = (x, y, z, \theta, \phi, \psi) = \mathbf{q}(t), \tag{4}$$

with ψ denoting the roll angle of our camera system, which will be explained later. The F_i are called the generalized forces. Control of the submarine is performed by applying a simulated external force to x1,

$$\mathbf{F}_{\mathrm{ext}} = (F_x, F_y, F_z),$$

and it is presumed that both x1 and x2 are affected by the forces from the potential field and by frictions which act in the direction opposite to each particle's velocity. Consequently, the generalized forces are formulated as follows:

$$\mathbf{F}_1 = -m\nabla V(\mathbf{x}_1) - k\dot{\mathbf{x}}_1 + \mathbf{F}_{\mathrm{ext}}, \qquad \mathbf{F}_2 = -m\nabla V(\mathbf{x}_2) - k\dot{\mathbf{x}}_2, \tag{5}$$

where k denotes the friction coefficient of the system. The external force F_ext is applied by the operator by simply clicking the mouse button in the desired direction 207 in the generated image, as shown in Figure 2. The camera model is then moved in that direction. This allows the operator to control at least five degrees of freedom of the camera with a single click of the mouse button. From Equations (2), (3) and (5), the accelerations of the five parameters of the submarine model can be derived as:

$$\ddot{x} = -\frac{1}{2}\left(\frac{\partial V(\mathbf{x}_1)}{\partial x} + \frac{\partial V(\mathbf{x}_2)}{\partial x}\right) - \frac{k\dot{x}}{m} + \frac{F_x}{2m},$$
$$\ddot{y} = -\frac{1}{2}\left(\frac{\partial V(\mathbf{x}_1)}{\partial y} + \frac{\partial V(\mathbf{x}_2)}{\partial y}\right) - \frac{k\dot{y}}{m} + \frac{F_y}{2m},$$
$$\ddot{z} = -\frac{1}{2}\left(\frac{\partial V(\mathbf{x}_1)}{\partial z} + \frac{\partial V(\mathbf{x}_2)}{\partial z}\right) - \frac{k\dot{z}}{m} + \frac{F_z}{2m},$$
$$\ddot{\theta} = \dot{\phi}^2\sin\theta\cos\theta - \frac{1}{2r}\left[\cos\theta\left(\cos\phi\,\Delta V_x + \sin\phi\,\Delta V_y\right) - \sin\theta\,\Delta V_z\right] - \frac{k\dot{\theta}}{m} + \frac{1}{2mr}\left(F_x\cos\theta\cos\phi + F_y\cos\theta\sin\phi - F_z\sin\theta\right),$$
$$\ddot{\phi} = \frac{1}{\sin\theta}\left\{-2\dot{\theta}\dot{\phi}\cos\theta - \frac{1}{2r}\left(-\sin\phi\,\Delta V_x + \cos\phi\,\Delta V_y\right) - \frac{k\dot{\phi}}{m}\sin\theta + \frac{1}{2mr}\left(-F_x\sin\phi + F_y\cos\phi\right)\right\}, \tag{6}$$

where $\Delta V_x$ abbreviates $\partial V(\mathbf{x}_1)/\partial x - \partial V(\mathbf{x}_2)/\partial x$ (and similarly for y and z), ẋ and ẍ denote the first and second derivatives of x, and ∂V/∂x, ∂V/∂y, ∂V/∂z denote the components of the gradient of the potential at a point. The terms φ̇² sinθ cosθ in θ̈ and 2θ̇φ̇ cosθ in φ̈ are called the centrifugal force and the Coriolis force, respectively, and they concern the exchange of angular velocities of the submarine. Since the model does not have a moment of inertia defined for the rod of the submarine, these terms tend to cause an overflow in the numerical calculation of φ. Fortunately, these terms become significant only when the angular velocities of the submarine model are significant, which essentially means that the camera is moving too fast. Since it makes little sense to allow the camera to move so fast (the organ could not be viewed properly), these terms are minimized in our implementation to avoid the overflow problem.
From the first three formulas of Equation (6), it can be seen that the submarine cannot be driven by the external force against the potential field if the following condition is satisfied:

$$\left|\mathbf{F}_{\mathrm{ext}}\right| \le m\,\left|\nabla V(\mathbf{x}_1) + \nabla V(\mathbf{x}_2)\right|.$$

Since the velocity of the submarine and the external force F_ext have upper limits in our implementation, by assigning sufficiently high potential values at the boundaries of the objects it can be guaranteed that the submarine will never strike the objects or walls in the environment.
As mentioned previously, the roll angle ψ of the camera system needs to be considered. One possible option gives the operator full control of the angle ψ. However, even though the operator can rotate the camera freely around the rod of the model, he or she can easily become disoriented. The preferred technique assumes that the up direction of the camera is connected to a pendulum with mass m2 301 which rotates freely around the rod of the submarine, as shown in Figure 3. The direction of the pendulum, r2, is expressed as:

$$\mathbf{r}_2 = r_2\left(\cos\theta\cos\phi\sin\psi + \sin\phi\cos\psi,\;\; \cos\theta\sin\phi\sin\psi - \cos\phi\cos\psi,\;\; -\sin\theta\sin\psi\right).$$

Although it is possible to calculate the exact movement of this pendulum together with the movement of the submarine, doing so makes the system equations very complicated. Therefore, all generalized coordinates except the roll angle ψ are presumed constant, and an independent kinetic energy is defined for the pendulum system as:

$$T_p = \frac{m_2}{2}\dot{\mathbf{r}}_2^2 = \frac{m_2 r_2^2}{2}\dot{\psi}^2.$$
This simplifies the model for the roll angle. Since it is presumed in this model that the gravitational force

$$\mathbf{F}_g = m_2\mathbf{g} = (m_2 g_x,\; m_2 g_y,\; m_2 g_z)$$

acts at the mass point m2, the acceleration of ψ can be derived using the Lagrange equation as:

$$\ddot{\psi} = \frac{1}{r_2}\left\{g_x\left(\cos\theta\cos\phi\cos\psi - \sin\phi\sin\psi\right) + g_y\left(\cos\theta\sin\phi\cos\psi + \cos\phi\sin\psi\right) + g_z\left(-\sin\theta\cos\psi\right)\right\}. \tag{7}$$

From Equations (6) and (7), the generalized coordinates q(t) and their derivatives q̇(t) are calculated asymptotically by using the Taylor series:

$$\mathbf{q}(t+h) = \mathbf{q}(t) + h\dot{\mathbf{q}}(t) + \frac{h^2}{2}\ddot{\mathbf{q}}(t) + O(h^3),$$
$$\dot{\mathbf{q}}(t+h) = \dot{\mathbf{q}}(t) + h\ddot{\mathbf{q}}(t) + O(h^2),$$

to move the submarine freely. To smooth the movement of the submarine, the time step h is selected as an equilibrium value, as small as possible to smooth the movement but as large as necessary to reduce computational cost.
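A minimal sketch of this time-stepping scheme follows, assuming a caller-supplied `accelerations` function that evaluates Equations (6) and (7); everything else is just the second-order Taylor update given above:

```python
import numpy as np

def step_camera(q, qdot, accelerations, h):
    """Advance the submarine camera model one time step.

    Second-order Taylor update of the generalized coordinates
    q = (x, y, z, theta, phi, psi), matching the series expansion
    in the text. `accelerations(q, qdot)` must return the vector of
    second derivatives from the equations of motion; it is a
    placeholder here, not part of the patent text.
    """
    qddot = accelerations(q, qdot)
    q_next = q + h * qdot + 0.5 * h * h * qddot
    qdot_next = qdot + h * qddot
    return q_next, qdot_next
```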
Definition of the Potential Field

The potential field of the submarine model of Figure 2 defines the boundaries (walls or other matter) in the virtual organ by assigning a high potential to the boundary, in order to ensure that the submarine camera does not collide with the walls or other boundary. If the operator attempts to move the camera model into a high-potential area, the camera model will be restrained from doing so, unless the operator wishes, for example, to examine the organ behind the boundary or inside a polyp. In performing a virtual colonoscopy, a potential field value is assigned to each piece of volumetric colon data (volume element). When a particular region of interest is designated in step 105 of Figure 1 with a start and finish point, the voxels within the selected area of the scanned colon are identified using conventional blocking operations. Subsequently, a potential value is assigned to every voxel x of the selected volume based on the following three distance values: the distance from the finish point dt(x), the distance from the colon surface ds(x), and the distance from the centerline of the colon space dc(x). dt(x) is calculated using a conventional growing strategy. The distance from the colon surface, ds(x), is computed using a conventional growing technique from the surface voxels inwards. To determine dc(x), the centerline of the colon is first extracted from the voxels, and then dc(x) is computed using the conventional growing strategy from the centerline of the colon.
To calculate the centerline of the selected colon area, defined by the user-specified start point and the user-specified finish point, the maximum value of ds(x) is located and denoted dmax. Then, for each voxel within the area of interest, a cost value of dmax − ds(x) is assigned. Thus, voxels which are close to the colon surface have high cost values and voxels close to the centerline have relatively low cost values. Then, based on the cost assignment, the single-source shortest path technique, which is well known in the art, is applied to efficiently compute a minimum-cost path from the start point to the finish point. This low-cost line indicates the centerline or skeleton of the colon section which it is desired to explore. This technique for determining the centerline is the preferred technique of the invention.
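The following sketch illustrates the single-source shortest path computation on the voxel grid, using Dijkstra's algorithm as the standard instance of the technique named here. It assumes a precomputed array of dmax − ds(x) costs (np.inf outside the region of interest), 6-connectivity, and that the finish point is reachable:

```python
import heapq
import numpy as np

def centerline(cost, start, end):
    """Minimum-cost voxel path from `start` to `end` (z, y, x tuples).

    `cost` holds dmax - ds(x) per voxel: high near the colon wall,
    low near the center, so the cheapest path hugs the centerline.
    """
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == end:
            break
        if d > dist[v]:
            continue  # stale heap entry
        z, y, x = v
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < cost.shape[i] for i in range(3)):
                nd = d + cost[n]
                if nd < dist[n]:
                    dist[n] = nd
                    prev[n] = v
                    heapq.heappush(heap, (nd, n))
    # Walk the predecessor links back from the finish point.
    path, v = [], end
    while v != start:
        path.append(v)
        v = prev[v]
    path.append(start)
    return path[::-1]
```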
To compute the potential value V(x) for a voxel x within the area of interest, the following formula is employed:

$$V(\mathbf{x}) = C_1\, d_t(\mathbf{x})^{\mu} + C_2\left(d_{\max} - d_s(\mathbf{x})\right)^{\nu}, \tag{8}$$

where C1, C2, μ and ν are constants chosen for the task. In order to avoid any collision between the virtual camera and the virtual colonic surface, a sufficiently large potential value is assigned to all points outside the colon. The gradient of the potential field will therefore become so significant that the submarine model camera will never collide with the colon wall when being moved.
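As an illustration, the potential of Equation (8) can be assembled from the two distance maps. The sketch below uses SciPy's Euclidean distance transform as a stand-in for the growing strategy described above, and the constants and the wall value are arbitrary placeholders, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def potential_field(inside, dt, C1=1.0, C2=1.0, mu=1.0, nu=2.0, wall_value=1e6):
    """Build the navigation potential V(x) of Equation (8).

    `inside` is a boolean mask of the selected colon region and `dt`
    the distance-to-finish-point map over the same grid.
    """
    ds = ndimage.distance_transform_edt(inside)  # small at the wall, large at center
    dmax = ds.max()
    V = C1 * dt ** mu + C2 * (dmax - ds) ** nu
    V[~inside] = wall_value  # large potential outside the colon repels the camera
    return V
```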
Another technique for determining the centerline of the path through the colon is the so-called "peel-layer" technique, which is shown in Figures 4 through 8.
Figure 4 shows a two-dimensional cross-section of the volumetric colon, with the two side walls 401 and 403 of the colon shown. Two blocking walls are selected by the operator in order to define the section of the colon which is of interest to examine. Nothing can be viewed beyond the blocking walls. This helps reduce the number of computations when displaying the virtual representation. The blocking walls, together with the side walls, identify a contained volumetric shape of the colon which is to be explored.
Figure 5 shows the two end points of the flight path of the virtual examination, the start volume element 501 and the finish volume element 503. The start and finish points are selected by the operator in step 105 of Figure 1. The voxels between the start and finish points and the colon sides are identified and marked, as indicated by the area designated with "x"s in Figure 6. The voxels are three-dimensional representations of the picture element.
The peel-layer technique is then applied to the identified and marked voxels of Figure 6. The outermost layer of all the voxels (the layer closest to the colon walls) is peeled off step by step, until only one inner layer of voxels remains. Stated differently, each voxel furthest away from a center point is removed if the removal does not lead to a disconnection of the path between the start voxel and the finish voxel. Figure 7 shows the intermediate result after a number of iterations of peeling the voxels of the virtual colon have been completed. The voxels closest to the colon walls have been removed. Figure 8 shows the final flight path for the camera model down the center of the colon after all the peeling iterations are complete. This essentially produces a skeleton at the center of the colon, which becomes the desired flight path for the camera model.
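A simple, deliberately unoptimized sketch of this peel-layer loop, assuming boolean NumPy masks and a connectivity check via labeling (a production implementation would use a proper simple-point test instead of relabeling after every removal):

```python
import numpy as np
from scipy import ndimage

def peel_layers(region, start, end):
    """Shrink the marked colon region toward a one-voxel-wide path.

    Repeatedly removes the outermost layer of voxels, skipping any
    removal that would disconnect the start voxel from the end voxel.
    """
    region = region.copy()
    struct = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
    changed = True
    while changed:
        changed = False
        # Boundary voxels: inside the region but touching its complement.
        boundary = region & ~ndimage.binary_erosion(region, struct)
        for v in zip(*np.nonzero(boundary)):
            if v == start or v == end:
                continue
            region[v] = False
            labels, _ = ndimage.label(region, struct)
            if labels[start] != labels[end]:
                region[v] = True  # removal would break the path; undo
            else:
                changed = True
    return region
```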
Z-Buffer Assisted Visibility

Figure 9 describes a real-time visibility technique for displaying the virtual images seen by the camera model in the virtual three-dimensional volume representation of an organ. Figure 9 shows a display technique using a modified Z-buffer which corresponds to step 109 in Figure 1. The number of voxels which could possibly be viewed from the camera model is extremely large. Unless the total number of elements (or polygons) which must be computed and displayed is reduced from the whole set of voxels in the scanned environment, the overall number of computations will make the display process exceedingly slow for a large internal area. However, in the present invention only those images which are visible on the colon surface need to be computed for display. The scanned environment can be subdivided into smaller sections, or cells. The Z-buffer technique then renders only a portion of the cells which are visible from the camera. The Z-buffer technique is also used for the three-dimensional voxel representations. The use of a modified Z-buffer reduces the number of visible voxels to be computed and allows for the real-time examination of the virtual colon by a physician or a medical technician.
The area of interest from which the centerline has been calculated in step 107 is subdivided into cells before the display technique is applied. Cells are collective groups of voxels which become a unit of visibility. The voxels in each cell will be displayed as a group. Each cell contains a number of portals through which the other cells can be viewed. The colon is subdivided by beginning at the selected start point and moving along the centerline 1001 towards the finish point. The colon is then partitioned into cells (for example, cells 1003, 1005 and 1007 in Figure 10) whenever a predefined threshold distance along the center path is reached. The threshold distance is based upon the specifications of the platform upon which the visualization technique is performed, and upon its storage and processing capabilities. The cell size is directly related to the number of voxels which can be stored and processed by the platform. One example of a threshold distance is 5 centimeters, although the distance can vary greatly. Each cell has two cross-sections as portals for viewing outside of the cell, as shown in Figure 10.
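A sketch of this subdivision, assuming the centerline is available as a list of 3D points and distances are measured in the same units as the threshold:

```python
import math

def subdivide_into_cells(centerline_points, threshold=5.0):
    """Split the colon into cells along the centerline.

    Walks the centerline and closes the current cell whenever the
    accumulated arc length exceeds `threshold` (e.g. 5 cm, the
    example given in the text). The shared endpoint of consecutive
    runs plays the role of the portal between adjacent cells.
    """
    cells, current, acc = [], [centerline_points[0]], 0.0
    for a, b in zip(centerline_points, centerline_points[1:]):
        acc += math.dist(a, b)
        current.append(b)
        if acc >= threshold:
            cells.append(current)
            current, acc = [b], 0.0
    if len(current) > 1:
        cells.append(current)
    return cells
```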
Step 901 in Figure 9 identifies the cell within the selected organ which currently contains the camera. The current cell will be displayed, as well as all other cells which are visible given the orientation of the camera. Step 903 builds a stab tree (tree diagram) of hierarchical data of the cells potentially visible from the camera (through defined portals), as will be described in further detail below. The stab tree contains a node for every cell which may be visible to the camera. Some of the cells may be transparent, with no blocking bodies present, so that more than one cell will be visible in a single direction. Step 905 stores a subset of the voxels from a cell which include the intersection of adjoining cell edges, and stores them at the outside edge of the stab tree in order to more efficiently determine which cells are visible.
Step 907 checks whether any loop nodes are present in the stab tree. A loop node occurs when two or more edges of a single cell both border on the same nearby cell. This can occur when a single cell is surrounded by another cell. If a loop node is identified in the stab tree, the method continues with step 909. If there is no loop node, the process goes to step 911.

Step 909 collapses the two cells making up the loop node into one large node. The stab tree is then corrected accordingly. This eliminates the problem of viewing the same cell twice because of a loop node. The step is performed on all identified loop nodes. The process then continues with step 911.
Step 911 then initializes the Z-buffer with the largest Z value. The Z value defines the distance away from the camera along the skeleton path. The tree is then traversed to first check the intersection values at each node. If a node intersection is covered, meaning that the current portal sequence is occluded (as determined by the Z-buffer test), then the traversal of the current branch of the tree is stopped. Step 913 traverses each of the branches to check whether the nodes are covered, and displays them if they are not.
Step 915 then constructs the image to be displayed on the operator's screen from the volume elements within the visible cells identified in step 913, using one of a variety of techniques known in the art, such as volume rendering by compositing. The only cells shown are those which have been identified as potentially visible. This technique limits the number of cells which require calculations in order to achieve a real-time display, and correspondingly increases the speed of the display for better performance. This technique is an improvement over prior techniques which calculate all the possible visible data points whether or not they are actually viewed.
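Putting steps 911-915 together, the traversal can be sketched as below. The node attributes (`children`, `portal`) and the occlusion callback are illustrative stand-ins for the stab tree structure and the modified Z-buffer coverage test described above:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Cell:
    portal: object                      # boundary voxels shared with the parent cell
    children: List["Cell"] = field(default_factory=list)

def render_visible_cells(root: Cell,
                         is_portal_occluded: Callable[[object], bool],
                         render: Callable[[Cell], None]) -> None:
    """Depth-first traversal of the stab tree.

    When a cell's portal is fully covered by nearer geometry (the
    Z-buffer coverage test), the entire branch behind it is pruned,
    so only potentially visible cells are rendered.
    """
    stack = [root]
    while stack:
        node = stack.pop()
        if node is not root and is_portal_occluded(node.portal):
            continue  # portal sequence blocked; skip this branch
        render(node)  # composite this cell's voxels into the frame
        stack.extend(node.children)
```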
Figure 11A is a two-dimensional pictorial representation of an organ which is being explored by guided navigation and needs to be displayed to an operator. The organ 1101 shows two side walls 1102 and an object 1105 in the center of the pathway. The organ has been divided into four cells, A 1151, B 1153, C 1155 and D 1157. The camera 1103 is facing toward cell D 1157 and has a field of vision defined by vision vectors 1107 and 1108 which can identify a cone-shaped field. The cells which can potentially be viewed are cells B 1153, C 1155 and D 1157. Cell C 1155 is completely surrounded by cell B and thus constitutes a loop node.
Figure 11B is a representation of a stab tree built from the cells of Figure 11A. Node A 1109, which contains the camera, is at the root of the tree. A sight line, or sight cone, which is a visible path without blockage, is drawn to node B 1110. Node B has direct visible sight lines to both node C 1112 and node D 1114, which are shown by the connecting arrows. The sight line of node C 1112 in the viewing direction of the camera combines with node B 1110. Node C 1112 and node B 1110 will therefore be collapsed into one large node B' 1122, as shown in Figure 11C.
Figure 11C shows node A 1109, containing the camera, adjacent to node B' 1122 (containing both node B and node C) and node D 1114. Nodes A, B' and D will be displayed at least partially to the operator.
Figures 12A-12E illustrate the use of the modified Z-buffer with cells which contain objects obstructing the views. An object could be some waste material in a portion of the virtual colon. Figure 12A shows a virtual space with 10 potential cells: A 1251, B 1253, C 1255, D 1257, E 1259, F 1261, G 1263, H 1265, I 1267 and J 1269. Some of the cells contain objects. If the camera 1201 is positioned in cell I 1267 and is facing toward cell F 1261, as indicated by the vision vectors 1203, then a stab tree is generated in accordance with the technique illustrated by the flow chart of Figure 9. Figure 12B shows the stab tree generated, with the intersection nodes shown, for the virtual representation shown in Figure 12A. Figure 12B shows cell I 1267 as the root node of the tree because it contains the camera 1201. Node I 1211 points to node F 1213 (as indicated by an arrow) because cell F is directly connected to the sight line of the camera. Node F 1213 points to both node B 1215 and node E 1219. Node B 1215 points to node A 1217. Cell C 1255 is completely blocked from the camera's line of sight, so it is not included in the stab tree.
Figure 12C shows the stab tree after node I 1211 has been rendered on the display for the operator. Node I 1211 is then removed from the stab tree because it has already been displayed, and node F 1213 becomes the root. Figure 12D shows that node F 1213 is now rendered to join node I 1211. The next nodes in the tree connected by arrows are then checked to see whether they are already covered (already processed). In this example, all of the intersected nodes from the camera positioned in cell I 1267 have been covered, so that node B 1215 (and therefore the dependent node A) do not need to be rendered on the display.
Figure 12E shows node E 1219 being checked to determine whether its intersection has been covered. Since it has, the only nodes rendered in this example of Figures 12A-12E are nodes I and F, while nodes A, B and E are not visible and do not need to have their cells prepared for display.
The modified Z-buffer technique described in Figure 9 allows for fewer computations and can be applied to an object which has been represented by voxels or by other data elements, such as polygons.
Figure 13 shows a two-dimensional virtual view of a colon with a large polyp present along one of its walls. Figure 13 shows a selected section of a patient's colon which is to be examined further. The view shows two colon walls 1301 and 1303, with the growth indicated as 1305. Layers 1307, 1309 and 1311 show inner layers of the growth. It is desirable for a physician to be able to peel the layers of the polyp or tumor away to look inside the mass for any cancerous or other harmful material. This process in effect performs a virtual biopsy of the mass without actually cutting into it. Once the colon is represented virtually by voxels, the process of peeling away layers of an object is easily performed in a manner similar to that described in conjunction with Figures 4 through 8. The mass can also be sliced so that a particular cross-section can be examined. In Figure 13, a planar cut 1313 can be made so that a particular portion of the growth can be examined. Additionally, a user-defined slice 1319 can be made in any manner in the growth. The voxels 1319 can either be peeled away or modified as explained below.
A transfer function can be applied to each voxel in the area of interest which can make the object transparent, semi-transparent or opaque by altering the coefficients representing the translucency of each voxel. An opacity coefficient is assigned to every voxel based on its density. A mapping function then transforms the density value into a coefficient representing its translucency. A high-density scanned voxel will indicate either a wall or other dense matter, rather than simply open space. An operator or a program routine can then change the opacity coefficient of a voxel or group of voxels to make them appear transparent or semi-transparent to the submarine camera model. For example, an operator may view a tumor inside or behind an entire growth. Or a transparent voxel can be made to appear as if it were not present for the display step of Figure 9. A composite of a section of the object can be created using a weighted average of the opacity coefficients of the voxels in that section.
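A sketch of such a transfer function, and of compositing with the resulting opacity coefficients, follows. The linear density-to-opacity ramp and the `waste_range` band are assumptions for illustration, not the patent's mapping:

```python
import numpy as np

def density_to_opacity(density, waste_range=None):
    """Map voxel density to an opacity coefficient in [0, 1].

    Dense matter (walls) becomes opaque, open space transparent. If
    `waste_range` (lo, hi) is given, voxels in that density band are
    forced transparent, e.g. to hide bowel contents.
    """
    alpha = np.clip((density - density.min()) /
                    (density.max() - density.min() + 1e-9), 0.0, 1.0)
    if waste_range is not None:
        lo, hi = waste_range
        alpha[(density >= lo) & (density <= hi)] = 0.0
    return alpha

def composite_ray(colors, alphas):
    """Front-to-back compositing of the samples along one viewing ray."""
    out, transmitted = 0.0, 1.0
    for c, a in zip(colors, alphas):
        out += transmitted * a * c
        transmitted *= (1.0 - a)
        if transmitted < 1e-3:  # early ray termination
            break
    return out
```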
If a physician desires to view the various layers of a polyp to look for cancerous areas, this can be done by removing the outer layer of polyp 1305, yielding a first layer 1307. Additionally, the first inner layer 1307 can be stripped back to view the second inner layer 1309. The second inner layer can be stripped back to view the third inner layer 1311, and so on. The physician can also slice the polyp 1305 and view only those voxels within a desired section. The sliced area can be completely user-defined.
Adding an opacity coefficient can also be used in other ways to aid in the exploration of a virtual system. If waste material is present and has a density, as well as other properties, within a certain known range, the waste can be made transparent to the virtual camera by changing its opacity coefficient during the examination. This will allow the patient to avoid ingesting a bowel-cleansing agent before the procedure and will make the examination faster and easier. Other objects can similarly be made to disappear depending upon the actual application. Additionally, some objects such as polyps can be enhanced electronically by a contrast agent followed by the use of an appropriate transfer function.
Figure 14 shows a system for performing the virtual examination of an object, such as a human organ, using the techniques described in this specification. The patient 1401 lies on a platform 1402 while the scanning device 1405 scans the area containing the organ or organs to be examined. The scanning device 1405 contains a scanning portion 1403, which actually takes images of the patient, and an electronics portion 1406. The electronics portion 1406 comprises an interface 1407, a central processing unit 1409, a memory 1411 for temporarily storing the scanning data, and a second interface 1413 for sending data to the virtual navigation platform. Interfaces 1407 and 1413 may be included in a single interface component or may be the same component. The components in portion 1406 are connected together with conventional connectors.
In system 1400, the data provided from the scanning portion 1403 of the device is transferred to portion 1406 for processing and is stored in memory 1411. The central processing unit 1409 converts the scanned 2D data to 3D voxel data and stores the results in another portion of memory 1411. Alternatively, the converted data may be sent directly to interface unit 1413 for transfer to the virtual navigation terminal 1416. The conversion of the 2D data could also take place at the virtual navigation terminal 1416 after being transmitted from interface 1413. In the preferred embodiment, the converted data is transmitted over carrier 1414 to the virtual navigation terminal 1416 so that an operator can perform the virtual examination. The data could also be transported in other conventional ways, such as storing the data on a storage medium and physically transporting it to terminal 1416, or by using satellite transmissions.
The scan data need not be converted to its 3D representation until the visualization machine requires it in three-dimensional form. This saves computation steps and memory storage space.
The virtual navigation terminal 1416 includes a screen 1417 for viewing the virtual organ or other scanned image, an electronics portion 1415 and an interface control 1419 such as a keyboard, mouse or spaceball. The electronics portion 1415 comprises an interface port 1421, a central processing unit 1423, other components 1427 necessary to run the terminal, and a memory 1425. The components in terminal 1416 are connected together with conventional connectors. The converted voxel data is received at interface port 1421 and stored in memory 1425. The central processing unit 1423 then assembles the 3D voxels into a virtual representation and runs the submarine camera model, as described in Figures 2 and 3, to perform the virtual examination. As the submarine camera travels through the virtual organ, the visibility technique described in Figure 9 is used to compute only those areas which are visible from the virtual camera, and displays them on screen 1417. A graphics accelerator can also be used to generate the representations. The operator can use interface device 1419 to indicate which portion of the scanned body is to be explored. The interface device 1419 can further be used to control and move the submarine camera as desired, as discussed with respect to Figure 2 and its accompanying description. Terminal portion 1415 can be the Cube-4 dedicated system box, generally available from the Department of Computer Science at the State University of New York at Stony Brook.
The scanning device 1405 and terminal 1416, or parts thereof, can be part of the same unit. A single platform would be used to receive the scanned image data, convert it to 3D voxels if necessary, and perform the guided navigation.
An important feature of system 1400 is that the virtual organ can be examined at a later time without the presence of the patient. Additionally, the virtual examination could take place while the patient is being scanned. The scan data can also be sent to multiple terminals, which would allow more than one doctor to view the inside of the organ simultaneously. Thus, a doctor in New York could be viewing the same portion of a patient's organ at the same time as a doctor in California while they discuss the case. Alternatively, the data can be viewed at different times. Two or more doctors could each perform their own examination of the same data in a difficult case. Multiple virtual navigation terminals could be used to view the same scan data. By reproducing the organ as a virtual organ with a discrete set of data, there are a multitude of benefits in areas such as accuracy, cost and possible data manipulations.
The foregoing merely illustrates the principles of the invention. It will be appreciated that those skilled in the art will be able to devise numerous systems, apparatus and methods which, although not explicitly described herein, embody the principles of the invention and are therefore within its spirit and scope as defined by its claims.
For example, the methods and systems described herein could be applied to virtually examine an animal, a fish or an inanimate object. Besides the stated uses in the medical field, applications of the technique could be used to detect the contents of sealed objects which cannot be opened. The technique could also be used inside an architectural structure, such as a building or a cavern, allowing the operator to navigate through the structure.

Claims (68)

C L A I M S
1. A method for performing a three-dimensional virtual examination of at least one object, comprising: scanning with a scanning device to produce scan data representative of said object; creating a three-dimensional volume representation of said object comprising volume elements from said scan data; selecting a start volume element and a finish volume element of said three-dimensional volume representation; generating a defined path between said start and finish volume elements; performing a guided navigation of the three-dimensional representation along the path between said start and finish volume elements; and displaying in real time said volume elements responsive to said path and to operator input during the guided navigation.
2. The method as claimed in claim 1, characterized in that said navigation step includes assigning a potential value to each of said volume elements.
3. The method as claimed in claim 2, characterized in that said potential values are assigned to be greatest near the walls of the object.
4. The method as claimed in claim 1, characterized in that said path is preselected.
5. The method as claimed in claim 1, characterized in that said defined path is located approximately equidistant from the outer walls of the object.
6. The method as claimed in claim 5, characterized in that a plurality of said volume elements are assigned low potential values and are located along said defined path.
7. The method as claimed in claim 1, characterized in that said object is an organ.
8. The method as claimed in claim 7, characterized in that said organ is a colon.
9. The method as claimed in claim 7, characterized in that said organ is a lung.
10. The method as claimed in claim 7, characterized in that said organ is at least one blood vessel.
11. The method as claimed in claim 1, characterized in that said displaying step includes identifying each of the volume elements which are visible along said path.
12. The method as claimed in claim 11, characterized in that said identification is performed using a hierarchical data structure containing viewing data.
13. The method as claimed in claim 1, characterized in that said navigation performed is a guided navigation.
14. The method as claimed in claim 13, characterized in that said guided navigation uses a camera model to simulate travel along said path.
15. The method as claimed in claim 14, characterized in that the position of said camera model can be changed in six degrees of freedom.
16. The method as claimed in claim 1, characterized in that said path is preselected, and changes in the camera orientation are further allowed based on the input of an operator.
17. The method as claimed in claim 16, characterized in that said virtual examination displays only said volume elements within a viewing cone of the camera model.
18. The method as claimed in claim 1, characterized in that said navigation step includes selecting a centerline by removing the volume elements closest to the walls of the object until only one path remains.
19. The method as claimed in claim 1, characterized in that said virtual examination further includes a step of assigning opacity coefficients to each of said volume elements.
20. The method as claimed in claim 19, characterized in that said opacity coefficients of selected volume elements are changed in response to the input of an operator.
21. The method as claimed in claim 20, characterized in that said volume elements with low opacity coefficients are not displayed during the displaying step.
22. The method as claimed in claim 21, characterized in that at least one volume element's opacity coefficient is changed so that the changed volume element is not displayed in said displaying step.
23. The method as claimed in claim 20, characterized in that said volume elements are displayed as translucent to a degree responsive to the opacity coefficients of said volume elements.
24. The method as claimed in claim 1, characterized in that at least one volume element's associated data is changed so that the changed volume element is not displayed in said displaying step.
25. The method as claimed in claim 1, further characterized in that it comprises the step of preparing the object for scanning.
26. The method as claimed in claim 25, characterized in that said preparation step includes coating said object with a substance for improving the contrast of said object for scanning.
27. The method as claimed in claim 1, characterized in that said production of a discrete data representation of said object step includes scanning said object.
28. The method as claimed in claim 1, characterized in that said production of a discrete data representation of said object step includes creating a voxel image from a geometric model.
29. A method for performing a three-dimensional internal virtual examination of at least one organ, comprising: scanning the organ with a radiological scanning device and producing scan data representative of said organ; creating a three-dimensional volume representation of said organ comprising volume elements from said scan data; selecting a start volume element and a finish volume element of said three-dimensional volume representation; generating a defined path between said start and finish volume elements; performing a guided navigation of said three-dimensional representation along said path; and displaying in real time said volume elements responsive to said path and to operator input during the guided navigation.
30. The method as claimed in claim 29, characterized in that said guided navigation step includes assigning a potential value to each of the volume elements.
31. The method as claimed in claim 30, characterized in that said potential values are assigned to be greatest near the organ walls.
32. The method as claimed in claim 29, characterized in that said preselected path is located approximately equidistant from said outer walls of the organ.
33. The method as claimed in claim 32, characterized in that a plurality of said volume elements are assigned low potential values and are located along the defined path.
34. The method as claimed in claim 29, characterized in that said organ is a colon.
35. The method as claimed in claim 29, characterized in that said organ is a lung.
36. The method as claimed in claim 29, characterized in that said organ is at least one blood vessel.
37. The method as claimed in claim 29, characterized in that said displaying step includes identifying each of the volume elements which are visible along said path.
38. The method as claimed in claim 37, characterized in that said identification is performed using a hierarchical data structure containing viewing data.
39. The method as claimed in clause 29, characterized in that said guided navigation uses a camera model to simulate a trip along said trajectory.
40. The method as claimed in clause 39, characterized in that the position of said camera model can be changed in six degrees of freedom.
41. The method as claimed in clause 29, characterized in that said guided navigation follows a pre-selected trajectory and furthermore allows changes in the orientation of the camera based on the input of an operator.
42. The method as claimed in clause 41, characterized in that said virtual examination displays only those volume elements within the line of view of said camera model.
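A hedged sketch of the clause 39–42 camera model: six degrees of freedom (three positional, three angular) and a line-of-sight test that admits only voxels inside the view cone. The yaw/pitch convention and the field-of-view value are assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraModel:
    # Clauses 39-42 sketch: a camera movable in six degrees of freedom --
    # three translations and three rotations.
    position: np.ndarray      # x, y, z
    orientation: np.ndarray   # yaw, pitch, roll (radians)

    def move(self, d_position, d_orientation):
        # Operator input nudges the camera while navigation follows the path.
        self.position = self.position + np.asarray(d_position)
        self.orientation = self.orientation + np.asarray(d_orientation)

    def in_line_of_sight(self, voxel, fov=np.pi / 3):
        # Clause 42: only voxels inside the camera's view cone are displayed.
        yaw, pitch = float(self.orientation[0]), float(self.orientation[1])
        forward = np.array([np.cos(pitch) * np.cos(yaw),
                            np.cos(pitch) * np.sin(yaw),
                            np.sin(pitch)])
        to_voxel = np.asarray(voxel, dtype=float) - self.position
        cos_angle = to_voxel @ forward / (np.linalg.norm(to_voxel) + 1e-9)
        return cos_angle >= np.cos(fov / 2)
```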
43. The method as claimed in clause 29, characterized in that said guided navigation step includes selecting a central line by removing the volume elements closest to said walls of the organ until only one path remains.
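Clause 43 describes centerline selection by repeatedly removing the volume elements nearest the walls. The sketch below uses plain binary erosion as a stand-in; a production implementation would use topology-preserving thinning so the surviving voxels remain one connected path:

```python
import numpy as np
from scipy import ndimage

def centerline_by_peeling(lumen_mask, max_iterations=256):
    # Clause 43 sketch: repeatedly "peel" the volume elements closest to
    # the organ walls; the last surviving voxels approximate the path.
    current = lumen_mask.astype(bool)
    for _ in range(max_iterations):
        eroded = ndimage.binary_erosion(current)
        if not eroded.any():      # one more peel would erase everything
            break
        current = eroded
    return current
```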
44. The method as claimed in clause 29, characterized in that said virtual examination also includes a step of assigning opacity coefficients to each one of the volume elements.
45. The method as claimed in clause 44, characterized in that said opacity coefficients of the selected volume elements are changed in response to the input of an operator.
46. The method as claimed in clause 45, characterized in that said volume elements with low opacity coefficients are not displayed during the display step.
47. The method as claimed in clause 46, characterized in that the opacity coefficient of at least one volume element is changed so that said changed volume element is not displayed in said display step.
48. The method as claimed in clause 45, characterized in that said volume elements are displayed as translucent to a degree in response to the opacity coefficients of said volume elements.
49. The method as claimed in clause 29, characterized in that at least one volume element is changed so that the changed volume element is not displayed in said display step.
50. The method as claimed in clause 29, further characterized in that it comprises the step of preparing the organ for scanning.
51. The method as claimed in clause 50, characterized in that said preparation step includes cleaning said organ of movable objects.
52. The method as claimed in clause 50, characterized in that said preparation step includes coating said organ with a substance to improve the contrast of said organ for scanning.
53. A system to carry out a virtual three-dimensional examination of an object that comprises: an apparatus for producing a discrete representation of said object; an apparatus for converting said discrete representation into three-dimensional volume data elements; an apparatus for selecting an area to be displayed from said three-dimensional volume data elements; an apparatus for carrying out a guided navigation along a path of the selected three-dimensional volume data elements; and an apparatus for displaying said volume elements in real time in proximity along said path and in response to the input of the operator.
54. The system as claimed in clause 53, characterized in that said three-dimensional data elements include opacity coefficients and said display apparatus responds to said opacity coefficients.
55. The system as claimed in clause 54, characterized in that said action apparatus is capable of changing the opacity coefficients of selected volume data elements.
56. The system as claimed in clause 55, characterized in that said display apparatus is capable of displaying a volume element translucently in response to said opacity coefficients.
57. The system as claimed in clause 53, characterized in that said conversion apparatus and said action apparatus are contained within a single unit.
58. The system as claimed in clause 53, characterized in that said apparatus for producing a discrete representation of said object produces scanning data and said scanning data is stored separately from the conversion apparatus.
59. The system as claimed in clause 53, characterized in that said apparatus for producing a discrete representation of said object produces scan data and said scan data is stored separately from said selection apparatus.
60. The system as claimed in clause 53, further characterized in that it includes at least one additional selection apparatus, action apparatus and display apparatus for carrying out additional three-dimensional virtual examinations of said object.
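Read as software, the apparatus chain of clauses 53 to 60 is a five-stage pipeline. A minimal sketch follows, with each "apparatus" modeled as a caller-supplied callable; all names are hypothetical and the packaging into units (clause 57) is an assumption:

```python
class VirtualExamSystem:
    # Clauses 53-60 sketch: the claimed apparatus chain as one pipeline.
    def __init__(self, producer, converter, selector, navigator, display):
        self.producer = producer      # discrete representation of the object
        self.converter = converter    # -> three-dimensional volume data elements
        self.selector = selector      # area to be displayed
        self.navigator = navigator    # guided navigation along a path
        self.display = display        # real-time display along that path

    def examine(self, subject, operator_input=None):
        scan = self.producer(subject)
        volume = self.converter(scan)
        region = self.selector(volume)
        for position in self.navigator(region):
            self.display(region, position, operator_input)
```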
61. A system for carrying out a virtual three-dimensional internal examination of an organ comprising: an apparatus for scanning said organ and producing scan data representative of said organ; an apparatus for converting said scan data into three-dimensional volume data elements; an apparatus for selecting an area to be displayed from said three-dimensional volume data elements; an apparatus for carrying out a guided navigation along a path of said selected three-dimensional volume data elements; and an apparatus for displaying said volume elements in real time along said path in response to operator input.
62. The system as claimed in clause 61, characterized in that said three-dimensional volume data elements include opacity coefficients and said display apparatus responds to said opacity coefficients.
63. The system as claimed in clause 62, characterized in that said action apparatus is capable of changing the opacity coefficients of selected volume data elements.
64. The system as claimed in clause 63, characterized in that said display apparatus is capable of exhibiting a volume element translucently.
65. The system as claimed in clause 61, characterized in that said conversion apparatus and said action apparatus are contained within a single unit.
66. The system as claimed in clause 61, characterized in that said scanning data is stored separately from the conversion apparatus.
67. The system as claimed in clause 61, characterized in that said scanning data is stored separately from said selection apparatus.
68. The system as claimed in clause 61, further characterized in that it includes at least one additional selection apparatus, action apparatus and display apparatus for carrying out additional three-dimensional virtual examinations of said organ.

A B S T R A C T

The invention is a system and method for generating a three-dimensional visualization image of an object, such as an organ, using volume visualization techniques, and for exploring the image using a guided navigation system that allows the operator to travel along a flight path and to adjust the view to a particular portion of the image of interest in order, for example, to identify polyps, cysts or other abnormal features in the displayed organ. An electronic biopsy can also be carried out on an identified growth or mass in the displayed object.
MXPA/A/1999/002340A 1996-09-16 1999-03-10 System and method for performing a three-dimensional virtual examination MXPA99002340A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08714697 1996-09-16

Publications (1)

Publication Number Publication Date
MXPA99002340A 2000-05-01
