MXPA01009387A - System and method for performing a three-dimensional virtual examination, navigation and visualization - Google Patents

System and method for performing a three-dimensional virtual examination, navigation and visualization

Info

Publication number
MXPA01009387A
MXPA01009387A MXPA/A/2001/009387A
Authority
MX
Mexico
Prior art keywords
colon
virtual
lumen
data
volumetric
Prior art date
Application number
MXPA/A/2001/009387A
Other languages
Spanish (es)
Inventor
Arie E Kaufman
Zhengrong Liang
Mark R Wax
Ming Wan
Dongqing Chen
Original Assignee
The Research Foundation Of State University Of New York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Research Foundation Of State University Of New York filed Critical The Research Foundation Of State University Of New York
Publication of MXPA01009387A publication Critical patent/MXPA01009387A/en


Abstract

A system and method for generating a three-dimensional visualization image of an object such as an organ using volume visualization techniques and exploring the image using a guided navigation system which allows the operator to travel along a flight path and to adjust the view to a particular portion of the image of interest in order, for example, to identify polyps, cysts or other abnormal features in the visualized organ. An electronic biopsy can also be performed on an identified growth or mass in the visualized object. Improved fly-path generation and volume rendering techniques provide enhanced navigation through, and examination of, a region of interest.

Description

SYSTEM AND METHOD FOR PERFORMING A THREE-DIMENSIONAL VIRTUAL EXAMINATION, NAVIGATION AND VISUALIZATION This application is a continuation-in-part of U.S. patent application Serial No. 09/343,012, filed on June 29, 1999 and entitled "System And Method For Performing A Three-Dimensional Virtual Segmentation And Examination", which in turn is a continuation-in-part of U.S. Patent No. 5,971,767, entitled "System and Method for Performing a Three Dimensional Virtual Examination". The present application also claims the benefit of U.S. provisional patent application Serial No. 60/125,041, filed on March 18, 1999 and entitled "Three Dimensional Virtual Examination". TECHNICAL FIELD The present invention relates to a system and method for performing a volume-based three-dimensional virtual examination and, more specifically, to a system that offers improved visualization and navigation properties.
BACKGROUND OF THE INVENTION Colon cancer remains one of the leading causes of death worldwide. The early detection of cancerous growths, which in the human colon manifest as polyps, can greatly improve a patient's chances of recovery. Currently there are two conventional methods for detecting polyps and other growths in a patient's colon. The first is colonoscopy, which uses a flexible fiber-optic tube called a colonoscope to visually examine the colon after inserting it rectally. The doctor can maneuver the tube to look for any abnormal growth in the colon. Although reliable, colonoscopy is relatively expensive, time-consuming, and an uncomfortable, invasive and painful procedure for the patient.
The second detection technique consists of applying a barium enema and taking a two-dimensional X-ray of the colon. The barium enema is used to coat the colon with barium, and the two-dimensional X-ray is taken to capture an image of the colon. However, barium enemas do not always provide a view of the entire colon, require intensive prior preparation and handling of the patient, usually depend on the skill of the operator, expose the patient to excessive radiation, and may be less sensitive than colonoscopy. Given the deficiencies of the conventional methods described above, a more reliable, less intrusive and less expensive way to check the colon for polyps is desirable. A method for examining other human organs, such as the lungs, for masses in a reliable and cost-effective way, and with less discomfort for the patient, is also desirable.
The two-dimensional visualization of human organs using available medical imaging devices, such as computed tomography and magnetic resonance imaging, has been widely used for patient diagnosis. Three-dimensional images can be formed by stacking and interpolating the two-dimensional images obtained from the scanning machines. Imaging an organ and visualizing its volume in three dimensions is beneficial because it involves no physical intrusion and the data can be manipulated easily. However, the analysis of a three-dimensional volumetric image must be performed properly in order to take full advantage of virtually observing an organ from the inside.
When observing the three-dimensional volumetric virtual image of an environment, a functional model must be used to analyze the virtual space. One possible model is a virtual camera that the observer can use as a reference point for analyzing the virtual space. Camera control in the context of navigation within a general three-dimensional virtual environment has been studied previously. There are two conventional ways of controlling the camera when navigating virtual spaces. In the first, the operator fully controls the camera, placing it at different positions and angles to obtain the desired view. Being literally the "pilot" of the camera, the operator can explore a particular area and ignore the others. However, absolute control of the camera over a wide field would be tedious and exhausting, and the operator might not see all the important features between the start point and the end point of the exploration.
The second technique for controlling the camera is a planned navigation method, which assigns the camera a predetermined route to follow that cannot be modified by the operator. This is equivalent to having an "autopilot", which lets the operator concentrate on the virtual space being observed without having to worry about maneuvering within the environment being analyzed. However, this second technique does not give the observer the option of modifying the course or investigating an interesting area observed along the trajectory.
The ideal would be to use a combination of the two navigation techniques described above, so as to combine their advantages while minimizing their respective drawbacks. It would be desirable to apply such a flexible navigation technique to the examination of human or animal organs represented in a virtual three-dimensional space in order to perform a thorough, painless and non-intrusive examination. The desired navigation technique would also allow the operator to examine both the exterior and the interior of an organ in a virtual three-dimensional space in a flexible and complete way. It would further be desirable to display the exploration of the organ in real time using a technique that minimizes the computations necessary to visualize the organ. The desired technique should also be applicable to the exploration of any virtual object.
Another object of the invention is to assign opacity coefficients to each volumetric element of the representation in order to render particular elements transparent or translucent to varying degrees, so as to customize the visualization of the portion of the object being observed. A section of the object can also be composited using the opacity coefficients.
SUMMARY OF THE INVENTION The invention generates a three-dimensional visualization image of an object, such as a human organ, using volume visualization techniques, and explores the virtual image using a guided navigation system that allows the operator to travel along a predefined flight path and to adjust both the position and the viewing angle toward a particular portion of interest in the image away from the predefined path, in order to identify polyps, cysts or other abnormal features of the organ.
According to a navigation method for performing virtual examinations, a path is generated through a virtual organ such as, for example, the colon lumen. From the volume element representation of the colon lumen, volume shrinking from the virtual colon lumen wall is used to generate a reduced data set of the colon lumen. From the reduced data set of the colon lumen, a minimum distance path is generated between the endpoints of the virtual colon lumen. Control points are then obtained along the minimum distance path along the length of the virtual colon lumen. The control points are then centered within the virtual colon lumen. Finally, a line is interpolated between the centered control points to define the final navigation path.
In the above method for generating a path, the volume shrinking step may include the following steps: representing the colon lumen as a plurality of image data sets; applying a discrete wavelet transform to the image data to generate a plurality of sub-data sets with components at a plurality of frequencies; and selecting the lowest-frequency components of the sub-data sets.
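The shrink-then-select step above amounts to keeping the low-pass subband of the transform. As a minimal sketch (an illustration, not the patent's actual implementation), the lowest-frequency subband of a one-level 3-D Haar wavelet transform reduces to averaging each 2x2x2 block of voxels:

```python
def shrink_volume(vol):
    """Keep the lowest-frequency subband of a one-level 3-D Haar
    transform, which is the average of each 2x2x2 voxel block.
    `vol` is a nested list vol[z][y][x] with even dimensions."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = []
    for z in range(0, nz, 2):
        plane = []
        for y in range(0, ny, 2):
            row = []
            for x in range(0, nx, 2):
                block = (vol[z + dz][y + dy][x + dx]
                         for dz in (0, 1) for dy in (0, 1) for dx in (0, 1))
                row.append(sum(block) / 8.0)  # low-pass Haar coefficient
            plane.append(row)
        out.append(plane)
    return out

# A uniform 4x4x4 volume shrinks to a uniform 2x2x2 volume; repeated
# application shrinks the data set further, level by level.
vol = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
small = shrink_volume(vol)
```

Each application halves every dimension, so a few levels shrink the colon lumen data set enough to make the subsequent minimum-distance path search cheap.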
Another method for generating a path through the virtual colon lumen during virtual colonoscopy includes the step of dividing the virtual colon lumen into a number of segments. A point is selected within each segment, and the points are centered with respect to the wall of the virtual colon lumen. The centered points are then connected to create the path.
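The divide-center-join procedure can be sketched as follows. This toy version (a simplifying assumption for illustration) works on a 2-D lumen voxel set and centers each control point at the centroid of the lumen voxels in its segment:

```python
def centerline_by_segments(lumen, n_segments):
    """Sketch of segment-based path generation: split the lumen voxel set
    into slabs along its principal axis, center a control point within
    each slab (here: the centroid of the slab's lumen voxels), and return
    the centered points in order, ready to be joined into a path.
    `lumen` is a list of (x, y) voxel coordinates."""
    ys = [p[1] for p in lumen]
    y0, y1 = min(ys), max(ys)
    step = (y1 - y0 + 1) / n_segments
    path = []
    for i in range(n_segments):
        lo, hi = y0 + i * step, y0 + (i + 1) * step
        seg = [p for p in lumen if lo <= p[1] < hi]
        if seg:  # the centroid centers the point with respect to the wall
            cx = sum(p[0] for p in seg) / len(seg)
            cy = sum(p[1] for p in seg) / len(seg)
            path.append((cx, cy))
    return path

# A straight tube of lumen voxels spanning x = 3..6 for each y.
lumen = [(x, y) for y in range(10) for x in range(3, 7)]
path = centerline_by_segments(lumen, 5)
```

Joining consecutive control points (or interpolating a smooth curve through them, as described above) yields the navigation path.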
A method for examining the virtual colon lumen includes a volume rendering operation. From each viewpoint within the colon lumen, rays are cast through each image pixel. For each ray, the shortest distance from the viewpoint to the colon lumen wall is determined. If that distance exceeds a predetermined sampling interval, the process leaps that distance along the ray and assigns values based on an open-space transfer function to the points along the skipped portion of the ray. If the distance does not exceed the sampling interval, the current point is sampled and its displayable properties are determined according to a transfer function.
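The distance-based leaping can be sketched as follows. This is a simplified, hypothetical version: it only records where samples are actually taken, whereas a real renderer would shade each sample through the transfer function and accumulate color and opacity along the ray:

```python
def cast_ray(origin, direction, dist_to_wall, sample_step, max_t):
    """Sketch of space-leaping ray casting: while marching a ray, query
    the shortest distance from the current point to the colon wall; if it
    exceeds the sampling interval, leap that whole distance (the space is
    known to be empty), otherwise take a regular sample.
    `direction` is assumed to be a unit vector. Returns the ray
    parameters t at which samples were taken."""
    samples, t = [], 0.0
    while t < max_t:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d_wall = dist_to_wall(p)
        if d_wall > sample_step:
            t += d_wall          # leap over known-empty space
        else:
            samples.append(t)    # sample; shade via the transfer function
            t += sample_step
    return samples

# Toy scene: a wall at x = 10; distance is zero at and beyond the wall.
samples = cast_ray((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                   lambda p: max(0.0, 10.0 - p[0]), 0.5, 12.0)
```

Note how the ray crosses the 10-unit open interior in a single leap and only samples densely once it is near the wall, which is what makes real-time rendering of the lumen feasible.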
The image generation and volume rendering methods also lend themselves to performing a virtual biopsy of a region, such as the colon wall or a suspicious growth. From the three-dimensional structure of a region acquired by the imaging scanner, volume rendering is applied with an initial transfer function suitable for navigating the colon lumen and viewing the surface of the region. When a suspicious area is detected, dynamically altering the transfer function allows an operator, such as a doctor, to selectively change the opacity of the region and of the composited information being viewed. This makes it possible to view the internal structure of a suspicious area, such as a polyp, in three dimensions.
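A minimal sketch of such a dynamically alterable transfer function follows (the intensity and opacity control points are illustrative assumptions, not values from the patent). Swapping the opacity assigned to wall-intensity voxels is what switches between a surface view and an "electronic biopsy" view:

```python
def make_transfer_function(points):
    """Piecewise-linear opacity transfer function. `points` is a sorted
    list of (intensity, opacity) control points; intensities outside the
    range are clamped to the end values."""
    def opacity(v):
        if v <= points[0][0]:
            return points[0][1]
        if v >= points[-1][0]:
            return points[-1][1]
        for (x0, a0), (x1, a1) in zip(points, points[1:]):
            if x0 <= v <= x1:
                # linear interpolation between adjacent control points
                return a0 + (a1 - a0) * (v - x0) / (x1 - x0)
    return opacity

# Surface view: wall-intensity voxels (here, intensity 100) fully opaque.
surface = make_transfer_function([(0, 0.0), (100, 1.0)])
# Biopsy view: the same wall made translucent so interior structure shows.
biopsy = make_transfer_function([(0, 0.0), (100, 0.2)])
```

In an interactive system, the operator's slider or mouse input would rebuild the control-point list on the fly, and the next rendered frame would composite the volume with the new opacities.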
In still another method according to the present invention, polyps located on the surface of the region under study can be detected automatically. The colon lumen is represented by a plurality of volume units. The surface of the colon lumen is represented as a continuously second-differentiable surface in which each surface volume unit has an associated Gaussian curvature. The Gaussian curvatures can be searched and evaluated automatically to detect local features that deviate from the surrounding trend. Feature sites corresponding to convex, hill-like protrusions on the surface of the colon wall are classified as polyps for further analysis.
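As an illustrative sketch (a deliberate simplification: the method above looks for deviations from the local trend, while this toy version uses a global outlier test), surface elements whose Gaussian curvature stands out from the rest of the wall can be flagged like this:

```python
def flag_polyp_candidates(curvatures, n_sigma=2.0):
    """Sketch of the automatic polyp search: surface units whose Gaussian
    curvature deviates strongly from the wall's typical curvature (here
    simplified to a mean +/- n_sigma * std test) are flagged as hill-like
    protrusions to be examined more closely. Returns flagged indices."""
    n = len(curvatures)
    mean = sum(curvatures) / n
    var = sum((c - mean) ** 2 for c in curvatures) / n
    std = var ** 0.5
    return [i for i, c in enumerate(curvatures)
            if abs(c - mean) > n_sigma * std]

# A mostly flat wall (curvature ~0) with one hill-like bump at index 20.
curvatures = [0.0] * 20 + [5.0]
candidates = flag_polyp_candidates(curvatures)
```

A perfectly uniform wall produces no candidates, while the single bump is flagged for closer analysis.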
Another method according to the present invention for generating virtual images, such as performing a virtual colonoscopy, includes the steps of obtaining an image data set of a region, such as the colon, and converting that data into volume units. The volume units representing a wall of the colon lumen are identified, and a path for navigating through the colon lumen is created. Then, at least one transfer function is used to map color and opacity coefficients to the colon lumen wall. The colon can then be displayed along the path in accordance with the assigned transfer functions.
In the virtual colonoscopy method, the step of generating a path can include volume shrinking from the virtual colon lumen wall to generate a reduced data set. From this reduced data set, a minimum distance path is generated between the endpoints of the virtual colon lumen. Control points can then be assigned within the virtual colon lumen along the minimum distance path. The control points are subsequently centered within the virtual colon lumen, and a connecting line is interpolated between the centered control points to complete the navigation path.
Alternatively, a path can be generated by dividing the virtual colon lumen into a plurality of segments, selecting a point within each segment, centering the points with respect to the wall of the virtual colon lumen, and connecting the centered points to create the path.
According to the present invention, the system for three-dimensionally imaging, navigating and examining a region includes an imaging scanner, such as a magnetic resonance or computed tomography scanner, for acquiring image data. A processor converts the image data into a plurality of volume elements forming a volume element data set. The processor also performs the following steps: identifying the volume units that represent the colon lumen wall; creating a path for navigating through the colon lumen; and applying at least one transfer function to map colors and opacities to the colon lumen wall. A display unit operatively coupled to the processor displays a representation of the region in accordance with the path and the at least one transfer function.
Another computer-based system for virtual examination, formed in accordance with an embodiment of the present invention, is based on a bus architecture. A scanner interface board is coupled to the bus structure and provides data from the imaging scanner to the bus. A main memory is also coupled to the bus. A volume rendering board with locally resident volume rendering memory receives at least part of the imaging scanner data and stores this information in the volume rendering memory during a volume rendering operation. A graphics board is coupled to the bus structure and to a display monitor for displaying images from the system. A processor is operatively coupled to the bus structure and responds to the data from the imaging scanner. The processor converts the imaging scanner data into a volume element representation, stores the volume element representation in the main memory, partitions the volume element representation into slices, and transfers the slices to the volume rendering board.
BRIEF DESCRIPTION OF THE DRAWINGS Further objects, features and advantages of the invention will become apparent from the following detailed description and the accompanying drawings, which illustrate a preferred embodiment of the invention, and in which: Figure 1 is a flow chart of the steps for virtually examining an object, specifically a colon, according to the invention; Figure 2 is an illustration of a "submarine" camera model performing guided navigation in the virtual organ; Figure 3 is an illustration of a pendulum used to model the balance angle of the "submarine" camera; Figure 4 is a diagram illustrating a two-dimensional cross section of a volumetric colon identifying two blocking walls; Figure 5 is a diagram illustrating a two-dimensional cross section of a volumetric colon in which the start and finish volume elements are selected; Figure 6 is a diagram illustrating a two-dimensional cross section of a volumetric colon showing a discrete sub-volume delimited by the blocking walls and the colon surface; Figure 7 is a diagram illustrating a two-dimensional cross section of a volumetric colon with multiple peeled layers; Figure 8 is a diagram illustrating a two-dimensional cross section of a volumetric colon containing the remaining flight path; Figure 9 is a flow chart of the steps for generating a volume visualization of the scanned organ; Figure 10 is an illustration of a virtual colon that has been subdivided into cells; Figure 11A is a graphic representation of an organ that is being examined virtually; Figure 11B is a graphic representation of a tree diagram generated while depicting the organ of Figure 11A; Figure 11C is a further graphic representation of a tree diagram generated while depicting the organ of Figure 11A; Figure 12A is a graphic representation of a scene to be rendered, with objects within certain cells of the scene; Figure 12B is a graphic representation of a tree diagram generated while rendering the scene of Figure 12A; Figures 12C-12E are further graphic representations of tree diagrams generated while rendering the image of Figure 12A; Figure 13 is a two-dimensional representation of a virtual colon containing a polyp whose layers can be peeled; Figure 14 is a diagram of a system used to virtually examine a human organ according to the invention; Figure 15 is a flow chart depicting an improved image segmentation method; Figure 16 is a graph of voxel intensity versus frequency for a data set obtained by a typical computed tomography scan of the abdomen; Figure 17 is a perspective view diagram of an intensity vector structure including a voxel of interest and its selected neighbors; Figure 18A is a typical slice of an image obtained by computed tomography scanning of a human abdominal region, primarily showing a region that includes the lungs; Figure 18B is a pictorial diagram showing the identification of the lung region in the image slice of Figure 18A; Figure 18C is a pictorial diagram showing the removal of the lung volume identified in Figure 18B; Figure 19A is a typical slice of an image obtained by computed tomography scanning of a human abdominal region, primarily showing a region that includes part of the colon and bone; Figure 19B is a pictorial diagram showing the identification of the colon and bone region in the image slice of Figure 19A; Figure 19C is a pictorial diagram showing the scanned image of Figure 19A with the bone regions removed; and Figure 20 is a flow chart showing a method for applying texture to monochromatic image data.
Figure 21 is a flow diagram showing a volume rendering method that uses a fast perspective ray-casting technique.
Figure 22 is a flow diagram showing a method for determining the central route through the lumen of a colon using the volume reduction technique.
Figure 23 is a flow chart that also shows a volume reduction technique that can be used in the method illustrated in Figure 22.
Figure 24 is a three-dimensional pictorial representation of a segmented colon lumen and its central path.
Figure 25 is a flowchart exemplifying a method for generating a central path through the lumen of a colon using a segmentation technique.
Figure 26 is a block diagram of an embodiment of a system based on a personal computer bus architecture.
Figure 27 is a flowchart exemplifying a method for generating volumetric images using the system of Figure 26.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Although the methods and systems described in this application can be applied to any object to be examined, the preferred embodiment described is the examination of an organ of the human body, specifically the colon. The colon is an elongated tube with angles and curves, which makes it particularly suited to virtual examination. In this way the patient saves money and avoids the discomfort and risks of a physical probe. Other examples of organs that can be examined in this way include the lungs, the stomach and parts of the gastrointestinal system, and the heart and blood vessels.
Figure 1 illustrates the steps necessary to perform a virtual colonoscopy using volume visualization techniques. In step 101 the colon is prepared for scanning so that it can be viewed for examination, if required by the doctor or the particular scanning instrument. This preparation may include cleansing the colon with a "cocktail", a liquid that enters the colon after being administered orally and passing through the stomach. The cocktail forces the patient to expel the stool present in the colon. One example of such a purgative is Golytely. Additionally, in the case of the colon, air or CO2 can be insufflated to distend the colon and facilitate its scanning and examination. This can be achieved by inserting a small tube into the rectum and pumping in approximately 1,000 cc of air to expand the colon. Depending on the type of scanner used, it may be necessary for the patient to drink a contrast agent, such as barium, to coat any unexpelled stool so that it can be distinguished from the walls of the colon. Another option is to remove the fecal matter virtually using the virtual colon examination method, either before or during the virtual examination, as explained later in this specification. Step 101 does not need to be performed in all examinations, as indicated by the dashed line in Figure 1.
In step 103, the organ to be examined is scanned. The scanner can be a device well known in the art, such as a helical computed tomography scanner for scanning the colon, or a Zenith magnetic resonance machine for scanning a lung marked with xenon gas, for example. The scanner must be able to take multiple images from different positions around the body during a single breath-hold, in order to generate the data necessary for volume visualization. For example, a single computed tomography image might use a 5 mm wide X-ray beam, a pitch of 1:1 to 2:1, and a 40 cm field of view, covering from the top of the splenic flexure of the colon to the rectum.
In addition to scanning, there are other methods for obtaining discrete data representations of an object. Voxel data representing an object can be obtained from a geometric model by the techniques described in U.S. Patent No. 5,038,302, entitled "Method of Converting Continuous Three-Dimensional Geometrical Representations into Discrete Three-Dimensional Voxel-Based Representations Within a Three-Dimensional Voxel-Based System" by Kaufman, issued August 8, 1991 and filed July 26, 1988, which is incorporated herein by reference. Additionally, data can be generated from a computer model of an image, which can be converted into three-dimensional voxels and analyzed according to this invention. An example of this type of data is a computer simulation of the turbulence surrounding a space shuttle.
In step 104, the scanned images are converted into three-dimensional volume elements (voxels). In the preferred embodiment for examining the colon, the scan data are reformatted into 5 mm thick slices at 1 mm or 2.5 mm increments and reconstructed into 1 mm slices, each represented as a matrix of 512 by 512 pixels. In this way, voxels of approximately 1 cubic mm are created. Depending on the length of the scan, a large number of two-dimensional slices is generated. This set of two-dimensional slices is then reconstructed into three-dimensional voxels. The process of converting the scanner's two-dimensional images into three-dimensional voxels can be performed by the scanner itself or by a separate machine, such as a computer, using techniques well known to those skilled in the art.
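One part of this reformatting can be sketched as follows; this hypothetical example shows only the inter-slice linear interpolation that could be used to synthesize intermediate thin slices between two adjacent reconstructed slices:

```python
def interpolate_slices(slice_a, slice_b, n_between):
    """Sketch of inter-slice interpolation used when reformatting scan
    data toward (approximately) cubic voxels: linearly blend each pixel
    of two adjacent 2-D slices to synthesize `n_between` intermediate
    slices, returning the full stack including both originals."""
    stack = [slice_a]
    for i in range(1, n_between + 1):
        w = i / (n_between + 1)  # blend weight for this intermediate slice
        stack.append([[(1 - w) * a + w * b
                       for a, b in zip(row_a, row_b)]
                      for row_a, row_b in zip(slice_a, slice_b)])
    stack.append(slice_b)
    return stack

# Two 2x2 slices, three synthesized slices in between: five slices total.
stack = interpolate_slices([[0.0, 0.0], [0.0, 0.0]],
                           [[4.0, 4.0], [4.0, 4.0]], 3)
```

Linear blending is only one choice; a production pipeline might use higher-order interpolation, but the principle of filling the gap between thick slices is the same.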
(For example, see U.S. Patent No. 4,985,856, entitled "Method and Apparatus for Storing, Accessing, and Processing Voxel-Based Data" by Kaufman et al., issued January 15, 1991 and filed November 11, 1988, which is incorporated herein by reference.)
Step 105 allows the operator to define the portion of the selected organ to be examined. A doctor may be interested in a particular section of the colon that is prone to developing polyps. The doctor can view a two-dimensional slice overview map to indicate the section to be examined, and can indicate the start and end points of the path to be viewed. A conventional computer or computer interface (for example, a keyboard, mouse or fixed mouse) can be used to indicate the portion of the colon to be inspected. The doctor or operator can use a grid or coordinate system to enter the desired points with the keyboard, or mark them by clicking with the mouse. The complete image of the colon can also be viewed, if desired.
In step 107, the guided or planned navigation of the virtual organ to be examined is carried out. Guided navigation is defined as navigating through an environment along a predefined or automatically determined flight path that the operator can manually adjust at any time. After the scan data have been converted into three-dimensional voxels, the inside of the organ must be traversed from the selected start point to the selected end point. The virtual examination is modeled on a tiny camera traveling through the virtual space with a lens pointed toward the end point. The guided navigation technique provides a level of interaction with the camera, so that the camera can navigate automatically through the virtual environment when the operator does not intervene and, at the same time, allows the operator to steer the camera when necessary. The preferred embodiment of guided navigation uses a physically based camera model that employs potential fields to control the movement of the camera, as described in detail with respect to Figures 2 and 3. In step 109, which can be performed concurrently with step 107, the interior of the organ is displayed from the viewpoint of the camera model along the flight path of the guided navigation. Three-dimensional visualizations can be generated using techniques well known to those skilled in the art, such as marching cubes. However, to display the colon in real time, a technique is required that reduces the extensive number of data computations necessary to render the virtual organ. Figure 9 describes this display step in more detail. The method described in Figure 1 can also be applied to the simultaneous scanning of multiple organs of the body. For example, a patient can be examined for cancerous growths in both the colon and the lungs. The method of Figure 1 would be modified to scan all the areas of interest in step 103 and to select the organ to be examined in step 105.
For example, the doctor or operator might initially choose to virtually explore the colon and later the lung. Alternatively, two doctors with different specialties could virtually explore two different scanned organs related to their respective specialties. Following step 109, the next organ to be examined is selected, and its portion is defined and explored. This process is repeated until all the organs to be examined have been processed. The steps described in relation to Figure 1 can also be applied to the exploration of any object that can be represented by volume elements. For example, an architectural structure or an inanimate object can be represented and examined in this way.
Figure 2 depicts a "submarine" camera control model that performs the guided navigation technique of step 107. When there is no operator control during guided navigation, the default navigation is similar to planned navigation, which automatically steers the camera along a flight path from one selected end of the colon to the other. During the planned navigation phase, the camera stays at the center of the colon to obtain better views of the colon surface. When an interesting region is encountered, an operator of the virtual camera using guided navigation can interactively bring the camera close to the specific region and direct its motion and angle to study the area of interest in detail, without unintentionally colliding with the walls of the colon. The operator can control the camera with a standard interface device, such as a keyboard or mouse, or a non-standard device such as a fixed mouse. In order to fully operate a camera in a virtual environment, six degrees of freedom are required: the camera must be able to move horizontally, vertically and in the Z direction (axes 217), and rotate about three further degrees of freedom (axes 219), so that it can move to and scan all sides and angles of the virtual environment. The camera model for guided navigation includes an inextensible, weightless rod 201 connecting two particles x1 203 and x2 205, both of which are subject to a potential field. The potential field is assigned its highest values at the walls of the organ in order to push the camera away from the walls.
x1 and x2 denote the positions of the two particles, which are assumed to have the same mass m. A camera is mounted at the head of the submarine, x1 203; its viewing direction coincides with x2x1. The submarine can perform translation and rotation about the center of mass x of the model as the two particles are affected by forces from the potential field V(x) defined below, by friction forces, and by any simulated external force. The relations between x1, x2 and x are as follows:

\[
\mathbf{x} = (x, y, z), \quad
\mathbf{r} = (r\sin\theta\cos\phi,\; r\sin\theta\sin\phi,\; r\cos\theta), \quad
\mathbf{x}_1 = \mathbf{x} + \mathbf{r}, \quad
\mathbf{x}_2 = \mathbf{x} - \mathbf{r}, \tag{1}
\]

where r, \(\theta\) and \(\phi\) are the polar coordinates of the vector \(\mathbf{x}\mathbf{x}_1\). The kinetic energy of the model, T, is defined as the sum of the kinetic energies of the motions of x1 and x2:

\[
T = \frac{m}{2}\left(|\dot{\mathbf{x}}_1|^2 + |\dot{\mathbf{x}}_2|^2\right)
  = m|\dot{\mathbf{x}}|^2 + m|\dot{\mathbf{r}}|^2
  = m(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) + mr^2\left(\dot{\theta}^2 + \dot{\phi}^2\sin^2\theta\right). \tag{2}
\]

The equations of motion of the submarine model are then obtained using LaGrange's equation:

\[
\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_j}\right) - \frac{\partial T}{\partial q_j}
= \sum_{i=1}^{2} \mathbf{F}_i \cdot \frac{\partial \mathbf{x}_i}{\partial q_j}, \tag{3}
\]

where the \(q_j\) are the generalized coordinates of the model, which can be regarded as functions of time t:

\[
(q_1, q_2, q_3, q_4, q_5, q_6) = (x, y, z, \theta, \phi, \psi) = \mathbf{q}(t), \tag{4}
\]

where \(\psi\) denotes the balance angle of the camera system, which will be explained later. The \(\mathbf{F}_i\) are called the generalized forces. The submarine is controlled by applying a simulated external force \(\mathbf{F}_{ext}\) to x1, and it is assumed that both x1 and x2 are affected by the forces of the potential field and by friction acting opposite to the velocity of each particle. Consequently, the generalized forces are formulated as follows:

\[
\mathbf{F}_1 = -m\nabla V(\mathbf{x}_1) - k\dot{\mathbf{x}}_1 + \mathbf{F}_{ext}, \qquad
\mathbf{F}_2 = -m\nabla V(\mathbf{x}_2) - k\dot{\mathbf{x}}_2, \tag{5}
\]

where k denotes the friction coefficient of the system. The operator applies the external force \(\mathbf{F}_{ext}\) by simply clicking the mouse button in the desired direction 207 within the generated image, as shown in Figure 2. The camera model is then moved in that direction.
This allows the operator to control at least five degrees of freedom of the camera with a single click of the mouse button. From Equations (2), (3) and (5), the accelerations of the five parameters of the submarine model can be derived as:

\[
\ddot{x} = -\frac{1}{2}\left(\frac{\partial V(\mathbf{x}_1)}{\partial x} + \frac{\partial V(\mathbf{x}_2)}{\partial x}\right) - \frac{k\dot{x}}{m} + \frac{F_x}{2m},
\]
\[
\ddot{y} = -\frac{1}{2}\left(\frac{\partial V(\mathbf{x}_1)}{\partial y} + \frac{\partial V(\mathbf{x}_2)}{\partial y}\right) - \frac{k\dot{y}}{m} + \frac{F_y}{2m},
\]
\[
\ddot{z} = -\frac{1}{2}\left(\frac{\partial V(\mathbf{x}_1)}{\partial z} + \frac{\partial V(\mathbf{x}_2)}{\partial z}\right) - \frac{k\dot{z}}{m} + \frac{F_z}{2m},
\]
\[
\ddot{\theta} = \dot{\phi}^2\sin\theta\cos\theta - \frac{k\dot{\theta}}{m} + \frac{1}{2mr}\left(F_x\cos\theta\cos\phi + F_y\cos\theta\sin\phi - F_z\sin\theta\right),
\]
\[
\ddot{\phi} = -\frac{2\dot{\theta}\dot{\phi}\cos\theta}{\sin\theta} - \frac{k\dot{\phi}}{m} + \frac{1}{2mr\sin\theta}\left(-F_x\sin\phi + F_y\cos\phi\right), \tag{6}
\]

where \(\dot{x}\) and \(\ddot{x}\) denote the first and second derivatives of x, respectively, and \(\left(\frac{\partial V(\mathbf{x})}{\partial x}, \frac{\partial V(\mathbf{x})}{\partial y}, \frac{\partial V(\mathbf{x})}{\partial z}\right)\) denotes the gradient of the potential at a point \(\mathbf{x}\).
The terms $\dot\phi^2\sin\theta\cos\theta$ of $\ddot\theta$ and $\frac{2\dot\theta\dot\phi\cos\theta}{\sin\theta}$ of $\ddot\phi$ are called the centrifugal force and the Coriolis force, respectively, and they govern the exchange of angular velocities of the submarine. Since the model does not have a moment of inertia defined for the rod of the submarine, these terms tend to cause an overflow in the numerical calculation of φ.
Fortunately, these terms become significant only when the angular velocities of the submarine model are significant, which in essence means that the camera is moving too fast. Since it makes no sense to allow the camera to move so fast that the organ cannot be viewed properly, these terms are minimized in our application to avoid the overflow problem.
From the first three formulas of Equation (6), it follows that the submarine cannot be propelled by the external force against the potential field whenever the magnitude of the potential gradient exceeds that of the applied external force. Since the speed of the submarine and the external force F_ext have upper limits in our application, by assigning sufficiently high potential values at the boundary of the objects, it can be guaranteed that the submarine never collides with objects or walls in the environment.
As mentioned above, it is necessary to consider the roll angle ψ of the camera system. One possible option allows the operator full control of the angle ψ; however, although the operator can then rotate the camera freely around the rod of the model, he or she can easily become disoriented. The preferred technique assumes that the up direction of the camera is connected to a pendulum of mass m2 301, which rotates freely around the rod of the submarine, as shown in Figure 3. The direction of the pendulum, r2, is expressed as:

$$r_2 = r_2\left(\cos\theta\cos\phi\sin\psi + \sin\phi\cos\psi,\ \cos\theta\sin\phi\sin\psi - \cos\phi\cos\psi,\ -\sin\theta\sin\psi\right).$$
While it is possible to calculate the precise motion of this pendulum together with the motion of the submarine, doing so complicates the system equations too much. Therefore, all generalized coordinates except the roll angle ψ are assumed to be constant, and the independent kinetic energy of the pendular system is accordingly defined as:

$$T_p = \frac{m_2 r_2^2}{2}\dot\psi^2.$$

This simplifies the model for the roll angle. Since in this model the gravitational force $F_g = m_2 g = (m_2 g_x, m_2 g_y, m_2 g_z)$ is assumed to act at the mass point m2, the acceleration of ψ, Equation (7), can be obtained using the Lagrange equation. From Equations (6) and (7), the generalized coordinates q(t) and their derivatives q̇(t) are then approximated using the Taylor series as:

$$q(t+h) = q(t) + h\dot{q}(t) + \frac{h^2}{2}\ddot{q}(t) + O(h^3),$$
$$\dot{q}(t+h) = \dot{q}(t) + h\ddot{q}(t) + O(h^2),$$

to move the submarine freely. To smooth the motion of the submarine, the time step h is chosen as a balance: as small as possible to smooth the motion, but as large as necessary to reduce the computational cost.
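The Taylor-series stepping above can be sketched as a small integrator. The one-dimensional spring-like potential and the friction constants in the demo acceleration are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the Taylor-series time stepping used to advance the
# submarine's generalized coordinates: q(t+h) = q + h*q' + (h^2/2)*q'',
# q'(t+h) = q' + h*q''. The acceleration function (a spring-like potential
# with friction) is an illustrative stand-in for Equation (6).
def step(q, qdot, accel, h):
    qdd = [accel(qi, vi) for qi, vi in zip(q, qdot)]
    q_new = [qi + h * vi + 0.5 * h * h * ai for qi, vi, ai in zip(q, qdot, qdd)]
    qdot_new = [vi + h * ai for vi, ai in zip(qdot, qdd)]
    return q_new, qdot_new

def demo_accel(q, v, k=0.5, m=1.0):
    # -dV/dq with V(q) = q^2/2, plus friction -k*v/m (illustrative only)
    return -q - (k / m) * v

def simulate(q0, v0, h=0.01, steps=100):
    q, v = list(q0), list(v0)
    for _ in range(steps):
        q, v = step(q, v, demo_accel, h)
    return q, v
```

A small h keeps the motion smooth at the cost of more steps per unit of simulated time, which is exactly the balance the text describes.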
Definition of the potential field

The potential field in the submarine model of Figure 2 defines the boundaries (walls or other objects) in the virtual organ by assigning a high potential to the boundary in order to ensure that the submarine camera does not collide with the walls or other boundaries. If the operator attempts to move the camera model into a high-potential area, the camera model will be prevented from doing so unless the operator wishes to examine the organ beyond the boundary or, for example, inside a polyp. In the case of a virtual colonoscopy, a potential field value is assigned to each item of volumetric colon data (volume element). When a particular region of interest is designated in step 105 of Figure 1 with a start point and an end point, the voxels within the selected area of the scanned colon are identified using conventional blocking operations. Subsequently, a potential value is assigned to each voxel x of the selected volume based on the following three distance values: the distance from the end point, dt(x); the distance from the colon surface, ds(x); and the distance from the longitudinal axis of the colon space, dc(x). dt(x) is calculated using a conventional growing strategy. The distance from the colon surface, ds(x), is calculated using a conventional growing technique from the surface voxels inwards. To determine dc(x), the longitudinal axis of the colon is first extracted from the voxel data, and dc(x) is then calculated using the conventional growing strategy from the longitudinal axis of the colon.
To calculate the longitudinal axis of the selected colon area, defined by the start and end points specified by the user, the maximum value of ds(x) is located and denoted dmax. Then, a cost value of dmax − ds(x) is assigned to each voxel within the area of interest. Thus, voxels near the colon surface have high cost values and those near the longitudinal axis have relatively low cost values. Subsequently, based on this cost assignment, the single-source shortest path technique, which is well known to those skilled in the art, is applied to efficiently compute a minimum-cost path from the source point to the end point. This low-cost path indicates the longitudinal axis, or skeleton, of the colon section to be explored. This technique for determining the longitudinal axis is the preferred technique of the invention. To calculate the potential value V(x) of a voxel x within the area of interest, a formula combining the three distances dt(x), ds(x) and dc(x) is employed, where C1, C2, μ and ν are constants chosen for the task. In order to avoid collisions between the virtual camera and the virtual colonic surface, a sufficiently large potential value is assigned to all points outside the colon. The gradient of the potential field will therefore become so significant that, during operation, the submarine model camera will never collide with the colon wall.
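The single-source shortest path step can be sketched as follows; a small 2-D voxel grid stands in for the segmented volume, which is a simplification for brevity rather than part of the method as described.

```python
import heapq

# Sketch of the minimum-cost path ("longitudinal axis") extraction: each
# voxel carries cost dmax - ds(x), so paths hugging the centerline (large
# ds) are cheap; Dijkstra's algorithm then finds the skeleton from the
# start voxel to the end voxel. ds == 0 marks voxels outside the colon.
def centerline(ds, start, end):
    rows, cols = len(ds), len(ds[0])
    dmax = max(max(row) for row in ds)
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and ds[nr][nc] > 0:
                nd = d + (dmax - ds[nr][nc])  # cost of entering the voxel
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Because deep-interior voxels have near-zero cost, the recovered path naturally follows the centerline between the two user-selected endpoints.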
Another technique for determining the longitudinal axis of the route through the colon is called the "layer peeling" technique, and is shown in Figures 4 to 8.
Figure 4 shows a two-dimensional cross section of the volumetric colon, together with its two side walls 401 and 403. The operator chooses two blocking walls in order to define the section of the colon to be examined. Nothing can be seen beyond the blocking walls. This helps reduce the number of calculations needed to display the virtual representation. The blocking walls, together with the side walls, identify a contained volumetric shape of the colon that is to be explored.
Figure 5 shows two end points of the virtual examination path: the start volume element 501 and the end volume element 503. The operator chooses the start and end points in step 105 of Figure 1. The voxels between the start and end points and the sides of the colon are identified and marked, as indicated by the designated area in Figure 6. Voxels are three-dimensional representations of the picture element.
Subsequently, the layer-peeling technique is applied to the voxels identified and marked in Figure 6. The outermost layer of all the voxels (the one closest to the colon walls) is peeled off, then the next one, and so on, until only one inner layer of voxels remains. Stated differently, each voxel farthest from the center point is removed, unless its removal would break the connection of the path between the start voxel and the end voxel. Figure 7 shows the intermediate result after a number of peeling iterations have been performed on the virtual colon; the voxels closest to the colon walls have been eliminated. Figure 8 shows the final route of the camera model down the center of the colon after all the peeling iterations are complete. In essence, this produces a skeleton at the center of the colon, which becomes the desired path for the camera model.
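A minimal sketch of this layer-peeling idea on a 2-D grid follows; the connectivity test used here (a breadth-first search between the two endpoints) is an assumed stand-in for whatever connectivity check an implementation would actually use.

```python
from collections import deque

# Sketch of layer peeling: repeatedly remove boundary voxels, skipping any
# voxel whose removal would disconnect the start voxel from the end voxel.
# "region" is a set of (row, col) voxels; 4-connectivity is assumed.
def connected(region, start, end):
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == end:
            return True
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if n in region and n not in seen:
                seen.add(n)
                queue.append(n)
    return False

def peel(region, start, end):
    region = set(region)
    changed = True
    while changed:
        changed = False
        for v in sorted(region):
            if v in (start, end):
                continue
            r, c = v
            nbrs = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
            if all(n in region for n in nbrs):
                continue  # interior voxel, not on the current outer layer
            region.discard(v)
            if connected(region, start, end):
                changed = True  # safe to peel this voxel
            else:
                region.add(v)   # peeling would break the path; keep it
    return region
```

Peeling a filled slab this way leaves a one-voxel-wide path joining the two endpoints, the 2-D analogue of the skeleton of Figure 8.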
Z-buffer assisted visibility

Figure 9 describes a real-time visibility technique for displaying the virtual images seen by the camera model in the three-dimensional virtual volume representation of an organ. Figure 9 shows a display technique using a modified Z buffer, which corresponds to step 109 in Figure 1. The number of voxels that could possibly be seen from the camera model is extremely large. Unless the total number of elements (or polygons) that must be computed and displayed is reduced from the entire set of voxels in the scanned environment, the overall number of computations will make the visualization process excessively slow for a large internal area. In the present invention, however, it is only necessary to compute for display those images that are visible on the colon surface. The scanned environment can be subdivided into smaller sections, or cells. The Z-buffer technique then renders only the portion of the cells that are visible from the camera. The Z-buffer technique is also used for the three-dimensional representation of voxels. The use of a modified Z buffer reduces the number of visible voxels that must be computed and allows the virtual colon to be examined in real time by a physician or medical technician.
The area of interest for which the longitudinal axis was calculated in step 107 is subdivided into cells before the display technique is applied. Cells are groups of voxels that become a visibility unit. The voxels in each cell will be displayed as a group. Each cell contains a number of portals through which the other cells can be seen. The colon is subdivided beginning at the chosen start point and proceeding along the longitudinal axis 1001 towards the end point. The colon is then partitioned into cells (for example, cells 1003, 1005 and 1007 in Figure 10) whenever a predefined threshold distance is reached along the longitudinal axis. The threshold distance is based on the specifications of the platform on which the display technique runs, and on its storage and processing capacity. The cell size is directly related to the number of voxels the platform can store and process. One example of a threshold distance is 5 cm, although the distance can vary widely. Each cell has two cross sections that act as portals for viewing outside the cell, as shown in Figure 10.
Step 901 in Figure 9 identifies the cell within the selected organ that currently contains the camera. That cell will be displayed, along with all the other cells visible from that camera orientation. In step 903, a hierarchical tree diagram of the cells potentially visible from the camera (through defined portals) is built, as will be described in more detail below. The tree diagram contains a node for every cell that may be visible to the camera. Some of the cells may be transparent, with no blocking entities present, so that more than one cell may be visible in a single direction. In step 905, the subset of voxels of a cell that intersects the edges of the adjoining cells is stored at the outer edge of the tree diagram in order to determine more efficiently which cells are visible.
In step 907, the tree diagram is checked for double nodes. A double node occurs when two or more edges of a single cell border on the same adjacent cell. This can occur when a single cell is surrounded by another cell. If a double node is identified in the tree diagram, the method continues with step 909. If there is no double node, the process goes to step 911.
In step 909, the two cells forming the double node are merged into one large node. The tree diagram is corrected in this way, eliminating the problem of viewing the same cell twice because of a double node. This step is repeated for every double node that has been detected. The process then continues with step 911.
In step 911, the Z buffer is initialized with the largest Z value, where the Z value defines the distance away from the camera along the skeleton path. The tree is then traversed, checking the intersection values at each node. If a node intersection is covered, meaning that the current portal sequence is occluded (as determined by the Z-buffer test), the traversal of that branch of the tree stops. In step 913, each of the branches is traversed to check whether its nodes are covered, and they are displayed if they are not. In step 915, the image to be shown on the operator's monitor is constructed from the volume elements within the visible cells identified in step 913. This is carried out using one of a number of techniques known to those skilled in the art, such as volume rendering by compositing. Only the cells identified as potentially visible are displayed. This technique limits the number of cells that require computation in order to achieve a real-time display and, correspondingly, increases the display speed and performance. It is an improvement over prior techniques, which compute all the possibly visible data points whether or not they are actually displayed.
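The traversal of steps 911-913 can be sketched as a depth-first walk of the cell tree that prunes occluded branches. The interval-based occlusion test below is a deliberately simplified stand-in for the actual Z-buffer test.

```python
# Sketch of the modified Z-buffer traversal: walk the cell tree from the
# camera's cell, and stop descending a branch once its portal is occluded.
# Each node is (cell_name, portal_depth, children); "occluded" is modeled
# here as a predicate on the portal depth, standing in for the Z-buffer test.
def visible_cells(node, occluded, visible=None):
    if visible is None:
        visible = []
    name, depth, children = node
    if occluded(depth):
        return visible          # portal covered: prune the whole branch
    visible.append(name)        # cell survives the test and is rendered
    for child in children:
        visible_cells(child, occluded, visible)
    return visible
```

For a tree shaped like the one in Figures 12A-12E, `("I", 0, [("F", 1, [("B", 2, [("A", 3, [])]), ("E", 2, [])])])`, occluding every portal deeper than 1 yields only cells I and F, matching the figures.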
Figure 11A is a pictorial representation of an organ being explored by guided navigation, which must be displayed to an operator. Organ 1101 shows two side walls 1102 and an object 1105 in the center of the path. The organ has been divided into four cells: A 1151, B 1153, C 1155 and D 1157. Camera 1103 faces cell D 1157 and has a field of vision defined by vision vectors 1107, 1108, which identify a conical field. The cells that can potentially be viewed are B 1153, C 1155 and D 1157. Cell C 1155 is completely surrounded by cell B and therefore constitutes a double node.
Figure 11B is a representation of a tree diagram built from the cells in Figure 11A. Node A 1109, which contains the camera, is at the root of the tree. A sight line or sight cone, an unblocked path, is drawn to node B 1110. Node B has direct sight lines to both node C 1112 and node D 1114, which are indicated by the connecting arrows. The sight line of node C 1112 in the viewing direction of the camera combines with node B 1110. Node C 1112 and node B 1110 are therefore merged into one large node B' 1122, as shown in Figure 11C.
Figure 11C shows node A 1109, which contains the camera and is adjacent to node B' 1122 (containing both node B and node C) and to node D 1114. Nodes A, B' and D will be shown at least partially to the operator.
Figures 12A-12E exemplify the use of the modified Z buffer with cells that contain objects obstructing visibility. An object could be fecal matter within part of the virtual colon. Figure 12A shows a virtual space with 10 potential cells: A 1251, B 1253, C 1255, D 1257, E 1259, F 1261, G 1263, H 1265, I 1267 and J 1269. Some of these cells contain objects. If the camera 1201 is positioned in cell I 1267 and faces cell F 1261, as indicated by the vision vectors 1203, then a tree diagram is generated according to the technique exemplified by the flow chart in Figure 9. Figure 12B shows the generated tree diagram with the intersection nodes for the virtual representation shown in Figure 12A. Figure 12B shows cell I 1267 as the root node of the tree because it contains the camera 1201. Node I 1211 points to node F 1213 (indicated with an arrow), because cell F is directly joined to the sight line of the camera. Node F 1213 points to both node B 1215 and node E 1219. Node B 1215 points to node A 1217. Cell C 1255 is completely blocked from the sight line of the camera 1201, so it does not appear in the tree diagram.
Figure 12C shows the tree diagram after node I 1211 has been displayed on the monitor for the operator. Node I 1211 is then removed from the tree diagram because it has already been shown, and node F 1213 becomes the root. Figure 12D shows that node F 1213 is now rendered to join node I 1211. The next nodes in the tree connected by arrows are then checked to see whether they are covered (already processed). In this example, all the intersected nodes seen from the camera located in cell I 1267 have been covered, so it is not necessary to show node B 1215 (nor, therefore, its dependent node A) on the monitor.
Figure 12E shows node E 1219 being checked to determine whether its intersection has been covered. Since it has, the only nodes rendered in this example of Figures 12A-12E are nodes I and F, while nodes A, B and E are not visible and their cells need not be prepared for display. The modified Z-buffer technique described in Figure 9 requires fewer computations and can be applied to an object that has been represented by voxels or by other data elements, such as polygons.
Figure 13 shows a two-dimensional virtual view of a colon with a large polyp on one of its walls. Figure 13 shows a selected section of a patient's colon that is to be studied in greater detail. The view shows two colon walls 1301 and 1303, with the growth indicated as 1305. Layers 1307, 1309 and 1311 show inner layers of the growth. Ideally, a physician can peel away the layers of the polyp or tumor to see whether there is any cancerous or otherwise harmful material inside the mass. This process amounts to a virtual biopsy of the mass, without surgery. Once the colon is represented virtually by voxels, the process of peeling the layers off an object is easily performed in a manner similar to that described in conjunction with Figures 4 to 8. The mass can also be sliced so that a particular cross section can be studied. In Figure 13, a planar cut 1313 can be made so that a particular portion of the growth can be studied. Likewise, any user-defined cut 1319 can be made. The voxels 1319 can be peeled away or modified as explained below.
A transfer function can be applied to each voxel in the area of interest, making the object transparent, semi-transparent or opaque by modifying the coefficients that represent the translucency of each voxel. An opacity coefficient is assigned to each voxel based on its density. A mapping function then transforms the density value into a coefficient representing its translucency. A high-density voxel indicates either a wall or other dense matter, rather than open space. An operator or a program routine could then change the opacity coefficient of a voxel or group of voxels to make them appear transparent or semi-transparent to the submarine camera model. For example, an operator might view a tumor inside or outside an entire growth, or a transparent voxel could be made to appear as if not present for the display step of Figure 9. A composite of a section of the object can be created using a weighted average of the opacity coefficients of the voxels in that section.
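A minimal sketch of such a density-to-opacity transfer function follows; the density breakpoints are illustrative assumptions loosely modeled on the intensity clusters discussed later in the text, not values prescribed here.

```python
# Sketch of an opacity transfer function: map each voxel's density to an
# opacity coefficient in [0, 1], then composite a ray through a section.
# The breakpoints below are illustrative only.
def opacity(density, transparent_ranges=()):
    for lo, hi in transparent_ranges:
        if lo <= density <= hi:
            return 0.0          # material made "electronically" transparent
    if density < 140:
        return 0.0              # air-like interior: fully transparent
    if density < 900:
        return (density - 140) / (900 - 140) * 0.3  # semi-transparent
    return 1.0                  # wall or other dense matter: opaque

def composite(densities, transparent_ranges=()):
    # simple front-to-back alpha compositing of opacities along a ray
    accumulated, remaining = 0.0, 1.0
    for d in densities:
        a = opacity(d, transparent_ranges)
        accumulated += remaining * a
        remaining *= (1.0 - a)
    return accumulated
```

Passing, say, `transparent_ranges=[(2200, 3000)]` makes a known enhanced-stool density range vanish from the rendering, mirroring the electronic removal of tagged material described later in the text.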
If the physician wishes to see the various layers of a polyp to look for cancerous areas, this can be done by peeling off the outer layer of polyp 1305 to expose a first inner layer 1307. The first inner layer 1307 can in turn be peeled back to expose a second inner layer 1309, and the second inner layer to expose a third inner layer 1311, and so on. The physician could also slice polyp 1305 and view only those voxels that lie within a desired section. The slicing area can be completely user-defined.
The incorporation of an opacity coefficient can also be used in other ways to aid the exploration of a virtual system. If fecal matter is present whose density and other properties fall within a certain known range, it can be made transparent to the virtual camera by changing its opacity coefficient during the examination. This spares the patient from having to ingest a laxative before the procedure and makes the examination faster and easier. Depending on the application, other objects can be made to disappear in the same way. Additionally, some objects, such as polyps, could be enhanced electronically through the application of a contrast agent followed by the use of an appropriate transfer function.
Figure 14 shows a system for performing the virtual examination of an object, such as a human organ, using the techniques described in this specification. Patient 1401 lies on platform 1402 while the scanning device 1405 scans the area containing the organ or organs to be examined. The scanning device 1405 contains a scanning section 1403, which takes the images of the patient, and an electronics section 1406. The electronics section 1406 comprises an interface 1407, a central processing unit (CPU) 1409, a memory 1411 for temporarily storing the scan data, and a second interface 1413 for sending data to the virtual navigation platform. Interfaces 1407 and 1413 could be included in a single interface component, or could be the same component. The components in section 1406 are connected by conventional connectors.
In system 1400, the data from the scanning section 1403 of the device is transferred to section 1405 for processing and is stored in memory 1411. The central processing unit 1409 converts the scanned two-dimensional data into three-dimensional voxel data and stores the results in another section of memory 1411. Alternatively, the converted data could be sent directly to the interface unit 1413 for transfer to the virtual navigation terminal 1416. The conversion of the two-dimensional data could also take place at the virtual navigation terminal 1416 after transmission from interface 1413. In the preferred embodiment, the converted data is transmitted over carrier wave 1414 to the virtual navigation terminal 1416 so that an operator can perform the virtual examination. The data could also be transferred by other conventional means, such as storing it on a storage medium and physically transporting it to terminal 1416, or by using satellite transmission.
The scanned data need not be converted to its three-dimensional representation until the visualization engine requires it. This saves computational steps and memory storage space.
The virtual navigation terminal 1416 includes a monitor for viewing the virtual organ or other scanned image, an electronics section 1415 and an interface control 1419, such as a keyboard, mouse or space ball. The electronics section 1415 comprises an interface port 1421, a central processing unit 1423, other components 1427 needed to run the terminal, and a memory 1425. The components in terminal 1416 are connected by conventional connectors. The converted voxel data is received at interface port 1421 and stored in memory 1425. The central processing unit 1423 then assembles the three-dimensional voxels into a virtual representation and runs the submarine camera model, as described in Figures 2 and 3, to perform the virtual examination. As the submarine camera travels through the virtual organ, the visibility technique described in Figure 9 is used to compute only the areas visible from the virtual camera and to display them on monitor 1417. A graphics accelerator can also be used to generate the representations. The operator can use interface device 1419 to indicate which portion of the scanned body should be explored. The interface device 1419 can further be used to control and move the submarine camera as desired, as discussed in connection with Figure 2 and its accompanying description. The terminal electronics section 1415 can be a dedicated Cube-4 system, available from the Department of Computer Science at the State University of New York at Stony Brook.
The scanning device 1405 and terminal 1416, or parts thereof, can be part of the same unit. A single platform can be used to receive the scanned graphic data, connect it into three-dimensional voxels if necessary, and perform the guided navigation. An important feature of system 1400 is that the virtual organ can be examined at a later time without the patient being present. Moreover, the virtual examination can take place while the patient is being scanned. The scan data can also be sent to multiple terminals, which would allow more than one physician to view the inside of the organ simultaneously. Thus, a physician in New York could be viewing the same section of a patient's organ as a physician in California while the two discuss the case. Alternatively, the data could be viewed at different times; two or more physicians could each perform their own analysis of the same data in a difficult case. Multiple virtual navigation terminals could be used to view the same scan data. By reproducing the organ as a virtual organ with a discrete set of data, a number of benefits are obtained in areas such as accuracy, cost and flexible data handling.
The techniques described above can be further improved in virtual colonoscopy applications by using a technique to cleanse the colon electronically, employing modified bowel preparation operations followed by image segmentation operations, so that fluid and stool remaining in the colon during a computed tomography (CT) or MRI scan can be detected and removed from the virtual colonoscopy images. By using such techniques, the discomfort caused by physical washing of the colon is minimized or eliminated entirely.
Referring to Figure 15, the first step in electronically cleansing the colon is bowel preparation (step 1510). This takes place before the CT or MRI scan and is intended to create conditions in which the stool and fluid remaining in the colon present image properties wholly different from those of the gas-insufflated colon interior and of the colon wall. An exemplary bowel preparation operation includes ingesting three 250 cc doses of a 2.1% W/V barium sulfate suspension, such as that manufactured by E-Z-EM, Inc. of Westbury, New York, during the day before the CT or MRI scan. The three doses should be spread out over the course of the day and can be taken with each of three meals. The barium sulfate serves to enhance the image of any fecal matter remaining in the colon. In addition to the barium sulfate intake, fluid intake is preferably increased during the day before the scan. Cranberry juice is preferred because it increases bowel fluids, although water can also be ingested. To further enhance the image properties of the colonic fluid, 60 ml of a diatrizoate meglumine and diatrizoate sodium solution, manufactured under the brand MD-Gastroview by Mallinckrodt, Inc. of St. Louis, Missouri, can be taken on both the evening before and the morning of the scan. Sodium phosphate can also be added to the solution to liquefy the stool in the colon; in this way, the colonic fluid and residual stool are enhanced more uniformly.
The exemplary preliminary bowel preparation operation described above can make conventional colonic lavage protocols, which require ingesting a gallon of Golytely solution before a CT scan, unnecessary.
In order to minimize collapse of the colon, 1 ml of Glucagon, manufactured by Ely Lily and Company of Indianapolis, Indiana, can be administered by intravenous injection just prior to the CT scan. The colon can then be insufflated with approximately 1000 cc of compressed gas, such as CO2, or room air, introduced through a rectal tube. Once this is done, a conventional CT scan is performed to obtain data from the colon region (step 1520). For example, data can be acquired using a GE/CTi spiral scanner operating in helical mode with 5 mm collimation and a 1.5-2.0:1 pitch, reconstructed in 1 mm slices, with the pitch adjusted, as is customary, to the patient's height. A routine imaging protocol of 120 kVp and 200-280 mA can be used for this operation. The data can be acquired and reconstructed as 1 mm thick slice images with a matrix size of 512x512 pixels in the field of view, which varies from 34 to 40 cm depending on the patient's size. The number of such slices under these conditions usually varies from 300 to 450, depending on the patient's height. The image data set is converted into volume elements, or voxels (step 1530). Image segmentation can be performed in a number of ways. In one present method of image segmentation, a local neighbor technique is used to classify the voxels of the image data according to similar intensity values. In this method, each voxel of an acquired image is evaluated with respect to a group of neighboring voxels. The voxel of interest is referred to as the central voxel and has an associated intensity value. A classification indicator for each voxel is established by comparing the value of the central voxel with that of each of its neighbors. If a neighbor has the same value as the central voxel, the value of the classification indicator is incremented.
However, if the neighbor has a different value from the central voxel, the classification indicator for the central voxel is decremented. The central voxel is then classified into the category that yields the maximum indicator value, which indicates the most uniform neighborhood among the local neighbors. Each classification is indicative of a particular intensity range, which in turn represents one or more types of material in the image. The method can be further enhanced by employing a mixture probability function on the similarity classifications obtained. A second process of image segmentation is performed as two major operations: low-level processing and high-level feature extraction. During low-level processing, regions outside the body contour are eliminated from further processing, and the voxels within the body contour are roughly classified according to well-defined classes of intensity characteristics. For example, a CT scan of the abdominal region generates a data set that tends to exhibit a well-defined intensity distribution. The graph of Figure 16 exemplifies such an intensity distribution as a typical histogram with four well-defined peaks (1602, 1604, 1606 and 1608) that can be classified according to intensity thresholds.
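The local-neighbor classification described above can be sketched as follows, under one plausible reading of the indicator rule (the text does not fully pin down how the per-class indicators are kept): a neighbor equal in value to the central voxel raises the score of the central voxel's class, while a differing neighbor lowers the score of that neighbor's class.

```python
# Sketch of the local-neighbor classification indicator. "to_class" bins an
# intensity into a material class; the voxel is assigned the class with the
# highest resulting score, i.e. the most uniform local neighborhood.
def classify_voxel(center, neighbors, to_class):
    indicator = {to_class(center): 0}
    for v in neighbors:
        cls = to_class(v)
        indicator.setdefault(cls, 0)
        if v == center:
            indicator[cls] += 1   # similar neighbor: uniformity evidence
        else:
            indicator[cls] -= 1   # dissimilar neighbor: penalize its class
    return max(indicator, key=indicator.get)
```

With a two-class binning, a voxel whose neighborhood mostly matches its own intensity keeps its own class despite a few dissimilar neighbors.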
The voxels of the abdominal CT data set are roughly classified by intensity threshold into four clusters (step 1540). For example, Cluster 1 can include voxels with intensities below 140. This cluster generally corresponds to the lowest-density regions inside the gas-filled colon. Cluster 2 can include voxels with intensity values above 2200. These intensity values correspond to the enhanced stool and fluid within the colon, as well as to bone. Cluster 3 can include voxels with intensities in the range of 900 to 1080. This intensity range generally represents soft tissue, such as fat and muscle, which is unlikely to be associated with the colon. The remaining voxels can be grouped together as Cluster 4, which is likely to be associated with the colon wall (including the mucosa and partial-volume mixtures around the colon wall), as well as with lung tissue and soft bone.
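This rough classification can be sketched directly from the exemplary thresholds above:

```python
# Sketch of the four-cluster intensity thresholding (step 1540), using the
# exemplary thresholds given in the text.
def cluster(intensity):
    if intensity < 140:
        return 1   # gas-filled colon interior
    if intensity > 2200:
        return 2   # enhanced stool/fluid and bone
    if 900 <= intensity <= 1080:
        return 3   # soft tissue (fat, muscle)
    return 4       # colon wall, lung tissue, soft bone

def classify_volume(intensities):
    return [cluster(v) for v in intensities]
```

Only Clusters 2 and 4 are carried forward for substantial further processing, as the following paragraphs explain.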
Clusters 1 and 3 are not particularly valuable for identifying the colon wall and, therefore, are not subjected to substantial processing during the image segmentation procedures for virtual colonoscopy. The voxels associated with Cluster 2 are important for separating stool and fluid from the colon wall, so they receive further processing during the high-level feature extraction operations. Low-level processing is concentrated on the fourth cluster, which has the highest likelihood of corresponding to colon tissue (step 1550).
For each voxel in the fourth cluster, an intensity vector is generated using the voxel itself and its neighbors. The intensity vector provides an indication of the change in intensity in the immediate neighborhood of a given voxel. The number of neighboring voxels used to establish the intensity vector is not critical, but it involves a tradeoff between processing cost and accuracy. For example, a simple voxel intensity vector can be established with seven (7) voxels: the voxel of interest, its front and back neighbors, its left and right neighbors, and its top and bottom neighbors, all of which surround the voxel of interest on three mutually perpendicular axes. Figure 17 is a perspective view exemplifying a typical intensity vector in the form of a 25-voxel intensity vector model, which includes the selected voxel 1702 as well as its first-, second- and third-order neighbors. The selected voxel 1702 is the central point of this model and is called the fixed voxel. A planar slice of voxels, which includes 12 neighbors in the same plane as the fixed voxel, is called the fixed slice 1704. Adjacent to the fixed slice are two nearest slices 1706, with five voxels each. Next to the first nearest slices 1706 are the two second nearest slices 1708, each of which contains a single voxel. The collection of intensity vectors for every voxel in the fourth cluster is referred to as the local vector series.
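Collecting the simple 7-voxel intensity vectors can be sketched as follows; representing the volume as a dict keyed by coordinates, and defaulting missing border neighbors to the center value, are implementation assumptions.

```python
# Sketch of building the 7-voxel local intensity vectors for the voxels of
# Cluster 4: the voxel itself plus its 6 face neighbors along the three
# perpendicular axes. The volume is a dict mapping (x, y, z) -> intensity.
OFFSETS = [(0, 0, 0), (1, 0, 0), (-1, 0, 0),
           (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def local_vector(volume, pos):
    x, y, z = pos
    center = volume[pos]
    # neighbors missing at the volume border default to the center value
    return [volume.get((x + dx, y + dy, z + dz), center)
            for dx, dy, dz in OFFSETS]

def local_vector_series(volume, cluster4_positions):
    return {pos: local_vector(volume, pos) for pos in cluster4_positions}
```

The 25-voxel model of Figure 17 extends this by adding the second- and third-order neighbors to the offset list.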
Because the data series corresponding to an abdominal image generally includes more than 300 slice images, each with a matrix of 512 x 512 voxels, and each voxel has an associated 25-voxel local vector, it is advisable to apply feature analysis (step 1570) to the series of local vectors to reduce the computational burden. One such feature analysis is principal component analysis (PCA), which can be applied to the series of local vectors to determine the dimension of a series of feature vectors and an orthogonal transformation matrix for the voxels of Cluster 4. It has been discovered that the histogram (Figure 16) of tomographic image intensities tends to be fairly constant from patient to patient for a given scanner, with equivalent preparation and scanning parameters. Based on this observation, the orthogonal transformation matrix can be a predetermined matrix obtained from several series of training data acquired with the same scanner under similar conditions. From these data, a transformation matrix such as the Karhunen-Loève (K-L) transform can be obtained in a known manner. The transformation matrix is applied to the series of local vectors to obtain the series of feature vectors. Once in the feature-vector space, vector quantization techniques can be used to classify the series of feature vectors.
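A pure-Python sketch of the idea behind this dimensionality reduction, using power iteration on the sample covariance as a stand-in for the full K-L transform (the full transform would keep several basis vectors, not just the dominant one):

```python
import math

def principal_component(vectors, iters=200):
    """Estimate the dominant K-L basis vector (first principal component)
    of a set of equal-length vectors by power iteration on the sample
    covariance matrix."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[k] for v in vectors) / n for k in range(d)]
    centered = [[v[k] - mean[k] for k in range(d)] for v in vectors]
    # sample covariance matrix
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    w = [1.0] * d                      # power iteration
    for _ in range(iters):
        w = [sum(cov[i][j] * w[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        w = [x / norm for x in w]
    return w

def project(vectors, w):
    """First feature-vector coordinate: projection onto the K-L axis."""
    return [sum(vi * wi for vi, wi in zip(v, w)) for v in vectors]
```

In practice the 25-D local vectors would be projected onto the first few such axes, ordered by decreasing variance, to form the low-dimensional feature vectors.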
A self-adaptive analytical algorithm can be used to classify the feature vectors. In defining this algorithm, let {Xi ∈ R^4 : i = 1, 2, 3, ..., N} be the series of feature vectors, where N is the number of feature vectors, K denotes the maximum number of classes and T is a threshold that adapts to the data series. For each class, a representative element is generated by the algorithm.
Let ak be the representative element of class k, and nk the number of feature vectors in that class.
The algorithm can then be described as follows:
1. Set n1 = 1; a1 = X1; K = 1.
2. Obtain the class number K and the class parameters (ak, nk):
for (i = 1; i < N; i++)
  for (j = 1; j < K; j++) calculate dj = dist(Xi, aj); end
  index = arg min dj;
  if ((d_index < T) or (K = Kmax)), update the class parameters:
    a_index = (n_index * a_index + Xi) / (n_index + 1); n_index = n_index + 1;
  otherwise, generate a new class:
    a_(K+1) = Xi; n_(K+1) = 1; K = K + 1;
end
3. Label each feature vector with a class according to the nearest-neighbor rule:
for (i = 1; i < N; i++)
  for (j = 1; j < K; j++) calculate dj = dist(Xi, aj); end
  index = arg min dj;
  label voxel i with class index;
end
In this algorithm, dist(x, y) is the Euclidean distance between vector x and vector y, and arg min dj gives the integer j that yields the minimum value of dj.
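A runnable sketch of this self-adaptive clustering; the representative-update rule used here is the standard running mean of a class's members, an assumption where the source is illegible:

```python
import math

def dist(x, y):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def self_adaptive_classify(vectors, T, k_max=18):
    """Online clustering of feature vectors with similarity threshold T.

    Returns (reps, labels): one representative element per class and a
    class label for every input vector (nearest-neighbor relabeling pass).
    """
    reps, counts = [list(vectors[0])], [1]
    for x in vectors[1:]:
        ds = [dist(x, a) for a in reps]
        j = ds.index(min(ds))
        if ds[j] < T or len(reps) == k_max:
            # fold x into class j: running mean of its members
            n = counts[j]
            reps[j] = [(n * a + v) / (n + 1) for a, v in zip(reps[j], x)]
            counts[j] += 1
        else:
            reps.append(list(x))        # open a new class seeded at x
            counts.append(1)
    labels = [min(range(len(reps)), key=lambda j: dist(x, reps[j]))
              for x in vectors]
    return reps, labels
```

With a suitable T, well-separated feature vectors fall into distinct classes while near-duplicates merge into the same representative.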
The algorithm described above depends only on the parameters T and K. However, the value of K, which is related to the number of classes within each voxel cluster, is not critical and can be set to a constant value such as K = 18. T, the vector similarity threshold, greatly influences the classification results. If the chosen value of T is too large, only a single class will be generated. If the value of T is too small, the resulting classes will show undesirable redundancy. By setting T equal to the maximum component variation of the series of feature vectors, the maximum number of distinct classes is obtained.
As a result of this initial classification process, each voxel selected within the cluster is assigned to a class (step 1570). In this exemplary case of virtual colonoscopy, there are several classes within Cluster 4, so the next task is to determine which of them corresponds to the colon wall. The first coordinate of the feature vector, which is the one showing the greatest variation, reflects the information of the average of the local three-dimensional voxel intensities. The remaining coordinates of the feature vector contain the information of the directional intensity changes within the local neighborhood. Because the colon-wall voxels are generally very close to the gas voxels of Cluster 1, a threshold interval can be determined by choosing data samples of typical colon-wall intensities from a typical tomography data series, in order to roughly distinguish the candidate colon-wall voxels. A particular threshold value is chosen for each particular imaging protocol and device. This threshold interval can then be applied to all tomography data series (obtained from the same machine, using the same imaging protocol). If the first coordinate of the representative element lies within the threshold interval, the corresponding class is considered the colon-wall class and all voxels in that class are marked as colon-wall-like voxels.
Each colon-wall-like voxel is a candidate colon-wall voxel. There are three cases in which a voxel may not belong to the colon wall. The first case concerns voxels close to the stool/fluid inside the colon. The second case occurs when voxels lie in lung-tissue regions. The third case concerns mucosa voxels. Clearly, then, low-level classification carries a degree of classification uncertainty, and its causes vary. For example, a partial-volume effect, arising where voxels contain more than one material type (i.e., fluid and colon wall), leads to the first case of uncertainty. The second and third cases of uncertainty are due both to the partial-volume effect and to the low contrast of tomographic images. To resolve this uncertainty, additional information is required. Therefore, a high-level feature extraction procedure is used in the present method to further distinguish the candidate colon-wall voxels from other colon-wall-like voxels, based on a priori anatomical knowledge of tomographic images (step 1580).
A first step of the high-level feature extraction procedure may be to remove the lung-tissue regions from the results of the low-level classification. Figure 18A is a typical slice image that clearly shows lung region 1802. The lung region 1802 is usually identifiable as a generally contiguous three-dimensional volume bounded by colon-wall-like voxels, as shown in Figure 18B. Given this characteristic, the lung region can be identified using a region-growing strategy. The first step of this technique is to find a seed voxel within the region to be grown. The operator performing the tomography usually sets the imaging range so that the topmost slice of the tomography does not contain any colon voxels. Because the interior of the lung is filled with air, the low-level classification provides the seed simply by choosing an air voxel. Once the contour of the lung region of Figure 18B is determined, the lung volume can be removed from the image slice (Figure 18C). A next step in the high-level feature extraction may consist of separating the bone voxels from the enhanced stool/fluid voxels of Cluster 2. The bone-tissue voxels 1902 are usually located relatively far from the colon wall and outside the colon volume. Conversely, the residual stool 1906 and fluid 1904 are contained within the colon volume. A rough colon-wall volume is obtained by combining the a priori proximity information with the colon-wall information obtained from the low-level classification process. Any voxel separated from the colon wall by more than a predetermined number (e.g., 3) of voxel units, and lying outside the colon volume, is marked as bone and subsequently removed from the image. The remaining voxels in Cluster 2 can be assumed to represent stool and fluid within the colon volume (see Figures 19A-C).
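A minimal region-growing sketch, assuming a binary "air" mask derived from the low-level classification and a 6-connected neighborhood:

```python
from collections import deque

def region_grow(mask, seed):
    """Flood-fill the connected region of True voxels containing `seed`.

    mask: 3-D nested lists of booleans (True = air); seed: (z, y, x).
    Returns the set of voxel coordinates belonging to the grown region.
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    region, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                    and n not in region and mask[n[0]][n[1]][n[2]]):
                region.add(n)
                queue.append(n)
    return region
```

Starting from an air seed inside the lung, the grown region traces out the contiguous lung volume, which can then be subtracted from the slice.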
The voxels identified as stool 1906 and fluid 1904 within the colon volume can be removed from the image to generate a clean image of the colon lumen and wall. In general, there are two types of stool/fluid regions: small residual areas of stool 1906 attached to the colon wall, and large volumes of fluid 1904 that accumulate in basin-like colonic folds (see Figures 19A-C).
The regions of residual stool 1906 attached to the colon wall can be identified and removed because they lie inside the rough colon volume generated during the low-level classification process. The fluid 1904 in the colonic folds usually has a horizontal surface 1908 due to gravity. Above that surface there is always a gas region of very high contrast with respect to the fluid intensity, so the surface interface of the fluid regions is easy to mark.
Using a region-growing strategy, the contour of the stool regions 1906 attached to the colon wall can be delineated, and the portion remote from the colon-wall volume can be removed. Similarly, the contour of the fluid regions 1904 can be delineated. After removing the horizontal surfaces 1908, the colon-wall contour emerges and the clean colon wall is obtained.
Mucosa voxels are difficult to distinguish from colon-wall voxels. Although the three-dimensional processing described above can remove some mucosa voxels, it is difficult to remove them all. In optical colonoscopy, physicians inspect the colonic mucosa directly and look for lesions based on its color and texture. In virtual colonoscopy, most of the mucosa voxels on the colon wall can be left intact in order to preserve more information, which can be very useful for three-dimensional volume rendering. From the segmented colon-wall volume, the inner and outer surfaces of the colon, as well as the colon wall itself, can be extracted and visualized as virtual objects. This represents a clear advantage over conventional optical colonoscopy, because both the outer and the inner colon wall can be examined. Moreover, the colon wall and the colon lumen can be obtained separately from the segmentation.
Because the colon is substantially evacuated prior to imaging, collapse of the colon lumen in some segments is a common problem. Although insufflating the colon with a compressed gas, such as air or CO2, reduces the frequency of collapsed regions, it does not eliminate them completely. In performing a virtual colonoscopy, it is advisable to automatically extend the flight path through the collapsed regions and to use the scan data to at least partially recreate the colon lumen in those regions. Since the image segmentation methods described above yield both the inner and outer colon walls, this information can be used to improve the determination of the path through the collapsed regions.
The first step in extending the flight path through the collapsed regions of the colon, or in distending those regions, is to detect them. To detect the areas where the colon has collapsed, an entropy analysis can be used, based on the premise that the gray-scale values of the image data change more markedly outside the colon wall than within the colon wall itself and in other regions such as fat, muscle or other tissue.
The degree of change of the gray-scale value, for example along the longitudinal axis, can be expressed and measured as an entropy value. To calculate it, voxels on the outer surface of the colon wall are selected; these points are identified by the image segmentation techniques described above. A 5x5x5 cubic window, centered on the pixel of interest, is applied to the pixels. Before calculating the entropy value, a smaller (3x3x3) window can be applied to the pixels of interest in order to filter noise out of the image data. The entropy value of the window around a pixel can then be determined by the equation: E = -Σi C(i) ln C(i), where E is the entropy and C(i) is the number of points in the window with gray-scale value i (i = 0, 1, 2, ..., 255). The entropy values calculated for each window are then compared with a predetermined threshold value. For air regions, the entropy values will be much lower than for tissue regions. Therefore, a collapsed region is indicated along the longitudinal axis of the colon lumen wherever the entropy values rise and exceed the predetermined threshold. The exact threshold value is not critical and will depend in part on the imaging protocol and the particulars of the imaging device.
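A sketch of the window entropy measure as reconstructed above, E = -Σ C(i) ln C(i) with C(i) the gray-level counts in the window; the sign convention is inferred from the requirement that uniform air regions score lower than varied tissue regions:

```python
import math
from collections import Counter

def window_entropy(values):
    """Entropy E = -sum_i C(i) * ln C(i) over the gray levels in a window."""
    counts = Counter(values)
    return -sum(c * math.log(c) for c in counts.values())

# A uniform (air-like) 5x5x5 window scores much lower than a varied
# (tissue-like) one, which is what the collapse detector relies on.
air = [12] * 125                 # constant gray level
tissue = list(range(125))        # all gray levels distinct
assert window_entropy(air) < window_entropy(tissue)
```

Sliding this window along the centerline and thresholding the entropy flags the transition from open (air-filled) lumen to collapsed tissue.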
Once a collapsed region is detected, the previously determined flight path along the longitudinal axis can be extended by drilling a one-voxel-wide navigation line through the center of the collapsed segment.
In addition to automatically continuing the virtual camera's path through the colon lumen, the collapsed colon region can be virtually opened using a physical modeling technique to recover some of the properties of the collapsed region. In this technique, a model of the physical properties of the colon wall is created. From this model, the motion parameters, mass density, damping density, and stretching and bending coefficients for a Lagrange equation are estimated. Next, an expansive force model (i.e., a gas or fluid, such as air, blown into the colon) is formulated and applied in accordance with the elastic properties of the colon, as defined by the Lagrange equation, so that the image of the collapsed colon region recovers its natural shape.
To model the colon, a finite-element model can be applied to the collapsed or obstructed regions of the colon lumen. This can be done by sampling the elements on a regular grid, such as an 8-voxel brick, and then applying traditional volume rendering techniques. Another option is to apply an irregular volume representation, such as tetrahedra, to the collapsed regions. In applying the external force model (air insufflation) to the colon model, the magnitude of the external force is first determined so as to properly separate the collapsed colon-wall regions. A three-dimensional growth model can be used to trace the inner and outer surfaces of the colon wall in parallel. Each surface is marked from a starting point in the collapsed region to a growth point, and the force model is applied to distend the surfaces in a similar and natural way. The region between the inner and outer surfaces (i.e., the colon wall itself) is classified as a shared region. The external repulsive force model is applied to the shared regions to separate and distend the collapsed colon-wall segments in a natural manner.
To visualize more clearly the features of a virtual object, such as the colon, under virtual examination, it is useful to render the various textures of the object. These textures, which can be observed in the color images presented during optical colonoscopy, are usually lost in the black-and-white, gray-scale images provided by tomographic data. Therefore, a system and method for generating textured images during a virtual examination is required.
Figure 20 is a flow diagram showing the present method for generating virtual objects with texture.
The purpose of this method is to map the textures obtained from optical colonoscopy images in the red-green-blue color space, such as those of the Visible Human Project, onto the monochrome gray-scale image data used to generate the virtual objects. The optical colonoscopy images are acquired by conventional digital imaging techniques, such as the frame grabber 1429, which receives analog optical images from a camera (e.g., a video camera) and converts the images into digital data that can be sent to the CPU 1423 through an interface port 1431 (Figure 14). The first step in this process is to segment the tomographic image data (step 2010).
The image segmentation techniques described above can be applied to choose gray-scale image intensity thresholds so as to classify the tomographic image data into various tissue types, such as bone, colon-wall tissue, air, and the like.
In addition to segmenting the tomographic image data, the texture features of the optical image must be extracted from the optical image data (step 2020). To do this, a Gaussian filter can be applied to the optical image data, followed by sub-sampling to decompose the data into a multiresolution pyramid. A Laplacian filter and a steerable filter can also be applied to the pyramid to obtain oriented and non-oriented features of the data. While this method is effective at extracting and capturing texture features, it demands a large amount of memory and processing capacity. Another approach to extracting the texture features of the optical image is to use a wavelet transform. However, although wavelet transforms are generally computationally efficient, conventional wavelet transforms are limited in that they capture only features with orientations parallel to the axes and cannot be applied directly to a region of interest. To overcome these limitations, a non-separable filter can be used. For example, a lifting scheme can be used to build filter banks for a wavelet transform in any dimension, using a two-step prediction and update approach. Such filter banks can be synthesized by the Boor-Rom algorithm for multidimensional polynomial interpolation.
After the texture features have been extracted from the optical image data, models must be created to describe them (step 2030). This can be done, for example, using a multi-scale non-parametric statistical model based on estimating and manipulating the entropy of the non-Gaussian distributions attributable to natural textures. Once texture models have been generated from the optical image data, texture matching must be performed to relate these models to the segmented tomographic image data (step 2040). In regions of the tomographic data where the texture is continuous, the corresponding texture classes are easy to match. However, in the boundary regions between two or more textures the process is more complex. Segmentation of the tomographic data around a boundary region usually yields fuzzy results, i.e., the results reflect a percentage of texture from each material or tissue and vary with their respective weights. The weighting percentage can be used to set the importance of the matching criteria.
For the multi-scale non-parametric statistical model, a cross-entropy or Kullback-Leibler divergence algorithm can be used to measure the distribution of the different textures in a boundary region.
After texture matching, texture synthesis is applied to the tomographic image data (step 2050). This is done by fusing the textures from the optical image data into the tomographic image data. For isotropic texture patterns, such as those presented by bone, the texture can be sampled directly from the optical data and fused into the segmented tomographic data. For regions of anisotropic texture, such as the colonic mucosa, a multiresolution sampling procedure is preferred, in which selective re-sampling of homogeneous and heterogeneous regions is applied repeatedly.

Volumetric representation

In addition to the image segmentation and texture mapping procedures already described, volume rendering techniques can be used in virtual colonoscopy to improve the fidelity of the resulting image. Figure 21 illustrates a perspective ray-casting method that can be used for volume rendering in accordance with the present invention. From a given virtual viewpoint (e.g., the camera location) inside the colon lumen, a ray is cast through each pixel of the image plane (step 2100). For each ray, the first sampling point is set at the current image pixel along the ray (step 2110). The distance (d) from the current sampling point to the nearest colon wall is then determined (step 2120). The current distance (d) is compared with a predetermined sampling interval (i) (step 2130). If the distance (d) is greater than the sampling interval (i), no sample is taken, and the next sampling point is determined by jumping the distance d along the ray (step 2140). If the distance is less than or equal to the sampling interval (i), conventional sampling is performed at this point (step 2150) and the next sampling point is chosen according to the sampling interval (i) (step 2160).
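A minimal sketch of this space-leaping loop, assuming a precomputed distance field dist_to_wall(p) (such as the potential field used for camera control) and a sample(p) routine that performs the conventional interpolation and compositing:

```python
def march_ray(origin, direction, dist_to_wall, sample, interval, max_t):
    """Cast one ray, skipping empty space using the distance field.

    At each step, if the nearest wall is farther than the sampling
    interval, jump ahead by that distance (step 2140); otherwise take a
    regular sample (steps 2150-2160).  Returns the sampled parameter
    values t along the ray.
    """
    t, sampled = 0.0, []
    while t < max_t:
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = dist_to_wall(p)
        if d > interval:
            t += d                      # leap over open space
        else:
            sample(p)                   # conventional sampling
            sampled.append(t)
            t += interval               # advance by the sampling interval
    return sampled
```

With the wall five units away and a unit sampling interval, the loop takes a single leap and then samples only near the wall, instead of sampling at every interval along the ray.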
For example, a trilinear interpolation between the density values of the 8 neighboring voxels can be performed to determine the new density value at the sampling point.
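A sketch of this trilinear interpolation step, given the 8 corner densities of the unit cell containing the sample point:

```python
def trilinear(c, fx, fy, fz):
    """Interpolate among the 8 corner densities c[z][y][x] (each index
    0 or 1) at fractional offsets fx, fy, fz in [0, 1] within the cell."""
    def lerp(a, b, t):
        return a + (b - a) * t
    # interpolate along x, then y, then z
    x00 = lerp(c[0][0][0], c[0][0][1], fx)
    x10 = lerp(c[0][1][0], c[0][1][1], fx)
    x01 = lerp(c[1][0][0], c[1][0][1], fx)
    x11 = lerp(c[1][1][0], c[1][1][1], fx)
    y0 = lerp(x00, x10, fy)
    y1 = lerp(x01, x11, fy)
    return lerp(y0, y1, fz)
```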
The method of Figure 21 effectively accelerates ray casting because a space-leaping technique is used to skip quickly over the empty space between the image plane and the colon wall along each ray. In this method, the distance from a sample point to the nearest colon wall is determined along each ray, and if the distance exceeds the predetermined sampling interval (i), a jump to the next sampling point along the ray is made. Since the nearest-distance information is already available from the potential field used to control the virtual camera, no additional distance-coding calculations are needed. In this case, neither surface rendering nor Z-buffer transformation is required, which saves preprocessing time and memory space. Alternatively, a space-leaping method can derive the distance information for each ray from the Z-buffer of the corresponding surface rendering image. If both the surface rendering image and the volume rendering image are to be generated, this approach imposes minimal processing cost, since the Z-buffer information is provided as a by-product of the surface rendering. This form of space leaping therefore only requires an additional transformation of the depth values from image space to world space.
For regions along the ray where the distance (d) was traversed in step 2140, the region corresponds to open space and can be assigned a value according to an open-space transfer function; typically, open space contributes nothing to the final pixel value. At each sampling point, one or more defined transfer functions can be assigned to map different ranges of sample values of the original volume data to different colors, opacities and possibly other displayable parameters. For example, four independent transfer functions have been used to identify different materials by mapping ranges of tomographic density values to specified colors (red, green and blue) and opacity, each in the range 0 to 255.
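A sketch of such a transfer-function lookup; the density break-points below are hypothetical illustrations (loosely echoing the cluster thresholds earlier in the text), not calibrated values:

```python
def transfer_function(density):
    """Map a tomographic density value to (red, green, blue, opacity),
    each component in 0..255.  The break-points below are illustrative."""
    table = [
        (140,  (0,   0,   0,   0)),    # air: fully transparent
        (900,  (128, 64,  32,  40)),   # low-density tissue, faint
        (1080, (255, 128, 96,  255)),  # colon wall: opaque
        (2200, (200, 200, 160, 90)),   # denser soft tissue
    ]
    for upper, rgba in table:
        if density < upper:
            return rgba
    return (255, 255, 255, 255)        # bone and enhanced fluid
```

During compositing, each sample's color is weighted by its opacity; because air maps to zero opacity, the skipped open-space regions indeed contribute nothing to the pixel.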
Virtual biopsy

The techniques described above can also form the basis of a system for performing a flexible, non-invasive virtual electronic biopsy of a region under study. As noted above, volume rendering uses one or more defined transfer functions to map different ranges of sample values of the original volume data to different colors, opacities and other displayable parameters for navigation and visualization. During navigation, the chosen transfer function generally assigns maximum opacity to the colon wall so that its outer surface is easily seen. Once a suspicious area is detected during the virtual examination, the physician can interactively change the transfer function assigned during volume rendering so that the outer surface being observed becomes substantially transparent. This allows the region's interior information to be composited and, therefore, its internal structure to be visualized. Using a predetermined number of transfer functions, the suspicious area can be displayed at various depths and with varying degrees of opacity.
Detection of polyps

The present system and methods can be used to detect polyps automatically. Referring to Figure 12, polyps 1305, which form, for example, inside the colon, usually take the shape of small convex, hill-like structures extending from the colon wall 1301. This geometry is distinct from that of the folds of the colon wall. Therefore, a differential geometry model can be used to detect such polyps on the colon wall.
The surface of the colon lumen can be represented as a continuously twice-differentiable surface in three-dimensional Euclidean space, using, for example, a C-2 smoothness surface model, as described in Modern Geometry Methods and Applications, by B. A. Dubrovin et al., published by Springer-Verlag in 1994, which is incorporated herein by reference. In this model, each voxel on the colon surface has an intrinsic geometric feature with a Gaussian curvature, referred to as a Gaussian curvature field. A convex bump on the surface, a possible sign of a polyp, has a unique local feature in the Gaussian curvature field. Therefore, polyps can be detected by searching the Gaussian curvature field for these specific local features. Once detected, the suspected polyps can be highlighted for the physician to observe and measure, and any of the virtual biopsy methods described above can be used to investigate the suspect region further.
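A sketch of how the Gaussian curvature could be evaluated where the local surface is expressed as a height field z = f(x, y) over its tangent plane, using K = (f_xx·f_yy - f_xy²) / (1 + f_x² + f_y²)²; the height-field form and the finite-difference stencil are illustrative choices, not the patent's exact formulation:

```python
def gaussian_curvature(f, x, y, h=1e-3):
    """Gaussian curvature of the surface z = f(x, y) by finite differences."""
    fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy  = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return (fxx * fyy - fxy ** 2) / (1 + fx ** 2 + fy ** 2) ** 2

# A convex bump (polyp-like) has positive K at its apex; a fold (ridge),
# curved in only one direction, has K near zero.
bump = lambda x, y: -(x * x + y * y)      # paraboloid cap
fold = lambda x, y: -(x * x)              # cylindrical ridge
```

This captures why curvature distinguishes polyps from folds: a fold is curved in only one principal direction, so its Gaussian curvature (the product of the principal curvatures) vanishes, while a convex bump is curved in both.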
Obtaining the central path

In virtual colonoscopy, determining an appropriate navigation line, or flight path, through the colon lumen is an important aspect of the described systems and methods. While certain techniques for determining the flight path of the virtual camera model were discussed in relation to Figures 4-8, Figure 22 illustrates another method for obtaining the central path through the colon lumen. Once the colon wall has been identified, for example by the image segmentation methods described herein, a volume shrinking algorithm can be used to emphasize the trend of the colon lumen and to reduce subsequent searching time within the lumen volume (step 2310).
Figure 23 illustrates the steps of an exemplary volume shrinking algorithm based on a multiresolution analysis model. In this procedure, the three-dimensional volume is represented by a stack of binary images of the same matrix size (step 2310); collectively, these images form a binary data series. A discrete wavelet transform can be applied to the binary data series, producing a number of sub-data series that represent different time-frequency components of the binary data series (step 2320). For example, the discrete wavelet transform may yield eight (8) sub-data series. The sub-data series are compared with predetermined threshold values so that the lowest-frequency component is identified (step 2330). This component forms the binary data series for the next discrete wavelet transform and thresholding steps, which are applied recursively in a multiresolution structure (step 2340). In virtual colonoscopy, the discrete wavelet transform and the associated thresholding can be applied recursively three times to the successive sub-series representing the lowest-frequency component (a three-level multiresolution decomposition).
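A minimal sketch of one shrinking level, assuming a Haar-style low-pass (the lowest-frequency of the eight wavelet sub-series is then simply the 2x2x2 block average) followed by thresholding back to binary:

```python
def shrink_level(volume, threshold=0.5):
    """Halve a binary volume along each axis: average each 2x2x2 block
    (the low-low-low wavelet component) and re-binarize by threshold."""
    nz, ny, nx = len(volume) // 2, len(volume[0]) // 2, len(volume[0][0]) // 2
    out = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                block = [volume[2 * z + dz][2 * y + dy][2 * x + dx]
                         for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
                out[z][y][x] = 1 if sum(block) / 8.0 >= threshold else 0
    return out

def shrink(volume, levels=3):
    """Apply the shrinking recursively; a 3-level decomposition reduces
    each dimension by a factor of 8."""
    for _ in range(levels):
        volume = shrink_level(volume)
    return volume
```

The shrunken binary lumen retains the global trend of the colon while discarding the local folds, which is exactly what the subsequent minimum-distance path search needs.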
Returning to Figure 22, from the shrunken colon volume model, a distance-map technique can be used to obtain the minimum-distance path between the two ends of the colon (e.g., from rectum to cecum) (step 2215). The resulting path preserves the global trend information of the colon lumen but ignores the trends exhibited by the local folds. Control points within the whole colon can then be determined by mapping the minimum-distance path back onto the original data space (step 2220). For example, with a three-level multiresolution decomposition, the shrunken volume is a factor of eight smaller than the original volume in each dimension, and an affine transform, well known to those skilled in the art, can be used to map the shrunken volume model back exactly onto the scale of the original volume. The minimum-distance path of the shrunken volume can likewise be mapped back onto the original volume scale as a series of points usable as control points within the colon.
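A sketch of the minimum-distance path search on the shrunken binary lumen, here as an unweighted breadth-first search over 6-connected lumen voxels (a distance map with backtracking behaves equivalently for unit steps):

```python
from collections import deque

def min_distance_path(lumen, start, goal):
    """Shortest 6-connected path through True voxels from start to goal.

    lumen: 3-D nested lists of booleans; start/goal: (z, y, x) tuples.
    Returns the list of voxels on the path, or None if disconnected.
    """
    nz, ny, nx = len(lumen), len(lumen[0]), len(lumen[0][0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:          # backtrack to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        z, y, x = cur
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                    and n not in prev and lumen[n[0]][n[1]][n[2]]):
                prev[n] = cur
                queue.append(n)
    return None
```

Scaling each path voxel back up by the shrinking factor yields the control points in the original volume.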
The preferred flight path is one that lies on the central axis of the colon lumen. However, the initial control points may not lie exactly in the center of the colon lumen. Therefore, the initial control points can be centered using a bisecting-plane algorithm (step 2230). For example, at each chosen control point, a bisecting plane can be defined as the plane normal to the trend direction that cuts across the colon lumen. A centering algorithm, such as a maximum-disk algorithm, can then be applied on each bisecting plane. Such an algorithm is discussed in "On the Generation of Skeletons from Discrete Euclidean Distance Maps" by Ge et al., IEEE Transactions on PAMI, vol. 18, pp. 1055-1066, 1996, which is incorporated herein by reference.
Once the control points are centered, the flight path can be determined by interpolating a line connecting those points (step 2240). In virtual colonoscopy, it is desirable for the interpolated flight path to take the form of a smooth curve substantially centered within the colon lumen. A constrained cubic B-spline interpolation algorithm based on the Serret-Frenet theorem of differential geometry can be used to establish a suitably smooth, curved path, such as that described in Numerical Recipes in C: The Art of Scientific Computing, by Press et al., 2nd edition, Cambridge University Press, 1992.
The pictorial representation of the segmented colon lumen in Figure 24 and the flow chart of Figure 25 illustrate another method for determining the flight path in accordance with the present invention. In this alternative method, the representation of the colon lumen 2400 is first partitioned into a number of segments 2402 a-g along the length of the lumen 2400 (step 2500). Within each segment 2402, a representative point 2404 a-g is selected (step 2520). Each representative point 2404 a-g is then centered with respect to the colon wall (step 2530), for example by using a physically-based deformable model in which the points are pushed toward the center of the respective segment. After the representative points are centered, they are joined sequentially to establish the longitudinal flight path for the virtual camera model (step 2540). If the segments are of sufficiently small length, the centered points can be connected with straight-line segments 2406 a-f. However, when linear curve-fitting techniques are applied to join the centered points, a smoother, continuous flight path is established.
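A simple sketch of this alternative, assuming each segment's representative point is centered by taking the centroid of the lumen voxels in that segment (a stand-in for the deformable-model centering), with the resulting points joined by straight segments:

```python
def segment_flight_path(lumen_voxels, n_segments):
    """Partition lumen voxels into slabs along z, center a representative
    point in each slab at its centroid, and return the ordered points.

    lumen_voxels: list of (z, y, x) coordinates inside the lumen.
    """
    zs = [v[0] for v in lumen_voxels]
    z_min, z_max = min(zs), max(zs)
    span = (z_max - z_min + 1) / n_segments
    path = []
    for s in range(n_segments):
        lo, hi = z_min + s * span, z_min + (s + 1) * span
        seg = [v for v in lumen_voxels if lo <= v[0] < hi]
        if seg:                               # centroid of this segment
            path.append(tuple(sum(c) / len(seg) for c in zip(*seg)))
    return path
```

With short segments the piecewise-linear connection is adequate; a curve fit through the same points yields the smoother continuous path described above.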
Each of the aforementioned methods can be implemented using a system as exemplified in Figure 14, with appropriate software controlling the operation of the CPU 1409 and the CPU 1423.
Another form of hardware, suitable for installation on a personal computer, is exemplified in Figure 26.
The system includes a processor 2600, which should preferably be a high-speed multitasking processor such as a Pentium III processor operating at a speed greater than 400 MHz. The processor 2600 is linked to a conventional bus structure 2620 which provides high-speed parallel data transfer. Also attached to the bus structure 2620 are the main memory 2630, a graphics board 2640 and a volume rendering board 2650. The graphics board 2640 should preferably be capable of texture mapping, such as the Diamond Viper v770 Ultra manufactured by Diamond Multimedia Systems. The volume rendering board 2650 may be the VolumePro board from Mitsubishi Electric, based on U.S. Patents No. 5,760,781 and 5,847,711, which are incorporated herein by reference. A display device 2645, such as a conventional SVGA or RGB monitor, is operatively linked to the graphics board 2640 to display the image data. A scanner interface board 2660 is also provided, which receives data from the imaging scanner, such as a CT or MRI scanner, and transmits them to the bus structure 2620. The scanner interface board may be an application-specific board for a particular imaging scanner or a general-purpose input/output card. The PC-based system will generally also include an input/output interface 2670 for coupling input/output devices 2680, such as a keyboard, a digital pointer (a mouse, for example) and the like, to the processor. Alternatively, the input/output interface can be connected to the processor via the bus structure 2620.
In the case of three-dimensional image generation, including texture synthesis and volume rendering, several data processing and handling operations are required. For large datasets, such as those representing the colon lumen and its surrounding area, this processing can be very time consuming and memory intensive. However, if the topology of Figure 26 is used according to the processing method outlined in the flow chart of Figure 27, such operations can be performed on a relatively inexpensive personal computer (PC). The imaging data are received by the processor 2600 and stored in the main memory 2630 via the scanner interface board 2660 and the bus structure 2620. These image data (pixels) are converted into a volume element (voxel) representation (step 2710). This representation, which is stored in the main memory 2630, is divided into slices, for example along a major volume axis or another axis of the region being imaged (step 2720). The volume slices are then transferred to the volume rendering board and temporarily stored in the volume rendering memory 2655 for the volume rendering operations (step 2730). The use of a locally resident volume rendering memory 2655 improves the speed of volume rendering, since no data need be exchanged over the bus 2620 while each slice of the total volume is rendered. Once the volume rendering of a slice is complete, the data are transferred back to the main memory 2630 or to the graphics board 2640 in a sequential buffer (step 2740). After all the slices of interest have been rendered, the contents of the sequential buffer are processed by the graphics board 2640 for display on the display unit 2645 (step 2750).
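The slab-by-slab loop of Figure 27 can be sketched as follows, with a stand-in `render_slab` callable playing the role of the volume rendering board, and a front-to-back "over" operator compositing the partial (color, alpha) images. All function names and the premultiplied-color convention are illustrative assumptions.

```python
def over(front, back):
    """Front-to-back 'over' compositing of two (color, alpha) images,
    assuming premultiplied colors: what the front layer lets through
    is weighted by its remaining transparency (1 - alpha)."""
    return [(fc + (1 - fa) * bc, fa + (1 - fa) * ba)
            for (fc, fa), (bc, ba) in zip(front, back)]

def render_by_slices(volume, render_slab, n_slabs):
    """Sketch of the Figure 27 pipeline: the voxel volume is cut into
    slabs along one axis, each slab is rendered separately (so only one
    slab at a time occupies the rendering memory), and the partial
    images are composited into the final frame."""
    depth = len(volume)
    step = max(1, -(-depth // n_slabs))   # ceiling division
    frame = None
    for start in range(0, depth, step):
        partial = render_slab(volume[start:start + step])
        frame = partial if frame is None else over(frame, partial)
    return frame
```

The point of the decomposition is that each slab fits in the board's local memory, so the bus is used only once per slab instead of once per sample.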
The foregoing merely exemplifies the principles of the invention. Those skilled in the art will therefore be able to devise various systems, apparatus and methods which, although not explicitly shown or described herein, embody the principles of the invention and thus fall within its spirit and scope as defined in the claims. For example, the methods and systems described herein could be applied to the virtual examination of an animal, fish or inanimate object. Beyond the stated uses in the field of medicine, applications of the technique could be used to inspect the contents of sealed objects that cannot be opened. The technique could also be used inside an architectural structure (a building or a cave, for example) to allow the operator to navigate through it.

Claims (16)

  1. A method for generating a route through a colon lumen, partially defined by a colon wall, for a virtual colonoscopy, comprising: reducing the volume from the wall of the virtual colon lumen to generate a reduced colon lumen dataset; generating a minimum distance path between the ends of the virtual colon lumen from the reduced colon lumen dataset; obtaining control points along the minimum distance path along the length of the virtual colon lumen; centering the control points within the virtual colon lumen; and interpolating a line between the centered control points to define the final navigation route.
  2. The method for generating a route according to claim 1, wherein the step of reducing the volume comprises the steps of: representing the lumen of the colon as a plurality of image data; applying a discrete wavelet transform to the image data to generate a plurality of sub-datasets with components at a plurality of frequencies; and selecting the lowest-frequency components from the sub-datasets.
  3. A method for generating a route through a colon lumen during a virtual colonoscopy, comprising: dividing the lumen of the virtual colon into a series of segments; choosing a point within each segment; centering the points with respect to the wall of the virtual colon lumen; and joining the centered points to create the route.
  4. A method for examining a virtual colon lumen, the method comprising: choosing an observation point within the lumen of the colon; casting rays from the observation point through each pixel of the image; determining the distance from the observation point to the wall of the colon along each ray; if the distance is greater than a sampling interval, jumping along the ray by an amount equivalent to the distance and assigning values based on an open-space transfer function to the points along the ray over that distance; and if the distance is not greater than the sampling interval, sampling the pixel values and determining a value based on the transfer function.
  5. A method for performing a virtual biopsy of a region within a virtual colon, comprising: assigning an initial transfer function to the region for navigating the colon; volume rendering using the initial transfer function; observing the region; dynamically altering the transfer function to selectively modify the opacity of the region under observation; and volume rendering using the altered transfer function.
  6. A method for detecting polyps located on the walls of a virtual colon, represented by a plurality of volumetric units, comprising: representing the surface of the colon walls as a twice differentiable surface in which each volumetric unit of the surface has a Gaussian curvature; searching the Gaussian curvatures for local features; and classifying the local features corresponding to hill-like convex protrusions on the surface of the colon wall as polyps.
  7. The method according to claim 6, further comprising the step of performing a virtual biopsy on areas of the colon classified as polyps.
  8. The method according to claim 7, wherein the step of performing a virtual biopsy comprises: assigning an initial transfer function to the region for navigating the colon; volume rendering using said transfer function; observing the region; dynamically altering the transfer function to selectively modify the opacity of the region under observation; and volume rendering using the altered transfer function.
  9. A method for performing a virtual colonoscopy, comprising: obtaining a series of image data of a region including the colon; converting such data into volumetric units; representing the lumen of the colon as a plurality of volumetric units; identifying those volumetric units that represent a wall of the colon lumen; creating a route for navigating through said colon lumen; applying at least one transfer function to map colors and opacity coefficients onto the wall of the colon lumen; and displaying the colon lumen along the route according to the assigned transfer functions.
  10. The method for performing a virtual colonoscopy according to claim 9, wherein said route generating step comprises: reducing the volume from the wall of the virtual colon lumen; generating a minimum distance path between the ends of the virtual colon lumen; extracting control points along the length of the virtual colon lumen; centering the control points within the lumen of the colon; and interpolating a line that connects the centered control points.
  11. The method of claim 10, wherein the volume reduction step comprises the steps of: representing the lumen of the colon as a series of image data; applying a discrete wavelet transform to the image data to generate a plurality of sub-datasets; and selecting the lowest-frequency components of the sub-datasets.
  12. The method for performing a virtual colonoscopy according to claim 9, wherein said route generating step comprises: dividing the lumen of the virtual colon into a plurality of segments; choosing a point within each segment; centering each of the points with respect to the wall of the virtual colon lumen; and joining the centered points to create the route.
  13. The method for performing a virtual colonoscopy according to claim 9, wherein said step of applying at least one transfer function comprises: choosing an observation point within the lumen of the colon; casting rays from the observation point through each pixel of the image; determining the distance from the observation point to the wall of the colon along each ray; if the distance is greater than a sampling interval, jumping along the ray by an amount equivalent to the distance and assigning an open-space transfer function to the points along the ray over that distance; and if the distance is not greater than the sampling interval, assigning a transfer function based on a sample of the pixel value.
  14. The method for performing a virtual colonoscopy according to claim 9, further comprising the step of dynamically altering the transfer function of at least a portion of the colon lumen to selectively modify the opacity of the region under observation.
  15. A system for three-dimensional imaging, navigation and examination of a region, comprising: an imaging scanner for obtaining image data; a processor, said processor converting the image data into a plurality of volumetric elements forming a volumetric element dataset, the processor performing the steps of: identifying the volumetric units that represent the wall of the colon lumen; creating a route for navigating through the lumen of the colon; and applying at least one transfer function to map colors and opacities onto the wall of the colon lumen; and a monitor operatively linked to the processor for displaying a representation of the region according to the route and the at least one transfer function.
  16. A computer system for a virtual examination, comprising: a bus structure; a scanner interface board, said scanner interface board being attached to said bus structure and providing data from the imaging scanner to the bus; a main memory attached to the bus; a volume rendering board with a locally resident volume rendering memory, said volume rendering board receiving at least a part of the data from the imaging scanner and storing said data in the volume rendering memory during a volume rendering operation; a graphics board attached to the bus structure; a display monitor attached to the graphics board; and a processor, said processor being operatively linked to the bus structure and responsive to the data coming from the imaging scanner, said processor converting the data from the imaging scanner into a volumetric element representation, storing the volumetric element representation in the main memory, dividing the volumetric element representation into slices and transferring the slices to the volume rendering board.
MXPA/A/2001/009387A 1999-03-18 2001-09-18 System and method for performing a three-dimensional virtual examination, navigation and visualization MXPA01009387A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US60/125,041 1999-03-18
US09343012 1999-06-29
US09493559 2000-01-28

Publications (1)

Publication Number Publication Date
MXPA01009387A true MXPA01009387A (en) 2002-06-05


Similar Documents

Publication Publication Date Title
US6514082B2 (en) System and method for performing a three-dimensional examination with collapse correction
US6343936B1 (en) System and method for performing a three-dimensional virtual examination, navigation and visualization
US7194117B2 (en) System and method for performing a three-dimensional virtual examination of objects, such as internal organs
US7477768B2 (en) System and method for performing a three-dimensional virtual examination of objects, such as internal organs
US5971767A (en) System and method for performing a three-dimensional virtual examination
IL178768A (en) System and method for mapping optical texture properties from at least one optical image to an acquired monochrome data set
MXPA01009387A (en) System and method for performing a three-dimensional virtual examination, navigation and visualization
MXPA01009388A (en) System and method for performing a three-dimensional virtual segmentation and examination
MXPA99002340A (en) System and method for performing a three-dimensional virtual examination