WO2009040497A1 - Image enhancement method - Google Patents

Image enhancement method

Info

Publication number
WO2009040497A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
mask
generating
processing
Prior art date
Application number
PCT/GB2008/002892
Other languages
French (fr)
Inventor
Thomas Edward Marchant
Christopher John Moore
Original Assignee
Christie Hospital Nhs Foundation Trust
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Christie Hospital Nhs Foundation Trust
Publication of WO2009040497A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G06T5/30 - Erosion or dilatation, e.g. thinning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 - Registration of image sequences
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30008 - Bone
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30092 - Stomach; Gastric

Definitions

  • the present invention relates to a method of enhancing image data. More particularly, the invention relates to a method of generating enhanced image data by processing first and second image data.
  • the enhanced image data may be particularly appropriate for visual interpretation.
  • Imaging methods are known in the art. Many such imaging methods find applications in medical imaging in which images of a patient or part of a patient are generated. Such imaging techniques are clinically useful in that they allow noninvasive investigation of a patient, therefore allowing appropriate diagnoses to be made.
  • CT: computed tomography
  • a CT image is obtained by acquiring measurements of a patient at a plurality of points along a longitudinal axis. Highly collimated X-ray fan beams are emitted perpendicular to a point on the longitudinal axis, through the patient, and attenuation of each fan beam is measured. The resulting measurements are tomographically reconstructed into a two dimensional slice depicting and physically characterising patient anatomy at a given longitudinal point, according to methods known in the art.
  • a three-dimensional volume of image data can be displayed as a plurality of two-dimensional slices taken at different longitudinal points.
  • Each image element (e.g. a pixel or voxel) in the image data represents the radiodensity, measured in Hounsfield units, of a point on a plane perpendicular to the longitudinal axis of the patient.
  • the size, or resolution, of an image element is given by the lateral and vertical distances between a data point corresponding to the image element and data points corresponding to the nearest lateral and vertical image elements in the image, as well as the longitudinal thickness of the slice.
  • the data points reconstructed from the measurements are sampled on a regular grid and, therefore, all image elements in the image have the same resolution. Longitudinally the resolution depends on the slice thickness that has been selected.
  • a further known imaging method is Cone Beam Computed Tomography (CBCT) imaging.
  • CBCT: Cone Beam Computed Tomography
  • This imaging method can be used as an alternative to fan beam CT imaging of the type described above.
  • By emitting a less collimated, i.e. cone shaped, X-ray beam and measuring the attenuation of the beam after it passes through the patient, an image may be constructed, from the measured attenuation values, for each of a plurality of points along the longitudinal axis of the patient.
  • measured radiodensity values produced by CBCT imaging are subject to increased error as compared with values produced by fan beam CT imaging.
  • CBCT imaging is beneficial in some applications as CBCT image data can be obtained more easily than CT image data.
  • by increasing the cone beam angle CBCT image data can be obtained without having to move a patient from one longitudinal point to the next. Avoiding the need to move a patient from one position to another to allow the generation of image data is considered advantageous in some applications.
  • the lower quality of image data obtained using CBCT imaging is however disadvantageous.
  • According to the present invention there is provided a method for generating enhanced image data, the method comprising: receiving first image data and second image data; generating data indicating a relationship between said first image data and said second image data; and applying said generated data to said second image data to generate enhanced image data.
  • the relationship may be an arithmetic relationship.
  • relatively high quality first image data can be processed together with relatively low quality second image data so as to improve the quality of the second image data.
  • Each of the first image data and the second image data may comprise a respective plurality of image elements, which can conveniently be pixels or voxels.
  • Generating data indicating a relationship between said first image data and said second image data may comprise processing each of said plurality of image elements in said first image data together with a respective image element in said second image data. That is, pixel-wise or voxel-wise processing may be carried out. Such processing may comprise dividing a value of each image element in said second image data by a value of a respective image element in said first image data to generate third image data.
  • the third image data is referred to herein as a shading map.
  • a smoothing function may be applied to said third image data.
  • the method may further comprise processing one or both of said first and second image data to generate processed first or second image data respectively.
  • Generating data indicating a relationship between said first image data and said second image data may then comprise generating data indicating a relationship between said processed first image data and said processed second image data.
  • Processing at least one of said first and second image data may comprise generating a mask indicating regions of said processed image data representing particular structures.
  • the mask may be a binary mask.
  • Generating the mask may comprise applying a threshold to values of image elements in the processed image data, such that image elements having a value satisfying the threshold have a corresponding mask element having a first value, while image elements having a value not satisfying the threshold have a corresponding mask element having a second value.
  • the method may further comprise eroding areas of said mask representing a particular structure.
  • Each of the first and second image data may represent an image of a human or animal body.
  • the mask may indicate regions of said image data representing bone and/or gas and/or regions of said image data representing tissue.
  • Processing at least one of said first and second image data may further comprise applying the generated mask to at least one of the first and second image data to generate masked image data.
  • the method may further comprise processing said masked image data by generating values for image elements within masked regions of said image data from values for image elements within unmasked regions, for example using interpolation such as linear interpolation.
  • the method may further comprise appropriately pre-processing the first and second image data.
  • Such pre-processing may be arranged to allow the first and second image data to be properly processed alongside one another. Accordingly, values of image elements in one of the first and second image data may be modified based upon values of image elements in the other of the first and second image data, so as to arrange that both the first and second image data comprise image elements having comparable values.
  • the pre-processing may comprise registering said first and second image data with one another.
  • the pre-processing may comprise modifying the spatial resolution of at least one of said first and second image data such that each of said first and second image data have substantially equal spatial resolution.
  • There is also provided a method of generating output image data, the method comprising: generating first enhanced image data using a method substantially as described above, generating second enhanced image data using a method substantially as described above, and combining said first and second enhanced image data to generate output enhanced image data.
  • the first and second enhanced image data are each generated using the method described above, although the masks discussed above are created using differing thresholds so as to create different enhanced image data.
  • the first enhanced image data may be created by processing the first and second image data with reference to a mask differentiating between soft tissue on the one hand and bone and gas on the other.
  • the second enhanced image data may be created by processing the first and second image data with reference to a mask differentiating between soft tissue and bone on the one hand and gas on the other.
  • the combination of enhanced image data in this way typically produces higher quality output data.
  • Combining the first and second enhanced image data may comprise generating a mask from one of said first enhanced image data and said second enhanced image data, and combining said first and second enhanced image data in accordance with said mask.
  • Generating said mask may comprise applying a threshold to values of image element of said first or second enhanced image data, and optionally applying a morphological closing operation after application of said threshold.
  • the generated mask may identify a particular structure within the enhanced image data, for example the mask may identify bone.
  • the first image data may be obtained in any convenient way.
  • the first image data may be obtained using computed tomography.
  • the second image data may be obtained in any convenient way.
  • the second image data may be obtained using cone beam computed tomography.
  • the invention further provides a method for determining a treatment dose for a patient.
  • the method comprises processing first image data obtained at a first time to determine an initial treatment dose; and processing second image data obtained at a second later time together with said first image data to generate enhanced image data, and generating a modified treatment dose from said enhanced image data.
  • the second image data can be used to appropriately modify the treatment dose given its enhancement.
  • the treatment may be radiation therapy, intended, for example, to shrink or eradicate a tumour.
  • aspects of the present invention can be implemented in any convenient way including by way of suitable methods, apparatus and computer systems.
  • Some embodiments of the invention provide computer programs configured to carry out the methods set out above. Such computer programs can be carried on appropriate computer readable media. Such media can include tangible media such as CD-ROMS, flash memory devices, hard disk drives and so on, and also include intangible media such as communications signals.
  • Figure 1 is a flowchart providing an overview of operation of an embodiment of the invention
  • Figure 2 is a schematic illustration of operation of an embodiment of the present invention
  • FIG. 3 is a high level flowchart showing the operations carried out in the embodiment of the present invention shown in Figure 2;
  • Figure 4 is an image taken from a set of Computed Tomography (CT) image data
  • Figure 5 is an image taken from a set of Cone Beam Computed Tomography (CBCT) image data
  • FIG. 6 is a flowchart showing part of the processing of Figure 3 in further detail
  • Figure 7 is a graph showing the distribution of pixel values in CT image data in dashed line and CBCT image data in solid line
  • Figure 8 is a graph showing the distribution of pixel values in CT image data in dashed line and CBCT image data in solid line after part of the processing of Figure 3;
  • Figure 9 is a flowchart showing part of the processing of Figure 3 in further detail
  • Figure 10 is a bone mask created from the CT image data of Figure 4 by the processing of Figure 9;
  • Figure 11 is an image of the bone mask of Figure 10 after performance of an erosion operation
  • Figure 12 is an image showing application of the mask of Figure 11 to the image of Figure 4;
  • Figure 13 is an image showing the result of interpolation carried out on the image of Figure 12;
  • Figure 14 is a mask created from the CBCT image shown in Figure 5;
  • Figure 15 is an image showing the result of an erosion operation carried out on the mask of Figure 14;
  • Figure 16 is an image showing the result of application of the mask of Figure 15 to the image of Figure 5;
  • Figure 17 is an image showing the result of an interpolation operation carried out on the image of Figure 16;
  • Figure 18 is a shading map created using the images of Figures 13 and 17;
  • Figure 19 is an image showing the result of a smoothing operation carried out on the shading map of Figure 18;
  • Figure 20 is an image showing application of the smoothed shading map of Figure 19 to the image of Figure 5;
  • Figure 21 is a mask created from the image of Figure 4.
  • Figure 22 is an image showing the result of an erosion operation carried out on the mask of Figure 21;
  • Figure 23 is an image showing the result of application of the mask of Figure 22 to the image of Figure 4;
  • Figure 24 is an image showing the result of interpolation carried out on the image of Figure 23;
  • Figure 25 is a mask created from the image of Figure 5;
  • Figure 26 is an image showing the result of an erosion operation carried out on the mask of Figure 25;
  • Figure 27 is an image showing the application of the mask of Figure 26 to the image of Figure 5;
  • Figure 28 is an image showing the result of an interpolation operation carried out on the image of Figure 27;
  • Figure 29 is a shading map created from the images of Figures 24 and 28;
  • Figure 30 is a smoothed shading map created from the shading map of Figure 29;
  • Figure 31 is an image showing application of the shading map of Figure 30 to the image of Figure 5;
  • Figure 32 is a flowchart showing part of the processing of Figure 3 in further detail
  • Figure 33 is a mask created from the image of Figure 20;
  • Figure 34 is an image showing the result of a closing operation carried out on the mask of Figure 33;
  • Figure 35 is an image showing the result of filling the mask shown in Figure 34.
  • Figure 36 is an image showing combination of the images of Figures 20 and 31 using the bone mask of Figure 35.
  • CT: computed tomography
  • the CT image will, in general terms, be an image of a part of the patient relevant to a particular clinical procedure.
  • CT images are generally of a high quality but are relatively difficult to obtain, not least because their generation requires the use of expensive imaging equipment. It is therefore often the case that where a patient is to undergo treatment (such as radiation therapy treatment), a sequence of CT image slices is obtained initially before treatment begins (as shown at step S1 of Figure 1), but that it is impractical to regularly obtain CT image sequences as treatment progresses.
  • CBCT: cone beam computed tomography
  • Figure 3 is a flow chart showing the image processing process 3 at a high level.
  • At step S5, both the CT image data 1 and the CBCT image data 2 are appropriately pre-processed.
  • Two parallel streams of processing are then initiated.
  • a first stream comprises steps S6a to S8a, while a second stream comprises steps S6b to S8b.
  • each of the CT image data 1 and CBCT image data 2 is processed individually. In each case parts of the respective image data representing bone or gas are removed, before appropriate interpolation from adjacent parts of the image data is carried out to avoid discontinuities in the image data.
  • a shading map is created by dividing the CBCT image data output from step S6a by the CT image data output from step S6a. Suitable smoothing is carried out at step S7a.
  • At step S8a, the shading map created at step S7a is applied to the CBCT image data, as output from the pre-processing of step S5, to generate as output first enhanced CBCT image data C1.
  • The processing of steps S6b to S8b of Figure 3 is similar to that of steps S6a to S8a.
  • the processing carried out is such as to exclude only parts of each of the CT image data 1 and the CBCT image data 2 which represent air (i.e. not those which represent bone).
  • appropriate interpolation is also carried out at step S6b.
  • At step S7b, an appropriate shading map is created using the CT image data 1 and the CBCT image data 2 output from the processing of step S6b.
  • Appropriate smoothing is also carried out at step S7b. The smoothed shading map is then applied to the CBCT image data output from the pre-processing of step S5 at step S8b to generate as output second enhanced CBCT image data C2.
  • Two sets of enhanced CBCT image data are generated, one (C1) at step S8a and one (C2) at step S8b.
  • At step S9, the first enhanced CBCT image data C1 is further processed to generate a mask indicating parts of the first enhanced CBCT image data C1 representing bone.
  • At step S10, the mask created at step S9 is used to appropriately combine the first enhanced CBCT image data C1 and the second enhanced CBCT image data C2 to generate improved CBCT image data as the output data 4.
  • Both the CT image data 1 and the CBCT image data 2 are arranged in a plurality of slices, each slice comprising an array of voxels. That is, each of the CT image data 1 and the CBCT image data 2 comprises a volume of voxels arranged in a plurality of slices.
  • Figure 4 shows a slice of the CT image data 1
  • Figure 5 shows a corresponding slice of the CBCT image data 2. It can be seen that the CT image data 1 provides a higher quality image than the CBCT image data 2 which shows some artefacts.
  • FIG. 6 is a flowchart showing the pre-processing of step S5 of Figure 3 in further detail.
  • At step S11, voxel values of the CT image data 1 and the CBCT image data 2 are processed so as to arrange that the voxel values of each set of image data are similarly scaled.
  • Figure 7 shows voxel values of the CT image data 1 by way of a broken line.
  • Voxel values of the CBCT image data 2 are shown by way of a solid line. It can be seen that voxel values of the CT image data 1 define a peak 6 which represents voxels representing air, and a peak 7 which represents voxels representing tissue.
  • Voxel values of the CBCT image data 2 define a peak 8 which represents voxels representing air and a peak 9 which represents voxels representing tissue. It can be seen that the peaks of the CT image data 1 and the CBCT image data 2 are not coincident. In order to allow the CT image data 1 and the CBCT image data 2 to be processed together it is necessary to modify values of voxels of the CBCT image data 2. This is achieved by processing voxels in the CBCT image data 2 by multiplying values of those voxels by a determined scalar value, and adding a further scalar value to the result of the multiplication. Specifically:
  • p' = Ap + B
  • where p is the initial voxel value; p' is the modified voxel value; and A and B are scalar values chosen so as to allow the peaks defined by voxel values of the CBCT image data 2 to be made coincident with the peaks of the CT image data 1.
  • The values of A and B are iterated, and the iteration is continued until a minimum in the sum of squared differences is found.
  • The minimization is carried out using the downhill simplex method of Nelder and Mead, 1965, Computer Journal, Vol 7, pp 308-313.
  • the values A and B can be chosen by the user to give the best match as subjectively assessed by the user. This is sometimes necessary in cases where the automatic determination fails.
  • voxel values of the CBCT image data 2 have a distribution as shown in Figure 8. That is, while a broken line representing voxel values of the CT image data 1 again shows two peaks 6, 7 in positions which are the same as those of corresponding peaks in Figure 7, the solid line representing voxel values of the CBCT image data 2 shows two peaks 8', 9' which correspond to the peaks 8, 9 of Figure 7 but which have been moved so as to be coincident with voxel values represented by the peaks 6, 7.
  • The processing of step S11 described above is arranged to modify voxel values of the CBCT image data 2 so as to be within a similar range to those of the CT image data 1.
  • At step S12, the CT image data 1 and the CBCT image data 2 are registered together; that is, the CBCT image data 2 is spatially modified so as to be defined by a co-ordinate system common to the CT image data 1 and the CBCT image data 2. It will be appreciated that such registration is required so as to allow the CT image data 1 and the CBCT image data 2 to be processed together. More specifically, such registration allows respective points of the CT image data 1 and the CBCT image data 2 to be compared.
  • The registration process of step S12 is carried out using a chamfer matching algorithm which is described in van Herk M, Kooy HM, "Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching", Medical Physics 1994;21(7):1163-78, the contents of which are herein incorporated by reference.
  • Van Herk et al describe a chamfer matching algorithm in the context of medical images. Van Herk et al compare a number of different ways to implement chamfer matching for medical images. Methods for matching CT images are described, as are methods for matching a CT image with an MRI image and a CT image with a SPECT image. It has been found that the described method for matching two CT images can be applied to match a CT image and a CBCT image.
  • the described method includes a step of reducing the number of points to speed up the calculation.
  • the number to which the feature points are reduced is treated as a variable that can be adjusted, and results are presented for different values.
  • Van Herk et al describe three different cost functions for the matching: rms distance, mean distance, and maximum distance. A preferred embodiment of the present invention uses mean distance as the cost function. Van Herk et al describe two different optimisation methods: downhill simplex and Powell's method; a preferred embodiment of the present invention uses downhill simplex optimisation.
  • The chamfer matching algorithm registers images represented by the CT image data 1 and the CBCT image data 2 by reference to bone structures within the two images. Voxels representing bone edges are identified in each image and the generalised distance between corresponding voxels in the two images is minimised by an appropriate registration operation, which may comprise any suitable transformation such as a rotation and/or translation. A sketch of this approach is given below.
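For illustration only, the following Python sketch shows the chamfer-matching idea using SciPy, restricted to translation and assuming the bone-edge voxels have already been extracted as boolean volumes. The function name is hypothetical, and a fuller implementation would also search over rotations, as noted above.

```python
import numpy as np
from scipy import ndimage, optimize

def chamfer_register(ct_edges, cbct_edges):
    """Translation-only chamfer matching of CBCT bone edges onto CT bone edges.

    ct_edges and cbct_edges are boolean volumes marking bone-edge voxels.
    Returns the (z, y, x) shift minimising the mean edge-to-edge distance.
    """
    # Distance transform: each voxel holds its distance to the nearest CT edge voxel.
    dist = ndimage.distance_transform_edt(~ct_edges)
    pts = np.argwhere(cbct_edges).astype(float)

    def mean_distance(shift):
        # Sample the CT distance map at the shifted CBCT edge positions.
        moved = pts + shift
        return ndimage.map_coordinates(dist, moved.T, order=1, mode='nearest').mean()

    # Downhill simplex (Nelder-Mead) optimisation of the mean-distance cost,
    # matching the preferred choices noted in the text.
    return optimize.minimize(mean_distance, np.zeros(3), method='Nelder-Mead').x
```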
  • an operation is carried out to ensure that the CT image data 1 and the CBCT image data 2 are of equal spatial resolution.
  • the CT image data 1 will be defined by a plurality of voxels of typical size 0.95mm x 0.95mm x 5mm in the lateral, vertical and longitudinal directions respectively.
  • The CBCT image data 2 will be defined by a plurality of voxels of typical size 1mm x 1mm x 1mm. It can therefore be appreciated that the CBCT image data 2 is of different spatial resolution than the CT image data 1. Accordingly, the CT image data 1 is processed so as to change its spatial resolution by interpolation in each of the three dimensions, as sketched below.
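A minimal sketch of this resampling, assuming the typical voxel sizes quoted above; the function name is illustrative, and since the text does not fix the interpolation order, trilinear interpolation is used here.

```python
from scipy import ndimage

def match_resolution(ct, ct_spacing=(5.0, 0.95, 0.95), cbct_spacing=(1.0, 1.0, 1.0)):
    """Resample the CT volume so its voxel size matches the CBCT volume.

    Spacings are (longitudinal, vertical, lateral) in mm.
    """
    # A 5mm-thick CT slice must be split into five 1mm samples, so the zoom
    # factor in each dimension is ct_spacing / cbct_spacing.
    factors = [c / t for c, t in zip(ct_spacing, cbct_spacing)]
    return ndimage.zoom(ct.astype(float), factors, order=1)  # trilinear interpolation
```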
  • The processing of Figure 6 is therefore such as to arrange that each voxel of the CT image data 1 can be processed together with a corresponding voxel of the CBCT image data 2, the voxel values having been processed at step S11 so as to be comparable with one another.
  • CT image data 1' which is output from the pre-processing of step S5 is input to the processing of step S14, which generates a binary mask indicating regions of the image represented by the CT image data 1' which represent soft tissue, and regions which do not.
  • Voxels having values in the range 850 to 1150 are considered to represent soft tissue and such voxels are set to have a value of 1. All other voxels (i.e. those considered to represent air or bone) are set to have a value of 0.
  • Figure 10 shows the output of the processing of step S14 where the input is CT image data 1' as shown in Figure 4 after appropriate pre-processing. It can be seen that voxels representing bone or air are illustrated in black, while those representing tissue are shown in white.
  • This mask is further processed at step S15. Specifically, an erosion operation is carried out using a 5mm structuring element. Erosion operations will be known to those of ordinary skill in the art. In general terms, the 5mm structuring element is centred on each voxel of the mask in turn; if any voxel within the structuring element at a particular position has a value of 0 (i.e. is considered to represent air or bone), all voxels within the structuring element are set to have a value of 0. It can accordingly be appreciated that the effect of the erosion operation is to expand regions of the CT image data 1 which represent air and bone, and reduce regions of the CT image data 1 which represent soft tissue. The thresholding and erosion are sketched in code below.
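A minimal sketch of the thresholding and erosion, assuming isotropic voxels; the function name is illustrative, and the thresholds shown are those given in the text for the first processing stream (850-1150 for CT, 600-1350 for CBCT).

```python
import numpy as np
from scipy import ndimage

def tissue_mask(volume, lo, hi, voxel_mm=1.0, erode_mm=5.0):
    """Binary soft-tissue mask (steps S14/S18) followed by erosion (steps S15/S19).

    Voxels with values in [lo, hi] are set to 1, all others (air or bone) to 0,
    and the mask is then eroded with a cubic structuring element of side erode_mm.
    """
    mask = (volume >= lo) & (volume <= hi)
    side = max(1, int(round(erode_mm / voxel_mm)))
    # Erosion zeroes any tissue voxel whose neighbourhood touches air or bone,
    # expanding the excluded regions as described above.
    return ndimage.binary_erosion(mask, structure=np.ones((side, side, side), dtype=bool))

# Thresholds from the text for the first stream:
# ct_mask = tissue_mask(ct, 850, 1150)
# cbct_mask = tissue_mask(cbct, 600, 1350)
```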
  • the output of the erosion operation of step S15 when carried out on the mask of Figure 10 is shown in Figure 11.
  • The mask output from step S15 is then applied to the CT image data 1' at step S16, so as to remove from the CT image data 1' regions which do not represent soft tissue.
  • The output of step S16 when the mask of Figure 11 is applied to the CT image data of Figure 4 is shown in Figure 12.
  • An interpolation operation is carried out at step S17, generating an image as shown in Figure 13. Any appropriate interpolation can be used to generate voxel values for parts of the CT image data 1 which do not represent soft tissue. In a preferred embodiment of the invention linear interpolation is used for reasons of speed.
  • IDL: Interactive Data Language, a software package provided by ITT Visual Information Systems of Boulder, Colorado, USA.
  • the IDL package provides functions TRIANGULATE and TRIGRID which can conveniently be used to perform the necessary interpolation.
  • The TRIANGULATE function constructs a Delaunay triangulation of a planar set of points, and the TRIGRID function can then be used to carry out the required interpolation. An equivalent Python sketch is given below.
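A minimal per-slice counterpart using SciPy, assuming a 2D slice and its tissue mask; griddata performs Delaunay-based linear interpolation, standing in for IDL's TRIANGULATE/TRIGRID, and the nearest-value fallback outside the convex hull is an implementation detail not taken from the text.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_masked_slice(slice_data, mask):
    """Fill non-tissue voxels of a 2D slice with values linearly interpolated
    from the retained tissue voxels."""
    points = np.argwhere(mask)      # (row, col) coordinates of retained voxels
    values = slice_data[mask]
    rows, cols = np.mgrid[0:slice_data.shape[0], 0:slice_data.shape[1]]
    filled = griddata(points, values, (rows, cols), method='linear')
    # Points outside the convex hull of the tissue voxels come back as NaN;
    # fall back to the nearest retained value there.
    nearest = griddata(points, values, (rows, cols), method='nearest')
    return np.where(np.isnan(filled), nearest, filled)
```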
  • At step S18, an appropriate threshold is applied to voxels of the CBCT image data 2 shown in Figure 5, to generate a mask of the form shown in Figure 14.
  • the threshold applied is such that voxels having values between 600 and 1350 are considered to represent soft tissue and are set to a value of 1 while all other voxels are set to a value of 0.
  • a larger range of voxel values is used in connection with the CBCT image data 2' (as compared with the CT image data 1') due to greater intensity variations in the CBCT data 2'.
  • an erosion operation is carried out on the mask generated at step S18 and shown in Figure 14, generating a mask as shown in Figure 15.
  • The erosion operation uses a 5mm cube structuring element, and works analogously to the erosion operation of step S15.
  • the mask of Figure 15 is applied to the CBCT image data 2 shown in Figure 5 at step S20, resulting in the generation of an image as shown in Figure 16.
  • At step S21, the masked image data is interpolated as described with reference to step S17, resulting in the generation of an image as shown in Figure 17.
  • At step S22, the interpolated CBCT image data output from the interpolation operation of step S21 and shown in Figure 17 is divided by the CT image data output from the interpolation operation of step S17 and shown in Figure 13, to generate an image shown in Figure 18, which is referred to as a shading map.
  • This division is carried out by dividing pairs of voxels in turn, one voxel of each pair being taken from the interpolated CBCT data and the other voxel of each pair being taken from the interpolated CT data.
  • the shading map is smoothed at step S23, by the application of a 15mm boxcar average function, resulting in the generation of a smoothed shading map as shown in Figure 19. Smoothing using a boxcar average function essentially processes blocks of voxels in turn and replaces a voxel value for a voxel at an origin of a block with an average of all voxel values in the block.
  • The smoothed shading map output at step S23 is the output of step S7a of Figure 3.
  • At step S8a, the CBCT image data output from the processing of step S5 is divided by the smoothed shading map of Figure 19, resulting in the generation of first enhanced CBCT image data, as shown in Figure 20. These three steps are sketched in code below.
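A minimal sketch of steps S22, S23 and S8a, assuming isotropic 1mm voxels; the epsilon guard against division by zero and the function name are implementation details not taken from the text.

```python
import numpy as np
from scipy import ndimage

def apply_shading_correction(cbct_pre, ct_filled, cbct_filled, voxel_mm=1.0, box_mm=15.0):
    """Divide interpolated CBCT by interpolated CT to form the shading map,
    smooth it with a 15mm boxcar average, and divide the pre-processed CBCT
    data by the smoothed map to give the enhanced image."""
    eps = 1e-6  # guard against division by zero
    shading = cbct_filled / (ct_filled + eps)              # step S22
    side = max(1, int(round(box_mm / voxel_mm)))
    smoothed = ndimage.uniform_filter(shading, size=side)  # step S23 (boxcar average)
    return cbct_pre / (smoothed + eps)                     # step S8a
```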
  • The processing of steps S6b and S7b is equivalent to the processing of steps S6a and S7a, save that a different mask is created. That is, the processing of steps S6b and S7b is the same as that illustrated in Figure 9, subject to a modification to the thresholds applied at steps S14 and S18. Specifically, previously voxels representing soft tissue were set to a value of 1, while all other voxels (specifically those representing bone or air) were set to a value of 0. In the processing of steps S6b and S7b the thresholds applied at steps S14 and S18 are such as to exclude voxels representing air, but retain voxels representing bone.
  • voxels representing air are set to 0 while voxels representing soft tissue or bone are set to 1.
  • This will involve applying a threshold such that voxels in the CT data 1' having a value greater than 850 and voxels in the CBCT data 2' having a value greater than 600 are set to a value of 1, while all other voxels are set to a value of 0.
  • The mask generated at step S14 is that shown in Figure 21.
  • The erosion operation of step S15 generates the mask shown in Figure 22, which is applied to the CT image data 1' (as output from the pre-processing of step S5) at step S16 to generate image data as shown in Figure 23.
  • Interpolation carried out at step S17 generates an image as shown in Figure 24.
  • At step S18, the mask shown in Figure 25 is created by applying to the CBCT image data 2' an appropriate threshold which differs from that applied for the corresponding processing described with reference to steps S6a and S7a of Figure 3.
  • The mask created at step S18 is eroded at step S19 to create the mask of Figure 26.
  • When the mask of Figure 26 is applied to the CBCT image data 2' at step S20, image data as shown in Figure 27 is generated.
  • the interpolation of step S21 generates an image as shown in Figure 28.
  • The processing of steps S6b and S7b also includes the operations of steps S22 and S23.
  • At step S22, the image data shown in Figure 28 generated from the CBCT image data 2 is divided by the image data shown in Figure 24 generated from the CT image data 1.
  • the resulting shading map is shown in Figure 29.
  • the smoothing of step S23 is applied to the shading map of Figure 29 to generate a smoothed shading map as shown in Figure 30.
  • At step S8b, the CBCT image data 2' output from the pre-processing of step S5 is divided by the smoothed shading map shown in Figure 30 to create second enhanced CBCT image data as shown in Figure 31.
  • At step S9, the first enhanced image data generated at step S8a and shown in Figure 20 is processed as shown in Figure 32.
  • a thresholding operation is carried out on the image shown in Figure 20 such that all voxels having a value greater than 1150 (which voxels are considered to represent regions of bone) are set to a value of 1, while all other voxels are set to a value of 0. This results in the generation of a bone mask as shown in Figure 33.
  • the mask of Figure 33 is subjected to a dilation operation using a 25mm spherical structuring element, before an erosion operation using the same structuring element is carried out at step S26.
  • The dilation operation involves centring the structuring element at each voxel of the mask in turn; if any voxel enclosed by the structuring element has a value of 1, all voxels enclosed by the structuring element are set to have a value of 1. Erosion operates as described above.
  • The combination of dilation and erosion makes up a morphological closing operation, the result of which is shown in Figure 34.
  • the mask of Figure 34 is further processed at step S37 by filling in "gaps" in the bone structure.
  • This is achieved at step S37 by replacing any voxels or groups of voxels having a value of 0 which are wholly enclosed by voxels having a value of 1 with voxels having a value of 1.
  • The result of the processing of step S37 is shown in Figure 35. The closing and filling operations are sketched below.
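A minimal sketch of the bone-mask construction, assuming isotropic 1mm voxels; the discrete spherical element is an approximation, and the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def bone_mask(enhanced, threshold=1150, voxel_mm=1.0, element_mm=25.0):
    """Threshold the first enhanced CBCT data, apply a morphological closing
    with a spherical structuring element (dilation then erosion), and fill
    fully enclosed holes (step S37)."""
    mask = enhanced > threshold
    r = max(1, int(round(element_mm / (2 * voxel_mm))))  # element radius in voxels
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (zz**2 + yy**2 + xx**2) <= r**2
    closed = ndimage.binary_closing(mask, structure=ball)
    # Replace 0-regions wholly enclosed by 1s with 1s.
    return ndimage.binary_fill_holes(closed)
```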
  • The mask created at step S37 and shown in Figure 35 is then used at step S10 (Figure 3) to generate an output image by combining the enhanced image data output from steps S8a and S8b.
  • regions of the output image data within the bone regions of the mask of Figure 35 have values determined by the enhanced image data output from step S8b and shown in Figure 31.
  • Regions of the output image data outside the bone regions of the mask of Figure 35 have values determined by the enhanced image data output from step S8a and shown in Figure 20. A sketch of this combination follows.
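The combination itself is a masked selection between the two enhanced volumes; a one-line sketch, with illustrative names:

```python
import numpy as np

def combine_streams(c1, c2, bone):
    """Step S10: inside the bone mask take the second enhanced data C2
    (Figure 31); elsewhere take the first enhanced data C1 (Figure 20)."""
    return np.where(bone, c2, c1)
```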
  • The image output at step S10 is shown in Figure 36.
  • Some of the figures represent particular image data. It will be appreciated that, given that the CT image data 1 and the CBCT image data 2 are both three-dimensional, figures illustrating image data in fact represent a slice of that image data.
  • CT image data is used to generate data used to calculate doses of radiotherapy to be applied to a patient for treatment of a particular tumour.
  • doses will be computed with reference to the size and location of the tumour and will specify the quantity of the radiation dose to be applied together with the location at which the radiation should be applied, the time for which radiation should be applied, and the frequency of application of radiation.
  • CBCT image data is typically obtained during treatment, but is of insufficient accuracy to allow a radiotherapy dose to be modified should the tumour have grown or shrunk during treatment. As indicated above, obtaining CBCT image data during treatment is advantageous given that it is typically more easily obtained at the point of treatment than corresponding CT image data.
  • the invention also has applications in the field of terrain mapping.
  • two images may be obtained such that the effects of lighting or cloud cover degrade one of the images.
  • the two images may be processed using the method described above such that the image which is degraded is improved by use of the other image.
  • Such a method can be useful where a high quality image of particular terrain exists, but transient images of lower quality are obtained. In such a case the high quality image can be used to improve the quality of the lower quality transient images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for generating enhanced image data. First image data, for example CT image data, and second image data, for example CBCT image data, are received. Data indicating a relationship between said first image data and said second image data is generated, for example in the form of a shading map. The generated data is applied to said second image data to generate enhanced image data.

Description

IMAGE ENHANCEMENT METHOD
The present invention relates to a method of enhancing image data. More particularly, the invention relates to a method of generating enhanced image data by processing first and second image data. The enhanced image data may be particularly appropriate for visual interpretation.
Various imaging methods are known in the art. Many such imaging methods find applications in medical imaging in which images of a patient or part of a patient are generated. Such imaging techniques are clinically useful in that they allow noninvasive investigation of a patient, therefore allowing appropriate diagnoses to be made.
One imaging method used in medical applications is computed tomography (CT) imaging. A CT image is obtained by acquiring measurements of a patient at a plurality of points along a longitudinal axis. Highly collimated X-ray fan beams are emitted perpendicular to a point on the longitudinal axis, through the patient, and attenuation of each fan beam is measured. The resulting measurements are tomographically reconstructed into a two dimensional slice depicting and physically characterising patient anatomy at a given longitudinal point, according to methods known in the art. A three-dimensional volume of image data can be displayed as a plurality of two-dimensional slices taken at different longitudinal points.
Each image element (e.g. a pixel or voxel) in the image data represents the radiodensity, measured in Hounsfield units, of a point on a plane perpendicular to the longitudinal axis of the patient. The size, or resolution, of an image element is given by the lateral and vertical distances between a data point corresponding to the image element and data points corresponding to the nearest lateral and vertical image elements in the image, as well as the longitudinal thickness of the slice. Within a slice the data points reconstructed from the measurements are sampled on a regular grid and, therefore, all image elements in the image have the same resolution. Longitudinally the resolution depends on the slice thickness that has been selected.
A further known imaging method is Cone Beam Computed Tomography (CBCT) imaging. This imaging method can be used as an alternative to fan beam CT imaging of the type described above. By emitting a less collimated, i.e. cone shaped, x-ray beam and measuring the attenuation of the beam after it passes through the patient, an image may be constructed, from the measured attenuation values, for each of a plurality of points along the longitudinal axis of the patient. However, measured radiodensity values produced by CBCT imaging are subject to increased error as compared with values produced by fan beam CT imaging.
The use of CBCT imaging is beneficial in some applications as CBCT image data can be obtained more easily than CT image data. In particular, by increasing the cone beam angle CBCT image data can be obtained without having to move a patient from one longitudinal point to the next. Avoiding the need to move a patient from one position to another to allow the generation of image data is considered advantageous in some applications. The lower quality of image data obtained using CBCT imaging is however disadvantageous.
It is an object of some embodiments of the present invention to obviate or mitigate at least some of the problems set out above. More particularly, but not exclusively, it is an object of particular embodiments of the present invention to provide a method allowing the enhancement of CBCT image data.
According to the present invention, there is provided a method for generating enhanced image data, the method comprising receiving first image data and second image data; generating data indicating a relationship between said first image data and said second image data; and applying said generated data to said second image data to generate enhanced image data. The relationship may be an arithmetic relationship. In this way relatively high quality first image data can be processed together with relatively low quality second image data so as to improve the quality of the second image data. Each of the first image data and the second image data may comprise a respective plurality of image elements, which can conveniently be pixels or voxels.
Generating data indicating a relationship between said first image data and said second image data may comprise processing each of said plurality of image elements in said first image data together with a respective image element in said second image data. That is, pixel-wise or voxel-wise processing may be carried out. Such processing may comprise dividing a value of each image element in said second image data by a value of a respective image element in said first image data to generate third image data. The third image data is referred to herein as a shading map. A smoothing function may be applied to said third image data.
The method may further comprise processing one or both of said first and second image data to generate processed first or second image data respectively. Generating data indicating a relationship between said first image data and said second image data may then comprise generating data indicating a relationship between said processed first image data and said processed second image data.
Processing at least one of said first and second image data may comprise generating a mask indicating regions of said processed image data representing particular structures. The mask may be a binary mask. Generating the mask may comprise applying a threshold to values of image elements in the processed image data, such that image elements having a value satisfying the threshold have a corresponding mask element having a first value, while image elements having a value not satisfying the threshold have a corresponding mask element having a second value. The method may further comprise eroding areas of said mask representing a particular structure. Each of the first and second image data may represent an image of a human or animal body. The mask may indicate regions of said image data representing bone and/or gas and/or regions of said image data representing tissue. Processing at least one of said first and second image data may further comprise applying the generated mask to at least one of the first and second image data to generate masked image data. The method may further comprise processing said masked image data by generating values for image elements within masked regions of said image data from values for image elements within unmasked regions, for example using interpolation such as linear interpolation.
The method may further comprise appropriately pre-processing the first and second image data. Such pre-processing may be arranged to allow the first and second image data to be properly processed alongside one another. Accordingly, values of image elements in one of the first and second image data may be modified based upon values of image elements in the other of the first and second image data, so as to arrange that both the first and second image data comprise image elements having comparable values. The pre-processing may comprise registering said first and second image data with one another. The pre-processing may comprise modifying the spatial resolution of at least one of said first and second image data such that each of said first and second image data have substantially equal spatial resolution.
There is also provided a method of generating output image data, the method comprising: generating first enhanced image data using a method substantially as described above, generating second enhanced image data using a method substantially as described above, and combining said first and second enhanced image data to generate output enhanced image data.
Typically, the first and second enhanced image data are each generated using the method described above, although the masks discussed above are created using differing thresholds so as to create different enhanced image data. For example the first enhanced image data may be created by processing the first and second image data with reference to a mask differentiating between soft tissue on the one hand and bone and gas on the other. The second enhanced image data may be created by processing the first and second image data with reference to a mask differentiating between soft tissue and bone on the one hand and gas on the other. The combination of enhanced image data in this way typically produces higher quality output data.
Combining the first and second enhanced image data may comprise generating a mask from one of said first enhanced image data and said second enhanced image data, and combining said first and second enhanced image data in accordance with said mask. Generating said mask may comprise applying a threshold to values of image element of said first or second enhanced image data, and optionally applying a morphological closing operation after application of said threshold. The generated mask may identify a particular structure within the enhanced image data, for example the mask may identify bone.
The first image data may be obtained in any convenient way. For example, the first image data may be obtained using computed tomography. The second image data may be obtained in any convenient way. For example, the second image data may be obtained using cone beam computed tomography.
The invention further provides a method for determining a treatment dose for a patient. The method comprises processing first image data obtained at a first time to determine an initial treatment dose; and processing second image data obtained at a second later time together with said first image data to generate enhanced image data, and generating a modified treatment dose from said enhanced image data.
In this way, where the first image data is obtained at a first time in a treatment regime, and the second image data is obtained as the treatment regime progresses, the second image data can be used to appropriately modify the treatment dose given its enhancement. Such a method is typically advantageous where the second image data is more easily obtainable than the first image data. The treatment may be radiation therapy, intended, for example, to shrink or eradicate a tumour. Aspects of the present invention can be implemented in any convenient way including by way of suitable methods, apparatus and computer systems. Some embodiments of the invention provide computer programs configured to carry out the methods set out above. Such computer programs can be carried on appropriate computer readable media. Such media can include tangible media such as CD-ROMS, flash memory devices, hard disk drives and so on, and also include intangible media such as communications signals.
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a flowchart providing an overview of operation of an embodiment of the invention;
Figure 2 is a schematic illustration of operation of an embodiment of the present invention;
Figure 3 is a high level flowchart showing the operations carried out in the embodiment of the present invention shown in Figure 2;
Figure 4 is an image taken from a set of Computed Tomography (CT) image data;
Figure 5 is an image taken from a set of Cone Beam Computed Tomography (CBCT) image data;
Figure 6 is a flowchart showing part of the processing of Figure 3 in further detail;
Figure 7 is a graph showing the distribution of pixel values in CT image data in dashed line and CBCT image data in solid line;
Figure 8 is a graph showing the distribution of pixel values in CT image data in dashed line and CBCT image data in solid line after part of the processing of Figure 3;
Figure 9 is a flowchart showing part of the processing of Figure 3 in further detail;
Figure 10 is a bone mask created from the CT image data of Figure 4 by the processing of Figure 9;
Figure 11 is an image of the bone mask of Figure 10 after performance of an erosion operation;
Figure 12 is an image showing application of the mask of Figure 11 to the image of Figure 4;
Figure 13 is an image showing the result of interpolation carried out on the image of Figure 12;
Figure 14 is a mask created from the CBCT image shown in Figure 5;
Figure 15 is an image showing the result of an erosion operation carried out on the mask of Figure 14;
Figure 16 is an image showing the result of application of the mask of Figure 15 to the image of Figure 5;
Figure 17 is an image showing the result of an interpolation operation carried out on the image of Figure 16;
Figure 18 is a shading map created using the images of Figures 13 and 17;
Figure 19 is an image showing the result of a smoothing operation carried out on the shading map of Figure 18;
Figure 20 is an image showing application of the smoothed shading map of Figure 19 to the image of Figure 5;
Figure 21 is a mask created from the image of Figure 4;
Figure 22 is an image showing the result of an erosion operation carried out on the mask of Figure 21;
Figure 23 is an image showing the result of application of the mask of Figure 22 to the image of Figure 4;
Figure 24 is an image showing the result of interpolation carried out on the image of Figure 23;
Figure 25 is a mask created from the image of Figure 5;
Figure 26 is an image showing the result of an erosion operation carried out on the mask of Figure 25;
Figure 27 is an image showing the application of the mask of Figure 26 to the image of Figure 5;
Figure 28 is an image showing the result of an interpolation operation carried out on the image of Figure 27;
Figure 29 is a shading map created from the images of Figures 24 and 28;
Figure 30 is a smoothed shading map created from the shading map of Figure 29;
Figure 31 is an image showing application of the shading map of Figure 30 to the image of Figure 5;
Figure 32 is a flowchart showing part of the processing of Figure 3 in further detail;
Figure 33 is a mask created from the image of Figure 20;
Figure 34 is an image showing the result of a closing operation carried out on the mask of Figure 33;
Figure 35 is an image showing the result of filling the mask shown in Figure 34; and
Figure 36 is an image showing combination of the images of Figures 20 and 31 using the bone mask of Figure 35.
An embodiment of the invention is now described in the context of medical imaging. Referring to Figure 1, at step S1 a computed tomography (CT) image (in the form of a slice sequence) of a patient is obtained. It will be appreciated that the CT image will, in general terms, be an image of a part of the patient relevant to a particular clinical procedure. CT images are generally of a high quality but are relatively difficult to obtain, not least because their generation requires the use of expensive imaging equipment. It is therefore often the case that where a patient is to undergo treatment (such as radiation therapy treatment), a sequence of CT image slices is obtained initially before treatment begins (as shown at step S1 of Figure 1), but that it is impractical to regularly obtain CT image sequences as treatment progresses. Therefore, at step S2, during the course of treatment, a cone beam computed tomography (CBCT) image of the patient (or the relevant part of the patient) is obtained. This is advantageous given that CBCT images can generally be obtained with the patient in the treatment position, and do not require the patient to be moved, as would be the case with CT imaging. The use of CBCT images is however disadvantageous in that such images are of lower quality than CT images, and have remained so since practical X-ray CBCT appeared in the 1980s. Accordingly, in the described embodiment of the invention, at step S3 the CT image and the CBCT image are processed together in such a way that the CT image is used to improve the quality of the CBCT image, resulting in the output of an improved image at step S4. This is shown in Figure 2. Here, it can be seen that CT image data 1 and CBCT image data 2 are together input to an image processing process 3 which generates output data 4 which is an improved quality CBCT image. The image processing process 3 is described in further detail below.
Figure 3 is a flow chart showing the image processing process 3 at a high level. At step S5 both the CT image data 1 and the CBCT image data 2 are appropriately pre- processed. Two parallel streams of processing are then initiated. A first stream comprises steps S6a to S8a, while a second stream comprises steps S6b to S8b. Although parts of this description refer to parallel processing to aid understanding, it will be appreciated that the two streams of processing can, in some embodiments, be carried out sequentially.
At step S6a, each of the CT image data 1 and CBCT image data 2 is processed individually. In each case parts of the respective image data representing bone or gas are removed, before appropriate interpolation from adjacent parts of the image data is carried out to avoid discontinuities in the image data. At step S7a a shading map is created by dividing the CBCT image data output from step S6a by the CT image data output from step S6a. Suitable smoothing is carried out at step S7a. At step S8a the shading map created at step S7a is applied to the CBCT image data, as output from the pre-processing of step S5, to generate as output first enhanced CBCT image data C1.
The processing of steps S6b to S8b of Figure 3 is similar to that of steps S6a to S8a. However, at step S6b the processing carried out is such as to exclude only parts of each of the CT image data 1 and the CBCT image data 2 which represent air (i.e. not those which represent bone). Having excluded parts of each of the CT image data 1 and the CBCT image data 2 which represent air, appropriate interpolation is also carried out at step S6b. At step S7b an appropriate shading map is created using the CT image data 1 and the CBCT image data 2 output from the processing of step S6b. Appropriate smoothing is also carried out at step S7b. The smoothed shading map is then applied to the CBCT image data output from the pre-processing of step S5 at step S8b to generate as output second enhanced CBCT image data C2.
From the preceding description, it can be seen that two sets of enhanced CBCT image data are generated, one (C1) at step S8a and one (C2) at step S8b. At step S9 the first enhanced CBCT image data C1 is further processed to generate a mask indicating parts of the first enhanced CBCT image data C1 representing bone. At step S10 the mask created at step S9 is used to appropriately combine the first enhanced CBCT image data C1 and the second enhanced CBCT image data C2 to generate improved CBCT image data as the output data 4.
The processing of Figure 3 is now described in further detail.
Each of the CT image data 1 and the CBCT image data 2 is arranged in a plurality of slices, each slice comprising an array of voxels. That is, each of the CT image data 1 and the CBCT image data 2 comprises a volume of voxels arranged in a plurality of slices. Figure 4 shows a slice of the CT image data 1, while Figure 5 shows a corresponding slice of the CBCT image data 2. It can be seen that the CT image data 1 provides a higher quality image than the CBCT image data 2 which shows some artefacts.
Figure 6 is a flowchart showing the pre-processing of step S5 of Figure 3 in further detail. At step S11 voxel values of the CT image data 1 and the CBCT image data 2 are processed so as to arrange that the voxel values of each set of image data are similarly scaled. Specifically, Figure 7 shows voxel values of the CT image data 1 by way of a broken line. Voxel values of the CBCT image data 2 are shown by way of a solid line. It can be seen that voxel values of the CT image data 1 define a peak 6 which represents voxels representing air, and a peak 7 which represents voxels representing tissue. Similarly, voxel values of the CBCT image data 2 define a peak 8 which represents voxels representing air and a peak 9 which represents voxels representing tissue. It can be seen that the peaks of the CT image data 1 and the CBCT image data 2 are not coincident. In order to allow the CT image data 1 and the CBCT image data 2 to be processed together it is necessary to modify values of voxels of the CBCT image data 2. This is achieved by processing voxels in the CBCT image data 2 by multiplying values of those voxels by a determined scalar value, and adding a further scalar value to the result of the multiplication. Specifically:
p' = Ap + B
where p is the initial voxel value; p' is the modified voxel value; and
A and B are scalar values chosen so as to allow the peaks defined by voxel values of the CBCT image data 2 to be made coincident with peaks of the CT image data 1.
The values of A and B are determined by finding the values which minimize the sum of squared difference between the two histograms. Starting values of A and B are defined (such that A = 1.0, and B = (mean CT pixel value - mean CBCT pixel value)). The values of p' produced by these values of A and B are computed. The CT histogram is then smoothed slightly, to mitigate the difference in width of the peaks between CT and CBCT, and subtracted from the scaled CBCT histogram (p'). Each element in the array resulting from this subtraction is then squared and the sum of the squared difference values is computed. The values of A and B are then iterated and the sum of the squared differences is computed at each iteration. The iteration is continued until a minimum in the sum of squared difference is found. The minimization is carried out using the downhill simplex method of Nelder and Mead, 1965, Computer Journal, Vol 7, pp 308-313. In an alternative embodiment, the values A and B can be chosen by the user to give the best match as subjectively assessed by the user. This is sometimes necessary in cases where the automatic determination fails.
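By way of illustration only, the determination of A and B described above might be sketched as follows. This sketch does not form part of the original disclosure: it assumes NumPy/SciPy, and the function name match_histograms and the choice of Gaussian smoothing for the CT histogram are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

def match_histograms(ct_vals, cbct_vals, bin_edges):
    """Find A and B minimising the sum of squared differences between
    the CT histogram and the histogram of the scaled CBCT values.
    bin_edges: explicit array of common histogram bin edges."""
    ct_hist, _ = np.histogram(ct_vals, bins=bin_edges)
    # Smooth the CT histogram slightly to mitigate the difference in
    # peak widths between CT and CBCT.
    ct_hist = gaussian_filter1d(ct_hist.astype(float), sigma=1.0)

    def cost(params):
        a, b = params
        # Apply p' = A*p + B and re-histogram the scaled CBCT values.
        scaled_hist, _ = np.histogram(a * cbct_vals + b, bins=bin_edges)
        return np.sum((scaled_hist - ct_hist) ** 2)

    # Starting values: A = 1.0, B = mean CT value - mean CBCT value.
    x0 = [1.0, ct_vals.mean() - cbct_vals.mean()]
    return minimize(cost, x0, method='Nelder-Mead').x  # (A, B)
```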
Having modified voxel values of the CBCT image data 2, voxel values of the CBCT image data 2 have a distribution as shown in Figure 8. That is, while a broken line representing voxel values of the CT image data 1 again shows two peaks 6, 7 in positions which are the same as those of corresponding peaks in Figure 7, the solid line representing voxel values of the CBCT image data 2 shows two peaks 8', 9' which correspond to the peaks 8, 9 of Figure 7 but which have been moved so as to be coincident with voxel values represented by the peaks 6, 7.
Referring back to Figure 6, the processing of step S11 described above is arranged to modify voxel values of the CBCT image data 2 so as to be within a similar range to those of the CT image data 1. At step S12 the CT image data 1 and the CBCT image data 2 are registered together, that is, the CBCT image data 2 is spatially modified, so as to be defined by a co-ordinate system common to the CT image data 1 and the CBCT image data 2. It will be appreciated that such registration is required so as to allow the CT image data 1 and the CBCT image data 2 to be processed together. More specifically, such registration allows respective points of the CT image data 1 and the CBCT image data 2 to be compared. The registration process of step S12 is carried out using a chamfer matching algorithm which is described in van Herk M, Kooy HM. "Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching". Medical Physics 1994;21(7):1163-78, the contents of which are herein incorporated by reference.
Van Herk et al describe a chamfer matching algorithm in the context of medical images. Van Herk et al compare a number of different ways to implement chamfer matching for medical images. Methods for matching CT images are described, as are methods for matching a CT image with an MRI image and a CT image with a SPECT image. It has been found that the described method for matching two CT images can be applied to match a CT image and a CBCT image.
When selecting feature points from the CT image, the described method includes a step of reducing the number of points to speed up the calculation. The number to which the feature points are reduced is treated as a variable that can be adjusted, and results are presented for different values. However, in a preferred embodiment of the present invention, there is no step where the number of feature points is reduced. This can be considered as using a value for the reduced number of points which is the same as the initial number of points.
Van Herk et al describe three different cost functions for the matching: rms distance, mean distance, and maximum distance. A preferred embodiment of the present invention uses mean distance as a cost function. Van Herk et al describe two different optimisation methods: downhill simplex, and Powell's method; a preferred embodiment of the present invention uses downhill simplex optimisation.
In general terms, the chamfer matching algorithm registers images represented by the CT image data 1 and the CBCT image data 2 by reference to bone structures within the two images. Voxels representing bone edges are identified in each image and the generalised distance between corresponding voxels in the two images is minimised by an appropriate registration operation, which may comprise any suitable transformation such as a rotation and/or translation.
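A minimal sketch of such a chamfer matching step is given below, as an illustration of the principle rather than as the method of van Herk et al. For brevity the sketch optimises translation only; a full implementation would also include rotation parameters. The use of SciPy and the function names are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import minimize

def chamfer_register(ct_edges, cbct_edges):
    """ct_edges, cbct_edges: boolean 3-D arrays marking bone-edge voxels.
    Returns the translation minimising the mean distance from CBCT edge
    voxels to the nearest CT edge voxel."""
    # Each voxel of the distance map holds the distance to the nearest
    # CT edge voxel.
    dist = distance_transform_edt(~ct_edges)
    pts = np.array(np.nonzero(cbct_edges), dtype=float)  # shape (3, N)

    def mean_distance(shift):
        # Sample the distance map at the shifted CBCT edge positions.
        coords = pts + np.asarray(shift)[:, None]
        return map_coordinates(dist, coords, order=1, mode='nearest').mean()

    return minimize(mean_distance, x0=[0.0, 0.0, 0.0],
                    method='Nelder-Mead').x
```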
Referring again to Figure 6, at step S13 an operation is carried out to ensure that the CT image data 1 and the CBCT image data 2 are of equal spatial resolution. The CT image data 1 will be defined by a plurality of voxels of typical size 0.95mm x 0.95mm x 5mm in the lateral, vertical and longitudinal directions respectively. The CBCT image data 2 will be defined by a plurality of voxels of typical size 1mm x 1mm x 1mm. It can therefore be appreciated that the CBCT image data 2 is of different spatial resolution than the CT image data 1. Accordingly, the CT image data 1 is processed so as to change its spatial resolution by interpolation in each of the three dimensions.
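By way of illustration, such a resampling might be sketched as follows (an assumed helper using SciPy's trilinear zoom, not part of the original disclosure):

```python
import numpy as np
from scipy.ndimage import zoom

def resample(image, voxel_size, target_voxel_size):
    # Zoom factor per axis = current spacing / target spacing, so that
    # e.g. 5mm slices are interpolated to 1mm spacing.
    factors = np.asarray(voxel_size) / np.asarray(target_voxel_size)
    return zoom(image, factors, order=1)  # trilinear interpolation

# e.g. resample(ct, (0.95, 0.95, 5.0), (1.0, 1.0, 1.0))
```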
The processing of Figure 6 is therefore such as to arrange that each voxel of the CT image data 1 can be processed together with a corresponding voxel of the CBCT image data 2, the voxel values having been processed at step S11 so as to be comparable with one another.
Referring back to Figure 3, having described the pre-processing of step S5 with reference to Figure 6, the processing of steps S6a and S7a is now described with reference to Figure 9. CT image data 1' which is output from the pre-processing of step S5 is input to processing of step S14 which generates a binary mask indicating regions of the image represented by the CT image data 1' which represent soft tissue, and regions which do not represent tissue. Typically voxels having values in the range 850 to 1150 are considered to represent soft tissue and such voxels are set to have a value of 1. All other voxels (i.e. those considered to represent air or bone) are set to have a value of 0. Figure 10 shows the output of the processing of step S14 where the input is CT image data 1' as shown in Figure 4 after appropriate pre-processing. It can be seen that voxels representing bone or air are illustrated in black, while those representing tissue are shown in white.
Having generated an appropriate mask at step S14, this mask is further processed at step S15. Specifically an erosion operation is carried out using a 5mm structuring element. Erosion operations in general terms will be known to those of ordinary skill in the art. In general terms, the 5mm structuring element is centred on each voxel of the mask in turn. If any voxel within the structuring element at a particular position has a value of 0 (i.e. is considered to represent air or bone), all voxels within the structuring element are set to have a value of 0. It can accordingly be appreciated that the effect of the erosion operation is to expand regions of the CT image data 1 which represent air and bone, and reduce regions of the CT image data 1 which represent soft tissue. The output of the erosion operation of step S15 when carried out on the mask of Figure 10 is shown in Figure 11.
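The thresholding and erosion of steps S14 and S15 might be sketched as follows. The sketch uses SciPy's binary erosion with a cubic structuring element, which shrinks soft-tissue regions and expands air/bone regions to the same effect as the operation described above; the function name and the voxel-radius parameter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def soft_tissue_mask(volume, lo, hi, radius_voxels):
    # Mask element is 1 for voxels whose value lies in [lo, hi] (soft
    # tissue), 0 otherwise (air or bone), as at step S14.
    mask = (volume >= lo) & (volume <= hi)
    # Erosion with a cubic structuring element expands air/bone regions
    # and shrinks soft-tissue regions, as at step S15.
    cube = np.ones((2 * radius_voxels + 1,) * 3, dtype=bool)
    return binary_erosion(mask, structure=cube)

# e.g. soft_tissue_mask(ct, 850, 1150, 2) for a ~5mm element at 1mm voxels
```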
The mask output from step S15 is then applied to the CT image data 1' at step S16, so as to remove from the CT image data 1' regions of the CT image data which do not represent soft tissue. The output of step S16 when the mask of Figure 11 is applied to the CT image data of Figure 4 is shown in Figure 12. Having removed regions of the image represented by the CT image data 1 which do not represent soft tissue, an interpolation operation is carried out at step S17, generating an image as shown in Figure 13. Any appropriate interpolation can be used to generate voxel values for parts of the CT image data 1 which do not represent soft tissue. In a preferred embodiment of the invention linear interpolation is used for reasons of speed. Some embodiments of the invention are implemented using the Interactive Data Language (IDL) package available from ITT Visual Information Systems of Boulder, Colorado, USA. The IDL package provides functions TRIANGULATE and TRIGRID which can conveniently be used to perform the necessary interpolation. The TRIANGULATE function constructs a Delaunay triangulation of a planar set of points, and the TRIGRID function can then be used to carry out the required interpolation.
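A per-slice sketch of such a Delaunay-based linear interpolation, using SciPy's griddata in place of the IDL TRIANGULATE/TRIGRID functions, is given below. It is illustrative only; the nearest-neighbour fallback for points outside the convex hull of the known data is an assumption of the sketch.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_masked_slice(slice_data, mask):
    """Replace voxels outside the mask with values linearly interpolated
    (via Delaunay triangulation) from the unmasked voxels of the slice."""
    yy, xx = np.nonzero(mask)                      # known points
    known = slice_data[yy, xx]
    gy, gx = np.mgrid[0:slice_data.shape[0], 0:slice_data.shape[1]]
    filled = griddata((yy, xx), known, (gy, gx), method='linear')
    # Points outside the convex hull of the known data come back as NaN;
    # fall back to nearest-neighbour values there.
    nearest = griddata((yy, xx), known, (gy, gx), method='nearest')
    return np.where(np.isnan(filled), nearest, filled)
```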
It can be seen from Figure 9 that similar processing to that described above is carried out on CBCT image data 2' which is output from the pre-processing of step S5. Specifically, at step S18 an appropriate threshold is applied to voxels of the CBCT image data 2 shown in Figure 5, to generate a mask of the form shown in Figure 14. Here, the threshold applied is such that voxels having values between 600 and 1350 are considered to represent soft tissue and are set to a value of 1 while all other voxels are set to a value of 0. A larger range of voxel values is used in connection with the CBCT image data 2' (as compared with the CT image data 1') due to greater intensity variations in the CBCT data 2'. At step S19 an erosion operation is carried out on the mask generated at step S18 and shown in Figure 14, generating a mask as shown in Figure 15. Again, the erosion operation uses a 5mm cube structuring element, and works analogously to the erosion operation of step S15. The mask of Figure 15 is applied to the CBCT image data 2 shown in Figure 5 at step S20, resulting in the generation of an image as shown in Figure 16. At step S21 the masked image data is interpolated as described with reference to step S17, resulting in the generation of an image as shown in Figure 17.
At step S22 of Figure 9 the interpolated CBCT image data output from the interpolation operation of step S21 and shown in Figure 17 is divided by the interpolated CT image data output from the interpolation operation of step S17 and shown in Figure 13 to generate an image shown in Figure 18, which is referred to as a shading map. This division is carried out by dividing pairs of voxels in turn, one voxel of each pair being taken from the interpolated CBCT data and the other voxel of each pair being taken from the interpolated CT data. The shading map is smoothed at step S23, by the application of a 15mm boxcar average function, resulting in the generation of a smoothed shading map as shown in Figure 19. Smoothing using a boxcar average function essentially processes blocks of voxels in turn and replaces a voxel value for a voxel at an origin of a block with an average of all voxel values in the block.
The output of step S23 is the output of step S7a of Figure 3. Referring to Figure 3, at step S8a the CBCT image data output from the processing of step S5 is divided by the smoothed shading map of Figure 19 resulting in the generation of first enhanced CBCT image data, as shown in Figure 20.
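Taken together, the division of step S22, the smoothing of step S23 and the correction of step S8a might be sketched as follows. The sketch is illustrative only: it assumes the interpolated CT data contains no zero-valued voxels, and the voxel-size parameter is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def shading_correct(cbct, ct_interp, cbct_interp, box_mm=15, voxel_mm=1.0):
    # Step S22: shading map as the voxel-by-voxel ratio of the
    # interpolated CBCT data to the interpolated CT data.
    shading = cbct_interp / ct_interp
    # Step S23: boxcar (moving-average) smoothing of the shading map.
    shading = uniform_filter(shading, size=int(round(box_mm / voxel_mm)))
    # Step S8a: divide the pre-processed CBCT data by the smoothed map.
    return cbct / shading
```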
As indicated above, the processing of steps S6b and S7b is equivalent to the processing of steps S6a and S7a, save that a different mask is created. That is, the processing of steps S6b and S7b is the same as that illustrated in Figure 9, subject to a modification to the thresholds applied at steps S14 and S18. Specifically, previously voxels representing soft tissue were set to a value of 1, while all other voxels (specifically those representing bone or air) were set to a value of 0. In the processing of steps S6b and S7b the thresholds applied at steps S14 and S18 are such as to exclude voxels representing air, but retain voxels representing bone. That is, voxels representing air are set to 0 while voxels representing soft tissue or bone are set to 1. This will involve applying a threshold such that voxels in the CT data 1' having a value greater than 850 and voxels in the CBCT data 2' having a value greater than 600 are set to a value of 1, while all other voxels are set to a value of 0.
When the processing of steps S6b and S7b is carried out, the mask generated at step S14 is that shown in Figure 21. The erosion operation of step S15 generates the mask shown in Figure 22, which is applied to the CT image data 1' as output from the pre-processing of step S5 at step S16 to generate image data as shown in Figure 23. Interpolation carried out at step S17 generates an image as shown in Figure 24.
Additionally, at step S18 the mask shown in Figure 25 is created by applying an appropriate threshold to the CBCT image data 2' which differs from that applied for the corresponding processing described with reference to steps S6a and S7a of Figure 3. The mask created at step S18 is eroded at step S19 to create the mask of Figure 26. When the mask of Figure 26 is applied to the CBCT image data 2' output from the pre-processing of step S5 at step S20, image data as shown in Figure 27 is generated. The interpolation of step S21 generates an image as shown in Figure 28.
Again, the processing of steps S6b and S7b includes the operations of steps S22 and S23. At step S22 the image data shown in Figure 28 generated from the CBCT image data 2 is divided by the image data shown in Figure 24 generated from the CT image data 1. The resulting shading map is shown in Figure 29. The smoothing of step S23 is applied to the shading map of Figure 29 to generate a smoothed shading map as shown in Figure 30.
Referring again to Figure 3, at step S8b the CBCT image data 2' output from the pre-processing of step S5 is divided by the smoothed shading map shown in Figure 30 to create second enhanced CBCT image data as shown in Figure 31. At step S9 of Figure 3, the first enhanced image data generated at step S8a and shown in Figure 20 is processed as shown in Figure 32. At step S24 a thresholding operation is carried out on the image shown in Figure 20 such that all voxels having a value greater than 1150 (which voxels are considered to represent regions of bone) are set to a value of 1, while all other voxels are set to a value of 0. This results in the generation of a bone mask as shown in Figure 33.
At step S25 the mask of Figure 33 is subjected to a dilation operation using a 25mm spherical structuring element, before an erosion operation using the same structuring element is carried out at step S26. The dilation operation involves centring the structuring element at each voxel of the mask in turn; if any voxel enclosed by the structuring element has a value of 1, all voxels enclosed by the structuring element are set to have a value of 1. Erosion operates as described above. The combination of dilation and erosion makes up a morphological closing operation, the result of which is shown in Figure 34. The mask of Figure 34 is further processed at step S27 by filling in "gaps" in the bone structure. This is achieved by replacing any voxels or groups of voxels having a value of 0 which are wholly enclosed by voxels having a value of 1 with voxels having a value of 1. The result of the processing of step S27 is shown in Figure 35.
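The bone mask generation of steps S24 to S27 might be sketched as follows. The sketch is illustrative only; the structuring-element radius of 12 voxels (approximating a 25mm spherical element at 1mm voxels) is an assumption.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_fill_holes

def bone_mask(enhanced_cbct, threshold=1150, radius_voxels=12):
    # Step S24: voxels above the threshold are taken to represent bone.
    mask = enhanced_cbct > threshold
    # Steps S25/S26: morphological closing (dilation then erosion) with
    # a spherical structuring element.
    r = radius_voxels
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = zz ** 2 + yy ** 2 + xx ** 2 <= r ** 2
    mask = binary_closing(mask, structure=sphere)
    # Step S27: fill any gaps wholly enclosed by bone.
    return binary_fill_holes(mask)
```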
The mask created at step S27 and shown in Figure 35 is then used at step S10 (Figure 3) to generate an output image by combining the enhanced image data output from steps S8a and S8b. Specifically, regions of the output image data within the bone regions of the mask of Figure 35 have values determined by the enhanced image data output from step S8b and shown in Figure 31. Regions of the output image data outside the bone regions of the mask of Figure 35 have values determined by the enhanced image data output from step S8a and shown in Figure 20. The image output at step S10 is shown in Figure 36. In the preceding description it has been indicated that some of the figures represent particular image data. It will be appreciated that, given that the CT image data 1 and the CBCT image data 2 are both three-dimensional, figures illustrating image data in fact represent a slice of that image data.
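The combination of step S10 then reduces to a per-voxel selection, for example (illustrative sketch only):

```python
import numpy as np

def combine(c1, c2, bone):
    # Within bone regions of the mask take the second enhanced data C2;
    # elsewhere take the first enhanced data C1.
    return np.where(bone, c2, c1)
```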
It will be appreciated that the generation of improved image data, such as that output from the processing of step S10 of Figure 3, has a number of applications. For example, it is currently the case that CT image data is used to generate data used to calculate doses of radiotherapy to be applied to a patient for treatment of a particular tumour. Such doses will be computed with reference to the size and location of the tumour and will specify the quantity of the radiation dose to be applied together with the location at which the radiation should be applied, the time for which radiation should be applied, and the frequency of application of radiation.
CBCT image data is typically obtained during treatment, but is of insufficient accuracy to allow a radiotherapy dose to be modified should the tumour have grown or shrunk during treatment. As indicated above, obtaining CBCT image data during treatment is advantageous given that it is typically more easily obtained at the point of treatment than corresponding CT image data.
Using the methods described herein, it is possible to process the CBCT image data obtained during treatment together with the initially obtained CT image data to generate CBCT image data of improved quality which can be used to modify radiotherapy dosage.
It will be appreciated that although preferred embodiments of the invention have been described above, various modifications can be made to the described embodiments without departing from the spirit and scope of the present invention, as defined by the appended claims. In particular, it will be appreciated that although embodiments of the invention have been described with reference to medical data, the invention is not limited to medical applications. For example, the invention also has applications in the field of terrain mapping. Here, two images may be obtained such that the effects of lighting or cloud cover degrade one of the images. The two images may be processed using the method described above such that the image which is degraded is improved by use of the other image. Such a method can be useful where a high quality image of particular terrain exists, but transient images of lower quality are obtained. In such a case the high quality image can be used to improve the quality of the lower quality transient images.

Claims

1. A method for generating enhanced image data, the method comprising: receiving first image data and second image data, wherein each of said first and second image data comprises a respective plurality of image elements; generating data indicating a relationship between a first region of said first image data and a second region of said second image data, each of said first and second regions comprising a plurality of image elements; applying said generated data to said second image data to generate enhanced image data.
2. A method according to claim 1, wherein said relationship is an arithmetic relationship.
3. A method according to claim 2, wherein said processing comprises performing a mathematical operation on a value of each image element in said second image data and a value of a respective image element in said first image data to generate third image data.
4. A method according to claim 3, wherein said processing comprises dividing a value of each image element in said second image data by a value of a respective image element in said first image data to generate third image data.
5. A method according to claim 3, wherein said processing comprises subtracting a value of each image element in said second image data from a value of a respective image element in said first image data to generate third image data.
6. A method according to claims 3, 4 or 5, further comprising applying a smoothing operation to said third image data.
7. A method according to any preceding claim, further comprising processing one of said first and second image data to generate processed first or second image data respectively.
8. A method according to any one of claims 1 to 6, comprising processing each of said first and second image data to generate processed first and second image data.
9. A method according to claim 8, wherein generating data indicating a relationship between said first image data and said second image data comprises generating data indicating a relationship between said processed first image data and said processed second image data.
10. A method according to claim 8 or 9, wherein said processing identifies regions of each of said first and second image data which are non-comparable.
11. A method according to claim 10, wherein said regions of said first and second image data which are non-comparable represent bone and/or gas.
12. A method according to claim 10 or 11, further comprising modifying values of image elements in said non-comparable regions based upon values of image elements in adjacent regions.
13. A method according to claim 7, 8 or 9 wherein processing at least one of said first and second image data comprises generating a mask indicating regions of said processed image data representing particular structures.
14. A method according to claim 13, wherein said mask is a binary mask.
15. A method according to claim 14, wherein generating said mask comprises applying a threshold to values of image elements in the processed image data, such that image elements having a value satisfying the threshold have a corresponding mask element having a first value, while image elements having a value not satisfying the threshold have a corresponding mask element having a second value.
16. A method according to claim 13, 14 or 15, further comprising eroding areas of said mask representing a particular structure.
17. A method according to claim 13, 14, 15 or 16, wherein each of said first and second image data represent an image of a human or animal body and said mask indicates regions of said image data representing bone and/or gas.
18. A method according to claim 13, 14, 15, 16 or 17, wherein each of said first and second image data represent an image of a human or animal body and said mask indicates regions of said image data representing tissue.
19. A method according to any one of claims 13 to 18, wherein processing at least one of said first and second image data further comprises applying the generated mask to at least one of the first and second image data to generate masked image data.
20. A method according to claim 19, further comprising processing said masked image data by generating values for image elements within masked regions of said image data from values for image elements within unmasked regions.
21. A method according to claim 20, wherein generating values for image elements within masked regions of said image data comprises an interpolation operation.
22. A method according to any preceding claim, further comprising preprocessing said first and second image data.
23. A method according to claim 22, wherein said pre-processing comprises modifying values of one of said first and second image data based upon values of the other of said first and second image data.
24. A method according to claim 22 or 23, wherein said pre-processing comprises registering said first and second image data with one another.
25. A method according to claim 22, 23 or 24, wherein said pre-processing comprises modifying the spatial resolution of at least one of said first and second image data such that each of said first and second image data have substantially equal spatial resolution.
26. A method of generating output image data, the method comprising: generating first enhanced image data using a method according to any preceding claim; generating second enhanced image data using a method according to any preceding claim; combining said first and second enhanced image data to generate output enhanced image data.
27. A method according to claim 26, wherein combining said first and second enhanced image data comprises generating a mask from one of said first enhanced image data and said second enhanced image data, and combining said first and second enhanced image data in accordance with said mask.
28. A method according to claim 27, wherein generating said mask comprises applying a threshold to values of image element of said first or second enhanced image data.
29. A method according to claim 28, wherein generating said mask comprises applying a morphological closing operation after application of said threshold.
30. A method according to any preceding claim, wherein said image elements are pixels or voxels.
31. A method according to any preceding claim, wherein said first image data is obtained using computed tomography and said second image data is obtained using cone beam computed tomography.
32. A computer readable medium carrying computer readable instructions configured to cause a computer to carry out a method according to any preceding claim.
33. A computer apparatus for generating enhanced image data, the apparatus comprising: a program memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said program memory; wherein the processor readable instructions comprise instructions configured to cause the computer to carry out a method according to any one of claims 1 to 31.
34. A method for determining a treatment dose for a patient, the method comprising: processing first image data obtained at a first time to determine an initial treatment dose; and processing second image data obtained at a second later time together with said first image data to generate enhanced image data, and generating a modified treatment dose from said enhanced image data; wherein generating enhanced image data comprises carrying out a method according to any one of claims 1 to 31.
35. A method according to claim 34, wherein said treatment is radiation therapy.
36. A method according to claim 34 or 35, wherein said first image data is generated using a computed tomography method and said second image data is generated using a cone beam computed tomography method.
37. A computer readable medium carrying computer readable instructions configured to cause a computer to carry out a method according to any one of claims 34 to 36.
38. A computer apparatus for determining a treatment dose for a patient, the apparatus comprising: a program memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said program memory; wherein the processor readable instructions comprise instructions configured to cause the computer to carry out a method according to any one of claims 34 to 36.
PCT/GB2008/002892 2007-09-28 2008-08-28 Image enhancement method WO2009040497A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US99572907P 2007-09-28 2007-09-28
GB0719076.2 2007-09-28
US60/995,729 2007-09-28
GB0719076A GB2453177C (en) 2007-09-28 2007-09-28 Image enhancement method

Publications (1)

Publication Number Publication Date
WO2009040497A1 true WO2009040497A1 (en) 2009-04-02

Family

ID=38701927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2008/002892 WO2009040497A1 (en) 2007-09-28 2008-08-28 Image enhancement method

Country Status (2)

Country Link
GB (1) GB2453177C (en)
WO (1) WO2009040497A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192268A (en) * 2019-12-31 2020-05-22 广州华端科技有限公司 Medical image segmentation model construction method and CBCT image bone segmentation method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201009725D0 (en) 2010-06-11 2010-07-21 Univ Leuven Kath Method of quantifying local bone loss
US10535167B2 (en) 2014-12-31 2020-01-14 General Electric Company Method and system for tomosynthesis projection image enhancement and review
GB2533801B (en) * 2014-12-31 2018-09-12 Gen Electric Method and system for tomosynthesis projection images enhancement
CN111062997B (en) * 2019-12-09 2023-09-12 上海联影医疗科技股份有限公司 Angiography imaging method, angiography imaging system, angiography imaging equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070047834A1 (en) * 2005-08-31 2007-03-01 International Business Machines Corporation Method and apparatus for visual background subtraction with one or more preprocessing modules

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10206397B4 (en) * 2002-02-15 2005-10-06 Siemens Ag Method for displaying projection or sectional images from 3D volume data of an examination volume
US6904118B2 (en) * 2002-07-23 2005-06-07 General Electric Company Method and apparatus for generating a density map using dual-energy CT
US7724930B2 (en) * 2005-11-03 2010-05-25 Siemens Medical Solutions Usa, Inc. Systems and methods for automatic change quantification for medical decision support
WO2009004571A1 (en) * 2007-07-05 2009-01-08 Koninklijke Philips Electronics N.V. Method and apparatus for image reconstruction
US8144953B2 (en) * 2007-09-11 2012-03-27 Siemens Medical Solutions Usa, Inc. Multi-scale analysis of signal enhancement in breast MRI

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070047834A1 (en) * 2005-08-31 2007-03-01 International Business Machines Corporation Method and apparatus for visual background subtraction with one or more preprocessing modules

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
AADIREDDY PRABHAKAR ET AL: "Real-time well site log analysis application using MRI logs", SPE ANNUAL TECHNICAL CONFERENCE AND EXHIBITION, XX, XX, 1 October 2000 (2000-10-01), pages 877 - 892, XP009101959 *
BRADLEY K M ET AL: "Serial brain MRI at 3-6 month intervals as a surrogate marker for Alzheimer's disease.", THE BRITISH JOURNAL OF RADIOLOGY JUN 2002, vol. 75, no. 894, June 2002 (2002-06-01), pages 506 - 513, XP002504072, ISSN: 0007-1285 *
DOGDAS B ET AL: "Segmentation of skull and scalp in 3-D human MRI using mathematical morphology", HUMAN BRAIN MAPPING WILEY USA, vol. 26, no. 4, December 2005 (2005-12-01), pages 273 - 285, XP002504071, ISSN: 1065-9471 *
GINNEKEN VAN B ET AL: "COMPUTER-AIDED DIAGNOSIS IN CHEST RADIOGRAPHY: A SURVEY", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 12, 1 December 2001 (2001-12-01), pages 1228 - 1241, XP001101452, ISSN: 0278-0062 *
MARCHANT T E ET AL: "Shading correction algorithm for improvement of cone-beam CT images in radiotherapy.", PHYSICS IN MEDICINE AND BIOLOGY 21 OCT 2008, vol. 53, no. 20, 21 October 2008 (2008-10-21), pages 5719 - 5733, XP002504070, ISSN: 0031-9155 *
MORIN ET AL: "Megavoltage cone-beam CT: System description and clinical applications", MEDICAL DOSIMETRY, ELSEVIER, US, vol. 31, no. 1, 18 March 2006 (2006-03-18), pages 51 - 61, XP005864689, ISSN: 0958-3947 *
STUDHOLME C ET AL: "ACCURATE TEMPLATE-BASED CORRECTION OF BRAIN MRI INTENSITY DISTORTION WITH APPLICATION TO DEMENTIA AND AGING", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 23, no. 1, 1 January 2004 (2004-01-01), pages 99 - 110, XP001245969, ISSN: 0278-0062 *
URO VOVK ET AL: "A Review of Methods for Correction of Intensity Inhomogeneity in MRI", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 26, no. 3, 1 March 2007 (2007-03-01), pages 405 - 421, XP011171979, ISSN: 0278-0062 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192268A (en) * 2019-12-31 2020-05-22 广州华端科技有限公司 Medical image segmentation model construction method and CBCT image bone segmentation method
CN111192268B (en) * 2019-12-31 2024-03-22 广州开云影像科技有限公司 Medical image segmentation model construction method and CBCT image bone segmentation method

Also Published As

Publication number Publication date
GB0719076D0 (en) 2007-11-07
GB2453177A (en) 2009-04-01
GB2453177C (en) 2010-04-28
GB2453177B (en) 2010-03-24

Similar Documents

Publication Publication Date Title
Kida et al. Cone beam computed tomography image quality improvement using a deep convolutional neural network
Thummerer et al. Comparison of CBCT based synthetic CT methods suitable for proton dose calculations in adaptive proton therapy
Yang et al. 4D‐CT motion estimation using deformable image registration and 5D respiratory motion modeling
US8000435B2 (en) Method and system for error compensation
RU2556428C2 (en) Method for weakening of bone x-ray images
Marchant et al. Shading correction algorithm for improvement of cone-beam CT images in radiotherapy
Xu et al. An algorithm for efficient metal artifact reductions in permanent seed implants
WO2007148263A1 (en) Method and system for error compensation
US10282872B2 (en) Noise reduction in tomograms
JP2009536857A5 (en) Deformable registration of images for image-guided radiation therapy
JP2010246883A (en) Patient positioning system
JP2014509037A (en) Processing based on image data model
Koike et al. Deep learning-based metal artifact reduction using cycle-consistent adversarial network for intensity-modulated head and neck radiation therapy treatment planning
Wu et al. Iterative CT shading correction with no prior information
WO2012069965A1 (en) Interactive deformation map corrections
Chen et al. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study
US20080144904A1 (en) Apparatus and Method for the Processing of Sectional Images
Schnurr et al. Simulation-based deep artifact correction with convolutional neural networks for limited angle artifacts
Shao et al. Real-time liver tumor localization via a single x-ray projection using deep graph neural network-assisted biomechanical modeling
Zhang An unsupervised 2D–3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation
Wein et al. 2D/3D registration based on volume gradients
Li et al. Multienergy cone-beam computed tomography reconstruction with a spatial spectral nonlocal means algorithm
WO2009040497A1 (en) Image enhancement method
US20190005685A1 (en) Systems and Methods for Generating 2D Projection from Previously Generated 3D Dataset
Alam et al. Generalizable cone beam CT esophagus segmentation using physics-based data augmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08788451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08788451

Country of ref document: EP

Kind code of ref document: A1