WO2002067201A1 - Statistically reconstructing an x-ray computed tomography image with beam hardening corrector - Google Patents


Info

Publication number
WO2002067201A1
WO2002067201A1 (PCT/US2001/004894)
Authority
WO
WIPO (PCT)
Application number
PCT/US2001/004894
Other languages
French (fr)
Inventor
Idris A. Elbakri
Jeffrey A. Fessler
Original Assignee
The Regents Of The University Of Michigan
Application filed by The Regents Of The University Of Michigan filed Critical The Regents Of The University Of Michigan
Priority to PCT/US2001/004894
Publication of WO2002067201A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/006: Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06T2211/00: Image generation
    • G06T2211/40: Computed tomography
    • G06T2211/408: Dual energy
    • G06T2211/424: Iterative

Definitions

  • the present invention relates to statistical methods for reconstructing a polyenergetic X-ray computed tomography image and image reconstructor apparatus and, in particular, to methods and reconstructor apparatus which reconstruct such images from a single X-ray CT scan having a polyenergetic source spectrum.
  • X-ray computed or computerized tomography provides structural information about tissue anatomy. Its strength lies in the fact that it can provide "slice" images, taken through a three-dimensional volume with enhanced contrast and reduced structure noise relative to projection radiography.
  • Figure 1 illustrates a simple CT system.
  • An X-ray source is collimated and its rays are scanned through the plane of interest.
  • the intensity of the X-ray photons is diminished by tissue attenuation.
  • a detector measures the photon flux that emerges from the object. This procedure is repeated at sufficiently close angular samples over 180° or 360°.
  • Figures 2a-2c illustrate the evolution of CT geometries.
  • Figure 2a is a parallel-beam (single ray) arrangement, much like what was found in a first- generation CT scanner.
  • the major drawback of this arrangement is long scan time, since the source detector arrangement has to be translated and rotated.
  • the fan-beam geometry of Figure 2b reduces the scan time to a fraction of a second by eliminating the need for translation.
  • the cone-beam arrangement of Figure 2c further reduces scan time by providing three-dimensional information in one rotation. It is most efficient in its usage of the X-ray tube, but it suffers from high scatter (≥ 40%). It is also the most challenging in terms of reconstruction algorithm implementation.
  • the linear attenuation coefficient μ(x,y,z,E) characterizes the overall attenuation property of tissue. It depends on the spatial coordinates and the beam energy, and has units of inverse distance. For a ray of infinitesimal width, the mean photon flux detected along a particular projection line L is given by:
  • FBP Filtered back projection
  • the attenuation coefficient ⁇ is energy dependent and the X-ray beam is polyenergetic. Lower energy X-rays are preferentially attenuated.
  • Figure 7 shows the energy dependence of the attenuation coefficients of water (density 1.0 g/cm³) and bone (density 1.92 g/cm³).
  • a hard X-ray beam is one with higher average energy. Beam hardening is a process whereby the average energy of the X-ray beam increases as the beam propagates through a material. This increase in average energy is a direct consequence of the energy dependence of the attenuation coefficient.
  • the expected detected photon flux along path L is given by (1). If one were to ignore the energy dependence of the measurements and simply apply FBP to the log processed data, some attenuation map μ̂ would be reconstructed that is indirectly related to the source spectrum and object attenuation properties.
  • Figures 8 and 9 show the effect of beam hardening on the line integral in bone and water.
  • the line integral increases linearly with thickness.
  • the soft tissue line integral departs slightly from the linear behavior. The effect is more pronounced for high Z (atomic number) tissue such as bone.
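The departure of the line integral from linearity can be reproduced with a short numerical sketch. The two-bin spectrum and the water attenuation values below are illustrative assumptions, not the patent's calibrated spectrum; the point is only that a polyenergetic beam makes the negative log of the transmitted flux grow sub-linearly with thickness.

```python
import numpy as np

# Hypothetical two-bin spectrum: equal incident flux at 40 keV and 80 keV.
flux = np.array([0.5, 0.5])
# Assumed linear attenuation of water at those energies (1/cm, order-of-magnitude).
mu_water = np.array([0.27, 0.18])

def log_transmission(thickness_cm):
    """-log of (detected / incident) flux through water of the given thickness."""
    detected = np.sum(flux * np.exp(-mu_water * thickness_cm))
    return -np.log(detected)

# Effective attenuation near zero thickness sets the linear (monoenergetic) prediction.
slope_thin = log_transmission(0.1) / 0.1
# At 20 cm the polyenergetic line integral falls below the linear extrapolation,
# because the low-energy component has been preferentially removed (beam hardening).
print(log_transmission(20.0), slope_thin * 20.0)
```

The gap between the two printed values is the non-linearity that pre-processing, post-processing, and statistical corrections must account for.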
  • Dual-energy imaging has been described as the most theoretically elegant approach to eliminate beam hardening artifacts.
  • the approach is based on expressing the spectral dependence of the attenuation coefficient as a linear combination of two basis functions, scaled by constants independent of energy.
  • the two basis functions are intended to model the photo-electric effect and Compton scattering.
  • This technique provides complete energy dependence information for CT imaging.
  • An attenuation coefficient image can, in principle, be presented at any energy, free from beam hardening artifacts.
  • the method's major drawback is the requirement for two independent energy measurements. This has inhibited its use in clinical applications, despite the potential diagnostic benefit of energy information.
  • Recently, some work has been presented on the use of multi-energy X-ray CT for imaging small animals. For that particular application, the CT scanner was custom built with an energy-selective detector arrangement.
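The two-basis decomposition can be made concrete with a small worked example. The basis shapes (a 1/E³ photoelectric-like term and a flat Compton-like term) and the attenuation values are assumptions for illustration only; the mechanics, solving a 2-by-2 system from two energy measurements and then synthesizing the attenuation coefficient at any energy, are what dual-energy imaging relies on.

```python
import numpy as np

def basis(E_kev):
    """Illustrative energy basis: photoelectric-like (~1/E^3) and Compton-like (flat)."""
    return np.array([(40.0 / E_kev) ** 3, 1.0])

# Two measurements of mu at two energies determine the two coefficients.
E_lo, E_hi = 40.0, 80.0
M = np.stack([basis(E_lo), basis(E_hi)])     # 2x2 system matrix
mu_measured = np.array([0.27, 0.18])         # assumed mu of water at E_lo, E_hi (1/cm)
coeffs = np.linalg.solve(M, mu_measured)     # energy-independent scale factors

# With the coefficients known, mu can be synthesized at any energy,
# free from beam hardening artifacts.
print(coeffs, coeffs @ basis(60.0))
```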
  • Pre-processing works well when the object consists of homogeneous soft tissue. Artifacts caused by high Z materials such as bone mandate the use of post-processing techniques to produce acceptable images.
  • the attenuation coefficient of some material k is modeled as the product of the energy-dependent mass attenuation coefficient m_k(E) (cm²/g) and the energy-independent density ρ_k(x,y) (g/cm³) of the tissue.
  • μ(x,y,E) = Σ_k m_k(E) ρ_k(x,y) r_k(x,y)    (2)
  • μ(x,y,E) = m(E) ρ(x,y)    (3)    ≈ m_w(E) ρ_soft(x,y)    (4)
  • m_w(E) is the mass attenuation coefficient of water
  • ρ_soft is the effective soft tissue density.
  • T is the line integral of the density along path L.
  • the goal of the pre-processing method is to estimate T̂ and from that reconstruct (using FBP) an estimate ρ̂(x,y) of the energy-independent density ρ_soft.
  • This pre-processing approach is inaccurate when bone is present, but is often the first step in a post-processing bone correction algorithm.
  • Post-processing techniques first pre-process and reconstruct the data for soft tissue correction, as explained above.
  • the resulting effective density image is then segmented into bone and soft tissue. This classification enables one to estimate the contributions of soft tissue and bone to the line integrals. These estimates are used to correct for non-linear effects in the line integrals.
  • the final artifact-free image is produced using FBP and displays density values independent of energy according to the following relationship:
  • μ_0 is some constant, independent of energy, that maintains image contrast.
  • While post-processing accomplishes its goal of eliminating energy dependence, it suffers from quantitative inaccuracy.
  • the parameter μ_0 is somewhat heuristically estimated.
  • Another beam hardening correction of interest is known. This algorithm is iterative (but not statistical). At each pixel, it assumes that the attenuation coefficient is a linear combination of the known attenuation coefficients of two base materials, and it iteratively determines the volume fractions of the base materials.
  • the algorithm depends on an empirically-determined estimate of the effective X-ray spectrum of the scanner. The main limitations of this approach are that the spectrum estimate captures the imaging characteristics for a small FOV only and that prior knowledge of the base materials at each pixel is necessary.
  • Statistical methods are a subclass of iterative techniques, although the two terms are often used interchangeably in the literature.
  • the broader class of iterative reconstruction techniques includes non-statistical methods such as the Algebraic Reconstruction Technique (ART) which casts the problem as an algebraic system of equations.
  • Successive substitution methods such as Joseph and Spital's beam-hardening correction algorithm, are also iterative but not statistical. Hence, statistical methods are iterative, but the opposite is not necessarily true.
  • Dual-energy systems operate based on the principle that the attenuation coefficient can be expressed as a linear combination of two energy basis functions and are capable of providing density images independent of energy.
  • An object of the present invention is to provide a method for reconstructing a polyenergetic X-ray computed tomography image and an image reconstructor apparatus, both of which utilize a statistical algorithm which explicitly accounts for a polyenergetic source spectrum and resulting beam hardening effects.
  • Another object of the present invention is to provide a method for reconstructing a polyenergetic X-ray computed tomography image and an image reconstructor apparatus, both of which utilize a statistical algorithm which is portable to different scanner geometries.
  • a method for statistically reconstructing a polyenergetic X-ray computed tomography image to obtain a corrected image includes providing a computed tomography initial image produced by a single X-ray CT scan having a polyenergetic source spectrum.
  • the initial image has components of materials which cause beam hardening artifacts.
  • the method also includes separating the initial image into different sections to obtain a segmented image and calculating a series of intermediate corrected images based on the segmented image utilizing a statistical algorithm which accounts for the polyenergetic source spectrum and which converges to obtain a final corrected image which has significantly reduced beam hardening artifacts.
  • the step of calculating may include the steps of calculating a gradient of a cost function having an argument and utilizing the gradient to minimize the cost function with respect to its argument.
  • the step of calculating the gradient may include the step of back projecting.
  • the cost function preferably has a regularizing penalty term.
  • the step of calculating the gradient may include the step of calculating thicknesses of the components.
  • the step of calculating thicknesses may include the step of reprojecting the segmented image.
  • the step of calculating the gradient may include the step of calculating means of data along paths and gradients based on the thicknesses of the components.
  • the argument may be density of the materials at each image voxel.
  • the method may further include calibrating the spectrum of the X-ray source.
  • the method may further include displaying the final corrected image.
  • the step of calculating the gradient may include the step of utilizing ordered subsets to accelerate convergence of the algorithm.
  • an image reconstructor apparatus for statistically reconstructing a polyenergetic X-ray computed tomography image to obtain a corrected image is also provided.
  • the apparatus includes means for providing a computed tomography initial image produced by a single X-ray CT scan having a polyenergetic source spectrum wherein the initial image has components of materials which cause beam hardening artifacts.
  • the apparatus further includes means for separating the initial image into different sections to obtain a segmented image and means for calculating a series of intermediate corrected images based on the segmented image utilizing a statistical algorithm which accounts for the polyenergetic source spectrum and which converges to obtain a final corrected image which has significantly reduced beam hardening artifacts.
  • the means for calculating may include means for calculating a gradient of a cost function having an argument and means for utilizing the gradient to minimize the cost function with respect to its argument.
  • the cost function preferably has a regularizing penalty term.
  • the means for calculating the gradient may include means for back projecting.
  • the means for calculating the gradient may include means for calculating thicknesses of the components.
  • the means for calculating thicknesses may include means for reprojecting the segmented image.
  • the means for calculating the gradient may include means for calculating means of data along paths and gradients based on the thicknesses of the components.
  • the argument may be density of the materials at each image voxel.
  • the means for calculating the gradient may include means for utilizing ordered subsets to accelerate convergence of the algorithm.
  • FIGURE 1 is a schematic view of a basic CT system
  • FIGURES 2a-2c are schematic views of various CT geometries wherein Figure 2a shows a parallel-beam (single ray) arrangement, Figure 2b shows a fan-beam geometry and Figure 2c shows a cone-beam arrangement;
  • FIGURE 3 is a schematic view which illustrates system matrix computation for the fan-beam geometry
  • FIGURE 4 shows graphs which illustrate convex penalty functions
  • FIGURE 5 shows graphs which illustrate the optimization transfer principle
  • FIGURE 6 shows graphs which are quadratic approximations to the Poisson log likelihood
  • FIGURE 7 shows graphs which illustrate attenuation coefficient energy dependence
  • FIGURE 8 shows graphs which illustrate beam hardening induced deviation of line integral from linearity in water
  • FIGURE 9 shows graphs which illustrate beam hardening induced deviation of line integral from linearity in bone.
  • FIGURE 10 is a block diagram flow chart of the method of the present invention.
  • the method and system of the present invention utilize a statistical approach to CT reconstruction and, in particular, iterative algorithms for transmission X-ray CT.
  • while the method and system of the invention are described herein for a single-slice fan-beam geometry reconstruction, they may also be used with cone-beam geometries and helical scanning. The method and system may also be used with flat-panel detectors.
  • the image in object space (attenuation coefficient) is parameterized using square pixels.
  • the goal of the algorithm becomes to estimate the value of the discretized attenuation coefficient at those pixels.
  • Let μ = [μ_1, ..., μ_p]′ be the vector of unknown attenuation coefficients having units of inverse length.
  • the measurements in a photon-limited counting process are reasonably modeled as independently distributed Poisson random variables.
  • the mean number of detected photons is related exponentially to the projections (line integrals) of the attenuation map.
  • the measurements are also contaminated by extra background counts, caused primarily by scatter in X-ray CT.
  • the following model is assumed for measurements:
  • N the number of measurements (or, equivalently, the number of detector bins).
  • Figure 3 illustrates one method for computing the elements of A in the fan-beam case.
  • a_ij is the normalized area of overlap between the ray beam and the pixel.
  • the term r_i is the mean number of background events, b_i is the blank scan factor and Y_i represents the photon flux measured by the i-th detector.
  • the Y_i's are assumed independent, and b_i, r_i and {a_ij} are known non-negative constants; μ is also assumed to be independent of energy.
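A minimal numerical sketch of this measurement model follows. The sizes, system-matrix entries and count levels are arbitrary assumptions; the structure, mean counts b_i exp(-[Aμ]_i) + r_i with independent Poisson draws, matches the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 4 pixels, 6 rays; entries of A stand in for areas of overlap a_ij.
A = rng.uniform(0.0, 1.0, size=(6, 4))
mu = np.array([0.2, 0.5, 0.1, 0.3])      # attenuation map (1/cm), assumed values
b = np.full(6, 1e4)                      # blank-scan factors b_i
r = np.full(6, 10.0)                     # mean background (scatter) counts r_i

# Mean measurement: E[Y_i] = b_i * exp(-[A mu]_i) + r_i
ybar = b * np.exp(-(A @ mu)) + r
Y = rng.poisson(ybar)                    # independent Poisson measurements
print(Y)
```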
  • ML Maximum Likelihood
  • Regularization penalized-likelihood
  • the penalty function improves the conditioning of the problem.
  • penalty functions can be chosen with certain desirable properties, such as edge preservation.
  • a general form for the regularizing penalty is the following:
  • the ψ's are potential functions acting on the soft constraints Cμ ≈ 0, and K is the number of such constraints.
  • the potential functions are symmetric, convex, non-negative and differentiable.
  • Non-convex penalties can be useful for preserving edges, but are more difficult to analyze.
  • One can think of the penalty as imposing a degree of smoothness or as a Bayesian prior. Both views are practically equivalent.
  • penalty functions of this form penalize differences in the neighborhood of any particular pixel. They can be expressed as:
  • the first term in (14) forces the estimator to match the measured data.
  • the second term imposes some degree of smoothness leading to visually appealing images.
  • the scalar parameter β controls the tradeoff between the two terms (or, alternatively, between resolution and noise).
  • the goal of the reconstruction technique becomes to maximize (14) subject to certain object constraints such as non-negativity:
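The penalized-likelihood objective can be sketched directly from the pieces above. Everything numeric here (the tiny system matrix, the noiseless data, the quadratic 1-D neighbor penalty) is an assumption for illustration; the structure, negative Poisson log likelihood plus a β-weighted roughness penalty evaluated over non-negative μ, is the one described.

```python
import numpy as np

def neg_log_likelihood(mu, A, Y, b, r):
    """Poisson negative log likelihood of the transmission data (constants dropped)."""
    ybar = b * np.exp(-(A @ mu)) + r
    return np.sum(ybar - Y * np.log(ybar))

def roughness_penalty(mu):
    """Quadratic potential on differences of neighboring pixels (1-D neighbors)."""
    return 0.5 * np.sum(np.diff(mu) ** 2)

def objective(mu, A, Y, b, r, beta):
    """Penalized-likelihood cost: data-fit term plus beta-weighted smoothness."""
    return neg_log_likelihood(mu, A, Y, b, r) + beta * roughness_penalty(mu)

# Toy noiseless data: the true (smooth) map should score better than a rough guess.
A = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.8]])
b = np.full(3, 1e4)
r = np.zeros(3)
mu_true = np.array([0.4, 0.4])
Y = b * np.exp(-(A @ mu_true))
print(objective(mu_true, A, Y, b, r, beta=10.0))
```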
  • OSTR Ordered Subsets Transmission Reconstruction
  • OS-PWLS Ordered Subsets Penalized Weighted Least Squares
  • the optimization transfer principle is a very useful and intuitive principle that underlies many iterative methods, including the ones described herein.
  • De Pierro introduced it in the context of inverse problems (emission tomography, to be specific).
  • the process is repeated iteratively, using a new surrogate function at each iteration. If the surrogate is chosen appropriately, then the maximizer of Φ(μ) can be found. Sufficient conditions that ensure that the surrogate leads to a monotonic algorithm are known.
  • Paraboloidal surrogates are used because they are analytically simple, and can be easily maximized. One can also take advantage of the convexity of these surrogates to parallelize the algorithm.
  • Separable Paraboloidal Surrogates
  • the parameter β controls the tradeoff between the data-fit and penalty terms, and R(μ) imposes a degree of smoothness on the solution.
  • curvature c must be such that the surrogate satisfies the monotonicity conditions (19).
  • This formulation decouples the pixels.
  • Each pixel effectively has its own cost function q_j.
  • the q_j's can be minimized for all pixels simultaneously, resulting in a parallelizable algorithm.
  • the pre-computed curvature may violate the conditions of monotonicity. It does, however, give an almost-monotonic algorithm, where the surrogate becomes a quadratic approximation of the log likelihood.
  • the pre- computed curvature seems to work well in practice, and the computational savings seem well worth the sacrifice.
  • Major computational savings come from the use of ordered subsets, discussed hereinbelow.
  • Ordered subsets are useful when an algorithm involves a summation over sinogram indices (i.e. , a back projection).
  • the basic idea is to break the set of sinogram angles into subsets, each of which subsamples the sinogram in the angular domain.
  • the back projection process over the complete sinogram is replaced with successive back projections over the subsets of the sinogram.
  • One iteration is complete after going through all of the subsets.
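The subset construction itself is simple enough to show directly; the angular interleaving below is one common choice, assumed here rather than taken from the patent.

```python
def angular_subsets(n_angles, n_subsets):
    """Split projection-angle indices into interleaved subsets, each of which
    subsamples the sinogram in the angular domain."""
    return [list(range(start, n_angles, n_subsets)) for start in range(n_subsets)]

# A back projection over the full sinogram becomes successive back projections
# over the subsets; one pass through every subset completes one iteration.
subsets = angular_subsets(12, 4)
print(subsets)  # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
```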
  • Ordered subsets have been applied to emission tomography with a good degree of success. Improvements in convergence rate by a factor approximately equal to the number of subsets have been reported. Ordered subsets have also been used with transmission data for attenuation map estimation in SPECT. Ordered subsets were applied to the convex algorithm, and an increase in noise level with the number of subsets has been reported. Ordered subsets have been used with the transmission EM algorithm and a cone-beam geometry. The OSTR algorithm was originally developed for attenuation correction in PET scans with considerable success.
  • OSTR combines the accuracy of statistical reconstruction with the accelerated rate of convergence that one gets from ordered subsets.
  • the separability of the surrogates makes the algorithm easily parallelizable.
  • the algorithm also naturally enforces the non-negativity constraint. The monotonicity property has been sacrificed, but that seems to hardly make a difference in practice if a reasonable starting image is used.
  • OSTR uses Poisson statistics to model the detection process.
  • the Gaussian model is a reasonable approximation to the Poisson distribution.
  • the Gaussian model leads to a simpler quadratic objective function and weighted-least-squares minimization. With high counts, PWLS leads to negligible bias and the simpler objective function reduces computation time.
  • Figure 6 illustrates how the quadratic approximation to the likelihood improves with count number.
  • the algorithm is reformulated by deriving a quadratic approximation to the Poisson likelihood, which leads to a simpler objective function.
  • the regularization term and the use of ordered subsets are retained.
  • This variation of the method of the invention is called Ordered Subsets Penalized Weighted Least Squares (OS-PWLS).
  • a Taylor expansion is applied to h_i(l) around some value, and only first- and second-order terms are retained.
  • the first term in (37) is independent of l, and can be dropped.
  • the subscript q indicates that this objective function is based on a quadratic approximation to the log likelihood. Subsequently, the subscript is dropped. The penalty term is also added. Minimizing this objective function over μ ≥ 0 will lead to an estimator with negligible bias, since the number of counts is large.
  • the numerator (first derivative) in (41) involves no exponential terms and the denominator (second derivative) in (42) can be pre-computed and stored.
  • the sum over sinogram indices can also be broken into sums over ordered subsets, further accelerating the algorithm.
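One pass of an OS-PWLS-style update can be sketched as follows. This is a simplified rendering under stated assumptions: no penalty term, unit weights in the test problem, a separable-surrogate denominator pre-computed once, and the subset gradient scaled by the number of subsets; it is not the patent's exact update.

```python
import numpy as np

def os_pwls_pass(rho, A, lhat, w, subsets):
    """One ordered-subsets pass of a penalty-free weighted-least-squares update.

    The denominator d (second-derivative terms) is pre-computed and stored;
    each subset contributes a gradient scaled by the number of subsets."""
    n_subsets = len(subsets)
    d = A.T @ (w * A.sum(axis=1))                      # pre-computed curvatures
    for sub in subsets:
        As, ws, ls = A[sub], w[sub], lhat[sub]
        grad = n_subsets * (As.T @ (ws * (As @ rho - ls)))
        rho = np.maximum(rho - grad / d, 0.0)          # update + non-negativity
    return rho

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(8, 4))
rho_true = np.array([0.2, 0.5, 0.1, 0.3])
lhat = A @ rho_true                                    # noiseless line integrals
w = np.ones(8)
rho = np.zeros(4)
for _ in range(100):                                   # one subset: monotone SPS
    rho = os_pwls_pass(rho, A, lhat, w, [list(range(8))])
print(rho)                                             # residual shrinks toward zero
```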
  • CT transmission model is generalized hereinbelow to account for the broad spectrum of the beam. From the model, a cost function is derived and an iterative algorithm is developed for finding the minimizer of the cost function.
  • a model for X-ray CT is described now that incorporates the energy dependence of the attenuation coefficient.
  • a prior art algorithm could be applied to an image reconstructed with OS-PWLS. Instead, beam hardening correction is developed as an integral element of the statistical algorithm.
  • An iterative algorithm that generalizes OS- PWLS naturally emerges from the model.
  • R_k = {set of pixels classified as tissue type k},
  • and there are K vector quantities of length p, each representing the density of one kind of tissue.
  • the non-overlapping assumption of the tissue types enables one to keep the number of unknowns equal to p, as is the case in the monoenergetic model. This is possible when prior segmentation of the object is available. This can be obtained from a good FBP image, for example.
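A threshold segmentation of the sort that could supply this prior classification is sketched below; the threshold value and the toy densities are assumptions.

```python
import numpy as np

def segment_tissues(density_img, bone_threshold=1.5):
    """Label each pixel soft tissue (0) or bone (1) by thresholding density.

    Non-overlapping classes keep the number of unknowns at one density per
    pixel, as in the monoenergetic model. The threshold is illustrative."""
    return (density_img >= bone_threshold).astype(int)

# A good FBP density image (g/cm^3) can supply the prior segmentation.
fbp_density = np.array([[1.00, 1.05],
                        [1.92, 0.00]])
print(segment_tissues(fbp_density))  # bone flagged only where density >= 1.5
```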
  • the Poisson log likelihood is set up in terms of the density ρ and the vector v_i.
  • to derive a quadratic cost function, one follows a similar procedure to that described hereinabove, using the second-order Taylor expansion.
  • the function Ȳ_i represents the expected value of the measurement.
  • the gradient ∇h is a row vector and the Laplacian operator ∇² gives a K × K matrix of partial derivatives.
  • Y_i is assumed to be close enough to Ȳ_i(v_i) for one to drop the first term on the right of (57). This also ensures that the resulting Hessian approximation is non-negative definite.
  • ∇_k denotes the k-th element of the gradient vector.
  • A = {a_ij} is the geometrical system matrix.
  • the matrix B = {b_ij} is a weighted system matrix, with the weights expressed as the non-zero elements of a diagonal matrix D(·), to the left of A.
  • the term Z combines constants independent of ρ.
  • This algorithm globally converges to the minimizer of the cost function Φ_q(ρ) when one subset is used, provided the penalty is chosen so that Φ_q is strictly convex. When two or more subsets are used, it is expected to be monotone in the initial iterations.
  • the spectrum of the incident X-ray beam is calibrated.
  • the correction image is then subtracted and non-negativity is enforced.
  • the number of iterations is checked against a predetermined number or other criteria and if the iterative part of the method is complete, then the final corrected image is displayed. If not done, the iterative part of the method is re- entered at the reprojection step. At least some of the results obtained after subtraction may be used in the segmentation step as described herein as indicated by the dashed line from the "DONE" block.
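The loop just described (segment, reproject component thicknesses, back project a correction, subtract it, enforce non-negativity, stop after a predetermined number of iterations) can be sketched end-to-end. The linear mean model and all numeric values below are stand-in assumptions; in the actual method the means come from the polyenergetic model.

```python
import numpy as np

def run_correction(rho0, A, lhat, n_iters=5, bone_threshold=1.5):
    """Toy rendering of the iteration loop described above."""
    rho = rho0.copy()
    d = A.T @ A.sum(axis=1)                        # pre-computed surrogate denominator
    for _ in range(n_iters):                       # predetermined iteration count
        is_bone = rho >= bone_threshold            # segmentation step
        t_soft = A @ np.where(is_bone, 0.0, rho)   # reprojection: soft-tissue thickness
        t_bone = A @ np.where(is_bone, rho, 0.0)   # reprojection: bone thickness
        means = t_soft + t_bone                    # means of data along paths (stand-in)
        correction = (A.T @ (means - lhat)) / d    # back projection of the residual
        rho = np.maximum(rho - correction, 0.0)    # subtract, enforce non-negativity
    return rho

A = np.array([[1.0, 0.2], [0.3, 1.0], [0.7, 0.7]])
rho_true = np.array([1.0, 1.92])                   # soft tissue and bone densities
rho = run_correction(np.zeros(2), A, A @ rho_true, n_iters=50)
print(rho)
```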
  • beam hardening correction in the method of the invention depends on the availability of accurate classification of the different substances in the object.
  • the bone/tissue distribution was known exactly.
  • such a classification would be available from segmenting an initial image reconstructed with FBP. Using this segmentation map for all iterations may adversely affect the accuracy of the reconstruction.
  • joint likelihoods and penalties are used to estimate both pixel density values and tissue classes.
  • the tissue classes are treated as random variables with a Markov random field model and are estimated jointly with the attenuation map.
  • the joint likelihood will be a function of both the pixel density value and the pixel class.
  • the joint penalties involve two parameters that balance the tradeoff between data fit and smoothness.
  • joint penalties must account for the fact that pixels tend to have similar attenuation map values if the underlying tissue classes are the same, and vice versa. Such penalties would encourage smoothness in the same region but allow discontinuities between regions of different tissues.
  • Scatter is a major problem in cone-beam CT, where it can range from 40% up to 200% of the unscattered data. Collimation reduces scatter, but collimating flat-panel detectors is challenging. Among several factors that affect the performance of a cone-beam, flat-panel detector computed tomography system, scatter was shown to degrade the detector quantum efficiency (DQE) and to influence the optimal magnification factor. Larger air gaps were needed to cope with high scatter, especially if imaging a large FOV.
  • DQE detector quantum efficiency
  • Scatter can either be physically removed (or reduced) before detection or can be numerically estimated and its effect compensated for.
  • Ways to physically remove scatter include air gaps and grids, but are not very practical once flat-panel detectors are used, due to the small size of detector pixels.
  • idle detectors can be used to provide a scatter estimate.
  • Such measurements can be combined with analytic models for scatter that depend, among other factors, on the energy of the radiation used, the volume of scattering material and system geometry.
  • a scatter estimate may be incorporated in the model of the CT problem as well as in the reconstruction algorithm using the r_i terms above.
  • One approach to scatter estimation and correction is a numerical one, i.e. , ways to physically eliminate scatter will not be considered. This makes the approach portable to different systems, and less costly.
  • the statistical measurement model has the potential to be extended to take into account the time dimension when imaging the heart, and thus free the designer from synchronization constraints. Moreover, statistical reconstruction eliminates the need for rebinning and interpolation. This may lead to higher helical scanning pitch and cardiac imaging with good temporal and axial resolutions.

Abstract

A method for statistically reconstructing an X-ray computed tomography image produced by a single X-ray CT scan having a polyenergetic source spectrum, and an image reconstructor which utilizes a convergent statistical algorithm that explicitly accounts for the polyenergetic source spectrum, are provided. First and second related statistical iterative methods for CT reconstruction based on a Poisson statistical model are described. Both methods are accelerated by the use of ordered subsets, which replace sums over the angular index of a sinogram with a series of sums over angular subsets of the sinogram. The first method is generalized to model the more realistic case of polyenergetic computed tomography (CT). The second method eliminates beam hardening artifacts seen when filtered back projection (FBP) is used without post-processing correction. The methods are superior to FBP reconstruction in terms of noise reduction. The method and image reconstructor of the invention are effective in producing corrected images that do not suffer from beam hardening effects.

Description

STATISTICALLY RECONSTRUCTING AN X-RAY COMPUTED TOMOGRAPHY IMAGE WITH BEAM HARDENING CORRECTION
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to statistical methods for reconstructing a polyenergetic X-ray computed tomography image and image reconstructor apparatus and, in particular, to methods and reconstructor apparatus which reconstruct such images from a single X-ray CT scan having a polyenergetic source spectrum.
2. Background Art
X-ray computed or computerized tomography (i.e., CT) provides structural information about tissue anatomy. Its strength lies in the fact that it can provide "slice" images, taken through a three-dimensional volume with enhanced contrast and reduced structure noise relative to projection radiography.
Figure 1 illustrates a simple CT system. An X-ray source is collimated and its rays are scanned through the plane of interest. The intensity of the X-ray photons is diminished by tissue attenuation. A detector measures the photon flux that emerges from the object. This procedure is repeated at sufficiently
close angular samples over 180° or 360°. The data from different projections are organized with the projection angles on one axis and the projection bins (radial distance) on the other. This array is referred to as the sinogram, because the sinogram of a single point traces a sinusoidal wave. Reconstruction techniques have the goal of estimating the attenuation map of the object that gave rise to the
measured sinogram. Figures 2a-2c illustrate the evolution of CT geometries. Figure 2a is a parallel-beam (single ray) arrangement, much like what was found in a first- generation CT scanner. The major drawback of this arrangement is long scan time, since the source detector arrangement has to be translated and rotated.
The fan-beam geometry of Figure 2b reduces the scan time to a fraction of a second by eliminating the need for translation. By translating the patient table as the source detector arrangement rotates, one gets an effective helical path around the object leading to increased exposure volume and three-dimensional imaging.
The latest CT geometry is the cone-beam arrangement, shown in
Figure 2c. It further reduces scan time by providing three-dimensional information in one rotation. It is most efficient in its usage of the X-ray tube, but it suffers from high scatter (≥ 40%). It is also the most challenging in terms of reconstruction algorithm implementation.
Two dominant effects, both a function of the X-ray source spectrum, govern tissue attenuation. At the lower energies of interest in the diagnostic region, the photoelectric effect dominates. At higher energies, Compton scattering is the most significant source of tissue attenuation.
The linear attenuation coefficient μ(x,y,z,E) characterizes the overall attenuation property of tissue. It depends on the spatial coordinates and the beam energy, and has units of inverse distance. For a ray of infinitesimal width, the mean photon flux detected along a particular projection line L_i is given by:

E[Y_i] = ∫ I_i(E) exp( -∫_{L_i} μ(x,y,z,E) dl ) dE    (1)

where the inner integral is taken over the line L_i, and I_i(E) incorporates the energy dependence of the incident ray and detector sensitivity. The goal of any CT algorithm is to reconstruct the attenuation map μ from the measured data {Y_1, ..., Y_N}, where N is the number of rays measured.

Filtered Back Projection
Filtered back projection (FBP) is the standard reconstruction technique for X-ray CT. It is an analytic technique based on the Fourier slice theorem.
Use of the FFT in the filtering step of FBP renders the algorithm quite fast. Moreover, its properties are well understood. However, because it ignores the noise statistics of the data, it results in biased estimates. It also suffers from streak artifacts when imaging objects with metallic implants or other high-density structures.
Polyenergetic X-ray CT
In reality, the attenuation coefficient μ is energy dependent and the X-ray beam is polyenergetic. Lower energy X-rays are preferentially attenuated. Figure 7 shows the energy dependence of the attenuation coefficients of water (at density 1.0 g/cm3) and bone (at density 1.92 g/cm3). A "hard" X-ray beam is one with higher average energy. Beam hardening is a process whereby the average energy of the X-ray beam increases as the beam propagates through a material. This increase in average energy is a direct consequence of the energy dependence of the attenuation coefficient.
With a polyenergetic source, the expected detected photon flux along path L_i is given by (1). If one were to ignore the energy dependence of the measurements and simply apply FBP to the log processed data, some attenuation map μ would be reconstructed that is indirectly related to the source spectrum and object attenuation properties.
Beam hardening leads to several disturbing artifacts in image reconstruction. Figures 8 and 9 show the effect of beam hardening on the line integral in bone and water. In the monoenergetic case, the line integral increases linearly with thickness. With a polyenergetic beam, the soft tissue line integral departs slightly from the linear behavior. The effect is more pronounced for high Z (atomic number) tissue such as bone.
This non-linear behavior generally leads to a reduction in the attenuation coefficient. In bone, beam hardening can cause reductions of up to 10%. Thick bones also generate dark streaks. In soft tissue, the values are depressed in a non-uniform manner, leading to what has been termed the "cupping" effect. In addition, bone areas can "spill over" into soft tissue, leading to a perceived increase in the attenuation coefficient.
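The departure from linearity described above can be illustrated numerically. The following sketch uses a made-up two-bin spectrum and illustrative attenuation values (not tabulated data) to show how the polyenergetic line integral falls below the linear monoenergetic reference as thickness grows:

```python
import numpy as np

# Illustrative two-bin spectrum: a soft (low-energy) and hard (high-energy) component.
# The intensities and attenuation values are assumptions for demonstration only.
I = np.array([0.5, 0.5])              # relative intensities, summing to 1
mu_E = np.array([0.5, 0.2])           # attenuation coefficient (1/cm) at each energy

thickness = np.linspace(0.0, 10.0, 11)    # cm of material traversed
# Polyenergetic line integral: -log of the spectrum-weighted transmitted flux.
poly = -np.log(np.sum(I * np.exp(-np.outer(thickness, mu_E)), axis=1))
# Monoenergetic reference at the mean attenuation coefficient grows linearly.
mono = thickness * np.dot(I, mu_E)

# The polyenergetic curve lies below the linear reference: the beam "hardens"
# and the effective attenuation coefficient decreases with depth.
assert np.all(poly <= mono + 1e-12)
```

The gap between the two curves is exactly the beam-hardening deviation plotted against thickness in Figures 8 and 9.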
Beam Hardening Correction Methods
Because of SNR considerations, monoenergetic X-ray scanning is not practical. Beam hardening correction methods are therefore necessary for reconstructing artifact-free attenuation coefficient images. An ideal reconstruction method would be quantitatively accurate and portable to different scanning geometries. It would somehow reconstruct μ(x,y,E), retaining the energy dependence of the attenuation process. This is difficult, if not impossible, to achieve with a single source spectrum. A more realistic goal is to remove or reduce the beam hardening artifacts by compensating for the energy dependence in the data.
There are a wide variety of schemes for beam hardening artifact reduction. Existing methods fall into three categories: dual-energy imaging, preprocessing of projection data and post-processing of the reconstructed image.
Dual-energy imaging has been described as the most theoretically elegant approach to eliminate beam hardening artifacts. The approach is based on expressing the spectral dependence of the attenuation coefficient as a linear combination of two basis functions, scaled by constants independent of energy. The two basis functions are intended to model the photo-electric effect and Compton scattering. This technique provides complete energy dependence information for CT imaging. An attenuation coefficient image can, in principle, be presented at any energy, free from beam hardening artifacts. The method's major drawback is the requirement for two independent energy measurements. This has inhibited its use in clinical applications, despite the potential diagnostic benefit of energy information. Recently, some work has been presented on the use of multi-energy X-ray CT for imaging small animals. For that particular application, the CT scanner was custom built with an energy-selective detector arrangement.
Commercial beam hardening correction methods usually involve both pre-processing and post-processing, and are often implemented with a parallel or fan-beam geometry in mind. They also make the assumption that the object consists of soft tissue (water-like) and bone (high Z). Recently, these methods were generalized to three base materials and cone-beam geometry.
Pre-processing works well when the object consists of homogeneous soft tissue. Artifacts caused by high Z materials such as bone mandate the use of post-processing techniques to produce acceptable images.
The attenuation coefficient of some material k is modeled as the product of the energy-dependent mass attenuation coefficient m_k(E) (cm2/g) and the energy-independent density ρ_k(x,y) (g/cm3) of the tissue. Expressed mathematically,

μ(x,y,E) = Σ_{k=1}^{K} m_k(E) ρ_k(x,y) r_k(x,y)    (2)

where K is the number of tissue types in the object and

r_k(x,y) = 1 if (x,y) ∈ tissue k, and r_k(x,y) = 0 otherwise.
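A minimal numerical sketch of the tissue-composition model (2), using a toy label map and hypothetical mass attenuation values (the numbers are illustrative assumptions, not tabulated coefficients):

```python
import numpy as np

# Toy 4x4 phantom: label 0 = air, 1 = soft tissue, 2 = bone.
labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 1
labels[2, 2] = 2

# Densities (g/cm^3) from the text: soft tissue 1.0, bone 1.92.
density = np.where(labels == 2, 1.92, np.where(labels == 1, 1.0, 0.0))

# Hypothetical mass attenuation coefficients m_k(E) at a single energy (cm^2/g).
m_at_E = {1: 0.2, 2: 0.4}

# Equation (2): mu(x,y,E) = sum_k m_k(E) * rho_k(x,y) * r_k(x,y),
# where (labels == k) plays the role of the indicator r_k.
mu = np.zeros_like(density)
for k, m_k in m_at_E.items():
    mu += m_k * density * (labels == k)
```

Each pixel contributes through exactly one tissue indicator, so the sum over k simply selects that pixel's material.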
For the classical pre-processing approach, the object is assumed to consist of a single tissue type (K= 1) and to have energy dependence similar to that of water, i.e. ,
μ(x,y,E) = m_1(E) ρ(x,y)    (3)
         ≈ m_w(E) ρ_soft(x,y)    (4)

where m_w(E) is the mass attenuation coefficient of water and ρ_soft is the effective soft tissue density. One can rewrite (1) as follows:

E[Y_i] = ∫ I_i(E) exp( -∫_{L_i} μ(x,y,E) dl ) dE
       = ∫ I_i(E) exp( -m_w(E) ∫_{L_i} ρ_soft(x,y) dl ) dE    (5)
       = ∫ I_i(E) exp( -m_w(E) T_i ) dE    (6)
       = F_i(T_i)    (7)

where T_i is the line integral of the density along path L_i. Each function F_i(T), i = 1, ..., N, is one-to-one and monotone decreasing, and hence invertible. The goal of the pre-processing method is to estimate {T_i} and from that reconstruct (using FBP) an estimate ρ̂(x,y) of the energy-independent density ρ_soft. In other words, ρ̂(x,y) = FBP({T̂_i}), where T̂_i = F_i^{-1}(Y_i). This pre-processing approach is inaccurate when bone is present, but is often the first step in a post-processing bone correction algorithm.
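The water pre-correction described above hinges on inverting the monotone function F_i. A minimal sketch, assuming a discretized spectrum with made-up energies, intensities, and water mass attenuation values (all illustrative, not calibration data):

```python
import numpy as np

# Hypothetical discretized spectrum: energies (keV), relative intensities I(E),
# and the mass attenuation coefficient of water m_w(E) at those energies.
intensity = np.array([0.2, 0.4, 0.3, 0.1])      # I(E), arbitrary units
m_water = np.array([0.27, 0.21, 0.18, 0.17])    # cm^2/g (approximate values)

def F(T):
    """Polyenergetic model (6): F(T) = sum_E I(E) * exp(-m_w(E) * T)."""
    return np.sum(intensity * np.exp(-np.outer(np.atleast_1d(T), m_water)), axis=1)

# Tabulate (T, F(T)); F is monotone decreasing, so it can be inverted by
# interpolation to recover the water-equivalent line integral T from Y.
T_grid = np.linspace(0.0, 50.0, 1001)           # g/cm^2 of water-equivalent path
F_grid = F(T_grid)

def F_inverse(y):
    """Invert F by interpolation (arrays flipped so np.interp sees increasing x)."""
    return np.interp(y, F_grid[::-1], T_grid[::-1])

T_true = 12.5
assert abs(F_inverse(F(np.array([T_true]))[0]) - T_true) < 0.1
```

In practice the table would be built per detector bin from a calibration of the scanner spectrum; here a single shared spectrum stands in for all rays.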
Post-processing techniques first pre-process and reconstruct the data for soft tissue correction, as explained above. The resulting effective density image is then segmented into bone and soft tissue. This classification enables one to estimate the contributions of soft tissue and bone to the line integrals. These estimates are used to correct for non-linear effects in the line integrals. The final artifact-free image is produced using FBP and displays density values independent of energy according to the following relationship:
ρ̂(x,y) = ρ̂_soft(x,y) + λ_0 ρ̂_bone(x,y)    (8)
where λ_0 is some constant, independent of energy, that maintains image contrast. Although post-processing accomplishes its goal of eliminating energy dependence, it suffers from quantitative inaccuracy. As explained elsewhere, the parameter λ_0 is somewhat heuristically estimated. In addition, applying post-processing to 3-D or cone-beam geometries can be computationally expensive. Another beam hardening correction of interest is known. This algorithm is iterative (but not statistical). At each pixel, it assumes that the attenuation coefficient is a linear combination of the known attenuation coefficients of two base materials, and it iteratively determines the volume fractions of the base materials. The algorithm depends on an empirically-determined estimate of the effective X-ray spectrum of the scanner. The main limitations of this approach are that the spectrum estimate captures the imaging characteristics for a small FOV only, and that prior knowledge of the base materials at each pixel is necessary.
Statistical Reconstruction for CT
Many of the inherent shortcomings of FBP are adequately (and naturally) compensated for by statistical methods.
Statistical methods are a subclass of iterative techniques, although the two terms are often used interchangeably in the literature. The broader class of iterative reconstruction techniques includes non-statistical methods such as the Algebraic Reconstruction Technique (ART) which casts the problem as an algebraic system of equations. Successive substitution methods, such as Joseph and Spital's beam-hardening correction algorithm, are also iterative but not statistical. Hence, statistical methods are iterative, but the opposite is not necessarily true.
Statistical techniques have several attractive features. They account for the statistics of the data in the reconstruction process, and therefore lead to more accurate estimates with lower bias and variance. This is especially important in the low-SNR case, where deterministic methods suffer from severe bias. Moreover, statistical methods can be well suited for arbitrary geometries and situations with truncated data. They easily incorporate the system geometry, detector response, object constraints and any prior knowledge. Their main drawback (when compared to FBP) is their longer computational times.
Statistical reconstruction for monoenergetic CT was shown to outperform FBP in metal artifact reduction, in limited-angle tomography and to have lower bias-noise curves.
All single-scan statistical X-ray reconstruction algorithms assume (either implicitly or explicitly) monoenergetic X-ray beams, and thus do not deal with the issue of beam hardening artifacts. Lange et al. speculated in 1987 about the future potential of statistical methods to correct for beam hardening, but no methods have been proposed for realizing this potential. One exception is the area of dual-energy CT. Dual-energy systems operate based on the principle that the attenuation coefficient can be expressed as a linear combination of two energy basis functions, and they are capable of providing density images independent of energy.
For X-ray CT images, with typical sizes of 512 x 512 pixels or larger, statistical methods require very long computational times. This has hindered their widespread use.
The article by Yan et al. in the Jan. 2000 IEEE TRANS. MED. IM. uses a polyenergetic source spectrum, but the method is not statistical and has no regularization. There is no mathematical evidence that their algorithm will converge.
In general, all previous algorithms for regularized statistical image reconstruction of X-ray CT images from a single X-ray CT scan have been either:
1) based explicitly on a monoenergetic source assumption, or
2) based implicitly on such an assumption in that the polyenergetic spectrum and resulting beam hardening effects were disregarded.

SUMMARY OF THE INVENTION
An object of the present invention is to provide a method for reconstructing a polyenergetic X-ray computed tomography image and an image reconstructor apparatus, both of which utilize a statistical algorithm which explicitly accounts for a polyenergetic source spectrum and resulting beam hardening effects.
Another object of the present invention is to provide a method for reconstructing a polyenergetic X-ray computed tomography image and an image reconstructor apparatus, both of which utilize a statistical algorithm which is portable to different scanner geometries.
In carrying out the above objects and other objects of the present invention, a method for statistically reconstructing a polyenergetic X-ray computed tomography image to obtain a corrected image is provided. The method includes providing a computed tomography initial image produced by a single X-ray CT scan having a polyenergetic source spectrum. The initial image has components of materials which cause beam hardening artifacts. The method also includes separating the initial image into different sections to obtain a segmented image and calculating a series of intermediate corrected images based on the segmented image utilizing a statistical algorithm which accounts for the polyenergetic source spectrum and which converges to obtain a final corrected image which has significantly reduced beam hardening artifacts.
The step of calculating may include the steps of calculating a gradient of a cost function having an argument and utilizing the gradient to minimize the cost function with respect to its argument. The step of calculating the gradient may include the step of back projecting. The cost function preferably has a regularizing penalty term.
The step of calculating the gradient may include the step of calculating thicknesses of the components. The step of calculating thicknesses may include the step of reprojecting the segmented image. The step of calculating the gradient may include the step of calculating means of data along paths and gradients based on the thicknesses of the components.
The argument may be density of the materials at each image voxel.
The method may further include calibrating the spectrum of the X-ray CT scan.
The method may further include displaying the final corrected image.
The step of calculating the gradient may include the step of utilizing ordered subsets to accelerate convergence of the algorithm.
Further in carrying out the above objects and other objects of the present invention, an image reconstructor apparatus for statistically reconstructing a polyenergetic X-ray computed tomography image to obtain a corrected image is provided. The apparatus includes means for providing a computed tomography initial image produced by a single X-ray CT scan having a polyenergetic source spectrum wherein the initial image has components of materials which cause beam hardening artifacts. The apparatus further includes means for separating the initial image into different sections to obtain a segmented image and means for calculating a series of intermediate corrected images based on the segmented image utilizing a statistical algorithm which accounts for the polyenergetic source spectrum and which converges to obtain a final corrected image which has significantly reduced beam hardening artifacts.
The means for calculating may include means for calculating a gradient of a cost function having an argument and means for utilizing the gradient to minimize the cost function with respect to its argument. The cost function preferably has a regularizing penalty term. The means for calculating the gradient may include means for back projecting.
The means for calculating the gradient may include means for calculating thicknesses of the components.
The means for calculating thicknesses may include means for reprojecting the segmented image.
The means for calculating the gradient may include means for calculating means of data along paths and gradients based on the thicknesses of the components.
The argument may be density of the materials at each image voxel.
The means for calculating the gradient may include means for utilizing ordered subsets to accelerate convergence of the algorithm.
The above objects and other objects, features, and advantages of the present invention are readily apparent from the following detailed description of the best mode for carrying out the invention when taken in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGURE 1 is a schematic view of a basic CT system;
FIGURES 2a-2c are schematic views of various CT geometries wherein Figure 2a shows a parallel-beam (single ray) arrangement, Figure 2b shows a fan-beam geometry and Figure 2c shows a cone-beam arrangement;
FIGURE 3 is a schematic view which illustrates system matrix computation for the fan-beam geometry;

FIGURE 4 shows graphs which illustrate convex penalty functions;
FIGURE 5 shows graphs which illustrate the optimization transfer principle;
FIGURE 6 shows graphs which are quadratic approximations to the Poisson log likelihood;
FIGURE 7 shows graphs which illustrate attenuation coefficient energy dependence;
FIGURE 8 shows graphs which illustrate beam hardening induced deviation of line integral from linearity in water;
FIGURE 9 shows graphs which illustrate beam hardening induced deviation of line integral from linearity in bone; and
FIGURE 10 is a block diagram flow chart of the method of the present invention.
BEST MODE FOR CARRYING OUT THE INVENTION
In general, the method and system of the present invention utilize a statistical approach to CT reconstruction and, in particular, iterative algorithms for transmission X-ray CT.
Although the method and system of the invention are described herein for a single-slice fan-beam geometry reconstruction, the method and system may also be used with cone-beam geometries and helical scanning. The method and system may also be used with flat-panel detectors.

Statistical Reconstruction for X-ray CT
The physical and statistical models for the problem of X-ray CT reconstruction are described herein, and an objective function is obtained. Maximizing the objective function by some appropriate iterative algorithm yields the reconstructed image.
Monoenergetic Model and Assumptions
For the purposes of describing the basic principles underlying this invention, for the benefit of skilled practitioners who wish to implement the method, the simplified setting in which the transmission source is monoenergetic is first reviewed. Later the description is generalized to the more realistic polyenergetic case.
The image in object space (attenuation coefficient) is parameterized using square pixels. The goal of the algorithm becomes to estimate the value of the discretized attenuation coefficient at those pixels. Let μ = [μ_1, ..., μ_p]′ be the vector of unknown attenuation coefficients, having units of inverse length.
The measurements in a photon-limited counting process are reasonably modeled as independently distributed Poisson random variables. In transmission tomography, the mean number of detected photons is related exponentially to the projections (line integrals) of the attenuation map. The measurements are also contaminated by extra background counts, caused primarily by scatter in X-ray CT. Thus, the following model is assumed for measurements:
Y_i ~ Poisson{ b_i e^{-[Aμ]_i} + r_i },  i = 1, ..., N    (9)

where b_i = I_i(E_0) and N is the number of measurements (or, equivalently, the number of detector bins). The notation [Aμ]_i = Σ_{j=1}^{p} a_ij μ_j represents the i'th line integral. The N × p matrix A = {a_ij} is the system matrix, which accounts for the system geometry as well as any other significant physical effects such as detector response.
Figure 3 illustrates one method for computing the elements of A in the fan-beam case. For ray i and pixel j, a_ij is the normalized area of overlap between the ray beam and the pixel. The term r_i is the mean number of background events, b_i is the blank scan factor and Y_i represents the photon flux measured by the i'th detector. The Y_i's are assumed independent, and b_i, r_i and {a_ij} are assumed to be known non-negative constants. μ is also assumed to be independent of energy.
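A toy system matrix makes the notation [Aμ]_i concrete. The sketch below uses the simplest possible geometry (horizontal parallel rays, one ray per pixel row, intersection length as the weight) rather than the fan-beam area-overlap computation of Figure 3; all sizes and values are illustrative:

```python
import numpy as np

# Minimal system matrix A for horizontal parallel rays through an n x n grid:
# ray i coincides with pixel row i, so a_ij equals the pixel width for pixels
# on that row and 0 elsewhere. (Real fan-beam matrices use the normalized
# area of overlap between the ray beam and each pixel.)
n = 4
pixel_width = 1.0
A = np.zeros((n, n * n))
for i in range(n):                  # ray i
    for j in range(n):              # pixels along row i
        A[i, i * n + j] = pixel_width

mu = np.full(n * n, 0.02)           # uniform attenuation (1/cm), illustrative
line_integrals = A @ mu             # [A mu]_i: line integral along each ray
assert np.allclose(line_integrals, 0.08)
```

For a uniform 0.02/cm object, each ray crosses four 1-cm pixels, so every line integral is 0.08, as the assertion checks.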
To find a statistical estimate for the attenuation coefficient vector μ that is anatomically reasonable, a likelihood-based estimation approach is used. This is a natural choice since the likelihood is based on the statistical properties of the problem. The maximum likelihood (ML) approach also has good theoretical properties. It is asymptotically consistent, unbiased and efficient. The Poisson log likelihood for independent measurements is given by:
L(μ) = Σ_{i=1}^{N} { Y_i log(b_i e^{-[Aμ]_i} + r_i) - (b_i e^{-[Aμ]_i} + r_i) }    (10)

ignoring constant terms.
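The log likelihood (10) can be transcribed directly. The sketch below uses a tiny hypothetical system (matrix, blank scan, and background values are made up) and checks that, for noiseless data, the likelihood is largest at the true attenuation values:

```python
import numpy as np

def transmission_loglik(mu, A, b, r, Y):
    """Poisson log likelihood (10), constant terms dropped:
    L(mu) = sum_i { Y_i * log(ybar_i) - ybar_i },
    where ybar_i = b_i * exp(-[A mu]_i) + r_i is the mean measurement."""
    ybar = b * np.exp(-A @ mu) + r
    return np.sum(Y * np.log(ybar) - ybar)

# Tiny hypothetical system: 3 rays, 2 pixels.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
b = np.full(3, 1e4)                 # blank scan factors
r = np.full(3, 5.0)                 # mean background (scatter) counts
mu_true = np.array([0.3, 0.5])
Y = b * np.exp(-A @ mu_true) + r    # noiseless data for illustration

# Each term Y*log(ybar) - ybar is maximized when ybar = Y, i.e. at mu_true.
assert transmission_loglik(mu_true, A, b, r, Y) >= \
       transmission_loglik(np.array([0.25, 0.55]), A, b, r, Y)
```

With real (Poisson-noisy) data the maximizer is no longer exactly the truth, which is why the regularization discussed next is needed.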
Regularization
Without regularization, the Maximum Likelihood (ML) algorithm leads to noisy reconstruction. To reduce noise, it is possible to stop the algorithm when the reconstruction is visually appealing. Another approach is to pre-filter the data or post-filter the reconstruction.
Regularization (penalized-likelihood) approaches for noise reduction have two important advantages. First, the penalty function improves the conditioning of the problem. Second, one can choose penalty functions with certain desirable properties such as edge preservation. A general form for the regularizing penalty is the following:
R(μ) = Σ_{k=1}^{K} ψ_k([Cμ]_k)    (11)

where the ψ_k's are potential functions acting on the soft constraints Cμ ≈ 0 and K is the number of such constraints. Generally, the potential functions are symmetric, convex, non-negative and differentiable. Non-convex penalties can be useful for preserving edges, but are more difficult to analyze. One can think of the penalty as imposing a degree of smoothness or as a Bayesian prior. Both views are practically equivalent.
Most commonly, penalty functions penalize differences in the neighborhood of any particular pixel. They can be expressed as:

R(μ) = Σ_{j=1}^{p} Σ_{k∈N_j} w_jk ψ(μ_j - μ_k)    (12)

where the weights w_jk are 1 for orthogonal pixels and 1/√2 for diagonal pixels, and N_j is the pixel neighborhood.
A common penalty is the quadratic function, ψ(x) = x²/2. This penalty is effective for noise reduction and is analytically simple, but can cause significant blurring, especially at edges. To preserve edges, one can use a penalty that is less penalizing of large differences. One used herein is the Huber penalty:

ψ(x;δ) = x²/2,          |x| ≤ δ
ψ(x;δ) = δ|x| - δ²/2,   |x| > δ    (13)
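The two potentials can be compared directly. A small sketch of the quadratic and Huber penalties of (13):

```python
import numpy as np

def huber(x, delta):
    """Huber potential (13): quadratic for |x| <= delta, linear beyond.
    Large neighbor differences are penalized only linearly, preserving edges."""
    x = np.abs(x)
    return np.where(x <= delta, 0.5 * x ** 2, delta * x - 0.5 * delta ** 2)

def quadratic(x):
    """Quadratic potential psi(x) = x^2 / 2: simple, but blurs edges."""
    return 0.5 * x ** 2

# For small differences the two potentials agree; across a large edge the
# Huber penalty is far smaller than the quadratic one.
assert np.isclose(huber(0.5, 1.0), quadratic(0.5))
assert huber(10.0, 1.0) < quadratic(10.0)
```

The threshold δ sets the difference magnitude above which a transition is treated as a genuine edge rather than noise.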
A plot of both penalties is shown in Figure 4. Combining the ML objective function with a penalty gives a penalized-likelihood (PL) objective function:
Φ (μ) = L(μ) - βR(μ). (14)
The first term on the right in Equation (14) forces the estimator to match the measured data. The second term imposes some degree of smoothness leading to visually appealing images. The scalar parameter β controls the tradeoff between the two terms (or, alternatively, between resolution and noise).
The goal of the reconstruction technique becomes to maximize (14) subject to certain object constraints such as non-negativity:
μ̂ = arg max_{μ ≥ 0} Φ(μ)    (15)
Ordered Subsets Transmission Algorithms
The ideal statistical algorithm converges to the solution quickly (in a few iterations) and monotonically. It easily incorporates prior knowledge and constraints, accepts any type of system matrix and is parallelizable. No practical algorithm fits this description, and one settles for some compromise between the conflicting requirements.
Two closely related statistical methods for X-ray CT, namely, the Ordered Subsets Transmission Reconstruction (OSTR) and the Ordered Subsets Penalized Weighted Least Squares (OS-PWLS) algorithms, are described herein. OSTR models the sinogram data using Poisson statistics. The randomness in the measurement is a result of the photon emission and detection processes, electronic noise in the current-integrating detectors, as well as background radiation and scatter. Using the Poisson model is important in the low SNR case, since incorrect modeling can lead to reconstruction bias. At high SNR, the noise can be adequately modeled as additive and Gaussian. The additive Gaussian model leads to a least-squares method.
Both methods are based on the optimization transfer principle, which is discussed first. Then, the Poisson transmission model and associated algorithm are developed, including the idea of ordered subsets as a way to accelerate convergence. Although one could develop the least-squares approach from a Gaussian likelihood, it is derived here as a quadratic approximation to the Poisson likelihood.
Optimization Transfer Principle
The optimization transfer principle is a very useful and intuitive principle that underlies many iterative methods, including the ones described herein. De Pierro introduced it in the context of inverse problems (emission tomography, to be specific).
Often in iterative techniques, the goal is to maximize some objective function Φ(θ) with respect to its argument θ. The objective Φ(θ) can be difficult to maximize. One resorts to replacing Φ with a surrogate function φ(θ;θ(n)) that is easier to maximize at each iteration. Figure 5 illustrates the idea in one-dimension. The full utility of optimization transfer comes into play when the dimension of θ is large, such as in tomography.
The process is repeated iteratively, using a new surrogate function at each iteration. If the surrogate is chosen appropriately, then the maximizer of Φ(θ) can be found. Sufficient conditions that ensure that the surrogate leads to a monotonic algorithm are known.
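A one-dimensional toy example conveys the principle (the function below is chosen for illustration and is unrelated to the CT objective). Since f(x) = log(1 + x²) has |f''(x)| ≤ 2 everywhere, the quadratic q(x; x_n) = f(x_n) + f'(x_n)(x - x_n) + (x - x_n)² lies above f and touches it at x_n, so minimizing q at each step monotonically decreases f:

```python
import numpy as np

def f(x):
    """Toy objective with globally bounded curvature: min at x = 0."""
    return np.log(1.0 + x * x)

x = 3.0
prev = f(x)
for _ in range(100):
    grad = 2.0 * x / (1.0 + x * x)   # f'(x)
    x = x - grad / 2.0               # minimizer of the quadratic surrogate
    assert f(x) <= prev + 1e-12      # monotone descent, guaranteed by majorization
    prev = f(x)

assert abs(x) < 1e-6                 # converged to the minimizer x = 0
```

The same logic applies coordinate-by-coordinate in tomography, where θ is the whole image and the surrogate is a separable paraboloid.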
Paraboloidal surrogates are used because they are analytically simple, and can be easily maximized. One can also take advantage of the convexity of these surrogates to parallelize the algorithm.

Separable Paraboloidal Surrogates
Instead of maximizing the log likelihood, one can minimize the negative log likelihood, which is written as:
-L(μ) = Σ_{i=1}^{N} h_i([Aμ]_i)    (16)

where h_i(l) = b_i e^{-l} + r_i - Y_i log(b_i e^{-l} + r_i). Direct minimization of (16) leads to noisy estimates, so the likelihood is penalized by adding a roughness penalty. The problem becomes to find an estimate μ̂ such that:

μ̂ = arg min_{μ ≥ 0} Φ(μ)    (17)
where
Φ (μ) = - L(μ) + βR(μ). (18)
The parameter β controls the tradeoff between the data-fit and penalty terms, and R(μ) imposes a degree of smoothness on the solution.
When r_i ≠ 0, the log likelihood is not concave and is difficult to maximize. A monotonic algorithm is possible if the optimization transfer principle is applied with paraboloidal surrogates. Paraboloidal surrogates are used such that the iterates μ^n monotonically decrease Φ. For that to occur, the surrogate φ must satisfy the following "monotonicity" condition:

Φ(μ) - Φ(μ^n) ≤ φ(μ;μ^n) - φ(μ^n;μ^n),  ∀ μ ≥ 0    (19)
The following conditions are sufficient to ensure (19):
φ(μ^n;μ^n) = Φ(μ^n)
∇φ(μ;μ^n)|_{μ=μ^n} = ∇Φ(μ^n)
φ(μ;μ^n) ≥ Φ(μ),  for μ ≥ 0    (20)
Attention is restricted to differentiable surrogates. Paraboloidal surrogates are convenient to use because they are easy to minimize. In (16), the likelihood function h_i(l) is replaced with the following quadratic surrogate:

q_i(l; l_i^n) = h_i(l_i^n) + ḣ_i(l_i^n)(l - l_i^n) + (c_i/2)(l - l_i^n)²    (21)

where ḣ_i is the first derivative of h_i and l_i^n = [Aμ^n]_i. To ensure monotonicity, the curvature c_i must be such that the surrogate satisfies the monotonicity condition (19). Alternatively, to reduce computation, one can pre-compute the c_i's prior to iterating.
Replacing h_i([Aμ]_i) with q_i([Aμ]_i; [Aμ^n]_i) in (16) above will lead to a paraboloidal surrogate for the log likelihood. One can further take advantage of the nature of the surrogate to obtain a separable algorithm, i.e., an algorithm where all pixels are updated simultaneously.
Rewrite the line integral:

[Aμ]_i = Σ_{j=1}^{p} α_ij { (a_ij/α_ij)(μ_j - μ_j^n) + [Aμ^n]_i }    (22)

where

Σ_{j=1}^{p} α_ij = 1,  α_ij ≥ 0    (23)
Using the convexity of q_i, and writing l_i^n = [Aμ^n]_i, one gets:

q_i([Aμ]_i; l_i^n) ≤ Σ_{j=1}^{p} α_ij q_i( (a_ij/α_ij)(μ_j - μ_j^n) + l_i^n ; l_i^n )    (24)

The overall separable surrogate for the log likelihood becomes:

Q(μ;μ^n) = Σ_{i=1}^{N} Σ_{j=1}^{p} α_ij q_i( (a_ij/α_ij)(μ_j - μ_j^n) + l_i^n ; l_i^n ) = Σ_{j=1}^{p} Q_j(μ_j;μ^n)    (25)

where

Q_j(μ_j;μ^n) = Σ_{i=1}^{N} α_ij q_i( (a_ij/α_ij)(μ_j - μ_j^n) + l_i^n ; l_i^n )    (26)
This formulation decouples the pixels. Each pixel effectively has its own cost function Q_j. The Q_j's can be minimized for all pixels simultaneously, resulting in a parallelizable algorithm.
A similar development can be pursued for the penalty term R(μ).
Taking advantage of the convexity of the potential function yields a separable penalty surrogate S(μ;μn). One now seeks to minimize the new separable global surrogate:
φ(μ;μ^n) = Q(μ;μ^n) + βS(μ;μ^n)    (27)
Since the surrogate is a separable paraboloid, it can be easily minimized by zeroing the first derivative. This leads to the following simultaneous update algorithm:
μ_j^{n+1} = [ μ̃_j - (∂φ/∂μ_j (μ̃;μ^n)) / (∂²φ/∂μ_j² (μ̃;μ^n)) ]_+    (28)

where μ̃ is some estimate of μ, usually taken to be the current iterate μ^n. The [·]_+ operator enforces the non-negativity constraint. From (21) and (25) the first and second derivatives of the surrogate are obtained:

∂φ/∂μ_j (μ^n;μ^n) = Σ_{i=1}^{N} a_ij ḣ_i([Aμ^n]_i) + β ∂S/∂μ_j (μ^n;μ^n)    (29)

∂²φ/∂μ_j² (μ^n;μ^n) = Σ_{i=1}^{N} (a_ij²/α_ij) c_i + β ∂²S/∂μ_j² (μ^n;μ^n)    (30)
c_i is the surrogate curvature and {α_ij} satisfies (23). To make the denominator in (28) small (and hence the step size large), one wants {α_ij} to be large. One also wants {α_ij} to facilitate convergence, and to be independent of the current iterate so that it can be pre-computed. One convenient choice is:

α_ij = a_ij / Σ_{j'=1}^{p} a_ij' = a_ij / a_i    (31)
It is possible that better choices exist. For the surrogate curvature c_i, one can use the optimal one derived in (24). The optimal curvature is iteration dependent. To save computations, one can use the following pre-computed approximation for the curvature:

c_i = ḧ_i(l̂_i),  where l̂_i = log( b_i / (Y_i - r_i) )    (32)
The pre-computed curvature may violate the conditions of monotonicity. It does, however, give an almost-monotonic algorithm, where the surrogate becomes a quadratic approximation of the log likelihood. The pre-computed curvature seems to work well in practice, and the computational savings seem well worth the sacrifice. Major computational savings, however, come from the use of ordered subsets, discussed hereinbelow.
Ordered Subsets
Ordered subsets are useful when an algorithm involves a summation over sinogram indices (i.e. , a back projection). The basic idea is to break the set of sinogram angles into subsets, each of which subsamples the sinogram in the angular domain. The back projection process over the complete sinogram is replaced with successive back projections over the subsets of the sinogram. One iteration is complete after going through all of the subsets.
Ordered subsets have been applied to emission tomography with a good degree of success. Improvements in convergence rate by a factor approximately equal to the number of subsets have been reported. Ordered subsets have also been used with transmission data for attenuation map estimation in SPECT. Ordered subsets were applied to the convex algorithm, and an increase in noise level with the number of subsets has been reported. Ordered subsets have been used with the transmission EM algorithm and a cone-beam geometry. The OSTR algorithm was originally developed for attenuation correction in PET scans with considerable success.
The choice of ordering the subsets is somewhat arbitrary, but it is preferable to order them "orthogonally." In such an arrangement, the projections corresponding to angles with maximum angular distance from previously chosen angles are used at each step.
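The "orthogonal" ordering just described can be realized greedily. The sketch below is one plausible implementation of that rule, not necessarily the exact ordering used by any particular scanner software:

```python
def orthogonal_subset_order(num_subsets):
    """Greedily order subset indices so that each newly chosen subset's angles
    are maximally (cyclically) distant from all previously chosen ones."""
    order = [0]
    remaining = list(range(1, num_subsets))
    while remaining:
        # pick the index whose minimum cyclic distance to the chosen set is largest
        best = max(remaining,
                   key=lambda s: min(min(abs(s - c), num_subsets - abs(s - c))
                                     for c in order))
        order.append(best)
        remaining.remove(best)
    return order

# With 8 subsets, the second subset used is the one "orthogonal" to the first.
assert orthogonal_subset_order(8)[1] == 4
```

For 8 subsets this yields an interleaved sequence such as 0, 4, 2, ..., so early sub-iterations already span the full angular range.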
The cost of accelerating convergence with ordered subsets is loss of monotonicity. Hence, the term "convergence" is used loosely to mean that one gets visually acceptable reconstruction. Practically speaking, this loss of monotonicity seems to make very little difference for the end result, especially if the algorithm is initialized with a reasonable starting image, such as an FBP reconstruction.
Understanding the convergence properties of ordered subsets can provide insight into additional ways to accelerate convergence that may not be readily apparent.
To summarize, the algorithm thus far flows as follows:
for each iteration n = 1, ..., niter
    for each subset S = 1, ..., M
        compute ḣ_i([Aμ]_i) for i ∈ S
        compute c_i([Aμ]_i) for i ∈ S (or use the pre-computed values)
        L̇_j = M Σ_{i∈S} a_ij ḣ_i,  j = 1, ..., p
        d_j = M Σ_{i∈S} a_ij a_i c_i,  j = 1, ..., p
        μ_j = [ μ_j - (L̇_j + β ∂R/∂μ_j) / (d_j + β ∂²R/∂μ_j²) ]_+,  j = 1, ..., p
    end
end
Scaling the denominator and numerator by the number of subsets ensures that the regularization parameter β remains independent of the number of subsets. This algorithm is known as the Ordered Subsets Transmission Reconstruction (OSTR) algorithm.
OSTR combines the accuracy of statistical reconstruction with the accelerated rate of convergence that one gets from ordered subsets. The separability of the surrogates makes the algorithm easily parallelizable. The algorithm also naturally enforces the non-negativity constraint. The monotonicity property has been sacrificed, but that seems to hardly make a difference in practice if a reasonable starting image is used.
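The OSTR iteration summarized above can be sketched in a few lines of numpy. This is an illustrative reimplementation of the unpenalized, monoenergetic update with pre-computed curvatures (the system, counts, and subset split below are made up), not the patented algorithm verbatim:

```python
import numpy as np

def ostr_iteration(mu, A, b, r, Y, subsets):
    """One OSTR-style iteration: for each subset, a subset-scaled gradient step
    divided by the separable surrogate curvature, with non-negativity enforced."""
    M = len(subsets)
    a_row = A.sum(axis=1)                                   # a_i = sum_j a_ij
    c = np.maximum((Y - r) ** 2 / np.maximum(Y, 1e-12), 1e-12)  # curvature approx.
    for S in subsets:
        l = A[S] @ mu                                       # current line integrals
        ybar = b[S] * np.exp(-l) + r[S]                     # mean measurements
        hdot = (1.0 - Y[S] / ybar) * (-b[S] * np.exp(-l))   # dh_i/dl at l
        grad = M * (A[S].T @ hdot)                          # L-dot_j over subset
        denom = M * (A[S].T @ (a_row[S] * c[S]))            # d_j = M sum a_ij a_i c_i
        mu = np.maximum(mu - grad / np.maximum(denom, 1e-12), 0.0)
    return mu

# Tiny noiseless demo: 4 rays, 2 pixels, 2 subsets; iterates approach the truth.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [1.0, 1.0]])
b = np.full(4, 1e5)
r = np.zeros(4)
mu_true = np.array([0.2, 0.4])
Y = b * np.exp(-A @ mu_true)
subsets = [np.array([0, 2]), np.array([1, 3])]

mu = np.array([0.3, 0.3])
for _ in range(300):
    mu = ostr_iteration(mu, A, b, r, Y, subsets)
assert np.allclose(mu, mu_true, atol=0.005)
```

Because the data here are consistent, every subset gradient vanishes at the truth, so the ordered-subsets iterates settle there rather than in a limit cycle; with noisy data a small limit cycle is the usual price of the acceleration.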
Penalized Weighted Least Squares with Ordered Subsets
OSTR uses Poisson statistics to model the detection process. For high-SNR scans, the Gaussian model is a reasonable approximation to the Poisson distribution. The Gaussian model leads to a simpler quadratic objective function and a weighted-least-squares minimization. With high counts, PWLS leads to negligible bias, and the simpler objective function reduces computation time. Figure 6 illustrates how the quadratic approximation to the likelihood improves with count number.
The algorithm is reformulated by deriving a quadratic approximation to the Poisson likelihood, which leads to a simpler objective function. The regularization term and the use of ordered subsets are retained. This variation of the method of the invention is called Ordered Subsets Penalized Weighted Least Squares (OS-PWLS).
For convenience, the negative log likelihood for transmission data is rewritten:
-L(μ) = Σ_{i=1}^{N} h_i([Aμ]_i)
      = Σ_{i=1}^{N} { (b_i e^{-[Aμ]_i} + r_i) - Y_i log(b_i e^{-[Aμ]_i} + r_i) }.  (34)

Taylor's expansion is applied to h_i(l) around some value l̂_i, and first and second order terms only are retained:

h_i(l_i) ≈ h_i(l̂_i) + ḣ_i(l̂_i)(l_i - l̂_i) + (1/2) ḧ_i(l̂_i)(l_i - l̂_i)²  (35)

where ḣ_i and ḧ_i are the first and second derivatives of h_i. Assuming Y_i > r_i, one can estimate the line integral with:

l̂_i = log( b_i / (Y_i - r_i) ).  (36)

Substituting this estimate in (35) gives the following approximation for h_i:

h_i(l_i) ≈ h_i(l̂_i) + (w_i/2)(l_i - l̂_i)².  (37)

The first term in (37) is independent of l_i and can be dropped. The weight is w_i = (Y_i - r_i)²/Y_i. The new objective function is:

Φ_q(μ) = Σ_{i=1}^{N} (w_i/2)([Aμ]_i - l̂_i)².  (38)
The subscript q indicates that this objective function is based on a quadratic approximation to the log likelihood. Subsequently, the subscript is dropped. The penalty term is also added. Minimizing this objective function over μ ≥ 0 will lead to an estimator with negligible bias, since the number of counts is large.
A separable surrogate for the objective function is almost immediately available. The terms inside the objective function summation are all convex, so the convexity "trick" is exploited one more time. Along the lines of (24) and (25), the surrogate for the PWLS objective function is:

Q_q(μ; μ^n) = Σ_{i=1}^{N} Σ_{j=1}^{p} (a_ij/γ_i)(w_i/2)( γ_i(μ_j - μ_j^n) + [Aμ^n]_i - l̂_i )²,  where γ_i = Σ_{j=1}^{p} a_ij.  (40)
The subscript q again emphasizes that this surrogate resulted from the quadratic approximation to the likelihood. It is dropped to simplify notation. A surrogate for the penalty function can be derived in a similar manner and added to (40).
To use the iterative update (28) to minimize the surrogate, its first and second derivatives are computed; these also demonstrate the computational savings of the PWLS algorithm:

∂Q/∂μ_j |_{μ=μ^n} = Σ_{i=1}^{N} a_ij w_i ([Aμ^n]_i - l̂_i)  (41)

∂²Q/∂μ_j² = Σ_{i=1}^{N} a_ij γ_i w_i  (42)
Unlike the ḣ_i term in (29), the numerator (first derivative) in (41) involves no exponential terms, and the denominator (second derivative) in (42) can be pre-computed and stored. The sum over sinogram indices can also be broken into sums over ordered subsets, further accelerating the algorithm.
The iterative update is rewritten with the changes resulting from OS-PWLS:

μ_j^{n+1} = [ μ_j^n - ( M Σ_{i∈S} a_ij w_i ([Aμ^n]_i - l̂_i) + β Ṙ_j(μ^n) ) / ( M Σ_{i∈S} a_ij γ_i w_i + β R̈_j(μ^n) ) ]_+ .  (43)
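A corresponding NumPy sketch of the OS-PWLS update, again with β = 0 and a toy dense system matrix (both assumptions of this sketch). Note that the inner loop evaluates no exponentials, and the denominators are pre-computed once per subset before iterating.

```python
import numpy as np

def os_pwls(A, Y, b, r, mu0, num_subsets=2, n_iter=10):
    """OS-PWLS for a dense (N, p) system matrix A; unpenalized sketch."""
    N, p = A.shape
    M = num_subsets
    mu = mu0.astype(float).copy()
    gamma = A.sum(axis=1)                       # gamma_i = sum_j a_ij
    lhat = np.log(b / np.maximum(Y - r, 1.0))   # line-integral estimates
    w = (Y - r) ** 2 / np.maximum(Y, 1.0)       # statistical weights w_i
    subsets = [np.arange(t, N, M) for t in range(M)]
    # the denominators contain no image-dependent terms: pre-compute them
    dens = [M * (A[S].T @ (gamma[S] * w[S])) for S in subsets]
    for _ in range(n_iter):
        for S, den in zip(subsets, dens):
            resid = A[S] @ mu - lhat[S]                 # no exponentials
            num = M * (A[S].T @ (w[S] * resid))         # weighted gradient
            mu = np.maximum(mu - num / np.maximum(den, 1e-12), 0.0)
    return mu
```

Compared with the OSTR inner loop, each subset update here costs one forward projection and one back projection of purely linear quantities.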
Both OSTR and OS-PWLS have been described above for the monoenergetic case. The more realistic case of a polyenergetic X-ray beam is described hereinbelow, and a correction scheme for the artifacts caused by the broad beam spectrum is incorporated into OS-PWLS.

Statistical Reconstruction for Polyenergetic X-ray CT
One of the strengths of statistical methods is their applicability to different system and physical models. The CT transmission model is generalized hereinbelow to account for the broad spectrum of the beam. From the model, a cost function is derived and an iterative algorithm is developed for finding the minimizer of the cost function.
Polyenergetic Statistical Model for CT
A model for X-ray CT is described now that incorporates the energy dependence of the attenuation coefficient. A prior art algorithm could be applied to an image reconstructed with OS-PWLS. Instead, beam hardening correction is developed as an integral element of the statistical algorithm. The object is assumed to be composed of K non-overlapping tissue types (this assumption may be generalized to allow for mixtures of tissues). For example, when K=2, one can use soft tissue and bone tissue classes, and when K=3, one can use soft tissue, bone, and a contrast agent, such as iodine. An iterative algorithm that generalizes OS-PWLS naturally emerges from the model.
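For instance, with K = 2 a segmentation map for an initial FBP image could be built by simple thresholding; the threshold values below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def segment_tissues(img, soft_range=(0.15, 1.5), bone_min=1.5):
    """Classify each pixel of an initial (e.g. FBP) image into K = 2
    tissue classes, soft tissue and bone, by thresholding.
    Returns the 0/1 indicator maps r^k of the tissue classes."""
    soft = ((img >= soft_range[0]) & (img < soft_range[1])).astype(float)
    bone = (img >= bone_min).astype(float)
    return soft, bone
```

Pixels below the soft-tissue floor (air or background) fall into neither class, and by construction the two classes never overlap.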
Assume that the K tissue classes are determined by pre-processing the data with soft-tissue correction and then segmenting an initial reconstructed image. The object model of (2) is restricted to two spatial dimensions:
μ(x, y, E) = Σ_{k=1}^{K} m_k(E) ρ_k(x, y) r_k(x, y).  (44)
The system matrix is denoted A = {a_ij} and the following definitions are made:

R_k = {set of pixels classified as tissue type k},  (45)

r_k(x, y) = 1 if (x, y) ∈ R_k, and 0 otherwise,  (46)

s_i^k(ρ) = ∫_{L_i} ρ_k(x, y) r_k(x, y) dl,  (47)

v_i(ρ) = ( s_i^1, s_i^2, ..., s_i^K ).  (48)
The mass attenuation coefficients { m_k(E) }_{k=1}^{K} of each of the K tissue types are assumed to be known. Discretization aside, from (1) the mean of the measured data along path L_i is:

Ȳ_i(ρ) = ∫ b_i(E) e^{ -m'(E) v_i(ρ) } dE + r_i = Ȳ_i( v_i(ρ) ),  (49)

where m(E) = [ m_1(E), ..., m_K(E) ]' and the ' stands for vector transpose. The measurements are expressed as a function of the column vector quantity v_i, which has as its elements the line integrals of the K different tissue densities. From knowledge of the X-ray spectrum, the values of Ȳ_i(v_i) and its gradient ∇Ȳ_i(v_i) are tabulated. In the discrete domain,

s_i^k(ρ) = Σ_{j=1}^{p} a_ij r_j^k ρ_j.  (50)
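As an illustration of this tabulation, Ȳ(v) and ∇Ȳ(v) can be evaluated from a discretized spectrum; the three-point spectrum and the mass attenuation values below are made-up numbers for the sketch, not data from the patent.

```python
import numpy as np

# Illustrative discretized spectrum: energies, relative intensities b(E),
# and mass attenuation coefficients m_k(E) for K = 2 classes (soft, bone).
energies = np.array([40.0, 70.0, 100.0])          # keV (labels only)
bE = np.array([0.3, 0.5, 0.2])                    # normalized intensities
mE = np.array([[0.25, 0.60],                      # rows: energies
               [0.19, 0.30],                      # cols: tissue classes
               [0.17, 0.20]])

def ybar(v):
    """Mean measurement for the thickness vector v = (s^1, ..., s^K)."""
    return float(np.sum(bE * np.exp(-(mE @ v))))

def grad_ybar(v):
    """Gradient of ybar with respect to v (a length-K vector)."""
    return -(mE.T @ (bE * np.exp(-(mE @ v))))
```

In practice these values would be tabulated on a grid of (s^1, ..., s^K) and interpolated during iteration, rather than re-integrated for every ray.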
The goal of the algorithm is to estimate the density coefficient vector ρ = [ρ_1, ..., ρ_p]. Rather than estimating K vector quantities of length p, each representing the density of one kind of tissue, the non-overlapping assumption on the tissue types enables one to keep the number of unknowns equal to p, as is the case in the monoenergetic model. This is possible when a prior segmentation of the object is available, which can be obtained from a good FBP image, for example.

Polyenergetic Model Cost Function
The Poisson log likelihood is set up in terms of the density ρ and the vectors v_i. To get a quadratic cost function, one follows a procedure similar to that described hereinabove, using the second-order Taylor expansion.
The function Ȳ_i represents the expected value of the measurement Y_i at the i-th detector. Using Ȳ_i in (10) gives the following negative log likelihood:

-L(ρ) = Σ_{i=1}^{N} h_i( v_i(ρ) ),  (51)

h_i( v_i(ρ) ) = -Y_i log( Ȳ_i(v_i(ρ)) + r_i ) + ( Ȳ_i(v_i(ρ)) + r_i ).  (52)

The problem now is to find an estimate ρ̂ such that:

ρ̂ = argmin_{ρ ≥ 0} Φ(ρ),  (53)

where

Φ(ρ) = -L(ρ) + βR(ρ).  (54)
The regularization term can be treated exactly as before, or it can be modified to avoid smoothing between different tissue types. For now, one focuses on the likelihood term and sets the background term r_i = 0.
Suppose one can determine some initial estimate of v_i(ρ), denoted v̂_i = (ŝ_i^1, ..., ŝ_i^K). One expands h_i(v) in a second-order Taylor series around v̂_i:

h_i(v) ≈ h_i(v̂_i) + ∇h_i(v̂_i)( v - v̂_i ) + (1/2)( v - v̂_i )' ∇²h_i(v̂_i)( v - v̂_i ).  (55)

Taking the first and second derivatives of h_i(v) = -Y_i log Ȳ_i(v) + Ȳ_i(v), one gets the following for the first and second order terms of the Taylor expansion:

∇h_i(v̂_i) = ( 1 - Y_i/Ȳ_i(v̂_i) ) ∇Ȳ_i(v̂_i),  (56)

∇²h_i(v̂_i) = ( 1 - Y_i/Ȳ_i(v̂_i) ) ∇²Ȳ_i(v̂_i) + ( Y_i/Ȳ_i²(v̂_i) ) ∇Ȳ_i(v̂_i)' ∇Ȳ_i(v̂_i).  (57)

The gradient ∇h_i is a row vector, and the operator ∇² gives a K × K matrix of second partial derivatives.
To simplify the algorithm and maintain the desirable property of separability, Y_i is assumed to be close enough to Ȳ_i(v̂_i) for one to drop the first term on the right of (57). This also ensures that the resulting Hessian approximation is non-negative definite.
In the Taylor expansion (55), the first term is constant and does not affect minimization, so it is dropped. The following is a quadratic approximation to the negative log likelihood:

-L(ρ) ≈ Φ_q(ρ) = Σ_{i=1}^{N} [ ∇h_i(v̂_i)( v_i(ρ) - v̂_i ) + (1/2)( v_i(ρ) - v̂_i )' ( Y_i/Ȳ_i²(v̂_i) ) ∇Ȳ_i(v̂_i)' ∇Ȳ_i(v̂_i) ( v_i(ρ) - v̂_i ) ]  (58)

= Σ_{i=1}^{N} [ Σ_{k=1}^{K} ∇_k h_i(v̂_i)( s_i^k(ρ) - ŝ_i^k ) + ( Y_i/(2Ȳ_i²(v̂_i)) ) ( Σ_{k=1}^{K} ∇_k Ȳ_i(v̂_i)( s_i^k(ρ) - ŝ_i^k ) )² ].  (59)
Substituting (50) into (59) and expanding the vector inner product yields:

Φ_q(ρ) = Σ_{i=1}^{N} [ Σ_{k=1}^{K} ∇_k h_i(v̂_i) Σ_{j=1}^{p} a_ij r_j^k ( ρ_j - ρ̂_j ) + ( Y_i/(2Ȳ_i²(v̂_i)) ) ( Σ_{k=1}^{K} ∇_k Ȳ_i(v̂_i) Σ_{j=1}^{p} a_ij r_j^k ( ρ_j - ρ̂_j ) )² ],  (60)

where ∇_k denotes the k-th element of the gradient vector. To simplify the above equation, the following definitions are made:

b_ij = Σ_{k=1}^{K} ∇_k Ȳ_i(v̂_i) a_ij r_j^k.  (61)

A = {a_ij} is the geometrical system matrix. The matrix B = {b_ij} is a weighted system matrix, with the weights expressed as the non-zero elements of a diagonal matrix D(·), to the left of A. The term Z_i combines constants independent of ρ. With the above definitions, expressing the line integrals explicitly in terms of the image pixels yields the following form of the cost function:

Φ_q(ρ) = Σ_{i=1}^{N} [ Z_i + ( 1 - Y_i/Ȳ_i(v̂_i) )[Bρ]_i + ( Y_i/(2Ȳ_i²(v̂_i)) )( [Bρ]_i - [Bρ̂]_i )² ] + βR(ρ).  (62)
This cost function is convex, so a separable surrogate and an iterative update are easily derived as described above. The results of the algorithm derivation are described below.
The separable paraboloidal surrogate for Φ_q(ρ) is given by:

Q(ρ; ρ^n) = Σ_{i=1}^{N} Σ_{j=1}^{p} ( b_ij/γ_i ) φ_i( γ_i( ρ_j - ρ_j^n ) + [Bρ^n]_i ),  (63)

where φ_i denotes the i-th summand of (62), regarded as a function of [Bρ]_i, and

γ_i = Σ_{j=1}^{p} b_ij.  (64)

Setting the point of linearization of the Taylor series at ρ^n, and evaluating the first and second derivatives of Q at the same point, gives:

∂Q/∂ρ_j |_{ρ=ρ^n} = Σ_{i=1}^{N} Σ_{k=1}^{K} a_ij r_j^k ∇_k h_i(v̂_i),  (65)

∂²Q/∂ρ_j² = Σ_{i=1}^{N} ( Y_i/Ȳ_i²(v̂_i) ) γ_i b_ij.  (66)
The overall algorithm is:
initialize with ρ̂ (e.g., an FBP image)
for each iteration n = 1, ..., niter
- for each subset S = 1, ..., M
  compute ŝ_i^k = Σ_{j=1}^{p} a_ij r_j^k ρ̂_j for k = 1, ..., K, and v̂_i = [ŝ_i^1, ..., ŝ_i^K]
  compute Ȳ_i(v̂_i), its gradient vector ∇Ȳ_i(v̂_i), and h_i(v̂_i)
  compute b_ij = Σ_{k=1}^{K} ∇_k Ȳ_i(v̂_i) a_ij r_j^k
  compute d_j = Σ_{i∈S} ( Y_i/Ȳ_i²(v̂_i) ) γ_i b_ij
  compute N_j = Σ_{i∈S} Σ_{k=1}^{K} a_ij r_j^k ∇_k h_i(v̂_i)
  update ρ̂_j := [ ρ̂_j - ( M N_j + β Ṙ_j(ρ̂) ) / ( M d_j + β R̈_j(ρ̂) ) ]_+ , j = 1, ..., p
- end
end
This algorithm globally converges to the minimizer of the cost function Φq(p) when one subset is used, provided the penalty is chosen so that Φq is strictly convex. When two or more subsets are used, it is expected to be monotone in the initial iterations.
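One pass of the loop above (a single subset, β = 0) can be sketched in NumPy. The dense system matrix, the helper functions ybar_fn/grad_fn (the tabulated mean measurement and its gradient), and the function name are assumptions of this sketch, not the patent's notation.

```python
import numpy as np

def polyenergetic_update(A, Y, masks, ybar_fn, grad_fn, rho):
    """One unpenalized update of the polyenergetic algorithm.
    A: (N, p) system matrix; masks: list of K 0/1 class maps r^k;
    ybar_fn(v), grad_fn(v): mean measurement and its gradient."""
    K = len(masks)
    # reproject the segmented image: component thicknesses s_i^k
    s = np.stack([A @ (rho * masks[k]) for k in range(K)], axis=1)
    yb = np.array([ybar_fn(v) for v in s])            # measurement means
    g = np.vstack([grad_fn(v) for v in s])            # (N, K) gradients
    # gamma_i = sum_j b_ij, via forward projections of the class maps
    pm = np.stack([A @ masks[k] for k in range(K)], axis=1)
    gamma = (g * pm).sum(axis=1)
    hdot = 1.0 - Y / yb                               # scalar part of grad h_i
    curv = Y / yb ** 2                                # curvature weights
    # backproject to form the correction image (numerator / denominator)
    num = sum(masks[k] * (A.T @ (hdot * g[:, k])) for k in range(K))
    den = sum(masks[k] * (A.T @ (curv * gamma * g[:, k])) for k in range(K))
    return np.maximum(rho - num / np.maximum(den, 1e-12), 0.0)
```

Subtracting the correction image and clipping at zero matches the flow described for Figure 10; wrapping this in an outer loop over subsets and iterations gives the full algorithm.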
Referring now to Figure 10, there is illustrated in block diagram flow chart form, the method of the present invention.
Initially, the spectrum of the incident X-ray beam is calibrated.
Then, the initial CT image is obtained.
Segmentation of the CT image is then performed.
Reprojection of the segmented image to calculate each component thickness is performed. Then, measurement means and gradients are calculated as described above.
Then, a cost function gradient is computed using back projection to yield the correction image as also described above.
The correction image is then subtracted and non-negativity is enforced.
The number of iterations is checked against a predetermined number or other criteria and if the iterative part of the method is complete, then the final corrected image is displayed. If not done, the iterative part of the method is re- entered at the reprojection step. At least some of the results obtained after subtraction may be used in the segmentation step as described herein as indicated by the dashed line from the "DONE" block.
Realistically, many pixels do not consist of one tissue only, but can be mixtures of several substances. This fact, and the errors it causes in CT reconstruction (known as the partial volume effect), must be addressed for more accurate CT images. The method and apparatus described above are generalizable to the case of voxel mixtures by using fractional values in equation (46). It is possible to use histogram information to determine the value of the attenuation coefficient as a probabilistically weighted sum of several tissue types. Combining multi-energy measurements with tissue characteristics may also lead to more accurate "mixel models," where a pixel contains a tissue mixture.
Algorithm
So far, beam hardening correction in the method of the invention depends on the availability of accurate classification of the different substances in the object. In simulated phantom studies, the bone/tissue distribution was known exactly. In a more realistic setting, such a classification would be available from segmenting an initial image reconstructed with FBP. Using this segmentation map for all iterations may adversely affect the accuracy of the reconstruction.
One promising alternative approach is to use joint likelihoods and penalties to estimate both pixel density values and tissue classes. In such an approach, the tissue classes are treated as random variables with a Markov random field model and are estimated jointly with the attenuation map. The joint likelihood will be a function of both the pixel density value and the pixel class. The joint penalties involve two parameters that balance the tradeoff between data fit and smoothness. In addition, joint penalties must account for the fact that pixels tend to have similar attenuation map values if the underlying tissue classes are the same, and vice versa. Such penalties would encourage smoothness in the same region but allow discontinuities between regions of different tissues.
Computation Time
The long computation times of statistical methods hinder their use in clinical X-ray CT applications. Significant acceleration by using ordered subsets has been demonstrated.
From the algorithm design perspective, minor modifications may be made to accelerate convergence. Another possibility is to use a hybrid class of methods, which combines the faster early convergence rate of gradient methods with the faster ultimate linear convergence rate of steepest descent.
The most computationally expensive components of the iterative algorithm are back projection and forward projection, and there are algorithms that claim to perform these operations very quickly. It is possible that customized hardware may be used to perform the projections. Some recent work showed that readily available 2-D texture mapping hardware speeds up the Simultaneous Algebraic Reconstruction Technique (SART) to almost real-time realizations. SART involves forward and back projections much like OS-PWLS does. Computation time may be reduced by using Fourier domain methods, where the 2-D Fourier transform of the image is assembled from its projections. The practicality of this approach depends on the availability of fast gridding methods in the Fourier domain.
Scatter
Scatter is a major problem in cone-beam CT, where it can range from 40% up to 200% of the unscattered data. Collimation reduces scatter, but collimating flat-panel detectors is challenging. Among several factors that affect the performance of a cone-beam, flat-panel detector computed tomography system, scatter was shown to degrade the detector quantum efficiency (DQE) and to influence the optimal magnification factor. Larger air gaps were needed to cope with high scatter, especially if imaging a large FOV.
There are generally two approaches to dealing with the problem of scatter: it can either be physically removed (or reduced) before detection, or it can be numerically estimated and its effect compensated for. Ways to physically remove scatter include air gaps and grids, but these are not very practical once flat-panel detectors are used, due to the small size of detector pixels.
There are several approaches to numerically estimate scatter. For example, idle detectors (those outside FOV) can be used to provide a scatter estimate. Such measurements can be combined with analytic models for scatter that depend, among other factors, on the energy of the radiation used, the volume of scattering material and system geometry.
There have also been encouraging attempts at incorporating scatter estimation/correction into statistical reconstruction. For instance, ordered subsets EM methods with scatter correction gave superior results to FBP with scatter correction in emission CT. In another work, the maximum likelihood EM algorithm successfully improved contrast and SNR of digital radiology images by incorporating a convolution-based scatter estimate. This is particularly relevant since cone-beam transmission CT with flat-panel detectors is in many ways similar to digital radiography. It is, in essence, a rotating digital radiography system.
A scatter estimate may be incorporated in the model of the CT problem, as well as in the reconstruction algorithm, using the r_i terms above. One approach to scatter estimation and correction is purely numerical, i.e., ways to physically eliminate scatter are not considered. This makes the approach portable to different systems, and less costly.
System Design
The statistical measurement model has the potential to be extended to take into account the time dimension when imaging the heart, and thus free the designer from synchronization constraints. Moreover, statistical reconstruction eliminates the need for rebinning and interpolation. This may lead to higher helical scanning pitch and cardiac imaging with good temporal and axial resolutions.
Conclusion
In conclusion, a framework has been described above for using statistical methods to reconstruct X-ray CT images from polyenergetic sources.
While the best mode for carrying out the invention has been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention as defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for statistically reconstructing a polyenergetic X-ray computed tomography image to obtain a corrected image, the method comprising: providing a computed tomography initial image produced by a single X-ray CT scan having a polyenergetic source spectrum wherein the initial image has components of materials which cause beam hardening artifacts; separating the initial image into different sections to obtain a segmented image; and calculating a series of intermediate corrected images based on the segmented image utilizing a statistical algorithm which accounts for the polyenergetic source spectrum and which converges to obtain a final corrected image which has significantly reduced beam hardening artifacts.
2. The method as claimed in claim 1 wherein the step of calculating includes the steps of calculating a gradient of a cost function having an argument and utilizing the gradient to minimize the cost function with respect to its argument.
3. The method as claimed in claim 2 wherein the step of calculating the gradient includes the step of back projecting.
4. The method as claimed in claim 2 wherein the step of calculating the gradient includes the step of calculating thicknesses of the components.
5. The method as claimed in claim 4 wherein the step of calculating thicknesses includes the step of reprojecting the segmented image.
6. The method as claimed in claim 4 wherein the step of calculating the gradient includes the step of calculating means of data along paths and gradients based on the thicknesses of the components.
7. The method as claimed in claim 2 wherein the argument is density of the materials at each image voxel.
8. The method as claimed in claim 1 further comprising calibrating the spectrum of the X-ray CT scan.
9. The method as claimed in claim 1 further comprising displaying the final corrected image.
10. The method as claimed in claim 2 wherein the step of calculating the gradient includes the step of utilizing ordered subsets to accelerate convergence of the algorithm.
11. The method as claimed in claim 2 wherein the cost function has a regularizing penalty term.
12. An image reconstructor apparatus for statistically reconstructing a polyenergetic X-ray computed tomography image to obtain a corrected image, the apparatus comprising: means for providing a computed tomography initial image produced by a single X-ray CT scan having a polyenergetic source spectrum wherein the initial image has components of materials which cause beam hardening artifacts; means for separating the initial image into different sections to obtain a segmented image; and means for calculating a series of intermediate corrected images based on the segmented image utilizing a statistical algorithm which accounts for the polyenergetic source spectrum and which converges to obtain a final corrected image which has significantly reduced beam hardening artifacts.
13. The apparatus as claimed in claim 12 wherein the means for calculating includes means for calculating a gradient of a cost function having an argument and means for utilizing the gradient to minimize the cost function with respect to its argument.
14. The apparatus as claimed in claim 13 wherein the means for calculating the gradient includes means for back projecting.
15. The apparatus as claimed in claim 13 wherein the means for calculating the gradient includes means for calculating thicknesses of the components.
16. The apparatus as claimed in claim 15 wherein the means for calculating thicknesses includes means for reprojecting the segmented image.
17. The apparatus as claimed in claim 15 wherein the means for calculating the gradient includes means for calculating means of data along paths and gradients based on the thicknesses of the components.
18. The apparatus as claimed in claim 13 wherein the argument is density of the materials at each image voxel.
19. The apparatus as claimed in claim 12 wherein the means for calculating the gradient includes means for utilizing ordered subsets to accelerate convergence of the algorithm.
20. The apparatus as claimed in claim 13 wherein the cost function has a regularizing penalty term.
PCT/US2001/004894 2001-02-15 2001-02-15 Statistically reconstructing an x-ray computed tomography image with beam hardening corrector WO2002067201A1 (en)



Cited By (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8570323B2 (en) 2003-02-18 2013-10-29 Koninklijke Philips N.V. Volume visualization using tissue mix
WO2004075117A1 (en) * 2003-02-18 2004-09-02 Koninklijke Philips Electronics N.V. Volume visualization using tissue mix
JP2006518074A (en) * 2003-02-18 2006-08-03 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Volume visualization using tissue mixture
US7454044B2 (en) 2003-02-18 2008-11-18 Koninkklijke Philips Electronics, N.V. Volume visualization using tissue mix
WO2004075116A1 (en) * 2003-02-18 2004-09-02 Koninklijke Philips Electronics N.V., Volume visualization using tissue mix
US7254209B2 (en) 2003-11-17 2007-08-07 General Electric Company Iterative CT reconstruction method using multi-modal edge information
US7526060B2 (en) 2004-03-10 2009-04-28 Koninklijke Philips Electronics N.V. Artifact correction
EP1677255A3 (en) * 2004-12-30 2012-08-08 GE Healthcare Finland Oy Method and arrangement for three-dimensional medical X-ray imaging
US8897528B2 (en) 2006-06-26 2014-11-25 General Electric Company System and method for iterative image reconstruction
US7924968B2 (en) 2007-04-23 2011-04-12 Koninklijke Philips Electronics N.V. Imaging system for imaging a region of interest from energy-dependent projection data
US9167027B2 (en) 2007-08-27 2015-10-20 PME IP Pty Ltd Fast file server methods and systems
US10038739B2 (en) 2007-08-27 2018-07-31 PME IP Pty Ltd Fast file server methods and systems
US9860300B2 (en) 2007-08-27 2018-01-02 PME IP Pty Ltd Fast file server methods and systems
US11075978B2 (en) 2007-08-27 2021-07-27 PME IP Pty Ltd Fast file server methods and systems
US10686868B2 (en) 2007-08-27 2020-06-16 PME IP Pty Ltd Fast file server methods and systems
US11902357B2 (en) 2007-08-27 2024-02-13 PME IP Pty Ltd Fast file server methods and systems
US9531789B2 (en) 2007-08-27 2016-12-27 PME IP Pty Ltd Fast file server methods and systems
US11516282B2 (en) 2007-08-27 2022-11-29 PME IP Pty Ltd Fast file server methods and systems
US9454813B2 (en) 2007-11-23 2016-09-27 PME IP Pty Ltd Image segmentation assignment of a volume by comparing and correlating slice histograms with an anatomic atlas of average histograms
US9728165B1 (en) 2007-11-23 2017-08-08 PME IP Pty Ltd Multi-user/multi-GPU render server apparatus and methods
US11900501B2 (en) 2007-11-23 2024-02-13 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9355616B2 (en) 2007-11-23 2016-05-31 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10311541B2 (en) 2007-11-23 2019-06-04 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11514572B2 (en) 2007-11-23 2022-11-29 PME IP Pty Ltd Automatic image segmentation methods and analysis
US9019287B2 (en) 2007-11-23 2015-04-28 Pme Ip Australia Pty Ltd Client-server visualization system with hybrid data processing
US11328381B2 (en) 2007-11-23 2022-05-10 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11315210B2 (en) 2007-11-23 2022-04-26 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11900608B2 (en) 2007-11-23 2024-02-13 PME IP Pty Ltd Automatic image segmentation methods and analysis
US11244650B2 (en) 2007-11-23 2022-02-08 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US9595242B1 (en) 2007-11-23 2017-03-14 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US10380970B2 (en) 2007-11-23 2019-08-13 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US11640809B2 (en) 2007-11-23 2023-05-02 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US10430914B2 (en) 2007-11-23 2019-10-01 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10614543B2 (en) 2007-11-23 2020-04-07 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10825126B2 (en) 2007-11-23 2020-11-03 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10762872B2 (en) 2007-11-23 2020-09-01 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US9904969B1 (en) 2007-11-23 2018-02-27 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9984460B2 (en) 2007-11-23 2018-05-29 PME IP Pty Ltd Automatic image segmentation methods and analysis
US10706538B2 (en) 2007-11-23 2020-07-07 PME IP Pty Ltd Automatic image segmentation methods and analysis
US10043482B2 (en) 2007-11-23 2018-08-07 PME IP Pty Ltd Client-server visualization system with hybrid data processing
WO2011036624A1 (en) 2009-09-24 2011-03-31 Koninklijke Philips Electronics N.V. System and method for generating an image of a region of interest
US8705831B2 (en) 2009-09-24 2014-04-22 Koninklijke Philips N.V. System and method for generating an image of a region of interest
EP2663964A1 (en) * 2011-01-10 2013-11-20 Koninklijke Philips N.V. Dual-energy tomographic imaging system
US20130121553A1 (en) * 2011-11-16 2013-05-16 General Electric Company Method and apparatus for statistical iterative reconstruction
US8885903B2 (en) 2011-11-16 2014-11-11 General Electric Company Method and apparatus for statistical iterative reconstruction
US11129578B2 (en) 2013-03-15 2021-09-28 PME IP Pty Ltd Method and system for rule based display of sets of images
US11296989B2 (en) 2013-03-15 2022-04-05 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10320684B2 (en) 2013-03-15 2019-06-11 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10373368B2 (en) 2013-03-15 2019-08-06 PME IP Pty Ltd Method and system for rule-based display of sets of images
US10070839B2 (en) 2013-03-15 2018-09-11 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11701064B2 (en) 2013-03-15 2023-07-18 PME IP Pty Ltd Method and system for rule based display of sets of images
US9509802B1 (en) 2013-03-15 2016-11-29 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10540803B2 (en) 2013-03-15 2020-01-21 PME IP Pty Ltd Method and system for rule-based display of sets of images
US9524577B1 (en) 2013-03-15 2016-12-20 PME IP Pty Ltd Method and system for rule based display of sets of images
US10631812B2 (en) 2013-03-15 2020-04-28 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11916794B2 (en) 2013-03-15 2024-02-27 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US11666298B2 (en) 2013-03-15 2023-06-06 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US8976190B1 (en) 2013-03-15 2015-03-10 Pme Ip Australia Pty Ltd Method and system for rule based display of sets of images
US10762687B2 (en) 2013-03-15 2020-09-01 PME IP Pty Ltd Method and system for rule based display of sets of images
US10764190B2 (en) 2013-03-15 2020-09-01 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US9898855B2 (en) 2013-03-15 2018-02-20 PME IP Pty Ltd Method and system for rule based display of sets of images
US10820877B2 (en) 2013-03-15 2020-11-03 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US10832467B2 (en) 2013-03-15 2020-11-10 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11244495B2 (en) 2013-03-15 2022-02-08 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11129583B2 (en) 2013-03-15 2021-09-28 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11810660B2 (en) 2013-03-15 2023-11-07 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US11763516B2 (en) 2013-03-15 2023-09-19 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US9749245B2 (en) 2013-03-15 2017-08-29 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
EP2997900A4 (en) * 2013-05-15 2017-01-04 Kyoto University X-ray ct image processing method, x-ray ct image processing program, and x-ray ct image device
WO2015042314A1 (en) * 2013-09-18 2015-03-26 Imagerecon, Llc Method and system for statistical modeling of data using a quadratic likelihood functional
US10068327B2 (en) 2013-09-18 2018-09-04 Siemens Medical Solutions Usa, Inc. Method and system for statistical modeling of data using a quadratic likelihood functional
JP2016534482A (en) * 2013-09-18 2016-11-04 イメージレコン, リミティッド ライアビリティ カンパニーImageRecon, LLC Method and system for statistical modeling of data using second-order likelihood functionals
US20150287237A1 (en) * 2014-04-04 2015-10-08 Decision Sciences International Corporation Muon tomography imaging improvement using optimized limited angle data
WO2015154054A1 (en) * 2014-04-04 2015-10-08 Decision Sciences International Corporation Muon tomography imaging improvement using optimized limited angle data
US9639973B2 (en) 2014-04-04 2017-05-02 Decision Sciences International Corporation Muon tomography imaging improvement using optimized limited angle data
US9984478B2 (en) 2015-07-28 2018-05-29 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11017568B2 (en) 2015-07-28 2021-05-25 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US10395398B2 (en) 2015-07-28 2019-08-27 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11620773B2 (en) 2015-07-28 2023-04-04 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11599672B2 (en) 2015-07-31 2023-03-07 PME IP Pty Ltd Method and apparatus for anonymized display and data export
CN105976412B (en) * 2016-05-25 2018-08-24 天津商业大学 CT image reconstruction method for low-tube-current scans based on offline-dictionary sparse regularization
CN105976412A (en) * 2016-05-25 2016-09-28 天津商业大学 CT image reconstruction method for low-tube-current scans based on offline-dictionary sparse regularization
CN107730455A (en) * 2016-08-11 2018-02-23 通用电气公司 Method and device for obtaining MAR images
US11669969B2 (en) 2017-09-24 2023-06-06 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US10909679B2 (en) 2017-09-24 2021-02-02 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
CN108364326B (en) * 2018-02-08 2021-04-02 中南大学 CT imaging method
CN108364326A (en) * 2018-02-08 2018-08-03 中南大学 CT imaging method
CN109009181A (en) * 2018-06-07 2018-12-18 西安交通大学 Method for simultaneously estimating the X-ray tube spectrum and reconstructed image under dual-energy CT
CN109009181B (en) * 2018-06-07 2024-04-05 西安交通大学 Method for simultaneously estimating spectrum and reconstructed image of X-ray tube under dual-energy CT
CN113362404A (en) * 2020-03-05 2021-09-07 上海西门子医疗器械有限公司 Scatter correction method, device and storage medium for computer tomography
CN113362404B (en) * 2020-03-05 2024-03-22 上海西门子医疗器械有限公司 Scatter correction method, apparatus and storage medium for computed tomography
CN112581556A (en) * 2020-12-25 2021-03-30 上海联影医疗科技股份有限公司 Multi-energy CT image hardening correction method and device, computer equipment and storage medium
US11972024B2 (en) 2023-02-14 2024-04-30 PME IP Pty Ltd Method and apparatus for anonymized display and data export

Similar Documents

Publication Publication Date Title
US6507633B1 (en) Method for statistically reconstructing a polyenergetic X-ray computed tomography image and image reconstructor apparatus utilizing the method
WO2002067201A1 (en) Statistically reconstructing an x-ray computed tomography image with beam hardening corrector
Elbakri et al. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation
US9245320B2 (en) Method and system for correcting artifacts in image reconstruction
Elbakri et al. Statistical image reconstruction for polyenergetic X-ray computed tomography
Thibault et al. A three‐dimensional statistical approach to improved image quality for multislice helical CT
US9036885B2 (en) Image reconstruction in computed tomography
Siltanen et al. Statistical inversion for medical x-ray tomography with few radiographs: I. General theory
CN102667852B Enhanced image data / reduced radiation dose
Zbijewski et al. Efficient Monte Carlo based scatter artifact reduction in cone-beam micro-CT
La Riviere et al. Reduction of noise-induced streak artifacts in X-ray computed tomography through spline-based penalized-likelihood sinogram smoothing
Van Slambrouck et al. Metal artifact reduction in computed tomography using local models in an image block‐iterative scheme
Fu et al. Comparison between pre-log and post-log statistical models in ultra-low-dose CT reconstruction
US10395397B2 (en) Metal artifacts reduction for cone beam CT
US7983462B2 (en) Methods and systems for improving quality of an image
WO2003071483A2 (en) Method for statistically reconstructing images from a plurality of transmission measurements having energy diversity and image reconstructor apparatus utilizing the method
Xu et al. Statistical projection completion in X-ray CT using consistency conditions
Zamyatin et al. Extension of the reconstruction field of view and truncation correction using sinogram decomposition
Yoon et al. Simultaneous segmentation and reconstruction: A level set method approach for limited view computed tomography
JP7341879B2 (en) Medical image processing device, X-ray computed tomography device and program
EP1716537B1 (en) Apparatus and method for the processing of sectional images
Xu et al. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
CN114387359A (en) Three-dimensional X-ray low-dose imaging method and device
Tang et al. Using algebraic reconstruction in computed tomography
Karimi et al. Metal artifact reduction for CT-based luggage screening

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP