CN113034641A - Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding - Google Patents

Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Info

Publication number
CN113034641A
Authority
CN
China
Prior art keywords
scale
image
wavelet
reconstruction
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110331020.7A
Other languages
Chinese (zh)
Other versions
CN113034641B (en)
Inventor
刘进
亢艳芹
强俊
王勇
夏振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Polytechnic University
Original Assignee
Anhui Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Polytechnic University
Priority to CN202110331020.7A
Publication of CN113034641A
Application granted
Publication of CN113034641B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2211/00 Image generation
    • G06T 2211/40 Computed tomography
    • G06T 2211/421 Filtered back projection [FBP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2211/00 Image generation
    • G06T 2211/40 Computed tomography
    • G06T 2211/424 Iterative
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2211/00 Image generation
    • G06T 2211/40 Computed tomography
    • G06T 2211/432 Truncation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, belonging to the technical field of computed tomography. The method first applies a wavelet transform to a high-quality CT sample image to obtain high-frequency coefficient images, performs multi-scale convolutional feature learning on these high-frequency coefficients, and constructs a multi-scale filter dictionary. The constructed multi-scale filter dictionary is then introduced to establish a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding. The reconstruction model is decomposed by variable splitting into a convolutional feature-learning update objective function and a reconstructed-image update objective function. Finally, the reconstructed image and the multi-scale filter dictionary are updated through an alternating iteration strategy to obtain the final reconstructed image. The invention can effectively suppress the streak artifacts and detail loss in sparse angle CT reconstruction, improve the contrast of the reconstructed image, and promote the use of sparse angle CT scanning in clinical diagnosis and treatment.

Description

Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding
Technical Field
The invention relates to the technical field of computed tomography, in particular to a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding.
Background
Computed Tomography (CT) is an imaging technique that exploits differences in X-ray attenuation among the components of an object and presents internal structural information through a reconstruction algorithm, enabling non-invasive examination. CT imaging has a series of advantages such as high spatial resolution, low scanning cost and short scan time; in clinical practice it is complementary to magnetic resonance imaging, ultrasound imaging, positron emission tomography and other techniques, provides an image basis for disease screening, diagnosis and treatment, and is currently one of the indispensable medical devices in hospitals at all levels. However, excessive X-ray exposure can damage tissue cells and increase the risk of disease. Studies have shown that in a conventional helical CT scan the person examined may receive a radiation dose of 1.5–10 mSv, far higher than the 0.2–0.5 mSv of a conventional chest examination. Radiation also has a cumulative effect, so the harm grows as the number of examinations increases, and certain groups (such as children, pregnant women and the elderly) are more vulnerable. For this reason, the International Commission on Radiological Protection has recommended that the X-ray dose be reduced as much as possible without compromising image-based diagnosis.
Adopting sparse-angle scanning, i.e. reducing the number of projection angles, is an effective way to reduce X-ray exposure. However, reducing the ray sampling leads to loss of acquired signal and therefore degrades the reconstructed image: tissue details are lost and streak artifacts increase, which may cause missed diagnoses or misdiagnoses when physicians read the images. To improve sparse-angle CT imaging: on the one hand, from the perspective of the CT image, researchers have designed dedicated image restoration and processing algorithms to suppress artifacts and enhance image details; however, the appearance of artifacts in CT images varies greatly across scanning devices, scanning modes and reconstruction methods, which limits the generalization ability of such methods. On the other hand, from the perspective of the CT projection data, the raw data or the log-transformed projection data can be restored and corrected to improve the consistency of the projections and thereby improve the reconstruction; however, because projection data are highly sensitive, under-correction, over-correction and low data consistency easily occur during processing. In addition, improved reconstruction algorithms are another main route to better imaging, and in recent years a large number of iterative reconstruction algorithms have been proposed and have achieved excellent performance, especially statistical iterative reconstruction algorithms constrained by prior information. The main problems faced by this type of algorithm are, however: many hyperparameters that are difficult to optimize adaptively; high algorithmic complexity requiring repeated iterative computation; and unstable prior information without a prior term under a unified framework, so that the value of iterative reconstruction in clinical application scenarios is difficult to realize fully. Although many problems remain in sparse-scan imaging, it will be an important topic in future CT research and a main direction for the development of X-ray imaging.
Sparse feature learning, used as a prior model to form a constraint term, has been widely applied to sparse-angle CT reconstruction. Sparse feature learning methods have shown excellent performance and have greatly advanced the practicality of sparse-angle CT imaging algorithms. Such methods mainly construct a dictionary from training samples and use the dictionary to sparsely encode signals, and they have attracted wide attention in feature recognition, classification, image restoration and related fields. However, traditional sparse feature learning has a limited capability for extracting prior information; how to expand and enhance the feature-learning capability, and how to design multi-scale feature coding forms that give full play to its advantages so as to better serve low-dose CT imaging, is a key problem in the development of clinical CT imaging.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to solve the problems of prior-art sparse angle CT reconstruction methods, such as low image quality, residual artifacts, loss of tissue detail and low contrast, and provides a sparse angle CT reconstruction method based on wavelet multi-scale convolutional feature coding, referred to as Wavelet Multi-Scale Convolutional feature coding constrained Reconstruction (WMCR). Without changing existing CT hardware, the method improves the ability to perceive, encode and decode feature information by means of convolutional feature learning at multiple scales, obtains rich prior knowledge, and serves high-quality sparse angle CT reconstruction. By suppressing the image artifacts and detail loss caused by missing scanning angles, the invention improves the quality of sparse angle CT reconstructed images, ultimately reducing extra radiation for the patient and increasing the benefit of diagnosis and treatment.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the invention discloses a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, which comprises the following steps of:
step 1, obtaining dictionary atoms of an initial multi-scale filter;
for a given high quality CT sample image xsPerforming wavelet transform to obtain high-frequency and low-frequency wavelet coefficients, wherein the high-frequency coefficient is partially expressed as
Figure BDA0002996179850000021
And
Figure BDA0002996179850000022
sub-band signals in the horizontal direction, the vertical direction and the diagonal direction respectively; to F0Performing multi-scale convolution characteristic learning to obtain initial multi-scale filter dictionary atoms
Figure BDA0002996179850000023
The learning model is represented as:
Figure BDA0002996179850000024
wherein, K is the scale number of convolution kernels, N is the number of convolution kernels under a single scale,
Figure BDA0002996179850000025
is a feature map of the corresponding atom, beta is a regularization parameter;
step 2, constructing a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding;
step 3, decomposition of the reconstruction model: performing fixed-variable decomposition on the reconstruction model to obtain a convolutional feature learning update objective function and an image-to-be-reconstructed update objective function;
and step 4, solving the convolutional feature learning update objective function and the image-to-be-reconstructed update objective function in an alternating manner to obtain a final reconstruction result.
Further, the reconstruction model constructed in step 2 is represented as:

$$\min_{x,\{d_{n,k}\},\{M_{n,k}\}}\ \frac{1}{2}\|Ax - p\|_{2}^{2} + \frac{\lambda}{2}\Big\|\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k} * M_{n,k} - Wx\Big\|_{2}^{2} + \beta\sum_{k=1}^{K}\sum_{n=1}^{N}\big\|M_{n,k}\big\|_{1}\quad \text{s.t. } \|d_{n,k}\|_{2}^{2}\le 1 \qquad (2)$$

wherein * denotes the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, d_{n,k} are the multi-scale filter dictionary atoms, and M_{n,k} are the corresponding feature maps.
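As an illustrative aid only (the function and argument names below are assumptions introduced for this sketch, and the forward projector and wavelet operator are passed in as callables rather than taken from any particular CT toolbox), the structure of this objective can be written as a short evaluation routine in Python:

```python
import numpy as np

def wmcr_objective(x, p, forward_project, wavelet_highfreq, synthesis, l1_of_M, lam, beta):
    """Evaluate the reconstruction objective of formula (2) for a candidate image x.

    forward_project  : callable for the CT system matrix A (image -> projections)
    wavelet_highfreq : callable for the operator W (image -> stacked high-frequency sub-bands)
    synthesis        : precomputed sum over scales and atoms of d_{n,k} * M_{n,k}
    l1_of_M          : precomputed sum of |M_{n,k}| over all feature maps
    """
    data_fidelity = 0.5 * np.sum((forward_project(x) - p) ** 2)
    coding_prior = 0.5 * lam * np.sum((synthesis - wavelet_highfreq(x)) ** 2)
    return data_fidelity + coding_prior + beta * l1_of_M
```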
Furthermore, the convolutional feature learning update objective function and the image-to-be-reconstructed update objective function obtained by decomposition in step 3 are respectively expressed as:

$$\{d_{n,k}^{t+1}, M_{n,k}^{t+1}\} = \arg\min_{\{d_{n,k}\},\{M_{n,k}\}}\ \frac{\lambda}{2}\Big\|\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k} * M_{n,k} - Wx^{t}\Big\|_{2}^{2} + \beta\sum_{k=1}^{K}\sum_{n=1}^{N}\big\|M_{n,k}\big\|_{1}\quad \text{s.t. } \|d_{n,k}\|_{2}^{2}\le 1 \qquad (3)$$

$$x^{t+1} = \arg\min_{x}\ \frac{1}{2}\|Ax - p\|_{2}^{2} + \frac{\lambda}{2}\Big\|\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k}^{t+1} * M_{n,k}^{t+1} - Wx\Big\|_{2}^{2} \qquad (4)$$

wherein * denotes the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, x^t is the image to be reconstructed after the t-th update (t ≥ 0), d_{n,k}^t are the multi-scale filter dictionary atoms after the t-th update, and M_{n,k}^t are the feature maps of the corresponding atoms after the t-th update.
Furthermore, in the wavelet transform in the step 1, 1-layer two-dimensional stationary wavelet transform is adopted, and Haar wavelet bases are selected.
Further, the parameters of the multi-scale filter in step 1 are: 2 ≤ K ≤ 5, 32 ≤ N ≤ 64 at a single scale, and the convolution kernel size can be selected from 6×3 to 14×3.
Furthermore, in the learning model formula (1) in the step 1, an alternating direction multiplier algorithm is adopted to solve, and an initial multi-scale filter dictionary atom is obtained.
Furthermore, the operation steps of the wavelet-transform high-frequency coefficient extraction operator W in step 2 are as follows: first, a 1-layer two-dimensional stationary wavelet transform is performed on the image, with the Haar wavelet basis selected; then the sub-band signals of the high-frequency coefficient part in the horizontal, vertical and diagonal directions are selected; and finally the sub-band signals of the three directions are stacked in sequence along a third dimension.
Further, when t = 0 in step 3, the initial multi-scale filter dictionary atoms are those obtained in step 1, and the initial image to be reconstructed x^0 is obtained by reconstructing the image with a filtered back projection algorithm using a ramp filter.
Furthermore, in step 4, the convolutional feature learning update objective function, formula (3), is solved with an alternating direction multiplier algorithm, and the image-to-be-reconstructed update objective function, formula (4), is solved with a paraboloidal surrogate algorithm.
Furthermore, in step 4, the iteration stops when the images to be reconstructed before and after an iteration satisfy RMSE(x^{t+1} − x^t) ≤ 30, and the final reconstruction result is output, wherein RMSE(·) is the root-mean-square error operator.
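A minimal sketch of this alternating strategy is given below; the helper names update_features and update_image are placeholders for solvers of formulas (3) and (4), not functions defined by the invention.

```python
import numpy as np

def rmse(a):
    """Root-mean-square error operator RMSE(.) used in the stopping rule."""
    return np.sqrt(np.mean(np.asarray(a, dtype=float) ** 2))

def alternate_wmcr(x0, D0, p, update_features, update_image, tol=30.0, max_iter=200):
    """Alternately solve the feature-learning update (3) and the image update (4)
    until RMSE(x^{t+1} - x^t) <= tol, then return the final reconstruction."""
    x, D = x0, D0
    for _ in range(max_iter):
        D, M = update_features(x, D)        # e.g. ADMM on formula (3)
        x_next = update_image(x, D, M, p)   # e.g. paraboloidal surrogate step(s) on formula (4)
        if rmse(x_next - x) <= tol:
            return x_next
        x = x_next
    return x
```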
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
the invention discloses a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding, which comprises the steps of firstly, carrying out wavelet transformation on a high-quality CT sample image to obtain a high-frequency coefficient image, carrying out multi-scale convolution feature learning on the high-frequency coefficient image, and constructing a multi-scale filter dictionary; then, establishing a wavelet domain high-frequency coefficient convolution characteristic coding constrained reconstruction model by taking the constructed multi-scale filter dictionary as an initial value; then, carrying out variable decomposition on the reconstructed model, and dividing the reconstructed model into reconstructed image updating and multi-scale filter dictionary updating; and finally, updating the reconstructed image and the multi-scale filter dictionary through an alternate iteration strategy to obtain a final reconstructed image. By introducing wavelet domain multi-scale convolution feature coding prior into reconstruction, the problems of stripe artifacts and detail loss in sparse scan angle reconstruction of the conventional reconstruction method can be effectively solved. Experimental results prove that under the condition of scanning data of various Sparse angles, compared with a traditional Wavelet domain Convolutional Sparse Coding reconstruction method (WCSC for short), the method (WMCR) can effectively inhibit the problems of strip artifacts and detail loss caused by the loss of projection angles in a reconstructed image, and the reconstructed image has better visual effect and contrast. The method is expected to provide an advanced and practical sparse angle reconstruction frame for image departments and CT manufacturers of domestic hospitals, reduces additional radiation for patients, increases diagnosis and treatment benefits, and has high application and popularization prospects.
Drawings
FIG. 1 is a schematic flow chart of a sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding in the present invention;
FIG. 2 is a reconstructed image of 180 scan angle projection data of an abdomen in an embodiment (a: a reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
FIG. 3 is a reconstructed image of 120 scan angle projection data of the abdomen in the embodiment (a: a reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
FIG. 4 is a diagram of a filter dictionary set after abdominal reconstruction in an embodiment (a: 180 angular scanning experiment; b: 120 angular scanning experiment);
FIG. 5 is a reconstructed image of 180-scan-angle projection data of the chest in the embodiment (a: a reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
FIG. 6 is a reconstructed image of 120-scan-angle projection data of the chest in the embodiment (a: a reference image; b: FBP reconstructed image; c: WCSC reconstructed image; d: WMCR reconstructed image);
fig. 7 is a Profile curve of a reconstructed map of chest projection data in an embodiment (a: 180 scan angles; b: 120 scan angles).
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The present invention will be further described with reference to the following examples.
Example 1
A flowchart of a sparse angle CT reconstruction method based on wavelet multi-scale convolutional feature coding according to this embodiment is shown in fig. 1, and the specific steps are as follows:
step 1, obtaining dictionary atoms of an initial multi-scale filter;
for a given high quality CT sample image xsPerforming wavelet transform to obtain high-frequency and low-frequency wavelet coefficients, wherein the high-frequency coefficient part can be expressed as
Figure BDA0002996179850000051
And
Figure BDA0002996179850000052
sub-band signals in the horizontal direction, the vertical direction and the diagonal direction respectively; to F0Performing multi-scale convolution characteristic learning to obtain initial multi-scale filter dictionary atoms
Figure BDA0002996179850000053
The learning model may be represented as:
Figure BDA0002996179850000054
wherein, K is the scale number of convolution kernels, N is the number of convolution kernels under a single scale,
Figure BDA0002996179850000055
beta is a regularization parameter for the feature map of the corresponding atom.
In particular, for a given high quality CT sample image xsAnd performing 1-layer two-dimensional stationary wavelet transform, and selecting a Haar wavelet base. The multi-scale filter parameters are: k is more than or equal to 2 and less than or equal to 5, N is more than or equal to 32 and less than or equal to 64 under the condition of single scale, the size of the convolution kernel can be selected from 6 multiplied by 3 to 14 multiplied by 3, and the specific size is manually selected according to factors such as the scanning angle number, the size of a computer storage space, the quality of an image to be reconstructed and the like. Beta is a regularization parameter that is manually adjusted according to specific data. After solving equation (1) using an alternating direction multiplier algorithm, an initial multi-scale filter dictionary atom is obtained
Figure BDA0002996179850000061
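Purely to illustrate the structure of formula (1) (this is not the solver actually used), the objective can be evaluated with an FFT-based convolution as sketched below; for brevity the sketch treats F_0 as a single 2-D sub-band, whereas the method stacks the three sub-bands along a third dimension, and all names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_learning_objective(filters, feature_maps, F0, beta):
    """Evaluate the convolutional feature-learning objective of formula (1).

    filters      : nested list over scales k and atoms n of 2-D kernels d_{n,k}
    feature_maps : matching nested list of 2-D feature maps M_{n,k}, same size as F0
    F0           : high-frequency wavelet coefficient image of the sample
    beta         : sparsity regularization parameter
    """
    synthesis = np.zeros_like(F0, dtype=float)
    l1_penalty = 0.0
    for kernels_k, maps_k in zip(filters, feature_maps):
        for d, m in zip(kernels_k, maps_k):
            synthesis += fftconvolve(m, d, mode='same')   # d_{n,k} * M_{n,k}
            l1_penalty += np.abs(m).sum()
    return 0.5 * np.sum((synthesis - F0) ** 2) + beta * l1_penalty
```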
Step 2, constructing a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding;
the reconstructed model constructed can be expressed as:
Figure BDA0002996179850000062
wherein, X is convolution operator, K is convolution kernel scale number, N is convolution kernel number under single scale, A is projection matrix of CT system, x is image to be reconstructed, p is sparse projection data, W is wavelet transform high-frequency coefficient extraction operator, lambda and beta are regularization parameters, d is regularization parametern,kFor multi-scale filter dictionary atoms, Mn,kIs a characteristic diagram of the corresponding atom.
Specifically, the operation steps of the wavelet-transform high-frequency coefficient extraction operator W in the sparse angle CT reconstruction model are as follows: first, a 1-layer two-dimensional stationary wavelet transform is performed on the image, with the Haar wavelet basis selected; then the sub-band signals of the high-frequency coefficient part in the horizontal, vertical and diagonal directions are selected; and finally the sub-band signals of the three directions are stacked in sequence along a third dimension. After the operator W is applied, three-dimensional data are obtained whose first and second dimensions equal the size of the image to be reconstructed and whose third dimension has size 3. The regularization parameter λ is selected empirically according to the specific data.
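A minimal sketch of this operator using PyWavelets is given below, assuming image sides divisible by two (as the stationary transform requires); zeroing the approximation band in the adjoint is a simplification made for this sketch, not necessarily the exact W^T used by the method.

```python
import numpy as np
import pywt

def wavelet_highfreq(x):
    """Operator W: 1-level 2-D stationary Haar transform, keeping only the
    horizontal, vertical and diagonal sub-bands stacked along a third dimension."""
    (cA, (cH, cV, cD)), = pywt.swt2(x, wavelet='haar', level=1)
    return np.stack([cH, cV, cD], axis=-1)            # shape: (H, W, 3)

def wavelet_highfreq_adjoint(F):
    """Rough stand-in for W^T: inverse stationary transform with the
    approximation band set to zero (an assumption made for this sketch)."""
    cA = np.zeros(F.shape[:2])
    return pywt.iswt2([(cA, (F[..., 0], F[..., 1], F[..., 2]))], wavelet='haar')
```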
Step 3, decomposition of a reconstruction model:
performing fixed variable decomposition on the reconstruction model formula (2) to obtain a convolution feature learning update objective function and an image to be reconstructed update objective function, which are respectively expressed as:
Figure BDA0002996179850000063
Figure BDA0002996179850000064
wherein, X is convolution operator, K is convolution kernel scale degree, N is scale convolution kernel number, A is projection matrix, p is projection data, W is wavelet transform high-frequency coefficient extraction operator, lambda and beta are regularization parameters, and x istIs the image to be reconstructed after the t (0 is less than or equal to t) time of updating,
Figure BDA0002996179850000065
for the multi-scale filter dictionary atom after the t-th update,
Figure BDA0002996179850000066
is the characteristic diagram of the corresponding atom after the t-th updating.
Specifically, in the convolutional feature learning update objective function and the image-to-be-reconstructed update objective function, when t = 0 the initial multi-scale filter dictionary atoms and feature maps are those obtained in step 1, and the initial image to be reconstructed x^0 is obtained by reconstructing the image with a Filtered Back Projection (FBP) algorithm.
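For illustration, an initial image of this kind can be produced with the filtered back projection routine of scikit-image (used here only as a stand-in for the CT system's own FBP; the phantom, the angle set and the filter_name argument of recent scikit-image versions are assumptions of this sketch):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate a sparse-angle sinogram and form the initial image x^0 by FBP with a ramp filter.
angles_deg = np.linspace(0.0, 180.0, 120, endpoint=False)      # 120 sparse view angles
phantom = shepp_logan_phantom()
sinogram = radon(phantom, theta=angles_deg, circle=False)
x0 = iradon(sinogram, theta=angles_deg, filter_name='ramp', circle=False)
```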
Step 4, solving the convolutional feature learning update objective function and the image-to-be-reconstructed update objective function in an alternating manner to obtain the final reconstructed image.
Specifically, the convolutional feature learning update objective function, formula (3), is solved with an alternating direction multiplier algorithm. Let D = (d_{1,1}, d_{1,2}, …, d_{N,K}) be the vectorized filter dictionary and M = (M_{1,1}, M_{1,2}, …, M_{N,K}) the vectorized feature set, so that

$$\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k} * M_{n,k}$$

can be written compactly as the matrix product DM (where the operation between matrix elements is still a convolution). Adding the auxiliary variables C and F for the vectorized feature set M and the filter dictionary D, the solution of equation (3) can include the following:

$$M^{t+1} = \arg\min_{M}\ \frac{\lambda}{2}\big\|D^{t}M - Wx^{t}\big\|_{2}^{2} + \frac{\rho_{1}}{2}\big\|M - C^{t} + u^{t}\big\|_{2}^{2} \qquad (3\text{-}1)$$

$$C^{t+1} = \arg\min_{C}\ \beta\|C\|_{1} + \frac{\rho_{1}}{2}\big\|M^{t+1} - C + u^{t}\big\|_{2}^{2} \qquad (3\text{-}2)$$

$$u^{t+1} = u^{t} + M^{t+1} - C^{t+1} \qquad (3\text{-}3)$$

$$D^{t+1} = \arg\min_{D}\ \frac{\lambda}{2}\big\|DM^{t+1} - Wx^{t}\big\|_{2}^{2} + \frac{\rho_{2}}{2}\big\|D - F^{t} + h^{t}\big\|_{2}^{2} \qquad (3\text{-}4)$$

$$F^{t+1} = \mathrm{Proj}\big(D^{t+1} + h^{t}\big) \qquad (3\text{-}5)$$

$$h^{t+1} = h^{t} + D^{t+1} - F^{t+1} \qquad (3\text{-}6)$$

where u and h are the scaled dual auxiliary variables of the solution, ρ_1 and ρ_2 are the Lagrangian penalty parameters, whose sizes can be set to ρ_1 = 50β + 1 and ρ_2 = 1, and Proj(·) is a projection-truncation operation. F is initialized by truncating (zero-padding) the filters so that the codes have the same size as the image data to be reconstructed, with F^0 = D^0, C^0 = M^0, h^0 = 0 and u^0 = 0. Formula (3-1) is the feature-map update and formula (3-4) is the filter-dictionary update; the solutions of (3-1) and (3-4) can be obtained by linear solving in the three-dimensional Fourier domain, formula (3-2) is solved with a soft-threshold shrinkage algorithm, and formula (3-5) is obtained by three-dimensional Fourier transform and projection truncation. The image-to-be-reconstructed update objective function, formula (4), is solved with a paraboloidal surrogate algorithm, which can be expressed specifically as:

$$x^{t+1} = x^{t} - \frac{A^{T}\big(Ax^{t} - p\big) + \lambda W^{T}\big(Wx^{t} - DM\big)}{A^{T}A\mathbf{1} + \lambda} \qquad (4\text{-}1)$$

wherein A^T is the back-projection operator of the CT system, W^T is the inverse transform operation for the wavelet high-frequency coefficients, and 1 is a vector of all ones. Finally, formulas (3-1), (3-2), (3-3), (3-4), (3-5), (3-6) and (4-1) are solved alternately in sequence, and the iteration is repeated until the images to be reconstructed before and after an iteration satisfy RMSE(x^{t+1} − x^t) ≤ 30, at which point the final reconstruction result is output, wherein RMSE(·) is the root-mean-square error operator.
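Two of the inner operations can be sketched compactly in Python; the soft-threshold operator below corresponds to the shrinkage used for (3-2), and the image step mirrors (4-1) with the projector, back-projector and wavelet operators passed in as callables. All names are illustrative, and forming the denominator explicitly (rather than precomputing it) is a simplification of this sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-threshold shrinkage used to solve the sparse-code sub-problem (3-2)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def image_update_step(x, p, A, At, Wt, Wx, synthesis, lam):
    """One paraboloidal-surrogate-style image update in the spirit of (4-1).

    A, At     : callables for forward projection A and back projection A^T
    Wt        : callable for the inverse wavelet high-frequency transform W^T
    Wx        : W applied to the current image x
    synthesis : current DM, i.e. sum over scales and atoms of d_{n,k} * M_{n,k}
    """
    numerator = At(A(x) - p) + lam * Wt(Wx - synthesis)
    denominator = At(A(np.ones_like(x))) + lam      # A^T A 1 + lambda
    return x - numerator / denominator
```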
Criteria for evaluation of effects
In the experiment, high-quality abdominal and chest image data were used to simulate projection data at different scanning angles, and the corresponding reconstructions were performed. The parameters of the simulated scans were: 960 detector cells of size 0.78 mm, distances from the ray source to the object center and to the detector center of 50 cm and 100 cm respectively, and 180-view and 120-view acquisitions respectively; other parameters took default values. The regularization parameters λ and β were 0.02 and 0.016 for the reconstruction of the 180-view abdominal projection data and 0.025 and 0.018 for the 120-view abdominal data; for the chest data they were 0.022 and 0.018 at 180 views and 0.026 and 0.021 at 120 views, respectively. In the experiment, the number of convolution kernels at a single scale was 32, the number of scales was 3, and the convolution kernel sizes were 8×3, 10×3 and 12×3 respectively.
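For reference, the scan and algorithm settings listed above can be gathered into a single configuration; the dictionary keys are illustrative names chosen for this sketch, and the kernel sizes are recorded as printed above.

```python
# Simulation settings of the embodiment collected in one place (illustrative key names).
SIMULATION_SETTINGS = {
    "detector_bins": 960,
    "detector_cell_mm": 0.78,
    "source_to_center_cm": 50,
    "source_to_detector_cm": 100,
    "views": (180, 120),
    "regularization": {            # (lambda, beta) per experiment
        "abdomen_180": (0.020, 0.016),
        "abdomen_120": (0.025, 0.018),
        "chest_180":   (0.022, 0.018),
        "chest_120":   (0.026, 0.021),
    },
    "kernels_per_scale": 32,
    "num_scales": 3,
    "kernel_sizes": ("8x3", "10x3", "12x3"),   # as printed; presumably spatial size x 3 sub-bands
}
```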
In the figures, all reconstructed CT images are displayed with a window width of 400 HU (Hounsfield Units, HU) and a window level of 50 HU. The experiment uses both subjective and objective evaluation methods to verify the effectiveness of the algorithm. Subjective evaluation: the reconstruction effect of the present invention is assessed by comparing the FBP, WCSC and WMCR reconstructions of the sparse-angle abdominal and chest data (see FIGS. 2, 3, 5 and 6); by selecting a region of interest and drawing a profile curve over a fixed region of the reconstruction (such as the regions marked by white line segments in FIGS. 4 and 7), deviations in the reconstructed tissue details can be observed in detail. Objective evaluation: the experimental results are compared quantitatively using reference-based evaluation indexes such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM).
Subjective evaluation
By observing and comparing characteristics of the CT reconstructed images in FIGS. 2, 3, 5 and 6, such as the intensity and distribution of the streak artifacts, the tissue details, the contrast between different tissues and the texture of the reconstructed image, the relative quality of the reconstructions can be judged. As can be seen from the reconstruction results, after the number of scanning angles is reduced the FBP reconstruction is severely degraded by noise-like interference so that tissue details cannot be distinguished, while the noise artifacts in the WCSC and WMCR reconstructions are obviously suppressed; compared with the WCSC method, the WMCR method better preserves image details and improves contrast. As the number of angles decreases further, the artifacts increase and the quality of the reconstructed image gradually degrades, but the WMCR method still yields better reconstruction results and is clearly superior to the WCSC algorithm.
Objective evaluation
In addition to the subjective evaluation of the method's effectiveness in sparse-angle CT reconstruction, the experiment further uses the two quantitative indexes PSNR and SSIM to evaluate the reconstructed images and quantitatively confirm the effectiveness of the method. PSNR and SSIM are calculated as follows:
$$\mathrm{PSNR} = 10\log_{10}\frac{H_{\max}^{2}}{\frac{1}{N}\|x_{T} - x_{r}\|_{2}^{2}}$$

$$\mathrm{SSIM} = \frac{\big(2\mu_{x_{T}}\mu_{x_{r}} + C_{1}\big)\big(2\sigma_{x_{T}x_{r}} + C_{2}\big)}{\big(\mu_{x_{T}}^{2} + \mu_{x_{r}}^{2} + C_{1}\big)\big(\sigma_{x_{T}}^{2} + \sigma_{x_{r}}^{2} + C_{2}\big)}$$

wherein x_T is the image to be reconstructed after the last update, x_r is the high-quality reference image used for the simulation, and N is the total number of image pixels; H_max is the maximum value of x_r; μ_{x_T} and μ_{x_r} denote the mean CT values over all pixels of the CT images x_T and x_r respectively; σ_{x_T} and σ_{x_r} denote the standard deviations of the CT values over all pixels of x_T and x_r respectively; σ_{x_T x_r} is the covariance of x_T and x_r; and the constants are C_1 = (0.01 × H_max)² and C_2 = (0.03 × H_max)². Using the high-quality images employed in the simulation as reference images, the PSNR and SSIM values of the reconstructed images for the different data sets were calculated; the results are shown in Table 1.
TABLE 1 (PSNR and SSIM values of the FBP, WCSC and WMCR reconstructions for each data set; the numerical table is provided as an image in the original publication and is not reproduced here)
As can be seen from Table 1, for the abdominal and chest data reconstructed at the simulated sparse angles, the quantitative indexes of the FBP reconstruction are the worst, the WCSC results improve on them to a certain extent, and the WMCR method of the present invention obtains still higher SSIM and PSNR values (compared with the WCSC results, the PSNR is higher by about 1.4 dB and the SSIM by about 0.01 in the 180-view scan experiment, and the PSNR is higher by about 0.9 dB and the SSIM by about 0.01 in the 120-view scan experiment). As can be seen from FIGS. 4 and 7, over the selected pixels (the regions marked by the white line segments in FIG. 4(a) and FIG. 7(a); Ref denotes the reference image), the jump of pixel values at tissue boundaries in the WMCR reconstruction is more pronounced, the tissue boundaries are sharper, and the profile curve follows the reference image more closely. The above experiments show that, under the same sparse-angle scanning conditions, the method can obtain CT reconstructed images with fewer artifacts and high stability, and it has promising application prospects.
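A minimal sketch of the two indexes as defined above is given below; it computes a single global SSIM over the whole image, matching the stated formula rather than a windowed implementation, and the array names are illustrative.

```python
import numpy as np

def psnr_ssim(x_T, x_r):
    """PSNR and global SSIM between the final reconstruction x_T and the reference x_r."""
    x_T = np.asarray(x_T, dtype=float)
    x_r = np.asarray(x_r, dtype=float)
    h_max = x_r.max()
    mse = np.mean((x_T - x_r) ** 2)
    psnr = 10.0 * np.log10(h_max ** 2 / mse)

    mu_t, mu_r = x_T.mean(), x_r.mean()
    sigma_t, sigma_r = x_T.std(), x_r.std()
    sigma_tr = np.mean((x_T - mu_t) * (x_r - mu_r))
    c1, c2 = (0.01 * h_max) ** 2, (0.03 * h_max) ** 2
    ssim = ((2 * mu_t * mu_r + c1) * (2 * sigma_tr + c2)) / (
        (mu_t ** 2 + mu_r ** 2 + c1) * (sigma_t ** 2 + sigma_r ** 2 + c2))
    return psnr, ssim
```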
The invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, if a person skilled in the art, inspired by the invention and without departing from its spirit, devises similar structural modes and embodiments without inventive effort, they shall fall within the protection scope of the invention.

Claims (10)

1. A sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding is characterized by comprising the following steps:
step 1, obtaining dictionary atoms of an initial multi-scale filter;
for a given high quality CT sample image xsPerforming wavelet transform to obtain high-frequency and low-frequency wavelet coefficients, wherein the high-frequency coefficient is partially expressed as
Figure FDA0002996179840000011
Figure FDA0002996179840000012
And
Figure FDA0002996179840000013
sub-band signals in the horizontal direction, the vertical direction and the diagonal direction respectively; to F0Performing multi-scale convolution characteristic learning to obtain initial multi-scale filter dictionary atoms
Figure FDA0002996179840000014
The learning model is represented as:
Figure FDA0002996179840000015
Figure FDA0002996179840000016
wherein, K is the scale number of convolution kernels, N is the number of convolution kernels under a single scale,
Figure FDA0002996179840000017
is a feature map of the corresponding atom, beta is a regularization parameter;
step 2, constructing a sparse angle CT reconstruction model constrained by wavelet multi-scale convolution feature coding;
step 3, decomposition of the reconstruction model: performing fixed-variable decomposition on the reconstruction model to obtain a convolutional feature learning update objective function and an image-to-be-reconstructed update objective function;
and step 4, solving the convolutional feature learning update objective function and the image-to-be-reconstructed update objective function in an alternating manner to obtain a final reconstruction result.
2. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding of claim 1, wherein: the reconstruction model constructed in step 2 is represented as:
$$\min_{x,\{d_{n,k}\},\{M_{n,k}\}}\ \frac{1}{2}\|Ax - p\|_{2}^{2} + \frac{\lambda}{2}\Big\|\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k} * M_{n,k} - Wx\Big\|_{2}^{2} + \beta\sum_{k=1}^{K}\sum_{n=1}^{N}\big\|M_{n,k}\big\|_{1}\quad \text{s.t. } \|d_{n,k}\|_{2}^{2}\le 1 \qquad (2)$$

wherein * denotes the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, d_{n,k} are the multi-scale filter dictionary atoms, and M_{n,k} are the corresponding feature maps.
3. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding of claim 1, wherein: the convolution characteristic learning updating target function and the image to be reconstructed updating target function obtained in the step 3 are respectively expressed as:
$$\{d_{n,k}^{t+1}, M_{n,k}^{t+1}\} = \arg\min_{\{d_{n,k}\},\{M_{n,k}\}}\ \frac{\lambda}{2}\Big\|\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k} * M_{n,k} - Wx^{t}\Big\|_{2}^{2} + \beta\sum_{k=1}^{K}\sum_{n=1}^{N}\big\|M_{n,k}\big\|_{1}\quad \text{s.t. } \|d_{n,k}\|_{2}^{2}\le 1 \qquad (3)$$

$$x^{t+1} = \arg\min_{x}\ \frac{1}{2}\|Ax - p\|_{2}^{2} + \frac{\lambda}{2}\Big\|\sum_{k=1}^{K}\sum_{n=1}^{N} d_{n,k}^{t+1} * M_{n,k}^{t+1} - Wx\Big\|_{2}^{2} \qquad (4)$$

wherein * denotes the convolution operator, K is the number of convolution-kernel scales, N is the number of convolution kernels at a single scale, A is the projection matrix of the CT system, x is the image to be reconstructed, p is the sparse projection data, W is the wavelet-transform high-frequency coefficient extraction operator, λ and β are regularization parameters, x^t is the image to be reconstructed after the t-th update (t ≥ 0), d_{n,k}^t are the multi-scale filter dictionary atoms after the t-th update, and M_{n,k}^t are the feature maps of the corresponding atoms after the t-th update.
4. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding of claim 1, wherein: in the step 1, wavelet transformation adopts 1-layer two-dimensional stationary wavelet transformation, and a Haar wavelet base is selected.
5. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding of claim 1, wherein: in step 1, the parameters of the multi-scale filter are: 2 ≤ K ≤ 5, 32 ≤ N ≤ 64 at a single scale, and the convolution kernel size can be selected from 6×3 to 14×3.
6. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding of claim 1, wherein: in the step 1, the learning model formula (1) is solved by adopting an alternating direction multiplier algorithm to obtain an initial multi-scale filter dictionary atom.
7. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding of claim 1, wherein: in step 2, the operation steps of the wavelet-transform high-frequency coefficient extraction operator W are as follows: first, a 1-layer two-dimensional stationary wavelet transform is performed on the image, with the Haar wavelet basis selected; then the sub-band signals of the high-frequency coefficient part in the horizontal, vertical and diagonal directions are selected; and finally the sub-band signals of the three directions are stacked in sequence along a third dimension.
8. The sparse angular CT reconstruction method based on wavelet multi-scale convolutional feature coding according to claim 1, wherein: in step 3, when t = 0, the initial multi-scale filter dictionary atoms are those obtained in step 1, and the initial image to be reconstructed x^0 is obtained by reconstructing the image with a filtered back projection algorithm using a ramp filter.
9. The sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding as claimed in claim 1, wherein the convolutional feature learning update objective function, formula (3), in step 4 is solved with an alternating direction multiplier algorithm, and the image-to-be-reconstructed update objective function, formula (4), is solved with a paraboloidal surrogate algorithm.
10. The sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding as claimed in claim 1, wherein in step 4 the iteration stops when the images to be reconstructed before and after an iteration satisfy RMSE(x^{t+1} − x^t) ≤ 30 and a final reconstruction result is output, wherein RMSE(·) is the root-mean-square error operator.
CN202110331020.7A 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding Active CN113034641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110331020.7A CN113034641B (en) 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110331020.7A CN113034641B (en) 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Publications (2)

Publication Number Publication Date
CN113034641A true CN113034641A (en) 2021-06-25
CN113034641B CN113034641B (en) 2022-11-08

Family

ID=76473376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110331020.7A Active CN113034641B (en) 2021-03-29 2021-03-29 Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding

Country Status (1)

Country Link
CN (1) CN113034641B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379868A (en) * 2021-07-08 2021-09-10 安徽工程大学 Low-dose CT image noise artifact decomposition method based on convolution sparse coding network
CN113436118A (en) * 2021-08-10 2021-09-24 安徽工程大学 Low-dose CT image restoration method based on multi-scale convolutional coding network
CN114723842A (en) * 2022-05-24 2022-07-08 之江实验室 Sparse visual angle CT imaging method and device based on depth fusion neural network
CN115115551A (en) * 2022-07-26 2022-09-27 北京计算机技术及应用研究所 Disparity map restoration method based on convolution dictionary

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871332A (en) * 2017-11-09 2018-04-03 南京邮电大学 A kind of CT based on residual error study is sparse to rebuild artifact correction method and system
US20180225807A1 (en) * 2016-12-28 2018-08-09 Shenzhen China Star Optoelectronics Technology Co., Ltd. Single-frame super-resolution reconstruction method and device based on sparse domain reconstruction
CN108898642A (en) * 2018-06-01 2018-11-27 安徽工程大学 A kind of sparse angular CT imaging method based on convolutional neural networks
CN112507962A (en) * 2020-12-22 2021-03-16 哈尔滨工业大学 Hyperspectral image multi-scale feature extraction method based on convolution sparse decomposition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180225807A1 (en) * 2016-12-28 2018-08-09 Shenzhen China Star Optoelectronics Technology Co., Ltd. Single-frame super-resolution reconstruction method and device based on sparse domain reconstruction
CN107871332A (en) * 2017-11-09 2018-04-03 南京邮电大学 A kind of CT based on residual error study is sparse to rebuild artifact correction method and system
CN108898642A (en) * 2018-06-01 2018-11-27 安徽工程大学 A kind of sparse angular CT imaging method based on convolutional neural networks
CN112507962A (en) * 2020-12-22 2021-03-16 哈尔滨工业大学 Hyperspectral image multi-scale feature extraction method based on convolution sparse decomposition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
THANH NGUYEN-DUC等: "Frequency-splitting Dynamic MRI Reconstruction using Multi-scale 3D Convolutional Sparse Coding and Automatic Parameter Selection", 《MEDICAL IMAGE ANALYSIS》 *
刘进 et al.: "Low-dose CT image reconstruction with convolutional sparse coding in the wavelet domain", Journal of Computer-Aided Design & Computer Graphics *
张健 et al.: "Super-resolution reconstruction algorithm using sparse representation and wavelet transform", Journal of Huaqiao University (Natural Science) *
赵可 et al.: "CT image reconstruction algorithm from incomplete projections based on dictionary learning", Mathematics in Practice and Theory *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379868A (en) * 2021-07-08 2021-09-10 安徽工程大学 Low-dose CT image noise artifact decomposition method based on convolution sparse coding network
CN113436118A (en) * 2021-08-10 2021-09-24 安徽工程大学 Low-dose CT image restoration method based on multi-scale convolutional coding network
CN114723842A (en) * 2022-05-24 2022-07-08 之江实验室 Sparse visual angle CT imaging method and device based on depth fusion neural network
CN114723842B (en) * 2022-05-24 2022-08-23 之江实验室 Sparse visual angle CT imaging method and device based on depth fusion neural network
CN115115551A (en) * 2022-07-26 2022-09-27 北京计算机技术及应用研究所 Disparity map restoration method based on convolution dictionary
CN115115551B (en) * 2022-07-26 2024-03-29 北京计算机技术及应用研究所 Parallax map restoration method based on convolution dictionary

Also Published As

Publication number Publication date
CN113034641B (en) 2022-11-08

Similar Documents

Publication Publication Date Title
Sagheer et al. A review on medical image denoising algorithms
CN113034641B (en) Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding
Liu et al. 3D feature constrained reconstruction for low-dose CT imaging
Chen et al. Artifact suppressed dictionary learning for low-dose CT image processing
CN108961237B (en) Low-dose CT image decomposition method based on convolutional neural network
US9558570B2 (en) Iterative reconstruction for X-ray computed tomography using prior-image induced nonlocal regularization
US11562469B2 (en) System and method for image processing
Zhang et al. Statistical image reconstruction for low-dose CT using nonlocal means-based regularization. Part II: An adaptive approach
CA3067078C (en) System and method for image processing
US8355555B2 (en) System and method for multi-image based virtual non-contrast image enhancement for dual source CT
Bai et al. Z-index parameterization for volumetric CT image reconstruction via 3-D dictionary learning
Li et al. Incorporation of residual attention modules into two neural networks for low‐dose CT denoising
CN115984394A (en) Low-dose CT reconstruction method combining prior image and convolution sparse network
Zhang et al. Adaptive non‐local means on local principle neighborhood for noise/artifacts reduction in low‐dose CT images
Chen et al. Low-dose CT image denoising model based on sparse representation by stationarily classified sub-dictionaries
Wang et al. Noise removal of low-dose CT images using modified smooth patch ordering
CN113205461B (en) Low-dose CT image denoising model training method, denoising method and device
Liao et al. Noise estimation for single-slice sinogram of low-dose X-ray computed tomography using homogenous patch
Du et al. X-ray CT image denoising with MINF: A modularized iterative network framework for data from multiple dose levels
CN115731158A (en) Low-dose CT reconstruction method based on residual error domain iterative optimization network
CN116167929A (en) Low-dose CT image denoising network based on residual error multi-scale feature extraction
Bao et al. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation
Zheng et al. Improving spatial adaptivity of nonlocal means in low-dosed CT imaging using pointwise fractal dimension
Xiong et al. Re-UNet: a novel multi-scale reverse U-shape network architecture for low-dose CT image reconstruction
CN114926487A (en) Multi-modal image brain glioma target area segmentation method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant