CN117710343A - Cone beam CT reconstruction method based on multi-scale hash coding - Google Patents

Cone beam CT reconstruction method based on multi-scale hash coding

Info

Publication number
CN117710343A
Authority
CN
China
Prior art keywords
sampling points
projection
rays
scale
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311753282.8A
Other languages
Chinese (zh)
Inventor
秦红星
范若冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202311753282.8A priority Critical patent/CN117710343A/en
Publication of CN117710343A publication Critical patent/CN117710343A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention belongs to the technical field of CT imaging, and particularly relates to a cone beam CT reconstruction method based on multi-scale hash coding. The method comprises the following steps: scanning a real object and collecting CT projection data; generating rays from the X-ray source toward the projection pixels according to the CT projection data, and uniformly sampling N points along the part of each ray that intersects the object; mapping the position information of the sampling points into a high-dimensional space with multi-scale hash coding to obtain feature vectors of the sampling points; feeding the feature vectors of the sampling points into a multi-layer perceptron to obtain attenuation coefficients of the sampling points; calculating a total loss from the attenuation coefficients of the sampling points and adjusting the parameters of the multi-layer perceptron according to the total loss to obtain a trained multi-layer perceptron; and processing the real object to be CT-reconstructed with the trained multi-layer perceptron to obtain its CT reconstruction result. The method reconstructs CT data quickly and accurately and has good application prospects.

Description

Cone beam CT reconstruction method based on multi-scale hash coding
Technical Field
The invention belongs to the technical field of CT imaging, and particularly relates to a cone beam CT reconstruction method based on multi-scale hash coding.
Background
Compared with traditional CT, Cone Beam Computed Tomography (CBCT) offers faster imaging and greater clarity. CT imaging exploits the ability of X-rays to penetrate many substances: human tissue is irradiated with X-rays from multiple angles, the attenuated X-ray dose is measured, and a CT reconstruction algorithm computes the attenuation coefficients of the different tissues from the emitted X-ray dose, a discretized model of the object, and the measured attenuated dose, thereby generating an image. Because excessive X-ray exposure may increase a patient's risk of cancer, the radiation dose can be reduced by lowering the number of CBCT projections.
CBCT reconstruction is the process of obtaining the attenuation coefficients at each location of human tissue from partial projection data. Current CBCT reconstruction methods fall into three categories. First, analytical reconstruction methods estimate the attenuation coefficients by solving the Radon transform and its inverse; they yield good reconstructions under ideal conditions but perform poorly with sparse angles or insufficient angular coverage. Second, iterative reconstruction methods introduce a regularized optimization framework and treat reconstruction as a minimization problem; they can overcome the limitations of analytical reconstruction but consume large amounts of computation time and memory. Third, learning-based methods, most of which suffer from the difficulty of collecting sufficient labeled training data and from long training times.
To address these issues, the invention provides a self-supervised cone beam CT reconstruction method based on multi-scale hash coding that trains only on X-ray projection data, accelerates the training of the network, and preserves the accuracy of the reconstructed CT data.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention provides a cone beam CT reconstruction method based on multi-scale hash coding, which comprises the following steps:
s1: scanning the real object and collecting CT projection data;
s2: generating rays from the X-ray source to the projection pixel direction according to CT projection data, and uniformly sampling N points at the intersection part of the rays and the object;
s3: mapping the position information of the sampling points to a high-dimensional space by adopting multi-scale hash coding to obtain characteristic vectors of the sampling points;
s4: inputting the feature vectors of the sampling points into a multi-layer perceptron for processing to obtain attenuation coefficients of the sampling points;
s5: calculating total loss according to attenuation coefficients of the sampling points, and adjusting parameters of the multi-layer perceptron according to the total loss to obtain a trained multi-layer perceptron;
s6: and processing the real object to be CT reconstructed by adopting the trained multi-layer perceptron to obtain a CT reconstruction result of the real object.
Preferably, the generated rays are expressed as:

$$r_{k,i,j}(t) = O_k + t\,\hat{d}_{k,i,j}$$

wherein $r_{k,i,j}$ represents the ray from the X-ray source toward projection pixel $I_{k,i,j}$, $O_k$ represents the position of the X-ray source that generates projection $I_k$, $t$ represents the distance of the sampling point from the source, and $\hat{d}_{k,i,j}$ represents the direction of ray $r_{k,i,j}$, usually a unit vector.
Preferably, the mapping of the position information of the sampling points to the high-dimensional space includes:
mapping all grid points of voxels where the sampling points are located under each scale into a hash table to obtain feature vectors of all grid points under each scale;
according to the feature vector of each lattice point, linear interpolation is adopted to respectively obtain initial feature vectors of sampling points under each scale;
and splicing the initial feature vectors of the sampling points under all scales to obtain the final feature vectors of the sampling points.
Further, the formula for mapping a grid point to the hash table is:

$$h(X) = \left( \bigoplus_{i=1}^{3} x_i \pi_i \right) \bmod T$$

wherein $h(X)$ represents the feature vector of grid point $X$ (the hash-table entry it maps to), $x_1$, $x_2$ and $x_3$ represent the x-, y- and z-coordinates of grid point $X$, $\oplus$ represents the element-wise exclusive-or operation, $\pi_1, \pi_2, \pi_3$ represent the first, second and third large prime numbers, and $T$ represents the number of entries of the hash table.
Preferably, the process of calculating the total loss according to the attenuation coefficient of the sampling point includes:
synthesizing attenuation coefficients of all sampling points on the rays to obtain synthesized projection of the rays;
the total loss is calculated from the composite projection and the real projection of the rays.
Further, the formula for synthesizing the attenuation coefficients of all sampling points on a ray is:

$$I = I_0 \exp\!\left(-\sum_{i=1}^{N} \sigma_i \delta_i\right)$$

wherein $I$ represents the synthesized projection of the ray, $I_0$ represents the initial intensity of the ray, $\sigma_i$ represents the attenuation coefficient of the $i$-th sampling point, $N$ represents the number of sampling points on the ray, and $\delta_i$ represents the distance between the $i$-th and $(i+1)$-th sampling points.
Further, the formula for calculating the total loss is:

$$\mathrm{Loss} = \sum_{r \in R} \left\| I_r(r) - I_s(r) \right\|_2^2$$

wherein $\mathrm{Loss}$ represents the total loss, $I_r(r)$ represents the real projection of ray $r$, $I_s(r)$ represents the synthesized projection of ray $r$, $r$ represents a ray, and $R$ represents the set of rays.
The beneficial effects of the invention are as follows: the invention represents CT data as a mapping from spatial coordinates to attenuation coefficients and combines it with a self-supervised network framework to support the reconstruction of three-dimensional CT data from two-dimensional projections. Multi-scale hash coding maps the input of the neural network into a high-dimensional space so that data with high-frequency variations can be fitted better, which ensures the accuracy of the reconstructed CT data. By using multi-scale hash coding, the invention combines a hash table with the neural network, stores the feature vectors in a structured way, and offloads most of the learning task to this data structure.
Drawings
FIG. 1 is a flow chart of a cone beam CT reconstruction method based on multi-scale hash coding in the invention;
FIG. 2 is a schematic diagram of the geometry of a CBCT scanner according to the present invention;
FIG. 3 is a schematic diagram of a multi-scale CT data grid in accordance with the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
The invention provides a cone beam CT reconstruction method based on multi-scale hash coding, which is shown in fig. 1 and comprises the following steps:
s1: and scanning the object and acquiring CT projection data.
A real object is scanned with a CBCT scanner, and the geometry of the CBCT scanner must be determined in order to acquire projection data successfully. As shown in FIG. 2, the geometry of the CBCT scanner of the present invention is as follows:
The X-ray source S is located at a distance DSO from the rotation center O, and the origin of the Cartesian coordinate system is placed at O. The X-ray source illuminates a conical region that contains the CT data, together with the detector. The detector measures the intensity of the photons impinging on it, which have been attenuated according to the Beer-Lambert law. The CT data are centered at a position O' that may be offset from the coordinate origin. The detector is located at a distance DSD from the ray source; its center D' may be offset from the point D, where D lies in the xy-plane at a distance DSD − DSO from the origin. The projection coordinate system uv is defined with its origin at the lower-left corner of the detector. During acquisition of the projection data, the X-ray source and the detector rotate about the z-axis in steps of α degrees from the initial position.
After the geometry of the CBCT scanner has been determined, the X-ray source rotates around the object while emitting a cone-shaped X-ray beam, and the 2D projection data of the X-rays are recorded at equal angular intervals.
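As a concrete illustration of the geometry just described, the following is a minimal sketch (not taken from the patent) of how the source position and detector-pixel positions could be computed for a gantry angle α; the initial source axis, the pixel pitch, and the neglect of the O' and D' offsets are assumptions of this example.

```python
import numpy as np

def cbct_geometry(alpha_deg, dso, dsd, det_rows, det_cols, pixel_size):
    """Source and detector-pixel positions after rotating by alpha about the z-axis.

    Assumed convention: at alpha = 0 the source sits on the +y axis at distance DSO
    and the detector plane is perpendicular to y at y = -(DSD - DSO).
    """
    a = np.deg2rad(alpha_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    source = rot @ np.array([0.0, dso, 0.0])
    # detector pixel centers; the uv origin is at the lower-left corner of the detector
    u = (np.arange(det_cols) + 0.5) * pixel_size - det_cols * pixel_size / 2.0
    v = (np.arange(det_rows) + 0.5) * pixel_size - det_rows * pixel_size / 2.0
    uu, vv = np.meshgrid(u, v, indexing="xy")
    pixels = np.stack([uu, np.full_like(uu, -(dsd - dso)), vv], axis=-1)  # (rows, cols, 3)
    return source, pixels @ rot.T
```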
S2: rays from the X-ray source to the projection pixel direction are generated according to CT projection data, and N points are uniformly sampled at the intersection part of the rays and the object.
Each pixel value of the projection data is the result of X-rays passing through the object (the CT data) and being attenuated by the internal medium. The ray from the X-ray source toward pixel $I_{k,i,j}$ of projection $I_k$ is expressed as:

$$r_{k,i,j}(t) = O_k + t\,\hat{d}_{k,i,j}$$

wherein $r_{k,i,j}$ represents the ray from the X-ray source toward projection pixel $I_{k,i,j}$; $k$ is the index of the projection and $(i, j)$ is the index of the projection pixel; $O_k$ represents the position of the X-ray source that generates projection $I_k$; $t$ represents the distance from the sampling point to the source; and $\hat{d}_{k,i,j}$ represents the direction of ray $r_{k,i,j}$, usually a unit vector, computed from the difference between the pixel position on the detector plane and the source position.
A stratified sampling method is adopted to sample N points along the part of the X-ray that intersects the object. Specifically, the ray is divided into N uniform intervals, and one point is sampled within each interval. N is set larger than the required CT data size to ensure that at least one sampling point is assigned to every voxel through which the X-ray passes.
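The ray generation and stratified sampling of step S2 might be implemented as in the sketch below; `t_near`, `t_far`, and the function names are assumptions for this example, not values fixed by the patent.

```python
import numpy as np

def generate_ray(source_pos, pixel_pos):
    """Ray r(t) = O + t*d from the X-ray source toward one projection pixel."""
    direction = pixel_pos - source_pos
    direction = direction / np.linalg.norm(direction)  # unit direction vector
    return source_pos, direction

def stratified_sample(origin, direction, t_near, t_far, n_samples, rng=None):
    """Divide [t_near, t_far] into N uniform intervals and sample one point in each."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(t_near, t_far, n_samples + 1)
    t = edges[:-1] + rng.uniform(size=n_samples) * (edges[1:] - edges[:-1])
    points = origin[None, :] + t[:, None] * direction[None, :]  # (N, 3) sample positions
    deltas = np.diff(t, append=t_far)                           # spacing delta_i between samples
    return points, deltas
```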
S3: and mapping the position information of the sampling points to a high-dimensional space by adopting multi-scale hash coding to obtain the characteristic vectors of the sampling points.
For the CT data to be reconstructed, the data are considered at L layers of different scales, and each scale is associated with a hash table that has T entries, each entry being an F-dimensional feature vector.
Each scale $l \in [1, L]$ corresponds to a different resolution $N_l$; the resolutions lie in the range $[N_{\min}, N_{\max}]$, and the resolution of each layer is calculated as:

$$N_l = \lfloor N_{\min} \cdot b^{\,l-1} \rfloor$$

wherein $b$ is a small growth factor calculated as:

$$b = \exp\!\left(\frac{\ln N_{\max} - \ln N_{\min}}{L - 1}\right)$$
from the resolution it is possible to determine in which voxels the sampling point is.
At each scale, all grid points of the voxel in which a sampling point lies are mapped into the hash table to obtain the feature vectors of those grid points, and the initial feature vector of the sampling point at that scale is obtained by linear interpolation of the grid-point feature vectors. The hash function h used in the invention maps a grid point to the hash table as follows:

$$h(X) = \left( \bigoplus_{i=1}^{3} x_i \pi_i \right) \bmod T$$

wherein $h(X)$ represents the feature vector of grid point $X$ (the hash-table entry it maps to), $x_1$, $x_2$ and $x_3$ represent the x-, y- and z-coordinates of grid point $X$, $\oplus$ represents the element-wise exclusive-or operation, $\pi_1, \pi_2, \pi_3$ represent the first, second and third large prime numbers, and $T$ represents the number of hash-table entries. For coarser scales, the total number of grid points of the CT data is smaller than the number of hash-table entries $T$, so the mapping is 1:1. For finer scales hash collisions may occur; they are not handled explicitly, because two grid points that collide at scale $l$ are unlikely to collide at all scales.
For example, as shown in FIG. 3, a sampling point $P \in \mathbb{R}^3$ lies, at scale $l$, in a voxel $V_l$ with 8 grid points. Each grid point is mapped into the hash table to obtain its feature vector $\{c_1, c_2, \ldots, c_8\}$, and the feature vector $f_l$ of sampling point $P$ at scale $l$ is obtained by linear interpolation (the distances from the sampling point to the grid points serve as weights in a weighted sum of the grid-point feature vectors).
The initial feature vectors of the sampling point at all scales are concatenated to obtain the final feature vector of the sampling point.
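A compact sketch of how this multi-scale hash encoding could be implemented is given below; the prime constants, tensor shapes, and the normalization of sampling points to $[0, 1]^3$ are assumptions in the spirit of this kind of encoding, not values specified by the patent. Each `table` would typically be a learnable (T, F) embedding tensor.

```python
import torch

PRIMES = (1, 2654435761, 805459861)  # assumed large primes pi_1, pi_2, pi_3

def level_resolutions(n_min, n_max, num_levels):
    """Per-level grid resolutions N_l growing geometrically from N_min to N_max."""
    b = (n_max / n_min) ** (1.0 / (num_levels - 1))
    return [int(n_min * b ** l) for l in range(num_levels)]

def hash_grid_point(coords, table_size):
    """coords: (..., 3) integer grid points -> hash-table indices in [0, T)."""
    h = coords[..., 0] * PRIMES[0]
    h = h ^ (coords[..., 1] * PRIMES[1])
    h = h ^ (coords[..., 2] * PRIMES[2])
    return h % table_size

def encode(points, tables, resolutions):
    """points: (B, 3) in [0, 1]^3; tables: one (T, F) embedding tensor per level."""
    feats = []
    for table, res in zip(tables, resolutions):
        x = points * (res - 1)
        x0 = torch.floor(x).long()   # lower corner of the voxel containing the point
        frac = x - x0                # position of the point inside the voxel
        interp = torch.zeros(points.shape[0], table.shape[1])
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = x0 + torch.tensor([dx, dy, dz])
                    idx = hash_grid_point(corner, table.shape[0])
                    w = ((frac[:, 0] if dx else 1 - frac[:, 0])
                         * (frac[:, 1] if dy else 1 - frac[:, 1])
                         * (frac[:, 2] if dz else 1 - frac[:, 2]))
                    interp = interp + w[:, None] * table[idx]   # trilinear interpolation
        feats.append(interp)
    return torch.cat(feats, dim=-1)  # concatenated feature vector over all scales
```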
S4: and inputting the feature vectors of the sampling points into a multi-layer perceptron for processing to obtain attenuation coefficients of the sampling points.
For a sampling point P, let f denote its hash-encoded feature vector. The attenuation coefficient σ is predicted by a multi-layer perceptron (MLP), expressed as:
σ=MLP(f)
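A minimal sketch of such a perceptron is shown below; the hidden width, depth, and activation are assumptions, since they are not fixed by the text here.

```python
import torch.nn as nn

class AttenuationMLP(nn.Module):
    """Maps a hash-encoded feature vector f to a single attenuation coefficient sigma."""
    def __init__(self, in_dim, hidden=64, depth=3):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 1))        # single output: sigma
        self.net = nn.Sequential(*layers)

    def forward(self, feats):                 # feats: (B, L*F)
        return self.net(feats).squeeze(-1)    # sigma per sampling point
```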
s5: and calculating total loss according to the attenuation coefficient of the sampling point, and adjusting the parameters of the multi-layer perceptron according to the total loss to obtain the trained multi-layer perceptron.
The attenuation coefficients of all sampling points on a ray are synthesized to obtain the synthesized projection of the ray. Specifically, according to the Beer-Lambert law, the attenuation of an X-ray penetrating a substance is the exponential of the integral of the attenuation coefficient along its propagation path, expressed in discrete form as:

$$I = I_0 \exp\!\left(-\sum_{i=1}^{N} \sigma_i \delta_i\right)$$

wherein $I$ represents the synthesized projection of the ray, $I_0$ represents the initial intensity of the ray, $\sigma_i$ represents the attenuation coefficient of the $i$-th sampling point, $N$ represents the number of sampling points on the ray, and $\delta_i = \lVert P_{i+1} - P_i \rVert$ represents the distance between the $i$-th and $(i+1)$-th sampling points.
The total loss is calculated from the synthesized projections and the real projections of the rays (the CT projection data obtained by scanning the real object with the CBCT scanner). Specifically, the model is optimized by minimizing the L2 loss between the real and synthesized projections, and the total loss of the model is:

$$\mathrm{Loss} = \sum_{r \in R} \left\| I_r(r) - I_s(r) \right\|_2^2$$

wherein $\mathrm{Loss}$ represents the total loss, $I_r(r)$ represents the real projection of ray $r$, $I_s(r)$ represents the synthesized projection of ray $r$, $r$ represents a ray, and $R$ represents the set of rays.
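The projection synthesis and the training loss above might be implemented as in the following sketch; `sigma` and `deltas` are per-ray tensors of shape (num_rays, N), and taking the initial intensity I0 as 1.0 is an assumption.

```python
import torch

def synthesize_projection(sigma, deltas, i0=1.0):
    """Discrete Beer-Lambert law: I = I0 * exp(-sum_i sigma_i * delta_i) per ray."""
    return i0 * torch.exp(-(sigma * deltas).sum(dim=-1))

def projection_loss(sigma, deltas, real_projection, i0=1.0):
    """Sum over rays of the squared error between real and synthesized projections."""
    synth = synthesize_projection(sigma, deltas, i0)
    return ((real_projection - synth) ** 2).sum()
```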
S6: and processing the real object to be CT reconstructed by adopting the trained multi-layer perceptron to obtain a CT reconstruction result of the real object.
The final reconstructed CT data should be a discrete 3D matrix. A grid with the required resolution is therefore constructed; the trained multi-layer perceptron processes the real object to be reconstructed and outputs the predicted attenuation coefficients of the sampling points; each grid coordinate is then mapped to its attenuation coefficient, yielding the reconstructed CT data.
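As a sketch of this final step (reusing the hypothetical `encode`, `tables`, `resolutions`, and MLP from the earlier examples), the trained network can be queried on a regular grid to produce the discrete 3D volume; the output resolution and chunk size are assumptions.

```python
import torch

def reconstruct_volume(mlp, tables, resolutions, out_res=256, chunk=65536):
    """Query the trained MLP on a regular grid to obtain the reconstructed CT volume."""
    axis = torch.linspace(0.0, 1.0, out_res)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1).reshape(-1, 3)
    sigma = []
    with torch.no_grad():
        for i in range(0, grid.shape[0], chunk):          # chunked queries to bound memory
            feats = encode(grid[i:i + chunk], tables, resolutions)
            sigma.append(mlp(feats))
    return torch.cat(sigma).reshape(out_res, out_res, out_res)
```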
In summary, the present invention represents CT data as a mapping from spatial coordinates to attenuation coefficients and combines it with a self-supervised network framework to support the reconstruction of three-dimensional CT data from two-dimensional projections. Multi-scale hash coding maps the input of the neural network into a high-dimensional space so that data with high-frequency variations can be fitted better, ensuring the accuracy of the reconstructed CT data. The proposed reconstruction therefore balances speed and accuracy well, and because it works from CBCT projections it helps reduce the radiation dose received by patients during medical examinations and thus their risk of cancer.
While the foregoing describes embodiments, aspects and advantages of the present invention in detail, it should be understood that the embodiments are merely illustrative of the invention; any changes, substitutions or alterations made without departing from the spirit and principles of the invention fall within its scope.

Claims (7)

1. A cone beam CT reconstruction method based on multi-scale hash coding, comprising:
s1: scanning the real object and collecting CT projection data;
s2: generating rays from the X-ray source to the projection pixel direction according to CT projection data, and uniformly sampling N points at the intersection part of the rays and the object;
s3: mapping the position information of the sampling points to a high-dimensional space by adopting multi-scale hash coding to obtain characteristic vectors of the sampling points;
s4: inputting the feature vectors of the sampling points into a multi-layer perceptron for processing to obtain attenuation coefficients of the sampling points;
s5: calculating total loss according to attenuation coefficients of the sampling points, and adjusting parameters of the multi-layer perceptron according to the total loss to obtain a trained multi-layer perceptron;
s6: and processing the real object to be CT reconstructed by adopting the trained multi-layer perceptron to obtain a CT reconstruction result of the real object.
2. The method of cone beam CT reconstruction based on multi-scale hash coding as claimed in claim 1, wherein the generated rays are expressed as:
$$r_{k,i,j}(t) = O_k + t\,\hat{d}_{k,i,j}$$

wherein $r_{k,i,j}$ represents the ray from the X-ray source toward projection pixel $I_{k,i,j}$, $O_k$ represents the position of the X-ray source that generates projection $I_k$, $t$ represents the distance of the sampling point from the source, and $\hat{d}_{k,i,j}$ represents the direction of ray $r_{k,i,j}$, usually a unit vector.
3. The cone beam CT reconstruction method based on multi-scale hash coding as claimed in claim 1, wherein the mapping the position information of the sampling points to the high-dimensional space comprises:
mapping all grid points of voxels where the sampling points are located under each scale into a hash table to obtain feature vectors of all grid points under each scale;
according to the feature vector of each lattice point, linear interpolation is adopted to respectively obtain initial feature vectors of sampling points under each scale;
and splicing the initial feature vectors of the sampling points under all scales to obtain the final feature vectors of the sampling points.
4. A cone beam CT reconstruction method based on multi-scale hash coding as claimed in claim 3, wherein the formula for mapping lattice points to hash tables is:
$$h(X) = \left( \bigoplus_{i=1}^{3} x_i \pi_i \right) \bmod T$$

wherein $h(X)$ represents the feature vector of grid point $X$, $x_1$, $x_2$ and $x_3$ represent the x-, y- and z-coordinates of grid point $X$, $\oplus$ represents the element-wise exclusive-or operation, $\pi_1, \pi_2, \pi_3$ represent the first, second and third large prime numbers, and $T$ represents the number of entries of the hash table.
5. The method for cone beam CT reconstruction based on multi-scale hash coding as claimed in claim 1, wherein the process of calculating the total loss from the attenuation coefficient of the sampling point comprises:
synthesizing attenuation coefficients of all sampling points on the rays to obtain synthesized projection of the rays;
the total loss is calculated from the composite projection and the real projection of the rays.
6. The cone beam CT reconstruction method based on multi-scale hash coding as recited in claim 5, wherein the formula for synthesizing the attenuation coefficients of all sampling points on a ray is:

$$I = I_0 \exp\!\left(-\sum_{i=1}^{N} \sigma_i \delta_i\right)$$

wherein $I$ represents the synthesized projection of the ray, $I_0$ represents the initial intensity of the ray, $\sigma_i$ represents the attenuation coefficient of the $i$-th sampling point, $N$ represents the number of sampling points on the ray, and $\delta_i$ represents the distance between the $i$-th and $(i+1)$-th sampling points.
7. The method for cone beam CT reconstruction based on multi-scale hash coding as recited in claim 5, wherein the formula for calculating the total loss is:
$$\mathrm{Loss} = \sum_{r \in R} \left\| I_r(r) - I_s(r) \right\|_2^2$$

wherein $\mathrm{Loss}$ represents the total loss, $I_r(r)$ represents the real projection of ray $r$, $I_s(r)$ represents the synthesized projection of ray $r$, $r$ represents a ray, and $R$ represents the set of rays.
CN202311753282.8A 2023-12-18 2023-12-18 Cone beam CT reconstruction method based on multi-scale hash coding Pending CN117710343A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311753282.8A CN117710343A (en) 2023-12-18 2023-12-18 Cone beam CT reconstruction method based on multi-scale hash coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311753282.8A CN117710343A (en) 2023-12-18 2023-12-18 Cone beam CT reconstruction method based on multi-scale hash coding

Publications (1)

Publication Number Publication Date
CN117710343A true CN117710343A (en) 2024-03-15

Family

ID=90151278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311753282.8A Pending CN117710343A (en) 2023-12-18 2023-12-18 Cone beam CT reconstruction method based on multi-scale hash coding

Country Status (1)

Country Link
CN (1) CN117710343A (en)

Similar Documents

Publication Publication Date Title
US20220292646A1 (en) System and method for image reconstruction
US8615118B2 (en) Techniques for tomographic image by background subtraction
US10255696B2 (en) System and method for image reconstruction
US11475611B2 (en) System and method for image reconstruction
CN101473348A (en) Method and system for error compensation
CA2711115A1 (en) Dose reduction and image enhancement in tomography through the utilization of the object's surroundings as dynamic constraints
US20150086097A1 (en) Fast statistical imaging reconstruction via denoised ordered-subset statistically-penalized algebraic reconstruction technique
CN108338802B (en) Method for reducing image artifacts
US20110019791A1 (en) Selection of optimal views for computed tomography reconstruction
Lappas et al. Automatic contouring of normal tissues with deep learning for preclinical radiation studies
Zhang et al. Dynamic cone-beam CT reconstruction using spatial and temporal implicit neural representation learning (STINR)
US7272205B2 (en) Methods, apparatus, and software to facilitate computing the elements of a forward projection matrix
EP3629294A1 (en) Method of providing a training dataset
CN105578963B (en) Image data Z axis coverage area for tissue dose estimation extends
Liang et al. Quantitative cone-beam CT imaging in radiotherapy: Parallel computation and comprehensive evaluation on the TrueBeam system
CN117710343A (en) Conical beam CT reconstruction method based on multi-scale hash coding
Zhong et al. 3D‐2D Deformable Image Registration Using Feature‐Based Nonuniform Meshes
CN111583303A (en) System and method for generating pseudo CT image based on MRI image
Pluta et al. A New Statistical Approach to Image Reconstruction with Rebinning for the X-Ray CT Scanners with Flying Focal Spot Tube
Cheng et al. An Integrated Framework of Projection and Attenuation Correction for Quantitative SPECT/CT Reconstruction
TW201822717A (en) Reduction method for boundary artifact on the tomosynthesis
Zhang et al. Image Reconstruction from Projection
Sumida et al. Introduction to CT/MR simulation in radiotherapy
Zhou et al. U-Net Transfer Learning for Image Restoration on Sparse CT Reconstruction in Pre-Clinical Research
CN117808911A (en) Cone beam CT reconstruction method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination