CN112862722B - Dual-energy X-ray subtraction method and device - Google Patents
- Publication number: CN112862722B (application CN202110214832.3A)
- Authority: CN (China)
- Legal status: Active (as assumed by Google Patents; not a legal conclusion)
Classifications
- G06T5/80: Image enhancement or restoration; geometric correction
- G06N3/045: Neural networks; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06T3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T7/0012: Image analysis; biomedical image inspection
- G06V10/44: Local feature extraction by analysis of parts of the pattern (edges, contours, loops, corners, strokes or intersections); connectivity analysis
- G06T2207/10116: Image acquisition modality: X-ray image
- G06T2207/20081: Special algorithmic details: training; learning
- G06T2207/30004: Subject of image: biomedical image processing
Abstract
The invention provides a dual-energy X-ray subtraction method in which a high-energy and a low-energy X-ray projection image are acquired simultaneously in a single X-ray scan performed after a contrast medium has been injected. The high-energy X-ray projection image is input into a trained U-shaped deep neural network, which generates a contrast-agent-free low-energy X-ray projection image. Subtracting this generated image from the acquired low-energy X-ray image, which does contain contrast agent, yields the final digital subtraction angiography (DSA) image. The device implementing the method comprises an X-ray source, a detector, a contrast agent, a ray filter, and a computer for deep neural network training and calculation. The invention thereby solves the technical problems of prior-art DSA imaging, which requires two X-ray scans, one before and one after contrast-agent injection: the increased radiation dose, and the degradation of DSA image quality caused by the movement of the patient's organs between the two scans.
Description
Technical Field
The invention relates to the technical field of clinical medical image diagnosis, in particular to a dual-energy X-ray subtraction method and device.
Background
Digital subtraction angiography (DSA) is a medical imaging technology that emerged after CT in the 1980s; it is a blood-vessel examination method combining a conventional X-ray angiography machine with a computer. Its basic principle is to acquire one X-ray image before and one after the injection of contrast agent: the image without contrast agent is called the mask, and the image with contrast agent the fill image. The two images are input to a computer and subtracted in real time to eliminate the muscle, bone and soft-tissue parts, leaving only the region of interest containing the blood vessels. Through subtraction, DSA clearly displays the vascular regions of the human body, showing the shape and structure of the vessels and reflecting basic information about many diseases; it provides a reliable basis for doctors' diagnosis, treatment and evaluation of efficacy, and is a widely used clinical auxiliary means of diagnosis and treatment. For example, in the nervous system it can display the course of cerebral vessels and the shape and position of abnormal vessels, and localize a bleeding site; it can show the staining of intracranial tumor vessels and the number of supplying and draining vessels; it can display the course and variation of each cardiac chamber and of the coronary vessels to screen for coronary heart disease; and it can be applied in interventional therapy.
Existing DSA techniques are divided, according to imaging principle, into temporal subtraction and energy subtraction. Temporal subtraction is the one currently in wide clinical use: two X-ray scans at different times yield X-ray images before and after the injection of contrast medium, which are then subtracted to obtain the DSA image. This approach suffers from artifacts due to organ motion between the acquisitions, as well as from a large radiation dose. Energy subtraction instead obtains two images at different energy levels within a very short time, and exploits the physical fact that the attenuation coefficients of the contrast agent and of the surrounding tissue differ markedly for X-rays of different energies to obtain a DSA image of the contrast agent. Energy subtraction effectively overcomes the problem of motion artifacts, but the K-edge energy of iodine, the most common contrast agent, is low, so a high-quality DSA image is difficult to obtain in practical application.
The present application provides a method and device for obtaining the DSA subtraction from a single X-ray exposure using a U-shaped deep neural network. By learning the mapping between high-energy and low-energy image data, the mask is generated, after the contrast agent has been injected, from the contrast-free regions of the high-energy image through this mapping; subtraction is then performed, yielding a DSA image containing only the blood vessels.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the first objective of the present invention is to provide a dual-energy X-ray subtraction method that needs only one X-ray scan, performed after the contrast agent is injected. The high-energy X-ray projection image is input to a trained U-shaped deep neural network, which generates the corresponding contrast-free low-energy X-ray projection image; this is subtracted from the contrast-containing low-energy X-ray projection image to obtain the final DSA image. The invention can effectively reduce the radiation dose and the artifacts caused by organ movement between the two exposures of conventional DSA, effectively improves the real-time performance of DSA, and can be applied more conveniently in clinical DSA diagnosis and interventional surgical treatment.
A second object of the present invention is to provide an apparatus for implementing the dual-energy X-ray subtraction method.
A third object of the invention is to propose another device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
In order to achieve the above object, an embodiment of an aspect of the present invention provides a dual-energy X-ray subtraction method, including the following steps:
step S10, after the contrast agent is injected, acquiring a high-energy and a low-energy X-ray projection image in a single X-ray scan;
step S20, inputting the contrast-containing high-energy X-ray projection image into a trained U-shaped deep neural network to generate a contrast-free low-energy X-ray projection image; this artificially generated contrast-free image, produced by the U-net, is called the mask, which in conventional DSA would require an additional scan to obtain; and
step S30, subtracting the generated contrast-free low-energy X-ray projection image from the acquired actual low-energy X-ray image to obtain the final DSA image.
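The three steps above can be sketched end to end in numpy; the 4 × 4 arrays, the 0.8 scaling, and `fake_network` are made-up stand-ins for real projections and the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step S10 stand-ins: projections acquired after contrast injection.
proj_high = rng.random((4, 4))                      # high-energy projection (with contrast)
vessel = np.zeros((4, 4))
vessel[1:3, 1:3] = 0.5                              # contrast-agent (vessel) signal
proj_low = 0.8 * proj_high + vessel                 # low-energy projection (with contrast)

def fake_network(high):
    """Stand-in for the trained U-shaped network: maps the high-energy
    projection to a contrast-free low-energy projection. Here we assume
    the (unknown) true contrast-free mapping is simply a 0.8 scaling."""
    return 0.8 * high

# Step S20: generate the contrast-free low-energy image (the "mask").
mask = fake_network(proj_high)

# Step S30: subtract to keep only the contrast-agent signal.
dsa = proj_low - mask
```

Because `fake_network` reproduces the contrast-free part of the low-energy image exactly, the subtraction isolates the vessel signal.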
In addition, the dual-energy X-ray subtraction method according to the above embodiment of the present invention may be further implemented by:
further, in an embodiment of the first aspect of the present invention, the U-shaped deep neural network includes a contraction path and an expansion path symmetrical thereto; the contraction path comprises a convolution layer and a pooling layer and is used for extracting features of all layers of the image.
Further, in an embodiment of the first aspect of the present invention, the convolution kernels of the convolutional layers are n × n matrices, with n odd and 2 ≤ n ≤ 10, randomly initialized from a normal distribution.
Further, in an embodiment of the first aspect of the present invention, the pooling layers use m × m average pooling, with 1 ≤ m ≤ 5, halving the image length and width at each step, and a rectified linear unit (ReLU) is used as the activation function at the output of each layer.
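As one illustration of the pooling and activation described above, here is a sketch with made-up values, using m = 2:

```python
import numpy as np

def avg_pool2x2(x):
    """2x2 average pooling: halves each spatial dimension of a 2-D image."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def relu(x):
    """Rectified linear unit activation."""
    return np.maximum(x, 0.0)

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "feature map"
pooled = avg_pool2x2(img)                        # shape (2, 2)
act = relu(pooled - 5.0)                         # activation after an offset
```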
Further, in an embodiment of the first aspect of the present invention, the expansion path is implemented by upsampling, and the upsampling is filled using nearest-neighbor interpolation.
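Nearest-neighbor upsampling of the expansion path can be sketched as follows (the 2 × 2 input is illustrative):

```python
import numpy as np

def nn_upsample2x(x):
    """Nearest-neighbour 2x upsampling: each pixel becomes a 2x2 block.
    This filling avoids the checkerboard artefacts that transposed
    convolution can introduce."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
up = nn_upsample2x(x)   # shape (4, 4)
```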
Further, in an embodiment of the first aspect of the present invention, the training of the U-shaped deep neural network used in step S20 comprises the following steps:
s21, collecting a high-energy X-ray air value projection image and a low-energy X-ray air value projection image which are directly irradiated on the double-layer detector by X-ray under the condition that a scanning object is not placed;
wherein the low energy image in step S20 is an image simulated by Unet according to the high energy image in step S10, and is not true, and the purpose is to subtract the simulated image from the low energy image (i.e. true value) in step S10 to obtain the blood vessel;
s22, under the condition that the voltage and current parameters of the X-ray machine are the same, acquiring a training data set, a verification data set and a test data set for training the U-shaped deep neural network;
images acquired by the verification data set and the test data set are both high-energy X-ray projection images and low-energy X-ray projection images containing contrast agents;
s23, each pair of high-energy X-ray projection image and low-energy X-ray projection image of the training data set, the verification data set and the test data set is respectively divided by the high-energy spectrum air value and the low-energy spectrum air value under the same working parameters to obtain a processed projection data set; and
and S24, establishing a U-shaped deep neural network.
Further, in an embodiment of the first aspect of the present invention, the U-shaped deep neural network uses a loss function, denoted loss:

loss = α·L_MsSSIM + (1 − α)·L_MAE,

where α = 0.84 ± 0.1, L_MAE is the mean absolute error of the two images, and L_MsSSIM expresses how similar the two images are.
To achieve the above object, an embodiment of the second aspect of the present invention provides an apparatus for implementing a dual-energy X-ray subtraction method, including an X-ray source, a detector, a contrast agent, a ray filter, and a computer for deep neural network training and calculation.
To achieve the above object, an embodiment of a third aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above method when executing the computer program.
To achieve the above object, an embodiment of a fourth aspect of the present invention proposes a non-transitory computer-readable storage medium having a computer program stored thereon; the computer program, when executed by a processor, implements the above method.
The existing clinical DSA technique must take several images in succession to perform vessel subtraction well, and therefore suffers from a large radiation dose and from motion artifacts between the images. Compared with the prior art, the present invention obtains a DSA image from only two X-ray projections at different energy spectra, by establishing a U-shaped deep neural network with a high-frequency-component structure. It overcomes the high radiation dose and motion artifacts of the two-exposure subtraction at different moments commonly adopted in current clinical DSA, is applicable to vessel subtraction in all parts of the human body, and has broad market prospects and value.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram according to a first embodiment of the present invention;
FIG. 2 is a U-shaped deep neural network structure of the above embodiment;
FIG. 3 is a high frequency component diagram of the above embodiment;
FIG. 4 is a schematic diagram of the subtraction result of the dual-energy X-ray blood vessels of the brain according to the above embodiment;
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The dual energy X-ray subtraction method and apparatus of an embodiment of the present invention are described below with reference to the accompanying drawings. First, a dual energy X-ray subtraction method proposed according to an embodiment of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a dual-energy X-ray subtraction method according to a first embodiment of the present invention.
As shown in fig. 1, the dual energy X-ray subtraction method includes the steps of:
step S10, after the contrast agent is injected, acquiring a high-energy X-ray projection image (together with the low-energy projection image) in a single X-ray scan;
step S20, inputting the high-energy X-ray projection image into a trained U-shaped deep neural network to generate a contrast-free low-energy X-ray projection image; and
step S30, subtracting the generated low-energy X-ray projection image from the acquired actual low-energy X-ray image to obtain the final DSA image.
In order to obtain X-ray projection images at two different energies for a patient injected with contrast agent, the following two approaches may be used: 1) in a single X-ray exposure, an energy-resolving detector, such as a dual-layer sandwich-structure dual-energy detector or a photon-counting detector, collects X-ray photons of different energies, yielding two or more energy-spectrum projection images at once; 2) within a very short time, for example 1 ms, two X-ray exposures with different energy spectra are made and the X-ray projections at each energy are acquired separately. Whichever acquisition mode is adopted, the method can realize DSA imaging from X-ray projections at no fewer than two different energy spectra; the specific technical scheme is illustrated below taking an acquired high-energy-spectrum image and low-energy-spectrum image as the example.
First, consider an X-ray source emitting a beam of X-rays that passes through the object along a straight line and is then collected by the detector. This process can be expressed as

I = I₀ · exp(−∫ μ(l) dl),

where I₀ is the intensity of the X-rays emitted by the source before they irradiate the scanned object; I is the intensity received by the detector after the rays have propagated along the straight line and been attenuated by the different tissues and organs; and μ denotes the equivalent linear attenuation coefficient of a given tissue for an X-ray beam of a given energy-spectrum distribution.
In X-ray DSA imaging, an area-array detector is typically used to acquire the data, and the acquired data must be converted to the line integral of the equivalent attenuation coefficient along each X-ray path, i.e. the X-ray projection value:

proj = −ln(I / I₀) = ∫ μ(l) dl.

The dual-energy X-ray DSA technique requires projection images under two different energy spectra, denoted proj_L and proj_H, the low-energy-spectrum and high-energy-spectrum X-ray projections respectively.
To address the shortcomings of the existing DSA technology, the invention proposes reducing radiation dose and motion artifacts through an energy-subtraction method whose core idea is to use a U-shaped deep neural network to learn the conversion between high- and low-energy projection images. The specific steps of the technical scheme are detailed below, taking a dual-layer detector as the example:
(1) With no scan object in place, high-energy and low-energy images of X-rays falling directly on the dual-layer detector are collected; multiple frames are usually superimposed to reduce noise.
(2) Training dual-energy data are obtained, either from physical phantoms or from clinical dual-energy images of volunteers. To keep the data representative, dual-energy images should, where conditions allow, be taken at different angles and with as many different phantoms or volunteers as possible; this part of the data serves as the training data set of the U-shaped deep neural network.
(3) Verification and test data sets are collected: a hose simulating a blood vessel is filled with contrast agent of a certain concentration, such as ioversol or NaI solution, and attached to the phantom to imitate the course of vessels in the human body. Under the same X-ray machine voltage and current parameters as in step (2), a number of contrast-containing high- and low-energy images are taken at different angles and with different phantoms, to serve as the verification set and test set of the neural network.
(4) Data preprocessing: each pair of high- and low-energy-spectrum images in the training, test and verification sets is divided by the high- and low-energy-spectrum air values acquired under the same working parameters, and the negative logarithm is taken, giving the processed projection data set.
(5) A U-shaped deep neural network is established, as shown in fig. 2. The network consists of two parts: the first half is a contraction path, built mainly from convolution and pooling, which extracts features at each level of the image; the second half is an expansion path symmetric to the contraction path, implemented mainly by upsampling. The convolution kernels of the U-shaped deep neural network are randomly initialized from a normal distribution as 3 × 3 matrices (3 × 3 is recommended; 5 × 5 and similar sizes can also be used), and padding is used so that the image size is unchanged before and after convolution; the pooling layers use 2 × 2 average pooling, which halves the image length and width, and a ReLU function is used as the activation function at the output of each layer. The number of convolution kernels starts at 32, producing the same number of feature maps, and both double with each pooling step. The upsampling part is filled by nearest-neighbor interpolation, avoiding the checkerboard effect that deconvolution can cause in the image; each upsampling doubles the image length and width and halves the number of convolution kernels and feature maps. As shown in fig. 3, the high-frequency component map is obtained by taking the feature map produced after each pooling of the U-shaped deep network, applying nearest-neighbor interpolation to restore it to the size of the original pre-pooling image, and subtracting the result from that original image.
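The high-frequency component map of fig. 3 (pool, interpolate back to the original size, subtract from the original) can be sketched in numpy; the image contents below are made up:

```python
import numpy as np

def avg_pool2x2(x):
    """2x2 average pooling of a 2-D image."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def nn_upsample2x(x):
    """Nearest-neighbour 2x upsampling back to the pre-pooling size."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def high_freq(x):
    """High-frequency component map: original image minus its pooled-and-
    reinterpolated (low-frequency) version."""
    return x - nn_upsample2x(avg_pool2x2(x))

flat = np.full((4, 4), 7.0)      # a constant image has no high frequencies
hf_flat = high_freq(flat)

edge = np.zeros((4, 4))
edge[:, 1:] = 1.0                # a vertical edge inside a pooling block
hf_edge = high_freq(edge)        # nonzero around the edge
```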
The skip-connection structure: the high-frequency component map obtained at each layer of the U-shaped deep neural network is concatenated with the corresponding upsampled image, making effective use of the feature maps of every layer and fusing features at multiple scales. A 3 × 3 convolution is then applied to the concatenated feature map. The convolution kernel of the last layer is 1 × 1 with a single output feature map, which is the predicted low-energy image corresponding to the high-energy input. The network sets an initial learning rate that decays exponentially with the number of iterations: the large initial rate helps the network converge quickly to the neighborhood of a minimum, and the small later rate helps it settle accurately into the local minimum rather than oscillating around it.
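The exponentially decaying learning rate can be sketched as follows; the initial rate, decay factor and step interval are illustrative choices, not values given in the patent:

```python
def learning_rate(step, lr0=1e-3, decay=0.95, decay_steps=100):
    """Exponentially decaying learning rate: large at first so training
    converges quickly toward a minimum, small later so it settles into
    the minimum instead of oscillating around it."""
    return lr0 * decay ** (step / decay_steps)

lr_start = learning_rate(0)       # the full initial rate
lr_later = learning_rate(1000)    # smaller after many iterations
```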
Specifically, as shown in fig. 2, the skip structure in this embodiment corresponds to the gray arrows; the copy-and-crop operation concatenates the image on the left of the arrow onto the channels of the image on the right. For example, if there are 64 feature maps on the left of the first-layer arrow and 64 on the right, concatenating the left maps yields 128; every layer is concatenated in the same way.
Originally the image on the left of the gray arrow would be concatenated directly; in this embodiment, however, the left image is first processed as shown in fig. 3, and it is the resulting high-frequency component map, rather than the original left image, that is concatenated onto the right image.
(6) The loss function is crucial to the training of the deep neural network and is set here as follows:

loss = α·L_MsSSIM + (1 − α)·L_MAE (3)
where α = 0.84 or a value near it, and L_MAE is the mean absolute error of the two images; the smaller it is, the closer the two images are:

L_MAE = (1/N) Σ |y_pred − y_true| (4)

Here y_pred is the low-energy image predicted by the network, and y_true is the true low-energy image corresponding to the high-energy image input to the network.
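The mean absolute error term can be sketched directly; the small test images are made up:

```python
import numpy as np

def l_mae(y_pred, y_true):
    """Mean absolute error between the predicted and true low-energy images."""
    return np.mean(np.abs(y_pred - y_true))

y_true = np.array([[0.0, 1.0],
                   [2.0, 3.0]])
y_pred = y_true + 0.5      # a uniform offset of 0.5 everywhere
err = l_mae(y_pred, y_true)
```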
L_MsSSIM is given by

L_MsSSIM = 1 − MsSSIM(p) (5)

where MsSSIM(p) is the multi-scale structural-similarity index

MsSSIM(p) = l_M(p)^(α_M) · Π_{j=1..M} cs_j(p)^(β_j) (6)

built from the image statistics μ_x and μ_y (the means of images x and y), σ_x² and σ_y² (their variances), and σ_xy (their covariance). l(p) is the luminance comparison factor, and cs(p) is the contrast-and-structure comparison factor between the images at different scales M, i.e. at different resolutions: M = 1 denotes the original size, at M = 2 the image length and width are each halved, and so on. MsSSIM(p) is an index of the similarity of y_pred and y_true across scales p, derived from the SSIM (Structural Similarity) index; its mathematical meaning is that an index of 0 indicates the two images are completely dissimilar, and 1 that they are completely similar. L_MsSSIM therefore expresses how dissimilar the two images are: the smaller the value, the higher the similarity. In the specific application of the present invention, μ_x is computed from the low-energy projection image produced during training, and μ_y from the corresponding true value.
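A much-simplified, single-scale stand-in for this loss can be sketched in numpy; the real MsSSIM is windowed and multi-scale, so `ssim_global` below, computed from whole-image statistics, is only illustrative:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-scale SSIM from global image statistics: a simplified
    stand-in for the windowed, multi-scale MsSSIM used in the patent."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def loss(y_pred, y_true, alpha=0.84):
    """loss = alpha * L_MsSSIM + (1 - alpha) * L_MAE, with the simplified
    single-scale SSIM above standing in for MsSSIM."""
    l_msssim = 1.0 - ssim_global(y_pred, y_true)
    l_mae = np.mean(np.abs(y_pred - y_true))
    return alpha * l_msssim + (1 - alpha) * l_mae

rng = np.random.default_rng(1)
img = rng.random((8, 8))
identical_loss = loss(img, img)       # identical images: loss near 0
different_loss = loss(img, 1 - img)   # inverted image: larger loss
```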
(7) The high-energy images of the scanned objects in the training set are input into the U-shaped deep neural network, which outputs low-energy images; the network parameters are updated by learning the mapping from the high-energy projection images to the low-energy projection images, so that the network outputs the predicted low-energy image corresponding to each high-energy image. The loss value decreases continuously over many iterations; once it merely fluctuates within a certain range and no longer decreases, it has converged, training of the U-shaped deep neural network is complete, and the optimal weights and other parameters are saved as the weight-parameter model.
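The training procedure of step (7), including the stop-on-plateau criterion and the saving of the best parameters, can be sketched as follows. The U-shaped network is replaced by a toy per-pixel affine map fitted by sub-gradient descent on L_MAE, and the paired projection data are synthetic; every name, hyperparameter, and the assumed true mapping low = 0.7·high + 0.1 are illustrative assumptions, not the patent's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic paired projections; the "true" high-to-low mapping
# low = 0.7*high + 0.1 is a hypothetical stand-in for real dual-energy data.
high_imgs = rng.random((64, 8, 8))
low_imgs = 0.7 * high_imgs + 0.1

# Toy stand-in for the U-shaped network: a per-pixel affine map w*high + b.
w, b = 1.0, 0.0
history, best_loss, best_params = [], float("inf"), (w, b)

for epoch in range(2000):
    pred = w * high_imgs + b
    loss = float(np.mean(np.abs(pred - low_imgs)))  # the L_MAE part of the loss
    history.append(loss)
    if loss < best_loss:                      # remember the optimal parameters,
        best_loss, best_params = loss, (w, b)  # to be saved as the model
    # Step (7): stop once the loss only fluctuates in a narrow band (converged)
    if epoch >= 50 and max(history[-50:]) - min(history[-50:]) < 1e-5:
        break
    lr = 0.2 / (1.0 + 0.01 * epoch)            # decaying step size
    sign = np.sign(pred - low_imgs)            # sub-gradient of the MAE
    w -= lr * float(np.mean(sign * high_imgs))
    b -= lr * float(np.mean(sign))

weight_parameter_model = {"w": best_params[0], "b": best_params[1]}
```

The same loop shape applies to the real network: forward pass, loss, parameter update, and termination when the loss plateaus rather than after a fixed epoch count.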
(8) The high-energy projection image containing the contrast agent is input into the trained U-shaped deep neural network, which outputs a predicted low-energy image; this prediction is then subtracted from the contrast-containing low-energy image acquired synchronously with the high-energy projection image. In theory, regions without contrast agent follow the mapping learned by the U-shaped deep neural network and are therefore eliminated, while regions containing contrast agent have no corresponding mapping and are retained. That is, the contrast-agent structure is obtained from the dual-energy image pair by the above subtraction.
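Step (8) reduces to one network forward pass and one image subtraction, which can be sketched as follows. The perfect linear high-to-low mapping standing in for the trained network, and all names and values, are hypothetical; in the toy example the iodine "vessel" adds attenuation only to the low-energy image, so the background cancels and the vessel survives.

```python
import numpy as np

def dsa_subtract(high_with_contrast, low_with_contrast, predict_low):
    # Step (8): predict a contrast-free low-energy image from the high-energy
    # projection, then subtract it from the measured low-energy projection.
    predicted_low = predict_low(high_with_contrast)
    return low_with_contrast - predicted_low

# Toy demonstration: background obeys low = 0.7*high (the mapping the network
# is assumed to have learned), plus an iodine "vessel" in the low-energy image.
high = np.full((4, 4), 1.0)
vessel = np.zeros((4, 4))
vessel[1:3, 1:3] = 0.2
low = 0.7 * high + vessel                         # measured low-energy image
dsa = dsa_subtract(high, low, lambda h: 0.7 * h)  # background cancels
```

Here `dsa` retains only the contrast-agent signal, which is the final DSA image of step S30.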
Thus, the dual-energy vessel subtraction method of the present invention is set forth.
Fig. 4 shows a real experimental result of applying the patented dual-energy vessel subtraction technique to cerebral vessels. As the image shows, the method yields a clear distribution image of the blood vessels filled with iodinated contrast agent, which provides effective support for clinical diagnosis by physicians and verifies the effectiveness of the patented technique.
The invention is not limited to the above embodiments. The principle of dual-energy X-ray subtraction based on a deep neural network proposed in the invention can be widely applied in this field and other related fields, and can be implemented in various other embodiments. Therefore, implementations that make simple changes or modifications while adopting the design idea of the invention fall within the protection scope of the invention.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
Claims (9)
1. A dual energy X-ray subtraction method, comprising the steps of:
step S10, acquiring two X-ray projection images of high energy and low energy through one X-ray scanning after the contrast agent is injected;
step S20, inputting the high-energy X-ray projection image containing the contrast agent into a trained U-shaped deep neural network to generate a low-energy X-ray projection image without the contrast agent, wherein the U-shaped deep neural network comprises a contraction path and an expansion path symmetrical to the contraction path, the contraction path is used for extracting the characteristics of each layer of the image, the contraction path comprises a convolution layer and a pooling layer, and the convolution kernel of the convolution layer is a randomly generated n × n matrix conforming to a normal distribution; and
step S30, subtracting the low-energy X-ray projection image without the contrast agent from the acquired low-energy X-ray image with the contrast agent to obtain a final DSA (digital subtraction angiography) image.
2. The dual-energy X-ray subtraction method according to claim 1, wherein n is an odd number and 2 ≤ n ≤ 10.
3. The dual-energy X-ray subtraction method according to claim 2, wherein the pooling layer employs average pooling with an m × m matrix, 1 ≤ m ≤ 5, halving the length and width dimensions of the image, and a linear rectification (ReLU) function is applied as the activation function at the output portion of each layer.
4. The dual-energy X-ray subtraction method according to claim 1, wherein the expansion path is implemented by upsampling, which is filled in using nearest-neighbor interpolation.
5. The dual-energy X-ray subtraction method according to claim 1, wherein the step S20 includes:
s21, collecting a high-energy X-ray air value projection image and a low-energy X-ray air value projection image which are directly irradiated on the double-layer detector by X-ray under the condition that a scanning object is not placed;
s22, under the condition that the voltage and current parameters of the X-ray machine are the same, acquiring a training data set, a verification data set and a test data set for training the U-shaped deep neural network;
the images in the verification data set and the test data set are likewise pairs of high-energy and low-energy X-ray projection images containing contrast agent;
s23, each pair of high-energy X-ray projection image and low-energy X-ray projection image of the training data set, the verification data set and the test data set is respectively divided by the high-energy spectrum air value and the low-energy spectrum air value under the same working parameters to obtain a processed projection data set; and
S24, establishing a U-shaped deep neural network.
6. The dual energy X-ray subtraction method according to claim 1, wherein the U-shaped deep neural network comprises a Loss function, denoted Loss,
loss = α·L_MsSSIM + (1 − α)·L_MAE,

wherein α = 0.84 ± 0.1, L_MAE is the mean absolute error of the two images, and L_MsSSIM expresses the degree of similarity of the two images (a smaller value indicating greater similarity).
7. An apparatus for carrying out the method of any one of claims 1 to 6, comprising an X-ray source, a detector, a contrast agent, a radiation filter, and a computer for deep neural network training and calculation.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-6 when executing the computer program.
9. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110214832.3A CN112862722B (en) | 2021-02-25 | 2021-02-25 | Dual-energy X-ray subtraction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112862722A CN112862722A (en) | 2021-05-28 |
CN112862722B true CN112862722B (en) | 2023-03-24 |
Family
ID=75991688
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114219820A (en) * | 2021-12-08 | 2022-03-22 | 苏州工业园区智在天下科技有限公司 | Neural network generation method, denoising method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106562797A (en) * | 2016-10-27 | 2017-04-19 | 南京航空航天大学 | System and method for single-exposure digital subtraction angiography imaging |
CN107157507A (en) * | 2017-06-28 | 2017-09-15 | 南京航空航天大学 | A kind of CT digital subtraction angiographys imaging system and method |
CN110163809A (en) * | 2019-03-31 | 2019-08-23 | 东南大学 | Confrontation network DSA imaging method and device are generated based on U-net |
CN111839557A (en) * | 2019-11-15 | 2020-10-30 | 苏州博思得电气有限公司 | Dual-energy exposure control method and device of X-ray high-voltage generator |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2982526C (en) * | 2015-04-13 | 2020-04-14 | Case Western Reserve University | Dual energy x-ray coronary calcium grading |
US10945695B2 (en) * | 2018-12-21 | 2021-03-16 | Canon Medical Systems Corporation | Apparatus and method for dual-energy computed tomography (CT) image reconstruction using sparse kVp-switching and deep learning |
Non-Patent Citations (5)
Title |
---|
Development of a deep neural network for generating synthetic dual-energy chest x-ray images with single x-ray exposure; Donghoon Lee et al.; 《Physics in Medicine and Biology》; 20191231; pp. 1-23 *
Dual-energy CT-based deep learning radiomics can improve lymph node metastasis risk prediction for gastric cancer; Jing Li et al.; 《GASTROINTESTINAL》; 20200117; pp. 1-10 *
Effects of X-ray scatter in quantitative dual-energy imaging using dual-layer flat panel detectors; Chumin Zhao et al.; 《Medical Imaging 2021: Physics of Medical Imaging》; 20210215; pp. 1-10 *
Research on low-dose DSA algorithms based on deep learning; Song Yu; 《China Master's Theses Full-text Database, Medicine and Health Sciences (monthly)》; 20200615 (No. 06); p. E062-36 *
Virtual dual-energy X-ray subtraction method based on a regression model of chest anatomical structures; Chen Sheng et al.; 《Journal of Image and Graphics》; 20160916; Vol. 21, No. 09; pp. 1247-1255 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||