KR102441033B1 - Deep-learning based limited-angle computed tomography image reconstruction system - Google Patents

Info

Publication number
KR102441033B1
KR102441033B1
Authority
KR
South Korea
Prior art keywords
image
learning model
deep learning
layer
dataset
Prior art date
Application number
KR1020200160978A
Other languages
Korean (ko)
Other versions
KR20220073156A (en)
Inventor
이승완
임도빈
Original Assignee
건양대학교 산학협력단 (Konyang University Industry-Academic Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 건양대학교 산학협력단 (Konyang University Industry-Academic Cooperation Foundation)
Priority to KR1020200160978A
Publication of KR20220073156A
Application granted
Publication of KR102441033B1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02: Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03: Computed tomography [CT]
    • A61B6/032: Transmission computed tomography [CT]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10081: Computed x-ray tomography [CT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]


Abstract

The present invention relates to a deep-learning-based limited-angle computed tomography (CT) image reconstruction system that trains a deep learning model on high-quality original CT images and on training sinograms converted from them, iterating until the difference between each input original CT image and the model's output image is minimized, thereby reducing the structural defects and distortion that arise when reconstructing images acquired over a limited angular range.

Description

Deep-learning based limited-angle computed tomography image reconstruction system

The present invention relates to computed tomography (CT) image reconstruction. More specifically, it relates to a deep-learning-based limited-angle CT image reconstruction system that trains a deep learning model on high-quality original CT images and on training sinograms converted from them, iterating until the difference between each input original CT image and the model's output image is minimized, so that structural defects and distortion are reduced when reconstructing images acquired over a limited angular range.

Computed tomography (CT) is a medical imaging technique that non-invasively acquires high-resolution cross-sectional images of the human body, making it useful for observing internal structures such as bones, organs, and blood vessels. An X-ray source rotates inside a cylindrical gantry and images from many angles, so that transverse cross-sections through the body can be acquired effectively.

Compared with plain X-ray radiography, CT suffers far less from overlapping structures, so structures and lesions can be seen more clearly; it is therefore widely used for the detailed examination of most organs and diseases. Because image quality is a critical factor in the accurate diagnosis of lesions, efforts to improve CT image quality have continued alongside the development of CT systems.

However, improving CT image quality entails acquiring hundreds of projections from many angles, which results in a high radiation dose. Given growing public awareness of radiation exposure, efforts to obtain high-quality diagnostic images are therefore accompanied by efforts to minimize the dose.

As part of these efforts, limited-angle CT, in which X-ray projections acquired over a restricted angular range are reconstructed, has been actively studied.

Fig. 1 illustrates the artifacts that arise in limited-angle CT reconstructed with FBP, showing a ground-truth image acquired over the full angular range, a limited-angle CT image reconstructed with filtered back-projection (FBP), and the difference (error) between them.

Limited-angle CT can acquire images at a comparatively low radiation dose, but because the data are incomplete, images reconstructed with conventional algorithms such as FBP can suffer severe loss and distortion of structural information, as shown in Fig. 1.
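The incompleteness of limited-angle data can be made concrete with a minimal sinogram simulation. The sketch below (NumPy, nearest-neighbour rotation, a simple disc phantom, all illustrative assumptions rather than the patent's simulator) shows that restricting the view range simply removes rows from the sinogram; that missing information is what FBP cannot recover.

```python
import numpy as np

def radon(img, angles_deg):
    """Minimal Radon transform: rotate the image and sum along columns.

    Nearest-neighbour sampling; for illustration only."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((len(angles_deg), n))
    for k, a in enumerate(np.deg2rad(angles_deg)):
        # rotate the sampling grid by angle a around the image centre
        xr = np.cos(a) * (xs - c) - np.sin(a) * (ys - c) + c
        yr = np.sin(a) * (xs - c) + np.cos(a) * (ys - c) + c
        xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
        inside = (xr >= 0) & (xr <= n - 1) & (yr >= 0) & (yr <= n - 1)
        rot = np.where(inside, img[yi, xi], 0.0)
        sino[k] = rot.sum(axis=0)  # line integrals along each detector column
    return sino

# simple disc phantom standing in for a CT slice
n = 64
ys, xs = np.mgrid[0:n, 0:n]
phantom = (((xs - n / 2) ** 2 + (ys - n / 2) ** 2) < (n / 4) ** 2).astype(float)

full = radon(phantom, np.arange(0, 180, 3))     # full-angle: 60 views
limited = radon(phantom, np.arange(0, 120, 3))  # limited-angle: only 40 views
print(full.shape, limited.shape)  # (60, 64) (40, 64)
```

The limited-angle sinogram has fewer rows than the full-angle one; no filtering step can restore the projections that were never measured, which is why a learned reconstruction is attractive here.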

Effective reconstruction techniques capable of producing high-quality limited-angle CT images are therefore required.

Deep learning, meanwhile, is a data-driven computing paradigm in which a model learns on its own from a given training dataset to achieve a specific objective, and it has demonstrated outstanding performance in many image processing fields.

For a deep learning model to perform well, however, a sufficiently large training dataset must be secured, and the training dataset must be constructed appropriately for the result the model is meant to achieve.

In particular, for medical imaging, where stability and accuracy are paramount, developing effective deep learning models that satisfy these prerequisites is all the more important.

Korean Patent Registration No. 10-2039472 (Oct. 28, 2019)

The present invention was conceived in response to the needs described above. Its object is to provide a deep-learning-based limited-angle CT image reconstruction system that trains a deep learning model on pairs of high-quality original CT images and training sinograms converted from them, so that sinograms acquired over a limited angular range can be reconstructed at high quality, yielding high-quality CT images while minimizing radiation exposure.

To this end, the present invention comprises: a learning model generator that creates a deep learning model which reconstructs an input sinogram into a restored image, by setting hyperparameters and a layer structure ordered as fully connected layers, convolution layers, and a deconvolution layer; a training data generator that produces a second dataset of sinograms converted from a first dataset composed of high-quality original CT images; a training unit that feeds each original CT image of the first dataset and the corresponding sinogram of the second dataset into the generated deep learning model and trains the model so that the difference between the input original CT image and the model's output image is minimized; and an optimization unit that optimizes the weight values of each layer constituting the deep learning model.

Here, the training unit preferably trains the deep learning model iteratively with the loss function of [Equation 1] below, so that the difference between the input original CT image and the model's output image is minimized.

[Equation 1]

Figure 112020127574811-pat00001

(G is the ground-truth image, I is the output image of the deep learning model, and n is the number of training samples.)

Here, the learning model generator preferably uses the hyperbolic tangent (tanh) and the rectified linear unit (ReLU) as the activation functions of the fully connected layers and the convolution layers, respectively, uses adaptive moment estimation (Adam) as the optimizer, and sets the learning rate to 10⁻⁴.

Further, the learning model generator preferably builds the model from three fully connected layers, five convolution layers, and one deconvolution layer, where the first and second convolution layers use 3×3 kernels, the third through fifth convolution layers use 5×5 kernels, and the deconvolution layer uses a 7×7 kernel.

Through the present invention, high-quality CT images with reduced defects and distortion can be obtained from limited-angle CT data, and the radiation dose that was previously unavoidable for high-quality CT imaging can be greatly reduced.

Fig. 1 is an example of the artifacts that arise in limited-angle CT reconstructed with FBP.
Fig. 2 is a conceptual diagram of the structure of the deep learning neural network for limited-angle CT image reconstruction according to the present invention.
Fig. 3 is a block diagram showing the configuration and connections of an embodiment of the present invention.
Fig. 4 is a flowchart of the image reconstruction process according to an embodiment of the present invention.
Fig. 5 compares the ground-truth image with the conventional FBP method and with an image produced by the present invention.

Hereinafter, the configuration of the deep-learning-based limited-angle CT image reconstruction system of the present invention is described in detail with reference to the accompanying drawings.

Fig. 2 is a conceptual diagram of the deep learning neural network structure for limited-angle CT image reconstruction according to the present invention; Fig. 3 is a block diagram of the configuration and connections of an embodiment of the present invention; and Fig. 4 is a flowchart of the image reconstruction process of that embodiment.

The present invention reconstructs a limited-angle CT image into an image close to the ground-truth image acquired over the full angular range, with structural defects and distortion greatly reduced compared with the prior art (FBP). To this end it uses the deep learning model structure shown in Fig. 2 and proceeds through the following components and steps.

In the first step (S110), a deep learning model is generated: the learning model generator (110) creates a deep learning model that reconstructs an input sinogram into a restored image by setting hyperparameters and a layer structure ordered as fully connected layers (FC), convolution layers (Conv), and a deconvolution layer (Deconv).

In a preferred embodiment of the present invention, the layer structure of the deep learning model consists of three fully connected layers, five convolution layers, and one deconvolution layer.

A fully connected layer is a layer in which every neuron is connected to every neuron of the adjacent layers. Because a sinogram contains all the information about the image to be reconstructed, fully connected layers are used so that all of this data can be exploited during training.

That is, the fully connected layers take in every pixel of the input data and pass each pixel's information on to the next layer, and each sinogram sample propagates information to every pixel of the output image, enabling the transformation from the signal domain to the image domain.

Conventionally, either only fully connected layers are used, or they are placed toward the end of the layer structure; one of the important features of the present invention is that the fully connected layers are placed at the front of the layer structure.

The convolution layers extract the key information of the image. In the embodiment of the present invention, the 512×512-pixel image is processed kernel by kernel, extracting features that are passed to the next layer. Of the five convolution layers, the first and second use 3×3 kernels, and the third through fifth use 5×5 kernels.

The deconvolution layer expands the data reduced by the preceding convolution layers and uses a 7×7 kernel.

The fully connected layers and the convolution layers use the hyperbolic tangent (tanh) and the rectified linear unit (ReLU), respectively, as activation functions. Adaptive moment estimation (Adam) was used as the optimizer, and the learning rate was set to 10⁻⁴.
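As a concrete illustration of this FC, Conv, Deconv ordering, the layer stack can be sketched in PyTorch. This is a hedged sketch, not the patented implementation: the patent does not disclose channel counts, padding, or the sinogram dimensions, so those (and the downscaled 64×64 image grid, chosen to keep the fully connected layers small) are assumptions.

```python
import torch
import torch.nn as nn

class LimitedAngleNet(nn.Module):
    """Sketch of the FC -> Conv -> Deconv layer order described in the text.

    Sizes are illustrative (64x64 instead of 512x512); channel counts and
    padding are assumptions, as the patent does not disclose them."""
    def __init__(self, sino_len=40 * 64, img_side=64):
        super().__init__()
        self.img_side = img_side
        h = img_side * img_side
        # three fully connected layers with tanh, mapping the sinogram
        # (signal domain) into the image domain
        self.fc = nn.Sequential(
            nn.Linear(sino_len, h), nn.Tanh(),
            nn.Linear(h, h), nn.Tanh(),
            nn.Linear(h, h), nn.Tanh(),
        )
        # five convolution layers with ReLU: 3x3, 3x3, 5x5, 5x5, 5x5 kernels
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        # one deconvolution (transposed convolution) layer with a 7x7 kernel
        self.deconv = nn.ConvTranspose2d(16, 1, 7, padding=3)

    def forward(self, sino):
        x = self.fc(sino.flatten(1))
        x = x.view(-1, 1, self.img_side, self.img_side)
        return self.deconv(self.conv(x))

model = LimitedAngleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr 10^-4

sino = torch.randn(2, 40 * 64)  # batch of 2 flattened limited-angle sinograms
out = model(sino)
print(out.shape)  # torch.Size([2, 1, 64, 64])
```

At the full 512×512 scale, a dense mapping between the sinogram and the image domain would hold billions of weights, which is why the sketch uses a reduced grid.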

In the second step (S120), training data are generated: a training dataset for high-quality reconstruction of limited-angle CT images is built, so that the deep learning model can be applied effectively to medical images, where stability is especially important.

To this end, the training data generator (120) takes the first dataset, composed of high-quality original CT images, and produces a second dataset of sinograms converted from it, applying a Radon transform simulator to the input first dataset to generate the second training dataset.

In the third step (S130), the deep learning model is trained: the model generated by the learning model generator (110) is trained with the first and second datasets produced by the training data generator (120), so that it acquires the ability to reconstruct a limited-angle sinogram at high quality.

To this end, the training unit (130) feeds the deep learning model with pairs consisting of each original (ground-truth) CT image of the first dataset and the corresponding limited-angle sinogram of the second dataset, and trains the model iteratively with the loss function of [Equation 1] below, so that the difference between the input original CT image and the model's output image is minimized.

Figure 112020127574811-pat00002

Here, G is the ground-truth image, I is the output image of the deep learning model, and n is the number of training samples.
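In the published record, [Equation 1] is embedded as an image rather than as text. Given the stated variables and the goal of minimizing the difference between G and I, it is presumably a mean-squared-error loss of the following form; this is a reconstruction under that assumption, not a verbatim copy of the patent's equation:

```latex
L = \frac{1}{n} \sum_{i=1}^{n} \left( G_i - I_i \right)^2
```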

The optimization unit (140) then optimizes the weight values of each layer constituting the deep learning model through back-propagation, based on the error given by the loss function.
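The training-and-backpropagation procedure described above can be sketched as follows. The model here is a stand-in single linear layer and the data are random, both purely illustrative, but the MSE loss, the Adam optimizer, and the 10⁻⁴ learning rate follow the text.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# stand-in model: one linear layer keeps the example tiny; the actual
# system uses the FC/Conv/Deconv network described above
model = nn.Linear(16, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, learning rate 10^-4
loss_fn = nn.MSELoss()                               # mean of (G - I)^2

sino = torch.randn(8, 16)  # illustrative batch of (flattened) sinograms
gt = torch.randn(8, 16)    # paired ground-truth images G

losses = []
for step in range(200):
    out = model(sino)        # model output I
    loss = loss_fn(out, gt)  # difference between G and I
    opt.zero_grad()
    loss.backward()          # back-propagate the loss-function error
    opt.step()               # update the layer weight values
    losses.append(loss.item())

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Each iteration of the loop is one pass of the training unit (130) plus one weight update by the optimization unit (140); over the 200 steps the recorded loss decreases.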

Fig. 5 compares the ground-truth image with the conventional FBP method and with the present invention: (a) is the ground-truth image acquired from all angles, (b) is an image reconstructed with the conventional FBP method, and (c) is an image produced by the deep-learning-based limited-angle CT image reconstruction method according to the present invention.

As shown in Fig. 5, applying the deep-learning-based limited-angle CT image reconstruction method of the present invention greatly reduces structural defects and distortion compared with the image produced by the conventional FBP method.

The scope of the present invention is not limited to the embodiments described above but is defined by the claims, and it is self-evident that a person of ordinary skill in the art can make various modifications and adaptations within the scope of the claims.

110: learning model generator 120: training data generator
130: training unit 140: optimization unit

Claims (4)

1. A computed tomography image reconstruction system, comprising:
a learning model generator (110) that creates a deep learning model reconstructing an input sinogram into a restored image by setting hyperparameters and a layer structure ordered as three fully connected layers, five convolution layers, and one deconvolution layer, wherein the first and second convolution layers use 3×3 kernels, the third through fifth convolution layers use 5×5 kernels, and the deconvolution layer uses a 7×7 kernel; the fully connected layers and the convolution layers use the hyperbolic tangent (tanh) and the rectified linear unit (ReLU), respectively, as activation functions; adaptive moment estimation (Adam) is used as the optimizer; and the learning rate is set to 10⁻⁴;
a training data generator (120) that generates a second dataset of sinograms converted from a first dataset composed of high-quality original CT images;
a training unit (130) that inputs each original CT image of the first dataset and the corresponding sinogram of the second dataset into the generated deep learning model and trains the model with the loss function of [Equation 1] below, so that the difference between the input original CT image and the output image of the deep learning model is minimized; and
an optimization unit (140) that optimizes the weight values of each layer constituting the deep learning model.

[Equation 1]
Figure 112022047583566-pat00009

(G is the ground-truth image, I is the output image of the deep learning model, and n is the number of training samples.)

2. (Deleted)
3. (Deleted)
4. (Deleted)
KR1020200160978A 2020-11-26 2020-11-26 Deep-learning based limited-angle computed tomography image reconstruction system KR102441033B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020200160978A KR102441033B1 (en) 2020-11-26 2020-11-26 Deep-learning based limited-angle computed tomography image reconstruction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020200160978A KR102441033B1 (en) 2020-11-26 2020-11-26 Deep-learning based limited-angle computed tomography image reconstruction system

Publications (2)

Publication Number Publication Date
KR20220073156A KR20220073156A (en) 2022-06-03
KR102441033B1 (en) 2022-09-05

Family

ID=81983057

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020200160978A KR102441033B1 (en) 2020-11-26 2020-11-26 Deep-learning based limited-angle computed tomography image reconstruction system

Country Status (1)

Country Link
KR (1) KR102441033B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102659991B1 (en) * 2022-09-16 2024-04-22 연세대학교 원주산학협력단 System for CT imaging brain in ambulance and method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200043204A1 (en) 2018-08-06 2020-02-06 General Electric Company Iterative image reconstruction framework
JP2020168353A (en) * 2019-04-01 2020-10-15 キヤノンメディカルシステムズ株式会社 Medical apparatus and program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10475214B2 (en) * 2017-04-05 2019-11-12 General Electric Company Tomographic reconstruction based on deep learning
KR102039472B1 (en) 2018-05-14 2019-11-01 연세대학교 산학협력단 Device and method for reconstructing computed tomography image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200043204A1 (en) 2018-08-06 2020-02-06 General Electric Company Iterative image reconstruction framework
JP2020168353A (en) * 2019-04-01 2020-10-15 キヤノンメディカルシステムズ株式会社 Medical apparatus and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bo Zhou, "Limited Angle Tomography Reconstruction: Synthetic Reconstruction via Unsupervised Sinogram Adaptation", Information Processing in Medical Imaging (2019)*

Also Published As

Publication number Publication date
KR20220073156A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
US11120582B2 (en) Unified dual-domain network for medical image formation, recovery, and analysis
Kang et al. Deep convolutional framelet denosing for low-dose CT via wavelet residual network
He et al. Radon inversion via deep learning
JP7234064B2 (en) Iterative image reconstruction framework
Zhou et al. DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography
CN107481297B (en) CT image reconstruction method based on convolutional neural network
Yuan et al. SIPID: A deep learning framework for sinogram interpolation and image denoising in low-dose CT reconstruction
US20210007695A1 (en) Apparatus and method using physical model based deep learning (dl) to improve image quality in images that are reconstructed using computed tomography (ct)
Ko et al. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module
KR102039472B1 (en) Device and method for reconstructing computed tomography image
AU2019271915A1 (en) Method and system for motion correction in CT imaging
CN112419173A (en) Deep learning framework and method for generating CT image from PET image
EP3739522A1 (en) Deep virtual contrast
Mizusawa et al. Computed tomography image reconstruction using stacked U-Net
Fournié et al. CT field of view extension using combined channels extension and deep learning methods
Jang et al. Head motion correction based on filtered backprojection for x‐ray CT imaging
KR102441033B1 (en) Deep-learning based limited-angle computed tomography image reconstruction system
CN110599530B (en) MVCT image texture enhancement method based on double regular constraints
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
JP2022161857A (en) System and method for utilizing deep learning network to correct bad pixel in computed tomography detector
Gong et al. Low-dose dual energy CT image reconstruction using non-local deep image prior
CN111860836A (en) Self-supervision learning method and application
Chang et al. Deep learning image transformation under radon transform
JP2024507766A (en) Contrast enhancement using machine learning
WO2022094779A1 (en) Deep learning framework and method for generating ct image from pet image

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant