CN114862881A - Cross-modal attention tumor segmentation method and system based on PET-CT - Google Patents
- Publication number
- CN114862881A (application number CN202210807701.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- pet
- modal
- cross
- attention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a cross-modal attention tumor segmentation method, system and device based on PET-CT (positron emission tomography-computed tomography), relating to PET-CT-based tumor segmentation in the technical field of image processing. It aims to solve the problems in the prior art that, when segmenting PET-CT multi-modal images, the fusion of the image features of the different modalities is inefficient and accurate segmentation of the tumor region is difficult to achieve. The method mainly comprises: first, extracting the features of the PET image and the CT image respectively by using a self-attention mechanism; then fusing the single-modal features of the PET image and the CT image across modalities with the self-attention mechanism to obtain cross-modal fused image features; and finally, segmenting the tumor region based on the cross-modal fused image features. Through interaction between the features of different regions, the self-attention mechanism turns the single-modal feature expressions into fused image features that carry information of different dimensions, thereby achieving efficient cross-modal fusion of the PET and CT images and accurate segmentation of the tumor region.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a cross-modal attention tumor segmentation method, system and device based on PET-CT.
Background
PET (Positron Emission Tomography) is a molecular-level imaging technique. In the PET imaging process, a radioisotope tracer is first injected into the human body; the radioisotope emits positrons as it decays inside the body, and the detector of the PET scanner reconstructs the concentration distribution of the radioisotope in the body, which reflects the metabolic condition of each tissue and organ. PET imaging is highly sensitive and specific in detecting diseased tissue and can detect metabolic changes before morphological changes occur. For example, 18F-labeled fluorodeoxyglucose (18F-FDG) is the PET tracer most commonly used in oncology at present; malignant tissue absorbs far more 18F-FDG than normal tissues and organs, so the radionuclide concentration in malignant regions is higher, which appears in the PET image as higher image intensity in the malignant regions than in normal tissues and organs. Compared with traditional imaging techniques, PET is therefore more sensitive to tumor regions and can detect diseased tissue earlier, giving it clear advantages in the early diagnosis and treatment of cancer. However, PET images have low spatial resolution and are characterized by blurring and high noise.
CT (Computed Tomography) scans the human body with X-rays; a detector receives the transmitted X-rays, and a computer finally processes them into an image. Compared with PET images, CT images have higher spatial resolution, but their structure is complex: the image intensity of a tumor region is similar to that of normal soft tissue, making the tumor region difficult to distinguish from a CT image alone.
Based on the sensitivity of PET to tumor regions and the high spatial resolution of CT, more and more tumor segmentation models based on PET-CT multi-modal images have appeared. They provide a quantitative reference for evaluating a patient's condition and formulating a treatment plan, ultimately improving the effect of treatments such as surgery, radiotherapy and chemotherapy. However, these models generally fuse the information of the different modalities with a simple image fusion strategy, applying the same weight to all voxels of the same slice, and therefore cannot fully exploit the advantages and characteristics of each modality. Research into efficiently fusing the complementary information of PET and CT images, and into a reasonable and effective PET-CT multi-modal tumor segmentation method, is thus of great significance for tumor evaluation and treatment.
Disclosure of Invention
The invention aims to provide a cross-modal attention tumor segmentation method, system and device based on PET-CT, solving the problems in the prior art that the fusion of the image features of the different modalities is inefficient and accurate segmentation of the tumor region is difficult to achieve when segmenting PET-CT multi-modal images.
The invention specifically adopts the following technical scheme for realizing the purpose:
a cross-modal attention tumor segmentation method based on PET-CT comprises the following steps:
step S1, acquiring a PET image and a CT image;
step S2, respectively extracting the characteristics in the PET image and the CT image by using a self-attention mechanism;
step S3, using a self-attention mechanism to perform cross-modal fusion on the single-modal characteristics in the PET image and the CT image to obtain cross-modal fusion image characteristics;
step S4, segmenting the tumor region based on the cross-modality fusion image features.
In step S1, a PET/CT scanner is used to acquire a registered PET image and a CT image.
In step S2, the PET image and the CT image acquired in step S1 are each divided into blocks, and the blocked images are converted from matrix to vector form to obtain vectors; the self-attention method is then applied to the vectors layer by layer to perform nonlinear transformation, yielding the single-modal image features of the PET image and the CT image.
In step S3, linear transformations W_q, W_k and W_v are applied to the single-modal image features of the PET image obtained in step S2 to obtain different vector expressions q1, k1 and v1; the vector expressions q1, k1 and v1 are rewritten into matrix form Q1, K1 and V1, and the matrix Z1 is obtained by self-attention calculation:

Z1 = softmax(Q1K1^T/√d)V1

Linear transformations W_q, W_k and W_v are applied to the single-modal image features of the CT image obtained in step S2 to obtain different vector expressions q2, k2 and v2; the vector expressions q2, k2 and v2 are rewritten into matrix form Q2, K2 and V2, and the matrix Z2 is obtained by self-attention calculation:

Z2 = softmax(Q2K2^T/√d)V2

The matrices Z1 and Z2 are superposed and fused to obtain the matrix C, i.e. C = [Z1; Z2]; linear transformations W_q, W_k and W_v are then applied to the matrix C to obtain different vector expressions q3, k3 and v3; the vector expressions q3, k3 and v3 are rewritten into matrix form Q3, K3 and V3, and the matrix Z3, namely the cross-modal fused image features, is obtained by self-attention calculation:

Z3 = softmax(Q3K3^T/√d)V3

wherein Z3 denotes the cross-modal fused image features, softmax(·) is the row-wise normalization function, T denotes the matrix transpose, d is the feature dimension, and W_q, W_k and W_v are learnable parameter matrices.
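The self-attention calculation used above can be sketched in NumPy. This is a minimal illustration of the form Z = softmax(QK^T/√d)V, not the patent's trained network; the patch count, feature dimension and random parameter matrices are hypothetical stand-ins for learned values.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: Z = softmax(Q K^T / sqrt(d)) V.
    X is an (n, d) matrix whose rows are region/patch features."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (n, n) pairwise interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (n, d) attended features

rng = np.random.default_rng(0)
n, d = 6, 8                                            # toy sizes: 6 patches, 8-dim features
X = rng.standard_normal((n, d))                        # stand-in for single-modal features
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))  # stand-ins for learnable W_q, W_k, W_v
Z = self_attention(X, Wq, Wk, Wv)
print(Z.shape)  # (6, 8)
```

Each row of the attention weight matrix sums to 1, so every output row is a convex combination of the value rows; this is what lets features of different regions interact.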
A PET-CT based cross-modal attention tumor segmentation system comprising:
the image acquisition module is used for acquiring a PET image and a CT image;
the characteristic extraction module is used for respectively extracting the characteristics in the PET image and the CT image acquired by the image acquisition module by using a self-attention mechanism;
the feature fusion module, which uses the self-attention mechanism to fuse, across modalities, the single-modal features of the PET image and the CT image extracted by the feature extraction module, obtaining cross-modal fused image features;
and the tumor segmentation module is used for segmenting the tumor region based on the cross-modal fusion image features obtained by the feature fusion module.
The image acquisition module acquires a registered PET image and a registered CT image by using a PET/CT scanner.
The feature extraction module divides the PET image and the CT image acquired by the image acquisition module into blocks respectively, and converts the blocked images from matrix to vector form to obtain vectors; the self-attention method is then applied to the vectors layer by layer to perform nonlinear transformation, yielding the single-modal image features of the PET image and the CT image.
The feature fusion module applies linear transformations W_q, W_k and W_v to the single-modal image features of the PET image obtained by the feature extraction module to obtain different vector expressions q1, k1 and v1; the vector expressions q1, k1 and v1 are rewritten into matrix form Q1, K1 and V1, and the matrix Z1 is obtained by self-attention calculation:

Z1 = softmax(Q1K1^T/√d)V1

The feature fusion module applies linear transformations W_q, W_k and W_v to the single-modal image features of the CT image obtained by the feature extraction module to obtain different vector expressions q2, k2 and v2; the vector expressions q2, k2 and v2 are rewritten into matrix form Q2, K2 and V2, and the matrix Z2 is obtained by self-attention calculation:

Z2 = softmax(Q2K2^T/√d)V2

The feature fusion module then superposes and fuses the matrices Z1 and Z2 to obtain the matrix C, i.e. C = [Z1; Z2]; linear transformations W_q, W_k and W_v are applied to the matrix C to obtain different vector expressions q3, k3 and v3; the vector expressions q3, k3 and v3 are rewritten into matrix form Q3, K3 and V3, and the matrix Z3, namely the cross-modal fused image features, is obtained by self-attention calculation:

Z3 = softmax(Q3K3^T/√d)V3

wherein Z3 denotes the cross-modal fused image features, softmax(·) is the row-wise normalization function, T denotes the matrix transpose, d is the feature dimension, and W_q, W_k and W_v are learnable parameter matrices.
A computer device comprising a memory storing a computer program and a processor implementing the steps of a PET-CT based cross-modality attention tumor segmentation method as described above when the computer program is executed.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of a PET-CT based cross-modal attention tumor segmentation method as described above.
The invention has the following beneficial effects:
1. Compared with convolution operations, the self-attention mechanism models the spatial relationships between features better, avoids the loss of feature information caused by pooling, and has greater application potential in multi-modal segmentation tasks. Through interaction between the features of different regions, the self-attention mechanism in this application turns the single-modal feature expressions into fused image features that carry information of different dimensions, achieving efficient cross-modal fusion of the PET and CT images and accurate segmentation of the tumor region.
2. In the invention, the PET/CT scanner is an integrated imaging device combining PET and CT; the PET and CT images it produces are well registered, which benefits the subsequent cross-modal feature fusion, improves the efficiency and effect of the cross-modal fusion, and facilitates accurate segmentation of the tumor region.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a self-attention mechanism calculation schematic of the present invention.
Detailed Description
Example 1
The present embodiment provides a PET-CT based cross-modal attention tumor segmentation method, as shown in FIG. 1, which comprises 4 steps: step S1, acquiring a PET image and a CT image of a patient; step S2, extracting features (single-modal image features) from the PET image and the CT image using the self-attention mechanism, on the basis of the images acquired in step S1; step S3, on the basis of the single-modal image features extracted in step S2, fusing the single-modal features of the PET image and the CT image across modalities using the self-attention mechanism to obtain cross-modal fused image features; and step S4, accurately segmenting the tumor region based on the fused cross-modal image features. The steps are explained in detail below:
step S1, acquiring a PET image and a CT image of a patient;
different from the traditional manual feature-based segmentation method, the deep neural network automatically learns how to extract task-related abstract features from data, and the extracted features have stronger expression capability and higher translation invariance. Therefore, when the PET/CT scanner is used for acquiring images, the PET/CT scanner is an integrated imaging device formed by integrating two imaging devices of PET and CT, and the obtained PET images and the obtained CT images are well registered.
Step S2, on the basis of the images, respectively extracting the characteristics in the PET images and the CT images by using a self-attention mechanism;
The PET image and the CT image obtained in step S1 are each divided into blocks (for example, 3×3), and the blocked images are converted from matrix to vector form to obtain vectors; the self-attention method is then applied to the vectors layer by layer to perform nonlinear transformation, yielding the single-modal image features of the PET image and the CT image. The self-attention method is a method of performing nonlinear transformation on a vector.
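The blocking step above can be sketched as follows. The 6×6 slice size, the 3×3 patch size and the crop-to-fit behavior are illustrative assumptions; the patent does not fix the image size.

```python
import numpy as np

def image_to_patch_vectors(img, patch=3):
    """Split a 2D slice into non-overlapping patch x patch blocks and
    flatten each block into a row vector (one row per patch)."""
    h, w = img.shape
    img = img[:h - h % patch, :w - w % patch]          # crop so dims divide evenly
    rows, cols = img.shape[0] // patch, img.shape[1] // patch
    blocks = img.reshape(rows, patch, cols, patch).transpose(0, 2, 1, 3)
    return blocks.reshape(rows * cols, patch * patch)  # (num_patches, patch*patch)

slice_ = np.arange(36, dtype=float).reshape(6, 6)      # toy 6x6 "image"
vecs = image_to_patch_vectors(slice_, patch=3)
print(vecs.shape)  # (4, 9)
```

Each row of the result is one block converted from matrix to vector form, ready for the layer-by-layer self-attention transformation described above.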
Step S3, on the basis of the single-modal image features extracted in step S2, the single-modal features of the PET image and the CT image are fused across modalities using the self-attention mechanism to obtain cross-modal fused image features;
Linear transformations W_q, W_k and W_v are applied to the single-modal image features of the PET image obtained in step S2 to obtain different vector expressions q1, k1 and v1; the vector expressions q1, k1 and v1 are rewritten into matrix form Q1, K1 and V1, and the matrix Z1 is obtained by self-attention calculation:

Z1 = softmax(Q1K1^T/√d)V1

Linear transformations W_q, W_k and W_v are applied to the single-modal image features of the CT image obtained in step S2 to obtain different vector expressions q2, k2 and v2; the vector expressions q2, k2 and v2 are rewritten into matrix form Q2, K2 and V2, and the matrix Z2 is obtained by self-attention calculation:

Z2 = softmax(Q2K2^T/√d)V2

The matrices Z1 and Z2 are superposed and fused to obtain the matrix C, i.e. C = [Z1; Z2]; linear transformations W_q, W_k and W_v are then applied to the matrix C to obtain different vector expressions q3, k3 and v3; the vector expressions q3, k3 and v3 are rewritten into matrix form Q3, K3 and V3, and the matrix Z3, namely the cross-modal fused image features, is obtained by self-attention calculation:

Z3 = softmax(Q3K3^T/√d)V3

wherein Z3 denotes the cross-modal fused image features, softmax(·) is the row-wise normalization function, T denotes the matrix transpose, d is the feature dimension, and W_q, W_k and W_v are learnable parameter matrices.
As shown in FIG. 2, the input is the single-modal features of different regions of the PET image and the CT image, comprising 3 parts; applying the linear transformations W_q, W_k and W_v yields different vector expressions q, k and v; the vector expressions q, k and v are rewritten into matrix form Q, K and V, and the matrix of fused image features is finally calculated based on the self-attention mechanism.
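The whole fusion path of FIG. 2 — intra-modal attention on each modality, superposition into C, then a third attention pass — can be sketched end-to-end as below. This is a toy NumPy illustration under assumed shapes; the separate random parameter triples per branch are stand-ins for the learnable W_q, W_k and W_v, not the patent's trained weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 8                                   # assumed: 4 patches per modality, 8-dim features

def attend(X, W):
    """One self-attention pass; W is a (Wq, Wk, Wv) triple of parameter matrices."""
    Q, K, V = (X @ w for w in W)
    s = Q @ K.T / np.sqrt(d)
    a = np.exp(s - s.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)        # row-wise softmax
    return a @ V

# Separate random parameter triples per branch (stand-ins for learned weights).
W_pet, W_ct, W_fuse = [tuple(rng.standard_normal((d, d)) for _ in range(3)) for _ in range(3)]
F_pet = rng.standard_normal((n, d))           # single-modal PET features
F_ct = rng.standard_normal((n, d))            # single-modal CT features

Z1 = attend(F_pet, W_pet)                     # intra-modal attention on PET
Z2 = attend(F_ct, W_ct)                       # intra-modal attention on CT
C = np.concatenate([Z1, Z2], axis=0)          # superpose: C = [Z1; Z2]
Z3 = attend(C, W_fuse)                        # cross-modal fused features
print(Z3.shape)  # (8, 8)
```

Because C stacks the PET and CT rows, the third attention pass lets every PET region attend to every CT region and vice versa, which is what makes the fusion cross-modal.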
And step S4, accurately segmenting the tumor region based on the cross-modal fusion image characteristics after fusion.
In this embodiment, the self-attention mechanism transforms the single-modal expressions into the fused image features through the interaction between the features of different regions, the two expressions having different dimensions. Compared with convolution operations, the self-attention mechanism models the spatial relationships between features better, avoids the loss of feature information caused by pooling, and has great application potential in multi-modal segmentation tasks. In addition, the self-attention mechanism used in this patent fuses the PET and CT image features in a learnable way, where W_q, W_k and W_v are learnable parameter matrices; by contrast, max pooling and mean pooling are fixed operator calculations with weak feature-fusion capability.
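The contrast drawn above between fixed pooling operators and learnable fusion can be made concrete: mean and max pooling have no parameters to optimize, while learnable fusion introduces a parameter matrix that training can adapt. The projection below is a hypothetical stand-in for illustration only, not the patent's attention module.

```python
import numpy as np

rng = np.random.default_rng(2)
F_pet = rng.standard_normal((4, 8))           # assumed single-modal PET features
F_ct = rng.standard_normal((4, 8))            # assumed single-modal CT features

# Fixed-operator fusion: no parameters, every voxel weighted identically.
fused_mean = (F_pet + F_ct) / 2
fused_max = np.maximum(F_pet, F_ct)

# Learnable fusion: a parameter matrix W re-weights the concatenated features;
# in the full model W would be optimized by backpropagation.
W = rng.standard_normal((16, 8))              # stand-in for a learned projection
fused_learn = np.concatenate([F_pet, F_ct], axis=1) @ W
print(fused_mean.shape, fused_max.shape, fused_learn.shape)  # (4, 8) (4, 8) (4, 8)
```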
Example 2
A PET-CT based cross-modal attention tumor segmentation system comprising:
the image acquisition module is used for acquiring a PET image and a CT image;
the characteristic extraction module is used for respectively extracting the characteristics in the PET image and the CT image acquired by the image acquisition module by using a self-attention mechanism;
the feature fusion module, which uses the self-attention mechanism to fuse, across modalities, the single-modal features of the PET image and the CT image extracted by the feature extraction module, obtaining cross-modal fused image features;
and the tumor segmentation module is used for segmenting the tumor region based on the cross-modal fusion image features obtained by the feature fusion module.
The image acquisition module acquires a registered PET image and a registered CT image by using a PET/CT scanner.
The feature extraction module divides the PET image and the CT image acquired by the image acquisition module into blocks respectively, and converts the blocked images from matrix to vector form to obtain vectors; the self-attention method is then applied to the vectors layer by layer to perform nonlinear transformation, yielding the single-modal image features of the PET image and the CT image. The self-attention method is a method of performing nonlinear transformation on a vector.
The feature fusion module applies linear transformations W_q, W_k and W_v to the single-modal image features of the PET image obtained by the feature extraction module to obtain different vector expressions q1, k1 and v1; the vector expressions q1, k1 and v1 are rewritten into matrix form Q1, K1 and V1, and the matrix Z1 is obtained by self-attention calculation:

Z1 = softmax(Q1K1^T/√d)V1

The feature fusion module applies linear transformations W_q, W_k and W_v to the single-modal image features of the CT image obtained by the feature extraction module to obtain different vector expressions q2, k2 and v2; the vector expressions q2, k2 and v2 are rewritten into matrix form Q2, K2 and V2, and the matrix Z2 is obtained by self-attention calculation:

Z2 = softmax(Q2K2^T/√d)V2

The feature fusion module then superposes and fuses the matrices Z1 and Z2 to obtain the matrix C, i.e. C = [Z1; Z2]; linear transformations W_q, W_k and W_v are applied to the matrix C to obtain different vector expressions q3, k3 and v3; the vector expressions q3, k3 and v3 are rewritten into matrix form Q3, K3 and V3, and the matrix Z3, namely the cross-modal fused image features, is obtained by self-attention calculation:

Z3 = softmax(Q3K3^T/√d)V3

wherein Z3 denotes the cross-modal fused image features, softmax(·) is the row-wise normalization function, T denotes the matrix transpose, d is the feature dimension, and W_q, W_k and W_v are learnable parameter matrices.
Example 3
The present embodiment provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the PET-CT based cross-modal attention tumor segmentation method according to embodiment 1 when executing the computer program.
Example 4
The present embodiment provides a computer readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the PET-CT based cross-modal attention tumor segmentation method of embodiment 1.
Claims (8)
1. A PET-CT-based cross-modal attention tumor segmentation method is characterized by comprising the following steps:
step S1, acquiring a PET image and a CT image;
step S2, respectively extracting the characteristics in the PET image and the CT image by using a self-attention mechanism;
step S3, using a self-attention mechanism to perform cross-modal fusion on the single-modal characteristics in the PET image and the CT image to obtain cross-modal fusion image characteristics;
step S4, segmenting the tumor region based on the cross-modality fusion image features.
2. The PET-CT based cross-modal attention tumor segmentation method of claim 1, wherein: in step S1, the registered PET image and CT image are acquired using a PET/CT scanner.
3. The PET-CT based cross-modal attention tumor segmentation method of claim 1, wherein: in step S2, the PET image and the CT image acquired in step S1 are each divided into blocks, and the blocked images are converted from matrix to vector form to obtain vectors; the self-attention method is then applied to the vectors layer by layer to perform nonlinear transformation, yielding the single-modal image features of the PET image and the CT image.
4. The PET-CT based cross-modal attention tumor segmentation method of claim 1, wherein: in step S3, linear transformations W_q, W_k and W_v are applied to the single-modal image features of the PET image obtained in step S2 to obtain different vector expressions q1, k1 and v1; the vector expressions q1, k1 and v1 are rewritten into matrix form Q1, K1 and V1, and the matrix Z1 is obtained by self-attention calculation:

Z1 = softmax(Q1K1^T/√d)V1

Linear transformations W_q, W_k and W_v are applied to the single-modal image features of the CT image obtained in step S2 to obtain different vector expressions q2, k2 and v2; the vector expressions q2, k2 and v2 are rewritten into matrix form Q2, K2 and V2, and the matrix Z2 is obtained by self-attention calculation:

Z2 = softmax(Q2K2^T/√d)V2

The matrices Z1 and Z2 are superposed and fused to obtain the matrix C, i.e. C = [Z1; Z2]; linear transformations W_q, W_k and W_v are then applied to the matrix C to obtain different vector expressions q3, k3 and v3; the vector expressions q3, k3 and v3 are rewritten into matrix form Q3, K3 and V3, and the matrix Z3, namely the cross-modal fused image features, is obtained by self-attention calculation:

Z3 = softmax(Q3K3^T/√d)V3
5. A PET-CT based cross-modal attention tumor segmentation system, comprising:
the image acquisition module is used for acquiring a PET image and a CT image;
the characteristic extraction module is used for respectively extracting the characteristics in the PET image and the CT image acquired by the image acquisition module by using a self-attention mechanism;
the feature fusion module, which uses the self-attention mechanism to fuse, across modalities, the single-modal features of the PET image and the CT image extracted by the feature extraction module, obtaining cross-modal fused image features;
and the tumor segmentation module is used for segmenting the tumor region based on the cross-modal fusion image features obtained by the feature fusion module.
6. The PET-CT based cross-modal attention tumor segmentation system of claim 5, wherein the image acquisition module acquires the registered PET image and CT image using a PET/CT scanner.
7. The PET-CT-based cross-modal attention tumor segmentation system of claim 5, wherein the feature extraction module divides the PET image and the CT image acquired by the image acquisition module into blocks respectively, and converts the blocked images from matrix to vector form to obtain vectors; the self-attention method is then applied to the vectors layer by layer to perform nonlinear transformation, yielding the single-modal image features of the PET image and the CT image.
8. The PET-CT-based cross-modal attention tumor segmentation system of claim 5, wherein the feature fusion module applies linear transformations W_q, W_k and W_v to the single-modal image features of the PET image obtained by the feature extraction module to obtain different vector expressions q1, k1 and v1; the vector expressions q1, k1 and v1 are rewritten into matrix form Q1, K1 and V1, and the matrix Z1 is obtained by self-attention calculation:

Z1 = softmax(Q1K1^T/√d)V1

The feature fusion module applies linear transformations W_q, W_k and W_v to the single-modal image features of the CT image obtained by the feature extraction module to obtain different vector expressions q2, k2 and v2; the vector expressions q2, k2 and v2 are rewritten into matrix form Q2, K2 and V2, and the matrix Z2 is obtained by self-attention calculation:

Z2 = softmax(Q2K2^T/√d)V2

The feature fusion module then superposes and fuses the matrices Z1 and Z2 to obtain the matrix C, i.e. C = [Z1; Z2]; linear transformations W_q, W_k and W_v are applied to the matrix C to obtain different vector expressions q3, k3 and v3; the vector expressions q3, k3 and v3 are rewritten into matrix form Q3, K3 and V3, and the matrix Z3, namely the cross-modal fused image features, is obtained by self-attention calculation:

Z3 = softmax(Q3K3^T/√d)V3
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210807701.0A CN114862881A (en) | 2022-07-11 | 2022-07-11 | Cross-modal attention tumor segmentation method and system based on PET-CT |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114862881A true CN114862881A (en) | 2022-08-05 |
Family
ID=82627039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210807701.0A Pending CN114862881A (en) | 2022-07-11 | 2022-07-11 | Cross-modal attention tumor segmentation method and system based on PET-CT |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114862881A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109503590A (en) * | 2018-11-22 | 2019-03-22 | 四川大学华西医院 | Using 7-deazaadenine base as mother nucleus18F-PET/CT tracer agent and preparation method thereof |
US20190114773A1 (en) * | 2017-10-13 | 2019-04-18 | Beijing Curacloud Technology Co., Ltd. | Systems and methods for cross-modality image segmentation |
CN113035334A (en) * | 2021-05-24 | 2021-06-25 | 四川大学 | Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma |
CN113496495A (en) * | 2021-06-25 | 2021-10-12 | 华中科技大学 | Medical image segmentation model building method capable of realizing missing input and segmentation method |
CN113951866A (en) * | 2021-10-28 | 2022-01-21 | 北京深睿博联科技有限责任公司 | Deep learning-based uterine fibroid diagnosis method and device |
WO2022032823A1 (en) * | 2020-08-10 | 2022-02-17 | 中国科学院深圳先进技术研究院 | Image segmentation method, apparatus and device, and storage medium |
CN114266726A (en) * | 2021-11-22 | 2022-04-01 | 中国科学院深圳先进技术研究院 | Medical image segmentation method, system, terminal and storage medium |
US20220108478A1 (en) * | 2020-10-02 | 2022-04-07 | Google Llc | Processing images using self-attention based neural networks |
CN114359642A (en) * | 2022-01-12 | 2022-04-15 | 大连理工大学 | Multi-modal medical image multi-organ positioning method based on one-to-one target query Transformer |
Non-Patent Citations (7)
Title |
---|
BICAO LI et al.: "CSpA-DN: Channel and Spatial Attention Dense Network for Fusing PET and MRI Images", 2020 25th International Conference on Pattern Recognition *
LEI Y et al.: "Low-dose PET imaging with CT-aided cycle-consistent adversarial networks", Medical Imaging 2020: Physics of Medical Imaging *
WANG JIANYONG et al.: "A new delay connection for long short-term memory networks", International Journal of Neural Systems *
FENG Nuo et al.: "Automatic classification of liver tumors with feature reuse and attention mechanism", Journal of Image and Graphics *
ZHU Chenguang: "Machine Reading Comprehension: Algorithms and Practice", China Machine Press, 30 April 2020 *
LI Lin et al.: "Application value of PET/CT true whole-body imaging in patients with extranodal NK/T-cell lymphoma (nasal type)", Journal of Sichuan University (Medical Science Edition) *
SHI Lei et al.: "A survey of attention mechanisms in natural language processing", Data Analysis and Knowledge Discovery *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis | |
Wang et al. | Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods | |
Wang et al. | Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation | |
Spuhler et al. | Full‐count PET recovery from low‐count image using a dilated convolutional neural network | |
Kang et al. | Prediction of standard‐dose brain PET image by using MRI and low‐dose brain [18F] FDG PET images | |
Lin et al. | Deep learning based automatic segmentation of metastasis hotspots in thorax bone SPECT images | |
Cheng et al. | Applications of artificial intelligence in nuclear medicine image generation | |
WO2016033458A1 (en) | Restoring image quality of reduced radiotracer dose positron emission tomography (pet) images using combined pet and magnetic resonance (mr) | |
Chowdhury et al. | Concurrent segmentation of the prostate on MRI and CT via linked statistical shape models for radiotherapy planning | |
Yang et al. | A hybrid approach for fusing 4D‐MRI temporal information with 3D‐CT for the study of lung and lung tumor motion | |
Xie et al. | Anatomically aided PET image reconstruction using deep neural networks | |
CN110415310A (en) | Medical scanning imaging method, device, storage medium and computer equipment | |
Jin et al. | Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients | |
Ao et al. | Improved dosimetry for targeted radionuclide therapy using nonrigid registration on sequential SPECT images | |
Tong et al. | Disease quantification on PET/CT images without explicit object delineation | |
Lin et al. | Classifying functional nuclear images with convolutional neural networks: a survey | |
Toyonaga et al. | Deep learning–based attenuation correction for whole-body PET—a multi-tracer study with 18F-FDG, 68 Ga-DOTATATE, and 18F-Fluciclovine | |
Torkaman et al. | Direct image-based attenuation correction using conditional generative adversarial network for SPECT myocardial perfusion imaging | |
Bauer et al. | Automated measurement of uptake in cerebellum, liver, and aortic arch in full‐body FDG PET/CT scans | |
Amirkolaee et al. | Development of a GAN architecture based on integrating global and local information for paired and unpaired medical image translation | |
Sanaat et al. | A cycle-consistent adversarial network for brain PET partial volume correction without prior anatomical information | |
Li et al. | Adaptive 3D noise level‐guided restoration network for low‐dose positron emission tomography imaging | |
Wang et al. | 3D multi-modality Transformer-GAN for high-quality PET reconstruction | |
CN114862881A (en) | Cross-modal attention tumor segmentation method and system based on PET-CT | |
Lei et al. | Estimating standard-dose PET from low-dose PET with deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20220805 |