CN117594235A - Advanced high-grade serous ovarian cancer HRD state prediction method based on TransPyramid model - Google Patents


Info

Publication number
CN117594235A
Authority
CN
China
Prior art keywords
region
model
hrd
ovarian cancer
tumor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311548338.6A
Other languages
Chinese (zh)
Inventor
李海明
刘再毅
郭勤浩
顾雅佳
崔艳芬
林佳泰
韩楚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University Shanghai Cancer Center
Original Assignee
Fudan University Shanghai Cancer Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University Shanghai Cancer Center
Priority to CN202311548338.6A priority Critical patent/CN117594235A/en
Publication of CN117594235A publication Critical patent/CN117594235A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention relates to a method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model. The method segments the HGSOC tumor lesion, divides the MRI image into a tumor region and other regions, extracts features from each region separately, inputs all sub-region features together into a Transformer model for feature fusion, and passes the fused multi-sequence MRI image features through a fully connected layer to obtain the HRD state prediction result, thereby predicting the HRD state of advanced high-grade serous ovarian cancer. Compared with the prior art, the method uses TransPyramid features of multi-sequence MRI images, which provide richer and more sensitive features than CT image features for the HRD state prediction task in HGSOC patients; it constructs a multi-scale sub-region image pyramid to extract multi-scale image information, increasing the amount of feature information and thereby improving the predictive performance of the prediction model.

Description

Advanced high-grade serous ovarian cancer HRD state prediction method based on TransPyramid model
Technical Field
The invention relates to the technical field of medical image artificial intelligence, in particular to a method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model.
Background
Ovarian cancer is one of the most common gynecological malignancies; its mortality ranks first among malignant tumors of the female reproductive system and seriously threatens women's health in China. High-grade serous ovarian carcinoma (HGSOC) is the most common and most aggressive histological subtype of ovarian cancer, and about 70% of patients are already at an advanced stage at first presentation. According to the National Comprehensive Cancer Network (NCCN) guidelines, the first-line treatment for advanced HGSOC remains cytoreductive (debulking) surgery combined with platinum-based chemotherapy. Although the clinical remission rate of this treatment can reach 80%, 70% of patients relapse and develop drug resistance within two to three years of initial treatment, and the 5-year survival rate is only about 30-40%. The overall management of advanced HGSOC therefore faces great treatment difficulty and poor overall prognosis.
In recent years, with the advent of poly(adenosine diphosphate-ribose) polymerase (PARP) inhibitors (PARPi), the treatment paradigm for advanced ovarian cancer has moved from therapeutic concept into clinical practice, fully entering a new mode of "surgery + chemotherapy + maintenance therapy". A body of high-level evidence-based medicine indicates that first-line PARPi maintenance therapy has become the standard of care after complete or partial remission of initial treatment in patients with advanced ovarian cancer, especially in patients with a positive HRD (homologous recombination deficiency) score, significantly prolonging progression-free survival (PFS) and improving prognosis. The HRD state therefore plays an important role as a molecular marker in the management of advanced HGSOC patients, helping to stratify patients and to identify the population that can truly benefit from PARPi therapy.
Currently, HRD testing in clinical patients is mostly performed on recently obtained tumor tissue, acquired by invasive surgery or needle biopsy and then subjected to next-generation sequencing. However, HRD testing lacks a unified evaluation method, and both the consistency of the detection results and the predictive performance of the scoring algorithms lack standardized validation. Furthermore, the complex heterogeneity of tumors can cause surgical samples or needle biopsies to miss lesions, creating an obstacle to selecting the corresponding targeted drug. Fully characterizing tumor heterogeneity would require clinicians to take multiple samples, which limits invasive biopsy-based detection methods but creates a great opportunity for image analysis techniques based on medical imaging.
In recent years, with the evolution of artificial intelligence (AI) technology, typified by deep learning, medical image AI research has been pushed to an entirely new level. Magnetic resonance imaging (MRI) is one of the important imaging methods for preoperative evaluation of ovarian cancer patients; MRI provides higher sensitivity and more information than CT, with greater mining potential. From multidimensional MRI data, AI can extract abundant feature information, accurately analyze the temporal, spatial, morphological and metabolic characteristics of the tumor, and achieve accurate image analysis and quantitative evaluation. In addition, AI can efficiently identify and segment tumor sub-regions, distinguish regions of different heterogeneity, and provide important references for personalized diagnosis and treatment. Deeply mining the macroscopic spatial heterogeneity information contained in preoperative multi-sequence MRI images of advanced HGSOC, visually marking highly suspicious regions, and building a model that noninvasively predicts the HRD state of advanced HGSOC patients to assist clinical decision making therefore has important theoretical significance and application value.
At present, existing research constructs models based on CT images; because these models are not built on HGSOC tumor sub-region-global features of multi-sequence MRI, they cannot comprehensively describe the spatial heterogeneity inside HGSOC tumors to assist clinical decision making. The amount of information in the features extracted from radiological images is, however, decisive for the HRD state prediction task in HGSOC patients, so existing methods have the following drawbacks:
(1) Multi-sequence MRI images can provide more information than CT images, yet most existing approaches perform the HRD state prediction task by extracting image features from CT images.
(2) Existing methods do not deeply mine the spatial relationships of lesions within the image: they directly extract image features of the lesion region, or even of the whole image, through deep models, without defining sub-regions or mining the spatial relationships between the lesion region and the whole-image region.
(3) Existing HRD state prediction methods for HGSOC patients lack mining of multi-scale image information: they mainly extract features and predict directly from the provided input image, without mining and then fusing image information at different scales.
Therefore, it is highly desirable to develop a prediction method that comprehensively depicts the macroscopic spatial heterogeneity information contained in preoperative multi-sequence MRI images of advanced HGSOC, provides features richer and more sensitive than CT image features, and assists clinical decision making.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model. By extracting features from the tumor lesion and the other regions separately and then fusing them, the fused global lesion feature has a stronger capability to characterize the spatial features of the lesion, which improves the predictive performance of the prediction model.
The aim of the invention can be achieved by the following technical scheme:
the invention provides a method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model, which comprises the following steps:
step (1): segmenting the HGSOC tumor focus, and segmenting a HGSOC tumor primary focus region of the multi-sequence MRI through a semantic segmentation model;
step (2): establishing a tumor focus sub-region, dividing an MRI image into tumor regions phi according to the tumor primary focus segmentation result obtained in the step (1) tumor And other areas phi other And other areas phi other Split into N sub-regions, i.e
Step (3): constructing a multi-scale sub-region pyramid, wherein N sub-regions phi determined in the step (2) are subjected to x Downsampling several times to construct a 3D image pyramid of a multi-scale sub-region
Step (4): sub-region feature extraction, feature extraction is performed on the multi-scale sub-region 3D image pyramid in the step (3) by an image encoder based on 3D Swin Transformer, and the feature extraction is marked as F tumor And
Step (5): sub-region feature fusion, inputting all the sub-region features in the step (4) into a transducer model together for feature fusion, and marking the features after model fusion as global multi-sequence MRI image features F radiomics
Step (6): HRD state prediction for multi-sequence MRI image feature F in step (5) radiomics And calculating to obtain an HRD state prediction result y', and based on comparison with a sample true value, gradually correcting model parameters to obtain a final TransPyramid model, and predicting the HRD state of the advanced high-grade serous ovarian cancer through the final TransPyramid model.
Further, in step (1), the semantic segmentation model is specifically a 3D semantic segmentation model trained on a tumor lesion semantic segmentation dataset using existing semantic segmentation techniques.
Further, in step (2), the other region Φ_other is divided into thirteen sub-regions on the MRI image according to the peritoneal carcinomatosis index evaluation method.
Further, the thirteen sub-regions are specifically: region 0 - central region, region 1 - right upper abdomen, region 2 - upper abdomen, region 3 - left upper abdomen, region 4 - left side abdomen, region 5 - left lower abdomen, region 6 - pelvic cavity, region 7 - right lower abdomen, region 8 - right side abdomen, region 9 - upper jejunum, region 10 - lower jejunum, region 11 - upper ileum and region 12 - lower ileum.
Further, in step (3), the N sub-regions Φ_x are downsampled several times by bilinear interpolation of the original 3D image at different resolutions.
Further, in step (4), the 3D image pyramid samples of the tumor region Φ_tumor and of the other sub-regions are input as 3D tokens to the Swin Transformer encoder to extract subspace features.
Further, step (6) also includes calculating a cross-entropy loss between the prediction and the true label, with the following formula:
L_CE = -[y·log(y') + (1-y)·log(1-y')]
wherein y is the true HRD state label and y' is the HRD state prediction result.
Further, in step (6), after the model update amount is calculated from the overall loss function, steps (4)-(6) are repeated until the model is trained.
Further, in step (6), the HRD state prediction result y' is obtained by a fully connected layer.
Further, the overall loss function is calculated as L_total = L_CE to constrain the model.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. By processing multi-sequence MRI data, the invention provides features that are richer and more sensitive than CT image features and carry more information.
2. The invention constructs a multi-scale sub-region image pyramid to extract multi-scale image information, increasing the amount of feature information.
3. By fusing the features of the image sub-regions into global features, the invention depicts the spatial heterogeneity inside the HGSOC tumor and better characterizes the spatial information of the lesion in the image.
Drawings
Fig. 1 is a flowchart of the method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model.
Detailed Description
The following describes specific embodiments of the invention in detail by way of examples, giving detailed implementations and specific operating procedures based on the technical scheme of the invention; the protection scope of the invention is not limited to the examples below.
The invention is further elucidated with reference to the drawings and the specific embodiments. Features not explicitly described in the technical scheme, such as part models, material names, connection structures and algorithm models, are all regarded as common technical features disclosed in the prior art.
Example 1
The embodiment provides a method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model, which is shown in figure 1 and comprises the following steps:
Step (1): HGSOC tumor lesion segmentation. A 3D semantic segmentation model is trained on a tumor lesion semantic segmentation dataset using an existing semantic segmentation technique (for example, one based on the 3D Swin Transformer), and the trained segmentation model is then used to segment the HGSOC primary tumor lesion region in the multi-sequence MRI.
Step (2): a tumor lesion sub-area is established. Dividing the MRI image into two large regions, namely tumor region phi, according to the tumor primary focus segmentation result obtained in the step (1) tumor And other areas phi other As shown in fig. 1. In other areas phi other In order to further explore the spatial distribution in this region, the abdomen was divided into thirteen subregions on MRI images according to the peritoneal cancer index (Peritoneal carcinomatosis index, PCI) evaluation method, specifically: region 0-central region, region 1-right upper abdomen, region 2-upper abdomen, region 3-left upper abdomen, region 4-left side abdomen, region 5-left lower abdomen, region 6-pelvic cavity, region 7-right lower abdomen, region 8-right side abdomen, region 9-upper jejunum, region 10-lower jejunum, region 11-upper ileum and region 12-lower ileum, i.e. divided into N subregions,
step (3): and constructing a multi-scale subregion pyramid. For the N subregions Φ determined in step (2) x Downsampling is carried out for a plurality of times in a bilinear interpolation mode, namely, the original 3D image is downsampled according to different resolutions, and a 3D image pyramid of a multi-scale sub-region is constructedTo extract multi-scale image features.
Step (4): and extracting subregion features. Feature extraction is performed on the multi-scale sub-region image pyramid in step (3) using a 3D Swin Transformer-based image encoder. Pyramid sampling of 3D images of tumor regions and other sub-regions as 3D token input to Swin transducer encoder to extract subspace features, denoted F tumor And
Step (5): subregion feature fusion. Inputting all the subregion features in the step (4) into a transducer model together for feature fusion, and recording the features after the model fusion as global multi-sequence MRI image features F radiomics
Step (6): HRD state prediction training and prediction. The final multi-sequence MRI image feature F_radiomics of step (5) is passed through a fully connected layer to compute the HRD state prediction result y'. The cross-entropy loss between the prediction and the true label is then calculated:
L_CE = -[y·log(y') + (1-y)·log(1-y')]
wherein y is the true HRD state label and y' is the HRD state prediction result.
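The cross-entropy loss described here is the standard binary cross-entropy; a direct numpy implementation (with the usual clipping to avoid log(0), an implementation detail not stated in the patent):

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross entropy between the true HRD label y and the
    predicted probability y': L = -[y*log(y') + (1-y)*log(1-y')]."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)      # avoid log(0)
    return float(-(y_true * np.log(y_pred)
                   + (1.0 - y_true) * np.log(1.0 - y_pred)))

print(round(bce_loss(1.0, 0.9), 4))  # 0.1054
```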
The overall loss function of the entire model is then calculated to constrain the TransPyramid model. After the model update amount is computed from the loss function and the model is updated, steps (4)-(6) are repeated until the TransPyramid model is trained. In this way the model parameters are gradually corrected based on comparison with the sample ground truth, yielding the final TransPyramid model, through which the HRD state of advanced high-grade serous ovarian cancer is predicted.
By processing multi-sequence MRI data, the method provides features that are richer and more sensitive than CT image features and carry more information; the multi-scale sub-region image pyramid extracts multi-scale image information and increases the amount of feature information; and fusing the features of the image sub-regions into global features depicts the spatial heterogeneity inside the HGSOC tumor and better characterizes the spatial information of the lesion in the image.
The preceding description of the embodiments is provided to enable a person of ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein may be applied to other embodiments without inventive effort. Therefore, the invention is not limited to the embodiments described above; improvements and modifications made by those skilled in the art based on this disclosure without departing from the scope of the invention shall fall within the protection scope of the invention.

Claims (10)

1. The method for predicting the HRD state of the advanced high-grade serous ovarian cancer based on the TransPyramid model is characterized by comprising the following steps of:
step (1): segmenting the HGSOC tumor focus, and segmenting a HGSOC tumor primary focus region of the multi-sequence MRI through a semantic segmentation model;
step (2): establishing a tumor focus sub-region, dividing an MRI image into tumor regions phi according to the tumor primary focus segmentation result obtained in the step (1) tumor And other areas phi other And other areas phi other Split into N sub-regions, i.e
Step (3): constructing a multi-scale sub-region pyramid, wherein N sub-regions phi determined in the step (2) are subjected to x Downsampling several times to construct a 3D image pyramid of a multi-scale sub-region
Step (4): sub-region feature extraction, feature extraction is performed on the multi-scale sub-region 3D image pyramid in the step (3) by an image encoder based on 3D Swin Transformer, and the feature extraction is marked as F tumor And
Step (5): sub-region feature fusion, inputting all the sub-region features in the step (4) into a transducer model together for feature fusion, and marking the features after model fusion as global multi-sequence MRI image features F radiomics
Step (6): HRD state prediction for multi-sequence MRI image feature F in step (5) radiomics And calculating to obtain an HRD state prediction result y', and based on comparison with a sample true value, gradually correcting model parameters to obtain a final TransPyramid model, and predicting the HRD state of the advanced high-grade serous ovarian cancer through the final TransPyramid model.
2. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein in step (1), the semantic segmentation model is specifically a 3D semantic segmentation model trained on a tumor lesion semantic segmentation dataset using an existing semantic segmentation technique.
3. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein in step (2), the other region Φ_other is divided into thirteen sub-regions on the MRI image according to the peritoneal carcinomatosis index evaluation method.
4. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 3, wherein the thirteen sub-regions are specifically: region 0 - central region, region 1 - right upper abdomen, region 2 - upper abdomen, region 3 - left upper abdomen, region 4 - left side abdomen, region 5 - left lower abdomen, region 6 - pelvic cavity, region 7 - right lower abdomen, region 8 - right side abdomen, region 9 - upper jejunum, region 10 - lower jejunum, region 11 - upper ileum and region 12 - lower ileum.
5. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein in step (3), the N sub-regions Φ_x are downsampled several times by bilinear interpolation of the original 3D image at different resolutions.
6. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein in step (4), the 3D image pyramid samples of the tumor region Φ_tumor and of the other sub-regions are input as 3D tokens to the Swin Transformer encoder to extract subspace features.
7. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein step (6) further comprises calculating a cross-entropy loss, with the following formula:
L_CE = -[y·log(y') + (1-y)·log(1-y')]
wherein y is the patient's true HRD state label and y' is the HRD state prediction result.
8. The method for predicting the HRD status of advanced high grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein in step (6), after calculating the model update amount according to the total loss function, the steps (4) - (6) are repeated until the model is trained.
9. The method for predicting the HRD status of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 1, wherein in the step (6), the HRD status prediction result y' is obtained by full-connection layer calculation.
10. The method for predicting the HRD state of advanced high-grade serous ovarian cancer based on a TransPyramid model according to claim 8, wherein the overall loss function is calculated as L_total = L_CE to constrain the model.
CN202311548338.6A 2023-11-20 2023-11-20 Advanced high-grade serous ovarian cancer HRD state prediction method based on TransPyramid model Pending CN117594235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311548338.6A CN117594235A (en) 2023-11-20 2023-11-20 Advanced high-grade serous ovarian cancer HRD state prediction method based on TransPyramid model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311548338.6A CN117594235A (en) 2023-11-20 2023-11-20 Advanced high-grade serous ovarian cancer HRD state prediction method based on TransPyramid model

Publications (1)

Publication Number Publication Date
CN117594235A (en) 2024-02-23

Family

ID=89921227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311548338.6A Pending CN117594235A (en) 2023-11-20 2023-11-20 Advanced high-grade serous ovarian cancer HRD state prediction method based on TransPyramid model

Country Status (1)

Country Link
CN (1) CN117594235A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination