CN114612478B - Female pelvic cavity MRI automatic sketching system based on deep learning


Info

Publication number
CN114612478B
CN114612478B
Authority
CN
China
Prior art keywords
sketching
module
file
image
deep learning
Prior art date
Legal status
Active
Application number
CN202210276547.9A
Other languages
Chinese (zh)
Other versions
CN114612478A (en)
Inventor
郭礼华
肖宇
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN202210276547.9A
Publication of CN114612478A
Application granted
Publication of CN114612478B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning-based automatic sketching (delineation) system for female pelvic MRI, which comprises: an original file preprocessing module, which loads and preprocesses desensitized female pelvic MRI files and generates more readable image files with corresponding image mask files; a sketching model training module, which preprocesses the images, uses a deep learning network model built from convolutional neural networks to automatically encode and learn latent features, and obtains an optimal sketching prediction model through multiple training iterations; an automatic sketching prediction module, which performs fully automatic organ sketching prediction on images that have no corresponding mask; and a prediction result writing module, which writes the sketching results obtained by the automatic sketching prediction module back into the original MRI files. The invention builds an end-to-end medical image segmentation model, provides a spanning feature extraction method and an enhanced feature representation, extracts and integrates image features, greatly improves segmentation accuracy, and improves the efficiency of delineating organs at risk during radiotherapy diagnosis.

Description

Female pelvic cavity MRI automatic sketching system based on deep learning
Technical Field
The invention relates to the technical field of deep learning and biomedicine, in particular to a female pelvic MRI automatic sketching system based on deep learning.
Background
In modern medicine, medical images are routinely used to assist in treating patients. In radiation oncology, a physician delivers a highly conformal dose distribution to a tumor by delineating the target region. This work is time-consuming and laborious, and the subjectivity of different physicians can greatly affect the delineation. Automatically and quickly delineating medical images has therefore become an important research topic.
Cervical cancer is one of the most common malignant tumors in women of childbearing age, with an incidence second only to ovarian cancer, and MRI plays a significant role in its diagnosis. Radiation therapy is one of the main treatments for cervical cancer; it requires the target region and normal tissues to be delineated accurately so that an accurate prescription dose can be delivered.
At present, methods for automatic segmentation of medical images fall into three categories. 1) Traditional image segmentation algorithms segment the image directly from its own characteristics, such as texture and pixel distribution; examples include edge detection, region growing, thresholding, and clustering. 2) Graph-theoretic image segmentation algorithms treat the image as an undirected graph in which pixels correspond to vertices and every pair of adjacent pixels is connected by an edge. Two special vertices S and T are introduced, where S represents the source and T the sink, and the image is finally segmented using the max-flow/min-cut theorem. 3) Deep learning methods learn image features through pre-training a deep network and then predict the segmentation.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing a deep learning-based automatic female pelvic MRI sketching system. The system provides an end-to-end segmentation model framework, overcomes the inefficiency and complexity of traditional manual delineation, and improves the segmentation precision for female pelvic MRI organs.
To achieve the above purpose, the technical scheme provided by the invention is as follows: a deep learning-based female pelvic MRI automatic delineation system, comprising:
an original file preprocessing module, used to load and preprocess desensitized female pelvic MRI files and to generate more readable image files with corresponding image mask files;
a sketching model training module, used to preprocess the images, automatically encode and learn latent features with a deep learning network model built from convolutional neural networks, and obtain an optimal sketching prediction model through multiple training iterations;
an automatic sketching prediction module, used to perform fully automatic organ sketching prediction on images that have no corresponding mask;
and a prediction result writing module, used to write the sketching results obtained by the automatic sketching prediction module back into the original MRI files.
Further, the original file preprocessing module comprises a data loading module, a data cleaning module, and a data conversion module.
The data loading module reads DCM-format (DICOM) medical MRI files from local storage, comprising the original image data files and the sketching (contour) files, and aligns the two according to the standard.
The data cleaning module automatically screens the loaded files case by case and discards cases whose sketching is incomplete.
The data conversion module converts the cleaned data into viewable picture files, comprising image picture files and mask picture files. Each mask picture is converted from the closed regions enclosed by the contours sketched by the physician, with different organs and the background represented by different numbers: 0-background, 1-rectum, 2-anal canal, 3-left femoral head, 4-right femoral head, 5-bladder. The images and masks are saved in png or jpg format.
Further, the sketching model training module performs the following operations:
1) Preprocessing: cropping unnecessary background regions of each picture according to its foreground distribution and the memory capacity of the hardware, scaling the result proportionally, and finally applying data augmentation by random rotation, random cropping, flipping, contrast adjustment, and added white noise;
2) Data division: randomly and evenly dividing the pictures into five parts, numbered A, B, C, D, E; the part numbered A serves as the validation set and the remaining data as the training set;
3) Building the network model: constructing a suitable deep learning network model according to the image characteristics;
4) Model training and selection: inputting the training set into the deep learning network model for training and selecting the most suitable parameters through multiple iterations, yielding a prediction model M_A containing those parameters;
5) Replacing the validation set with B, C, D, E in turn and repeating steps 3) and 4) to obtain prediction models M_B, M_C, M_D, M_E.
Further, building the network model means using a deep learning network to model the latent features of the image and to locate and distinguish the organ regions, and comprises the following steps:
3.1) Encoding process: an encoder encodes each picture multiple times to extract the semantic features contained in the medical image and to obtain feature maps at different resolutions; each encoding operation consists of convolution, batch normalization, nonlinear activation, and pooling;
the feature maps produced in the encoding process satisfy the following relationship:

m_l = F_Encode(x; Θ),  m_l ∈ ℝ^((H/2^l) × (W/2^l) × C_l)

In the above formula, m_l denotes the feature maps at level l, F_Encode(·) denotes the encoding process, Θ denotes the parameters of the encoding process, x denotes the input original picture, ℝ^((H/2^l) × (W/2^l) × C_l) denotes the current coding space, H and W are the height and width of the original input picture, and C_l is the number of feature maps at level l, where:

C_l = 2^l · C_0

with C_0 denoting the initial number of feature maps;
the nonlinear activation is performed by the ReLU function f(x):

f(x) = max(0, x)
3.2) Spanning feature extraction: the spanning feature extraction process extracts and integrates the semantic features from every layer of the encoding process, models the internal connections among features at different resolutions through a global attention module, perceives global context information, and combines it with the features at the corresponding resolutions in the decoding process;
3.3) Decoding process: a decoder decodes the feature maps multiple times to restore and reconstruct the features; each decoding operation consists of convolution, batch normalization, nonlinear activation, and linear interpolation (upsampling);
3.4) Enhanced feature representation: in both the encoding and decoding processes, a channel attention mechanism is added to automatically assign different weights to the feature maps during training and to learn the relationships among channel features.
Further, the automatic sketching prediction module feeds the images that have no corresponding mask into the prediction models M_A, M_B, M_C, M_D, M_E obtained by the sketching model training module, producing five groups of probability matrices, and obtains the final prediction result by weighted integration of these probability matrices. The integration process is expressed as:

P_j = Σ_i W_ij · H_ij

where:

W_ij = D_ij / Σ_i D_ij

In the above formulas, i indexes the models, i ∈ {M_A, M_B, M_C, M_D, M_E}; j is the serial number of the organ to be segmented; H_ij is the layer of model i's probability matrix corresponding to organ j; W_ij is the corresponding weighting coefficient; and D_ij is the Dice coefficient computed for the organ with serial number j under the i-th model. The Dice coefficient measures the agreement between the real labels and the predicted labels and is computed as:

D = 2TP / (2TP + FP + FN)

where D denotes the Dice coefficient, and TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative pixels, respectively.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. An end-to-end medical image segmentation model is constructed with deep learning, improving the efficiency of delineating organs at risk during radiotherapy diagnosis.
2. The traditional way of segmenting medical images with deep learning is improved: a spanning feature extraction method and an enhanced feature representation are provided to extract and integrate image features, greatly improving segmentation accuracy.
3. Combining medicine with computer processing helps promote the joint development of medicine and engineering.
Drawings
FIG. 1 is a schematic diagram of the relationship between the various modules of the system of the present invention.
FIG. 2 is a flow chart of the training and prediction of the system of the present invention.
FIG. 3 is a schematic diagram of a deep learning network used in the system of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
This embodiment provides a deep learning-based female pelvic MRI automatic sketching system, an end-to-end automatic system developed in Python on Linux. The relationships among the system modules are shown in Fig. 1, and the training and prediction flow is shown in Fig. 2. The system comprises the following functional modules:
an original file preprocessing module, used to load and preprocess desensitized female pelvic MRI files and to generate more readable image files with corresponding image mask files;
a sketching model training module, used to preprocess the images, automatically encode and learn latent features with a deep learning network model built from convolutional neural networks, and obtain an optimal sketching prediction model through multiple training iterations;
an automatic sketching prediction module, used to perform fully automatic organ sketching prediction on images that have no corresponding mask;
and a prediction result writing module, used to write the sketching results obtained by the automatic sketching prediction module back into the original MRI files.
The original file preprocessing module comprises a data loading module, a data cleaning module, and a data conversion module.
The data loading module reads DCM-format (DICOM) medical MRI files from local storage, comprising the original image data files and the sketching (contour) files, and aligns the two according to the standard.
The data cleaning module automatically screens the loaded files case by case and discards cases whose sketching is incomplete.
The data conversion module converts the cleaned data into viewable picture files, comprising image picture files and mask picture files. Each mask picture is converted from the closed regions enclosed by the contours sketched by the physician, with different organs and the background represented by different numbers: 0-background, 1-rectum, 2-anal canal, 3-left femoral head, 4-right femoral head, 5-bladder. The images and masks are saved in png or jpg format.
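The patent gives no implementation code for this conversion; the following is a minimal Python sketch under stated assumptions: pydicom and Pillow as libraries, an illustrative ROI-name-to-label mapping, and contours already transformed from patient coordinates to pixel coordinates.

```python
# Sketch: convert one MRI slice and its physician-drawn contours into an
# image png and a label-mask png (0=background ... 5=bladder).
import numpy as np
import pydicom
from PIL import Image, ImageDraw

# Assumed ROI-name-to-label mapping; real structure names vary by site.
ORGAN_LABELS = {"Rectum": 1, "AnalCanal": 2, "FemoralHead_L": 3,
                "FemoralHead_R": 4, "Bladder": 5}

def slice_to_png(dcm_path, out_path):
    """Window a DCM slice's raw pixel data to 8 bits and save it as png."""
    ds = pydicom.dcmread(dcm_path)
    arr = ds.pixel_array.astype(np.float32)
    arr = (arr - arr.min()) / max(float(arr.max() - arr.min()), 1e-6) * 255.0
    Image.fromarray(arr.astype(np.uint8)).save(out_path)

def contours_to_mask(contours, height, width, out_path):
    """Rasterize closed contours (assumed already mapped to pixel
    coordinates; one list of (x, y) polygons per organ) into a mask png."""
    mask = Image.new("L", (width, height), 0)   # 0 = background
    draw = ImageDraw.Draw(mask)
    for organ, polygons in contours.items():
        for poly in polygons:
            draw.polygon(poly, fill=ORGAN_LABELS[organ])
    mask.save(out_path)
```

Rasterizing each physician-drawn contour as a filled polygon directly realizes the "closed region" semantics described above.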
The sketching model training module performs the following operations (a Python sketch of the five-fold procedure in steps 2) to 5) follows the list):
1) Preprocessing: cropping unnecessary background regions of each picture according to its foreground distribution and the memory capacity of the hardware, scaling the result proportionally, and finally applying data augmentation by random rotation, random cropping, flipping, contrast adjustment, and added white noise;
2) Data division: randomly and evenly dividing the pictures into five parts, numbered A, B, C, D, E; the part numbered A serves as the validation set and the remaining data as the training set;
3) Building the network model: constructing a suitable deep learning network model according to the image characteristics;
4) Model training and selection: inputting the training set into the deep learning network model for training and selecting the most suitable parameters through multiple iterations, yielding a prediction model M_A containing those parameters;
5) Replacing the validation set with B, C, D, E in turn and repeating steps 3) and 4) to obtain prediction models M_B, M_C, M_D, M_E.
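As a rough illustration of the five-fold rotation in steps 2) to 5) (not the patent's own code: KFold from scikit-learn is an assumed convenience, and build_model and train_one_fold are hypothetical helpers standing in for steps 3) and 4)):

```python
# Sketch of the five-fold rotation: each of the folds A-E serves once as
# the validation set, yielding prediction models M_A ... M_E.
from sklearn.model_selection import KFold

def train_five_folds(images, masks, build_model, train_one_fold):
    models = {}
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    for name, (tr, va) in zip("ABCDE", kf.split(images)):
        model = build_model()                                      # step 3)
        train_set = [(images[i], masks[i]) for i in tr]
        val_set = [(images[i], masks[i]) for i in va]
        models[name] = train_one_fold(model, train_set, val_set)  # step 4)
    return models   # {"A": M_A, ..., "E": M_E}
```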
The step of building the network model, shown in Fig. 3, uses a deep learning network to model the latent features of the image and to locate and distinguish the organ regions, and comprises the following steps:
3.1) Encoding process: an encoder encodes each picture multiple times to extract the semantic features contained in the medical image and to obtain feature maps at different resolutions; each encoding operation consists of convolution, batch normalization, nonlinear activation, and pooling;
the feature maps produced in the encoding process satisfy the following relationship:

m_l = F_Encode(x; Θ),  m_l ∈ ℝ^((H/2^l) × (W/2^l) × C_l)

In the above formula, m_l denotes the feature maps at level l, F_Encode(·) denotes the encoding process, Θ denotes the parameters of the encoding process, x denotes the input original picture, ℝ^((H/2^l) × (W/2^l) × C_l) denotes the current coding space, H and W are the height and width of the original input picture, and C_l is the number of feature maps at level l, where:

C_l = 2^l · C_0

with C_0 denoting the initial number of feature maps;
the nonlinear activation is performed by the ReLU function f(x):

f(x) = max(0, x)
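A plausible PyTorch sketch of one such encoding operation follows; the 3x3 kernel, factor-2 pooling, four levels, grayscale input, and C_0 = 16 are illustrative assumptions, not values fixed by the patent:

```python
import torch.nn as nn

class EncodeBlock(nn.Module):
    """One encoding operation: convolution -> batch normalization ->
    ReLU (f(x) = max(0, x)) -> max pooling, which halves H and W."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x):
        return self.block(x)

# Channels double each level, C_l = 2**l * C_0 (assumed C_0 = 16 and a
# single-channel MRI input).
C0 = 16
encoder = nn.Sequential(*[
    EncodeBlock(1 if l == 0 else 2 ** (l - 1) * C0, 2 ** l * C0)
    for l in range(4)
])
```

Stacking the blocks halves H and W at each level while doubling the channel count, matching C_l = 2^l · C_0.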
3.2) Spanning feature extraction: the spanning feature extraction process extracts and integrates the semantic features from every layer of the encoding process, models the internal connections among features at different resolutions through a global attention module, perceives global context information, and combines it with the features at the corresponding resolutions in the decoding process;
3.3) Decoding process: a decoder decodes the feature maps multiple times to restore and reconstruct the features; each decoding operation consists of convolution, batch normalization, nonlinear activation, and linear interpolation (upsampling);
3.4) Enhanced feature representation: in both the encoding and decoding processes, a channel attention mechanism is added to automatically assign different weights to the feature maps during training and to learn the relationships among channel features.
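The patent does not fix the internal form of the channel attention in step 3.4); a squeeze-and-excitation-style block is one common realization and is sketched here purely as an assumption:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: learn one weight per feature map and
    rescale the maps by those weights."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel weight in (0, 1)
        )

    def forward(self, x):                    # x: (N, C, H, W)
        n, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pool
        return x * w.view(n, c, 1, 1)        # excite: reweight the maps
```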
The automatic sketching prediction module feeds the images that have no corresponding mask into the prediction models M_A, M_B, M_C, M_D, M_E obtained by the sketching model training module, producing five groups of probability matrices, and obtains the final prediction result by weighted integration of these probability matrices. The integration process is expressed as:

P_j = Σ_i W_ij · H_ij

where:

W_ij = D_ij / Σ_i D_ij

In the above formulas, i indexes the models, i ∈ {M_A, M_B, M_C, M_D, M_E}; j is the serial number of the organ to be segmented; H_ij is the layer of model i's probability matrix corresponding to organ j; W_ij is the corresponding weighting coefficient; and D_ij is the Dice coefficient computed for the organ with serial number j under the i-th model. The Dice coefficient measures the agreement between the real labels and the predicted labels and is computed as:

D = 2TP / (2TP + FP + FN)

where D denotes the Dice coefficient, and TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative pixels, respectively.
The above examples are preferred embodiments of the invention, but the embodiments of the invention are not limited to them; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the invention is an equivalent replacement and falls within the protection scope of the invention.

Claims (3)

1. A female pelvic MRI automatic delineation system based on deep learning, characterized by comprising:
an original file preprocessing module, used to load and preprocess desensitized female pelvic MRI files and to generate more readable image files with corresponding image mask files;
a sketching model training module, used to preprocess the images, automatically encode and learn latent features with a deep learning network model built from convolutional neural networks, and obtain an optimal sketching prediction model through multiple training iterations;
an automatic sketching prediction module, used to perform fully automatic organ sketching prediction on images that have no corresponding mask;
and a prediction result writing module, which writes the sketching results obtained by the automatic sketching prediction module back into the original MRI files;
wherein the sketching model training module performs the following operations:
1) Preprocessing: cropping unnecessary background regions of each picture according to its foreground distribution and the memory capacity of the hardware, scaling the result proportionally, and finally applying data augmentation by random rotation, random cropping, flipping, contrast adjustment, and added white noise;
2) Data division: randomly and evenly dividing the pictures into five parts, numbered A, B, C, D, E; the part numbered A serves as the validation set and the remaining data as the training set;
3) Building the network model: constructing a suitable deep learning network model according to the image characteristics;
4) Model training and selection: inputting the training set into the deep learning network model for training and selecting the most suitable parameters through multiple iterations, yielding a prediction model M_A containing those parameters;
5) Replacing the validation set with B, C, D, E in turn and repeating steps 3) and 4) to obtain prediction models M_B, M_C, M_D, M_E;
wherein building the network model means using a deep learning network to model the latent features of the image and to locate and distinguish the organ regions, and comprises the following steps:
3.1) Encoding process: an encoder encodes each picture multiple times to extract the semantic features contained in the medical image and to obtain feature maps at different resolutions; each encoding operation consists of convolution, batch normalization, nonlinear activation, and pooling;
the feature maps produced in the encoding process satisfy the following relationship:

m_l = F_Encode(x; Θ),  m_l ∈ ℝ^((H/2^l) × (W/2^l) × C_l)

In the above formula, m_l denotes the feature maps at level l, F_Encode(·) denotes the encoding process, Θ denotes the parameters of the encoding process, x denotes the input original picture, ℝ^((H/2^l) × (W/2^l) × C_l) denotes the current coding space, H and W are the height and width of the original input picture, and C_l is the number of feature maps at level l, where:

C_l = 2^l · C_0

with C_0 denoting the initial number of feature maps;
the nonlinear activation is performed by the ReLU function f(x):

f(x) = max(0, x)

3.2) Spanning feature extraction: the spanning feature extraction process extracts and integrates the semantic features from every layer of the encoding process, models the internal connections among features at different resolutions through a global attention module, perceives global context information, and combines it with the features at the corresponding resolutions in the decoding process;
3.3) Decoding process: a decoder decodes the feature maps multiple times to restore and reconstruct the features; each decoding operation consists of convolution, batch normalization, nonlinear activation, and linear interpolation (upsampling);
3.4) Enhanced feature representation: in both the encoding and decoding processes, a channel attention mechanism is added to automatically assign different weights to the feature maps during training and to learn the relationships among channel features.
2. The deep learning-based female pelvic MRI automatic delineation system of claim 1, wherein the original file preprocessing module comprises a data loading module, a data cleaning module, and a data conversion module;
the data loading module reads DCM-format (DICOM) medical MRI files from local storage, comprising the original image data files and the sketching (contour) files, and aligns the two according to the standard;
the data cleaning module automatically screens the loaded files case by case and discards cases whose sketching is incomplete;
the data conversion module converts the cleaned data into viewable picture files, comprising image picture files and mask picture files; each mask picture is converted from the closed regions enclosed by the contours sketched by the physician, with different organs and the background represented by different numbers: 0-background, 1-rectum, 2-anal canal, 3-left femoral head, 4-right femoral head, 5-bladder; the images and masks are saved in png or jpg format.
3. The deep learning-based female pelvic MRI automatic delineation system of claim 1, wherein the automatic sketching prediction module feeds the images that have no corresponding mask into the prediction models M_A, M_B, M_C, M_D, M_E obtained by the sketching model training module, producing five groups of probability matrices, and obtains the final prediction result by weighted integration of these probability matrices; the integration process is expressed as:

P_j = Σ_i W_ij · H_ij

where:

W_ij = D_ij / Σ_i D_ij

In the above formulas, i indexes the models, i ∈ {M_A, M_B, M_C, M_D, M_E}; j is the serial number of the organ to be segmented; H_ij is the layer of model i's probability matrix corresponding to organ j; W_ij is the corresponding weighting coefficient; and D_ij is the Dice coefficient computed for the organ with serial number j under the i-th model; the Dice coefficient measures the agreement between the real labels and the predicted labels and is computed as:

D = 2TP / (2TP + FP + FN)

where D denotes the Dice coefficient, and TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative pixels, respectively.
CN202210276547.9A 2022-03-21 2022-03-21 Female pelvic cavity MRI automatic sketching system based on deep learning Active CN114612478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210276547.9A CN114612478B (en) 2022-03-21 2022-03-21 Female pelvic cavity MRI automatic sketching system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210276547.9A CN114612478B (en) 2022-03-21 2022-03-21 Female pelvic cavity MRI automatic sketching system based on deep learning

Publications (2)

Publication Number Publication Date
CN114612478A CN114612478A (en) 2022-06-10
CN114612478B true CN114612478B (en) 2024-05-10

Family

ID=81864562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210276547.9A Active CN114612478B (en) 2022-03-21 2022-03-21 Female pelvic cavity MRI automatic sketching system based on deep learning

Country Status (1)

Country Link
CN (1) CN114612478B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452614B (en) * 2023-06-15 2023-09-01 北京大学 Ultrasonic image segmentation method and system based on deep learning

Citations (3)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method
CN111047594A (en) * 2019-11-06 2020-04-21 安徽医科大学 Tumor MRI weak supervised learning analysis modeling method and model thereof
CN114170193A (en) * 2021-12-10 2022-03-11 中国科学技术大学 Automatic nasopharyngeal carcinoma target area delineating method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of deep convolutional neural networks in radiotherapy planning image segmentation; Deng Jincheng et al.; Chinese Journal of Medical Physics; 2018-06-25 (No. 06); full text *

Also Published As

Publication number Publication date
CN114612478A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN110310287B (en) Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN113674281B (en) Liver CT automatic segmentation method based on deep shape learning
CN110321920A (en) Image classification method, device, computer readable storage medium and computer equipment
CN110889853A (en) Tumor segmentation method based on residual error-attention deep neural network
CN111640120A (en) Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
CN112418329A (en) Cervical OCT image classification method and system based on multi-scale textural feature fusion
CN113674253A (en) Rectal cancer CT image automatic segmentation method based on U-transducer
CN110689525A (en) Method and device for recognizing lymph nodes based on neural network
CN110008992B (en) Deep learning method for prostate cancer auxiliary diagnosis
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN111242956A (en) U-Net-based ultrasonic fetal heart and fetal lung deep learning joint segmentation method
Skeika et al. Convolutional neural network to detect and measure fetal skull circumference in ultrasound imaging
CN112785609B (en) CBCT tooth segmentation method based on deep learning
CN115018809A (en) Target area segmentation and identification method and system of CT image
CN112634265B (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
CN114612478B (en) Female pelvic cavity MRI automatic sketching system based on deep learning
CN114511554A (en) Automatic nasopharyngeal carcinoma target area delineating method and system based on deep learning
CN117058307A (en) Method, system, equipment and storage medium for generating heart three-dimensional nuclear magnetic resonance image
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
CN113035334B (en) Automatic delineation method and device for radiotherapy target area of nasal cavity NKT cell lymphoma
Vinod et al. Ensemble Technique for Brain Tumour Patient Survival Prediction
CN114049357A (en) Breast ultrasonic segmentation method based on feature set association degree
CN114387282A (en) Accurate automatic segmentation method and system for medical image organs
CN114331996A (en) Medical image classification method and system based on self-coding decoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant