CN115187577A - Method and system for automatically delineating breast cancer clinical target area based on deep learning
- Publication number
- CN115187577A
- Application number
- CN202210937016.XA
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- mask
- target area
- clinical target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0012—Biomedical image inspection
- A61B6/032—Transmission computed tomography [CT]
- A61B6/502—Apparatus or devices for radiation diagnosis specially adapted for diagnosis of breast, i.e. mammography
- A61B6/5217—Devices using data or image processing for extracting a diagnostic or physiological parameter from medical diagnostic data
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06T7/11—Region-based segmentation
- G06V10/764—Image or video recognition using classification, e.g. of video objects
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/82—Image or video recognition using neural networks
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30068—Mammography; Breast
- G06T2207/30096—Tumor; Lesion
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
A method and system for automatically delineating a breast cancer clinical target area based on deep learning are provided. The method comprises the following steps: acquiring CT images of patients and corresponding clinical target area masks, and generating a training sample set from them; classifying each layer of the CT images in the training sample set based on morphological differences to obtain the type of each layer in each CT image; training a classification neural network model on each CT image in the sample set and the type of each of its layers to obtain a CT image layer classification model; training a multichannel neural network model on each CT image in the sample set, the type of each of its layers, and the corresponding mask to obtain a clinical target area segmentation model; inputting the CT image to be delineated into the CT image layer classification model to obtain the type of each of its layers; and inputting the CT image to be delineated, according to the type of each layer, into the clinical target area segmentation model to obtain the clinical target area mask of the CT image.
Description
Technical Field
The invention relates to the technical field of clinical target area delineation, in particular to a breast cancer clinical target area automatic delineation method and system based on deep learning.
Background
Breast cancer is the most common malignancy in women and the leading cause of cancer death in women. The life expectancy of breast cancer patients has increased in recent years, and radiation therapy plays an important role in this. The rapid development of computer and imaging technology over the last 20 years has pushed radiotherapy fully from the two-dimensional era into the three-dimensional era. For patients after breast cancer surgery, microscopic subclinical lesions may remain around the original lesion and in the lymph node drainage areas, so most patients still require adjuvant radiotherapy after the operation; the three-dimensional region covering these subclinical lesions is defined as the Clinical Target Volume (CTV) of radiotherapy. Accurate delineation of the CTV on localization CT images is a core task of the radiation oncologist. A standard localization CT image usually contains dozens of slices, and the physician must delineate the CTV individually on each slice, a process that is time-consuming and labor-intensive. In addition, target areas delineated by physicians of different experience levels vary considerably, which affects treatment outcomes.
In recent decades, with the development of machine learning, computer-aided target delineation methods have begun to be applied clinically. Existing automatic delineation methods fall into two main categories. The first is the atlas-based approach: physicians select template images and target areas in advance to form an atlas; during delineation, the template image closest to the image to be delineated is selected for registration, and the target area of the image to be delineated is then generated through a deformation-field matrix. The second is automatic target delineation based on deep learning: a certain quantity of image and target area data is collected in advance, an automatic segmentation network model is trained on the data, and the trained model is used to delineate target areas automatically.
In breast cancer patients undergoing mastectomy, the CTV includes the chest wall and the adjacent lymph node drainage areas. This target area is large in volume, irregular in shape, and sensitive to patient positioning. Atlas-based methods are limited by the choice of template and the accuracy of registration. Existing deep-learning methods for breast cancer clinical target area delineation mainly collect the contours or masks of the sub-target areas (chest wall and each lymph node drainage area) delineated manually by physicians, train a model for each, and then merge the results into a complete CTV. The disadvantages are that data collection is difficult, delineation efficiency is low, and the continuity of the overall target area is easily neglected.
Disclosure of Invention
In view of the foregoing analysis, embodiments of the present invention aim to provide a method and a system for automatically delineating a breast cancer clinical target area based on deep learning, so as to solve the problems of low efficiency and low accuracy of the existing methods.
On one hand, the embodiment of the invention provides a breast cancer clinical target area automatic delineation method based on deep learning, which comprises the following steps:
acquiring a CT image and a corresponding clinical target area mask of a patient, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
classifying each layer of the CT images in the training sample set based on morphological difference to obtain the type of each layer in each CT image;
training a classification neural network model based on each CT image in the sample set and the type of each layer in the image to obtain a CT image layer classification model;
training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a clinical target area segmentation model;
inputting the CT image to be delineated into the CT image layer classification model to obtain the type of each layer of the CT image to be delineated; and inputting the CT image to be delineated, according to the type of each layer, into the clinical target area segmentation model to obtain the clinical target area mask of the CT image.
As a further improvement of the above technical solution, classifying each layer of the CT images in the training sample set based on morphological differences comprises the following steps:
regarding each CT image, taking the first layer with a corresponding mask as a first type; sequentially traversing each layer, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is larger than the threshold value, the current layer and the previous layer are of different types; otherwise, the current level and the previous level are of the same type.
Further, the size change rate includes a length change rate and a width change rate;
calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image, comprising the following steps:
calculating the minimum bounding rectangle of the mask on each layer of the CT image;
calculating the center offset distance and the size change rate between the current layer and the previous layer using the following formulas:
center offset distance = sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2)
length change rate = (l_i - l_{i-1}) / l_i
width change rate = (w_i - w_{i-1}) / w_i
where (x_i, y_i) is the center point of the minimum bounding rectangle of the mask corresponding to the current layer, (x_{i-1}, y_{i-1}) is the center point of the minimum bounding rectangle of the mask corresponding to the previous layer, l_i and w_i are the length and width of the minimum bounding rectangle of the mask corresponding to the current layer, and l_{i-1} and w_{i-1} are the length and width of the minimum bounding rectangle of the mask corresponding to the previous layer.
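The layer-classification rule above can be sketched in Python. The threshold values and the use of absolute change rates are illustrative assumptions; the patent only specifies a comparison against a threshold.

```python
import numpy as np

def min_bounding_rect(mask_slice):
    """Center (x, y), length l and width w of the minimum bounding
    rectangle around the nonzero pixels of one 2-D mask layer."""
    ys, xs = np.nonzero(mask_slice)
    l = xs.max() - xs.min() + 1
    w = ys.max() - ys.min() + 1
    return (xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0, l, w

def classify_layers(mask_volume, dist_thresh=10.0, rate_thresh=0.2):
    """Assign a type index to every layer with a nonempty mask; a new
    type starts whenever the center offset distance or a size change
    rate exceeds its threshold (threshold values are illustrative)."""
    types, prev, cur_type = {}, None, 0
    for i, layer in enumerate(mask_volume):
        if not layer.any():
            continue
        cx, cy, l, w = min_bounding_rect(layer)
        if prev is not None:
            pcx, pcy, pl, pw = prev
            dist = np.hypot(cx - pcx, cy - pcy)  # center offset distance
            l_rate = abs(l - pl) / l             # length change rate
            w_rate = abs(w - pw) / w             # width change rate
            if dist > dist_thresh or l_rate > rate_thresh or w_rate > rate_thresh:
                cur_type += 1
        types[i] = cur_type
        prev = (cx, cy, l, w)
    return types
```

The first masked layer receives type 0, matching the claim that the first layer with a corresponding mask forms the first type.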
Furthermore, the number of input channels of the multichannel neural network model is the same as the number of layer types;
training a multichannel neural network model based on each CT image in a sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a target region segmentation model, and the method comprises the following steps:
and inputting the layers with the same type in each CT image into the same input channel to train the multi-channel neural network model.
Further, the loss of the multi-channel neural network model is calculated using the following formula:
L = α·L_dice + β·L_ce
where L_dice is the Dice loss and L_ce is the cross-entropy loss, both computed over the C layer types; P_k represents the predicted mask matrix for the k-th layer type, G_k represents the corresponding ground-truth mask matrix, C represents the number of layer types, α and β are weight parameters, and L is the total loss for the sample.
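A minimal NumPy sketch of such a combined loss, assuming L_dice is a soft Dice loss averaged over the C layer-type channels and L_ce is voxel-wise cross-entropy; the patent gives only the weighted sum, so the per-term definitions here are assumptions.

```python
import numpy as np

def combined_loss(probs, onehot, alpha=0.5, beta=0.5, eps=1e-6):
    """L = alpha * L_dice + beta * L_ce for one sample.
    probs: predicted probabilities P_k, shape (C, ...) with one channel
    per layer type; onehot: ground-truth masks G_k of the same shape."""
    spatial = tuple(range(1, probs.ndim))
    inter = (probs * onehot).sum(axis=spatial)                   # overlap per class
    denom = probs.sum(axis=spatial) + onehot.sum(axis=spatial)
    l_dice = (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()  # soft Dice over C classes
    l_ce = -(onehot * np.log(probs + eps)).sum(axis=0).mean()    # voxel-wise cross entropy
    return alpha * l_dice + beta * l_ce
```

A perfect prediction drives both terms toward zero, while swapping the class channels inflates both the Dice and cross-entropy terms.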
Further, generating a training sample set based on the CT images and corresponding clinical target zone masks, comprising:
resampling the CT image and the corresponding clinical target area mask to a standard voxel spacing by linear interpolation;
normalizing the voxel value of the CT image;
and performing image segmentation on the normalized CT image to obtain a body mask, calculating the minimum bounding cuboid of the body mask, extracting the CT image within the minimum bounding cuboid and the corresponding clinical target area mask, and generating the training sample set.
Furthermore, the multi-channel neural network model comprises a multi-channel convolution layer, a down-sampling convolution module and an up-sampling convolution module which are connected in sequence;
the multi-channel convolution layer is used for extracting multi-channel characteristic images from a plurality of input channels by adopting convolution; the down-sampling convolution module is used for extracting features of different levels of the multi-channel feature image, and the up-sampling convolution module is used for up-sampling the extracted features and outputting a segmentation result;
the downsampling convolution module comprises a plurality of downsampling convolution units, and each downsampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a maximum pooling layer which are sequentially connected;
the up-sampling convolution module comprises a plurality of up-sampling convolution units, and each up-sampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a transposition convolution layer which are sequentially connected.
On the other hand, the embodiment of the invention provides an automatic breast cancer clinical target area delineation system based on deep learning, which comprises the following modules:
the sample set generating module is used for acquiring a CT image and a corresponding clinical target area mask of a patient and generating a training sample set based on the CT image and the corresponding clinical target area mask;
the system comprises a layer classification module, a classification module and a classification module, wherein the layer classification module is used for classifying each layer of CT images in a training sample set based on morphological difference to obtain the type of each layer in each CT image;
the classification model training module is used for training a classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
the segmentation model training module is used for training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a clinical target region segmentation model;
the automatic delineation module is used for inputting the CT image to be delineated into the CT image layer classification model to obtain the type of each layer of the CT image to be delineated; and inputting the CT image to be delineated into a clinical target area segmentation model according to the type of each layer of the CT image to be delineated to obtain a clinical target area mask of the CT image.
Further, the slice classification module classifies each slice of the CT images in the training sample set by:
regarding each CT image, taking the first layer with a corresponding mask as a first type; sequentially traversing each layer, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is larger than the threshold value, the current layer and the previous layer are of different types; otherwise, the current level and the previous level are of the same type.
Further, the size change rate includes a length change rate and a width change rate;
calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image, comprising the following steps:
calculating the minimum bounding rectangle of the mask on each layer of the CT image;
calculating the center offset distance and the size change rate between the current layer and the previous layer using the following formulas:
center offset distance = sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2)
length change rate = (l_i - l_{i-1}) / l_i
width change rate = (w_i - w_{i-1}) / w_i
where (x_i, y_i) is the center point of the minimum bounding rectangle of the mask corresponding to the current layer, (x_{i-1}, y_{i-1}) is the center point of the minimum bounding rectangle of the mask corresponding to the previous layer, l_i and w_i are the length and width of the minimum bounding rectangle of the mask corresponding to the current layer, and l_{i-1} and w_{i-1} are the length and width of the minimum bounding rectangle of the mask corresponding to the previous layer.
Compared with the prior art, the invention classifies the layers of the CT images in the training set based on morphological differences. Only the overall breast cancer clinical target area of each CT image needs to be collected when gathering samples, and physicians do not need to delineate sub-target areas, which reduces sample processing time and improves efficiency. The trained classification model can automatically classify the layers of a CT image, and the multichannel neural network model is trained on the layer types, so that morphological characteristics are incorporated and the model's segmentation is more accurate.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a flowchart of a method for automatically delineating a breast cancer clinical target area based on deep learning according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system for automatically delineating a breast cancer clinical target area based on deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic view of a CT slice type according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a model structure of a multi-channel neural network according to an embodiment of the present invention;
fig. 5 is a schematic diagram of variation of dimension index in different methods according to an embodiment of the present invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
The embodiment of the invention discloses a method for automatically delineating a breast cancer clinical target area based on deep learning, which comprises the following steps:
s1, acquiring a CT image and a corresponding clinical target area mask of a patient, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
s2, classifying each layer of the CT images in the training sample set based on morphological differences to obtain the type of each layer in each CT image;
s3, training a classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
s4, training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a clinical target area segmentation model;
s5, inputting the CT image to be outlined into the CT image layer classification model to obtain the type of each layer of the CT image to be outlined; and inputting the CT image to be delineated into a clinical target area segmentation model according to the type of each layer of the CT image to be delineated to obtain a clinical target area mask of the CT image.
Because the layers of the CT images in the training set are classified based on morphological differences, only the overall breast cancer clinical target area of each CT image needs to be collected when gathering samples; physicians do not need to delineate sub-target areas, which reduces sample processing time and improves efficiency. The trained classification model can automatically classify the layers of a CT image, and the multichannel neural network model is trained on the layer types, so that morphological characteristics are incorporated and segmentation is more accurate.
With the trained CT image layer classification model, the layers of a CT image to be delineated can be classified directly, and automatic segmentation based on the layer types is more accurate. The layer classification result can also be fed back to the physician, who can quickly modify the delineation by adjusting the layer types according to experience. This enables interaction between the physician and the automatic delineation model and improves the efficiency, interpretability, and reliability of automatic delineation.
During implementation, a CT image of a patient and the corresponding clinical target area mask are obtained; the clinical target area mask is the overall target area mask for radiotherapy after radical mastectomy. The CT image is a localization CT image, i.e. three-dimensional image data, and the target area mask is three-dimensional image data of the same size as the CT image that labels each corresponding point of the CT image: for example, 0 indicates that the point does not belong to the target area and 1 indicates that it does. A layer of the CT image is a cross-sectional slice of the image (a slice along the head-foot direction).
Specifically, the step S1 of generating a training sample set based on the CT image and the corresponding clinical target mask includes:
S11, resampling the CT image and the corresponding clinical target area mask to a standard voxel by linear interpolation;
Since different CT images may have inconsistent voxel spacing, the CT image and the corresponding clinical target area mask are first resampled to normalize the image voxels, which enables more accurate segmentation. During implementation, linear interpolation is used for resampling, and the standard voxel is the median of the voxel spacings of all CT sample data. The voxel spacing is the distance between two adjacent voxels of the image.
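As a sketch of this resampling step (in practice a library such as SimpleITK would typically be used; the function names here are illustrative, and masks should be resampled with nearest-neighbor interpolation so they stay binary):

```python
import numpy as np
from scipy.ndimage import zoom


def median_spacing(all_spacings):
    """Standard voxel = per-axis median of the voxel spacings of all CT samples."""
    return np.median(np.asarray(all_spacings), axis=0)


def resample_to_spacing(volume, spacing, target_spacing, order=1):
    """Resample a 3-D volume to the target voxel spacing.

    order=1 is linear interpolation (used for the CT image);
    masks should use order=0 (nearest neighbor) to remain binary.
    """
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=order)
```

A CT volume with 2 mm slices resampled to 1 mm spacing doubles in size along that axis while the in-plane axes are unchanged.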
S12, normalizing the voxel value of the CT image;
To facilitate training, the voxel values of the CT image are normalized. First, according to the CT images in the sample set and the corresponding clinical target area masks, the voxel values of the target area are sorted in ascending order. To eliminate the influence of extreme values on normalization, the value at the 0.5th percentile (5 thousandths) of the sorted sequence is taken as the lower limit lim_down, and the value at the 99.5th percentile (995 thousandths) as the upper limit lim_up.
The voxel values of the CT image are then normalized according to the formula: value after normalization = (value before normalization - lim_down) / (lim_up - lim_down).
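The percentile clipping and min-max normalization described above can be sketched as follows (clipping to [lim_down, lim_up] before scaling is an assumption; the source only states how the limits are chosen):

```python
import numpy as np


def normalize_ct(ct, target_mask):
    """Min-max normalize CT voxel values using the 0.5th / 99.5th percentile
    of the target-region voxels as lower / upper limits, to suppress the
    influence of extreme values."""
    vals = np.sort(ct[target_mask > 0].ravel())
    lim_down = vals[int(len(vals) * 0.005)]
    lim_up = vals[min(int(len(vals) * 0.995), len(vals) - 1)]
    ct = np.clip(ct, lim_down, lim_up)  # assumption: clip extremes before scaling
    return (ct - lim_down) / (lim_up - lim_down)
```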
S13, performing image segmentation on the normalized CT image to obtain a body mask, calculating the minimal bounding cuboid of the body mask, extracting the CT image within the cuboid and the corresponding clinical target area mask, and generating the training sample set.
During implementation, a threshold-based segmentation method is used to segment the normalized CT image into a body mask; a maximum connected domain algorithm computes the three-dimensional maximum connected component of the body mask; the minimal bounding cuboid of the body mask is obtained from this component; and the CT image inside the cuboid and the corresponding clinical target area mask are extracted as preprocessed sample data, generating the training sample set.
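A minimal sketch of this step, assuming a simple intensity threshold on the normalized image (the threshold value 0.1 is illustrative, not stated in the source):

```python
import numpy as np
from scipy import ndimage


def crop_to_body(ct, target_mask, threshold=0.1):
    """Threshold-segment the body, keep the largest 3-D connected component,
    and crop both CT and target mask to its minimal bounding cuboid."""
    body = ct > threshold
    labels, n = ndimage.label(body)
    if n > 1:
        # keep only the largest connected component
        sizes = ndimage.sum(body, labels, index=range(1, n + 1))
        body = labels == (int(np.argmax(sizes)) + 1)
    coords = np.argwhere(body)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    box = tuple(slice(a, b) for a, b in zip(lo, hi))
    return ct[box], target_mask[box]
```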
Specifically, the step S2 of classifying each slice of the CT image in the training sample set based on the morphological difference includes:
regarding each CT image, taking the first layer with a corresponding mask as a first type; sequentially traversing each layer, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is larger than the threshold value, the current layer and the previous layer are of different types; otherwise, the current level and the previous level are of the same type.
That is, for a CT image, the slices are traversed sequentially from first to last. The first slice having a corresponding target area mask is of the first type, for example labeled morphology type 1, and the morphological difference between the second slice and the first, i.e., the center offset distance and the size change rate, is calculated. If the center offset distance or the size change rate is greater than its preset threshold, the second slice differs significantly in morphology from the first and is labeled morphology type 2; otherwise the morphological difference is small and the second slice is also labeled morphology type 1. Each subsequent slice is classified and labeled in the same way. As shown in fig. 3, the CT slices are classified into 4 classes according to morphological differences.
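The traversal above can be sketched as follows (the minimal-bounding-rectangle helper and the threshold values are illustrative; the change rates follow the formulas given later, (l_i - l_{i-1})/l_i without an absolute value):

```python
import numpy as np


def bbox_stats(mask2d):
    """Center, length, and width of the minimal bounding rectangle of a 2-D mask."""
    ys, xs = np.nonzero(mask2d)
    center = ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)
    return center, ys.max() - ys.min() + 1, xs.max() - xs.min() + 1


def classify_slices(mask3d, d_thr, l_thr, w_thr):
    """Walk the slices that contain mask voxels; start a new morphology type
    whenever the center offset distance or a size change rate exceeds its
    threshold. Returns {slice index: type} with 1-based types."""
    types, cur, prev = {}, 1, None
    for i in range(mask3d.shape[0]):
        if not mask3d[i].any():
            continue  # slice has no target mask, skip
        (cx, cy), l, w = bbox_stats(mask3d[i])
        if prev is not None:
            (px, py), pl, pw = prev
            dist = np.hypot(cx - px, cy - py)
            if dist > d_thr or (l - pl) / l > l_thr or (w - pw) / w > w_thr:
                cur += 1
        types[i] = cur
        prev = ((cx, cy), l, w)
    return types
```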
By classifying CT slices through the morphological differences between them, the method avoids manual labeling by doctors and saves time. In particular, for some small regions manual delineation has a large standard error that degrades the segmentation effect; classifying by morphological difference avoids such labeling errors in small regions and improves segmentation accuracy.
Specifically, the dimensional change rate includes a length change rate and a width change rate;
calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image, comprising the following steps:
calculating the minimum circumscribed rectangle of the corresponding mask of each layer of the CT image;
calculating the center offset distance and the size change rate between the current level and the previous level by adopting the following formulas:
center offset distance = sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2)

length change rate = (l_i - l_{i-1}) / l_i

width change rate = (w_i - w_{i-1}) / w_i

wherein (x_i, y_i) denotes the center point coordinates of the minimal bounding rectangle of the mask corresponding to the current slice, (x_{i-1}, y_{i-1}) denotes the center point of the minimal bounding rectangle of the mask corresponding to the previous slice, l_i and w_i denote the length and width of the minimal bounding rectangle of the mask corresponding to the current slice, and l_{i-1} and w_{i-1} denote the length and width of the minimal bounding rectangle of the mask corresponding to the previous slice.
Morphological difference is measured through the center offset distance, the length change rate, and the width change rate; if any of these indices exceeds its threshold, the current slice and the previous slice are considered slices with a large morphological difference, otherwise they are considered morphologically similar. In implementation, the thresholds for the center offset distance, the length change rate, and the width change rate can each be obtained by statistics on the corresponding values between adjacent slices that belong to different sub-target regions of the clinical target area. For example, for the center offset threshold, collect the center offset distances between adjacent slices belonging to different sub-target regions, remove abnormal values, compute the mean and standard deviation, and set the threshold to mean + 3 x standard deviation.
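The mean + 3 x standard deviation rule can be sketched as follows (the outlier-removal step is not specified in the source; the IQR rule used here is an assumption):

```python
import numpy as np


def stat_threshold(values, k=3.0):
    """Threshold = mean + k * std of values measured between adjacent slices
    belonging to different sub-target regions, after removing outliers.

    Outlier removal via a 3-IQR fence is an illustrative assumption."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    v = v[(v >= q1 - 3 * iqr) & (v <= q3 + 3 * iqr)]
    return v.mean() + k * v.std()
```

The same function would be applied three times, once each for the offset-distance, length-rate, and width-rate statistics.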
After the slices of each CT image in the sample set have been classified, a classification neural network model is trained based on each CT image and the type of each slice to obtain the CT slice classification model. In implementation, the classification model may use machine learning methods such as random forest, XGBoost, or AdaBoost, or may use a convolutional neural network. This patent uses the encoder part of a UNet network, followed by a classification head built from fully connected layers, trained with a cross-entropy loss.
Through the trained CT slice classification model, the CT image to be delineated can be directly classified by slice type, and automatic segmentation based on slice type is more accurate. Meanwhile, the slice classification result can be fed back to the doctor, who can directly adjust slice types according to experience to quickly modify the target area, realizing interaction between the doctor and the automatic delineation model and improving the efficiency, interpretability, and reliability of automatic delineation.
The type of each slice of the CT image is obtained through morphological differences, and the clinical target area segmentation model is obtained by training the multi-channel neural network model based on each CT image in the sample set, the type of each slice in the CT image, and the mask corresponding to the CT image.
Specifically, the number of input channels of the multi-channel neural network model is the same as the number of layer types;
training a multi-channel neural network model based on each CT image in a sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a target area segmentation model, and the method comprises the following steps:
and inputting the layers with the same type in each CT image into the same input channel to train the multi-channel neural network model.
It should be noted that inputting the slices of the same type in each CT image into the same input channel means that each input channel only keeps the slices of its corresponding type and sets the pixel values of all other slices to 0; that is, the input data of every channel has the same dimensions as the CT image sample. For example, if the first input channel corresponds to slices of morphology type 1 and the second to slices of morphology type 2, then for one CT image sample the input of the first channel keeps the pixel values of the type-1 slices unchanged and sets the pixel values of all other slices to 0, while the input of the second channel keeps the type-2 slices unchanged and zeroes the rest.
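The channel construction described above can be sketched as (the {slice index: type} mapping is assumed to come from the slice classification step):

```python
import numpy as np


def split_by_type(ct, slice_types, n_types):
    """Build one input volume per slice type: each channel keeps the slices
    of its corresponding (1-based) type and zeroes all other slices, so every
    channel has the same dimensions as the original CT volume."""
    channels = np.zeros((n_types,) + ct.shape, dtype=ct.dtype)
    for i, t in slice_types.items():
        channels[t - 1, i] = ct[i]
    return channels
```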
The constructed multichannel neural network model comprises a multichannel convolution layer, a downsampling convolution module and an upsampling convolution module which are connected in sequence;
the multi-channel convolution layer is used for extracting a multi-channel feature image from the input channels by convolution, fusing the multi-channel input data. It convolves the input data of each channel separately and then combines the per-channel convolution results by a weighted sum to obtain the multi-channel feature image; the channel weights are learned during network training. Back propagation gradually assigns different weights to different channels, establishing the correlation between the spatial information of the relevant positions and the morphological information.
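The per-channel convolution followed by a learned weighted sum can be sketched as follows (in the real model the kernels and weights are trainable parameters; here they are passed in fixed for illustration):

```python
import numpy as np
from scipy.ndimage import convolve


def multichannel_fuse(channels, kernels, weights):
    """Convolve each input channel with its own kernel, then combine the
    per-channel feature maps with a weighted sum into one feature image."""
    feats = [convolve(c, k, mode='constant') for c, k in zip(channels, kernels)]
    return sum(w * f for w, f in zip(weights, feats))
```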
The down-sampling convolution module is used for extracting features of different levels of the multi-channel feature image, and the up-sampling convolution module is used for up-sampling the extracted features and outputting a segmentation result;
the downsampling convolution module comprises a plurality of downsampling convolution units, and each downsampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a maximum pooling layer which are connected in sequence;
the up-sampling convolution module comprises a plurality of up-sampling convolution units, and each up-sampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a transposed convolution layer which are sequentially connected.
As shown in fig. 4, the downsampling convolution module includes 4 downsampling convolution units and the upsampling convolution module includes 4 upsampling convolution units, in one-to-one correspondence. Each upsampling convolution unit is connected both to the upsampling convolution unit of the previous layer and to the downsampling convolution unit whose feature channel dimension matches it, and the features extracted by the two are concatenated along the channel dimension (a skip connection), so that more dimensional and positional information is retained and the segmentation is more accurate.
The LeakyReLU and batch normalization layers are adopted to accelerate model convergence and mitigate the vanishing-gradient problem.
The model loss is calculated from the prediction of the multi-channel neural network model, and back propagation adjusts the model parameters according to the loss. When the loss falls below a threshold and stabilizes, training ends and the clinical target area segmentation model is obtained.
Specifically, the loss of the multi-channel neural network model is calculated by adopting the following formula:
L = αL_dice + βL_ce
wherein P_k denotes the mask matrix predicted by the model for the k-th slice type, G_k denotes the gold-standard mask matrix corresponding to the k-th slice type, C denotes the number of slice types, α and β denote weight parameters, and L denotes the total loss for one sample. The overall training loss used to adjust the model parameters is the mean of L over all training samples.
Wherein L_dice denotes the Dice loss function, measuring the similarity between the model-predicted mask and the gold-standard mask, and L_ce denotes the cross-entropy loss function, measuring whether the statistical distribution of the predicted mask is consistent with that of the gold-standard mask. Combining the Dice and cross-entropy loss functions makes the loss calculation more accurate and improves prediction accuracy and training efficiency.
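A minimal single-sample sketch of the combined loss L = αL_dice + βL_ce (the α = β = 0.5 weighting and the soft-Dice form over probabilities are assumptions; the source only gives the weighted sum):

```python
import numpy as np


def dice_ce_loss(pred, gold, alpha=0.5, beta=0.5, eps=1e-6):
    """Total loss L = alpha * L_dice + beta * L_ce for one sample.

    pred: predicted foreground probabilities in [0, 1];
    gold: binary gold-standard mask of the same shape."""
    p, g = pred.ravel().astype(float), gold.ravel().astype(float)
    # soft Dice loss: 1 - 2|P*G| / (|P| + |G|)
    l_dice = 1.0 - (2.0 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)
    # binary cross entropy, with clipping for numerical stability
    p = np.clip(p, eps, 1.0 - eps)
    l_ce = -(g * np.log(p) + (1.0 - g) * np.log(1.0 - p)).mean()
    return alpha * l_dice + beta * l_ce
```

The training loss would then be the mean of this value over all samples in a batch.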
It should be noted that the gold-standard mask matrix G_k corresponding to the k-th slice type is obtained from the clinical target area mask of the CT image by keeping the mask values of the slices of the k-th type and setting the mask values of slices of all other types to 0.
To use the sample data effectively and improve segmentation accuracy, data augmentation can be performed before training the multi-channel neural network model, for example: scaling the CT image and the corresponding mask by the same factor, with the factor ranging from 0.7 to 1.4; adjusting the contrast of the CT image by gamma transformation; rotating the CT image and the corresponding mask by the same angle, for example by 0-30 degrees around each of the three axes of the three-dimensional space; and randomly cropping a region of interest of size [28, 256] from the augmented data and mask, padding with 0 around the data if it is smaller than [28, 256]. If the region of interest does not include the target area, it is re-cropped until it does.
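One pass of the scaling, gamma, and rotation augmentations can be sketched as follows (the gamma range and the assumption that CT values are already normalized to [0, 1] are illustrative; the random-crop step is omitted for brevity):

```python
import numpy as np
from scipy.ndimage import zoom, rotate


def augment(ct, mask, rng):
    """One random augmentation pass: joint scaling in [0.7, 1.4], gamma
    contrast change on the CT only, and joint rotation of 0-30 degrees
    around each spatial axis. Masks use order=0 to stay binary."""
    s = rng.uniform(0.7, 1.4)
    ct, mask = zoom(ct, s, order=1), zoom(mask, s, order=0)
    ct = np.clip(ct, 0.0, 1.0) ** rng.uniform(0.7, 1.5)  # gamma range assumed
    for axes in [(0, 1), (0, 2), (1, 2)]:
        angle = rng.uniform(0.0, 30.0)
        ct = rotate(ct, angle, axes=axes, order=1, reshape=False)
        mask = rotate(mask, angle, axes=axes, order=0, reshape=False)
    return ct, mask
```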
After the clinical target area segmentation model is trained, the CT image to be delineated is input into the CT slice classification model to obtain the type of each slice; according to these types, the CT image is then input into the clinical target area segmentation model to obtain its clinical target area mask. During implementation, if the voxel spacing of the CT image to be delineated is inconsistent with the standard voxel, or its voxel values are outside the normalized range, the image is first preprocessed according to steps S11 to S13: it is normalized, the normalized CT image is segmented to obtain a body mask, the minimal bounding cuboid of the body mask is computed, and the CT image inside that cuboid is extracted as the model input image. The model input image is fed to the classification model for slice classification, and according to the classification result each slice is fed into the corresponding channel of the clinical target area segmentation model to obtain the segmentation result, i.e., the clinical target area mask. The maximum connected component of the mask is kept and isolated points are removed, yielding the clinical target area mask corresponding to the model input image. This mask is then resampled to the voxel size of the original CT image and restored to the original image size according to the body mask, giving the clinical target area mask corresponding to the original CT image to be delineated.
The effect of the clinical target segmentation model of the present invention is illustrated by experimental data below.
As shown in fig. 5, which plots the slice-wise Dice index along the head-foot direction for the two-dimensional UNet model, the three-dimensional UNet model, and the automatic breast cancer clinical target area delineation method of the present invention, the per-slice Dice of the model of the present invention is closer to 1, and the effect on slices toward the head is significantly improved.
The Dice index has a value range of [0, 1] and reflects the degree of overlap between two binary images (usually a prediction mask and a gold-standard mask); the closer to 1, the higher the overlap and the better the effect.
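The Dice index for two binary masks can be computed as:

```python
import numpy as np


def dice(a, b):
    """Dice index: 2|A ∩ B| / (|A| + |B|) for two binary masks, in [0, 1].
    Two empty masks are treated as perfect overlap (1.0)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```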
Table 1 shows the mean DICE, 95% Hausdorff distance, and average surface distance between the mask predictions of different methods and the gold-standard mask (i.e., the original mask delineated by the doctor). Compared with the two-dimensional UNet and the three-dimensional UNet, the automatic delineation method of the present invention yields a larger mean DICE value and smaller 95% Hausdorff distance and average surface distance, with a smaller standard deviation, indicating a better and more stable effect.
TABLE 1 Mean DICE, 95% Hausdorff distance, and average surface distance for different methods
The invention discloses a system for automatically delineating a breast cancer clinical target area based on deep learning, which, as shown in figure 2, comprises the following modules:
the sample set generating module is used for acquiring a CT image and a corresponding clinical target area mask of a patient and generating a training sample set based on the CT image and the corresponding clinical target area mask;
the system comprises a layer classification module, a classification module and a classification module, wherein the layer classification module is used for classifying each layer of CT images in a training sample set based on morphological difference to obtain the type of each layer in each CT image;
the classification model training module is used for training a classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
the segmentation model training module is used for training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a clinical target region segmentation model;
the automatic delineation module is used for inputting the CT image to be delineated into the CT image layer classification model to obtain the type of each layer of the CT image to be delineated; and inputting the CT image to be delineated into a clinical target area segmentation model according to the type of each layer of the CT image to be delineated to obtain a clinical target area mask of the CT image.
Preferably, the slice classification module classifies each slice of the CT image in the training sample set by the following steps:
regarding each CT image, taking the first layer with the corresponding mask as a first type; sequentially traversing each layer, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is larger than the threshold value, the current layer and the previous layer are of different types; otherwise, the current level and the previous level are of the same type.
Preferably, the dimensional change rate includes a length change rate and a width change rate;
calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image, comprising the following steps:
calculating the minimum circumscribed rectangle of the corresponding mask of each layer of the CT image;
calculating the center offset distance and the size change rate between the current level and the previous level by adopting the following formulas:
center offset distance = sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2)

length change rate = (l_i - l_{i-1}) / l_i

width change rate = (w_i - w_{i-1}) / w_i

wherein (x_i, y_i) denotes the center point coordinates of the minimal bounding rectangle of the mask corresponding to the current slice, (x_{i-1}, y_{i-1}) denotes the center point of the minimal bounding rectangle of the mask corresponding to the previous slice, l_i and w_i denote the length and width of the minimal bounding rectangle of the mask corresponding to the current slice, and l_{i-1} and w_{i-1} denote the length and width of the minimal bounding rectangle of the mask corresponding to the previous slice.
The method embodiment and the system embodiment are based on the same principle, and related parts can be referenced mutually, and the same technical effect can be achieved. For a specific implementation process, reference is made to the foregoing embodiments, which are not described herein again.
Those skilled in the art will appreciate that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, such as a magnetic disk, an optical disk, a read-only memory, or a random access memory.
While the invention has been described with reference to specific preferred embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
Claims (10)
1. A breast cancer clinical target area automatic delineation method based on deep learning is characterized by comprising the following steps:
acquiring a CT image and a corresponding clinical target area mask of a patient, and generating a training sample set based on the CT image and the corresponding clinical target area mask;
classifying each layer of the CT images in the training sample set based on morphological differences to obtain the type of each layer in each CT image;
training a classification neural network model based on each CT image in the sample set and the type of each layer in the image to obtain a CT image layer classification model;
training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a clinical target area segmentation model;
inputting the CT image to be sketched into a CT image layer classification model to obtain the type of each layer of the CT image to be sketched; and inputting the CT image to be delineated into a clinical target area segmentation model according to the type of each layer of the CT image to be delineated to obtain a clinical target area mask of the CT image.
2. The method for automatically delineating a breast cancer clinical target area based on deep learning of claim 1, wherein classifying each slice of the CT images in the training sample set based on morphological differences comprises:
regarding each CT image, taking the first layer with a corresponding mask as a first type; sequentially traversing each layer, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image;
if the center offset distance or the size change rate is larger than the threshold value, the current layer and the previous layer are of different types; otherwise, the current level and the previous level are of the same type.
3. The deep learning-based breast cancer clinical target zone automatic delineation method of claim 2 wherein the size change rate comprises a length change rate and a width change rate;
calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image, comprising the following steps:
calculating the minimum external rectangle of each layer of the CT image corresponding to the mask;
calculating the center offset distance and the size change rate between the current level and the previous level by adopting the following formulas:
center offset distance = sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2)

length change rate = (l_i - l_{i-1}) / l_i

width change rate = (w_i - w_{i-1}) / w_i

wherein (x_i, y_i) denotes the center point coordinates of the minimal bounding rectangle of the mask corresponding to the current slice, (x_{i-1}, y_{i-1}) denotes the center point of the minimal bounding rectangle of the mask corresponding to the previous slice, l_i and w_i denote the length and width of the minimal bounding rectangle of the mask corresponding to the current slice, and l_{i-1} and w_{i-1} denote the length and width of the minimal bounding rectangle of the mask corresponding to the previous slice.
4. The method for automatically delineating a breast cancer clinical target area based on deep learning as claimed in claim 1, wherein the number of input channels of the multi-channel neural network model is the same as the number of slice types;
training a multichannel neural network model based on each CT image in a sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a target region segmentation model, and the method comprises the following steps:
and inputting the layers with the same type in each CT image into the same input channel to train the multi-channel neural network model.
5. The method for automatically delineating a breast cancer clinical target zone based on deep learning as claimed in claim 1, wherein the following formula is adopted to calculate the loss of the multichannel neural network model:
L=αL dice +βL ce
wherein P_k denotes the mask matrix predicted by the model for the k-th slice type, G_k denotes the gold-standard mask matrix corresponding to the k-th slice type, C denotes the number of slice types, α and β denote weight parameters, and L denotes the total loss of the sample.
6. The method of claim 1, wherein generating a training sample set based on the CT images and corresponding clinical target masks comprises:
resampling the CT image and the corresponding clinical target area mask to a standard voxel by linear interpolation;
normalizing the voxel value of the CT image;
and performing image segmentation on the normalized CT image to obtain a body mask, calculating the minimal bounding cuboid of the body mask, extracting the CT image within the cuboid and the corresponding clinical target area mask, and generating the training sample set.
7. The method for automatically delineating a breast cancer clinical target area based on deep learning of claim 1, wherein the multichannel neural network model comprises a multichannel convolutional layer, a downsampling convolutional module and an upsampling convolutional module which are connected in sequence;
the multi-channel convolution layer is used for extracting multi-channel characteristic images from a plurality of input channels by adopting convolution; the down-sampling convolution module is used for extracting features of different levels of the multi-channel feature image, and the up-sampling convolution module is used for up-sampling the extracted features and outputting a segmentation result;
the downsampling convolution module comprises a plurality of downsampling convolution units, and each downsampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a maximum pooling layer which are sequentially connected;
the up-sampling convolution module comprises a plurality of up-sampling convolution units, and each up-sampling convolution unit comprises a three-dimensional convolution layer, a LeakyReLU layer, a batch normalization layer and a transposed convolution layer which are sequentially connected.
8. A system for automatically delineating a breast cancer clinical target area based on deep learning, characterized by comprising the following modules:
the sample set generating module is used for acquiring a CT image and a corresponding clinical target area mask of a patient and generating a training sample set based on the CT image and the corresponding clinical target area mask;
the layer classification module is used for classifying each layer of the CT images in the training sample set based on morphological difference to obtain the type of each layer in each CT image;
the classification model training module is used for training a classification neural network model based on the type of each layer in each CT image in the sample set to obtain a CT image layer classification model;
the segmentation model training module is used for training a multichannel neural network model based on each CT image in the sample set, the type of each layer in the CT image and a mask corresponding to the CT image to obtain a clinical target area segmentation model;
the automatic delineation module is used for inputting the CT image to be delineated into the CT image layer classification model to obtain the type of each layer of the CT image to be delineated; and inputting the CT image to be delineated into a clinical target area segmentation model according to the type of each layer of the CT image to be delineated to obtain a clinical target area mask of the CT image.
9. The deep learning-based breast cancer clinical target zone automatic delineation system of claim 8 wherein the slice classification module classifies each slice of the CT images in the training sample set using the following steps:
for each CT image, taking the first layer that has a corresponding mask as the first type; traversing each subsequent layer in order, and calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area masks corresponding to the CT image;
if the center offset distance or the size change rate is greater than its threshold, the current layer and the previous layer belong to different types; otherwise, the current layer and the previous layer belong to the same type.
10. The deep-learning-based breast cancer clinical target area automatic delineation system of claim 9, wherein the size change rate comprises a length change rate and a width change rate;
calculating the center offset distance and the size change rate between the current layer and the previous layer based on the clinical target area mask corresponding to the CT image, comprising the following steps:
calculating the minimum circumscribed rectangle of the corresponding mask of each layer of the CT image;
calculating the center offset distance and the size change rate between the current layer and the previous layer using the following formulas:

center offset distance = sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2)

length change rate = (l_i - l_{i-1}) / l_i

width change rate = (w_i - w_{i-1}) / w_i

wherein (x_i, y_i) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the current layer; (x_{i-1}, y_{i-1}) denotes the center point of the minimum circumscribed rectangle of the mask corresponding to the previous layer; l_i and w_i denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the current layer; and l_{i-1} and w_{i-1} denote the length and width of the minimum circumscribed rectangle of the mask corresponding to the previous layer.
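The per-layer metrics and the type-assignment rule of claims 9 and 10 can be sketched as below. Several assumptions are made for illustration: the bounding rectangle is taken as axis-aligned (the claim's "minimum circumscribed rectangle" could also mean the rotated variant, e.g. OpenCV's `minAreaRect`), the center offset distance is taken as the Euclidean distance between rectangle centers, and the threshold values are placeholders, since the patent does not state them.

```python
import math
import numpy as np

def bounding_rect(mask):
    """Axis-aligned bounding rectangle of a binary mask (an assumption;
    the patent may intend a rotated minimum-area rectangle).
    Returns (cx, cy, length, width)."""
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return cx, cy, x1 - x0 + 1, y1 - y0 + 1

def slice_metrics(cur_mask, prev_mask):
    """Center offset distance and length/width change rates between the
    current layer's mask and the previous layer's mask, per claim 10."""
    cx, cy, l, w = bounding_rect(cur_mask)
    px, py, pl, pw = bounding_rect(prev_mask)
    offset = math.hypot(cx - px, cy - py)      # Euclidean distance (assumed)
    dl = (l - pl) / l                          # (l_i - l_{i-1}) / l_i
    dw = (w - pw) / w                          # (w_i - w_{i-1}) / w_i
    return offset, dl, dw

def classify_slices(masks, offset_thresh=10.0, rate_thresh=0.3):
    """Assign a type index to each layer: a new type starts whenever the
    center offset or either change rate exceeds its threshold (claim 9).
    Threshold values are illustrative placeholders, not from the patent."""
    types = [0]  # first layer with a mask is the first type
    for i in range(1, len(masks)):
        offset, dl, dw = slice_metrics(masks[i], masks[i - 1])
        if offset > offset_thresh or abs(dl) > rate_thresh or abs(dw) > rate_thresh:
            types.append(types[-1] + 1)   # different type from previous layer
        else:
            types.append(types[-1])       # same type as previous layer
    return types
```

For example, two identical masks followed by a mask shifted by two pixels yield types `[0, 0, 1]` when the offset threshold is set below two pixels.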
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210937016.XA CN115187577B (en) | 2022-08-05 | 2022-08-05 | Automatic drawing method and system for breast cancer clinical target area based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210937016.XA CN115187577B (en) | 2022-08-05 | 2022-08-05 | Automatic drawing method and system for breast cancer clinical target area based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115187577A true CN115187577A (en) | 2022-10-14 |
CN115187577B CN115187577B (en) | 2023-05-09 |
Family
ID=83520846
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210937016.XA Active CN115187577B (en) | 2022-08-05 | 2022-08-05 | Automatic drawing method and system for breast cancer clinical target area based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115187577B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115409837A (en) * | 2022-11-01 | 2022-11-29 | 北京大学第三医院(北京大学第三临床医学院) | Endometrial cancer CTV automatic delineation method based on multi-modal CT image |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB202007256D0 (en) * | 2020-05-15 | 2020-07-01 | Univ Oxford Innovation Ltd | Functional imaging features from computed tomography images |
CN111951276A (en) * | 2020-07-28 | 2020-11-17 | 上海联影智能医疗科技有限公司 | Image segmentation method and device, computer equipment and storage medium |
CN112017185A (en) * | 2020-10-30 | 2020-12-01 | 平安科技(深圳)有限公司 | Focus segmentation method, device and storage medium |
CN112132917A (en) * | 2020-08-27 | 2020-12-25 | 盐城工学院 | Intelligent diagnosis method for rectal cancer lymph node metastasis |
CN112862808A (en) * | 2021-03-02 | 2021-05-28 | 王建 | Deep learning-based interpretability identification method of breast cancer ultrasonic image |
CN113288193A (en) * | 2021-07-08 | 2021-08-24 | 广州柏视医疗科技有限公司 | Automatic delineation method of CT image breast cancer clinical target area based on deep learning |
CN113706441A (en) * | 2021-03-15 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Image prediction method based on artificial intelligence, related device and storage medium |
CN114202545A (en) * | 2020-08-27 | 2022-03-18 | 东北大学秦皇岛分校 | UNet + + based low-grade glioma image segmentation method |
CN114722925A (en) * | 2022-03-22 | 2022-07-08 | 北京安德医智科技有限公司 | Lesion classification device and nonvolatile computer readable storage medium |
Non-Patent Citations (3)
Title |
---|
KARL FRITSCHER,ET AL.: "Deep Neural Networks for Fast Segmentation of 3D Medical Images" * |
LIU Yuliang: "Research on automatic delineation methods for nasopharyngeal carcinoma target volumes and organs at risk combining object localization and segmentation" *
WU Ruifan; DAI Haiyang; YANG Tan; JIANG Ying; CAI Zhijie: "Research on intelligent diagnosis of *** metastasis in rectal cancer" *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115409837A (en) * | 2022-11-01 | 2022-11-29 | 北京大学第三医院(北京大学第三临床医学院) | Endometrial cancer CTV automatic delineation method based on multi-modal CT image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | A cascaded dual-pathway residual network for lung nodule segmentation in CT images | |
CN111798462B (en) | Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image | |
Aoyama et al. | Computerized scheme for determination of the likelihood measure of malignancy for pulmonary nodules on low‐dose CT images | |
CN109934235B (en) | Unsupervised abdominal CT sequence image multi-organ simultaneous automatic segmentation method | |
CN109493325A (en) | Tumor Heterogeneity analysis system based on CT images | |
US20100067761A1 (en) | Automatic interpretation of 3-d medicine images of the brain and methods for producing intermediate results | |
WO2007044508A2 (en) | System and method for whole body landmark detection, segmentation and change quantification in digital images | |
CN111008984A (en) | Method and system for automatically drawing contour line of normal organ in medical image | |
KR20210020618A (en) | Abnominal organ automatic segmentation based on deep learning in a medical image | |
Dutande et al. | Deep residual separable convolutional neural network for lung tumor segmentation | |
CN111145185B (en) | Lung substance segmentation method for extracting CT image based on clustering key frame | |
Maitra et al. | Detection of abnormal masses using divide and conquer algorithmin digital mammogram | |
Wu et al. | Coarse-to-fine lung nodule segmentation in CT images with image enhancement and dual-branch network | |
CN111862021A (en) | Deep learning-based automatic head and neck lymph node and drainage area delineation method | |
CN115187577A (en) | Method and system for automatically delineating breast cancer clinical target area based on deep learning | |
CN114445429A | Whole-heart CT segmentation method and device based on multiple labels and multiple decoders | |
CN112348826B (en) | Interactive liver segmentation method based on geodesic distance and V-net | |
Cabrera et al. | Segmentation of axillary and supraclavicular tumoral lymph nodes in PET/CT: A hybrid CNN/component-tree approach | |
Zhang et al. | CCS-net: cascade detection network with the convolution kernel switch block and statistics optimal anchors block in hypopharyngeal cancer MRI | |
CN110533667A (en) | Lung tumors CT images 3D dividing method based on image pyramid fusion | |
WO2021197176A1 (en) | Systems and methods for tumor characterization | |
CN108416792A (en) | Medical computer tomoscan image dividing method based on movable contour model | |
Tan et al. | A segmentation method of lung parenchyma from chest CT images based on dual U-Net | |
Lacerda et al. | A parallel method for anatomical structure segmentation based on 3d seeded region growing | |
KR102332472B1 (en) | Tumor automatic segmentation using deep learning based on dual window setting in a medical image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |