CN112790782A - Automatic delineation method of pelvic tumor CTV based on deep learning - Google Patents

Automatic delineation method of pelvic tumor CTV based on deep learning

Info

Publication number
CN112790782A
Authority
CN
China
Prior art keywords
ctv
partition
drainage
clinical
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110142618.1A
Other languages
Chinese (zh)
Other versions
CN112790782B (en)
Inventor
刘守亮
魏军
沈烁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Baishi Data Technology Co ltd
Guangzhou Boshi Medical Technology Co ltd
Original Assignee
Guangzhou Baishi Data Technology Co ltd
Guangzhou Boshi Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Baishi Data Technology Co ltd, Guangzhou Boshi Medical Technology Co ltd filed Critical Guangzhou Baishi Data Technology Co ltd
Priority to CN202110142618.1A
Publication of CN112790782A
Application granted
Publication of CN112790782B
Legal status: Active (current)
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30028 Colon; Small intestine
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Pulmonology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a deep-learning-based automatic pelvic tumor CTV delineation method, applicable to pelvic lymphatic drainage areas, cervical cancer CTV and rectal cancer CTV, comprising the following steps. Step S1: acquiring CT image data and clinician-annotated drainage-area partitions, and preprocessing the image data. Step S2: constructing a deep learning segmentation model for the lymphatic drainage regions. Step S3: inputting the CT image data and the clinician-annotated drainage partition images processed in steps S1 and S2 into a network, training the network, and obtaining the partition contours. Step S4: automatically generating a cervical cancer CTV contour from the partition contours. Step S5: automatically generating a rectal cancer CTV contour from the partition contours. The method can assist physicians in delineating cervical cancer CTV and rectal cancer CTV according to the patient's disease and staging, and the introduced dense network effectively improves recognition of the pelvic lymphatic drainage areas.

Description

Automatic delineation method of pelvic tumor CTV based on deep learning
Technical Field
The invention relates to the technical fields of medical imaging and computing, and in particular to a deep-learning-network-based method for automatically delineating pelvic lymphatic drainage areas, cervical cancer CTV and rectal cancer CTV in CT images.
Background
In the field of radiotherapy, accurate delineation of the cervical cancer CTV (clinical target volume) and the rectal cancer CTV has important clinical significance. When delineating cervical cancer CTV and rectal cancer CTV, clinicians must refer to the pelvic lymphatic drainage areas while taking the patient's clinical staging and treatment regimen into account. In addition, when delineating the target volume, the clinician also needs to delineate in combination with the pelvic lymphatic drainage areas. Automatic delineation of the pelvic lymphatic drainage areas, the cervical cancer CTV and the rectal cancer CTV therefore has very important clinical significance.
At present, the pelvic lymphatic drainage areas, the cervical cancer CTV and the rectal cancer CTV are delineated entirely by hand by clinicians. This conventional manual approach has the following disadvantages. First, manual delineation is very time-consuming, often taking several hours per patient. Second, the delineation is prone to errors that are difficult to detect and can easily lead to medical accidents. Third, the quality of the delineation depends on the clinician's experience. Fourth, different physicians interpret the clinical staging and treatment plan of the same patient differently, resulting in inconsistent delineation styles. Automatic delineation of the pelvic lymphatic drainage areas, the cervical cancer CTV and the rectal cancer CTV can effectively avoid these problems and assist physicians in delineating quickly, concisely, accurately and with high consistency.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to provide a deep-learning-based automatic pelvic tumor CTV delineation method that can assist physicians in delineating cervical cancer CTV and rectal cancer CTV according to the patient's disease and staging, with an introduced dense network that effectively improves recognition of the pelvic lymphatic drainage areas.
To achieve this aim, the invention provides a deep-learning-based automatic pelvic tumor CTV delineation method, applicable to pelvic lymphatic drainage areas, cervical cancer CTV and rectal cancer CTV, comprising the following steps. Step S1: acquiring CT image data and clinician-annotated drainage-area partitions, and preprocessing the image data. Step S2: constructing a deep learning segmentation model for the lymphatic drainage regions. Step S3: inputting the CT image data and the clinician-annotated drainage partition images processed in steps S1 and S2 into a network, training the network, and obtaining the partition contours. Step S4: automatically generating a cervical cancer CTV contour from the partition contours. Step S5: automatically generating a rectal cancer CTV contour from the partition contours.
In a preferred embodiment, the preprocessing of the image data in step S1 comprises the following steps. Step S11: acquiring a large number of multi-modal CT three-dimensional images and the corresponding clinician-annotated drainage-area partition contour maps. Step S12: extracting the body contour from the image and cropping the CT image to the bounding size of the body contour. Step S13: normalizing the CT pixel values to the abdominal window. Step S14: resampling the image to a fixed size and normalizing it. Step S15: performing data augmentation, including random flipping, random rotation, random warping, random noise and random affine transformation.
In a preferred embodiment, step S2 comprises the following steps. Step S21: constructing the lymphatic drainage area segmentation network model by first constructing its basic block sub-module, wherein an encoding block consists of a residual module and a pooling layer and has three inputs and three outputs; the inputs are the down-sampled feature from the parent node in the layer above, the feature of the sibling node in the same layer and the up-sampled feature from the child node in the layer below, and the block outputs an up-sampled feature, a same-layer node feature and a down-sampled feature. Step S22: constructing the pelvic drainage area recognition model framework, in which three paths composed of basic blocks form a down-sampling path, a middle-layer path and an up-sampling path; the basic blocks of the down-sampling path take input only from the parent node's feature map and output only the up-sampled feature, while shortcut connections between the down-sampling path and the up-sampling path accelerate network convergence. Step S23: introducing dilated convolution into the first-layer basic blocks of the network model, thereby enlarging the network's receptive field and enhancing its ability to recognize large drainage areas.
In a preferred embodiment, step S3 comprises the following steps. Step S31: inputting a large amount of preprocessed multi-modal CT patient data, applying data augmentation to prevent overfitting, and dividing the annotated drainage-area partitions into two groups, the first group being the clinical partitions and the second group being partitions used only to assist the generation of rectal cancer CTV and cervical cancer CTV. Step S32: randomly grouping the augmented images, feeding each group into the network, training the network until the evaluation metric no longer improves, and saving the model. Step S33: inputting new data into the saved network model and outputting probability maps of the clinical lymphatic partitions and the auxiliary lymphatic partitions. Step S34: post-processing the partitions.
In a preferred embodiment, step S4 comprises the following steps. Step S41: converting the clinical drainage partitions and the auxiliary partitions of the partition contours into binary images. Step S42: retaining or removing some of the clinical drainage partitions and auxiliary partitions according to the case staging and the characteristics of cervical cancer. Step S43: fusing the bilaterally symmetric clinical drainage partitions into two large regions using a conventional method. Step S44: fusing the cervical cancer auxiliary partition with the two separately fused large regions using a conventional method to generate the cervical cancer CTV.
In a preferred embodiment, step S5 comprises the following steps. Step S51: converting the clinical drainage partitions and the auxiliary partitions of the partition contours into binary images. Step S52: retaining or removing some of the clinical drainage partitions and auxiliary partitions according to the case staging and the characteristics of rectal cancer. Step S53: dividing the clinical partitions into directly fusible and indirectly fusible clinical partitions, and fusing the rectal cancer auxiliary partition with the directly fusible clinical partitions to generate one large region. Step S54: fusing the indirectly fusible clinical partitions with the large region generated by the fusion into a single region using a conventional method to generate the rectal cancer CTV.
Compared with the prior art, the deep-learning-based automatic pelvic tumor CTV delineation method has the following beneficial effects: by introducing a dense residual network structure, the network can extract multi-scale information simultaneously, so it localizes and segments small drainage areas better; by introducing dilated convolution, the receptive field is enlarged, so large drainage areas can also be segmented well; and the network generates the clinical lymphatic partitions and the auxiliary partitions at the same time, which makes controllable automatic CTV generation possible. The pelvic lymphatic drainage area segmentation model helps physicians delineate target volumes and lymph nodes accurately, while the cervical cancer CTV and the rectal cancer CTV are generated automatically according to the patient's stage; this greatly improves the physicians' delineation efficiency and the overall delineation workflow.
Drawings
FIG. 1 is a schematic flow diagram of an automatic delineation method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the deep learning network structure of an automatic delineation method according to an embodiment of the invention.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
As shown in FIG. 1 and FIG. 2, a deep-learning-based automatic pelvic tumor CTV delineation method according to a preferred embodiment of the invention is applicable to pelvic lymphatic drainage areas, cervical cancer CTV and rectal cancer CTV, and comprises the following steps:
Step S1: acquiring CT image data and clinician-annotated drainage-area partitions, and preprocessing the image data. The preprocessing in step S1 comprises the following steps. Step S11: acquiring a large number of multi-modal CT three-dimensional images and the corresponding clinician-annotated drainage-area partition contour maps. Step S12: extracting the body contour from the image and cropping the CT image to the bounding size of the body contour. Step S13: normalizing the two-dimensional CT image pixel values to the abdominal window.
The normalization is computed as follows, where x is the CT pixel matrix, c is the window level and w is the window width:
lower = c - w / 2                      # lower bound of the abdominal window
higher = c + w / 2                     # upper bound of the abdominal window
x[x < lower] = lower                   # clip values below the window to its lower bound
x[x > higher] = higher                 # clip values above the window to its upper bound
x = (x - lower) / (higher - lower)     # rescale the windowed values to [0, 1]
Step S14: resampling the image to a fixed size and normalizing it. Step S15: performing data augmentation, including random flipping, random rotation, random warping, random noise and random affine transformation.
Step S2: constructing the deep learning segmentation model for the lymphatic drainage regions. Step S2 comprises the following steps:
Step S21: constructing the lymphatic drainage area segmentation network model. First, the basic block sub-module is constructed: an encoding block consists of a residual module and a pooling layer and has three inputs and three outputs; the inputs are the down-sampled feature from the parent node in the layer above, the feature of the sibling node in the same layer and the up-sampled feature from the child node in the layer below, and the block outputs an up-sampled feature, a same-layer node feature and a down-sampled feature. Step S22: constructing the pelvic drainage area recognition model framework, in which three paths composed of basic blocks form a down-sampling path, a middle-layer path and an up-sampling path; the basic blocks of the down-sampling path take input only from the parent node's feature map and output only the up-sampled feature, while shortcut connections between the down-sampling path and the up-sampling path accelerate network convergence. Step S23: introducing dilated convolution into the first-layer basic blocks of the network model, thereby enlarging the network's receptive field and enhancing its ability to recognize large drainage areas.
Step S3: inputting the CT image data and the clinician-annotated drainage partition images obtained in steps S1 and S2 into the network, training the network, and obtaining the partition contours. Step S3 comprises the following steps:
Step S31: inputting a large amount of preprocessed multi-modal CT patient data, applying data augmentation to prevent overfitting, and dividing the annotated drainage-area partitions into two groups: the first group comprises the clinical partitions, and the second group comprises partitions used only to assist the generation of rectal cancer CTV and cervical cancer CTV. Step S32: randomly grouping the augmented images and feeding each group into the network for training; the training error between the prediction and the physician annotation is computed to guide the network in learning the supervision information of the lymphatic drainage areas, and training continues until the evaluation error no longer decreases, at which point the model is saved.
The training error is computed between the network prediction and the physician annotation as follows:
[The training-error formula appears only as an image (Figure BDA0002929517750000061) in the original publication.]
Here N denotes the number of training samples, with the ith pixel of the prediction image and the ith pixel of the gold-standard (physician-annotated) image entering the error term.
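Since the formula itself is reproduced only as an image in the original publication, the following soft Dice loss is given purely as an assumed example of a training error of this form (per-pixel overlap between prediction and gold-standard label), not as the patented formula:

import torch

def soft_dice_loss(pred, gold, eps=1e-6):
    # pred: predicted probability maps, gold: gold-standard binary masks,
    # both of shape (N, C, H, W); returns one minus the mean Dice overlap.
    dims = (0, 2, 3)
    intersection = (pred * gold).sum(dim=dims)
    union = pred.sum(dim=dims) + gold.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()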
Step S33: inputting new data into the saved network model and outputting the probability maps of the clinical lymphatic partitions and the auxiliary lymphatic partitions. Step S34: post-processing the partitions.
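The post-processing of step S34 is not detailed in the text; a typical cleanup, given here as an assumption rather than the patented procedure, converts the probability maps into per-partition binary masks and keeps only the largest connected component of each:

import numpy as np
from scipy import ndimage

def postprocess_partitions(prob_maps):
    # prob_maps: (C, D, H, W) softmax probabilities over C classes (class 0 = background).
    labels = np.argmax(prob_maps, axis=0)
    masks = []
    for c in range(1, prob_maps.shape[0]):
        mask = labels == c
        comp, n = ndimage.label(mask)
        if n > 1:
            sizes = np.bincount(comp.ravel())[1:]      # sizes of components 1..n
            mask = comp == (np.argmax(sizes) + 1)      # keep the largest component
        masks.append(mask.astype(np.uint8))
    return masks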
Step S4: automatically generating the cervical cancer CTV contour from the partition contours. Step S4 comprises the following steps. Step S41: converting the clinical drainage partitions and the auxiliary partitions of the partition contours into binary images. Step S42: retaining or removing certain clinical drainage partitions and auxiliary partitions according to the case staging and the characteristics of cervical cancer. Step S43: fusing the bilaterally symmetric clinical drainage partitions into two large regions using a conventional method. Step S44: fusing the cervical cancer auxiliary partition with the two large regions generated above using a conventional method to produce the cervical cancer CTV.
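The "conventional method" used for fusion in steps S43 and S44 is not spelled out; a plausible implementation, given purely as an assumption, is a voxel-wise union of the binary masks followed by morphological closing to bridge small gaps between adjacent partitions:

import numpy as np
from scipy import ndimage

def fuse_masks(masks, closing_iterations=3):
    # masks: binary numpy arrays of identical shape; returns their fused union.
    fused = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        fused |= m.astype(bool)
    fused = ndimage.binary_closing(fused, iterations=closing_iterations)
    return fused.astype(np.uint8)

Under this sketch, the left-sided and right-sided clinical drainage partitions would each be fused into one large region (step S43), and those two regions would then be fused with the cervical cancer auxiliary partition to give the cervical cancer CTV (step S44).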
Step S5: automatically generating the rectal cancer CTV contour from the partition contours. Step S5 comprises the following steps. Step S51: converting the clinical drainage partitions and the auxiliary partitions of the partition contours into binary images. Step S52: retaining or removing certain clinical drainage partitions and auxiliary partitions according to the case staging and the characteristics of rectal cancer. Step S53: dividing the clinical partitions into directly fusible and indirectly fusible clinical partitions, and fusing the rectal cancer auxiliary partition with the directly fusible clinical partitions into one large region. Step S54: fusing the indirectly fusible clinical partitions with the large region generated above into a single region using a conventional method, yielding the rectal cancer CTV.
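A corresponding sketch for the rectal cancer CTV, reusing the illustrative fuse_masks helper above (the partition groupings and function names remain assumptions):

def generate_rectal_ctv(rectal_auxiliary_mask, directly_fusible_masks, indirectly_fusible_masks):
    # Step S53: fuse the rectal cancer auxiliary partition with the directly fusible clinical partitions.
    direct_region = fuse_masks([rectal_auxiliary_mask] + list(directly_fusible_masks))
    # Step S54: fuse the result with the indirectly fusible clinical partitions to obtain the rectal cancer CTV.
    return fuse_masks([direct_region] + list(indirectly_fusible_masks))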
In conclusion, the deep-learning-based automatic pelvic tumor CTV delineation method has the following advantages: by introducing a dense residual network structure, the network can extract multi-scale information simultaneously, so it localizes and segments small drainage areas better; by introducing dilated convolution, the receptive field is enlarged, so large drainage areas can also be segmented well; and the network generates the clinical lymphatic partitions and the auxiliary partitions at the same time, which makes controllable automatic CTV generation possible. The pelvic lymphatic drainage area segmentation model helps physicians delineate target volumes and lymph nodes accurately, while the cervical cancer CTV and the rectal cancer CTV are generated automatically according to the patient's stage, greatly improving delineation efficiency and the overall delineation workflow. The method also assists physicians in delineating the cervical cancer CTV and the rectal cancer CTV according to the patient's disease and staging, and the introduced dense network effectively improves recognition of the pelvic lymphatic drainage areas.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (6)

1. A deep-learning-based automatic pelvic tumor CTV delineation method, applicable to pelvic lymphatic drainage areas, cervical cancer CTV and rectal cancer CTV in CT images, characterized by comprising the following steps:
step S1: acquiring CT image data and clinician-annotated drainage-area partitions, and preprocessing the image data;
step S2: constructing a deep learning segmentation model for the lymphatic drainage regions;
step S3: inputting the CT image data and the clinician-annotated drainage partition images obtained through steps S1 and S2 into a network, training the network, and obtaining partition contours;
step S4: automatically generating a cervical cancer CTV contour from the partition contours; and
step S5: automatically generating a rectal cancer CTV contour from the partition contours.
2. The deep-learning-based automatic pelvic tumor CTV delineation method according to claim 1, wherein the preprocessing of the image data in step S1 comprises the following steps:
step S11: acquiring a large number of multi-modal CT three-dimensional images and the corresponding clinician-annotated drainage-area partition contour maps;
step S12: extracting the body contour from the image and cropping the CT image to the bounding size of the body contour;
step S13: normalizing the CT pixel values to the abdominal window;
step S14: resampling the image to a fixed size and normalizing it; and
step S15: performing data augmentation, including random flipping, random rotation, random warping, random noise and random affine transformation.
3. The deep-learning-based automatic pelvic tumor CTV delineation method according to claim 1, wherein step S2 comprises the following steps:
step S21: constructing a lymphatic drainage area segmentation network model by first constructing its basic block sub-module, wherein an encoding block consists of a residual module and a pooling layer and has three inputs and three outputs, the inputs being the down-sampled feature from the parent node in the layer above, the feature of the sibling node in the same layer and the up-sampled feature from the child node in the layer below, and the outputs being an up-sampled feature, a same-layer node feature and a down-sampled feature;
step S22: constructing a pelvic drainage area recognition model framework, wherein three paths composed of basic blocks form a down-sampling path, a middle-layer path and an up-sampling path; the basic blocks of the down-sampling path take input only from the parent node's feature map and output only the up-sampled feature, and shortcut connections between the down-sampling path and the up-sampling path accelerate network convergence; and
step S23: introducing dilated convolution into the first-layer basic blocks of the network model, thereby enlarging the network's receptive field and enhancing its ability to recognize large drainage areas.
4. The deep-learning-based automatic pelvic tumor CTV delineation method according to claim 1, wherein step S3 comprises the following steps:
step S31: inputting a large amount of preprocessed multi-modal CT patient data, applying data augmentation to prevent overfitting, and dividing the annotated drainage-area partitions into two groups, the first group being the clinical partitions and the second group being partitions used only to assist the generation of rectal cancer CTV and cervical cancer CTV;
step S32: randomly grouping the augmented images, feeding each group into the network, training the network until the evaluation metric no longer improves, and saving the model;
step S33: inputting new data into the saved network model and outputting probability maps of the clinical lymphatic partitions and the auxiliary lymphatic partitions; and
step S34: post-processing the partitions.
5. The deep-learning-based automatic pelvic tumor CTV delineation method according to claim 1, wherein step S4 comprises the following steps:
step S41: converting the clinical drainage partitions and the auxiliary partitions of the partition contours into binary images, respectively;
step S42: retaining or removing some of said clinical drainage partitions and said auxiliary partitions according to the case staging and the characteristics of cervical cancer;
step S43: fusing the bilaterally symmetric clinical drainage partitions into two large regions using a conventional method; and
step S44: fusing the cervical cancer auxiliary partition with the two separately fused large regions using a conventional method to generate the cervical cancer CTV.
6. The deep-learning-based automatic pelvic tumor CTV delineation method according to claim 1, wherein step S5 comprises the following steps:
step S51: converting the clinical drainage partitions and the auxiliary partitions of the partition contours into binary images, respectively;
step S52: retaining or removing some of said clinical drainage partitions and said auxiliary partitions according to the case staging and the characteristics of rectal cancer;
step S53: dividing said clinical partitions into directly fusible clinical partitions and indirectly fusible clinical partitions, and fusing the rectal cancer auxiliary partition with the directly fusible clinical partitions to generate one large region; and
step S54: fusing the indirectly fusible clinical partitions with the large region generated by the fusion into a single region using a conventional method to generate the rectal cancer CTV.
CN202110142618.1A 2021-02-02 2021-02-02 Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning Active CN112790782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110142618.1A CN112790782B (en) 2021-02-02 2021-02-02 Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110142618.1A CN112790782B (en) 2021-02-02 2021-02-02 Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning

Publications (2)

Publication Number Publication Date
CN112790782A (en) 2021-05-14
CN112790782B CN112790782B (en) 2022-06-24

Family

ID=75813712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110142618.1A Active CN112790782B (en) 2021-02-02 2021-02-02 Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning

Country Status (1)

Country Link
CN (1) CN112790782B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113288193A (en) * 2021-07-08 2021-08-24 广州柏视医疗科技有限公司 Automatic delineation method of CT image breast cancer clinical target area based on deep learning
CN113488146A (en) * 2021-07-29 2021-10-08 广州柏视医疗科技有限公司 Automatic delineation method for drainage area and metastatic lymph node of head and neck nasopharyngeal carcinoma
CN113689419A (en) * 2021-09-03 2021-11-23 电子科技大学长三角研究院(衢州) Image segmentation processing method based on artificial intelligence
CN114494496A (en) * 2022-01-27 2022-05-13 深圳市铱硙医疗科技有限公司 Automatic intracranial hemorrhage delineation method and device based on head CT flat scanning image
CN116570848A (en) * 2023-07-13 2023-08-11 神州医疗科技股份有限公司 Radiotherapy clinical auxiliary decision-making method and system based on automatic sketching
CN117351489A (en) * 2023-12-06 2024-01-05 四川省肿瘤医院 Head and neck tumor target area delineating system for whole-body PET/CT scanning

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780491A (en) * 2017-01-23 2017-05-31 天津大学 The initial profile generation method used in GVF methods segmentation CT pelvis images
CN108062754A (en) * 2018-01-19 2018-05-22 深圳大学 Segmentation, recognition methods and device based on dense network image
CN109118490A (en) * 2018-06-28 2019-01-01 厦门美图之家科技有限公司 A kind of image segmentation network generation method and image partition method
CN109754402A (en) * 2018-03-15 2019-05-14 京东方科技集团股份有限公司 Image processing method, image processing apparatus and storage medium
CN109949309A (en) * 2019-03-18 2019-06-28 安徽紫薇帝星数字科技有限公司 A kind of CT image for liver dividing method based on deep learning
CN111127444A (en) * 2019-12-26 2020-05-08 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN111797779A (en) * 2020-07-08 2020-10-20 兰州交通大学 Remote sensing image semantic segmentation method based on regional attention multi-scale feature fusion
CN111862021A (en) * 2020-07-13 2020-10-30 中山大学 Deep learning-based automatic head and neck lymph node and drainage area delineation method
CN111968120A (en) * 2020-07-15 2020-11-20 电子科技大学 Tooth CT image segmentation method for 3D multi-feature fusion
CN112057751A (en) * 2020-08-14 2020-12-11 中南大学湘雅医院 Automatic delineation method for organs endangered in pelvic cavity radiotherapy
CN112150470A (en) * 2020-09-22 2020-12-29 平安科技(深圳)有限公司 Image segmentation method, image segmentation device, image segmentation medium, and electronic device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780491A (en) * 2017-01-23 2017-05-31 天津大学 The initial profile generation method used in GVF methods segmentation CT pelvis images
CN108062754A (en) * 2018-01-19 2018-05-22 深圳大学 Segmentation, recognition methods and device based on dense network image
CN109754402A (en) * 2018-03-15 2019-05-14 京东方科技集团股份有限公司 Image processing method, image processing apparatus and storage medium
CN109118490A (en) * 2018-06-28 2019-01-01 厦门美图之家科技有限公司 A kind of image segmentation network generation method and image partition method
CN109949309A (en) * 2019-03-18 2019-06-28 安徽紫薇帝星数字科技有限公司 A kind of CT image for liver dividing method based on deep learning
CN111127444A (en) * 2019-12-26 2020-05-08 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN111797779A (en) * 2020-07-08 2020-10-20 兰州交通大学 Remote sensing image semantic segmentation method based on regional attention multi-scale feature fusion
CN111862021A (en) * 2020-07-13 2020-10-30 中山大学 Deep learning-based automatic head and neck lymph node and drainage area delineation method
CN111968120A (en) * 2020-07-15 2020-11-20 电子科技大学 Tooth CT image segmentation method for 3D multi-feature fusion
CN112057751A (en) * 2020-08-14 2020-12-11 中南大学湘雅医院 Automatic delineation method for organs endangered in pelvic cavity radiotherapy
CN112150470A (en) * 2020-09-22 2020-12-29 平安科技(深圳)有限公司 Image segmentation method, image segmentation device, image segmentation medium, and electronic device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113288193A (en) * 2021-07-08 2021-08-24 广州柏视医疗科技有限公司 Automatic delineation method of CT image breast cancer clinical target area based on deep learning
CN113288193B (en) * 2021-07-08 2022-04-01 广州柏视医疗科技有限公司 Automatic delineation system of CT image breast cancer clinical target area based on deep learning
CN113488146A (en) * 2021-07-29 2021-10-08 广州柏视医疗科技有限公司 Automatic delineation method for drainage area and metastatic lymph node of head and neck nasopharyngeal carcinoma
CN113488146B (en) * 2021-07-29 2022-04-01 广州柏视医疗科技有限公司 Automatic delineation method for drainage area and metastatic lymph node of head and neck nasopharyngeal carcinoma
CN113689419A (en) * 2021-09-03 2021-11-23 电子科技大学长三角研究院(衢州) Image segmentation processing method based on artificial intelligence
CN114494496A (en) * 2022-01-27 2022-05-13 深圳市铱硙医疗科技有限公司 Automatic intracranial hemorrhage delineation method and device based on head CT flat scanning image
CN114494496B (en) * 2022-01-27 2022-09-20 深圳市铱硙医疗科技有限公司 Automatic intracranial hemorrhage delineation method and device based on head CT flat scanning image
CN116570848A (en) * 2023-07-13 2023-08-11 神州医疗科技股份有限公司 Radiotherapy clinical auxiliary decision-making method and system based on automatic sketching
CN116570848B (en) * 2023-07-13 2023-09-15 神州医疗科技股份有限公司 Radiotherapy clinical auxiliary decision-making system based on automatic sketching
CN117351489A (en) * 2023-12-06 2024-01-05 四川省肿瘤医院 Head and neck tumor target area delineating system for whole-body PET/CT scanning
CN117351489B (en) * 2023-12-06 2024-03-08 四川省肿瘤医院 Head and neck tumor target area delineating system for whole-body PET/CT scanning

Also Published As

Publication number Publication date
CN112790782B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN112790782B (en) Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning
CN112950651B (en) Automatic delineation method of mediastinal lymph drainage area based on deep learning network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
US6961454B2 (en) System and method for segmenting the left ventricle in a cardiac MR image
Commowick et al. An efficient locally affine framework for the smooth registration of anatomical structures
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN111127444B (en) Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN106683104B (en) Prostate Magnetic Resonance Image Segmentation method based on integrated depth convolutional neural networks
CN110706246A (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
CN109214397A (en) The dividing method of Lung neoplasm in a kind of lung CT image
CN110363802B (en) Prostate image registration system and method based on automatic segmentation and pelvis alignment
CN109509193B (en) Liver CT atlas segmentation method and system based on high-precision registration
CN111275712B (en) Residual semantic network training method oriented to large-scale image data
CN105389811A (en) Multi-modality medical image processing method based on multilevel threshold segmentation
CN104424629A (en) X-ray chest radiography lung segmentation method and device
CN112529909A (en) Tumor image brain region segmentation method and system based on image completion
CN112102373A (en) Carotid artery multi-mode image registration method based on strong constraint affine deformation feature learning
Jin et al. Object recognition in medical images via anatomy-guided deep learning
EP2191440A1 (en) Object segmentation using dynamic programming
Lee et al. Tubule segmentation of fluorescence microscopy images based on convolutional neural networks with inhomogeneity correction
Park et al. Cardiac segmentation on CT Images through shape-aware contour attentions
CN113362360B (en) Ultrasonic carotid plaque segmentation method based on fluid velocity field
Wang et al. Accurate lung nodule segmentation with detailed representation transfer and soft mask supervision
CN116523983B (en) Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance
CN116862930B (en) Cerebral vessel segmentation method, device, equipment and storage medium suitable for multiple modes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant