CN109829918B - Liver image segmentation method based on dense feature pyramid network - Google Patents


Info

Publication number
CN109829918B
CN109829918B CN201910001654.9A CN201910001654A CN109829918B CN 109829918 B CN109829918 B CN 109829918B CN 201910001654 A CN201910001654 A CN 201910001654A CN 109829918 B CN109829918 B CN 109829918B
Authority
CN
China
Prior art keywords
network
segmentation
convolution
liver
dense
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910001654.9A
Other languages
Chinese (zh)
Other versions
CN109829918A (en)
Inventor
王正刚
程荣
黄宜庆
王冠凌
杨会成
赵发
代广珍
魏安静
邱意敏
金震妮
朱世东
朱卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Polytechnic University
Original Assignee
Anhui Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Polytechnic University filed Critical Anhui Polytechnic University
Priority to CN201910001654.9A
Publication of CN109829918A
Application granted
Publication of CN109829918B
Active legal status
Anticipated expiration legal status

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a liver image segmentation method based on a dense feature pyramid network (DPFN), in the technical field of image processing. The network addresses the segmentation of multi-phase liver CT images: building on a fully convolutional segmentation network, it introduces a feature pyramid and uses dense connections to strengthen the feature flow, achieving pixel-level liver segmentation on multi-phase CT. The method reaches a Dice value of 95.0% on the public 3DIRCADb database, improving segmentation performance, and experiments on public and clinical datasets show that a DPFN trained on CT images with larger layer thickness can be seamlessly generalized to CT images with smaller layer thickness.

Description

Liver image segmentation method based on dense feature pyramid network
Technical Field
The invention relates to the technical field of image processing, in particular to a liver image segmentation method based on a dense feature pyramid network.
Background
Automatic segmentation of the liver in multi-phase CT images is an important step in liver diagnosis and therapy planning. Accurate liver segmentation is essential in many clinical applications, such as diagnosis of liver disease, functional testing and surgical planning. Manual segmentation is a heavy, error-prone and time-consuming task, especially on large volumes of CT data. Therefore, automatic segmentation of the liver is very necessary. However, this is a challenging task because of the complex liver anatomy, blurred boundaries, and various morphologies in CT images.
Several segmentation methods based on CT images have been proposed. They can be broadly divided into non-machine-learning and machine-learning methods. Non-machine-learning methods typically rely on the statistical distribution of Hounsfield Unit (HU) values in CT data and include atlas-based, active shape model (ASM)-based, level-set-based, and graph-cut-based methods. For example, Wang et al. (A new segmentation framework based on sparse shape composition in liver surgery planning system. Medical Physics. 2013, 40(5):051913) combined a sparse shape composition model with a fast iterative level-set method to achieve simultaneous, accurate segmentation of the liver, hepatic veins and tumors. Al-Shaikhli et al. (Al-Shaikhli, Yang, Rosenhahn. Automatic 3D liver segmentation using sparse representation of global and local image information via level set formulation. Computer Science. 2015) developed a level-set method for automatic 3D liver segmentation using sparse representations of global and local image information. Li et al. (Automatic liver segmentation based on shape constraints and deformable graph cut in CT images. IEEE Transactions on Image Processing. 2015, 24(12):5315) proposed a deformable graph cut that incorporates shape constraints into the region and boundary costs of the graph cut. Machine-learning methods train classifiers on hand-crafted features to achieve good segmentation.
In recent years, deep learning has excelled at various challenging tasks such as classification, segmentation and detection, and several automatic liver segmentation methods based on convolutional neural networks have been proposed. Lu et al. (Automatic 3D liver location and segmentation via convolutional neural network and graph cut. International Journal of Computer Assisted Radiology and Surgery. 2017, 12(2):171) proposed a 3D FCN with graph-cut post-processing. However, in studying the segmentation of liver CT images, the inventors found that current segmentation methods still suffer from low segmentation performance, and that segmentation models trained on CT images of one layer thickness do not generalize to images of a different layer thickness.
Disclosure of Invention
In view of the above, the present invention is directed to a liver image segmentation method based on a dense feature pyramid network, so as to overcome all or part of the deficiencies in the prior art.
The invention provides a liver image segmentation method based on a dense feature pyramid network: a dense feature pyramid model network is constructed from a fully convolutional segmentation network, a feature pyramid and dense connections, and this model network is then used in image processing to achieve pixel-level liver segmentation on multi-phase liver CT.
In some optional embodiments, the full convolution based segmentation network is an end-to-end full convolution segmentation network.
In some optional embodiments, the end-to-end fully convolutional segmentation network consists of an encoder and a decoder, and encoder features are directly connected to decoder features at the same scale.
In some alternative embodiments, the encoder consists of a plurality of convolution blocks that extract semantic features and compress the feature maps; each convolution block consists of two cascaded convolution layers and one max-pooling layer.
In some alternative embodiments, the decoder consists of a plurality of deconvolution blocks, each comprising one deconvolution layer and two cascaded convolution layers.
In some optional embodiments, the feature pyramid serves as an input module: downsampling by different factors yields multi-scale feature maps that are integrated into each subsequent layer of the network. The downsampling matches features of different scales to the map, and a concatenation operation aligns the multi-scale features for convolution, satisfying the following relationship:
C_i = Concat(D(I), D(O_{i-1}))
where I is the original input feature map, O_i is the output of each convolution block, C_i is the input of each convolution block, Concat is the concatenation operation along the channel dimension, and D is the downsampling operation.
In some alternative embodiments, the dense connections comprise tight connections between layers at different levels, and the output of each tightly connected layer satisfies the following relationship:
x_l = H_l([D(x_0), …, D(x_{l-2}), x_{l-1}])
where x_l is the output of layer l, H_l is a combination of operations such as convolution, pooling and activation, and D is a downsampling operation that matches the output scale of an earlier layer to that of x_{l-1}.
In some alternative embodiments, the dense feature pyramid model network is implemented with TensorFlow 1.4, the network parameters are randomly initialized using a Gaussian distribution, and the weighted cross-entropy loss satisfies the following relationship:
L = -(1/n) Σ_{i=1}^{N} ω_i Σ_{x∈C_i} log p̂(x)
where p̂(x) is the probability that pixel x belongs to its corresponding class, ω_i is the weight factor, C_i is the class, n is the total number of pixels, and N is the number of classes.
From the above, the liver image segmentation method based on the dense feature pyramid network (DPFN) provided by the invention handles the segmentation of multi-phase liver CT images: the network builds on a fully convolutional segmentation network (FCN), introduces a feature pyramid, and uses dense connections to strengthen the feature flow, achieving pixel-level liver segmentation on multi-phase CT. The method reaches a Dice value of 95.0% on the public 3DIRCADb database, improving segmentation performance, and public and clinical datasets demonstrate that a DPFN trained on CT images with larger layer thickness can be seamlessly generalized to CT images with smaller layer thickness.
Drawings
FIG. 1 is a diagram showing the complex anatomical structure, blurred boundaries and varied morphology of the liver in prior-art CT images;
a: transverse section, b: coronal plane, c: sagittal plane;
FIG. 2 is a diagram illustrating the structure of the components of a DPFN in accordance with an embodiment of the present invention;
boxes: feature maps; thin arrows: downsampling operations; thick arrows: convolution and max pooling;
FIG. 3 is an architecture diagram of a DPFN in accordance with an embodiment of the present invention;
FIG. 4 is a diagram illustrating the result of generalizing a DPFN trained on CT images with large layer thickness to the segmentation of CT images with small layer thickness in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments and the accompanying drawings.
To solve the prior-art problems that image segmentation methods achieve low segmentation performance on multi-phase liver CT images, and that segmentation models trained on CT images of different layer thicknesses are not mutually generalizable, an embodiment of the invention provides a liver image segmentation method based on a dense feature pyramid network.
The specific steps for realizing the liver image segmentation method based on the dense feature pyramid network provided by the embodiment of the invention are as follows. The DPFN shown in fig. 3 consists of three parts: an end-to-end FCN, a pyramid feature input module, and dense connections.
1) End-to-end FCN: the FCN automatically learns a hierarchy of features from labeled CT images, as shown in fig. 2(a). The model of the present invention draws on the experience of U-Net (Ronneberger, Fischer, Brox 2015) for liver segmentation and consists of an encoder and a decoder. The encoder is composed of several convolution blocks that extract semantic features and compress the feature maps; each convolution block consists of two cascaded convolution layers and one max-pooling layer, so the outputs of successive convolution blocks have different scales because of the max-pooling layers. The decoder replaces the convolution block with a deconvolution block to increase the resolution of each intermediate output; each deconvolution block consists of one deconvolution layer and two cascaded convolution layers. At the same scale, encoder features are directly connected to decoder features to synthesize finer segmentation results. Overall, this constructs a deep segmentation FCN that can be trained end-to-end. Considering the pixel-level imbalance between organ sizes, a weighted cross-entropy loss is defined as
L = -(1/n) Σ_{i=1}^{N} ω_i Σ_{x∈C_i} log p̂(x)
where p̂(x) is the probability that pixel x belongs to its corresponding class, ω_i is the weight factor, C_i is the class, n is the total number of pixels, and N is the number of classes.
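The weighted cross-entropy above can be sketched in plain Python. This is a minimal illustration, not the patent's TensorFlow implementation; the per-pixel averaging and the exact normalization are assumptions, since the original equation is only available as an image in the publication.

```python
import math

def weighted_cross_entropy(probs, labels, weights):
    """Weighted cross-entropy over n pixels and N classes.

    probs[x][i]: predicted probability that pixel x belongs to class i.
    labels[x]:   ground-truth class index of pixel x.
    weights[i]:  weight factor omega_i for class i.
    """
    n = len(labels)
    loss = 0.0
    for x, label in enumerate(labels):
        # each pixel contributes -omega_i * log p_hat(x) for its true class
        loss -= weights[label] * math.log(probs[x][label])
    return loss / n

# Two pixels, two classes (0 = background, 1 = liver), weights 1 and 16
# as in the embodiment described later in the text.
probs = [[0.9, 0.1], [0.2, 0.8]]
labels = [0, 1]
loss = weighted_cross_entropy(probs, labels, weights=[1.0, 16.0])
```

Here the heavy weight on the liver class makes the mis-predicted liver pixel dominate the loss, which is the point of weighting under class imbalance.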
2) Feature pyramid input module
By integrating multi-scale features, an image pyramid input can effectively improve segmentation performance. Unlike models that feed multi-scale images to separate branches of a multi-branch network and merge the feature maps only at the last layer, the DPFN downsamples the input by different factors to obtain multi-scale feature maps and integrates them into each subsequent layer of the network. The multi-scale features are smoothly integrated into the FCN encoder, computing a hierarchy of multi-scale feature maps with a stride of 2. Downsampling is implemented with max-pooling layers and the multi-scale input is constructed at the encoder; fig. 2(b) shows the features being downsampled to build this multi-scale input.
Specifically, denote the original input feature map by I and the output of each convolution block by O_i. The input C_i of each convolution block is defined as:
C_i = Concat(D(I), D(O_{i-1}))
where Concat is the concatenation operation along the channel dimension and D is the downsampling operation (max pooling in this embodiment). The downsampling operation matches features of different scales, and the concatenation operation aligns the multi-scale features for convolution. Thus, integrating multi-scale features into the network requires few additional parameters.
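The relation C_i = Concat(D(I), D(O_{i-1})) can be illustrated with a toy example: 2×2 max pooling plays the role of D, and the "channels" are represented as a list of 2-D maps. This is a minimal sketch with plain Python lists standing in for feature maps, not the patent's network code.

```python
def max_pool_2x(feat):
    """2x2 max pooling with stride 2 on a 2-D feature map (list of rows)."""
    h, w = len(feat), len(feat[0])
    return [[max(feat[r][c], feat[r][c + 1], feat[r + 1][c], feat[r + 1][c + 1])
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def pyramid_input(original, prev_output):
    """C_i = Concat(D(I), D(O_{i-1})): downsample both inputs to the same
    scale, then stack them along the channel dimension (here, a list)."""
    return [max_pool_2x(original), max_pool_2x(prev_output)]

I = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
O = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
C = pyramid_input(I, O)  # two 2x2 channels at the pooled scale
```

After pooling, both channels have the same spatial size, so they can be concatenated and convolved together, which is why the module adds almost no parameters.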
3) Dense connections
In a multi-layer CNN, let x_l denote the output of layer l; then x_l can be defined as:
x_l = H_l(x_{l-1})
where H_l is a combination of operations such as convolution, pooling and activation. To speed up convergence and avoid vanishing gradients, residual learning is introduced: an identity mapping of the previous layer's features is integrated with H_l to enhance information transfer, forming a residual block defined as:
x_l = H_l(x_{l-1}) + x_{l-1}
however, the outputs of the two branches add directly, which reduces the flow of information in the network. To further improve the information flow within the network, dense connections are used, i.e. the output of the previous layer is connected to the outputs of all subsequent layers. Dense concatenation expands the concept of residual learning to the extreme. Specifically, x l Can be defined as:
x l =H l ([x 0 ,x 1 ,…,x l-1 ])
wherein [ x ] 0 ,x 1 ,…,x l-1 ]Refers to the connection layer 0, ·, l-1 that produces the feature map. Dense connectivity exists only between successive convolutional layers at the same scale (called dense blocks). To further improve information flow between different scale features, different levels of tight junctions are constructed on the systolic path of the FCN as shown in fig. 2 (c) to establish dense connectivity on different scale feature maps. Then x l Is defined as:
x_l = H_l([D(x_0), …, D(x_{l-2}), x_{l-1}])
where D is a downsampling operation that matches the output scale of an earlier layer to that of x_{l-1}.
The DPFN predicts the segmentation result, and connected component analysis (CCA) is then performed to reject false positives: the largest connected component of the segmentation is retained and the rest is discarded.
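The connected-component post-processing can be sketched as a breadth-first search over a binary mask. This is a minimal illustration assuming 4-connectivity in 2-D, which the patent does not specify.

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a binary
    mask (list of rows of 0/1); all other foreground pixels become 0."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:                       # BFS flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

pred = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone pixel at (1, 3) is a false positive
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
clean = largest_component(pred)
```

In practice a library routine (e.g. a 3-D labeling function) would be used, but the retain-largest-discard-rest logic is the same.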
As shown in fig. 3, the DPFN in the embodiment of the present invention comprises a basic 5-layer FCN, with the dense connections applied across its 4-layer encoder and decoder paths. In the encoder path, each layer has 2 convolution layers followed by 1 max-pooling layer; in the decoder path, each layer has 2 convolution layers followed by 1 deconvolution layer. The densely connected feature pyramid is applied on the encoder path. The stride of both convolution and deconvolution is 1, and the stride of max pooling is 2. The convolution kernel size is 3, and the deconvolution and max-pooling kernel sizes are 2. To exploit context, successive CT images are stacked along the channel dimension before being input to the network.
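The final step above — stacking successive CT images along the channel dimension for context — can be sketched as follows. The amount of context (one neighbor on each side) and the clamping at volume edges are assumptions not specified in the patent.

```python
def stack_slices(volume, index, context=1):
    """Stack a CT slice with its neighbors along the channel axis to give
    the network 2.5-D context; indices past the volume edges are clamped.
    `volume` is a list of 2-D slices (each a list of rows)."""
    n = len(volume)
    picks = [min(max(index + k, 0), n - 1) for k in range(-context, context + 1)]
    return [volume[p] for p in picks]

vol = [[[i]] for i in range(5)]  # five toy 1x1 slices with values 0..4
x = stack_slices(vol, 0)         # clamped at the first slice
y = stack_slices(vol, 2)         # interior slice with both neighbors
```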
The DPFN was implemented with TensorFlow 1.4. Network parameters are randomly initialized from a Gaussian distribution (μ = 0, σ = 0.01). An Adam optimizer with an initial learning rate of 0.0001 is used for parameter updates. Considering the relative sizes of the background and the liver, the weights of the cross-entropy loss are set to 1 and 16, respectively.
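The stated Gaussian initialization (μ = 0, σ = 0.01) can be sketched in plain Python; the weight-matrix shape and the use of `random.gauss` are illustrative stand-ins, since the patent's TensorFlow code is not given.

```python
import random

def gaussian_init(shape, mu=0.0, sigma=0.01):
    """Initialize a weight matrix with entries drawn from N(mu, sigma^2),
    matching the initialization stated in the text (mu = 0, sigma = 0.01)."""
    rows, cols = shape
    return [[random.gauss(mu, sigma) for _ in range(cols)] for _ in range(rows)]

random.seed(0)           # fixed seed so the sketch is reproducible
w = gaussian_init((64, 32))
```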
Dataset and evaluation-index tests were performed with the DPFN described above, which was evaluated on the public 3DIRCADb database. It consists of 3D CT images of 10 female and 10 male patients, 75% of whom have liver tumors. To explore the generalization ability of the segmentation method, the datasets of the MICCAI 2017 LiTS challenge were also used. The LiTS dataset contains 131 training and 70 test venous-phase contrast-enhanced three-dimensional abdominal CT scans, collected at different European medical centers, with in-plane resolution varying widely from 0.55 mm to 1.0 mm and layer thickness from 0.45 mm to 6.0 mm. To test whether a model trained on CT images with larger layer thickness can be generalized to CT images with smaller layer thickness, 51 CT volumes with slice thickness below 1 mm were selected.
For data preprocessing, the HU values of all CT images are clipped to [-75, 175] to eliminate irrelevant tissue, following clinicians' recommendations. Segmentation performance is measured by the common Dice score, which measures the overlap between the manual label and the model prediction. Denoting the foreground of the manual label by A and the predicted foreground by B, the Dice similarity index is defined as:
Dice(A, B) = 2|A ∩ B| / (|A| + |B|)
where |·| denotes the number of pixels belonging to the foreground in the binary segmentation, and |A ∩ B| denotes the number of pixels belonging to the foreground in both A and B. For a comprehensive evaluation, the Volumetric Overlap Error (VOE), Relative Volume Difference (RVD), Average Symmetric Surface Distance (ASD) and Root Mean Square Symmetric Surface Distance (RMSD) are also used to measure the accuracy of the segmentation results; for these four indices, smaller values indicate better segmentation. To verify the effectiveness and robustness of the segmentation method provided in the embodiment of the present invention, experiments were performed on the 3DIRCADb dataset, shown in tables 1 and 2.
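The preprocessing (HU clipping to [-75, 175]) and the Dice similarity index can be sketched in plain Python; binary masks are represented here as lists of 0/1 rows purely for illustration.

```python
def clip_hu(image, lo=-75, hi=175):
    """Clip HU values to [lo, hi], as in the preprocessing described above."""
    return [[min(max(v, lo), hi) for v in row] for row in image]

def dice(a, b):
    """Dice(A, B) = 2|A intersect B| / (|A| + |B|) for binary masks."""
    inter = sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    size = sum(sum(r) for r in a) + sum(sum(r) for r in b)
    return 2.0 * inter / size

A = [[1, 1, 0], [1, 1, 0]]   # manual label foreground
B = [[1, 1, 0], [0, 1, 1]]   # predicted foreground
score = dice(A, B)           # 2*3 / (4 + 4) = 0.75
```

A perfect prediction gives a Dice of 1.0; the 95.0% figure reported above corresponds to a Dice of 0.95 on this scale.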
TABLE 1 3Dircadb liver segmentation quantification results
(Table 1 was provided as an image in the original publication; its quantitative values are not reproduced here.)
Table 1 compares liver segmentation performance with state-of-the-art methods on the 3DIRCADb dataset (Li et al. 2018; Christ et al. 2017; Han 2017; Chlebus et al. 2017). The segmentation method provided by the embodiment of the invention performs better on liver segmentation, and the experimental comparison verifies its superiority over the other methods. In future work, the dense pyramid feature input module will be applied to a three-dimensional network.
During the research it was also found that the DPFN has good generalization ability: liver segmentation can be extended from CT images with large layer thickness to CT images with small layer thickness. To demonstrate this, the DPFN was trained on 3DIRCADb, and 51 volumes with slice thickness below 1 mm were selected from the LiTS dataset as the test set; the results are shown in table 2. Although the training set for this experiment comprised 60 volumes, only 20 of which were in the venous phase, the DPFN achieved a good Dice of 90.97%. FIG. 4 shows the segmentation results of two samples with slice thicknesses of 0.8 mm and 0.7 mm. The first row of the figure is the original CT image, the second row the manual label, and the third row the result produced by the DPFN. The first three columns are the transverse, sagittal and coronal planes of the 0.8 mm sample; the last three columns are those of the 0.7 mm sample. The boundaries in fig. 4 are clearer than in fig. 1.
TABLE 2 Quantitative results of a model trained on CT images with larger layer thickness, tested on CT images with smaller layer thickness
Method Dice Similarity Index Per Case(%)
Baseline 85.01±14.50
DPFN 90.97±4.43
In the embodiment of the invention, the liver image segmentation method based on the dense feature pyramid network constructs, in an image processor, a dense feature pyramid model network (DPFN) consisting of a fully convolutional segmentation network, a feature pyramid and dense connections, and uses it to automatically segment the liver in contrast-enhanced CT data, introducing pyramid features and dense connections into the FCN. The method performs well on the public dataset and has strong generalization ability: a DPFN trained on CT images with larger layer thickness can be seamlessly extended to CT images with smaller layer thickness. It can also be easily extended to 3D networks and applied to other medical image segmentation tasks.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the invention, technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist as described above, which are not provided in detail for the sake of brevity.
The embodiments of the invention are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements and the like that may be made without departing from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (2)

1. A liver image segmentation method based on a dense feature pyramid network, characterized in that the segmentation method constructs, in image processing, a dense feature pyramid model network composed of a fully convolutional segmentation network, a feature pyramid and dense connections, and then uses the model network to achieve pixel-level liver segmentation on multi-phase liver CT;
the full convolution based segmentation network is an end-to-end full convolution segmentation network;
the end-to-end fully convolutional segmentation network consists of an encoder and a decoder, and encoder features are directly connected to decoder features at the same scale;
the encoder is composed of a plurality of convolution blocks and is used for extracting semantic features and compressing feature mapping, and each convolution block is composed of two cascaded convolution layers and a maximum pooling layer;
the decoder is composed of a plurality of deconvolution blocks, and each deconvolution block is composed of one deconvolution layer and two cascaded convolution layers;
the feature pyramid serves as an input module: downsampling by different factors yields multi-scale feature maps that are integrated into each subsequent layer of the network; features of different scales are matched to the map, and the multi-scale features are aligned by a concatenation operation for convolution, satisfying the following relationship:
C_i = Concat(D(I), D(O_{i-1}))
where I is the original input feature map, O_i is the output of each convolution block, C_i is the input of each convolution block, Concat is the concatenation operation along the channel dimension, and D is the downsampling operation;
the dense connections comprise tight connections between layers at different levels, and the output of each tightly connected layer satisfies the following relationship:
x_l = H_l([D(x_0), …, D(x_{l-2}), x_{l-1}])
where x_l is the output of layer l, H_l is a combination of operations such as convolution, pooling and activation, and D is a downsampling operation that matches the output scale of an earlier layer to that of x_{l-1}.
2. The liver image segmentation method based on the dense feature pyramid network as claimed in claim 1, wherein the dense feature pyramid model network is implemented with TensorFlow 1.4, network parameters are randomly initialized using a Gaussian distribution, and the weighted cross-entropy loss satisfies the following relationship:
L = -(1/n) Σ_{i=1}^{N} ω_i Σ_{x∈C_i} log p̂(x)
where p̂(x) is the probability that pixel x belongs to its corresponding class, ω_i is the weight factor, C_i is the class, n is the total number of pixels, and N is the number of classes.
CN201910001654.9A 2019-01-02 2019-01-02 Liver image segmentation method based on dense feature pyramid network Active CN109829918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910001654.9A CN109829918B (en) 2019-01-02 2019-01-02 Liver image segmentation method based on dense feature pyramid network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910001654.9A CN109829918B (en) 2019-01-02 2019-01-02 Liver image segmentation method based on dense feature pyramid network

Publications (2)

Publication Number Publication Date
CN109829918A CN109829918A (en) 2019-05-31
CN109829918B true CN109829918B (en) 2022-10-11

Family

ID=66860210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910001654.9A Active CN109829918B (en) 2019-01-02 2019-01-02 Liver image segmentation method based on dense feature pyramid network

Country Status (1)

Country Link
CN (1) CN109829918B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310281B (en) * 2019-07-10 2023-03-03 重庆邮电大学 Mask-RCNN deep learning-based pulmonary nodule detection and segmentation method in virtual medical treatment
CN110728683B (en) * 2019-09-29 2021-02-26 吉林大学 Image semantic segmentation method based on dense connection
CN110853046A (en) * 2019-10-12 2020-02-28 沈阳航空航天大学 Pancreatic tissue segmentation method based on deep learning
CN110728238A (en) * 2019-10-12 2020-01-24 安徽工程大学 Personnel re-detection method of fusion type neural network
CN110910408A (en) * 2019-11-28 2020-03-24 慧影医疗科技(北京)有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN111145170B (en) * 2019-12-31 2022-04-22 电子科技大学 Medical image segmentation method based on deep learning
CN111815628B (en) * 2020-08-24 2021-02-19 武汉精测电子集团股份有限公司 Display panel defect detection method, device, equipment and readable storage medium
CN112767407B (en) * 2021-02-02 2023-07-07 南京信息工程大学 CT image kidney tumor segmentation method based on cascade gating 3DUnet model
CN112967294A (en) * 2021-03-11 2021-06-15 西安智诊智能科技有限公司 Liver CT image segmentation method and system
CN115578404B (en) * 2022-11-14 2023-03-31 南昌航空大学 Liver tumor image enhancement and segmentation method based on deep learning

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2018125580A1 (en) * 2016-12-30 2018-07-05 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks
CN109035197A (en) * 2018-05-31 2018-12-18 东南大学 CT contrastographic picture tumor of kidney dividing method and system based on Three dimensional convolution neural network
CN109063710A (en) * 2018-08-09 2018-12-21 成都信息工程大学 Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10600185B2 (en) * 2017-03-08 2020-03-24 Siemens Healthcare Gmbh Automatic liver segmentation using adversarial image-to-image network


Non-Patent Citations (2)

Title
Automatic 3D region segmentation of liver CT images based on a novel deep fully convolutional network; Sun Mingjian et al.; Chinese Journal of Biomedical Engineering; 2018-08-20 (No. 04); full text *
Liver segmentation combining improved U-Net and Morphsnakes; Liu Zhe et al.; Journal of Image and Graphics; 2018-08-16 (No. 08); full text *

Also Published As

Publication number Publication date
CN109829918A (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN109829918B (en) Liver image segmentation method based on dense feature pyramid network
CN111311592B (en) Three-dimensional medical image automatic segmentation method based on deep learning
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
CN110097550B (en) Medical image segmentation method and system based on deep learning
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112465827B (en) Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation
CN110309860B (en) Method for classifying malignancy degree of lung nodule based on convolutional neural network
WO2021203795A1 (en) Pancreas ct automatic segmentation method based on saliency dense connection expansion convolutional network
CN111340828A (en) Brain glioma segmentation based on cascaded convolutional neural networks
CN111091573B (en) CT image pulmonary vessel segmentation method and system based on deep learning
Shen et al. Efficient symmetry-driven fully convolutional network for multimodal brain tumor segmentation
CN113808146B (en) Multi-organ segmentation method and system for medical image
CN112767417B (en) Multi-modal image segmentation method based on cascaded U-Net network
Fan et al. Unsupervised cerebrovascular segmentation of TOF-MRA images based on deep neural network and hidden Markov random field model
Hammouda et al. A deep learning-based approach for accurate segmentation of bladder wall using MR images
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
Amiri et al. Bayesian Network and Structured Random Forest Cooperative Deep Learning for Automatic Multi-label Brain Tumor Segmentation.
Wu et al. Cascaded fully convolutional DenseNet for automatic kidney segmentation in ultrasound images
Pang et al. A modified scheme for liver tumor segmentation based on cascaded FCNs
Du et al. TSU-net: two-stage multi-scale cascade and multi-field fusion U-net for right ventricular segmentation
Tan et al. Automatic prostate segmentation based on fusion between deep network and variational methods
Cui et al. An improved combination of faster R-CNN and U-net network for accurate multi-modality whole heart segmentation
CN111667488B (en) Medical image segmentation method based on multi-angle U-Net
CN117152173A (en) Coronary artery segmentation method and system based on DUNetR model
Yuan et al. FM-Unet: Biomedical image segmentation based on feedback mechanism Unet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant