CN114862869A - Kidney tissue segmentation method and device based on CT (computed tomography) image - Google Patents
- Publication number
- CN114862869A (application CN202210330739.3A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- image
- sampling
- layer
- layers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11: Image analysis; segmentation: region-based segmentation
- G06N3/045: Neural networks: combinations of networks
- G06N3/048: Neural networks: activation functions
- G06T7/0012: Inspection of images: biomedical image inspection
- G06T2207/10081: Image acquisition modality: computed X-ray tomography [CT]
- G06T2207/30084: Subject of image: kidney; renal
Abstract
The CT-image-based kidney tissue segmentation method and device improve the accuracy of segmentation of the kidney, the renal artery, the renal vein and renal tumors. The method comprises the following steps: (1) acquiring an image to be segmented; (2) slicing the image to be segmented; (3) preprocessing the sliced images; (4) inputting the images into a pre-trained region-of-interest extraction model to obtain a region of interest; (5) inputting the images within the region of interest into a pre-trained segmentation model; (6) post-processing the segmentation result to remove erroneous fragments; (7) performing a three-dimensional visualization of the segmentation result.
Description
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a kidney tissue segmentation method based on a CT (computed tomography) image and a corresponding kidney tissue segmentation device.
Background
Renal CT, i.e. a CT examination of the kidney, examines the kidney by means of computed tomography images.
It is clinically significant in the following respects:
1. It can reveal the position, size, shape and extent of invasion of a tumor, and can characterize a mass as a cystic, parenchymal, fatty or calcified lesion, enabling a qualitative diagnosis.
2. When intravenous urography reveals a non-functioning kidney, CT can determine the location and nature of the lesion, or identify congenital dysplasia.
3. It can detect fine calcifications and calculi, including radiolucent calculi that cannot be visualized by plain X-ray examination.
4. It is of great value in diagnosing renal tuberculosis, and can show intrarenal destruction, pathological calcification, perirenal abscesses and the like.
5. It can be used to assess the location and extent of renal injury, perirenal hematoma, and postoperative complications.
However, it remains difficult to segment the kidney, the renal artery, the renal vein and renal tumors accurately in CT images.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides a kidney tissue segmentation method based on CT images which improves the accuracy of segmentation of the kidney, the renal artery, the renal vein and renal tumors.
The technical scheme of the invention is as follows: the kidney tissue segmentation method based on the CT image comprises the following steps:
(1) acquiring an image to be segmented;
(2) slicing an image to be segmented;
(3) preprocessing the sliced image;
(4) inputting the image into a pre-trained region-of-interest extraction model to obtain a region-of-interest;
(5) inputting the image in the region of interest into a pre-trained segmentation model;
(6) post-processing the segmentation result to remove erroneous fragments;
(7) performing three-dimensional visualization on the segmentation result;
in the step (5), the segmentation model is a multi-scale fusion Attention-DenseUNet network, and the network architecture is divided into two parts: an encoding down-sampling path and a decoding up-sampling path. The down-sampling path comprises 4 dense connection modules of, from top to bottom, 6, 12, 16 and 24 layers. Within a dense connection module, the input of each layer depends not only on the output of layer l−1 but also on the outputs of all preceding layers, as in equation (1):
X_l = H_l([X_0, X_1, …, X_{l-1}])    (1)
where [X_0, X_1, …, X_{l-1}] denotes the concatenation, along the channel dimension, of the outputs of layers 0 through l−1, and H_l denotes the nonlinear transformation; the network implements H_l as the combination BatchNorm + ReLU + Conv 1×1 + Conv 3×3.
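Equation (1) can be illustrated with a minimal numpy sketch of a dense connection module. The `toy_h` function below is a hypothetical stand-in for the real H_l (BatchNorm + ReLU + Conv 1×1 + Conv 3×3), used only to show how the channel-wise concatenation grows layer by layer:

```python
import numpy as np

def dense_block(x, num_layers, transform):
    """Sketch of equation (1): layer l receives the channel-wise
    concatenation of X_0 ... X_{l-1} and produces
    X_l = H_l([X_0, X_1, ..., X_{l-1}])."""
    outputs = [x]                                  # X_0 is the block input
    for _ in range(num_layers):
        concat = np.concatenate(outputs, axis=0)   # [...] = channel concat
        outputs.append(transform(concat))          # H_l
    return np.concatenate(outputs, axis=0)

def toy_h(concat, growth_rate=4):
    """Toy stand-in for H_l: collapses the input to `growth_rate`
    channels via a ReLU of the channel mean (illustration only)."""
    return np.repeat(np.maximum(concat.mean(axis=0, keepdims=True), 0.0),
                     growth_rate, axis=0)

x = np.random.rand(3, 8, 8)                        # 3-channel 8x8 feature map
y = dense_block(x, num_layers=6, transform=toy_h)
print(y.shape)                                     # channels grow: 3 + 6*4 = 27
```

Because every layer's output is appended to the concatenation, the channel count grows linearly with the number of layers, which is what gives a 6/12/16/24-layer block its capacity.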
The region-of-interest extraction model is a ResUNet network, which accurately captures the features of the kidney region in the CT image and outputs the eight vertex coordinates of a bounding box enclosing the entire kidney structure. The segmentation model is a multi-scale fusion Attention-DenseUNet; the extracted region of interest is fed into it for fine segmentation. The multi-scale feature fusion mechanism used in the segmentation model is an important means of improving network performance: low-level features contain more positional information, while high-level features suffer from information loss and a reduced perception of detail. Fusing low-level and high-level features therefore exploits multi-scale spatial information effectively and improves segmentation accuracy. The model performs up-sampling 4 times, outputting a feature map after each up-sampling; each feature map is convolved with a kernel of size 1 × 1 to obtain features with the same number of channels, and these are summed to obtain the final feature map, which is activated by a Sigmoid function to yield the final segmentation result. After segmentation, morphological post-processing is applied to remove erroneous fragments, and the result is then visualized in three dimensions, further improving the segmentation accuracy for the kidney, the renal artery, the renal vein and renal tumors.
Also provided is a kidney tissue segmentation device based on CT images, comprising:
an acquisition module configured to acquire an image to be segmented;
a slicing module configured to slice an image to be segmented;
a pre-processing module configured to pre-process the sliced image;
a region-of-interest extraction module configured to input images into a pre-trained region-of-interest extraction model to obtain a region of interest;
a segmentation module configured to input images within the region of interest to a pre-trained segmentation model;
a post-processing module configured to process the segmentation result to remove some erroneous segments;
a display module configured to perform a three-dimensional visualization of the segmentation results;
in the segmentation module, the segmentation model is a multi-scale fusion Attention-DenseUNet network, and the network architecture is divided into two parts: an encoding down-sampling path and a decoding up-sampling path. The down-sampling path comprises 4 dense connection modules of, from top to bottom, 6, 12, 16 and 24 layers. Within a dense connection module, the input of each layer depends not only on the output of layer l−1 but also on the outputs of all preceding layers, as in equation (1):
X_l = H_l([X_0, X_1, …, X_{l-1}])    (1)
where [X_0, X_1, …, X_{l-1}] denotes the concatenation, along the channel dimension, of the outputs of layers 0 through l−1, and H_l denotes the nonlinear transformation, implemented as the combination BatchNorm + ReLU + Conv 1×1 + Conv 3×3;
a spatial attention module is added in the skip connections between the down-sampling and up-sampling layers, further improving segmentation precision;
a multi-scale fusion strategy is adopted in the decoding part: the model performs up-sampling 4 times, outputting a feature map after each up-sampling; each feature map is convolved with a kernel of size 1 × 1 to obtain features with the same number of channels, which are summed to obtain the final feature map; the final feature map is activated by a Sigmoid function to obtain the final segmentation result.
Drawings
Fig. 1 is a flow chart of a method for kidney tissue segmentation based on CT images according to the present invention.
Fig. 2 shows the ResUNet network used for region-of-interest extraction.
FIG. 3 shows a 6-layer Dense connection module (Dense Block).
Fig. 4 shows the structure of the entire Attention module.
FIG. 5 is a diagram of a multi-scale fusion Attention-DenseUNet network architecture.
Detailed Description
As shown in fig. 1, the method for segmenting kidney tissue based on CT image includes the following steps:
(1) acquiring an image to be segmented;
(2) slicing an image to be segmented;
(3) preprocessing the sliced image;
(4) inputting the image into a pre-trained region-of-interest extraction model to obtain a region-of-interest;
(5) inputting the image in the region of interest into a pre-trained segmentation model;
(6) post-processing the segmentation result to remove erroneous fragments;
(7) performing three-dimensional visualization on the segmentation result;
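Steps (1) through (6) above can be sketched end to end in numpy. Every model below is a hypothetical stand-in (a center crop for the ROI extractor, a threshold for the segmentation model); the real pipeline would plug in the ResUNet ROI model and the multi-scale fusion Attention-DenseUNet described next:

```python
import numpy as np

def preprocess(s):                       # step (3): zero mean, unit variance
    return (s - s.mean()) / (s.std() + 1e-8)

def extract_roi(s):                      # step (4): center crop as a dummy ROI
    return s[8:24, 8:24]

def segment(roi):                        # step (5): threshold as a dummy model
    return (roi > 0).astype(np.uint8)

def postprocess(masks):                  # step (6): drop empty (erroneous) masks
    return [m for m in masks if m.any()]

volume = np.random.rand(32, 32, 32)                          # step (1)
slices = [volume[:, i, :] for i in range(volume.shape[1])]   # step (2), coronal
masks = postprocess([segment(extract_roi(preprocess(s))) for s in slices])
print(len(masks), masks[0].shape)
```

The point of the sketch is only the data flow: slice, normalize, crop to the ROI, segment inside the ROI, then filter the per-slice masks before visualization.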
as shown in fig. 2, in the step (5), the segmentation model is a multi-scale fusion Attention-DenseUNet network, and the network architecture is divided into two parts: the method comprises the steps of encoding a down-sampling path and decoding an up-sampling path, wherein the down-sampling path comprises 4 dense connection modules, and the size of each dense connection module is 6 layers, 12 layers, 16 layers and 24 layers from top to bottom; in a densely connected module, the input to each layer is related to not only the output of the i-1 layer, but also the outputs of all previous layers, as in equation (1):
X_l = H_l([X_0, X_1, …, X_{l-1}])    (1)
where [X_0, X_1, …, X_{l-1}] denotes the concatenation, along the channel dimension, of the outputs of layers 0 through l−1, and H_l denotes the nonlinear transformation; the network implements H_l as the combination BatchNorm + ReLU + Conv 1×1 + Conv 3×3.
The region-of-interest extraction model is a ResUNet network, which accurately captures the features of the kidney region in the CT image and outputs the eight vertex coordinates of a bounding box enclosing the entire kidney structure. The segmentation model is a multi-scale fusion Attention-DenseUNet; the extracted region of interest is fed into it for fine segmentation. The multi-scale feature fusion mechanism used in the segmentation model is an important means of improving network performance: low-level features contain more positional information, while high-level features suffer from information loss and a reduced perception of detail. Fusing low-level and high-level features therefore exploits multi-scale spatial information effectively and improves segmentation accuracy. The model performs up-sampling 4 times, outputting a feature map after each up-sampling; each feature map is convolved with a kernel of size 1 × 1 to obtain features with the same number of channels, and these are summed to obtain the final feature map, which is activated by a Sigmoid function to yield the final segmentation result. After segmentation, morphological post-processing is applied to remove erroneous fragments, and the result is then visualized in three dimensions, further improving the segmentation accuracy for the kidney, the renal artery, the renal vein and renal tumors.
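As an illustration of the eight-vertex bounding box that the ROI network is described as outputting, the following sketch computes those vertices from a binary kidney mask. This is a hypothetical helper for illustration; the patent's ResUNet predicts the coordinates directly from the CT image:

```python
import numpy as np

def bounding_box_vertices(mask):
    """Return the eight vertex coordinates of the axis-aligned 3D
    bounding box of a binary mask, as (z, y, x) tuples."""
    zs, ys, xs = np.nonzero(mask)
    lo = (int(zs.min()), int(ys.min()), int(xs.min()))
    hi = (int(zs.max()), int(ys.max()), int(xs.max()))
    return [(z, y, x) for z in (lo[0], hi[0])
                      for y in (lo[1], hi[1])
                      for x in (lo[2], hi[2])]

m = np.zeros((10, 10, 10), dtype=np.uint8)
m[2:5, 3:7, 1:4] = 1                      # a toy "kidney" region
verts = bounding_box_vertices(m)
print(len(verts), verts[0], verts[-1])    # 8 (2, 3, 1) (4, 6, 3)
```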
As shown in fig. 5, in the step (5) the segmentation model is the multi-scale fusion Attention-DenseUNet. Four Dense Blocks are used in the encoder part of the model, with 6, 12, 16 and 24 layers respectively from top to bottom. The Dense Blocks guarantee maximal information flow between layers and improve gradient flow, easing the search for an optimal solution in a very deep neural network. To reduce the amount of computation and enlarge the receptive field, a down-transition layer follows each Dense Block; each transition layer consists of a ReLU activation function, Batch Normalization (BN), a bottleneck layer (a 1 × 1 convolution) and a 2 × 2 average pooling layer. The decoding stage of the model applies multi-scale fusion: when the up-sampling part of the decoder recovers high resolution from low resolution, some detail information is inevitably lost, which affects the final segmentation result. A multi-scale fusion module is therefore added at the decoding end, which effectively exploits multi-scale spatial information to improve segmentation precision: the features of each decoder layer are up-sampled back to high resolution and fused. The extracted feature information is superposed in parallel so that multiple feature maps complement one another, making the segmentation more accurate. The model performs up-sampling 4 times, outputting a feature map after each up-sampling; each feature map is convolved with a kernel of size 1 × 1 to obtain features with the same number of channels, which are summed to obtain the final feature map; the final feature map is activated by a Sigmoid function to give the final segmentation result.
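The multi-scale fusion head described above (four decoder feature maps, 1 × 1 projections to a common channel count, summation at full resolution, Sigmoid) can be sketched in numpy. Nearest-neighbour up-sampling and random projection weights stand in for the learned layers; both are assumptions for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_multiscale(feature_maps, out_channels, projections):
    """Project each feature map to `out_channels` with a 1x1 conv
    (a per-pixel matrix multiply), up-sample everything to the finest
    resolution (nearest-neighbour stand-in), sum, apply Sigmoid."""
    target_h, target_w = feature_maps[0].shape[1:]
    total = np.zeros((out_channels, target_h, target_w))
    for fmap, w in zip(feature_maps, projections):
        proj = np.einsum('oc,chw->ohw', w, fmap)          # 1x1 convolution
        fh, fw = proj.shape[1:]
        total += proj.repeat(target_h // fh, axis=1).repeat(target_w // fw, axis=2)
    return sigmoid(total)

rng = np.random.default_rng(0)
# four decoder outputs at strides 1, 2, 4, 8 (channel counts are assumed)
fmaps = [rng.standard_normal((c, 32 // s, 32 // s))
         for c, s in [(8, 1), (16, 2), (32, 4), (64, 8)]]
ws = [rng.standard_normal((1, f.shape[0])) * 0.1 for f in fmaps]
out = fuse_multiscale(fmaps, 1, ws)
print(out.shape)                                          # (1, 32, 32)
```

Summing the projected maps (rather than concatenating them) keeps the output channel count fixed while still letting every scale contribute to each pixel of the final Sigmoid map.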
In the traditional UNet structure, the down-sampling encoder and up-sampling decoder are usually connected by directly splicing the front and back features through a Concatenate operation, so the down-sampled features are insufficiently exploited during splicing. The present model instead connects the up-sampled and down-sampled parts through an attention module, which further improves segmentation accuracy.
Preferably, in the step (2), the three-dimensional image to be segmented is decomposed into single two-dimensional slices in the coronal direction, which are saved as matrix files.
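A minimal sketch of this slicing step, assuming the volume array is ordered so that axis 1 runs along the coronal direction (an assumption; the actual axis depends on how the CT data were loaded):

```python
import numpy as np

def to_coronal_slices(volume):
    """Decompose a 3D volume into a list of 2D coronal slices,
    taking axis 1 as the coronal direction (assumed ordering)."""
    return [volume[:, i, :] for i in range(volume.shape[1])]

vol = np.zeros((64, 40, 64), dtype=np.int16)
slices = to_coronal_slices(vol)
print(len(slices), slices[0].shape)      # 40 slices of shape (64, 64)
# each slice could then be saved as a matrix file, e.g. with np.save(...)
```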
Preferably, in the step (3), the size of each slice is fixed to 512 × 512 using bilinear interpolation, the Hounsfield unit (HU) values of the slice are clipped to the range [−1024, 600] HU, and the data are then normalized to zero mean and unit variance.
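The HU windowing and normalization can be sketched as follows; the 512 × 512 bilinear resize is omitted here (in practice it could be done with an image library such as OpenCV, an assumed dependency):

```python
import numpy as np

def preprocess_slice(slice_hu):
    """Clip Hounsfield values to [-1024, 600], then normalize
    the slice to zero mean and unit variance."""
    clipped = np.clip(slice_hu, -1024, 600).astype(np.float32)
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)

s = np.array([[-2000, 0], [300, 1500]], dtype=np.float32)
p = preprocess_slice(s)
print(round(float(p.mean()), 6), round(float(p.std()), 4))  # ~0.0 and ~1.0
```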
Preferably, in the step (4), the model uses a cross-entropy loss function, with the labels of the tumor and kidney merged into a single class for the calculation.
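A sketch of this label handling: tumor and kidney labels are collapsed into one foreground class before computing the cross-entropy. The label ids used here (0 = background, 1 = kidney, 2 = tumor) are assumptions for illustration; the patent does not specify them:

```python
import numpy as np

def merge_labels(label_map):
    """Collapse kidney and tumor labels into a single foreground class."""
    return (label_map > 0).astype(np.int64)

def binary_cross_entropy(p, y, eps=1e-7):
    """Per-pixel cross-entropy against the merged binary labels."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

labels = np.array([[0, 1], [2, 0]])        # 0=background, 1=kidney, 2=tumor
merged = merge_labels(labels)
print(merged.tolist())                     # [[0, 1], [1, 0]]
loss = binary_cross_entropy(np.full((2, 2), 0.5), merged)
print(round(loss, 4))                      # log(2) ~ 0.6931
```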
Preferably, as shown in fig. 3, in the step (5), each dense connection module (Dense Block) is connected to the next through a Transition layer composed of a Conv (1 × 1) and a Maxpool.
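The pooling half of the Transition layer can be sketched as a 2 × 2 max pool; the preceding 1 × 1 convolution is just a channel-mixing matrix multiply and is left out of this sketch:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over a (C, H, W) feature map,
    as used in the Transition layer between Dense Blocks."""
    c, h, w = x.shape
    trimmed = x[:, :h - h % 2, :w - w % 2]
    return trimmed.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

x = np.arange(16, dtype=np.float32).reshape(1, 4, 4)
pooled = max_pool_2x2(x)
print(pooled.tolist())   # [[[5.0, 7.0], [13.0, 15.0]]]
```

Halving the spatial resolution after each Dense Block is what reduces the computation and grows the receptive field, as described above.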
Preferably, in the step (5), the up-sampling path includes spatial attention and deep supervision mechanisms. The spatial attention module increases the network's weighting of the target region, improving segmentation precision. Deep supervision converts each up-sampled feature map into three channels via a 1 × 1 convolution and computes the loss after reducing to the size of the target image, so that each up-sampling stage resembles the target as closely as possible. The structure of the entire Attention-DenseUNet is shown in fig. 5.
The spatial attention mechanism is implemented by a spatial attention module, as shown in fig. 4. The feature map g_i at the down-sampling level undergoes a 1 × 1 convolution, and the feature map from the layer above the up-sampling layer also undergoes a 1 × 1 convolution; the two feature maps are added, passed through a ReLU, and the final attention weights are then obtained through a 1 × 1 convolution and a Sigmoid function. In each up-sampling module, the feature map is additionally converted into a three-channel matrix by a 1 × 1 convolution for the loss calculation, so that each up-sampling stage resembles the target as closely as possible.
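The attention gate of fig. 4 can be sketched in numpy. The 1 × 1 convolutions are modeled as per-pixel matrix multiplies, and the random weights stand in for trained parameters (assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(g, x, wg, wx, psi):
    """Attention gate sketch: 1x1-convolve the gating signal g and the
    skip feature x, add, ReLU, then a 1x1 conv + Sigmoid produce a
    per-pixel attention weight in (0, 1) that is applied to x."""
    a = np.einsum('oc,chw->ohw', wg, g) + np.einsum('oc,chw->ohw', wx, x)
    a = np.maximum(a, 0.0)                              # ReLU
    alpha = sigmoid(np.einsum('oc,chw->ohw', psi, a))   # attention map
    return x * alpha

rng = np.random.default_rng(1)
g = rng.standard_normal((8, 16, 16))      # down-sampling level feature map g_i
x = rng.standard_normal((8, 16, 16))      # skip-connection feature map
wg = rng.standard_normal((4, 8)) * 0.1    # assumed 1x1-conv weights
wx = rng.standard_normal((4, 8)) * 0.1
psi = rng.standard_normal((1, 4)) * 0.1
y = spatial_attention(g, x, wg, wx, psi)
print(y.shape)                            # (8, 16, 16)
```

Because the attention map lies in (0, 1), the gate can only down-weight skip features, which is how the network emphasizes the target region without changing feature dimensions.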
Preferably, in the step (6), a morphological closing operation, i.e. dilation followed by erosion, is performed on the vessel segmentation result so as to reconnect the disconnected parts of the segmentation; the kidney and tumor segmentation results are processed with a 3D connected-component method to remove erroneous fragments.
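The closing operation on the vessel mask can be sketched with a 3 × 3 structuring element implemented directly in numpy (a 2D sketch of what would be done in 3D; in practice a library such as scipy.ndimage would be used):

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 structuring element via shifted maxima."""
    h, w = mask.shape
    p = np.pad(mask, 1)
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)],
                  axis=0)

def erode(mask):
    """Binary erosion with a 3x3 structuring element via shifted minima."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=1)
    return np.min([p[i:i + h, j:j + w] for i in range(3) for j in range(3)],
                  axis=0)

def close_vessels(mask):
    """Morphological closing (dilation then erosion), reconnecting
    small breaks in a vessel segmentation, as in step (6)."""
    return erode(dilate(mask))

m = np.zeros((7, 7), dtype=np.uint8)
m[3, 1:3] = 1
m[3, 4:6] = 1                            # a toy vessel with a 1-pixel gap
print(int(close_vessels(m)[3, 3]))       # the gap is filled -> 1
```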
Preferably, in the step (7), the segmentation result is modeled in three dimensions using the VTK open-source framework for direct observation by the physician.
It will be understood by those skilled in the art that all or part of the steps in the method of the above embodiments may be implemented by hardware driven by program instructions; the program may be stored in a computer-readable storage medium (e.g. ROM/RAM, magnetic disk, optical disk or memory card) and, when executed, performs the steps of the method of the above embodiments. Accordingly, the invention also includes a kidney tissue segmentation device based on CT images, expressed as functional modules corresponding to the steps of the method. The device comprises:
an acquisition module configured to acquire an image to be segmented;
a slicing module configured to slice an image to be segmented;
a pre-processing module configured to pre-process the sliced image;
a region-of-interest extraction module configured to input images into a pre-trained region-of-interest extraction model to obtain a region of interest;
a segmentation module configured to input images within the region of interest to a pre-trained segmentation model;
a post-processing module configured to process the segmentation result to remove some erroneous segments;
a display module configured to perform a three-dimensional visualization of the segmentation results;
in the segmentation module, the segmentation model is a multi-scale fusion Attention-DenseUNet network, and the network architecture is divided into two parts: an encoding down-sampling path and a decoding up-sampling path. The down-sampling path comprises 4 dense connection modules of, from top to bottom, 6, 12, 16 and 24 layers. Within a dense connection module, the input of each layer depends not only on the output of layer l−1 but also on the outputs of all preceding layers, as in equation (1):
X_l = H_l([X_0, X_1, …, X_{l-1}])    (1)
where [X_0, X_1, …, X_{l-1}] denotes the concatenation, along the channel dimension, of the outputs of layers 0 through l−1, and H_l denotes the nonlinear transformation, implemented as the combination BatchNorm + ReLU + Conv 1×1 + Conv 3×3;
a spatial attention module is added in the skip connections between the down-sampling and up-sampling layers, further improving segmentation precision;
a multi-scale fusion strategy is adopted in the decoding part: the model performs up-sampling 4 times, outputting a feature map after each up-sampling; each feature map is convolved with a kernel of size 1 × 1 to obtain features with the same number of channels, which are summed to obtain the final feature map; the final feature map is activated by a Sigmoid function to obtain the final segmentation result.
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention in any way; all simple modifications and equivalent variations made to the above embodiment in accordance with the technical spirit of the invention remain within the scope of protection of the technical solution of the invention.
Claims (10)
1. A kidney tissue segmentation method based on a CT image, characterized in that it comprises the following steps:
(1) acquiring an image to be segmented;
(2) slicing an image to be segmented;
(3) preprocessing the sliced image;
(4) inputting the image into a pre-trained region-of-interest extraction model to obtain a region-of-interest;
(5) inputting the image in the region of interest into a pre-trained segmentation model;
(6) post-processing the segmentation result to remove erroneous fragments;
(7) performing three-dimensional visualization on the segmentation result;
in the step (5), the segmentation model is a multi-scale fusion Attention-DenseUNet network, the architecture of which is divided into two parts, an encoding down-sampling path and a decoding up-sampling path; the down-sampling path comprises 4 dense connection modules of, from top to bottom, 6, 12, 16 and 24 layers; within a dense connection module, the input of each layer depends not only on the output of layer l−1 but also on the outputs of all preceding layers, as in equation (1):
X_l = H_l([X_0, X_1, …, X_{l-1}])    (1)
where [X_0, X_1, …, X_{l-1}] denotes the concatenation, along the channel dimension, of the outputs of layers 0 through l−1, and H_l denotes the nonlinear transformation, implemented as the combination BatchNorm + ReLU + Conv 1×1 + Conv 3×3.
2. The method according to claim 1, characterized in that: in the step (2), the three-dimensional image to be segmented is decomposed into single two-dimensional slices in the coronal direction, which are saved as matrix files.
3. The method according to claim 2, characterized in that: in the step (3), the size of each slice is fixed to 512 × 512 using bilinear interpolation, the Hounsfield unit values of the slice data are clipped to the range [−1024, 600] HU, and the gray values of the slice are then normalized to zero mean and unit variance.
4. The method according to claim 3, characterized in that: in the step (4), the model uses a cross-entropy loss function, the labels of the tumor and kidney being merged into a single class for the calculation.
5. The method according to claim 4, characterized in that: in the step (5), the dense connection modules are connected to one another through Transition layers, each Transition layer being composed of a Conv (1 × 1) and a Maxpool.
6. The method according to claim 5, characterized in that: in the step (5), a multi-scale fusion strategy is adopted in the decoding part; the model performs up-sampling 4 times, outputting a feature map after each up-sampling; each feature map is convolved with a kernel of size 1 × 1 to obtain features with the same number of channels, which are summed to obtain the final feature map; the final feature map is activated by a Sigmoid function to obtain the final segmentation result.
7. The method according to claim 6, characterized in that: in the step (5), the up-sampling path includes spatial attention and deep supervision mechanisms; the spatial attention module increases the network's weighting of the target region, improving segmentation precision; deep supervision converts each up-sampled feature map into three channels via a 1 × 1 convolution and computes the loss after reducing to the size of the target image;
downsampled hierarchical layer feature map g i Performing 1 × 1 convolution operation, and performing 1 × 1 convolution on the feature map of the layer above the upper sampling layer; adding the two feature maps, then performing ReLU, and then obtaining final attention weight through 1 × 1 convolution and Sigmoid function;
and the feature map in each up-sampling module is converted into a three-channel matrix by a 1 × 1 convolution for the loss calculation, so that each up-sampling stage resembles the target as closely as possible.
8. The method according to claim 7, characterized in that: in the step (6), a morphological closing operation, i.e. dilation followed by erosion, is performed on the vessel segmentation result so as to reconnect the disconnected parts of the segmentation; the kidney and tumor segmentation results are processed with a 3D connected-component method to remove erroneous fragments.
9. The method according to claim 8, characterized in that: in the step (7), the segmentation result is modeled in three dimensions using the VTK open-source framework for direct observation by the physician.
10. A kidney tissue segmentation device based on CT images, characterized in that it comprises:
an acquisition module configured to acquire an image to be segmented;
a slicing module configured to slice an image to be segmented;
a pre-processing module configured to pre-process the sliced image;
the region of interest extraction module is configured to input images into a pre-trained region of interest extraction model to obtain a region of interest;
a segmentation module configured to input images within the region of interest to a pre-trained segmentation model;
a post-processing module configured to process the segmentation result to remove some erroneous segments;
a display module configured to perform a three-dimensional visualization of the segmentation results;
in the segmentation module, the segmentation model is a multi-scale fusion Attention-DenseUnet network whose architecture is divided into two parts: an encoding (down-sampling) path and a decoding (up-sampling) path; the down-sampling path contains 4 densely connected modules, with 6, 12, 16, and 24 layers from top to bottom; within a densely connected module, the input of each layer depends not only on the output of layer l−1 but on the outputs of all preceding layers, as in equation (1):
X_l = H_l([X_0, X_1, …, X_{l−1}])  (1)
where [·] denotes concatenating the outputs of layers X_0 through X_{l−1} along the channel dimension, and H_l denotes a nonlinear transformation; the network uses the combination BatchNorm + ReLU + Conv1 × 1 + Conv3 × 3 for the nonlinear transformation;
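Equation (1) can be made concrete with a toy dense block. In this NumPy sketch, H_l is reduced to a channel-mixing 1 × 1 transform plus ReLU (standing in for the BatchNorm + ReLU + Conv1 × 1 + Conv3 × 3 stack), and the growth rate and channel counts are illustrative assumptions:

```python
import numpy as np

def dense_layer(x, w):
    """Toy H_l from equation (1): a channel-mixing 1x1 transform followed by
    ReLU, standing in for BatchNorm + ReLU + Conv1x1 + Conv3x3."""
    return np.maximum(np.einsum('oc,chw->ohw', w, x), 0.0)

def dense_block(x, weights):
    """Each layer receives the channel-wise concatenation of the block input
    and all previous layer outputs: X_l = H_l([X_0, ..., X_{l-1}])."""
    features = [x]
    for w in weights:
        concat = np.concatenate(features, axis=0)  # [X_0, ..., X_{l-1}] along channels
        features.append(dense_layer(concat, w))
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(2)
growth, c0 = 4, 8                      # growth rate and input channels (toy values)
x0 = rng.standard_normal((c0, 16, 16))
# layer l sees c0 + l * growth input channels and emits `growth` new channels
ws = [rng.standard_normal((growth, c0 + l * growth)) * 0.1 for l in range(6)]
out = dense_block(x0, ws)
print(out.shape)  # (32, 16, 16): 8 input channels + 6 layers x 4 new channels
```

The channel count grows linearly with depth (here 8 → 32 over a 6-layer block, matching the shallowest of the patent's 6/12/16/24-layer modules), which is what gives every layer direct access to all earlier feature maps.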
a spatial attention module is added in the skip connection between the down-sampling layers and the up-sampling layers to further improve segmentation accuracy;
a multi-scale fusion strategy is adopted in the decoding part: the model performs up-sampling 4 times and outputs a feature map after each up-sampling; each of these feature maps is passed through a convolution with kernel size 1 × 1 to obtain features with the same number of channels, and the features are added to obtain a final feature map; the final feature map is activated by a Sigmoid function to obtain the final segmentation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210330739.3A CN114862869A (en) | 2022-03-30 | 2022-03-30 | Kidney tissue segmentation method and device based on CT (computed tomography) image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114862869A true CN114862869A (en) | 2022-08-05 |
Family
ID=82629485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210330739.3A Pending CN114862869A (en) | 2022-03-30 | 2022-03-30 | Kidney tissue segmentation method and device based on CT (computed tomography) image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114862869A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389587A (en) * | 2018-09-26 | 2019-02-26 | 上海联影智能医疗科技有限公司 | A kind of medical image analysis system, device and storage medium |
CN111325729A (en) * | 2020-02-19 | 2020-06-23 | 青岛海信医疗设备股份有限公司 | Biological tissue segmentation method based on biomedical images and communication terminal |
US20210241027A1 (en) * | 2018-11-30 | 2021-08-05 | Tencent Technology (Shenzhen) Company Limited | Image segmentation method and apparatus, diagnosis system, storage medium, and computer device |
WO2022036972A1 (en) * | 2020-08-17 | 2022-02-24 | 上海商汤智能科技有限公司 | Image segmentation method and apparatus, and electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110934606B (en) | Cerebral apoplexy early-stage flat-scan CT image evaluation system and method and readable storage medium | |
CN113506334B (en) | Multi-mode medical image fusion method and system based on deep learning | |
CN111354002A (en) | Kidney and kidney tumor segmentation method based on deep neural network | |
CN112991365B (en) | Coronary artery segmentation method, system and storage medium | |
CN111815766B (en) | Processing method and system for reconstructing three-dimensional model of blood vessel based on 2D-DSA image | |
CN113344951A (en) | Liver segment segmentation method based on boundary perception and dual attention guidance | |
CN111696126B (en) | Multi-view-angle-based multi-task liver tumor image segmentation method | |
WO2022105623A1 (en) | Intracranial vascular focus recognition method based on transfer learning | |
CN106981090B (en) | Three-dimensional reconstruction method for in-tube stepping unidirectional beam scanning tomographic image | |
CN114092439A (en) | Multi-organ instance segmentation method and system | |
Raza et al. | Brain image representation and rendering: A survey | |
EP4118617A1 (en) | Automated detection of tumors based on image processing | |
CN114037714A (en) | 3D MR and TRUS image segmentation method for prostate system puncture | |
CN111275712A (en) | Residual semantic network training method oriented to large-scale image data | |
CN116503607B (en) | CT image segmentation method and system based on deep learning | |
JP7423338B2 (en) | Image processing device and image processing method | |
CN114529562A (en) | Medical image segmentation method based on auxiliary learning task and re-segmentation constraint | |
CN110738633A (en) | organism tissue three-dimensional image processing method and related equipment | |
CN115205298B (en) | Method and device for segmenting blood vessels of liver region | |
CN117152173A (en) | Coronary artery segmentation method and system based on DUNetR model | |
CN112070778A (en) | Multi-parameter extraction method based on intravascular OCT and ultrasound image fusion | |
CN114862869A (en) | Kidney tissue segmentation method and device based on CT (computed tomography) image | |
CN113947593B (en) | Segmentation method and device for vulnerable plaque in carotid ultrasound image | |
CN115760754A (en) | Multi-modality MRI image auditory nerve sheath tumor segmentation method | |
CN115294023A (en) | Liver tumor automatic segmentation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||