CN112634285A - Method for automatically segmenting abdominal CT visceral fat area - Google Patents

Method for automatically segmenting abdominal CT visceral fat area

Info

Publication number
CN112634285A
CN112634285A (application CN202011542684.XA)
Authority
CN
China
Prior art keywords
image
visceral fat
attention
net network
abdominal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011542684.XA
Other languages
Chinese (zh)
Other versions
CN112634285B (en)
Inventor
彭博
左昊
贾维
张傲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202011542684.XA priority Critical patent/CN112634285B/en
Publication of CN112634285A publication Critical patent/CN112634285A/en
Application granted granted Critical
Publication of CN112634285B publication Critical patent/CN112634285B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method for automatically segmenting an abdominal CT visceral fat area, comprising the following steps: selecting clinical abdominal CT images as a data set; preprocessing the images in the data set; extracting the final visceral fat region images; constructing an Attention U-net network; training the Attention U-net network; and preprocessing an abdominal CT image to be predicted and inputting the preprocessed image into the trained Attention U-net network, whose output is the segmented image. The invention accelerates the segmentation of the abdominal CT visceral fat area, simplifies the segmentation steps, supports batch segmentation, greatly improves segmentation efficiency, simplifies the preliminary work of abdominal visceral fat quantity calculation, and provides a better basis for subsequent visceral fat quantity calculation.

Description

Method for automatically segmenting abdominal CT visceral fat area
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for automatically segmenting an abdominal CT visceral fat area.
Background
An increase in overall body fat, and in particular an increase in abdominal fat within the abdominal cavity, is associated with diseases such as diabetes, hyperlipidemia, hypertension, insulin resistance and hyperuricemia. Quantitative analysis of abdominal fat content therefore has important clinical value for the prevention and treatment of related cardiovascular diseases. At present, abdominal fat is most commonly measured by CT, which can accurately quantify the adipose tissue (AT) of the human body, in particular the subcutaneous fat area (SA) of the abdominal wall and the visceral fat area (VA) within the abdominal cavity. CT measurement of AT is now applied in many fields, including clinical nutrition, geriatric medicine, epidemiology, genetics, and especially endocrine metabolism and the cardiovascular system.
When measuring SA and VA in CT-based AT measurement, a region of interest (ROI) must be delineated; in many previous studies this segmentation was performed manually or with conventional segmentation methods. For example, the method used on the Siemens CT workstation (VE40D) is a watershed-based segmentation method. When the watershed method is used to segment visceral fat and the abdominal wall muscle layer is thin or discontinuous, the visceral fat and the abdominal wall fat layer may be segmented together. Manual adjustment is then needed to delineate the visceral fat area, so the degree of automation is insufficient, which results in low efficiency and makes the method unsuitable for large-scale screening.
Deep learning is currently widely applied in the field of medical image segmentation. Long et al. first proposed the fully convolutional network (FCN), which achieves end-to-end image segmentation by extending image-level classification to pixel-level classification and replacing the fully connected layers of a classification network with convolutional layers. However, FCN results are not fine enough, do not take pixel-to-pixel correlations into account, and are not sensitive to image details. Ronneberger et al. proposed the U-net network, which fuses high-level and shallow-level image information through concatenation operations between the encoder and decoder, thereby avoiding the loss of high-level semantic information and achieving good results on several medical image segmentation tasks. The U-net network also uses the training data set effectively and reduces the demand for large numbers of samples. Milletari et al. proposed the V-net, a voxel-based fully convolutional network for segmenting three-dimensional medical images, which uses residual connections in the encoder to prevent vanishing or exploding gradients as the network deepens. Zhao et al. proposed the pyramid scene parsing network (PSPNet), which builds on the FCN, obtains more context information through feature fusion, and improves the capture of global information by aggregating context from different regions.
Disclosure of Invention
The invention mainly overcomes the defects in the prior art and provides a method for automatically segmenting an abdominal CT visceral fat area.
To solve the above technical problem, the invention provides the following technical scheme: a method for automatically segmenting abdominal CT visceral fat regions, comprising:
s100, selecting clinical abdomen CT images of different age groups, different abdomen positions and different slice thicknesses as a data set;
s200, preprocessing an image in a data set to obtain a preprocessed data set image;
step S300, manually delineating a mask image of the visceral fat area from the preprocessed data set image, and performing a pixel-wise AND operation between the delineated mask image and the preprocessed data set image to extract the final visceral fat area image;
S400, constructing an Attention U-net network, and inputting the preprocessed data set images and the final visceral fat area images into the constructed Attention U-net network as training and prediction data;
S500, training the Attention U-net network according to the value of the loss function and the accuracy results;
step S600, preprocessing the abdominal CT image to be predicted and inputting the preprocessed image into the trained Attention U-net network, wherein the image output by the Attention U-net network is the segmented image.
The method has the further technical scheme that in step S100, mid-abdomen images of young and middle-aged adults and elderly adults are selected with a slice thickness of 1 mm; and images of the upper abdomen and the lower abdomen of young and middle-aged adults and elderly adults are selected with a slice thickness of 5 mm.
The further technical scheme is that the specific process of the preprocessing in step S200 is as follows: extracting pixel information from the DICOM file to obtain an original CT image, and performing image binarization on the original CT image to distinguish the visceral fat region from the background region.
The further technical scheme is that in the image binarization, pixels with gray values between 874 and 974 are set to 255, and pixels with gray values greater than 974 or less than 874 are set to 0.
The further technical scheme is that the Attention U-net network in step S500 comprises two parts, an encoding part and a decoding part;
the encoding part comprises 5 layers, the first 4 layers each comprise two convolutional layers and a max pooling layer, the 5th layer comprises two convolutional layers, the convolutional layers all use ReLU as the activation function, the convolution kernel size is 3 x 3, the number of output channels of the first layer is 32, and the number of output channels of each subsequent layer is 2 times that of the previous layer;
the decoding part comprises 5 layers, each layer comprises a bilinear interpolation up-sampling structure, an attention structure, three convolutional layers and a concatenation structure, and the convolutional layers all use ReLU as the activation function; the convolution kernel size is 3 x 3.
The further technical solution is that the process of step S600 is: extracting the pixel information from the DICOM file, performing image binarization according to the preprocessing method in step S200, and inputting the binarized image into the trained Attention U-net network, wherein the image output by the Attention U-net network is the segmented image.
The invention has the following beneficial effects: the method accelerates the segmentation of the abdominal CT visceral fat area, simplifies the segmentation steps, and supports batch segmentation without manual adjustment, thereby greatly improving segmentation efficiency, simplifying the preliminary work of abdominal visceral fat quantity calculation, and providing a better basis for subsequent visceral fat quantity calculation.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a diagram of an Attention U-net network model;
FIG. 3 is a diagram of the attention mechanism module;
FIG. 4 is a graph comparing results of visceral fat segmentation;
fig. 5 is a process diagram of CT image processing.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a method for automatically segmenting abdominal CT visceral fat region according to the present invention comprises:
s100, selecting midriff images of young and strong people (age is 30 +/-10, average value is +/-10) and old people (60 +/-10), wherein the slice width is 1 mm; selecting images of the upper abdomen and the lower abdomen of young and middle-aged people and old people, and taking the section width of 5mm as a data set;
s200, preprocessing an image in a data set to obtain a preprocessed data set image;
the specific process of the pretreatment comprises the following steps: firstly, extracting pixel information in a DICOM file to obtain an original CT image, and performing image binarization on the original CT image to distinguish a visceral fat area and a background area as shown in FIG. 5a, wherein in order to ensure that a set threshold value can distinguish the visceral fat area and a tissue with higher density through a pixel value; the threshold value set by the invention is as follows: the gray scale value between 874-974 and 0 is set to be 255, the gray scale value of more than 974 or less than 874 is set to be 0, the image is divided into two parts by the threshold value, as shown in FIG. 5 b;
step S300, a mask image of the visceral fat region is manually delineated from the preprocessed data set image, as shown in FIG. 5c, and a pixel-wise AND operation is then performed between the delineated mask image and the preprocessed data set image to extract the final visceral fat region image, as shown in FIG. 5d;
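The pixel-wise AND operation of step S300 can be sketched as follows, assuming both the binarized image and the manually delineated mask are arrays of the same shape with foreground encoded as 255; extract_visceral_fat is a hypothetical helper name:

    import numpy as np

    def extract_visceral_fat(binary_image, mask_image):
        """Keep only binarized fat pixels that lie inside the manually drawn visceral mask."""
        keep = (binary_image == 255) & (mask_image == 255)   # logical AND of the two images
        return np.where(keep, 255, 0).astype(np.uint8)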
s400, constructing an Attention U-net network, and inputting the preprocessed data set image and the final image of the visceral fat area into the constructed Attention U-net network as training and predicting data;
the AttentionU-net network includes an encoding portion and a decoding portion;
the encoding part comprises 5 layers, the first 4 layers each comprise two convolutional layers and a max pooling layer, the 5th layer comprises two convolutional layers, each convolutional layer uses ReLU as the activation function, the convolution kernel size is 3 x 3, the number of output channels of the first layer is 32, and the number of output channels of each subsequent layer is 2 times that of the previous layer;
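A minimal PyTorch sketch of the encoding part under the stated configuration (two 3 x 3 convolutions with ReLU per layer, max pooling in the first four layers, channels 32, 64, 128, 256, 512); the use of 'same' padding is an assumption not stated in the text:

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions, each followed by ReLU, as in every encoder layer
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    class Encoder(nn.Module):
        def __init__(self, in_ch=1):
            super().__init__()
            channels = [32, 64, 128, 256, 512]    # output channels double at each layer
            self.blocks = nn.ModuleList()
            prev = in_ch
            for c in channels:
                self.blocks.append(conv_block(prev, c))
                prev = c
            self.pool = nn.MaxPool2d(2)

        def forward(self, x):
            skips = []                            # feature maps passed on to the decoder
            for i, block in enumerate(self.blocks):
                x = block(x)
                if i < len(self.blocks) - 1:      # the fifth layer has no pooling
                    skips.append(x)
                    x = self.pool(x)
            return x, skips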
the decoding part comprises 5 layers, each layer comprises a bilinear interpolation up-sampling structure, an attention structure, three convolutional layers and a concatenation structure, and each convolutional layer uses ReLU as the activation function. The convolution kernel size is 3 × 3, and the attention structure (AG) is shown in FIG. 3: the output of a down-sampling layer and the output of an up-sampling layer are taken as inputs; each input passes through a convolutional layer (1 × 1 convolution kernel) and a batch normalization layer; the two results are then added and passed through a ReLU activation function, another convolutional layer (1 × 1 convolution kernel), batch normalization and a sigmoid activation function; finally, the output of the sigmoid activation function is multiplied with the down-sampling layer input;
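The dataflow of the attention structure in FIG. 3 can be sketched as follows; the 1 x 1 convolutions follow the text, while the intermediate channel count and the assumption that the up-sampled gating signal already matches the skip features in spatial size are implementation choices:

    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        def __init__(self, skip_ch, gate_ch, inter_ch):
            super().__init__()
            # one 1x1 convolution + batch normalization per input branch
            self.w_skip = nn.Sequential(nn.Conv2d(skip_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
            self.w_gate = nn.Sequential(nn.Conv2d(gate_ch, inter_ch, 1), nn.BatchNorm2d(inter_ch))
            # 1x1 convolution + batch normalization + sigmoid producing the attention map
            self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
            self.relu = nn.ReLU(inplace=True)

        def forward(self, skip, gate):
            # skip: down-sampling layer output; gate: up-sampling layer output (same spatial size)
            attn = self.relu(self.w_skip(skip) + self.w_gate(gate))   # add, then ReLU
            attn = self.psi(attn)                                     # sigmoid attention coefficients
            return skip * attn                                        # reweight the skip features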
s500, training an Attention U-net network according to the value of the loss function and the result of the precision;
in the training stage of the whole network, in order to prevent overfitting and improve the generalization ability of the Attention U-Net model, a Dropout layer is added after each of the two convolutions before and after the fourth down-sampling. The Dropout layer randomly removes some neurons during the training of each batch, and the probability of removal can be set for each Dropout layer. When the first batch is trained, a portion of the neurons is removed according to the preset probability, training then starts, and only the neurons that were not removed, together with their weight parameters, are updated and kept. After all parameters have been updated, a portion of the neurons is removed again according to the same probability and training continues: the parameters of neurons that take part in the new round of training keep being updated, while neurons removed in that round retain the weights from the previous update and are not modified; no matter how many times Dropout is applied, parameters are never deleted. Dropout is only applied in the training stage to prevent overfitting and improve the generalization ability of the model; no Dropout layer is used in the testing stage. Cross-validation showed that the best result is obtained with a Dropout rate of 0.5, which is also the rate at which Dropout generates the largest number of random network structures;
in the whole network training process, the Attention U-net network is trained for 120 iterations with a batch size of 2 and a learning rate of 1.0e-5; the U-net network is trained for 120 iterations with a batch size of 4 and a learning rate of 1.0e-5; Adam is used as the optimizer for model training, and the input data tensor is 2018 × 1 × 256;
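A sketch of this training configuration is given below; the loss function is not named in the text, so binary cross-entropy is assumed, the 120 training iterations are interpreted as epochs, and the model argument is assumed to be the Attention U-net assembled from the encoder and attention-gate sketches above plus a matching decoder:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader

    def train_attention_unet(model, train_set, epochs=120, batch_size=2, lr=1.0e-5, device="cuda"):
        loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
        model = model.to(device)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # Adam, as stated above
        criterion = nn.BCEWithLogitsLoss()                        # assumed loss; not stated in the text
        model.train()                                             # Dropout (rate 0.5) is active only here
        for _ in range(epochs):
            for images, masks in loader:
                images, masks = images.to(device), masks.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), masks)
                loss.backward()
                optimizer.step()
        return model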
and S600, extracting the pixel information from the DICOM file of the abdominal CT image to be predicted, performing image binarization according to the preprocessing method in S200, and inputting the binarized image into the trained Attention U-net network, wherein the image output by the Attention U-net network is the segmented image.
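A sketch of this prediction step, reusing binarize_ct from the preprocessing sketch above; thresholding the network output at 0.5 and scaling the binarized input to [0, 1] are assumptions not stated in the text:

    import numpy as np
    import torch

    def predict_visceral_fat(model, dicom_path, device="cuda"):
        binary = binarize_ct(dicom_path)                          # step S200 preprocessing
        x = torch.from_numpy(binary / 255.0).float()[None, None]  # shape 1 x 1 x H x W
        model.eval()                                              # no Dropout at test time
        with torch.no_grad():
            prob = torch.sigmoid(model(x.to(device)))             # per-pixel fat probability
        return (prob.squeeze().cpu().numpy() > 0.5).astype(np.uint8) * 255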
Examples
FIG. 4 compares manual segmentation by the physician, the U-net network, and the Attention U-net network on the test set:
The segmentation accuracy (SA), under-segmentation rate (UR), over-segmentation rate (OR), Precision and Recall were calculated between the segmentation results and the gold-standard images, and the results are shown in Table 1:
TABLE 1 comparison of visceral fat segmentation results
As can be seen from the calculation results in Table 1, both deep learning networks achieve relatively high accuracy for CT images of different populations and different abdominal positions, with low over-segmentation and under-segmentation rates. Because the attention gate structure is added to the Attention U-Net network, the segmentation effect of the model is effectively enhanced, and its accuracy is higher than that of the U-Net network. From the segmentation results in FIG. 4, the Attention U-Net also segments less redundant area than the U-Net network; the OR of the Attention U-Net model is 1.87 percent lower than that of the U-Net model. Both networks segment the detailed structure of the visceral fat area well, as can be seen from the low under-segmentation rates. The purpose of the segmentation is to calculate the visceral fat area, which is computed from the number of pixel points in the segmented image, so the less redundant area is segmented, the more accurate the result. Although the present invention segments some redundant area, the deviation from manual segmentation remains within an acceptable error range. The method of the present invention can therefore meet the requirement of automatic segmentation of visceral fat regions.
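The exact formulas for these metrics are not given in the text; the sketch below assumes common pixel-wise definitions, with the over- and under-segmentation rates computed relative to the gold-standard area:

    import numpy as np

    def segmentation_metrics(pred, gold, eps=1e-8):
        pred, gold = pred > 0, gold > 0
        tp = np.logical_and(pred, gold).sum()      # correctly segmented fat pixels
        fp = np.logical_and(pred, ~gold).sum()     # over-segmented pixels
        fn = np.logical_and(~pred, gold).sum()     # under-segmented pixels
        return {
            "SA":        tp / (gold.sum() + eps),  # segmentation accuracy (assumed definition)
            "UR":        fn / (gold.sum() + eps),  # under-segmentation rate (assumed definition)
            "OR":        fp / (gold.sum() + eps),  # over-segmentation rate (assumed definition)
            "Precision": tp / (tp + fp + eps),
            "Recall":    tp / (tp + fn + eps),
        }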
Although the present invention has been described with reference to the above embodiments, it should be understood that the present invention is not limited to the above embodiments, and those skilled in the art can make various changes and modifications without departing from the scope of the present invention.

Claims (6)

1. A method for automatically segmenting abdominal CT visceral fat regions, comprising:
s100, selecting clinical abdomen CT images of different age groups, different abdomen positions and different slice thicknesses as a data set;
s200, preprocessing an image in a data set to obtain a preprocessed data set image;
step S300, manually delineating a mask image of the visceral fat area from the preprocessed data set image, and performing a pixel-wise AND operation between the delineated mask image and the preprocessed data set image to extract the final visceral fat area image;
S400, constructing an Attention U-net network, and inputting the preprocessed data set images and the final visceral fat area images into the constructed Attention U-net network as training and prediction data;
S500, training the Attention U-net network according to the value of the loss function and the accuracy results;
step S600, preprocessing the abdominal CT image to be predicted and inputting the preprocessed image into the trained Attention U-net network, wherein the image output by the Attention U-net network is the segmented image.
2. The method for automatically segmenting the abdominal CT visceral fat area of claim 1, wherein in step S100, mid-abdomen images of young and middle-aged adults and elderly adults are selected with a slice thickness of 1 mm; and images of the upper abdomen and the lower abdomen of young and middle-aged adults and elderly adults are selected with a slice thickness of 5 mm.
3. The method for automatically segmenting the abdominal CT visceral fat area according to claim 1, wherein the preprocessing in step S200 comprises: extracting pixel information from the DICOM file to obtain an original CT image, and performing image binarization on the original CT image to distinguish the visceral fat region from the background region.
4. The method as claimed in claim 3, wherein in the image binarization, pixels with gray values between 874 and 974 are set to 255, and pixels with gray values greater than 974 or less than 874 are set to 0.
5. The method for automatically segmenting the abdominal CT visceral fat area according to claim 1, wherein the Attention U-net network in step S500 comprises two parts, an encoding part and a decoding part;
the encoding part comprises 5 layers, the first 4 layers each comprise two convolutional layers and a max pooling layer, the 5th layer comprises two convolutional layers, the convolutional layers all use ReLU as the activation function, the convolution kernel size is 3 x 3, the number of output channels of the first layer is 32, and the number of output channels of each subsequent layer is 2 times that of the previous layer;
the decoding part comprises 5 layers, each layer comprises a bilinear interpolation up-sampling structure, an attention structure, three convolutional layers and a concatenation structure, and the convolutional layers all use ReLU as the activation function; the convolution kernel size is 3 x 3.
6. The method for automatically segmenting the abdominal CT visceral fat area according to claim 4, wherein step S600 comprises: extracting the pixel information from the DICOM file, performing image binarization according to the preprocessing method in step S200, and inputting the binarized image into the trained Attention U-net network, wherein the image output by the Attention U-net network is the segmented image.
CN202011542684.XA 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area Active CN112634285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011542684.XA CN112634285B (en) 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011542684.XA CN112634285B (en) 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area

Publications (2)

Publication Number Publication Date
CN112634285A true CN112634285A (en) 2021-04-09
CN112634285B CN112634285B (en) 2022-11-22

Family

ID=75321969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011542684.XA Active CN112634285B (en) 2020-12-23 2020-12-23 Method for automatically segmenting abdominal CT visceral fat area

Country Status (1)

Country Link
CN (1) CN112634285B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001054066A1 (en) * 2000-01-18 2001-07-26 The University Of Chicago Automated method and system for the segmentation of lung regions in computed tomography scans
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN111784713A (en) * 2020-07-26 2020-10-16 河南工业大学 Attention mechanism-introduced U-shaped heart segmentation method
CN114092439A (en) * 2021-11-18 2022-02-25 深圳大学 Multi-organ instance segmentation method and system
CN114219943A (en) * 2021-11-24 2022-03-22 华南理工大学 CT image organ-at-risk segmentation system based on deep learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LEE森: "AI Study Notes (14): Image Segmentation with CNNs", 《HTTPS://BLOG.CSDN.NET/QQ_35813161/ARTICLE/DETAILS/111145981》 *
OZAN OKTAY et al.: "Attention U-Net: Learning Where to Look for the Pancreas", 《COMPUTER VISION AND PATTERN RECOGNITION (CS.CV)》 *
XIAOYUN YANG et al.: "Multilabel Region Classification and Semantic Linking for Colon Segmentation in CT Colonography", 《IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING》 *
咫尺小厘米: "[Paper Notes] Attention U-Net", 《HTTPS://ZHUANLAN.ZHIHU.COM/P/114471013》 *
孟祥海: "Multi-modal Image Segmentation of the Brain and Abdomen Based on an Improved Unet", 《China Master's Theses Full-text Database, Medicine and Health Sciences》 *
李庆勃 et al.: "Abdominal Multi-Organ Image Segmentation Based on V-Net", 《Digital Technology and Application》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516624A (en) * 2021-04-28 2021-10-19 武汉联影智融医疗科技有限公司 Determination of puncture forbidden zone, path planning method, surgical system and computer equipment
CN114271796A (en) * 2022-01-25 2022-04-05 泰安市康宇医疗器械有限公司 Method and device for measuring human body components by using body state density method

Also Published As

Publication number Publication date
CN112634285B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN108898175B (en) Computer-aided model construction method based on deep learning gastric cancer pathological section
CN111627019B (en) Liver tumor segmentation method and system based on convolutional neural network
CN112489061B (en) Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism
CN111369565B (en) Digital pathological image segmentation and classification method based on graph convolution network
CN111145181B (en) Skeleton CT image three-dimensional segmentation method based on multi-view separation convolutional neural network
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN111563902A (en) Lung lobe segmentation method and system based on three-dimensional convolutional neural network
CN112634285B (en) Method for automatically segmenting abdominal CT visceral fat area
CN113034505B (en) Glandular cell image segmentation method and glandular cell image segmentation device based on edge perception network
CN110930378B (en) Emphysema image processing method and system based on low data demand
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN113205537A (en) Blood vessel image segmentation method, device, equipment and medium based on deep learning
Pérez-Benito et al. A deep learning system to obtain the optimal parameters for a threshold-based breast and dense tissue segmentation
CN115908241A (en) Retinal vessel segmentation method based on fusion of UNet and Transformer
CN114419000A (en) Femoral head necrosis index prediction system based on multi-scale geometric embedded convolutional neural network
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
CN115115570A (en) Medical image analysis method and apparatus, computer device, and storage medium
Li et al. Automated classification of solitary pulmonary nodules using convolutional neural network based on transfer learning strategy
CN116486156A (en) Full-view digital slice image classification method integrating multi-scale feature context
CN116433654A (en) Improved U-Net network spine integral segmentation method
CN113763343B (en) Deep learning-based Alzheimer's disease detection method and computer-readable medium
CN115019955A (en) Method and system for constructing traditional Chinese medicine breast cancer syndrome prediction model based on ultrasonic imaging omics characteristics
CN112967269A (en) Pulmonary nodule identification method based on CT image
CN114359308A (en) Aortic dissection method based on edge response and nonlinear loss
CN111932486A (en) Brain glioma segmentation method based on 3D convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant