CN109886922B - Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image - Google Patents

Publication number: CN109886922B (application CN201910042749.5A; authority: CN, China; legal status: Active)
Published as CN109886922A (Chinese); granted as CN109886922B
Inventors: 纪建松, 戴亚康, 周志勇, 徐民, 陈敏江, 周庆
Assignees: Suzhou Institute of Biomedical Engineering and Technology of CAS; Lishui Central Hospital
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS and Lishui Central Hospital
Classification: Image Analysis (AREA)
Abstract

The invention discloses an automatic grading method for hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR images, comprising: 1) collecting data; 2) preprocessing all enhanced MR three-dimensional images of hepatocellular carcinoma; 3) enhancing the training data; 4) training a grading prediction model for hepatocellular carcinoma, an SE-DenseNet network, on the enhanced training data; 5) performing grading prediction on the test data with the trained model and evaluating the classification performance of the grading prediction model for hepatocellular carcinoma. Through the automatic pathological grading method for multi-modal enhanced MR images of hepatocellular carcinoma, composed of image preprocessing, image enhancement, SE-DenseNet network training and SE-DenseNet network testing, the invention realizes automatic grading of hepatocellular carcinoma and overcomes the labor cost, time consumption and subjective variability of manual grading of hepatocellular carcinoma.

Description

Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image
Technical Field
The invention relates to the field of medical image processing, in particular to an automatic hepatocellular carcinoma grading method based on a SE-DenseNet deep learning framework and a multi-mode enhanced MR image.
Background
Worldwide, lung, liver, stomach and intestinal tumors currently account for nearly half (46%) of all cancer deaths. Hepatocellular carcinoma (HCC), fibrolamellar carcinoma, cholangiocarcinoma, angiosarcoma and hepatoblastoma are collectively referred to as primary liver cancers; among them, HCC is the third leading cause of cancer death worldwide, with more than 500,000 people suffering from the disease. In the grading of hepatocellular carcinoma, grade II and grade III account for the majority of cases. The prognosis of a liver cancer patient is related to the degree of differentiation of the tumor and to the treatment regimen, so grading of liver cancer has important clinical significance. Manual marking of lesion grading information has two disadvantages: first, the labor cost is high; second, subjective differences exist, that is, different observers reach different conclusions on the same pathological section. At present there is little research on evaluating the degree of differentiation of liver cancer: on the one hand, evaluating pathological differentiation from medical images is difficult; on the other hand, clinical liver cancer data are hard to collect, and the liver cancer image data sets publicly available on the Internet have poor image quality and lack pathological differentiation information, making them difficult to use. With the development and maturation of computer technology, computer-aided clinical diagnosis has rapidly developed into an emerging discipline and is increasingly widely applied in the clinic. Therefore, computer-aided diagnosis of early liver cancer from multi-modal image data has clinical significance for the prognosis of patients.
Disclosure of Invention
The invention aims to solve the problems of labor cost, time consumption and subjective variability in the traditional practice of grading hepatocellular carcinoma manually, and provides an automatic grading method for hepatocellular carcinoma based on an SE-DenseNet deep learning framework and multi-modal enhanced MR images.
In order to solve the technical problems, the invention adopts the following technical scheme: an automatic grading method of hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image, comprising the following steps:
1) Clinically acquiring a multi-mode enhanced MR (magnetic resonance) hepatocellular carcinoma three-dimensional image and a pathological grading result;
2) Preprocessing all the MR-enhanced three-dimensional images of the hepatocellular carcinoma to serve as training data;
3) Enhancing the training data and amplifying the training data quantity;
4) Training a hierarchical prediction model of hepatocellular carcinoma based on the enhanced training data: a SE-DenseNet network;
5) Carrying out grading prediction on the test data by adopting a trained model, and evaluating the classification performance of a hepatocellular carcinoma grading prediction model;
the SE-DenseNet network is obtained by combining the DenseNet, DenseNet-BC and SE frameworks, namely the Squeeze-and-Excitation Networks framework.
Preferably, the step 1) includes: acquiring preoperative hepatocellular carcinoma image data of multi-modal enhanced MR from the clinic, including an arterial phase MR sequence, a venous phase MR sequence and a delayed phase MR sequence; the hepatocellular carcinoma of each patient is graded according to the Edmondson-Steiner grading system: grade I (well differentiated), grade II (moderately differentiated), grade III (poorly differentiated) and grade IV (undifferentiated), wherein grades I and II belong to the low grade and grades III and IV to the high grade; each MR sequence is labeled as low grade or high grade, which serves as the clinically accepted gold standard for supervised learning.
Preferably, the preprocessing in the step 2) is as follows: a tumor region of interest, i.e. ROI region, is extracted from each MR image and normalized, specifically including:
2-1) manually delineating the approximate area of the tumor to roughly segment it;
2-2) extracting the ROI region and removing background information;
2-3) normalizing the ROI region to a fixed size: a tumor whose ROI is larger than the standardized size is cropped to the standardized size about the tumor center, and a tumor whose ROI is smaller than the standardized size is expanded and zero-filled to the standardized size;
2-4) performing pixel normalization on the ROI region of standardized size to obtain the preprocessed image, which is used as training data; pixel normalization adopts a Z-score-based method: the mean and variance of the ROI region are calculated, and the mean is subtracted from the pixels of the tumor portion and the result divided by the variance.
Preferably, the SE-DenseNet network is obtained by adding an SE layer between a transition layer and a dense block by taking DenseNet as a basic framework, and comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V1, a dense block, a 3D average pooling layer and a full connection layer which are connected in sequence;
wherein SE-DenseNet component-V1 is composed of a dense block, a transition layer and an SE layer, and the dense block in SE-DenseNet component-V1 is composed of M composite functions;
wherein the network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V1 composed of dense blocks, transition layers and SE layers; the output of the last SE-DenseNet component-V1 is subjected to object classification through a dense block, a 3D average pooling layer and a fully connected layer;
the SE layer comprises a global 3D average pooling layer, a full connection layer, a ReLU, a full connection layer and a Sigmoid which are sequentially connected; wherein, reLU and Sigmoid are both activation functions:
ReLU: f(x) = max(0, x)
Sigmoid: f(x) = 1/(1 + e^(-x))
the SE-DenseNet component-V1 treatment method specifically comprises the following steps:
in the nth SE-DenseNet component-V1, the output of the mth composite function in the dense block is expressed as Output_nm, and the output of the transition layer is denoted Output_nT;
the Output_nT of the transition layer preceding the SE layer is subjected to the continuous operations of a global 3D average pooling layer, a fully connected layer, a ReLU, a fully connected layer and a Sigmoid to obtain Weight; finally, Weight is multiplied channel-wise with Output_nT to obtain Output_nT′:
Output_nT′ = Weight · Output_nT
The resulting Output_nT′ of the SE layer is then input to the next layer of the network, which is a dense block.
Preferably, the SE-DenseNet network uses DenseNet as a basic framework, and is obtained by adding an SE layer into a dense block, namely, after a composite function layer, to form a new SE-dense block; the device comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V2, SE-dense blocks, a 3D average pooling layer and a full connection layer which are connected in sequence;
SE-DenseNet component-V2 consists of an SE-dense block and a transition layer; the SE-dense block consists of M continuous composite functions and an SE layer;
the network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V2 composed of SE-dense blocks and transition layers; the output of the last SE-DenseNet component-V2 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
Preferably, the SE-DenseNet network takes DenseNet as a basic framework and is obtained by adding an SE layer both between the transition layer and the dense block and after the composite function layer inside the dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V3, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V3 is composed of SE-dense blocks, transition layers and SE layers, wherein the SE-dense blocks are composed of M continuous composite functions and the SE layers;
the network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V3 composed of SE-dense blocks, transition layers and SE layers, where the SE-dense block is composed of M continuous composite functions and an SE layer; the output of the last SE-DenseNet component-V3 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
Preferably, the SE-DenseNet network takes DenseNet-BC as a basic framework, and is obtained by adding an SE layer between a transition layer and a dense block; the device comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V4, dense blocks, a 3D average pooling layer and a full connection layer which are connected in sequence;
SE-DenseNet component-V4 is composed of a dense block, a transition layer and a SE layer, wherein the dense block is composed of M continuous bottleneck layers and a composite function;
the network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V4 composed of dense blocks, transition layers and SE layers; the output of the last SE-DenseNet component-V4 is subjected to object classification through a dense block, a 3D average pooling layer and a fully connected layer.
Preferably, the SE-DenseNet network uses DenseNet-BC as a basic framework and is obtained by adding an SE layer into the dense block, namely after the composite function layer, to form a new SE-dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V5, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V5 consists of an SE-dense block and a transition layer; the SE-dense block consists of M continuous bottleneck layers, composite functions and an SE layer;
the network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V5 composed of SE-dense blocks and transition layers; the output of the last SE-DenseNet component-V5 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
Preferably, the SE-DenseNet network takes DenseNet-BC as a basic framework and is obtained by adding an SE layer both between the transition layer and the dense block and after the composite function layer inside the dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V6, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V6 is composed of SE-dense blocks, transition layers and SE layers, wherein the SE-dense blocks are composed of M continuous bottleneck layers, a composite function and SE layers;
the network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V6 composed of SE-dense blocks, transition layers and SE layers; the output of the last SE-DenseNet component-V6 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
Preferably, wherein the SE-DenseNet network uses a weighted sum of the cross-entropy loss and the complexity loss as the total loss of the network:
cross_loss = -(1/n) Σ_{i=0}^{n-1} log p(y_i|x_i) (1)
loss = cross_loss + λR(ω) (2)
R(ω) = Σ ||ω||² (3)
in the formula (1), n is the number of input samples X, X = [x_0, x_1, …, x_{n-1}], y_i is the label of the corresponding sample, y_i ∈ [0, 1, …, class-1], class is the number of sample classes, and p(y_i|x_i) is the probability value with which the network predicts the input x_i as class y_i;
in the formula (2), R(ω) is an index for evaluating the complexity of the model and λ is the weight of the complexity loss; λ is selected such that λR(ω) is of the same order of magnitude as cross_loss;
in the formula (3), ω is a weight matrix of each layer in the network.
The beneficial effects of the invention are as follows: through the automatic pathological grading method for multi-modal enhanced MR images of hepatocellular carcinoma, composed of image preprocessing, image enhancement, SE-DenseNet network training and SE-DenseNet network testing, the invention realizes automatic grading of hepatocellular carcinoma with high precision and efficiency, overcomes the labor cost, time consumption and subjective variability of manual grading of hepatocellular carcinoma, and has good clinical application value.
Drawings
FIG. 1 is a flow chart of the automated hepatocellular carcinoma grading method of the present invention based on a SE-DenseNet deep learning framework and enhanced MR images;
FIG. 2 is a diagram of an image with rough segmentation map and background information removed in accordance with one embodiment of the present invention;
FIG. 3 is a pixel normalized image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a framework one of a SE-DenseNet network in accordance with one embodiment of the invention;
FIG. 5 is a schematic diagram of a second architecture of a SE-DenseNet network in accordance with one embodiment of the invention;
FIG. 6 is a schematic diagram of a framework III of a SE-DenseNet network in accordance with one embodiment of the invention;
FIG. 7 is a schematic diagram of a framework four of an SE-DenseNet network in accordance with one embodiment of the invention;
FIG. 8 is a schematic diagram of a framework five of an SE-DenseNet network in accordance with one embodiment of the invention;
FIG. 9 is a schematic diagram of a framework six of an SE-DenseNet network in accordance with one embodiment of the invention;
FIG. 10 is a schematic diagram of the structure of dense blocks in frame one in one embodiment of the invention;
FIG. 11 is a schematic diagram of the structure of SE-dense blocks in frame two in one embodiment of the invention;
FIG. 12 is a schematic diagram of the structure of dense blocks in frame four in one embodiment of the invention;
FIG. 13 is a schematic diagram of the structure of SE-dense blocks in frame five in one embodiment of the invention;
FIG. 14 is a schematic view of the structure of the SE layer in one embodiment of the invention;
FIG. 15 is a schematic diagram of the bottleneck layer according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of a composite function in an embodiment of the invention;
fig. 17 is a schematic structural diagram of a transition layer in an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to examples to enable those skilled in the art to practice the same by referring to the description.
It will be understood that terms, such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
The automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image of the embodiment comprises the following steps:
1) Clinically acquiring a three-dimensional image and a pathological grading result of the preoperative hepatocellular carcinoma of the multi-modal enhanced MR;
2) Preprocessing all the MR-enhanced three-dimensional images of the hepatocellular carcinoma to serve as training data;
3) Enhancing the training data and amplifying the training data quantity;
4) Training a hierarchical prediction model of hepatocellular carcinoma based on the enhanced training data: a SE-DenseNet network;
5) And carrying out grading prediction on the test data by adopting the trained model, and evaluating the classification performance of the grading prediction model of the hepatocellular carcinoma.
Referring to Table 1 below, the layers involved in the present invention are listed with their Chinese and English names and a brief description of their main functions, to facilitate understanding of the present invention.
TABLE 1
Referring to FIG. 1, the specific steps of the present invention are as follows:
step 1, data collection:
acquiring preoperative hepatocellular carcinoma image data of multi-modal enhanced MR from clinic, including arterial phase MR sequence, venous phase MR sequence and delayed phase MR sequence; grading of hepatocellular carcinoma per patient was according to Edmondson and Steiner system grading method: primary, i.e. high differentiation, secondary, i.e. medium differentiation, tertiary, i.e. low differentiation, and quaternary, i.e. undifferentiated, wherein primary and secondary belong to low grade, tertiary and quaternary belong to high grade; each MR sequence is labeled as either low-ranking or high-ranking, as a clinically common gold standard applied to supervised learning.
Step S2, image preprocessing: a tumor region of interest, i.e. ROI region, is extracted from each MR image and normalized, specifically including:
2-1) manually delineating the approximate area of the tumor to roughly segment it, as shown in FIG. 2;
2-2) extracting the ROI region with the smallest enclosing cuboid box and removing background information, as shown in FIG. 2;
2-3) normalizing the ROI region to a fixed size, in this example 200 x 10: a tumor whose ROI is larger than the standardized size is cropped to the standardized size about the tumor center, and a tumor whose ROI is smaller than the standardized size is expanded and zero-filled to the standardized size;
2-4) performing pixel normalization on the ROI region of standardized size to obtain the preprocessed image, which is used as training data; pixel normalization adopts a Z-score-based method: the mean and variance of the ROI region are calculated, and the mean is subtracted from the pixels of the tumor portion and the result divided by the variance, as shown in FIG. 3.
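Steps 2-3) and 2-4) can be sketched in NumPy as follows. This is a minimal illustrative sketch, not the patent's implementation: the function names and the target size are arbitrary, background voxels are assumed to be exactly zero after ROI extraction, and the division uses the standard deviation, the conventional Z-score choice, where the text above says variance:

```python
import numpy as np

def crop_or_pad(roi, target):
    """Step 2-3): center-crop an ROI larger than the standardized size,
    or expand and zero-fill a smaller one, so every ROI reaches `target`."""
    out = np.zeros(target, dtype=np.float32)
    src, dst = [], []
    for s, t in zip(roi.shape, target):
        if s >= t:                          # crop about the tumor center
            a = (s - t) // 2
            src.append(slice(a, a + t)); dst.append(slice(0, t))
        else:                               # expand and zero-fill
            a = (t - s) // 2
            src.append(slice(0, s)); dst.append(slice(a, a + s))
    out[tuple(dst)] = roi[tuple(src)]
    return out

def zscore_roi(roi):
    """Step 2-4): Z-score the tumor voxels (assumed nonzero)."""
    mask = roi != 0
    mean, std = roi[mask].mean(), roi[mask].std()
    out = roi.astype(np.float32).copy()
    out[mask] = (out[mask] - mean) / std
    return out
```

Cropping then padding (or vice versa) is lossless for a tumor already smaller than the target, since both operate about the center.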
Step S3, data enhancement:
and (3) carrying out a series of continuous operations of random displacement, random up-down left-right overturn, random rotation and random noise addition on each ROI region of the image obtained in the step (2), amplifying the training data quantity and increasing the diversity of the training data.
Step S4 training SE-DenseNet:
the SE-DenseNet network is obtained by combining DenSE, denSE-BC and SE frameworks, so as to optimize the performances of the DenSE and the DenSE-BC, and the SE frameworks are as follows: the frame, SE layer, is Squeeze-and-Excitation Networks. In this embodiment, the SE-DenseNet network comprises a 6-way framework, as shown in FIGS. 4-9.
As in fig. 4, frame one: the SE-DenseNet network is obtained by adding an SE layer between a transition layer and a Dense block by taking DenseNet as a basic framework, and comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V1, a Dense block (Dense block in fig. 4), a 3D average pooling layer and a full connection layer which are sequentially connected;
wherein SE-DenseNet component-V1 is composed of a dense block, a transition layer and an SE layer; as shown in FIG. 10, the dense block in SE-DenseNet component-V1 is composed of M composite functions; FIG. 14 shows the framework of the SE layer, FIG. 16 that of the composite function, and FIG. 17 that of the transition layer;
The network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image. The training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V1 composed of dense blocks, transition layers (FIG. 17) and SE layers; N can be user-defined and generally takes 3; the dense block consists of M composite functions, where M can be user-defined and generally takes 12. The output of the last SE-DenseNet component-V1 is subjected to object classification through a dense block, a 3D average pooling layer and a fully connected layer.
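The dense connectivity on which frame one relies can be illustrated as follows: each of the M composite functions receives the channel-wise concatenation of the block input and all earlier outputs. In this schematic NumPy sketch the ReLU-activated random projection merely stands in for the real composite function (batch normalization, ReLU and 3D convolution), so only the channel bookkeeping is meaningful:

```python
import numpy as np

def composite_fn(x, growth_rate=12):
    # Stand-in for the composite function (BN -> ReLU -> 3D conv):
    # a ReLU-activated random 1x1x1 projection to `growth_rate` channels.
    rng = np.random.default_rng(x.shape[0])
    w = rng.normal(size=(growth_rate, x.shape[0]))
    return np.maximum(0.0, np.einsum('oc,chwl->ohwl', w, x))

def dense_block(x, m=12, growth_rate=12):
    """A dense block of M composite functions: function m sees the
    concatenation of the input and outputs 1..m-1 along the channels."""
    features = [x]
    for _ in range(m):
        features.append(composite_fn(np.concatenate(features, axis=0),
                                     growth_rate))
    return np.concatenate(features, axis=0)
```

With C_in input channels the block therefore emits C_in + M * growth_rate channels, which is why the transition layer (and in DenseNet-BC the bottleneck layer) is needed to keep the channel count in check.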
As shown in fig. 14, the SE layer includes a global 3D average pooling layer, a fully connected layer, a ReLU, a fully connected layer, and Sigmoid connected in sequence; wherein, reLU and Sigmoid are both activation functions:
ReLU: f(x) = max(0, x)
Sigmoid: f(x) = 1/(1 + e^(-x))
the SE-DenseNet component-V1 treatment method specifically comprises the following steps:
in the nth SE-DenseNet component-V1, the output of the mth composite function (FIG. 16) in the dense block is expressed as Output_nm, and the output of the transition layer is denoted Output_nT;
As shown in FIG. 14, the Output_nT of the transition layer preceding the SE layer is subjected to the continuous operations of a global 3D average pooling layer, a fully connected layer, a ReLU, a fully connected layer and a Sigmoid to obtain Weight; finally, Weight is multiplied channel-wise with Output_nT to obtain Output_nT′:
Output_nT′ = Weight · Output_nT
The resulting Output_nT′ of the SE layer is then input to the next layer of the network, which is a dense block.
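The SE-layer computation just described (squeeze by global 3D average pooling, excitation by two fully connected layers, then channel-wise rescaling) can be sketched in NumPy. The weight matrices `w1` and `w2` and the reduction ratio are illustrative stand-ins for the learned fully connected layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_layer(x, w1, w2):
    """x: (C, H, W, L) feature map; w1: (C//r, C); w2: (C, C//r).
    Global 3D average pool -> FC -> ReLU -> FC -> Sigmoid -> rescale."""
    squeeze = x.mean(axis=(1, 2, 3))          # global 3D average pooling -> (C,)
    excite = np.maximum(0.0, w1 @ squeeze)    # first FC + ReLU
    weight = sigmoid(w2 @ excite)             # second FC + Sigmoid -> Weight
    return x * weight[:, None, None, None]    # Output_nT' = Weight * Output_nT
```

Because the Sigmoid bounds each channel weight in (0, 1), the SE layer can only attenuate or preserve channels, recalibrating their relative importance rather than amplifying them.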
As shown in FIG. 5, frame two: the SE-DenseNet network takes DenseNet as a basic framework and is obtained by adding an SE layer into the dense block, namely after the composite function layer, to form a new SE-dense block (SE-DenseNet block in FIG. 5); it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V2, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V2 consists of an SE-dense block and a transition layer; the SE-dense block is composed of M continuous composite functions and an SE layer, as shown in FIG. 11;
The network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image. The training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V2 composed of SE-dense blocks and transition layers; N can be user-defined and generally takes 3; the SE-dense block consists of M continuous composite functions and an SE layer, where M can be user-defined and generally takes 12. The output of the last SE-DenseNet component-V2 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
As in FIG. 6, frame three: the SE-DenseNet network takes DenseNet as a basic framework and is obtained by adding an SE layer both between the transition layer and the dense block and after the composite function layer inside the dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V3, an SE-dense block (SE-DenseNet block in FIG. 6), a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V3 is composed of SE-dense blocks, transition layers and SE layers, wherein the SE-dense blocks are composed of M continuous composite functions and the SE layers;
The network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image. The training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V3 composed of SE-dense blocks, transition layers and SE layers; N can be user-defined and generally takes 3; the SE-dense block is composed of M continuous composite functions and an SE layer, where M can be user-defined and generally takes 12. The output of the last SE-DenseNet component-V3 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
As in FIG. 7, frame four: the SE-DenseNet network takes DenseNet-BC as a basic framework and is obtained by adding an SE layer between the transition layer and the dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V4, a dense block (Dense-BC block in FIG. 7), a 3D average pooling layer and a fully connected layer which are connected in sequence; the structure of the dense block is shown in FIG. 12;
SE-DenseNet component-V4 is composed of a dense block, a transition layer and a SE layer, wherein the dense block is composed of M continuous bottleneck layers and a composite function;
The network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image.
The training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V4 composed of dense blocks, transition layers and SE layers; N can be user-defined and generally takes 3, and the dense block consists of M continuous bottleneck layers (FIG. 15) and composite functions, where M can be user-defined and generally takes 6. The output of the last SE-DenseNet component-V4 is subjected to object classification through a dense block, a 3D average pooling layer and a fully connected layer.
As in FIG. 8, frame five: the SE-DenseNet network takes DenseNet-BC as a basic framework and is obtained by adding an SE layer into the dense block, namely after the composite function layer, to form a new SE-dense block (SE-DenseNet-BC block in FIG. 8); it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V5, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence; the structure of the SE-dense block is shown in FIG. 13;
SE-DenseNet component-V5 consists of an SE-dense block and a transition layer; the SE-dense block consists of M continuous bottleneck layers, composite functions and an SE layer;
The network input is Input ∈ R^(C_0×H_0×W_0×L_0), where C_0 is the number of channels and H_0×W_0×L_0 is the three-dimensional size of each channel image. The training method of the SE-DenseNet network comprises the following steps: firstly, the Input is passed through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V5 composed of SE-dense blocks and transition layers; N can be user-defined and generally takes 3; the SE-dense block consists of M continuous bottleneck layers, composite functions and an SE layer, where M can be user-defined and generally takes 6. The output of the last SE-DenseNet component-V5 is subjected to object classification through an SE-dense block, a 3D average pooling layer and a fully connected layer.
As in fig. 8, framework six: the SE-DenseNet network takes DenseNet-BC as the basic framework and is obtained by adding an SE layer both between the transition layer and the dense block and inside the dense block, namely after the composite function layer; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V6 modules, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V6 consists of an SE-dense block (the SE-DenseNet-BC block in FIG. 9), a transition layer and an SE layer, wherein the SE-dense block consists of M consecutive bottleneck layers, composite functions and an SE layer;
the input to the network is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image; the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V6 modules composed of SE-dense blocks, transition layers and SE layers. N is user-definable and is generally set to 3; the SE-dense block consists of M consecutive bottleneck layers, composite functions and an SE layer, where M is user-definable and is generally set to 6. The output of the last SE-DenseNet component-V6 passes through an SE-dense block, a 3D average pooling layer and a fully connected layer for object classification.
In the figures, "Dense block" denotes a dense block and "SE-Dense block" denotes an SE-dense block.
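The dense connectivity and SE reweighting described in frameworks five and six can be sketched in NumPy. This is a toy illustration only, not the patented implementation: the composite function is stood in for by a random linear map followed by ReLU, and each 3D volume is flattened to a single axis for brevity; all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def composite_fn(x, k):
    # Stand-in for BN + ReLU + 3D convolution: maps C input channels to k new channels.
    w = rng.standard_normal((k, x.shape[0]))
    return np.maximum(w @ x, 0.0)           # ReLU

def se_layer(x, r=2):
    # Squeeze-and-Excitation: global pool -> FC -> ReLU -> FC -> Sigmoid -> rescale.
    c = x.shape[0]
    z = x.mean(axis=1)                      # squeeze: per-channel global average
    w1 = rng.standard_normal((c // r, c))
    w2 = rng.standard_normal((c, c // r))
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))
    return x * s[:, None]                   # excitation: channel-wise reweighting

def se_dense_block(x, M=6, k=4):
    # Dense connectivity: each composite fn sees the concatenation of all earlier outputs.
    feats = x
    for _ in range(M):
        new = composite_fn(feats, k)
        feats = np.concatenate([feats, new], axis=0)
    return se_layer(feats)

x = rng.standard_normal((8, 27))            # 8 channels, one flattened 3x3x3 volume each
y = se_dense_block(x, M=6, k=4)
print(y.shape)                              # (32, 27)
```

With 8 input channels, growth rate k = 4 and M = 6 composite functions, the block emits 8 + 6×4 = 32 channels, each rescaled by its SE weight.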
Wherein the SE-DenseNet network uses the weighted sum of the cross-entropy loss and a complexity loss as the total loss of the network:
cross_loss = -(1/n) Σ_{i=0}^{n-1} log(ŷ_i)   (1)
loss = cross_loss + λR(ω)   (2)
R(ω) = Σ_l ||ω_l||²   (3)
In formula (1), n is the number of input samples X, X = [x_0, x_1, …, x_{n-1}]; y_i is the label of the corresponding sample, y_i ∈ [0, 1, …, class-1], where class is the number of sample classes; ŷ_i is the probability value with which the network predicts the input x_i as y_i.
In formula (2), R(ω) is an index evaluating the complexity of the model and λ is the weight of the complexity loss; λ is chosen so that λR(ω) is of the same order of magnitude as cross_loss.
In formula (3), ω_l is the weight matrix of the l-th layer in the network.
The SE-DenseNet network adopts the stochastic gradient descent algorithm to adjust the weight matrix parameters ω and minimize loss, thereby optimizing the network and achieving the classification effect.
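A minimal numerical sketch of this total loss follows. Since the text only calls R(ω) a model-complexity index, an L2 penalty over the layer weight matrices is assumed here; the variable names and toy values are illustrative.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Formula (1): mean negative log-probability assigned to the true class.
    n = len(labels)
    return -np.mean(np.log(probs[np.arange(n), labels]))

def total_loss(probs, labels, weights, lam):
    # Formula (2): loss = cross_loss + lambda * R(w), with R(w) an L2 penalty (assumed).
    r = sum(np.sum(w ** 2) for w in weights)
    return cross_entropy(probs, labels) + lam * r

probs = np.array([[0.8, 0.2],               # predicted class probabilities, 2 samples
                  [0.3, 0.7]])
labels = np.array([0, 1])                   # true class of each sample
weights = [np.ones((4, 4)), np.ones((2, 4))]  # toy per-layer weight matrices
r = sum(np.sum(w ** 2) for w in weights)    # R(w) = 16 + 8 = 24
ce = cross_entropy(probs, labels)           # ~0.290
lam = ce / r                                # same-order-of-magnitude rule for lambda
print(total_loss(probs, labels, weights, lam))
```

Choosing λ = cross_loss / R(ω), as in the selection rule above, makes the two loss terms the same order of magnitude.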
Step S5, testing SE-DenseNet:
the trained SE-DenseNet hepatocellular carcinoma grading prediction model is used to carry out grading prediction on each item of test data;
the performance of the grading prediction model is evaluated using five-fold cross validation, specifically:
recal, precision, AUC, F1-score, accuracy is used as a performance evaluation standard, 80% of data are randomly selected as a training set, 20% of data are used as a test set, the experiment is repeated five times, and the average value of the five times of experiments is calculated to evaluate the classification performance.
The input layer is used for inputting data; the 3D convolution layer is mainly used for feature extraction; the 3D average pooling layer is mainly used for feature compression; the fully connected layer is mainly used for connecting features and realizing classification; the bottleneck layer is mainly used for reducing the number of input feature channels and fusing the features of each channel; the composite function is mainly used for feature extraction; the transition layer is mainly used for reducing the number of output feature channels and for feature dimension reduction; the SE layer is mainly used for feature enhancement; the dense block is mainly used for fusing the features of all its internal layers, enhancing feature propagation and making effective use of features.
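To make the transition layer's channel-reduction role concrete, here is a toy NumPy sketch: a random matrix stands in for the 1×1×1 convolution, and the usual DenseNet compression factor θ = 0.5 is assumed, since the text does not state it here.

```python
import numpy as np

rng = np.random.default_rng(1)

def transition_layer(x, theta=0.5):
    # Compress channels by factor theta (1x1x1-conv stand-in), then 2x average pooling.
    c, d, h, w = x.shape
    c_out = int(theta * c)
    wmat = rng.standard_normal((c_out, c)) / np.sqrt(c)
    y = np.tensordot(wmat, x, axes=1)       # channel mixing -> (c_out, d, h, w)
    # 2x2x2 average pooling over the three spatial dimensions
    y = y.reshape(c_out, d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(2, 4, 6))
    return y

x = rng.standard_normal((32, 8, 8, 8))      # 32-channel 8x8x8 feature volume
y = transition_layer(x)
print(y.shape)                              # (16, 4, 4, 4)
```

A (32, 8, 8, 8) feature volume is compressed to (16, 4, 4, 4): half the channels, with each spatial dimension halved by the pooling.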
Although embodiments of the present invention have been disclosed above, the invention is not limited to the applications set forth in the description and embodiments and can be applied in various fields suited to it; further modifications will readily occur to those skilled in the art, and accordingly the invention is not limited to the particular details described, provided the general concept defined by the claims and their equivalents is not departed from.

Claims (7)

1. An automatic grading method of hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image, which is characterized by comprising the following steps:
1) Clinically acquiring a multi-mode enhanced MR (magnetic resonance) hepatocellular carcinoma three-dimensional image and a pathological grading result;
2) Preprocessing all the MR-enhanced three-dimensional images of the hepatocellular carcinoma to serve as training data;
3) Enhancing the training data and amplifying the training data quantity;
4) Training a hierarchical prediction model of hepatocellular carcinoma based on the enhanced training data: a SE-DenseNet network;
5) Carrying out grading prediction on the test data by adopting a trained model, and evaluating the classification performance of a hepatocellular carcinoma grading prediction model;
the SE-DenseNet network is obtained by combining the DenseNet, DenseNet-BC and SE frameworks, namely the Squeeze-and-Excitation Networks framework;
the pretreatment in the step 2) is as follows: a tumor region of interest, i.e. ROI region, is extracted from each MR image and normalized, specifically including:
2-1) manually delineating the approximate region of the tumor to obtain a rough segmentation of the tumor;
2-2) extracting the ROI area and removing background information;
2-3) normalizing the ROI region to a fixed size: tumors with an ROI larger than the standardized size are cropped to the standardized size with the tumor center as reference, and tumors with an ROI smaller than the standardized size are expanded and zero-padded to the standardized size;
2-4) carrying out pixel normalization on the ROI region of standardized size to obtain a preprocessed image, which is used as training data; the pixel normalization adopts a Z-score-based method: the mean and variance of the ROI region are calculated, and each pixel of the tumor region has the mean subtracted and is then divided by the variance;
the SE-DenseNet network takes DenseNet as the basic framework and is obtained by adding an SE layer between the transition layer and the dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V1 modules, a dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
wherein SE-DenseNet component-V1 consists of a dense block, a transition layer and an SE layer, and the dense block in SE-DenseNet component-V1 consists of M composite functions;
wherein the network input is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V1 modules composed of dense blocks, transition layers and SE layers; the output of the last SE-DenseNet component-V1 is subjected to object classification through a dense block, a 3D average pooling layer and a fully connected layer;
the SE layer comprises a global 3D average pooling layer, a fully connected layer, a ReLU, a fully connected layer and a Sigmoid which are connected in sequence; wherein ReLU and Sigmoid are both activation functions:
ReLU: f(x) = max(0, x)
Sigmoid: σ(x) = 1 / (1 + e^(-x))
the processing method of SE-DenseNet component-V1 is specifically as follows:
in the n-th SE-DenseNet component-V1, the output of the m-th composite function in the dense block is denoted Output_nm, and the output of the transition layer is denoted Output_nT;
the output Output_nT of the transition layer preceding the SE layer undergoes the sequence of operations global 3D average pooling layer, fully connected layer, ReLU, fully connected layer and Sigmoid to obtain the channel weight vector Weight; finally, Weight is multiplied channel-wise with Output_nT to obtain Output_nT':
Output_nT' = Weight × Output_nT
the obtained Output_nT' is then used as the input of the dense block following the SE layer in the network;
wherein the SE-DenseNet network uses the weighted sum of the cross-entropy loss and a complexity loss as the total loss of the network:
cross_loss = -(1/n) Σ_{i=0}^{n-1} log(ŷ_i)   (1)
loss = cross_loss + λR(ω)   (2)
R(ω) = Σ_l ||ω_l||²   (3)
in formula (1), n is the number of input samples X, X = [x_0, x_1, …, x_{n-1}]; y_i is the label of the corresponding sample, y_i ∈ [0, 1, …, class-1], where class is the number of sample classes; ŷ_i is the probability value with which the network predicts the input x_i as y_i;
in formula (2), R(ω) is an index evaluating the complexity of the model and λ is the weight of the complexity loss; λ is chosen so that λR(ω) is of the same order of magnitude as cross_loss;
in formula (3), ω_l is the weight matrix of the l-th layer in the network.
2. The automatic grading method for hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR image according to claim 1, wherein the step 1) comprises: acquiring preoperative multi-modal enhanced MR hepatocellular carcinoma image data from the clinic, including an arterial phase MR sequence, a venous phase MR sequence and a delayed phase MR sequence; the hepatocellular carcinoma of each patient is graded according to the Edmondson and Steiner grading system: grade I, i.e. high differentiation, grade II, i.e. medium differentiation, grade III, i.e. low differentiation, and grade IV, i.e. undifferentiated, wherein grades I and II belong to low grade and grades III and IV belong to high grade; each MR sequence is labeled as low grade or high grade, which serves as the clinically accepted gold standard for supervised learning.
3. The automatic grading method of hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR image according to claim 1, wherein the SE-DenseNet network takes DenseNet as the basic framework and is obtained by adding an SE layer into the dense block, namely after the composite function layer, to form a new SE-dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V2 modules, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V2 consists of an SE-dense block and a transition layer, wherein the SE-dense block consists of M consecutive composite functions and an SE layer;
the network input is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V2 modules composed of SE-dense blocks and transition layers; the output of the last SE-DenseNet component-V2 passes through an SE-dense block, a 3D average pooling layer and a fully connected layer for object classification.
4. The automatic grading method of hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR image according to claim 1, wherein the SE-DenseNet network takes DenseNet as the basic framework and is obtained by simultaneously adding an SE layer between the transition layer and the dense block, and inside the dense block after the composite function layer; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V3 modules, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V3 consists of an SE-dense block, a transition layer and an SE layer, wherein the SE-dense block consists of M consecutive composite functions and an SE layer;
the network input is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V3 modules composed of SE-dense blocks, transition layers and SE layers, the SE-dense block consisting of M consecutive composite functions and an SE layer; the output of the last SE-DenseNet component-V3 passes through an SE-dense block, a 3D average pooling layer and a fully connected layer for object classification.
5. The automatic grading method of hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR image according to claim 1, wherein the SE-DenseNet network takes DenseNet-BC as the basic framework and is obtained by adding an SE layer between the transition layer and the dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V4 modules, a dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V4 consists of a dense block, a transition layer and an SE layer, wherein the dense block consists of M consecutive bottleneck layers and composite functions;
the network input is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V4 modules composed of dense blocks, transition layers and SE layers; the output of the last SE-DenseNet component-V4 passes through a dense block, a 3D average pooling layer and a fully connected layer for object classification.
6. The automatic grading method of hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR image according to claim 1, wherein the SE-DenseNet network takes DenseNet-BC as the basic framework and is obtained by adding an SE layer inside the dense block, namely after the composite function layer, to form a new SE-dense block; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V5 modules, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V5 consists of an SE-dense block and a transition layer, wherein the SE-dense block consists of M consecutive bottleneck layers, composite functions and an SE layer;
the network input is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V5 modules composed of SE-dense blocks and transition layers; the output of the last SE-DenseNet component-V5 passes through an SE-dense block, a 3D average pooling layer and a fully connected layer for object classification.
7. The automatic grading method of hepatocellular carcinoma based on an SE-DenseNet deep learning framework and enhanced MR image according to claim 1, wherein the SE-DenseNet network takes DenseNet-BC as the basic framework and is obtained by adding an SE layer both between the transition layer and the dense block and inside the dense block, namely after the composite function layer; it comprises an input layer, a 3D convolution layer, a 3D average pooling layer, N SE-DenseNet component-V6 modules, an SE-dense block, a 3D average pooling layer and a fully connected layer which are connected in sequence;
SE-DenseNet component-V6 consists of an SE-dense block, a transition layer and an SE layer, wherein the SE-dense block consists of M consecutive bottleneck layers, composite functions and an SE layer;
the network input is Input ∈ R^(C_0 × H_0 × W_0 × L_0), where C_0 is the number of channels and H_0 × W_0 × L_0 is the three-dimensional size of each channel image;
the training method of the SE-DenseNet network is as follows: first, the Input passes through a 3D convolution layer and a 3D average pooling layer to obtain Output_0; Output_0 then passes through N SE-DenseNet component-V6 modules composed of SE-dense blocks, transition layers and SE layers; the output of the last SE-DenseNet component-V6 passes through an SE-dense block, a 3D average pooling layer and a fully connected layer for object classification.
CN201910042749.5A 2019-01-17 2019-01-17 Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image Active CN109886922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910042749.5A CN109886922B (en) 2019-01-17 2019-01-17 Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910042749.5A CN109886922B (en) 2019-01-17 2019-01-17 Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image

Publications (2)

Publication Number Publication Date
CN109886922A CN109886922A (en) 2019-06-14
CN109886922B true CN109886922B (en) 2023-08-18

Family

ID=66926041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910042749.5A Active CN109886922B (en) 2019-01-17 2019-01-17 Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image

Country Status (1)

Country Link
CN (1) CN109886922B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110694934A (en) * 2019-09-01 2020-01-17 阿尔飞思(昆山)智能物联科技有限公司 Intelligent dry garbage classification cloud system and working method thereof
CN111046964B (en) * 2019-12-18 2021-01-26 电子科技大学 Convolutional neural network-based human and vehicle infrared thermal image identification method
CN111274942A (en) * 2020-01-19 2020-06-12 国汽(北京)智能网联汽车研究院有限公司 Traffic cone identification method and device based on cascade network
CN111931802A (en) * 2020-06-16 2020-11-13 南京信息工程大学 Pedestrian re-identification method based on fusion of middle-layer features of Simese network structure
CN111862087A (en) * 2020-08-03 2020-10-30 张政 Liver and pancreas steatosis distinguishing method based on deep learning
CN112085113B (en) * 2020-09-14 2021-05-04 四川大学华西医院 Severe tumor image recognition system and method
CN112836584B (en) * 2021-01-05 2023-04-07 西安理工大学 Traffic image safety belt classification method based on deep learning
CN112508953B (en) * 2021-02-05 2021-05-18 四川大学 Meningioma rapid segmentation qualitative method based on deep neural network
CN112966780A (en) * 2021-03-31 2021-06-15 动联(山东)电子科技有限公司 Animal behavior identification method and system
CN113076909B (en) * 2021-04-16 2022-10-25 重庆大学附属肿瘤医院 Automatic cell detection method
CN113222932B (en) * 2021-05-12 2023-05-02 上海理工大学 Small intestine endoscope picture feature extraction method based on multi-convolution neural network integrated learning
CN113762395B (en) * 2021-09-09 2022-08-19 深圳大学 Pancreatic bile duct type ampulla carcinoma classification model generation method and image classification method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220980A (en) * 2017-05-25 2017-09-29 重庆理工大学 A kind of MRI image brain tumor automatic division method based on full convolutional network
CN107480702A (en) * 2017-07-20 2017-12-15 东北大学 Towards the feature selecting and Feature fusion of the identification of HCC pathological images
CN108509991A (en) * 2018-03-29 2018-09-07 青岛全维医疗科技有限公司 Liver's pathological image sorting technique based on convolutional neural networks
CN108776774A (en) * 2018-05-04 2018-11-09 华南理工大学 A kind of human facial expression recognition method based on complexity categorization of perception algorithm
CN108875787A (en) * 2018-05-23 2018-11-23 北京市商汤科技开发有限公司 A kind of image-recognizing method and device, computer equipment and storage medium
CN108899087A (en) * 2018-06-22 2018-11-27 中山仰视科技有限公司 X-ray intelligent diagnosing method based on deep learning
CN108960257A (en) * 2018-07-06 2018-12-07 东北大学 A kind of diabetic retinopathy grade stage division based on deep learning
CN109190712A (en) * 2018-09-21 2019-01-11 福州大学 A kind of line walking image automatic classification system of taking photo by plane based on deep learning
CN109214433A (en) * 2018-08-20 2019-01-15 福建师范大学 A kind of method that convolutional neural networks distinguish liver cancer differentiation grade

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9585627B2 (en) * 2013-08-14 2017-03-07 Siemens Healthcare Gmbh Histological differentiation grade prediction of hepatocellular carcinoma in computed tomography images
CN105447872A (en) * 2015-12-03 2016-03-30 中山大学 Method for automatically identifying liver tumor type in ultrasonic image

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220980A (en) * 2017-05-25 2017-09-29 重庆理工大学 A kind of MRI image brain tumor automatic division method based on full convolutional network
CN107480702A (en) * 2017-07-20 2017-12-15 东北大学 Towards the feature selecting and Feature fusion of the identification of HCC pathological images
CN108509991A (en) * 2018-03-29 2018-09-07 青岛全维医疗科技有限公司 Liver's pathological image sorting technique based on convolutional neural networks
CN108776774A (en) * 2018-05-04 2018-11-09 华南理工大学 A kind of human facial expression recognition method based on complexity categorization of perception algorithm
CN108875787A (en) * 2018-05-23 2018-11-23 北京市商汤科技开发有限公司 A kind of image-recognizing method and device, computer equipment and storage medium
CN108899087A (en) * 2018-06-22 2018-11-27 中山仰视科技有限公司 X-ray intelligent diagnosing method based on deep learning
CN108960257A (en) * 2018-07-06 2018-12-07 东北大学 A kind of diabetic retinopathy grade stage division based on deep learning
CN109214433A (en) * 2018-08-20 2019-01-15 福建师范大学 A kind of method that convolutional neural networks distinguish liver cancer differentiation grade
CN109190712A (en) * 2018-09-21 2019-01-11 福州大学 A kind of line walking image automatic classification system of taking photo by plane based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Grading of hepatocellular carcinoma using 3D SE-DenseNet in dynamic enhanced MR images; Qing Zhou; Computers in Biology and Medicine; pp. 47-57 *

Also Published As

Publication number Publication date
CN109886922A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
CN109886922B (en) Automatic grading method for hepatocellular carcinoma based on SE-DenseNet deep learning framework and enhanced MR image
US11101033B2 (en) Medical image aided diagnosis method and system combining image recognition and report editing
CN108446730B (en) CT pulmonary nodule detection device based on deep learning
CN111429474B (en) Mammary gland DCE-MRI image focus segmentation model establishment and segmentation method based on mixed convolution
CN109829918B (en) Liver image segmentation method based on dense feature pyramid network
CN110517253B (en) Method for classifying benign and malignant pulmonary nodules based on 3D multi-target feature learning
CN112150428A (en) Medical image segmentation method based on deep learning
CN111798425B (en) Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning
CN112364920B (en) Thyroid cancer pathological image classification method based on deep learning
CN111429473A (en) Chest film lung field segmentation model establishment and segmentation method based on multi-scale feature fusion
CN114565761A (en) Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
CN111080596A (en) Auxiliary screening method and system for pneumoconiosis fusing local shadows and global features
CN113643261B (en) Lung disease diagnosis method based on frequency attention network
CN112861994A (en) Intelligent gastric ring cell cancer image classification system based on Unet migration learning
CN114723669A (en) Liver tumor two-point five-dimensional deep learning segmentation algorithm based on context information perception
CN113269799A (en) Cervical cell segmentation method based on deep learning
CN112348839A (en) Image segmentation method and system based on deep learning
CN116797609A (en) Global-local feature association fusion lung CT image segmentation method
Costa et al. Data augmentation for detection of architectural distortion in digital mammography using deep learning approach
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
CN114078137A (en) Colposcope image screening method and device based on deep learning and electronic equipment
Yang et al. CT-based transformer model for non-invasively predicting the Fuhrman nuclear grade of clear cell renal cell carcinoma
CN116230237B (en) Lung cancer influence evaluation method and system based on ROI focus features
CN115132275B (en) Method for predicting EGFR gene mutation state based on end-to-end three-dimensional convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant