CN112200810B - Multi-modal automated ventricle segmentation system and method of use thereof - Google Patents


Info

Publication number
CN112200810B
CN112200810B CN202011062511.8A
Authority
CN
China
Prior art keywords
data set
thick
layer
loss function
decoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011062511.8A
Other languages
Chinese (zh)
Other versions
CN112200810A (en)
Inventor
夏军
杨光
牛张明
江荧辉
叶晴昊
王旻皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Second Peoples Hospital
Original Assignee
Shenzhen Second Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Second Peoples Hospital
Priority to CN202011062511.8A priority Critical patent/CN112200810B/en
Publication of CN112200810A publication Critical patent/CN112200810A/en
Application granted granted Critical
Publication of CN112200810B publication Critical patent/CN112200810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a multi-modal automated ventricle segmentation system and a method of using it. The method comprises the following steps: collecting a thick-layer scan data set D1 that has been manually segmented and a thin-layer scan data set D2 that has not been segmented; importing a pre-trained model to construct an encoder ER, constructing a decoder DR from sub-pixel convolution layers, and combining the encoder ER and the decoder DR into a multi-modal ventricle segmentation model M; generating a supervisory signal S from the segmentation information in the thick-layer scan data set D1; taking the thick-layer scan data set D1 and the thin-layer scan data set D2 as input, extracting features F, and feeding the features F together with the supervisory signal S into the decoder DR; combining the loss function L1 generated from the thick-layer scan data set D1 with the loss function L2 generated from the thin-layer scan data set D2 to obtain the loss function L of the ventricle segmentation model M; iteratively training and optimizing the ventricle segmentation model M according to the loss function L; and automatically segmenting brain images from different multi-modal scanning methods with the trained ventricle segmentation model M.

Description

Multi-modal automated ventricle segmentation system and method of use thereof
Technical Field
The application relates to the technical field of biological information, and in particular to a multi-modal automated ventricle segmentation system and a method of using the same.
Background
Ventricular volume is closely related to many brain diseases, and many studies suggest that ventricular volume changes are characteristic of diseases such as schizophrenia, parkinson's disease, alzheimer's disease, hydrocephalus and brain atrophy. The accurate measurement of ventricle volume is of great clinical significance in the aspects of early disease discovery, patient disease assessment, disease diagnosis, operation effect assessment and the like. Measuring ventricular volume is therefore of great value in the medical field.
Ventricular segmentation is the only way to measure ventricular volume: a suitable method is used to delineate the ventricles in a patient's brain medical images obtained by scanning. It is divided into automatic segmentation and manual segmentation. Currently, under traditional clinical conditions, segmentation is typically performed manually by a physician. Ventricular volumes produced by manual segmentation are the most accurate and serve as the gold standard for segmentation techniques. However, when processing larger amounts of data, manual segmentation is time-consuming, subjective, prone to human error, and poorly reproducible.
By introducing methods from bioinformatics, deep learning, and related fields, automatic segmentation can replace traditional manual segmentation, enabling the ventricle volume of a patient to be estimated rapidly, accurately, and automatically. This technology saves a great deal of the time and cost of manually processing medical imaging data and helps clinicians discover diseases early, track patients' disease progression, diagnose diseases, and evaluate surgical outcomes. Ventricular segmentation across different slice thicknesses (thick and thin) and different modalities (MRI and CT) requires case-by-case analysis; otherwise segmentation is inaccurate. For example, thick-cut pictures and their labeling information are easy to obtain, whereas labeling thin-cut pictures requires a large amount of manual effort, which does not fit practical application scenarios. Experiments show that a model trained only on thick-cut data performs poorly when predicting on thin-cut pictures and cannot meet practical requirements.
Disclosure of Invention
The application aims to provide a multi-modal automated ventricle segmentation system and a method of using it, in order to solve the problem of inaccurate automatic segmentation of brain medical images by traditional methods.
In order to solve the above technical problems, the present application provides a multi-modal automated ventricle segmentation system, comprising:
a collection module configured to collect a thick layer scan data set D1 that has been manually segmented and a thin layer scan data set D2 that has not been segmented;
a construction module configured to import the pre-training model to construct an encoder ER, construct a decoder DR through a sub-pixel convolution layer, and construct a multi-modal ventricle segmentation model M in combination with the encoder ER and the decoder DR;
a supervision module configured to generate a supervision signal S using the segmented information in the thick layer scan dataset D1;
an input module configured to take as input the thick layer scan data set D1 and the thin layer scan data set D2, extract the features F, and input the features together with the supervisory signal S to the decoder DR;
the loss function module is configured to combine the loss function L1 generated by the thick-layer scanning data set D1 and the loss function L2 generated by the thin-layer scanning data set D2 to obtain a loss function L of the ventricle segmentation model M;
the training module is configured to continuously train and optimize the ventricle segmentation model M according to the loss function L;
the segmentation module is configured to automatically segment brain images of different multi-mode scanning methods by using a trained ventricle segmentation model M.
Optionally, in the multi-mode automatic ventricle segmentation system, the thick-layer scanning data set is formed according to a thick-cut picture set, and the thick-cut picture set is:
the thin-layer scanning data set is formed according to a thin-cut picture set, wherein the thin-cut picture set is as follows:
optionally, in the multi-modal automated ventricle segmentation system, constructing the encoder ER includes:
utilizing a ResNet-34 residual neural network pre-trained on an ImageNet dataset as an encoder;
extracting information of each layer through a residual error module respectively;
the thick-cut picture set and the thin-cut picture set are taken as input to extract features F, and the extracted features F are respectively input into the decoder.
Optionally, in the multi-modal automated ventricle segmentation system, the decoder reconstructs and recovers the picture using sub-pixel convolution layers, expressed mathematically as:
F_L = SP(W_L * F_{L-1} + b_L)
where the SP(·) operation rearranges a tensor of shape H × W × C·r² into a tensor of shape rH × rW × C, F_{L-1} and F_L are the input and output features of the layer respectively, and W_L and b_L are the trainable parameters of the sub-pixel convolution;
after passing through the decoder, the prediction probability of each class is obtained.
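The SP(·) rearrangement of the sub-pixel convolution can be sketched in plain Python. This is an illustrative re-implementation, not the patent's code; in particular, the channel-to-pixel ordering (channel index c·r² + dy·r + dx) is an assumption, since frameworks differ on this convention.

```python
def pixel_shuffle(t, r):
    """Rearrange an H x W x (C*r^2) nested list into an rH x rW x C one,
    as in the sub-pixel convolution's SP(.) step (ordering is assumed)."""
    H, W, Cr2 = len(t), len(t[0]), len(t[0][0])
    C = Cr2 // (r * r)
    out = [[[0.0] * C for _ in range(W * r)] for _ in range(H * r)]
    for y in range(H):
        for x in range(W):
            for c in range(C):
                for dy in range(r):
                    for dx in range(r):
                        # each block of r*r channels fills an r x r pixel patch
                        out[y * r + dy][x * r + dx][c] = t[y][x][c * r * r + dy * r + dx]
    return out

# A 1x1 input with 4 channels and r=2 becomes a 2x2 single-channel output.
up = pixel_shuffle([[[0.0, 1.0, 2.0, 3.0]]], 2)
```

This trades channel depth for spatial resolution, which is why the decoder can upsample without transposed convolutions.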
Optionally, in the multi-modal automated ventricle segmentation system, the thick-cut picture set and the thin-cut picture set are used as input to the ventricle segmentation model, the thick-cut labeling information is used as the supervisory signal, and the ventricle segmentation model is optimized through a loss function of the form:
where λ is a hyper-parameter for adjusting the influence of the two loss terms, p_s and p_t are the ventricle segmentation model's prediction probabilities for the thick-cut and thin-cut pictures respectively, each an H × W × C tensor in which C represents the number of classes, and the supervised term is a cross-entropy loss of the form:
Optionally, in the multi-modal automated ventricle segmentation system, the divergence term expresses the distance between the predicted probability distribution for a thin-cut picture and the uniform probability distribution, the uniform probability distribution being:
Minimizing this term makes the classes of the prediction probability more discriminative, implicitly pushing the image features away from the decision boundary so that the two distributions are aligned; mathematically the term is expressed as
where C is the number of categories.
Alternatively, in the multi-modal automated ventricle segmentation system, the Pearson χ² divergence is used as the concrete form of the divergence term, the gradient of the Pearson χ² divergence being expressed as:
This gradient has a constant growth rate that does not change as the prediction probability changes.
Optionally, in the multi-modal automated ventricle segmentation system, the Pearson χ² divergence is f(x) = x² − 1.
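To make the Pearson χ² divergence concrete, the sketch below evaluates D_f(p‖u) = (1/C)·Σ_c f(C·p_c) with f(x) = x² − 1 against the uniform distribution u = (1/C, …, 1/C). This is one plausible reading of the patent's formulation, not its verbatim formula; under it the divergence simplifies to C·Σ p_c² − 1, which is zero at the uniform distribution and largest for confident one-hot predictions.

```python
def pearson_chi2_from_uniform(p):
    """Pearson chi^2 f-divergence from the uniform distribution, using
    f(x) = x^2 - 1; simplifies to C * sum(p_c^2) - 1 (assumed form)."""
    C = len(p)
    return C * sum(pc * pc for pc in p) - 1.0

# Zero when p is uniform; rises to C - 1 for a fully confident prediction.
flat = pearson_chi2_from_uniform([0.25, 0.25, 0.25, 0.25])
peaky = pearson_chi2_from_uniform([1.0, 0.0, 0.0, 0.0])
```

Because the divergence grows as predictions sharpen, a training term built from it rewards discriminative outputs on the unlabeled thin cuts, matching the behavior described above.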
The application also provides a using method of the multi-mode automatic ventricle segmentation system, which comprises the following steps:
the collection module collects the thick layer scan data set D1 which has been manually segmented and the thin layer scan data set D2 which has not been segmented;
the construction module imports a pre-training model to construct an encoder ER, constructs a decoder DR through a sub-pixel convolution layer, and constructs a multi-mode ventricle segmentation model M by combining the encoder ER and the decoder DR;
the supervision module generates a supervision signal S using the segmented information in the thick layer scan dataset D1;
the input module takes the thick layer scanning data set D1 and the thin layer scanning data set D2 as input, extracts the characteristic F, and inputs the characteristic F and the supervision signal S into the decoder DR;
the loss function module combines the loss function L1 generated by the thick-layer scanning data set D1 and the loss function L2 generated by the thin-layer scanning data set D2 to obtain a loss function L of the ventricle segmentation model M;
the training module continuously trains and optimizes the ventricle segmentation model M according to the loss function L;
the segmentation module uses a trained ventricle segmentation model M to automatically segment brain images of different multi-mode scanning methods.
In the multi-modal automated ventricle segmentation system and its method of use provided by the application, the collection module collects a manually segmented thick-layer scan data set D1 and an unsegmented thin-layer scan data set D2; the construction module imports a pre-trained model to build an encoder ER, builds a decoder DR from sub-pixel convolution layers, and combines the encoder ER and the decoder DR into a multi-modal ventricle segmentation model M; the supervision module generates a supervisory signal S from the segmentation information in the thick-layer scan data set D1; the input module takes the thick-layer scan data set D1 and the thin-layer scan data set D2 as input, extracts features F, and feeds them together with the supervisory signal S into the decoder DR; the loss function module combines the loss function L1 generated from the thick-layer scan data set D1 with the loss function L2 generated from the thin-layer scan data set D2 to obtain the loss function L of the ventricle segmentation model M; the training module iteratively trains and optimizes the ventricle segmentation model M according to the loss function L; and the segmentation module uses the trained ventricle segmentation model M to automatically segment brain images from different scanning methods. In this way, both thick-cut and thin-cut brain images are handled by a single general model.
The application mainly consists of a segmentation network based on an Encoder-Decoder architecture together with an unsupervised domain adaptation technique, and can be trained using only unlabeled thin-cut pictures and labeled thick-cut pictures. Since no thin-cut labeling information is needed during training, manual effort is reduced: thick-cut labels are relatively easy to obtain and therefore cheaper, while thin-cut labels are costly.
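The training signal just described can be sketched as a loss that combines supervised cross-entropy on labeled thick cuts with an unsupervised confidence term on unlabeled thin cuts. This is a minimal illustration under stated assumptions: the additive combination L = L1 + λ·L2 follows the loss-function module above, but the negated squared-probability form of the thin-cut term (related to the Pearson χ² divergence from uniform) is a plausible reading, not the patent's verbatim formula.

```python
import math

def cross_entropy(p, label):
    """Supervised loss for one thick-cut pixel: p is a class-probability
    list, label is the ground-truth class index."""
    return -math.log(p[label])

def thin_cut_term(p):
    """Unsupervised loss for one thin-cut pixel; the -0.5 * sum(p^2) form
    is an assumption (it rewards confident, discriminative predictions)."""
    return -0.5 * sum(pc * pc for pc in p)

def total_loss(thick_preds, thick_labels, thin_preds, lam):
    """L = L1 (supervised, thick cuts) + lambda * L2 (unsupervised, thin cuts)."""
    L1 = sum(cross_entropy(p, y) for p, y in zip(thick_preds, thick_labels)) / len(thick_preds)
    L2 = sum(thin_cut_term(p) for p in thin_preds) / len(thin_preds)
    return L1 + lam * L2
```

Note that only `thick_preds` needs labels; the thin-cut batch contributes through its prediction confidence alone, which is what removes the need for thin-cut annotation.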
Drawings
FIG. 1 is a schematic illustration of a method of using a multi-modal automated ventricular segmentation system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a multi-modal automated ventricular segmentation system according to an embodiment of the present application.
Detailed Description
The multi-modal automated ventricle segmentation system and method of use thereof according to the present application is described in further detail below with reference to the accompanying drawings and specific examples. Advantages and features of the application will become more apparent from the following description and from the claims. It should be noted that the drawings are in a very simplified form and are all to a non-precise scale, merely for convenience and clarity in aiding in the description of embodiments of the application.
In addition, features of different embodiments of the application may be combined with each other, unless otherwise specified. For example, a feature of the second embodiment may be substituted for a corresponding feature of the first embodiment, or may have the same or similar function, and the resulting embodiment may fall within the scope of disclosure or description of the application.
The core idea of the application is to provide a multi-modal automated ventricle segmentation system and a method of use thereof for achieving ventricle segmentation in different slice thicknesses (thick and thin) in different modalities (MRI and CT).
To achieve the above-mentioned idea, the present application provides a multi-modal automated ventricle segmentation system and a method for using the same, as shown in fig. 1, comprising: a collection module configured to collect a thick layer scan data set D1 that has been manually segmented and a thin layer scan data set D2 that has not been segmented; a construction module configured to import the pre-training model to construct an encoder ER, construct a decoder DR through a sub-pixel convolution layer, and construct a multi-modal ventricle segmentation model M in combination with the encoder ER and the decoder DR; a supervision module configured to generate a supervision signal S using the segmented information in the thick layer scan dataset D1; an input module configured to take as input the thick layer scan data set D1 and the thin layer scan data set D2, extract the features F, and input the features together with the supervisory signal S to the decoder DR; the loss function module is configured to combine the loss function L1 generated by the thick-layer scanning data set D1 and the loss function L2 generated by the thin-layer scanning data set D2 to obtain a loss function L of the ventricle segmentation model M; the training module is configured to continuously train and optimize the ventricle segmentation model M according to the loss function L; the segmentation module is configured to automatically segment brain images of different multi-mode scanning methods by using a trained ventricle segmentation model M.
Specifically, the symbols are defined as follows:
thick cut picture collection:
a thin cut picture set:
the use method of the system mainly comprises two major parts, namely a model framework and a training mode. The main structure diagram is shown in fig. 2: wherein only thin cut pictures and marked thick cut pictures are needed for training. In the training process, the labeling information of the thin cut slices is not needed, so that the manpower consumption can be reduced. Because thick cut labels are relatively easy to obtain, the relative label costs are lower, while thin cuts are higher.
In one embodiment of the application, the encoder is constructed as follows: a ResNet-34 residual neural network pre-trained on the ImageNet dataset is used as the model's encoder. The information of each layer is extracted through residual blocks (ResBlock) respectively. The thick-cut and thin-cut pictures are then used as input to extract features, and the extracted features are respectively input into the decoder.
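The residual blocks mentioned above add a block's input back onto its transformed output, so each stage refines features while preserving the information flowing through. A minimal sketch of that skip connection, on a flat feature vector rather than the actual ResNet-34 convolutional layers:

```python
def residual_block(x, transform):
    """y = x + F(x): the skip connection at the heart of a ResNet residual
    block; `transform` stands in for the block's convolutional layers."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

# With a toy "convolution" that scales features, the input is carried through.
features = residual_block([1.0, 2.0, 3.0], lambda v: [0.1 * vi for vi in v])
```

When F(x) is zero the block reduces to the identity, which is what makes deep residual encoders easy to optimize.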
In one embodiment of the application, the decoder is constructed as follows: sub-pixel convolution layers are used to reconstruct and recover the picture, expressed mathematically as
F_L = SP(W_L * F_{L-1} + b_L);
where the SP(·) operation rearranges a tensor of shape H × W × C·r² into a tensor of shape rH × rW × C, F_{L-1} and F_L are the input and output features of the layer respectively, and W_L and b_L are the trainable parameters of the sub-pixel convolution. After passing through the decoder, the prediction probability of each class is obtained.
In one embodiment of the application, the training scheme comprises: the thick-cut and thin-cut pictures are used as the model's input, the thick-cut labeling information is used as the supervisory signal, and the model is optimized through the following loss function:
where λ is a hyper-parameter for adjusting the influence of the two loss terms, p_s and p_t are the model's prediction probabilities for the thick-cut and thin-cut pictures respectively, each a tensor of shape H × W × C, where C represents the number of classes, and the supervised term is a cross-entropy loss of the form:
because the object of the present application is that the model requires accurate segmentation performance on both thin and thick cuts,predictive probability distribution and uniform probability distribution, which can be represented as a slice picture>Is a distance of (3). So minimize +.>The discrimination of the categories of the prediction probabilities can be made even more, as this implicitly pushes the image features away from the decision boundary, so that the two distributions are aligned. />Can be expressed mathematically as
Wherein C is the number of categories. Most methods take f (x) =x log x to representAlso known as KL divergence. However, the use of KL divergence has the following problems: in the optimization, the->The gradient of (c) may be extremely unbalanced for high probability samples and low probability samples, taking into account the two-dimensional form:
the growth speed of the material is along withIs increased, resulting in a gradient with a certain bias.
Therefore, to solve the above problem, the Pearson χ² divergence (i.e. f(x) = x² − 1) is used as the concrete form of the divergence term, and its gradient may be expressed as:
It can be seen that this gradient has a constant growth rate that does not change as the prediction probability changes.
In summary, the foregoing embodiments describe in detail different configurations of the multi-modal automated ventricle segmentation system and the method of use thereof, and of course, the present application includes, but is not limited to, the configurations listed in the foregoing embodiments, and any modifications based on the configurations provided by the foregoing embodiments are within the scope of the present application. One skilled in the art can recognize that the above embodiments are illustrative.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, the description is relatively simple because of corresponding to the method disclosed in the embodiment, and the relevant points refer to the description of the method section.
The above description is only illustrative of the preferred embodiments of the present application and is not intended to limit the scope of the present application, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (7)

1. A multi-modal automated ventricular segmentation system, comprising:
a collection module configured to collect a thick layer scan data set D1 that has been manually segmented and a thin layer scan data set D2 that has not been segmented;
a construction module configured to import the pre-training model to construct an encoder ER, construct a decoder DR through a sub-pixel convolution layer, and construct a multi-modal ventricle segmentation model M in combination with the encoder ER and the decoder DR;
a supervision module configured to generate a supervision signal S using the segmented information in the thick layer scan dataset D1;
an input module configured to take as input the thick layer scan data set D1 and the thin layer scan data set D2, extract the features F, and input the features together with the supervisory signal S to the decoder DR;
the loss function module is configured to combine the loss function L1 generated by the thick-layer scanning data set D1 and the loss function L2 generated by the thin-layer scanning data set D2 to obtain a loss function L of the ventricle segmentation model M;
the training module is configured to continuously train and optimize the ventricle segmentation model M according to the loss function L;
a segmentation module configured to automatically segment brain images of different multi-modality scanning methods using a trained ventricle segmentation model M,
wherein constructing the encoder ER comprises: utilizing a ResNet-34 residual neural network pre-trained on the ImageNet dataset as the encoder; extracting the information of each layer through residual modules respectively; and taking the thick-cut picture set and the thin-cut picture set as input to extract features F, the extracted features F being respectively input into the decoder,
wherein the decoder reconstructs and recovers the picture using sub-pixel convolution layers, expressed mathematically as:
F_L = SP(W_L * F_{L-1} + b_L)
wherein the SP(·) operation rearranges a tensor of shape H × W × C·r² into a tensor of shape rH × rW × C, C representing the number of classes, F_{L-1} and F_L are the input and output features of the layer respectively, and W_L and b_L are the trainable parameters of the sub-pixel convolution;
after passing through the decoder, the prediction probability of each class is obtained.
2. The multi-modal automated ventricle segmentation system of claim 1 wherein the thick-layer scan dataset is formed from a collection of thick-cut pictures, the collection of thick-cut pictures being:
the thin-layer scanning data set is formed according to a thin-cut picture set, wherein the thin-cut picture set is as follows:
3. The multi-modal automated ventricular segmentation system of claim 1, wherein the thick-cut and thin-cut picture sets are employed as input to the ventricle segmentation model and the thick-cut labeling information is applied as the supervisory signal, the ventricle segmentation model being optimized by a loss function of the form:
wherein λ is a hyper-parameter for adjusting the influence of the two loss terms, p_s and p_t are the ventricle segmentation model's prediction probabilities for the thick-cut and thin-cut pictures respectively, each an H × W × C tensor in which C represents the number of classes, and the supervised term is a cross-entropy loss of the form:
4. The multi-modal automated ventricular segmentation system of claim 3, wherein the divergence term expresses the distance between the predicted probability distribution for a thin-cut picture and the uniform probability distribution, the uniform probability distribution being:
minimizing this term making the classes of the prediction probability more discriminative and implicitly pushing the image features away from the decision boundary so that the two distributions are aligned, the term being expressed mathematically as:
wherein C is the number of classes.
5. The multi-modal automated ventricular segmentation system as claimed in claim 4, employing the Pearson χ² divergence as the concrete form of the divergence term, the gradient of the Pearson χ² divergence being expressed as:
the gradient having a constant growth rate that does not change as the prediction probability changes.
6. The multi-modal automated ventricular segmentation system as set forth in claim 5, wherein the Pearson χ² divergence is f(x) = x² − 1.
7. A method of using a multi-modal automated ventricular segmentation system, comprising:
the collection module collects the thick layer scan data set D1 which has been manually segmented and the thin layer scan data set D2 which has not been segmented;
the construction module imports a pre-training model to construct an encoder ER, constructs a decoder DR through a sub-pixel convolution layer, and constructs a multi-mode ventricle segmentation model M by combining the encoder ER and the decoder DR;
the supervision module generates a supervision signal S using the segmented information in the thick layer scan dataset D1;
the input module takes the thick layer scanning data set D1 and the thin layer scanning data set D2 as input, extracts the characteristic F, and inputs the characteristic F and the supervision signal S into the decoder DR;
the loss function module combines the loss function L1 generated by the thick-layer scanning data set D1 and the loss function L2 generated by the thin-layer scanning data set D2 to obtain a loss function L of the ventricle segmentation model M;
the training module continuously trains and optimizes the ventricle segmentation model M according to the loss function L;
the segmentation module uses the trained ventricle segmentation model M to automatically segment brain images of different multi-mode scanning methods,
wherein constructing the encoder ER comprises: utilizing a ResNet-34 residual neural network pre-trained on the ImageNet dataset as the encoder; extracting the information of each layer through residual modules respectively; and taking the thick-cut picture set and the thin-cut picture set as input to extract features F, the extracted features F being respectively input into the decoder,
wherein the decoder reconstructs and recovers the picture using sub-pixel convolution layers, expressed mathematically as:
F_L = SP(W_L * F_{L-1} + b_L)
wherein the SP(·) operation rearranges a tensor of shape H × W × C·r² into a tensor of shape rH × rW × C, C representing the number of classes, F_{L-1} and F_L are the input and output features of the layer respectively, and W_L and b_L are the trainable parameters of the sub-pixel convolution;
after passing through the decoder, the prediction probability of each class is obtained.
CN202011062511.8A 2020-09-30 2020-09-30 Multi-modal automated ventricle segmentation system and method of use thereof Active CN112200810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062511.8A CN112200810B (en) 2020-09-30 2020-09-30 Multi-modal automated ventricle segmentation system and method of use thereof


Publications (2)

Publication Number Publication Date
CN112200810A CN112200810A (en) 2021-01-08
CN112200810B true CN112200810B (en) 2023-11-14

Family

ID=74012564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011062511.8A Active CN112200810B (en) 2020-09-30 2020-09-30 Multi-modal automated ventricle segmentation system and method of use thereof

Country Status (1)

Country Link
CN (1) CN112200810B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284126B (en) * 2021-06-10 2022-06-24 Anhui Provincial Hospital (The First Affiliated Hospital of USTC) Method for predicting hydrocephalus shunt operation curative effect by artificial neural network image analysis
CN113255683B (en) * 2021-06-25 2021-10-01 Guangdong Xingrui Technology Co., Ltd. Image segmentation method, system and storage medium based on neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062753A (en) * 2017-12-29 2018-05-22 Chongqing University of Technology Unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning
CN108492297A (en) * 2017-12-25 2018-09-04 Chongqing University of Technology MRI brain tumor localization and intratumoral segmentation method based on deep cascaded convolutional networks
CN109685804A (en) * 2019-01-04 2019-04-26 Graduate School at Shenzhen, Tsinghua University Multi-channel head magnetic resonance imaging tissue segmentation method
CN110782427A (en) * 2019-08-19 2020-02-11 Dalian University Automatic magnetic resonance brain tumor segmentation method based on separable dilated convolution
CN110795821A (en) * 2019-09-25 2020-02-14 Dilu Technology Co., Ltd. Deep reinforcement learning training method and system based on scene differentiation
CN111080575A (en) * 2019-11-22 2020-04-28 Southeast University Thalamus segmentation method based on a residual dense U-shaped network model
CN111612754A (en) * 2020-05-15 2020-09-01 Huashan Hospital Affiliated to Fudan University MRI tumor optimized segmentation method and system based on multi-modal image fusion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11270445B2 (en) * 2017-03-06 2022-03-08 The Regents Of The University Of California Joint estimation with space-time entropy regularization
US10753997B2 (en) * 2017-08-10 2020-08-25 Siemens Healthcare Gmbh Image standardization using generative adversarial networks
US10624558B2 (en) * 2017-08-10 2020-04-21 Siemens Healthcare Gmbh Protocol independent image processing with adversarial networks


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"CT synthesis from MRI images based on deep learning methods for MRI-only radiotherapy"; Yafen Li et al.; 2019 International Conference on Medical Imaging Physics and Engineering (ICMIPE); full text *
"Learning Based Segmentation of CT Brain Images: Application to Postoperative Hydrocephalic Scans"; Venkateswararao Cherukuri et al.; IEEE Transactions on Biomedical Engineering; full text *
"White matter hyperintensity segmentation method based on multi-network ensemble"; Li Xinxin; Wang Xuxian; Cheng Jian; Xu Hong; Li Zixiao; Liu Tao; Chinese Journal of Stroke (No. 3); full text *

Also Published As

Publication number Publication date
CN112200810A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
Jalali et al. ResBCDU-Net: a deep learning framework for lung CT image segmentation
Yamanakkanavar et al. MRI segmentation and classification of human brain using deep learning for diagnosis of Alzheimer’s disease: a survey
CN109886273B (en) CMR image segmentation and classification system
CN111488914B (en) Alzheimer disease classification and prediction system based on multitask learning
CN108464840B (en) Automatic detection method and system for breast lumps
CN107016395B (en) Identification system for sparsely expressed primary brain lymphomas and glioblastomas
CN113516210B (en) Lung adenocarcinoma squamous carcinoma diagnosis model training method and device based on PET/CT
Khawaja et al. A multi-scale directional line detector for retinal vessel segmentation
CN112862830B (en) Multi-mode image segmentation method, system, terminal and readable storage medium
CN107766874B (en) Measuring method and measuring system for ultrasonic volume biological parameters
CN112200810B (en) Multi-modal automated ventricle segmentation system and method of use thereof
CN103249358A (en) Medical image processing device
CN111079901A (en) Acute stroke lesion segmentation method based on small sample learning
Rachmadi et al. Deep learning vs. conventional machine learning: pilot study of WMH segmentation in brain MRI with absence or mild vascular pathology
CN112508953A (en) Meningioma rapid segmentation qualitative method based on deep neural network
CN114565613A (en) Pancreas postoperative diabetes prediction system based on supervised deep subspace learning
CN108319969B (en) Brain glioma survival period prediction method and system based on sparse representation framework
CN116468655A (en) Brain development atlas and image processing system based on fetal magnetic resonance imaging
Rasheed et al. Recognizing brain tumors using adaptive noise filtering and statistical features
Benvenuto et al. A fully unsupervised deep learning framework for non-rigid fundus image registration
CN114283406A (en) Cell image recognition method, device, equipment, medium and computer program product
CN115409812A (en) CT image automatic classification method based on fusion time attention mechanism
CN116228731A (en) Multi-contrast learning coronary artery high-risk plaque detection method, system and terminal
CN115082494A (en) Coronary artery image segmentation method based on multi-label and segmentation network
CN115294023A (en) Liver tumor automatic segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant