CN109102512B - DBN neural network-based MRI brain tumor image segmentation method - Google Patents


Publication number
CN109102512B
CN109102512B (application CN201810885507.8A)
Authority
CN
China
Prior art keywords
training
image
network
samples
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810885507.8A
Other languages
Chinese (zh)
Other versions
CN109102512A (en)
Inventor
刘红英
沈雄杰
尚凡华
杨淑媛
焦李成
缑水平
汪玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201810885507.8A
Publication of CN109102512A
Application granted
Publication of CN109102512B
Legal status: Active

Classifications

    • G06T 7/11 (Image analysis; Segmentation; Region-based segmentation)
    • G06T 7/194 (Segmentation involving foreground-background segmentation)
    • G06F 18/214 (Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting)
    • G06F 18/241 (Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches)
    • G06N 3/045 (Neural networks; Combinations of networks)
    • G06T 2207/10088 (Image acquisition modality; Magnetic resonance imaging [MRI])
    • G06T 2207/20081 (Special algorithmic details; Training; Learning)
    • G06T 2207/20084 (Special algorithmic details; Artificial neural networks [ANN])
    • G06T 2207/30016 (Biomedical image processing; Brain)
    • G06T 2207/30096 (Biomedical image processing; Tumor; Lesion)


Abstract

The invention discloses an MRI brain tumor image segmentation method based on a DBN neural network. First, several images are selected from an existing library of patient brain MRI sequence images as training samples, which are preprocessed and whose saliency maps are computed. The samples are then downsampled and fed to a DBN neural network for unsupervised training followed by supervised training; downsampling the non-tumor region counteracts the extreme class imbalance of the training samples and improves the detection rate of positive samples. After training, the test image to be segmented can be fed to the network for segmentation; the introduced visual attention model improves the network's accuracy on hard-to-segment regions, and finally the segmentation result is output.

Description

DBN neural network-based MRI brain tumor image segmentation method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an MRI brain tumor image segmentation method based on a DBN neural network.
Background
In recent years, brain tumors have become one of the most prevalent tumors. Magnetic Resonance Imaging (MRI) provides high-spatial-resolution, high-contrast imaging of brain soft tissue, making it the best choice for physicians performing structural analysis of the brain, and it is therefore widely used clinically. In brain MRI image processing, accurate segmentation of the tumor region is a crucial step that underpins the physician's subsequent analysis and judgment. At present, this step still depends heavily on manual segmentation, which is time-consuming and unstable, so an accurate automatic segmentation method has high practical value. However, the high variability of brain tumors in shape, position and structure, together with the strong influence of different patients and different devices on the gray-scale distribution of the images, makes a high-precision segmentation method difficult to find.
Some scholars have already studied this problem, mostly using traditional machine learning algorithms (such as random forests and Markov random fields) to segment normal brain tissue (such as white matter and gray matter) and abnormal brain tissue (such as brain tumors). These methods, however, usually require features to be extracted by hand in advance, which demands domain expertise from the designer and is impractical in many cases; hand-crafted features are also narrowly targeted and generalize poorly. The emergence and development of deep learning models address these problems.
A deep learning model is a stacked structure that performs feature learning with a multi-layer neural network (generally more than 3 layers); its original motivation was to imitate how the human brain learns and analyzes. Compared with traditional machine learning algorithms, deep learning models have stronger feature abstraction capability and can express more complex functions, giving them clear advantages in tasks such as speech recognition, image recognition and machine translation. The Deep Belief Network (DBN) is a neural network developed from the Boltzmann machine; it is an unsupervised probabilistic generative model that fits the probability distribution of the input data. Training of the whole network proceeds in two stages: each RBM is first trained unsupervised, layer by layer, with the contrastive divergence algorithm, and the network parameters are then fine-tuned with supervision by back-propagation, bringing them closer to the global optimum. DBNs have been shown to perform well in many medical image processing tasks; for example, Tuan et al. combined DBNs with level sets for left-ventricle segmentation of the heart, obtaining the best results currently available for that task.
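The layer-wise contrastive divergence training described above can be sketched for a single RBM layer. This is an illustrative sketch under stated assumptions, not the patent's implementation; the function name `cd1_step` and all hyperparameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One CD-1 update for a Bernoulli RBM.
    v0: (batch, n_vis) visible batch; W: (n_vis, n_hid); b, c: biases."""
    # Positive phase: hidden probabilities given the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    # Negative phase: one Gibbs step back to the visible layer and up again
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Gradient approximation: <v h>_data - <v h>_model
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy run: 81-dim visible layer (a flattened 9 x 9 patch), 32 hidden units
W = rng.normal(0, 0.01, (81, 32))
b = np.zeros(81); c = np.zeros(32)
v = (rng.random((8, 81)) < 0.5).astype(float)
W, b, c = cd1_step(v, W, b, c)
```

In a DBN, the hidden activations of one trained RBM become the visible input of the next, after which back-propagation fine-tunes the whole stack.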
Disclosure of Invention
The technical problem solved by the present invention is to provide an MRI brain tumor image segmentation method based on a DBN neural network, which improves on the DBN and applies it to segmenting brain tumors in MRI images.
The invention adopts the following technical scheme:
a MRI brain tumor image segmentation method based on DBN neural network, select many pieces from existing patient's brain MRI sequence image library as training sample first, preprocess it and calculate the significance map; then, the down-sampling is sent to a DBN neural network to carry out unsupervised training and supervised training in sequence; and after the training is finished, sending the test image to be segmented into a network for segmentation, and finally outputting a segmentation result.
Specifically, the method comprises the following steps:
s1, dividing N frames of images in the brain MRI sequence image into a training set and a test set, and preprocessing data;
s2, calculating a saliency map of each frame of image, and normalizing each saliency map;
s3, downsampling the training set samples according to the saliency map;
s4, sending the training set sample obtained in the step S3 to a DBN network for unsupervised pre-training through a contrast divergence method;
s5, simultaneously sending the training set samples and the labels thereof into a network, and finely adjusting network parameters through an Adam algorithm;
s6, testing the set image
Figure BDA0001755548100000021
And taking each pixel point as the center, taking a 9 x 9 area of the pixel point and spreading the area into 81-dimensional column vectors, sending the column vectors into a trained network for testing, outputting a classification label of each pixel point to obtain a divided binary image, and supplementing missing pixel values around the pixel points by using points on the edge of the image in a symmetrical filling mode.
Further, step S1 is specifically as follows:
s101, selecting a section with the largest tumor area from brain MRI sequence diagrams of N patients with brain tumors, and taking t frame images as a training set DTrainThe remaining N-t frames are used as a test set Dtest
S102, normalize each image D_i of the training set and test set, 1 ≤ i ≤ N; the normalized image D̂_i is computed as:

D̂_i = (D_i − min(D_i)) / (max(D_i) − min(D_i))
further, step S2 is specifically as follows:
s201, setting the pixel point value of the mth row and the nth column of each frame image as
Figure BDA0001755548100000034
Averaging each frame of image
Figure BDA0001755548100000035
S202, performing convolution operation on each frame of image and 5 × 5 Gaussian kernels respectively to obtain Gaussian blurred images of each frame of image
Figure BDA0001755548100000036
S203, compute the saliency map S_i of each frame;
S204, normalize S_i to obtain the normalized saliency map S̄_i, where S̄_i^(m,n) denotes the saliency value of the pixel at row m, column n of frame i.
Further, the saliency map S_i of each frame in step S203 is computed as:

S_i^(m,n) = (Ī_i − I_i^G(m,n))²

Further, in step S204 the normalized saliency map S̄_i is computed as:

S̄_i^(m,n) = (S_i^(m,n) − min(S_i)) / (max(S_i) − min(S_i))
further, in step S3, the saliency values of each pixel of each frame of image in the training set are sorted from large to small, and the first h pixels with the largest saliency are centered on the image in the training set
Figure BDA0001755548100000041
Taking a square region of 9 x 9, spreading the square region into a column vector of 81 dimensions according to rows to be used as a training sample, obtaining t x h training samples, and setting AkDenotes the k-th training sample, LkThe label of the kth training sample is 0, which means belonging to the background region, and 1, which means belonging to the tumor region.
Further, step S5 is specifically as follows:
s501, the number of samples sent each time during training: let "batch _ size" 1024, where a sample labeled 0 is a (label 0), and the number of samples is n0The sample labeled 1 is a (label ═ 1), and the number thereof is n1F (-) denotes the output of the last layer of the DBN network;
s502, solving the mean value of samples with labels of 0 or 1 in each batch of training samples, the total intra-class variance of the samples in each characteristic dimension output by the last layer of the DBN network and the inter-class variance of each characteristic dimension output by the samples in the last layer of the DBN network;
s503, calculating the loss function of the network as follows:
Figure BDA0001755548100000042
wherein:
Figure BDA0001755548100000043
Skand expressing the significance value of the kth sample, wherein the significance value of the kth sample expresses the corresponding loss weight distributed to each point according to the significance value of the pixel, thereby promoting the identification capability of the network on the tumor region, Softmax (DEG) expresses a Softmax classifier function, lambda is an adjustable hyperparameter which expresses the weight of a divergence regularization term, and then the loss function is minimized through an Adam algorithm, and the network parameters are continuously updated until convergence.
Further, in step S502, the mean of the last-layer outputs of the samples labeled 0 or 1 in each training batch is computed as:

μ(label=x) = (1 / n_x) · Σ_{A_k ∈ A(label=x)} f(A_k)

where x ∈ {0,1}, batch_size denotes the number of samples fed per batch during training, A(label=x) denotes the samples labeled x, f(·) denotes the last-layer output of the DBN network, and μ(label=x) denotes the last-layer feature mean of the samples labeled x.

The total within-class variance δ_in of the samples on each feature dimension of the last-layer DBN output is computed as:

δ_in = Σ_{x∈{0,1}} (1 / n_x) · Σ_{A_k ∈ A(label=x)} (f(A_k) − μ(label=x))²

where n_x denotes the number of samples labeled x.
the inter-class variance delta of each characteristic dimension of the sample output at the last layer of the DBN networkbetweenThe calculation is as follows:
δbetween=(μ(label=0)-μ(label=1))2
where μ (label ═ 0) denotes the average value of the output of the sample labeled 0 in the last layer, and μ (label ═ 1) denotes the average value of the output of the sample labeled 1 in the last layer.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a DBN neural network-based MRI brain tumor image segmentation method, which comprises the steps of firstly selecting a plurality of images from an existing patient brain MRI sequence image library as training samples, preprocessing the training samples and calculating a significance map; then, downsampling is sent to a DBN neural network to be subjected to unsupervised training and supervised training in sequence, downsampling processing is performed on a non-tumor area under the condition that a training sample is extremely unbalanced, and the detection rate of positive samples is improved; after training is finished, the test image to be segmented can be sent to the network for segmentation, the visual attention model is introduced, the accuracy of the network in segmenting the area difficult to segment is improved, and finally the segmentation result is output.
Furthermore, the normalization operation reduces the interference caused by nonuniform illumination in MRI images.
Furthermore, downsampling the pixels of the non-tumor region in order of saliency retains as much non-tumor information as possible, eliminates the influence of sample imbalance, and reduces the memory and computation required for training.
Furthermore, adding a divergence regularization term to the back-propagated error enhances the robustness of the features the DBN extracts and further improves network performance; meanwhile, assigning each pixel a loss weight according to its saliency value concentrates the network's attention on the tumor region and on regions whose pixel values are close to it, improving its ability to identify error-prone regions.
In conclusion, the method reduces the interference caused by uneven illumination in MRI images through the normalization operation, eliminates the influence of sample imbalance through downsampling, and improves the detection rate of positive samples. Meanwhile, introducing the divergence regularization term and the visual attention model further improves network performance.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a general flow chart of the present invention;
fig. 2 is a diagram of a DBN network structure used in the present invention;
FIG. 3 is a view of one of the MRI slices used in the test;
FIG. 4 is a graph of the results of the normalization of the saliency map of FIG. 3;
FIG. 5 is a schematic diagram of the segmentation of FIG. 4, wherein (a) is a real label graph and (b) is a graph of the segmentation result of the present invention.
Detailed Description
The invention provides an MRI brain tumor image segmentation method based on a DBN neural network, which can be used for assisting a doctor in diagnosis and segmentation of brain tumors. The realization process is as follows: firstly, picking out a plurality of samples from an existing patient brain MRI sequence image library as training samples, preprocessing the training samples and calculating a saliency map. Then the down-sampling is sent to a DBN neural network to carry out unsupervised training and supervised training in sequence. After training is finished, the test image to be segmented can be sent to a network for segmentation, and finally, a segmentation result is output. According to the invention, the image features are extracted by a deep learning method, so that the complexity and instability of manual feature extraction are avoided. In addition, the accuracy of pixel classification is improved by introducing a downsampling balance sample and a visual attention mechanism, and a better segmentation result is obtained for the MRI brain tumor.
Referring to fig. 1, the MRI brain tumor image segmentation method based on the DBN neural network of the present invention includes the following steps:
s1, dividing N frames of images in the brain MRI sequence image into a training set and a test set, and preprocessing data;
s101, selecting a section with the largest tumor area from brain MRI sequence diagrams of N patients with brain tumors, and taking t frame images as a training set DTrainThe remaining N-t frames are used as a test set Dtest
In the embodiment of the present application, the training and testing data both come from the FLAIR-modality image data of the BraTS 2015 challenge; the resolution of each frame is 250 × 250 pixels;
s102, carrying out image D on each frame of training set and test setiI is 1. ltoreq. N, by
Figure BDA0001755548100000071
Carrying out normalization processing on each image;
Figure BDA0001755548100000072
the calculation is as follows:
Figure BDA0001755548100000073
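As a rough illustration of step S102, a per-image min-max normalization can be sketched as follows; the original shows the exact formula only as an image, so the min-max form and the function name `normalize_image` are assumptions:

```python
import numpy as np

def normalize_image(D):
    """Per-image min-max normalization to [0, 1] (assumed form of step S102)."""
    D = D.astype(float)
    lo, hi = D.min(), D.max()
    # Guard against a constant image, which would divide by zero
    return (D - lo) / (hi - lo) if hi > lo else np.zeros_like(D)

img = np.array([[0., 50.], [100., 200.]])
out = normalize_image(img)  # minimum maps to 0, maximum maps to 1
```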
s2, calculating a saliency map of each frame of image, and normalizing each saliency map;
s201, setting the pixel point value of the mth row and the nth column of each frame image as
Figure BDA0001755548100000074
Averaging each frame of image
Figure BDA0001755548100000075
Figure BDA0001755548100000076
The calculation is as follows:
Figure BDA0001755548100000077
s202, performing convolution operation on each frame of image and 5 × 5 Gaussian kernels respectively to obtain Gaussian blurred images of each frame of image
Figure BDA0001755548100000078
S203, finding a saliency map of each frame of image, specifically as follows:
Figure BDA0001755548100000079
s204, according to SiObtaining a normalized significance map after the normalization,
Figure BDA00017555481000000710
the saliency value of the pixel of the mth row and nth column of the ith frame image is shown, and as can be seen from fig. 5, the pixel value of the tumor region is greatly different from that of the background region and has a larger saliency;
Sithe calculation is as follows:
Figure BDA00017555481000000711
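Steps S201 to S204 can be sketched as follows. This is an illustrative reading, not the patent's code: the squared-difference saliency, the kernel width, and all function names are assumptions reconstructed from the surrounding text:

```python
import numpy as np

def gaussian_kernel5(sigma=1.0):
    """5 x 5 Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(-2, 3)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def conv2d_same(img, k):
    """Naive 'same' 2-D convolution with symmetric edge padding."""
    r = k.shape[0] // 2
    p = np.pad(img, r, mode='symmetric')
    out = np.zeros_like(img, dtype=float)
    for m in range(img.shape[0]):
        for n in range(img.shape[1]):
            out[m, n] = (p[m:m + 2 * r + 1, n:n + 2 * r + 1] * k).sum()
    return out

def saliency_map(img, sigma=1.0):
    """Saliency as squared distance of the Gaussian-blurred image from the
    frame mean, then min-max normalized (assumed form of steps S201-S204)."""
    blur = conv2d_same(img.astype(float), gaussian_kernel5(sigma))
    s = (img.mean() - blur) ** 2
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

img = np.zeros((20, 20)); img[8:12, 8:12] = 1.0  # bright blob on dark background
S = saliency_map(img)  # values in [0, 1]; highest inside the blob
```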
s3, downsampling the training set samples according to the saliency map;
respectively sequencing the significance values of all pixel points of each frame of image in the training set from large to small, and centering on the first h pixel points with the maximum significance in the images in the training set
Figure BDA00017555481000000712
The 9 x 9 square regions are taken and developed into 81-dimensional column vectors by rows as training samples. From this, t x h training samples are obtained, let AkDenotes the k-th training sample, LkA label of the kth training sample, wherein the label is 0 and belongs to the background area, and the label is 1 and belongs to the tumor area;
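The saliency-guided downsampling of step S3 can be sketched as below; the helper name `top_h_patches` and the use of numpy are illustrative assumptions:

```python
import numpy as np

def top_h_patches(img, sal, h, r=4):
    """Keep the h most salient pixels and extract the 9 x 9 patch around
    each, flattened row-wise into an 81-dim vector (step S3, sketched)."""
    p = np.pad(img, r, mode='symmetric')          # symmetric edge fill
    flat = np.argsort(sal, axis=None)[::-1][:h]   # indices, most salient first
    rows, cols = np.unravel_index(flat, sal.shape)
    patches = [p[m:m + 2 * r + 1, n:n + 2 * r + 1].ravel()
               for m, n in zip(rows, cols)]
    return np.stack(patches), list(zip(rows.tolist(), cols.tolist()))

img = np.arange(100, dtype=float).reshape(10, 10)
sal = np.zeros((10, 10)); sal[3, 7] = 1.0         # one clearly salient pixel
X, centers = top_h_patches(img, sal, h=2)
```

In the patent's setting, the top-h selection is applied so that the vastly more numerous background pixels are subsampled while tumor-like (salient) pixels are kept.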
s4, sending the processed training set samples to a DBN network for unsupervised pre-training;
the training set sample obtained in the step S3 is sent to a DBN network for unsupervised pre-training through a contrast divergence method, and the network structure of the DBN refers to FIG. 2;
s5, simultaneously sending the training set samples and the labels thereof into a network, and finely adjusting network parameters through an Adam algorithm;
s501, sending the training data each timeNumber of input samples: let "batch _ size" 1024, where a sample labeled 0 is a (label 0), and the number of samples is n0The sample labeled 1 is a (label ═ 1), and the number thereof is n1F (-) denotes the output of the last layer of the DBN network;
s502, solving the output average value of the sample with the label of 0 or 1 in each batch of training samples in the last layer as follows:
Figure BDA0001755548100000081
wherein x ∈ {0,1}, batch _ size represents the number of samples sent in each batch during training, a (label ═ x) represents a sample labeled x, f (·) represents the last layer output of the DBN network, and μ (label ═ x) represents the feature average of the sample labeled x in the last layer;
and (3) solving the total intra-class variance of the sample on each feature dimension output by the last layer of the DBN network as follows:
Figure BDA0001755548100000082
wherein n isxRepresents the number of samples marked x;
The between-class variance δ_between on each feature dimension of the last-layer DBN output is computed as:

δ_between = (μ(label=0) − μ(label=1))²

where μ(label=0) denotes the last-layer output mean of the samples labeled 0 and μ(label=1) that of the samples labeled 1.
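One plausible reading of the within-class and between-class variance terms of step S502, sketched in numpy; the function name and the exact averaging are assumptions:

```python
import numpy as np

def divergence_terms(F, labels):
    """Within-class and between-class variance of last-layer features
    F (batch, dim), as one plausible reading of step S502."""
    mu0 = F[labels == 0].mean(axis=0)       # feature mean of class 0
    mu1 = F[labels == 1].mean(axis=0)       # feature mean of class 1
    d_in = ((F[labels == 0] - mu0) ** 2).mean(axis=0) \
         + ((F[labels == 1] - mu1) ** 2).mean(axis=0)
    d_between = (mu0 - mu1) ** 2
    return d_in, d_between

F = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]])
y = np.array([0, 0, 1, 1])
d_in, d_btw = divergence_terms(F, y)
```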
S503, the loss function of the network is computed as:

Loss = (1 / batch_size) · Σ_k S_k · CE(Softmax(f(A_k)), L_k) + λ · δ_in / δ_between

where S_k denotes the saliency value of the k-th sample and CE(·,·) the cross-entropy between the softmax output and the label; weighting the loss of each point by its saliency value concentrates the network on the (high-saliency) tumor region and promotes its ability to identify it. Softmax(·) denotes the softmax classifier function, and λ is a tunable hyperparameter weighting the divergence regularization term. The loss function is then minimized with the Adam algorithm, and the network parameters are updated continually until convergence;
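A sketch of a loss of the form described in S503: saliency-weighted cross-entropy plus a divergence regularizer built from δ_in and δ_between. The original shows the formula only as an image, so the exact combination (summing the per-dimension terms before taking the ratio) and all names are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def saliency_weighted_loss(logits, labels, sal, d_in, d_between, lam=0.1):
    """Saliency-weighted cross-entropy plus a divergence regularizer
    lam * sum(d_in) / sum(d_between) -- an assumed reading of S503."""
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    reg = d_in.sum() / (d_between.sum() + 1e-12)
    return (sal * ce).mean() + lam * reg

logits = np.array([[2.0, -2.0], [-2.0, 2.0]])   # two well-classified samples
labels = np.array([0, 1])
sal = np.array([1.0, 0.5])                      # per-sample saliency weights
loss = saliency_weighted_loss(logits, labels, sal,
                              np.array([0.1]), np.array([1.0]))
```

In training, this scalar would be minimized with Adam, with δ_in and δ_between recomputed on every batch.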
and S6, segmenting the test set image by using the trained network model.
Taking each pixel of a test-set image in D_test as a center, the 9 × 9 region around it is taken and flattened into an 81-dimensional column vector; the vectors are fed to the trained network for testing, and the classification label of each pixel is output, yielding the segmented binary image. For points on the image edge, the missing pixel values around the point are filled by symmetric padding.
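The per-pixel test procedure of step S6 can be sketched as follows; the stand-in classifier replaces the trained DBN and is purely illustrative:

```python
import numpy as np

def segment_image(img, classify, r=4):
    """Classify every pixel from its flattened 9 x 9 neighborhood (step S6).
    `classify` stands in for the trained DBN: any function mapping
    (N, 81) vectors to 0/1 labels."""
    p = np.pad(img, r, mode='symmetric')  # symmetric fill at image edges
    H, W = img.shape
    vecs = np.stack([p[m:m + 2 * r + 1, n:n + 2 * r + 1].ravel()
                     for m in range(H) for n in range(W)])
    return classify(vecs).reshape(H, W)

# Stand-in classifier: mean patch intensity above a threshold
img = np.zeros((12, 12)); img[4:8, 4:8] = 1.0
mask = segment_image(img, lambda v: (v.mean(axis=1) > 0.1).astype(int))
```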
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The experimental contents are as follows:
to illustrate the effectiveness and adaptability of the invention, the training images and test images used in the experiment were each decimated from the 80 th slice data in the flair modality of different patients in the Brats2015 challenge race database. Fig. 3 and 4 show one of the images and the corresponding label map for training and testing, respectively. The training data and the test data are sent to a network for training and testing after being preprocessed according to the method provided by the invention, and the evaluation indexes of the test result comprise three items: dice Similarity Coefficient (DSC), Sensitivity (Sensitivity), Positive Predictive Value (PPV). Wherein DSC is defined as:
Figure BDA0001755548100000092
wherein, VsegRepresenting the result of the manual segmentation, VgtA real label representing the image.
Sensitivity and positive predictive value are defined as follows:

Sensitivity = TP / (TP + FN)
PPV = TP / (TP + FP)

where TP denotes the pixels where the region segmented by the present invention overlaps a true label of 1, FP the pixels whose true label is 0 but which the present invention identifies as 1, and FN the pixels whose true label is 1 but which the present invention identifies as 0. A comparison of the present invention with other segmentation methods is shown in Table 1:
table one comparison of the evaluation indices of the present invention with other segmentation methods in the Brats2015 challenge race
Method of producing a composite material DSC Sensitivity PPV
Zhao 0.79 0.85 0.77
Festa 0.72 0.72 0.77
Doyle 0.71 0.87 0.66
The invention 0.73 0.88 0.70
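The three indices above can be computed from binary masks as sketched below; the function name is illustrative, while the definitions follow the formulas given earlier:

```python
import numpy as np

def seg_metrics(pred, gt):
    """DSC, sensitivity and PPV for binary masks, per the definitions above."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    dsc = 2 * tp / (2 * tp + fp + fn)   # equals 2|A∩B| / (|A|+|B|)
    sens = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return dsc, sens, ppv

gt = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
dsc, sens, ppv = seg_metrics(pred, gt)
```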
The comparison shows that the method segments brain MRI images well, with indices close to or better than those of the other methods, and thus has practical value.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (1)

1. An MRI brain tumor image segmentation method based on a DBN neural network, characterized in that several images are selected from an existing library of patient brain MRI sequence images as training samples, which are preprocessed and whose saliency maps are computed; the downsampled samples are then fed to a DBN neural network for unsupervised training followed by supervised training; after training, the test image to be segmented is fed to the network for segmentation and finally the segmentation result is output, the method comprising the following steps:
s1, dividing N frames of images in the brain MRI sequence image into a training set and a test set, and preprocessing data, specifically:
s101, selecting a section with the largest tumor area from brain MRI sequence diagrams of N patients with brain tumors, and taking t frame images as a training set DTrainThe remaining N-t frames are used as a test set Dtest
S102, normalize each image D_i of the training set and test set, 1 ≤ i ≤ N; the normalized image D̂_i is computed as:

D̂_i = (D_i − min(D_i)) / (max(D_i) − min(D_i));
s2, calculating a saliency map of each frame of image, and normalizing each saliency map, specifically:
s201, setting the pixel point value of the mth row and the nth column of each frame image as
Figure FDA0002889153280000014
Averaging each frame of image
Figure FDA0002889153280000015
S202, performing convolution operation on each frame of image and 5 × 5 Gaussian kernels respectively to obtain Gaussian blurred images of each frame of image
Figure FDA0002889153280000016
S203, the saliency map S_i of each frame is computed as:

S_i^(m,n) = (Ī_i − I_i^G(m,n))²;
S204. Normalize S_i to obtain the normalized saliency map S̄_i, where S̄_i^(m,n) denotes the saliency value of the pixel at row m, column n of the i-th frame, computed as:

S̄_i^(m,n) = S_i^(m,n) / max_{m,n} S_i^(m,n)
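A NumPy sketch of steps S201–S204. The equation images are not reproduced in the claim text, so a frequency-tuned-style saliency (squared distance between the frame mean and the Gaussian-blurred pixel, then max-normalized) is assumed; the Gaussian σ and the symmetric border handling are also assumptions.

```python
import numpy as np

def gaussian_kernel5(sigma=1.0):
    """5x5 Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(-2, 3)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def saliency_map(img, sigma=1.0):
    """Assumed S201-S204: mean, 5x5 Gaussian blur, squared difference, max-normalize."""
    img = img.astype(np.float64)
    pad = np.pad(img, 2, mode='symmetric')     # keep output the same size as the input
    k = gaussian_kernel5(sigma)
    blur = np.zeros_like(img)
    H, W = img.shape
    for m in range(H):                          # direct 2-D convolution (clarity over speed)
        for n in range(W):
            blur[m, n] = (pad[m:m + 5, n:n + 5] * k).sum()
    s = (img.mean() - blur) ** 2                # S203: distance of each blurred pixel to the frame mean
    smax = s.max()
    return s / smax if smax > 0 else s          # S204: normalize to [0, 1]
```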
S3. Down-sample the training-set samples according to the saliency maps: sort the saliency values of the pixels of each training-set frame in descending order; centered on each of the h most salient pixels of the normalized training image Ī_i, take a 9 × 9 square region and flatten it row-wise into an 81-dimensional column vector to serve as one training sample, yielding t × h training samples in total. Let A_k denote the k-th training sample and L_k the label of the k-th training sample, where label 0 marks the background region and label 1 marks the tumor region;
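A sketch of the patch extraction in step S3. The symmetric border padding is borrowed from step S6 of the claim; the function name and the small default h are illustrative.

```python
import numpy as np

def extract_patches(img, sal, h=4, size=9):
    """Take the h most salient pixels as centers and flatten the 9x9 patch
    around each into an 81-dimensional vector (row-major, as in step S3)."""
    r = size // 2
    pad = np.pad(img.astype(np.float64), r, mode='symmetric')  # border handling as in S6
    idx = np.argsort(sal.ravel())[::-1][:h]                    # h largest saliency values
    rows, cols = np.unravel_index(idx, sal.shape)
    return np.stack([pad[m:m + size, n:n + size].ravel()
                     for m, n in zip(rows, cols)])
```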
S4. Feed the training-set samples obtained in step S3 to the DBN network for unsupervised pre-training by the contrastive divergence method;
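A minimal sketch of one contrastive-divergence (CD-1) update for a single Bernoulli RBM layer, the building block of DBN pre-training in step S4. The claim does not specify layer sizes, learning rate, or CD order, so these are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One CD-1 update of RBM weights W, visible bias b, hidden bias c."""
    ph0 = sigmoid(v0 @ W + c)                    # hidden probabilities given the data
    h0 = (rng.random(ph0.shape) < ph0) * 1.0     # sample binary hidden states
    pv1 = sigmoid(h0 @ W.T + b)                  # reconstruct the visible units
    ph1 = sigmoid(pv1 @ W + c)                   # hidden probabilities of the reconstruction
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

Stacking such layers and training them bottom-up, each on the previous layer's hidden activations, gives the unsupervised pre-training of the DBN.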
S5. Feed the training-set samples together with their labels into the network and fine-tune the network parameters with the Adam algorithm, specifically:
S501. The number of samples fed per step during training: let batch_size = 1024, where the samples labeled 0 are A(label=0), numbering n_0, and the samples labeled 1 are A(label=1), numbering n_1; f(·) denotes the output of the last layer of the DBN network;
S502. For each batch of training samples, compute the mean of the samples labeled 0 and of those labeled 1, the total within-class variance of the samples on each feature dimension output by the last layer of the DBN network, and the between-class variance of each feature dimension output by the samples at the last layer; the last-layer mean of the samples labeled 0 or 1 in each batch is computed as:

μ(label=x) = (1/n_x) Σ_{A ∈ A(label=x)} f(A)
where x ∈ {0,1}, batch_size denotes the number of samples fed per batch during training, A(label=x) denotes a sample labeled x, f(·) denotes the last-layer output of the DBN network, and μ(label=x) denotes the last-layer feature mean of the samples labeled x;
The total within-class variance δ_in of the samples on each feature dimension output by the last layer of the DBN network is computed as:

δ_in = Σ_{x ∈ {0,1}} (1/n_x) Σ_{A ∈ A(label=x)} (f(A) − μ(label=x))²
where n_x denotes the number of samples labeled x;
The between-class variance δ_between of each feature dimension output by the samples at the last layer of the DBN network is computed as:

δ_between = (μ(label=0) − μ(label=1))²
wherein μ (label ═ 0) represents the average of the samples labeled 0 output at the last layer, and μ (label ═ 1) represents the average of the samples labeled 1 output at the last layer;
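The statistics of step S502 can be sketched directly in NumPy; `feats` stands for the last-layer outputs f(A) of one batch, and the function name is illustrative.

```python
import numpy as np

def class_stats(feats, labels):
    """Per-dimension class means, total within-class variance (delta_in)
    and between-class variance (delta_between), as in step S502."""
    mu0 = feats[labels == 0].mean(axis=0)
    mu1 = feats[labels == 1].mean(axis=0)
    d_in = ((feats[labels == 0] - mu0) ** 2).mean(axis=0) \
         + ((feats[labels == 1] - mu1) ** 2).mean(axis=0)
    d_between = (mu0 - mu1) ** 2
    return mu0, mu1, d_in, d_between
```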
S503. Compute the loss function of the network as:

Loss = (1/batch_size) Σ_k S̄_k · ℓ(Softmax(f(A_k)), L_k) + λ · δ_in / δ_between

where ℓ(·,·) is the cross-entropy loss and S̄_k denotes the saliency value of the k-th sample, assigning each point a loss weight according to the saliency value of its pixel and strengthening the network's discrimination of the tumor region; Softmax(·) denotes the Softmax classifier function, and λ is a tunable hyper-parameter weighting the divergence regularization term; the loss function is then minimized by the Adam algorithm, continually updating the network parameters until convergence;
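A sketch of the step-S503 loss under the assumptions stated there: saliency-weighted cross-entropy plus λ times the within-class/between-class variance ratio (the exact form of the regularizer is not shown in the claim text, so the sum over feature dimensions is an assumption). `probs` are softmax outputs of shape (batch, 2).

```python
import numpy as np

def dbn_loss(probs, labels, sal, d_in, d_between, lam=0.1, eps=1e-8):
    """Assumed S503 loss: mean saliency-weighted cross-entropy
    plus a divergence regularizer lam * sum(d_in) / sum(d_between)."""
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)  # per-sample cross-entropy
    return (sal * ce).mean() + lam * (d_in.sum() / (d_between.sum() + eps))
```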
S6. For each pixel of each test-set image Ī_i, take the 9 × 9 region centered on that pixel and flatten it into an 81-dimensional column vector; feed the vectors into the trained network for testing and output the classification label of each pixel to obtain the segmented binary image; pixel values missing around border pixels are supplied by symmetric padding using points on the image edge.
CN201810885507.8A 2018-08-06 2018-08-06 DBN neural network-based MRI brain tumor image segmentation method Active CN109102512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810885507.8A CN109102512B (en) 2018-08-06 2018-08-06 DBN neural network-based MRI brain tumor image segmentation method


Publications (2)

Publication Number Publication Date
CN109102512A CN109102512A (en) 2018-12-28
CN109102512B true CN109102512B (en) 2021-03-09

Family

ID=64848832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810885507.8A Active CN109102512B (en) 2018-08-06 2018-08-06 DBN neural network-based MRI brain tumor image segmentation method

Country Status (1)

Country Link
CN (1) CN109102512B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754404B (en) * 2019-01-02 2020-09-01 清华大学深圳研究生院 End-to-end tumor segmentation method based on multi-attention mechanism
CN110689057B (en) * 2019-09-11 2022-07-15 哈尔滨工程大学 Method for reducing neural network training sample size based on image segmentation
CN111445443B (en) * 2020-03-11 2023-09-01 北京深睿博联科技有限责任公司 Early acute cerebral infarction detection method and device
CN111612764B (en) * 2020-05-21 2023-09-22 广州普世医学科技有限公司 Method, system and storage medium for resolving new coronal pneumonia ground glass focus contrast
CN112132842A (en) * 2020-09-28 2020-12-25 华东师范大学 Brain image segmentation method based on SEEDS algorithm and GRU network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903103A (en) * 2012-09-11 2013-01-30 西安电子科技大学 Migratory active contour model based stomach CT (computerized tomography) sequence image segmentation method
CN105719303A (en) * 2016-01-25 2016-06-29 杭州职业技术学院 Magnetic resonance imaging prostate 3D image segmentation method based on multi-depth belief network
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
CN106780453A (en) * 2016-12-07 2017-05-31 电子科技大学 A kind of method realized based on depth trust network to brain tumor segmentation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADAM: A Method for Stochastic Optimization; Diederik P. Kingma et al.; ICLR 2015; Jul. 27, 2015; pp. 1-15 *
A method for segmenting the left ventricular wall in cardiac magnetic resonance images with myocardial scar; Li Xiaoning; Journal of Sichuan University (Natural Science Edition); Sep. 2016; Vol. 53, No. 5; pp. 1011-1017 *


Similar Documents

Publication Publication Date Title
CN109102512B (en) DBN neural network-based MRI brain tumor image segmentation method
Khened et al. Densely connected fully convolutional network for short-axis cardiac cine MR image segmentation and heart diagnosis using random forest
CN107016681B (en) Brain MRI tumor segmentation method based on full convolution network
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN108171232B (en) Deep learning algorithm-based bacterial and viral pneumonia classification method for children
CN110084318B (en) Image identification method combining convolutional neural network and gradient lifting tree
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN109376636B (en) Capsule network-based eye fundus retina image classification method
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN111259982A (en) Premature infant retina image classification method and device based on attention mechanism
CN107316294B (en) Lung nodule feature extraction method based on improved depth Boltzmann machine
CN105825509A (en) Cerebral vessel segmentation method based on 3D convolutional neural network
WO2019001208A1 (en) Segmentation algorithm for choroidal neovascularization in oct image
CN112270666A (en) Non-small cell lung cancer pathological section identification method based on deep convolutional neural network
CN111488914A (en) Alzheimer disease classification and prediction system based on multitask learning
CN107766874B (en) Measuring method and measuring system for ultrasonic volume biological parameters
CN112529042A (en) Medical image classification method based on dual-attention multi-instance deep learning
CN110543916B (en) Method and system for classifying missing multi-view data
Mahapatra et al. Visual saliency based active learning for prostate mri segmentation
CN112598613A (en) Determination method based on depth image segmentation and recognition for intelligent lung cancer diagnosis
CN115147600A (en) GBM multi-mode MR image segmentation method based on classifier weight converter
CN114596317A (en) CT image whole heart segmentation method based on deep learning
CN116071383A (en) Hippocampus subzone segmentation method and system based on ultra-high field magnetic resonance image reconstruction
CN114818931A (en) Fruit image classification method based on small sample element learning
CN112669319B (en) Multi-view multi-scale lymph node false positive inhibition modeling method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant