CN112508953A - Meningioma rapid segmentation qualitative method based on deep neural network - Google Patents

Info

Publication number: CN112508953A (application CN202110161083.2A); granted as CN112508953B
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: meningioma, image, magnetic resonance, segmentation, brain
Inventors: 张蕾, ***, 王利团, 陈超越, 舒鑫, 王梓舟, 黄伟, 花语, 李佳怡, 谭硕, 余怡洁, 王凌度
Assignee (original and current): Sichuan University
Legal status: Active (application granted)

Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/194 — Foreground-background segmentation
    • G06T 2207/10088 — Magnetic resonance imaging [MRI]
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30016 — Brain
    • G06T 2207/30096 — Tumor; Lesion

(All under G06T, image data processing or generation, in Section G Physics, Class G06 Computing.)


Abstract

The invention relates to the field of preoperative meningioma grading, and discloses a rapid meningioma segmentation and qualitative grading method based on a deep neural network, which comprises the following steps: preparing magnetic resonance brain images; establishing a meningioma segmentation model, and screening effective images containing a meningioma region from the magnetic resonance brain images through the meningioma segmentation model; and establishing a meningioma grading model, carrying out grading detection on the effective images through the grading model, and outputting the meningioma grading detection result. With the method of the invention, only the series of magnetic resonance brain images generated by scanning a patient needs to be input into the network; after all images are comprehensively analyzed and calculated, the grading result of the patient's meningioma is given quickly, achieving the aim of assisting the doctor in diagnosis.

Description

Meningioma rapid segmentation qualitative method based on deep neural network
Technical Field
The invention relates to the field of preoperative meningioma grading, and in particular to a rapid meningioma segmentation and qualitative grading method based on a deep neural network.
Background
Meningioma originates from the meninges and derivatives of the meningeal spaces and is the second most common intracranial tumor, accounting for 13%–26% of intracranial tumors, with an incidence that has been rising in recent years. According to the 2016 WHO classification of central nervous system tumors, meningiomas are divided into 3 grades. Most are WHO grade I lesions, i.e. benign lesions, which grow slowly and rarely recur after surgery. A minority are classified as WHO grade II or III lesions on the basis of local invasiveness and atypical cellular features, i.e. malignant lesions; these patients may suffer blindness, hemiplegia, epilepsy and other symptoms, and in severe cases may even die suddenly. Malignant meningioma is highly invasive and has high rates of recurrence and metastasis, so its surgical approach differs from that of benign meningioma. It is therefore important to judge the grade of a meningioma accurately and as early as possible before surgery, which helps in planning the patient's operation and in assessing the postoperative prognosis.
For the diagnosis of meningioma, the most common and effective means at present is magnetic resonance imaging (MRI) examination. Under the guidance of a professional radiologist, a meningioma patient receives an omnidirectional three-dimensional scan of the cranium by a magnetic resonance machine, following a given scan sequence in the coronal, sagittal and transverse planes. After scanning, a doctor reads and analyzes each MRI image obtained under the sequence and diagnoses the grade of the patient's meningioma from features such as tumor morphology, texture and pixel intensity displayed in the craniocerebral MRI images.
Although most hospitals have magnetic resonance scanners, what matters more is the reading diagnosis by professional doctors, and relatively remote township hospitals find it difficult to obtain the clinical opinions of experienced doctors in time. Meanwhile, because the tomography interval is very small, the brain MRI study of a single patient generally contains 200–300 slices. With such a huge number of images, diagnosis that relies entirely on manual reading by doctors entails a heavy workload, consumes much time, and is inefficient.
Disclosure of Invention
The invention aims to provide a rapid meningioma segmentation and qualitative grading method based on a deep neural network, which can analyze the many brain MRI images of a patient in a very short time and predict the benign or malignant grade of the meningioma.
The invention solves the technical problem, and adopts the technical scheme that:
a brain meningioma rapid segmentation qualitative method based on a deep neural network comprises the following steps:
step 1, preparing a magnetic resonance brain image;
step 2, establishing a meningioma segmentation model, and screening an effective image containing a meningioma region from the magnetic resonance brain image through the meningioma segmentation model;
and 3, establishing a meningioma grading model, carrying out grading detection on the effective image through the meningioma grading model, and outputting a meningioma grading detection result.
Further, step 1 specifically comprises the following steps:
step 101, acquiring cranial magnetic resonance image files and case reports of a plurality of meningioma patients, wherein the cranial magnetic resonance image files comprise benign grade I meningioma samples and malignant grade II meningioma samples;
step 102, reading a magnetic resonance brain image sequence of a patient brain which is subjected to tomography scanning at intervals of a first distance in the transverse position direction from each case of cranial magnetic resonance image file;
step 103, obtaining a category label of the meningioma in the corresponding magnetic resonance brain image based on the case report result of the patient;
step 104, finishing delineation of a meningioma area in the magnetic resonance brain image based on an image marking system;
step 105, after the delineation is finished and cross validation is carried out, confirming the tumor delineation area to obtain the segmentation label of the meningioma.
Further, in step 102, the first distance is 1 mm.
Further, in step 2, the established meningioma segmentation model is a meningioma segmentation model incorporating a bidirectional long short-term memory (LSTM) unit, through which the meningioma region is identified and segmented, in sequence order, in all groups of magnetic resonance brain images in each cranial magnetic resonance image file.
Further, in step 2, the processing of the input magnetic resonance image by the established meningioma segmentation model includes the following steps:
step 201, inputting a magnetic resonance brain image of a patient;
step 202, sequentially utilizing pooling operation to carry out down-sampling;
step 203, in the last step of down-sampling, adding a bidirectional long-time and short-time memory unit, storing characteristic diagrams of a forward image and a backward image in a magnetic resonance brain image sequence, and using the characteristic diagrams for segmentation constraint;
step 204, sequentially utilizing bilinear interpolation to perform upsampling, wherein feature maps in the upsampling and downsampling processes of the same scale are spliced for information completion;
step 205, classifying each pixel in the feature map obtained by the last up-sampling and restoration, and segmenting the meningioma area and the background area, wherein the pixel value of the meningioma area is 1 and the pixel value of the background area is 0.
Further, after step 205, the established meningioma segmentation model is trained; after training is completed, a series of magnetic resonance brain images is input, a segmentation matrix of the same size as each input image is output, and the top 5 magnetic resonance brain images with the most obvious meningioma are screened out as representative images of the patient case, ranked by the sum of the segmentation matrix values.
Furthermore, each screened representative image is located to the center of its meningioma region according to the 0/1 distribution of the segmentation matrix; a single 90 × 90 meningioma image is then cropped around that center and input to the meningioma grading model as an effective image.
Further, in step 3, the effective image is subjected to hierarchical detection by the meningioma hierarchical model, and a meningioma hierarchical detection result is output, which specifically includes the following steps:
step 301, forming a feature extractor by combining and stacking a plurality of residual learning modules to extract meningioma features;
step 302, performing mean value fusion, in the channel dimension, on the features of the multiple effective images belonging to the same case;
step 303, using a classifier formed by a fully connected layer to classify the fused feature vector, and outputting the meningioma grading detection result corresponding to the patient case.
Further, after step 3, the whole network formed by steps 1-3 is trained and tested.
Further, when the whole network is trained and tested, all magnetic resonance brain images are randomly divided into a training set and a testing set at a ratio of 8:2.
The method has the advantages that by the aid of the method for rapidly segmenting and qualifying the meningioma based on the deep neural network, only a series of magnetic resonance brain images generated after a patient is scanned need to be input into the network, grading results of the meningioma of the patient are rapidly given after all the magnetic resonance brain images are comprehensively analyzed and calculated, the purpose of assisting a doctor in diagnosis is achieved, the whole process is automatically carried out, a large amount of repetitive work of the doctor is reduced, time is saved, and the patient can receive treatment more rapidly.
Drawings
FIG. 1 is a flow chart of a qualitative method for rapid brain tumor segmentation based on a deep neural network according to the present invention;
FIG. 2 is a flow chart of data preparation according to the present invention;
FIG. 3 is a flow chart of a meningioma segmentation model process according to the present invention;
FIG. 4 is a flowchart of a process for a meningioma staging model according to the present invention;
fig. 5 is an overall operation flowchart in the embodiment of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
The invention discloses a brain tumor rapid segmentation qualitative method based on a deep neural network, a flow chart of which is shown in figure 1, wherein the method comprises the following steps:
s1, preparing a magnetic resonance brain image.
S2, establishing a meningioma segmentation model, and screening an effective image containing a meningioma region from the magnetic resonance brain image through the meningioma segmentation model.
S3, establishing a meningioma grading model, carrying out grading detection on the effective image through the meningioma grading model, and outputting a meningioma grading detection result.
In the above method, referring to the data preparation flowchart of fig. 2, S1 may specifically include the following steps:
s101, acquiring cranial magnetic resonance image files and case reports of a plurality of meningioma patients, wherein the cranial magnetic resonance image files comprise meningioma first-level benign samples and meningioma second-level malignant samples.
S102, reading, from each cranial magnetic resonance image file, a magnetic resonance brain image sequence of the patient's brain scanned tomographically at intervals of a first distance in the transverse direction; here, to obtain data that better expresses the image, the first distance is preferably 1 mm.
And S103, obtaining a category label of the meningioma in the corresponding magnetic resonance brain image based on the case report result of the patient.
S104, delineating a meningioma area in the magnetic resonance brain image based on an image labeling system.
S105, after the delineation is finished and cross validation is carried out, the tumor delineation area is confirmed to obtain the segmentation label of the meningioma. Specifically, after technical staff complete the delineation and cross validation, a professional physician confirms the delineated tumor area, ensuring the accuracy of the segmentation label.
It should be noted that, in S2, the established meningioma segmentation model is a meningioma segmentation model incorporating a bidirectional long short-term memory (LSTM) unit, through which the meningioma region can be identified and segmented, in sequence order, in all groups of magnetic resonance brain images in each cranial magnetic resonance image file.
Here, referring to the flowchart of the meningioma segmentation model processing of fig. 3, the processing of the input magnetic resonance image by the established meningioma segmentation model may include the following steps:
s201, inputting a magnetic resonance brain image of a patient.
S202, downsampling is conducted sequentially through a pooling operation.
S203, in the last down-sampling step, a bidirectional long short-term memory (LSTM) unit is added to store the feature maps of the forward and backward images in the magnetic resonance brain image sequence, which are used as segmentation constraints.
And S204, sequentially utilizing bilinear interpolation to perform upsampling, wherein feature maps in the upsampling and downsampling processes of the same scale are spliced for information completion.
S205, classifying each pixel in the feature map obtained by last upsampling and restoring, and segmenting a meningioma area and a background area, wherein the pixel value of the meningioma area is 1, and the pixel value of the background area is 0.
In practical application, after S205, the established meningioma segmentation model can be trained; after training is completed, a series of magnetic resonance brain images is input, a segmentation matrix of the same size as each input image is output, and the top 5 magnetic resonance brain images with the most obvious meningioma are screened out as representative images of the patient case, ranked by the sum of the segmentation matrix values. Each screened representative image is then located to the center of its meningioma region according to the 0/1 distribution of the segmentation matrix, a single 90 × 90 meningioma image is cropped around that center, and this image is input to the meningioma grading model as an effective image.
Referring to the flowchart of fig. 4, S3 specifically includes the following steps:
S301, a feature extractor is formed by combining and stacking a plurality of residual learning modules to extract meningioma features.
S302, mean value fusion is carried out, in the channel dimension, on the features of the multiple effective images belonging to the same case.
S303, a classifier formed by a fully connected layer classifies the fused feature vector and outputs the meningioma grading detection result corresponding to the patient case.
In addition, after S3, the entire network of S1-S3 may be trained and tested.
When the whole network is trained and tested, all magnetic resonance brain images are randomly divided into a training set and a testing set at a ratio of 8:2, so that the whole network can be trained and tested well and the deep-neural-network-based rapid segmentation and qualitative method can better identify the meningioma grade of a patient.
Therefore, the invention can diagnose the brain tumor grade of a patient from a series of brain MRI images. The diagnosis process is completed fully automatically by a computer: only the patient's brain MRI images need to be input, calculation and prediction are carried out automatically, and no further manual parameter setting or feature specification is needed.
In practical application, a large number of MRI sequence images is generated after a patient undergoes a brain magnetic resonance scan, and the method uses the meningioma segmentation model to screen out the effective images that contain the meningioma and display it clearly for subsequent grading diagnosis, thereby improving diagnostic accuracy. A bidirectional long short-term memory (LSTM) unit is added to the meningioma segmentation model, so that image information before and after a given image in the sequence can be fused during its segmentation; this completes the segmentation task accurately while improving operational efficiency and saving computing resources. In addition, the method can analyze the many brain MRI images of a patient in a very short time and predict the benign or malignant grade of the meningioma, reducing a large amount of repetitive work for doctors and achieving the aim of assisting doctors in the practical clinical application of meningioma grading detection.
Examples
The overall work flow of the deep-neural-network-based rapid meningioma segmentation and qualitative method is shown in fig. 5. After a meningioma patient undergoes a craniocerebral magnetic resonance scan, a scan file corresponding to a three-dimensional MRI image sequence of the patient's brain is generated. The required group of MRI tomographic images of the patient's brain in the transverse direction is parsed from the file. First, segmentation and identification of the meningioma area are carried out on all MRI images of the group, in order to screen out the effective images containing tumor areas; then comprehensive classification judgment is carried out on these effective images containing meningioma; finally, the grading detection result of the patient's meningioma is given. The segmentation and classification models involved are neural network models.
In this embodiment, a method of using a neural network is specifically implemented to perform an overall qualitative segmentation judgment on a set of MRI images from the brain of a patient with meningioma, including the following parts:
(1) preparation of MRI image data.
Brain MRI image files of a plurality of meningioma patients are obtained, comprising benign grade I meningioma samples and malignant grade II meningioma samples. A group of MRI brain images of the patient's brain, scanned tomographically in the transverse direction at intervals of 1 mm, is read from each MRI image file; meanwhile, the meningioma grade label of each MRI image corresponds to the professional diagnosis result given by a doctor in the patient's medical record report. For subsequent training and testing of the model, all image data are randomly divided into a training set and a testing set at a ratio of 8:2.
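The 8:2 random split described above can be sketched as follows. This is a minimal illustration only; the fixed seed and the integer case identifiers are assumptions, not specified by the patent.

```python
import random

def split_cases(case_ids, train_ratio=0.8, seed=42):
    """Randomly split case identifiers into a training set and a test set (8:2)."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # fixed seed only so the split is reproducible
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# e.g. 100 hypothetical case identifiers
train_ids, test_ids = split_cases(range(100))
```

The source says "all image data" are split, which could be read per slice or per case; splitting at case level (as sketched) additionally prevents slices of the same patient from leaking across sets.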
(2) Establishment of the meningioma segmentation model.
When a single case's brain MRI image file is analyzed, it contains as many as 200–300 slices, but a brain tumor occupies only a small volume within the cranium, so more than two thirds of the slices contain no tumor region. Since one case's MRI file corresponds to a single grade label, many tumor-free images would otherwise be assigned the tumor's grade label, which contradicts the facts and constitutes noisy data. To reduce the interference of such images with the final classification, a meningioma segmentation model is established to screen out the effective slices containing meningioma, removing noise and improving accuracy.
(2a) Meningioma segmentation model combined with a bidirectional long short-term memory (LSTM) unit: a U-Net segmentation network is selected as the prototype of the segmentation model, and a bidirectional LSTM unit is added for sequence fusion; identification and segmentation of the meningioma area are carried out on all images in an MRI scan file in sequence order. The segmentation model mainly performs five types of operations:
Operation one, down-sampling: the feature map is halved using 2 × 2 max-pooling windows, completing down-sampling stage by stage;
Operation two, up-sampling: the feature map is up-sampled by bilinear interpolation, doubling its size and gradually restoring it to the original image size;
Operation three, cross-layer connection: feature maps of the same scale from the front and back of the network are spliced, supplementing the information needed to restore the image during up-sampling;
Operation four, bidirectional LSTM unit: in the last down-sampling step, a bidirectional LSTM unit is added to store the feature maps of the forward and backward images in the image sequence, providing prior knowledge during up-sampling, controlling the segmentation area more accurately, and improving segmentation accuracy;
Operation five, segmentation: each pixel in the finally restored feature map is classified; since only the meningioma and the background need to be separated, the pixel value of the meningioma region is 1 and that of the background region is 0.
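Operations one and two can be sketched in NumPy for a single-channel feature map. This is an illustrative sketch only; in the actual network these operations are applied per channel inside the convolutional model.

```python
import numpy as np

def max_pool2x2(x):
    """2 × 2 max pooling: halves each spatial dimension (operation one)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample_bilinear(x, out_h, out_w):
    """Bilinear up-sampling to (out_h, out_w), align-corners style (operation two)."""
    h, w = x.shape
    rows = np.linspace(0.0, h - 1.0, out_h)
    cols = np.linspace(0.0, w - 1.0, out_w)
    r0 = np.floor(rows).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(cols).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    wr = (rows - r0)[:, None]                  # vertical interpolation weights
    wc = (cols - c0)[None, :]                  # horizontal interpolation weights
    top = x[np.ix_(r0, c0)] * (1 - wc) + x[np.ix_(r0, c1)] * wc
    bot = x[np.ix_(r1, c0)] * (1 - wc) + x[np.ix_(r1, c1)] * wc
    return top * (1 - wr) + bot * wr

feat = np.arange(16.0).reshape(4, 4)
pooled = max_pool2x2(feat)                     # 4×4 → 2×2
restored = upsample_bilinear(pooled, 4, 4)     # 2×2 → 4×4
```

In the U-Net-style model, `restored` would additionally be concatenated with the same-scale encoder feature map (operation three) before further convolutions.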
After training of the segmentation model is finished, when a series of brain MRI images is input, a segmentation matrix of the same size as each input image is output; the parts with value 1 correspond to the meningioma in the input image, and the rest are 0. The larger the sum of the segmentation matrix output for an image, the larger the meningioma region it contains; if the sum is 0, the image contains no meningioma at all. Therefore, for all images in a case's MRI file, the top 5 images with the most obvious meningioma are selected as representative images of the patient case, ranked by the sum of the segmentation matrix values.
(2b) Intercepting the meningioma region based on the segmentation matrix: each screened representative image is located to the center of its meningioma region according to the 0/1 distribution of the segmentation matrix, and a single 90 × 90 meningioma image is cropped around that center. These serve as the input image data for the subsequent meningioma grading model.
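A minimal NumPy sketch of this screening-and-cropping step follows. The tumor-center heuristic (mean of the mask coordinates) and the toy shapes are illustrative assumptions; the patent only specifies ranking by the segmentation-matrix sum and a 90 × 90 crop.

```python
import numpy as np

def select_top_slices(masks, k=5):
    """Rank slices by the sum of their 0/1 segmentation matrices, keep the top k."""
    sums = np.array([m.sum() for m in masks])
    order = np.argsort(sums)[::-1]
    return [int(i) for i in order[:k] if sums[i] > 0]   # sum 0 means no tumor at all

def crop_tumor(image, mask, size=90):
    """Crop a size×size patch centered on the meningioma region of the 0/1 mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())             # center of the tumor region
    half = size // 2
    y0 = min(max(cy - half, 0), image.shape[0] - size)  # clamp to stay inside image
    x0 = min(max(cx - half, 0), image.shape[1] - size)
    return image[y0:y0 + size, x0:x0 + size]

# toy demonstration: 6 slices, two of which contain a "tumor"
masks = [np.zeros((8, 8), int) for _ in range(6)]
masks[2][:4, :4] = 1                                    # largest tumor area
masks[4][:2, :2] = 1                                    # smaller tumor area
top = select_top_slices(masks)

image = np.zeros((256, 256))
big_mask = np.zeros((256, 256), int)
big_mask[100:120, 80:110] = 1
patch = crop_tumor(image, big_mask)
```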
(3) Establishment of the meningioma grading model.
Brain MRI images of patients have a relatively high resolution, and an ordinary shallow neural network has limited capacity to learn from such images. Therefore, a deep neural network is established in the invention, which takes the group of brain MRI images rearranged after segmentation and outputs an overall grading result.
(3a) Feature extraction: a plurality of residual learning modules are combined and stacked to form the feature extraction part of the neural network. Each residual module is composed of a 1 × 1 convolution layer, a 3 × 3 convolution layer and another 1 × 1 convolution layer, which completes feature extraction while reducing the number of parameters and improving training speed.
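The parameter saving of the 1 × 1 → 3 × 3 → 1 × 1 bottleneck can be checked with simple arithmetic. The channel widths used here (256 reduced to 64) are the classic ResNet bottleneck choice and are an assumption, not taken from the patent.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k×k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

# bottleneck: 1×1 reduce (256→64), 3×3 (64→64), 1×1 expand (64→256)
bottleneck = conv_params(1, 256, 64) + conv_params(3, 64, 64) + conv_params(1, 64, 256)

# a plain block of two 3×3 convolutions at full width, for comparison
plain = 2 * conv_params(3, 256, 256)

ratio = plain / bottleneck   # roughly 17× fewer weights in the bottleneck
```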
(3b) Feature fusion: after the multiple input images are each processed by the feature extraction part, their corresponding feature vectors are obtained. All feature vectors are averaged over the channel dimension to obtain the fused feature vector combining all image features.
(3c) Classification: a fully connected layer classifies the fused feature vector and outputs the comprehensive meningioma grading result corresponding to the patient's brain MRI images.
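Steps (3b) and (3c) — mean fusion over a case's images followed by a fully connected classifier — can be sketched as below. The 512-dimensional feature width and the random weights are placeholders for the trained extractor and classifier, i.e. assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# assume 5 effective images per case, each mapped to a 512-d feature vector
features = rng.normal(size=(5, 512))

# (3b) feature fusion: element-wise mean over the case's images
fused = features.mean(axis=0)

# (3c) classification: one fully connected layer + softmax over {grade I, grade II}
W = rng.normal(scale=0.05, size=(512, 2))
b = np.zeros(2)
logits = fused @ W + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_grade = int(probs.argmax())   # 0 → grade I (benign), 1 → grade II (malignant)
```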
(4) Model training and testing.
(4a) Data augmentation: given the small size of the data set, in order to give the grading model stronger generalization and robustness, spatial geometric transformations such as random rotation and cropping are applied before the meningioma MRI images are input into the grading model, yielding a data set larger than the original one for training the model.
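The spatial augmentations can be sketched as follows. Restricting rotation to 90° multiples and the 80 × 80 crop size are simplifying assumptions; the patent only names random rotation and cropping.

```python
import numpy as np

def augment(patch, rng, crop=80):
    """Random 90°-multiple rotation, random flip, then a random crop."""
    patch = np.rot90(patch, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        patch = np.fliplr(patch)
    max_y = patch.shape[0] - crop
    max_x = patch.shape[1] - crop
    y = int(rng.integers(0, max_y + 1))
    x = int(rng.integers(0, max_x + 1))
    return patch[y:y + crop, x:x + crop]

rng = np.random.default_rng(0)
patch = np.arange(90 * 90, dtype=float).reshape(90, 90)   # one 90×90 tumor patch
augmented = [augment(patch, rng) for _ in range(8)]       # 8 augmented variants
```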
(4b) Training strategy: to produce an overall classification result for a group of input images, the extracted features of all images in the group are fused into a single feature vector before classification. The feature extraction module and the classification module of the designed neural network can therefore be trained separately, step by step and one after the other.
Training the feature extractor module: the feature extractor comprises the part of the network before the last convolution structure. The image data used for this parameter learning is taken per slice, and a pre-training and fine-tuning scheme is adopted: on the one hand, training is faster than with randomly initialized weights; on the other hand, overfitting caused by the limited amount of data is avoided as far as possible, strengthening the extraction of image features.
Training the classifier module: the classifier is the part of the network after the last convolution structure. When the classifier is trained, the parameters of the feature extractor are not updated. The image data used for the classifier's parameter learning is taken per case: a group of images input into the network is sent, after feature extraction and fusion, into the classifier module for the grade I/II classification judgment of the meningioma.
(4c) Forward computation of the network:

In general, for an $L$-layer feedforward neural network, let the training sample set be $\{(x^{(i)}, y^{(i)})\}_{i=1}^{n}$, where $m$ is the dimension of a single sample and $n$ is the number of training samples; the $i$-th sample is $x^{(i)} = (x^{(i)}_1, \dots, x^{(i)}_m)^{\mathsf T} \in \mathbb{R}^m$ and its label is $y^{(i)}$. Let $w^{(l)}_{jk}$ denote the connection weight from the $j$-th neuron of layer $l$ to the $k$-th neuron of layer $l+1$, so that the connection weight matrix from layer $l$ to layer $l+1$ is $W^{(l)} = \big(w^{(l)}_{jk}\big)$, and let $f^{(l)}$ be the activation function of the neurons in layer $l$. Forward computation proceeds continuously from the input layer to the output layer:

$$a^{(1)} = x^{(i)}, \qquad z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)}, \qquad a^{(l+1)} = f^{(l+1)}\big(z^{(l+1)}\big),$$

where $a^{(l)}$ is the activation value of the layer-$l$ neurons. The activation value of the network output layer, i.e. the layer-$L$ neurons, is then:

$$h_{W,b}\big(x^{(i)}\big) = a^{(L)} = f^{(L)}\big(z^{(L)}\big).$$

Using the network output $h_{W,b}(x^{(i)})$ of the last layer and the label $y^{(i)}$, the performance function $J$ is designed as:

$$J(W,b) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{2} \big\| h_{W,b}\big(x^{(i)}\big) - y^{(i)} \big\|^2.$$
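The forward pass above can be sketched in NumPy. The layer sizes and the sigmoid activation are illustrative assumptions; the patent's actual network is convolutional:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass: a(1) = x; z(l+1) = W(l) a(l) + b(l); a(l+1) = f(z(l+1))."""
    a = x
    activations = [a]
    for W, b in zip(weights, biases):
        z = W @ a + b
        a = sigmoid(z)
        activations.append(a)
    return activations

rng = np.random.default_rng(0)
sizes = [4, 3, 2]   # m = 4 inputs, one hidden layer, 2 outputs (illustrative)
weights = [rng.standard_normal((sizes[l + 1], sizes[l])) for l in range(2)]
biases = [rng.standard_normal(sizes[l + 1]) for l in range(2)]
acts = forward(rng.standard_normal(4), weights, biases)
output = acts[-1]   # h_{W,b}(x): activation of the output layer L
```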
(4d) Back-propagation of the network comprises the following steps:

1. Compute by feedforward the pre-activation value $z^{(l)}_j$ and activation value $a^{(l)}_j$ of each node, where $s_l$ is the number of neurons in layer $l$:

$$z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)}, \qquad a^{(l+1)} = f^{(l+1)}\big(z^{(l+1)}\big).$$

2. Compute the residual of the last layer, where $f'^{(L)}$ is the derivative of the activation function $f^{(L)}$:

$$\delta^{(L)} = -\big(y^{(i)} - a^{(L)}\big) \odot f'^{(L)}\big(z^{(L)}\big).$$

3. Compute the residuals of the layers $l = L-1, \dots, 2$ from back to front:

$$\delta^{(l)} = \Big(\big(W^{(l)}\big)^{\mathsf T} \delta^{(l+1)}\Big) \odot f'^{(l)}\big(z^{(l)}\big).$$

4. Compute the corresponding gradients from the residuals:

$$\nabla_{W^{(l)}} J = \delta^{(l+1)} \big(a^{(l)}\big)^{\mathsf T}, \qquad \nabla_{b^{(l)}} J = \delta^{(l+1)}.$$

5. Update the corresponding weights, where $\eta$ is the learning rate:

$$W^{(l)} \leftarrow W^{(l)} - \eta \nabla_{W^{(l)}} J, \qquad b^{(l)} \leftarrow b^{(l)} - \eta \nabla_{b^{(l)}} J.$$
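Steps 1-5 can be sketched for one gradient step on a tiny fully connected network. The sizes, sigmoid activation, and learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
sizes = [4, 3, 2]                      # illustrative layer sizes
W = [rng.standard_normal((sizes[l + 1], sizes[l])) for l in range(2)]
b = [rng.standard_normal(sizes[l + 1]) for l in range(2)]
x = rng.standard_normal(4)
y = np.array([1.0, 0.0])
eta = 0.1                              # learning rate

# 1. feedforward: store z and a for every layer
a, zs, acts = x, [], [x]
for Wl, bl in zip(W, b):
    z = Wl @ a + bl
    zs.append(z)
    a = sigmoid(z)
    acts.append(a)
loss_before = 0.5 * np.sum((acts[-1] - y) ** 2)

# 2. residual of the last layer: delta = -(y - a) * f'(z)
delta = -(y - acts[-1]) * sigmoid(zs[-1]) * (1 - sigmoid(zs[-1]))
for l in reversed(range(2)):
    grad_W = np.outer(delta, acts[l])  # 4. gradient from the residual
    grad_b = delta
    if l > 0:                          # 3. propagate the residual backwards
        delta = (W[l].T @ delta) * sigmoid(zs[l - 1]) * (1 - sigmoid(zs[l - 1]))
    W[l] -= eta * grad_W               # 5. update with learning rate eta
    b[l] -= eta * grad_b

# one step of gradient descent should reduce the loss on this sample
a = x
for Wl, bl in zip(W, b):
    a = sigmoid(Wl @ a + bl)
loss_after = 0.5 * np.sum((a - y) ** 2)
```

Note that the residual of the lower layer is computed with the pre-update weights, matching the order of steps 3-5 above.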
The network repeats (4c) and (4d) until it converges or a specified number of iterations is reached.
(4e) Network testing: after training of the whole network is complete, its classification performance is tested and evaluated.
On the held-out test set, the group of brain MRI images corresponding to each patient is input, and the brain tumor grade predicted by the neural network designed in this embodiment is tallied and compared with the sample's actual grade label. The method achieves a classification accuracy of 0.875, with sensitivity and specificity both above 90%, indicating both a low missed-diagnosis rate and a low misdiagnosis rate.
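The three reported metrics follow directly from the confusion counts of the grade-I/grade-II predictions. The counts below are made up purely to illustrate the formulas, not the study's data:

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, specificity from binary confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # low sensitivity -> many missed diagnoses
    specificity = tn / (tn + fp)   # low specificity -> many misdiagnoses
    return accuracy, sensitivity, specificity

# hypothetical counts: 10 true positives, 1 false negative,
# 11 true negatives, 2 false positives
acc, sens, spec = metrics(tp=10, fn=1, tn=11, fp=2)
```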

Claims (10)

1. A deep-neural-network-based method for rapid segmentation and qualitative grading of meningioma, characterized by comprising the following steps:
step 1, preparing magnetic resonance brain images;
step 2, establishing a meningioma segmentation model, and screening, through the meningioma segmentation model, effective images containing a meningioma region from the magnetic resonance brain images;
step 3, establishing a meningioma grading model, performing grading detection on the effective images through the meningioma grading model, and outputting a meningioma grading detection result.
2. The deep-neural-network-based method for rapid segmentation and qualitative grading of meningioma according to claim 1, wherein step 1 comprises the following steps:
step 101, acquiring cranial magnetic resonance image files and case reports of a plurality of meningioma patients, wherein the cranial magnetic resonance image files comprise grade-I benign meningioma samples and grade-II malignant meningioma samples;
step 102, reading, from each cranial magnetic resonance image file, the magnetic resonance brain image sequence obtained by tomographic scanning of the patient's brain at intervals of a first distance in the transverse direction;
step 103, obtaining a category label for the meningioma in the corresponding magnetic resonance brain image based on the patient's case report;
step 104, completing delineation of the meningioma region in the magnetic resonance brain image with an image annotation system;
step 105, after delineation and cross validation, confirming the delineated tumor region to obtain a segmentation label for the meningioma.
3. The method of claim 2, wherein in step 102, the first distance is 1 mm.
4. The method according to claim 2, wherein in step 2, the established meningioma segmentation model incorporates bidirectional long short-term memory units, and the meningioma region is identified and segmented, in sequence order, for all groups of magnetic resonance brain images in each cranial magnetic resonance image file through the meningioma segmentation model.
5. The deep-neural-network-based method for rapid segmentation and qualitative grading of meningioma as claimed in claim 4, wherein in step 2, processing of an input magnetic resonance image by the established meningioma segmentation model comprises the following steps:
step 201, inputting a magnetic resonance brain image of a patient;
step 202, performing down-sampling sequentially using pooling operations;
step 203, in the last down-sampling step, adding a bidirectional long short-term memory unit that stores feature maps of the forward and backward images in the magnetic resonance brain image sequence and uses them as segmentation constraints;
step 204, performing up-sampling sequentially using bilinear interpolation, wherein feature maps of the same scale from the up-sampling and down-sampling paths are concatenated for information completion;
step 205, classifying each pixel of the feature map restored by the last up-sampling step, and segmenting the meningioma region from the background region, wherein pixels of the meningioma region take the value 1 and pixels of the background region take the value 0.
6. The method according to claim 5, wherein after step 205, the established meningioma segmentation model is trained; after training is completed, a sequence of magnetic resonance brain images is input and a segmentation matrix of the same size as each input image is output, and the 5 magnetic resonance brain images in which the meningioma is most salient, ranked by the sum of their segmentation-matrix values, are selected as representative images of the patient case.
7. The method according to claim 6, wherein the center of the meningioma region in each screened representative image is located according to the 0/1 distribution of the segmentation matrix; taking this point as the center, a 90 × 90 individual meningioma image is extracted outward, and the individual meningioma images are input into the meningioma grading model as effective images.
8. The deep-neural-network-based method for rapid segmentation and qualitative grading of meningioma according to claim 1 or 7, wherein in step 3, grading detection is performed on the effective images by the meningioma grading model and a meningioma grading detection result is output, comprising the following specific steps:
step 301, combining and stacking a plurality of residual learning modules to form a feature extractor that extracts meningioma features;
step 302, performing mean fusion, along the channel dimension, of the features of a plurality of effective images belonging to the same case;
step 303, using a classifier formed by fully connected layers to classify the fused feature vector, and outputting the meningioma grading detection result for the patient case.
9. The deep-neural-network-based method for rapid segmentation and qualitative grading of meningioma as claimed in claim 8, wherein after step 3, the whole network formed by steps 1-3 is trained and tested.
10. The method of claim 9, wherein, when training and testing the whole network, all magnetic resonance brain images are randomly divided into a training set and a test set in a ratio of 8:2.
CN202110161083.2A 2021-02-05 2021-02-05 Meningioma rapid segmentation qualitative method based on deep neural network Active CN112508953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110161083.2A CN112508953B (en) 2021-02-05 2021-02-05 Meningioma rapid segmentation qualitative method based on deep neural network


Publications (2)

Publication Number Publication Date
CN112508953A true CN112508953A (en) 2021-03-16
CN112508953B (en) 2021-05-18

Family

ID=74952770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110161083.2A Active CN112508953B (en) 2021-02-05 2021-02-05 Meningioma rapid segmentation qualitative method based on deep neural network

Country Status (1)

Country Link
CN (1) CN112508953B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095376A (en) * 2021-03-24 2021-07-09 四川大学 Oral cavity epithelial abnormal proliferation distinguishing and grading equipment and system based on deep learning
CN113744271A (en) * 2021-11-08 2021-12-03 四川大学 Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
CN115018836A (en) * 2022-08-08 2022-09-06 四川大学 Automatic dividing and predicting method, system and equipment for epileptic focus
CN116188469A (en) * 2023-04-28 2023-05-30 之江实验室 Focus detection method, focus detection device, readable storage medium and electronic equipment
CN116664590A (en) * 2023-08-02 2023-08-29 中日友好医院(中日友好临床医学研究所) Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image

Citations (16)

Publication number Priority date Publication date Assignee Title
CN103098090A (en) * 2011-12-21 2013-05-08 中国科学院自动化研究所 Multiparameter three-dimensional magnetic resonance imaging brain tumor partition method
CN106709907A (en) * 2016-12-08 2017-05-24 上海联影医疗科技有限公司 MR image processing method and device
CN107220966A (en) * 2017-05-05 2017-09-29 郑州大学 A kind of Histopathologic Grade of Cerebral Gliomas Forecasting Methodology based on image group
CN107610129A (en) * 2017-08-14 2018-01-19 四川大学 A kind of multi-modal nasopharyngeal carcinima joint dividing method based on CNN
CN109584244A (en) * 2018-11-30 2019-04-05 安徽海浪智能技术有限公司 A kind of hippocampus dividing method based on Sequence Learning
CN109686426A (en) * 2018-12-29 2019-04-26 上海商汤智能科技有限公司 Medical imaging processing method and processing device, electronic equipment and storage medium
CN109886922A (en) * 2019-01-17 2019-06-14 丽水市中心医院 Hepatocellular carcinoma automatic grading method based on SE-DenseNet deep learning frame and multi-modal Enhanced MR image
CN110414481A (en) * 2019-08-09 2019-11-05 华东师范大学 A kind of identification of 3D medical image and dividing method based on Unet and LSTM
CN111161241A (en) * 2019-12-27 2020-05-15 联想(北京)有限公司 Liver image identification method, electronic equipment and storage medium
CN111192245A (en) * 2019-12-26 2020-05-22 河南工业大学 Brain tumor segmentation network and method based on U-Net network
CN111340767A (en) * 2020-02-21 2020-06-26 四川大学华西医院 Method and system for processing scalp positioning image of brain tumor
US20200211185A1 (en) * 2018-12-29 2020-07-02 Shenzhen Malong Technologies Co., Ltd. 3d segmentation network and 3d refinement module
CN111445451A (en) * 2020-03-20 2020-07-24 上海联影智能医疗科技有限公司 Brain image processing method, system, computer device and storage medium
CN111862066A (en) * 2020-07-28 2020-10-30 平安科技(深圳)有限公司 Brain tumor image segmentation method, device, equipment and medium based on deep learning
CN112272839A (en) * 2018-06-07 2021-01-26 爱克发医疗保健公司 Sequential segmentation of anatomical structures in three-dimensional scans
CN112289455A (en) * 2020-10-21 2021-01-29 王智 Artificial intelligence neural network learning model construction system and construction method


Non-Patent Citations (5)

Title
FAN XU 等: "LSTM Multi-modal UNet for Brain Tumor Segmentation", 《2019 IEEE 4TH INTERNATIONAL CONFERENCE ON IMAGE, VISION AND COMPUTING》 *
MOHAMED A. NASER 等: "Brain tumor segmentation and grading of lower-grade glioma using deep learning in MRI images", 《COMPUTERS IN BIOLOGY AND MEDICINE》 *
REZA AZAD 等: "Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions", 《ARXIV》 *
CAI, Yajie et al.: "Deep Learning-Based Grading and Staging Prediction of *** MRI Images", 《Computer Knowledge and Technology》 *
XIE, Sudan et al.: "Predicting the Histological Grade of Breast Cancer from Magnetic Resonance T2-Weighted Image Features", 《Chinese Journal of Biomedical Engineering》 *


Also Published As

Publication number Publication date
CN112508953B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN112508953B (en) Meningioma rapid segmentation qualitative method based on deep neural network
CN112101451B (en) Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block
CN109859215B (en) Automatic white matter high signal segmentation system and method based on Unet model
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
Aslam et al. Neurological Disorder Detection Using OCT Scan Image of Eye
CN110189293A (en) Cell image processing method, device, storage medium and computer equipment
CN115909006B (en) Mammary tissue image classification method and system based on convolution transducer
CN112420170B (en) Method for improving image classification accuracy of computer aided diagnosis system
CN112861994A (en) Intelligent gastric ring cell cancer image classification system based on Unet migration learning
CN114596317A (en) CT image whole heart segmentation method based on deep learning
CN114972254A (en) Cervical cell image segmentation method based on convolutional neural network
Eltoukhy et al. Classification of multiclass histopathological breast images using residual deep learning
CN115661066A (en) Diabetic retinopathy detection method based on segmentation and classification fusion
CN112200810B (en) Multi-modal automated ventricle segmentation system and method of use thereof
CN116664590B (en) Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image
Goutham et al. Brain tumor classification using EfficientNet-B0 model
CN115527204A (en) Frame-assisted tumor microenvironment analysis method for liver cancer tissue complete slides
CN114494132A (en) Disease classification system based on deep learning and fiber bundle spatial statistical analysis
CN114723937A (en) Method and system for classifying blood vessel surrounding gaps based on nuclear magnetic resonance image
CN113222887A (en) Deep learning-based nano-iron labeled neural stem cell tracing method
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism
Srivardhini et al. A deep learning based multi-model for early prognosticate of Alzheimer’s dementia using MRI dataset
CN115019045B (en) Small data thyroid ultrasound image segmentation method based on multi-component neighborhood
Akella et al. A novel hybrid model for automatic diabetic retinopathy grading and multi-lesion recognition method based on SRCNN & YOLOv3
CN116310604B (en) Placenta implantation parting assessment tool and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant