CN109872328A - Brain image segmentation method, apparatus, and storage medium - Google Patents

Brain image segmentation method, apparatus, and storage medium

Info

Publication number
CN109872328A
CN109872328A
Authority
CN
China
Prior art keywords
image
module
residual
feature
upper branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910070881.7A
Other languages
Chinese (zh)
Other versions
CN109872328B (en)
Inventor
郭恒
李悦翔
郑冶枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910070881.7A priority Critical patent/CN109872328B/en
Publication of CN109872328A publication Critical patent/CN109872328A/en
Priority to EP20744359.9A priority patent/EP3916674B1/en
Priority to PCT/CN2020/072114 priority patent/WO2020151536A1/en
Priority to US17/241,800 priority patent/US11748889B2/en
Application granted Critical
Publication of CN109872328B publication Critical patent/CN109872328B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention disclose a brain image segmentation method, apparatus, and storage medium. After an image group to be segmented is obtained, skull stripping is performed, on the one hand, according to the multiple modality images in the group to produce a skull-stripped mask; on the other hand, features are extracted from the modality images separately and then fused, and the intracranial tissue is segmented according to the fused features. The initial segmentation result is then fused with the previously obtained mask to yield the final segmentation result. This scheme can improve both the feature representation ability and the accuracy of segmentation.

Description

Brain image segmentation method, apparatus, and storage medium
Technical field
The present invention relates to the field of communication technology, and in particular to a brain image segmentation method, apparatus, and storage medium.
Background technique
Brain diseases are among the principal threats to human health, and the quantitative analysis of brain tissue structure is of great clinical and medical significance. Taking degenerative brain diseases such as Alzheimer's disease, Parkinson's disease, multiple sclerosis, and schizophrenia as examples, these neurological disorders alter the normal volume and regional distribution of brain soft tissue and cerebrospinal fluid. A physician can therefore assess a patient's risk and disease stage by precisely measuring these tissue volumes, and accurate segmentation of brain images, as a prerequisite for precise measurement, is particularly important.
Traditional brain image segmentation is generally performed manually. With the development of computer vision and artificial intelligence, deep-learning techniques for segmenting magnetic resonance imaging (MRI, Magnetic Resonance Imaging) images of the brain have also been proposed. For example, images of multiple modalities of brain tissue may first be fused, skull stripping (separating brain tissue from non-brain tissue) may then be performed with software, and tissue regions may finally be identified and segmented from the stripping result. Although this scheme improves efficiency and segmentation precision compared with the traditional approach, part of the modality information is discarded during fusion, so the feature representation ability is very limited, which significantly affects segmentation accuracy.
Summary of the invention
Embodiments of the present invention provide a brain image segmentation method, apparatus, and storage medium that can improve feature representation ability and segmentation accuracy.
An embodiment of the present invention provides a brain image segmentation method, comprising:
obtaining an image group to be segmented, the image group comprising multiple modality images of a brain;
performing skull stripping according to the multiple modality images to obtain a skull-stripped mask;
extracting features from each of the modality images separately, and fusing the extracted features;
segmenting the intracranial tissue according to the fused features to obtain an initial segmentation result;
fusing the mask with the initial segmentation result to obtain the segmentation result corresponding to the image group to be segmented.
Correspondingly, an embodiment of the present invention provides a brain image segmentation apparatus comprising an acquisition unit, a stripping unit, an extraction unit, a segmentation unit, and a fusion unit, as follows:
the acquisition unit is configured to obtain an image group to be segmented, the image group comprising multiple modality images of a brain;
the stripping unit is configured to perform skull stripping according to the multiple modality images to obtain a skull-stripped mask;
the extraction unit is configured to extract features from each of the modality images separately and fuse the extracted features;
the segmentation unit is configured to segment the intracranial tissue according to the fused features to obtain an initial segmentation result;
the fusion unit is configured to fuse the mask with the initial segmentation result to obtain the segmentation result corresponding to the image group to be segmented.
Optionally, in some embodiments, the multiple modality images include a first-modality image for tissue region segmentation, a second-modality image for intracranial region identification, and a third-modality image for white-matter lesion region recognition; then:
the stripping unit may specifically be configured to perform skull stripping according to the first-modality image and the second-modality image to obtain the skull-stripped mask;
the extraction unit may specifically be configured to extract features from the first-modality image, the second-modality image, and the third-modality image separately, and fuse the extracted features.
Optionally, in some embodiments, the stripping unit may specifically be configured to fuse the first-modality image and the second-modality image to obtain a fused image, predict the type of each voxel in the fused image using a trained 3D fully convolutional network, screen out the voxels that do not belong to the intracranial region according to the predicted voxel types to obtain a background voxel set, and shield the background voxel set in the fused image to obtain the skull-stripped mask.
Optionally, in some embodiments, the extraction unit may specifically be configured to fuse the first-modality image and the second-modality image to obtain a fused image, extract features from the fused image and the third-modality image separately using a trained multi-branch fully convolutional network, and fuse the extracted features.
Optionally, in some embodiments, the trained multi-branch fully convolutional network includes an upper-branch 3D residual structure, a lower-branch 3D residual structure, and a classification network module; the extraction unit may then specifically be configured to extract features from the fused image using the upper-branch 3D residual structure to obtain upper-branch features; extract features from the third-modality image using the lower-branch 3D residual structure to obtain lower-branch features; and fuse the upper-branch features and the lower-branch features through the classification network module.
Optionally, in some embodiments, the upper-branch 3D residual structure includes an upper-branch convolution module, a first upper-branch residual module, a first upper-branch downsampling module, a second upper-branch residual module, a second upper-branch downsampling module, a third upper-branch residual module, and a third upper-branch downsampling module; the extraction unit may then specifically be configured to:
convolve the fused image using the upper-branch convolution module;
encode the output of the upper-branch convolution module using the first upper-branch residual module, and downsample the encoding result using the first upper-branch downsampling module;
encode the output of the first upper-branch downsampling module using the second upper-branch residual module, and downsample the encoding result using the second upper-branch downsampling module;
encode the output of the second upper-branch downsampling module using the third upper-branch residual module, and downsample the encoding result using the third upper-branch downsampling module to obtain the upper-branch features.
Optionally, in some embodiments, the lower-branch 3D residual structure includes a lower-branch convolution module, a first lower-branch residual module, a first lower-branch downsampling module, a second lower-branch residual module, a second lower-branch downsampling module, a third lower-branch residual module, and a third lower-branch downsampling module; the extraction unit may then specifically be configured to:
convolve the third-modality image using the lower-branch convolution module;
encode the output of the lower-branch convolution module using the first lower-branch residual module, and downsample the encoding result using the first lower-branch downsampling module;
encode the output of the first lower-branch downsampling module using the second lower-branch residual module, and downsample the encoding result using the second lower-branch downsampling module;
encode the output of the second lower-branch downsampling module using the third lower-branch residual module, and downsample the encoding result using the third lower-branch downsampling module to obtain the lower-branch features.
Optionally, in some embodiments, the segmentation unit may specifically be configured to classify the fused features using the classification network module, and segment the intracranial tissue based on the classification results to obtain the initial segmentation result.
Optionally, in some embodiments, the fusion unit may specifically be configured to obtain the value of each voxel on the mask and the value of each voxel in the initial segmentation result, build a first matrix from the voxel values of the mask and a second matrix from the voxel values of the initial segmentation result, and perform an element-wise (dot product) operation between the elements of the first matrix and the elements of the second matrix to obtain the segmentation result corresponding to the image group to be segmented.
Optionally, in some embodiments, the brain image segmentation apparatus may further include a first acquisition unit and a first training unit, as follows:
the first acquisition unit is configured to collect multiple first sample image groups, each first sample image group comprising a first-modality image sample for tissue region segmentation and a second-modality image sample for intracranial region identification;
the first training unit is configured to fuse the first-modality image sample and the second-modality image sample to obtain a fused image sample; predict the type of each voxel in the fused image sample using a preset 3D fully convolutional network to obtain predicted values; obtain the true voxel types of the fused image sample; and converge the 3D fully convolutional network according to the predicted and true values using a cross-entropy loss function, yielding the trained 3D fully convolutional network.
Optionally, in some embodiments, the brain image segmentation apparatus may further include a second acquisition unit and a second training unit, as follows:
the second acquisition unit is configured to collect multiple second sample image groups, each second sample image group comprising a first-modality image sample for tissue region segmentation, a second-modality image sample for intracranial region identification, and a third-modality image sample for white-matter lesion region recognition;
the second training unit is configured to fuse the first-modality image sample and the second-modality image sample to obtain a fused image sample; extract features from the fused image sample and the third-modality image sample separately using a preset multi-branch fully convolutional network; fuse the extracted features and classify the fused features to obtain type prediction values; obtain the true types of the fused features; and converge the multi-branch fully convolutional network according to the type prediction values and type true values using a multi-class loss function, yielding the trained multi-branch fully convolutional network.
After obtaining an image group to be segmented, embodiments of the present invention, on the one hand, perform skull stripping according to the multiple modality images in the group to obtain a skull-stripped mask; on the other hand, they extract and fuse features from the modality images separately and segment the intracranial tissue according to the fused features; the initial segmentation result is then fused with the previously obtained mask to produce the final segmentation result. Because features are first extracted from each modality image separately and only then fused during the initial segmentation, the information contained in each modality is preserved as far as possible, improving the expressive power of the extracted features. Moreover, the mask obtained by skull stripping can be used to eliminate false positives from the initial segmentation result. Compared with schemes that first fuse the modality images, strip the skull, and then segment the stripping result, this scheme therefore improves both feature representation ability and segmentation accuracy.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required by the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a scenario diagram of the brain image segmentation method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of the brain image segmentation method provided by an embodiment of the present invention;
Fig. 3 is a structural example diagram of the 3D fully convolutional network provided by an embodiment of the present invention;
Fig. 4 is a structural example diagram of the multi-branch fully convolutional network provided by an embodiment of the present invention;
Fig. 5 is a structural example diagram of the brain image segmentation model provided by an embodiment of the present invention;
Fig. 6 is a structural example diagram of the downsampling module provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the multi-modality images provided by an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of the residual module provided by an embodiment of the present invention;
Fig. 9 is another structural example diagram of the brain image segmentation model provided by an embodiment of the present invention;
Fig. 10 is another flowchart of the brain image segmentation method provided by an embodiment of the present invention;
Fig. 11 is a comparison of an image group to be segmented and its segmentation result provided by an embodiment of the present invention;
Fig. 12 is a structural schematic diagram of the brain image segmentation apparatus provided by an embodiment of the present invention;
Fig. 13 is another structural schematic diagram of the brain image segmentation apparatus provided by an embodiment of the present invention;
Fig. 14 is a structural schematic diagram of the network device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a brain image segmentation method, apparatus, and storage medium. The brain image segmentation apparatus may be integrated in a network device, which may be a server or a device such as a terminal.
For example, referring to Fig. 1 and taking the case where the brain image segmentation apparatus is integrated in a network device, images of multiple modalities of the same person's brain may first be acquired by medical imaging devices such as a CT scanner or an MRI scanner. The collected modality images (e.g., a first-modality image, a second-modality image, ..., an N-th-modality image) are then added to the same set to form an image group to be segmented, which is provided to the network device. After receiving the image group to be segmented, the network device may, on the one hand, perform skull stripping according to the multiple modality images to obtain a skull-stripped mask, and on the other hand, segment the intracranial tissue according to the modality images to obtain an initial segmentation result; the mask and the initial segmentation result are then fused to obtain the segmentation result corresponding to the image group to be segmented.
When performing the initial segmentation, to prevent the loss of image information from weakening the feature representation ability, this scheme does not fuse the multiple modality images before feature extraction. Instead, features are extracted from each modality image separately and only then fused, and the intracranial tissue is segmented according to the fused features to obtain the initial segmentation result. Because this improves feature representation ability, and because fusing the mask with the initial segmentation result also removes false positives from it, the accuracy of segmentation can be greatly improved.
Detailed descriptions are given below. Note that the order of the following description is not intended as a limitation on the preferred order of the embodiments.
In this embodiment, the description is given from the perspective of a brain image segmentation apparatus, which may be integrated in a network device such as a terminal or a server.
A brain image segmentation method comprises: obtaining an image group to be segmented, the image group comprising multiple modality images of a brain; performing skull stripping according to the modality images to obtain a skull-stripped mask; extracting features from the modality images separately and fusing the extracted features; segmenting the intracranial tissue according to the fused features to obtain an initial segmentation result; and fusing the mask with the initial segmentation result to obtain the segmentation result corresponding to the image group to be segmented.
As shown in Fig. 2, the detailed flow of the brain image segmentation apparatus may be as follows:
101. Obtain an image group to be segmented.
For example, the apparatus may receive an image group to be segmented sent by a medical imaging device, such as a computed tomography (CT, Computed Tomography) scanner or a magnetic resonance imaging scanner. Each image group to be segmented may be obtained by acquiring images of the same object, for example the brain of the same person, with the medical imaging devices.
The image group to be segmented refers to the set of images on which image segmentation needs to be performed. It may specifically include multiple modality images of a brain, for example a first-modality image for tissue region segmentation (such as the T1 image among MRI modalities), a second-modality image for intracranial region identification (such as the T1-IR image among MRI modalities), and a third-modality image for white-matter lesion region recognition (such as the T2 image among MRI modalities), and so on.
The T1 image is an MRI data modality obtained by capturing only the longitudinal relaxation of hydrogen protons during imaging, and is mainly used to observe anatomical detail. The T1-IR (T1 Inversion Recovery) image uses a fat suppression technique in MRI that suppresses the signal intensity of adipose tissue to highlight other tissues; it is very useful for observing the adrenal glands, bone marrow, and fatty tumours. In the embodiments of the present invention, the T2 image mainly refers to the FLAIR (Fluid Attenuated Inversion Recovery) image, an MRI technique that suppresses the signal intensity of water; it is very useful for displaying cerebral oedema and the periventricular or cortical lesions produced by multiple sclerosis.
102. Perform skull stripping according to the multiple modality images to obtain a skull-stripped mask.
Skull stripping refers to the operation of removing the skull from a brain MRI image, that is, identifying the intracranial region in the image and separating it from the background region (the region other than the intracranial region), i.e., separating brain tissue from non-brain tissue. The mask is the image obtained by retaining the intracranial region while shielding the voxels of the background region.
Note that skull stripping may use all of the modality images or only some of them. For example, if the multiple modality images include a first-modality image, a second-modality image, and a third-modality image, skull stripping may be performed according to the first-modality and second-modality images alone to obtain the skull-stripped mask, and so on. The skull stripping procedure may specifically be as follows:
(1) Fuse the first-modality image and the second-modality image to obtain a fused image.
There are many possible fusion methods; for example, the first-modality image and the second-modality image may be fused by feature addition or channel concatenation to obtain the fused image.
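As a concrete illustration of the two fusion options, the following minimal PyTorch sketch is not taken from the patent; it assumes each modality is a registered, single-channel 3D volume of identical shape, with hypothetical sizes.

```python
import torch

# Hypothetical example: two registered single-channel 3D modality volumes
# of shape (batch, channel, depth, height, width).
t1 = torch.randn(1, 1, 48, 240, 240)     # first-modality image (e.g. T1)
t1_ir = torch.randn(1, 1, 48, 240, 240)  # second-modality image (e.g. T1-IR)

# Option 1: channel concatenation -> a 2-channel fused volume.
fused_concat = torch.cat([t1, t1_ir], dim=1)  # shape (1, 2, 48, 240, 240)

# Option 2: feature (voxel-wise) addition -> a 1-channel fused volume.
fused_add = t1 + t1_ir                        # shape (1, 1, 48, 240, 240)
```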
(2) Predict the type of each voxel in the fused image using a trained 3D fully convolutional network (3D Fully Convolutional Network).
The trained 3D fully convolutional network is one branch of the trained brain image segmentation model provided by the embodiments of the present invention. It may consist of two parts: a 3D residual network (this part may be called the encoder), which encodes the fused image to extract features, and a classification network (this part may be called the decoder), which is mainly used for decoding, i.e., predicting the voxel types.
If the trained 3D fully convolutional network includes a 3D residual network and a classification network, the step "predict the type of each voxel in the fused image using the trained 3D fully convolutional network" may include:
A. Extract features from the fused image through the 3D residual network to obtain feature information.
The specific structure of the 3D residual network may depend on the requirements of the practical application; for example, it may include a convolution layer, multiple residual modules (ResBlock), and multiple downsampling modules (Downsample). Residual modules of different sizes (i.e., input/output sizes) implement feature extraction at different scales, while the downsampling modules generate reduced versions of the corresponding feature maps so that the output matches the required size.
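The internal layout of a residual module is given only in Fig. 8, which is not reproduced here; the sketch below therefore assumes a standard 3D residual block (two 3 × 3 × 3 convolutions with batch normalization and ReLU plus an identity shortcut, with a 1 × 1 × 1 projection when the channel count changes). It is a hedged reconstruction, not necessarily the patent's exact block.

```python
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    """Assumed 3D residual module: conv-BN-ReLU-conv-BN plus a shortcut."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1x1 projection so the shortcut matches when channels change.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv3d(in_ch, out_ch, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))
```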
For example, referring to Fig. 3, suppose the 3D residual network includes: a convolution layer of size 32 (i.e., the input/output size) with a 3 × 3 × 3 kernel (convolution layer 32), a residual module of size 32 (residual module 32), a residual module of size 64 (residual module 64), a residual module of size 128 (residual module 128), and multiple downsampling modules. The feature-extraction process for the fused image may then be as follows:
After the fused image is convolved by the convolution layer, the convolution result is fed into "residual module 32" for feature extraction. The feature map (Feature Map) output by "residual module 32" is downsampled by a downsampling module and then fed into the next residual module, "residual module 64", for feature extraction at another scale. Similarly, the feature map output by "residual module 64" is downsampled by another downsampling module and fed into "residual module 128"; after "residual module 128", yet another downsampling module downsamples its output, and so on. After feature extraction by residual modules of multiple sizes and the corresponding downsampling operations, feature maps at multiple scales (i.e., multiple levels) and a final output value of the 3D residual network are obtained. For convenience, in the embodiments of the present invention, these feature maps together with the final output of the 3D residual network are referred to as the feature information.
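Putting the pieces together, the following sketch mirrors the conv 32 → res 32 → down → res 64 → down → res 128 → down pipeline just described and returns both the per-scale feature maps and the final output, i.e., the "feature information". It reuses the ResBlock3d from the previous sketch; plain max pooling is used as the downsampling choice here, one of the options named later in the text.

```python
import torch
import torch.nn as nn

class ResEncoder3d(nn.Module):
    """Encoder of the 3D FCN: conv 32 -> res 32 -> down -> res 64 -> down
    -> res 128 -> down, as described for Fig. 3 (a sketch, not the exact net)."""
    def __init__(self, in_ch: int = 2):  # 2 channels if fusion was concatenation
        super().__init__()
        self.conv = nn.Conv3d(in_ch, 32, kernel_size=3, padding=1)  # "convolution layer 32"
        self.res32 = ResBlock3d(32, 32)    # ResBlock3d from the sketch above
        self.res64 = ResBlock3d(32, 64)
        self.res128 = ResBlock3d(64, 128)
        self.down = nn.MaxPool3d(kernel_size=2, stride=2)  # simple downsampling choice

    def forward(self, fused: torch.Tensor):
        f32 = self.res32(self.conv(fused))  # scale-1 feature map
        f64 = self.res64(self.down(f32))    # scale-2 feature map
        f128 = self.res128(self.down(f64))  # scale-3 feature map
        final = self.down(f128)             # final output of the residual network
        # The per-scale maps plus the final output form the "feature information".
        return f32, f64, f128, final
```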
B. Use the classification network to predict the type of each voxel in the fused image from the obtained feature information.
The specific structure of the classification network may also depend on the requirements of the practical application; for example, it may include multiple residual modules and multiple deconvolution layers (Deconv), and may further include a convolution layer and multiple batch normalization (BN, Batch Normalization) layers and activation (ReLU) layers (BN and activation may also be implemented as a single layer).
For example, as shown in Fig. 3 and Fig. 5, suppose the classification network includes: a residual module of size 256 ("residual module 256"), a deconvolution layer of size 128 (deconvolution layer 128), a residual module of size 128 (residual module 128), a deconvolution layer of size 64 (deconvolution layer 64), a residual module of size 64 (residual module 64), a deconvolution layer of size 32 (deconvolution layer 32), a residual module of size 32 (residual module 32), a convolution layer of size 2 with a 1 × 1 × 1 kernel (convolution layer 2), and multiple BN and activation layers (batch normalization and activation layers). The voxel-type prediction process may then be as follows:
The final output of the 3D residual network is processed by "residual module 256", and the result is fed into deconvolution layer 128 for deconvolution. The deconvolution result is fused with the feature map produced by "residual module 128" of the residual network; the fusion result is batch-normalized and given a non-linear factor (introduced by the activation function), and the output is fed into the classification network's own "residual module 128". Processing continues in the same way: after "residual module 128" and deconvolution layer 64, the deconvolution result is fused with the feature map produced by "residual module 64" of the residual network; after batch normalization and activation, the output is fed into the classification network's "residual module 64"; after "residual module 64" and deconvolution layer 32, the deconvolution result is fused with the feature map produced by "residual module 32" of the residual network. Finally, after this fusion result passes through batch normalization and activation, "residual module 32", and the final convolution layer (convolution layer 2), the type of each voxel in the fused image can be predicted.
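The decoding path just described can be sketched as follows, again under stated assumptions: ResBlock3d and ResEncoder3d come from the earlier sketches, skip fusion is done by voxel-wise addition (one of the fusion options named in the text), and "deconvolution layer N" is taken to be a stride-2 transposed convolution with N output channels.

```python
import torch
import torch.nn as nn

def bn_relu(ch: int) -> nn.Sequential:
    """The 'BN, ReLU' pair applied after each skip fusion."""
    return nn.Sequential(nn.BatchNorm3d(ch), nn.ReLU(inplace=True))

class FCNDecoder3d(nn.Module):
    """Classification network of the 3D FCN (Fig. 3): res 256 -> deconv 128
    -> +skip -> res 128 -> deconv 64 -> +skip -> res 64 -> deconv 32 -> +skip
    -> res 32 -> 1x1x1 conv. A hedged sketch, not the exact patented net."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.res256 = ResBlock3d(128, 256)
        self.up128 = nn.ConvTranspose3d(256, 128, kernel_size=2, stride=2)
        self.fuse128 = bn_relu(128)
        self.res128 = ResBlock3d(128, 128)
        self.up64 = nn.ConvTranspose3d(128, 64, kernel_size=2, stride=2)
        self.fuse64 = bn_relu(64)
        self.res64 = ResBlock3d(64, 64)
        self.up32 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.fuse32 = bn_relu(32)
        self.res32 = ResBlock3d(32, 32)
        self.head = nn.Conv3d(32, num_classes, kernel_size=1)  # "convolution layer 2"

    def forward(self, f32, f64, f128, final):
        x = self.up128(self.res256(final))
        x = self.res128(self.fuse128(x + f128))  # fuse with encoder "residual module 128" map
        x = self.up64(x)
        x = self.res64(self.fuse64(x + f64))     # fuse with encoder "residual module 64" map
        x = self.up32(x)
        x = self.res32(self.fuse32(x + f32))     # fuse with encoder "residual module 32" map
        return self.head(x)                      # per-voxel class logits
```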
The trained 3D fully convolutional network may be formed by training on multiple first sample image groups. It may be trained by another device and then provided to the medical image segmentation apparatus, or it may be trained by the apparatus itself. That is, before the step "predict the type of each voxel in the fused image using the trained 3D fully convolutional network", the brain image segmentation method may further include:
collecting multiple first sample image groups, each comprising a first-modality image sample for tissue region segmentation and a second-modality image sample for intracranial region identification; fusing the first-modality and second-modality image samples to obtain a fused image sample; predicting the type of each voxel in the fused image sample using a preset 3D fully convolutional network to obtain predicted values; obtaining the true voxel types of the fused image sample; and converging the 3D fully convolutional network according to the predicted and true values using a cross-entropy loss function, yielding the trained 3D fully convolutional network.
For example, multiple 3D medical images of brains may be collected as a raw dataset. The images in the raw dataset are then preprocessed, for example by deduplication, cropping, rotation, and/or flipping, to obtain images that meet the input standard of the preset 3D fully convolutional network. The preprocessed images are then annotated with voxel types, and the images belonging to the same brain are added to the same image set, so that each brain corresponds to one image set. These image sets are referred to as first sample image groups in the embodiments of the present invention.
There are many ways to fuse the first-modality and second-modality image samples, for example feature addition or channel concatenation. In addition, the method of predicting voxel types in the fused image sample with the preset 3D fully convolutional network is similar to the processing of the first-modality and second-modality images described above, so it is not repeated here.
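A minimal training sketch under the same assumptions (the encoder/decoder sketches above; hypothetical tensors standing in for a real annotated first sample image group; Adam as an assumed optimizer choice). Applying nn.CrossEntropyLoss to (N, C, D, H, W) logits against (N, D, H, W) integer labels implements the cross-entropy convergence step described here.

```python
import torch
import torch.nn as nn

# Components from the earlier sketches.
encoder, decoder = ResEncoder3d(in_ch=2), FCNDecoder3d(num_classes=2)
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)  # optimizer choice is an assumption
criterion = nn.CrossEntropyLoss()

# Hypothetical fused sample (T1 + T1-IR concatenated) and its voxel-type
# labels (0 = background, 1 = intracranial), standing in for real data.
fused_sample = torch.randn(1, 2, 32, 64, 64)
true_types = torch.randint(0, 2, (1, 32, 64, 64))

logits = decoder(*encoder(fused_sample))  # predicted voxel types
loss = criterion(logits, true_types)      # cross-entropy of prediction vs. truth
optimizer.zero_grad()
loss.backward()
optimizer.step()
```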
(3) Screen out the voxels that do not belong to the intracranial region according to the predicted voxel types to obtain a background voxel set.
(4) Shield the background voxel set in the fused image to obtain the skull-stripped mask.
For example, the value of each voxel in the intracranial region may be set to 1 and the value of each background voxel to 0, thereby obtaining the skull-stripped mask.
In this way, when the mask is subsequently fused (for example multiplied) with the initial segmentation result, the image values of the intracranial region in the initial segmentation result remain unchanged while the values outside the intracranial region all become 0; that is, false positives in the initial segmentation result can be eliminated.
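In code terms, the mask construction of steps (3) and (4) amounts to thresholding the predicted voxel types; a short sketch under the assumptions above (channel 0 = background, channel 1 = intracranial is an assumed convention):

```python
import torch

# logits: output of the trained 3D FCN, shape (1, 2, D, H, W).
pred_types = logits.argmax(dim=1)  # predicted type of each voxel
mask = (pred_types == 1).float()   # intracranial voxels -> 1, background -> 0
```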
103. Extract features from the multiple modality images separately, and fuse the extracted features.
For example, still taking the case where the multiple modality images include a first-modality image, a second-modality image, and a third-modality image, features may be extracted from the three modality images separately and the extracted features then fused; specifically, the procedure may be as follows:
Fuse the first-modality image and the second-modality image to obtain a fused image; extract features from the fused image and the third-modality image separately using the trained multi-branch fully convolutional network; and fuse the extracted features.
Note that, optionally, if the first-modality and second-modality images were already fused and saved in step 102, the saved fused image may be read directly here, without fusing the two images again. That is, as shown in Fig. 5, after the first-modality and second-modality images are fused once, the fused image may be supplied to both the first stage and the second stage.
The trained multi-branch fully convolutional network is the other branch of the trained brain image segmentation model provided by the embodiments of the present invention. Its structure may be configured according to the requirements of the practical application; for example, it may include an upper-branch 3D residual structure, a lower-branch 3D residual structure, and a classification network module. The upper-branch and lower-branch 3D residual structures may be called the encoder and are used for encoding, i.e., extracting features from the images, while the classification network module acts as the decoder and is mainly used for decoding, i.e., predicting the voxel types and performing segmentation.
If the trained multi-branch fully convolutional network includes an upper-branch 3D residual structure, a lower-branch 3D residual structure, and a classification network module, the step "extract features from the fused image and the third-modality image separately using the trained multi-branch fully convolutional network, and fuse the extracted features" may include:
(1) Extract features from the fused image using the upper-branch 3D residual structure to obtain upper-branch features.
The upper-branch 3D residual structure may include an upper-branch convolution module, a first upper-branch residual module, a first upper-branch downsampling module, a second upper-branch residual module, a second upper-branch downsampling module, a third upper-branch residual module, and a third upper-branch downsampling module. The step "extract features from the fused image using the upper-branch 3D residual structure to obtain upper-branch features" may then include:
A. Convolve the fused image using the upper-branch convolution module;
B. Encode the output of the upper-branch convolution module using the first upper-branch residual module, and downsample the encoding result using the first upper-branch downsampling module;
C. Encode the output of the first upper-branch downsampling module using the second upper-branch residual module, and downsample the encoding result using the second upper-branch downsampling module;
D. Encode the output of the second upper-branch downsampling module using the third upper-branch residual module, and downsample the encoding result using the third upper-branch downsampling module to obtain the upper-branch features.
The network parameters of the upper-branch convolution module and the first, second, and third upper-branch residual modules may be configured according to the requirements of the practical application. For example, referring to Fig. 4 and Fig. 5, the upper-branch convolution module may be set to a convolution layer of size 32 with a 3 × 3 × 3 kernel (the upper branch's convolution layer 32 in Fig. 4); the first upper-branch residual module may be set to a residual module of size 32 (the upper branch's residual module 32 in Fig. 4); the second upper-branch residual module to a residual module of size 64 (the upper branch's residual module 64 in Fig. 4); and the third upper-branch residual module to a residual module of size 128 (the upper branch's residual module 128 in Fig. 4), and so on.
(2) Extract features from the third-modality image using the lower-branch 3D residual structure to obtain lower-branch features.
The lower-branch 3D residual structure includes a lower-branch convolution module, a first lower-branch residual module, a first lower-branch downsampling module, a second lower-branch residual module, a second lower-branch downsampling module, a third lower-branch residual module, and a third lower-branch downsampling module. The step "extract features from the third-modality image using the lower-branch 3D residual structure to obtain lower-branch features" may then include:
A. Convolve the third-modality image using the lower-branch convolution module;
B. Encode the output of the lower-branch convolution module using the first lower-branch residual module, and downsample the encoding result using the first lower-branch downsampling module;
C. Encode the output of the first lower-branch downsampling module using the second lower-branch residual module, and downsample the encoding result using the second lower-branch downsampling module;
D. Encode the output of the second lower-branch downsampling module using the third lower-branch residual module, and downsample the encoding result using the third lower-branch downsampling module to obtain the lower-branch features.
The network parameters of the lower-branch convolution module and the first, second, and third lower-branch residual modules may likewise be configured according to the requirements of the practical application. For example, referring to Fig. 4 and Fig. 5, the lower-branch convolution module may be set to a convolution layer of size 32 with a 3 × 3 × 3 kernel (the lower branch's convolution layer 32 in Fig. 4); the first lower-branch residual module to a residual module of size 32 (the lower branch's residual module 32 in Fig. 4); the second lower-branch residual module to a residual module of size 64 (the lower branch's residual module 64 in Fig. 4); and the third lower-branch residual module to a residual module of size 128 (the lower branch's residual module 128 in Fig. 4), and so on.
In addition, during the extraction of the upper-branch and lower-branch features, the step "downsample the encoding result" may be implemented in many ways. For example, a max-pooling layer may be used to downsample the encoding result. Alternatively, other methods may be used: for instance, two parallel convolution layers with the same stride but different kernels may convolve the encoding result, after which a batch normalization layer batch-normalizes the convolution results, achieving the purpose of downsampling.
The network parameters of the "two parallel, same-stride convolution layers with different kernels" may be configured according to the requirements of the practical application. For example, referring to Fig. 6, the stride of both convolution layers may be set to 2 (i.e., s = 2 × 2 × 2), and the kernels may be set to 3 × 3 × 3 and 1 × 1 × 1 respectively (see "Conv 3 × 3 × 3, s = 2 × 2 × 2" and "Conv 1 × 1 × 1, s = 2 × 2 × 2" in Fig. 6). Optionally, this downsampling structure may also include an activation layer, so that a non-linear factor is added to the batch-normalized result to improve the expressive power of the features. For convenience, the batch normalization layer and the activation layer are jointly labelled "BN, ReLU" in Fig. 6.
Optionally, " down sample module " in step 102 is in addition to that can use maximum pond layer come other than realizing, similarly It can be realized using down-sampling structure as shown in FIG. 6, therefore not to repeat here.
(3) Fuse the upper-branch features and the lower-branch features through the classification network module.
For example, the upper-branch features may be fused with the lower-branch features by voxel-wise addition or multiplication, and so on.
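The two-branch extraction and fusion can be sketched as follows, reusing ResEncoder3d from the earlier sketch for both branches (the two branches have identical layouts in Fig. 4) and fusing the branch outputs by voxel-wise addition, one of the two options just named. For brevity the addition is shown inside this encoder wrapper, although the text places the fusion inside the classification network module; routing the upper branch's skip features to the decoder is also an assumption.

```python
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    """Upper branch consumes the fused (T1 + T1-IR) image, lower branch the
    third-modality image; the final outputs are fused voxel-wise. A sketch."""
    def __init__(self):
        super().__init__()
        self.upper = ResEncoder3d(in_ch=2)  # upper-branch 3D residual structure
        self.lower = ResEncoder3d(in_ch=1)  # lower-branch 3D residual structure

    def forward(self, fused_img: torch.Tensor, third_img: torch.Tensor):
        u32, u64, u128, upper_feat = self.upper(fused_img)
        _, _, _, lower_feat = self.lower(third_img)
        merged = upper_feat + lower_feat    # voxel-wise addition of branch features
        return u32, u64, u128, merged       # skips feed the classification module
```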
104. Segment the intracranial tissue according to the fused features to obtain an initial segmentation result; for example, specifically as follows:
Classify the fused features using the classification network module, and segment the intracranial tissue based on the classification results to obtain the initial segmentation result.
For example, according to the classification results, the fused features belonging to the "white-matter lesion" type may be added to the same set, and segmentation based on that set yields the corresponding white-matter lesion region, and so on.
For example, as shown in Fig. 4 and Fig. 5, suppose the classification network module includes: a convolution layer of size 256 with a 1 × 1 × 1 kernel (convolution layer 256), a residual module of size 256 ("residual module 256"), a deconvolution layer of size 128 (deconvolution layer 128), a residual module of size 128 (residual module 128), a deconvolution layer of size 64 (deconvolution layer 64), a residual module of size 64 (residual module 64), a deconvolution layer of size 32 (deconvolution layer 32), a residual module of size 32 (residual module 32), a convolution layer of size 10 with a 1 × 1 × 1 kernel (convolution layer 10), and multiple BN and activation layers (batch normalization and activation layers). The process of classifying the fused features may then be as follows:
The fused features are convolved by "convolution layer 256" and processed by "residual module 256", and the result is fed into deconvolution layer 128 for deconvolution. The deconvolution result is fused with the feature map produced by "residual module 128" of the residual network; after the fusion result is batch-normalized and given a non-linear factor, it is fed into the classification network's own "residual module 128". Processing continues in the same way: after "residual module 128" and deconvolution layer 64, the deconvolution result is fused with the feature map produced by "residual module 64" of the residual network; after batch normalization and activation, the output is fed into the classification network's "residual module 64"; after "residual module 64" and deconvolution layer 32, the deconvolution result is fused with the feature map produced by "residual module 32" of the residual network. Finally, after this fusion result passes through batch normalization and activation, "residual module 32", and the final convolution layer (convolution layer 10), the type of each fused feature is obtained.
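Converting the 10-channel output into the initial segmentation result is then a per-voxel argmax; collecting the voxels of one class, for example an assumed "white-matter lesion" label index, yields that region:

```python
import torch

# logits: output of the classification network module, shape (1, 10, D, H, W).
initial_seg = logits.argmax(dim=1)  # per-voxel tissue type (0..9)
lesion_region = (initial_seg == 9)  # voxels of one class, e.g. an assumed
                                    # white-matter-lesion label index
```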
Optionally, the trained multi-branch fully convolutional network may be formed by training on multiple second sample image groups. It may be trained by another device and then provided to the medical image segmentation apparatus, or trained by the apparatus itself. That is, before the step "extract features from the fused image and the third-modality image separately using the trained multi-branch fully convolutional network", the brain image segmentation method may further include:
collecting multiple second sample image groups, each comprising a first-modality image sample for tissue region segmentation, a second-modality image sample for intracranial region identification, and a third-modality image sample for white-matter lesion region recognition; fusing the first-modality and second-modality image samples to obtain a fused image sample; extracting features from the fused image sample and the third-modality image sample separately using a preset multi-branch fully convolutional network; fusing the extracted features and classifying the fused features to obtain type prediction values; obtaining the true types of the fused features; and converging the multi-branch fully convolutional network according to the type prediction values and type true values using a multi-class loss function, yielding the trained multi-branch fully convolutional network.
For example, multiple three-dimensional medical images of brains may be collected as a raw data set, and the images in the raw data set may then be preprocessed, for example by de-duplication, cropping, rotation, and/or flipping, to obtain images that meet the input standard of the preset multi-branch fully convolutional network. Voxel types are then annotated on these preprocessed images, and images belonging to the same brain are added to the same image set, so that each brain corresponds to one image set. In the embodiments of the present invention, these image sets are referred to as second sample image groups.
There may be multiple manners of fusing the first-modality image sample and the second-modality image sample, for example feature addition or channel concatenation; a brief sketch of these two manners is given below. Optionally, if the first-modality image and the second-modality image have already been fused and the result saved in step 102, the saved fused image may be read directly at this point, without fusing the first-modality image sample and the second-modality image sample again. That is, as shown in FIG. 5, after the first-modality image sample and the second-modality image sample are fused, the fused image may be provided to both the first stage and the second stage for use.
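A minimal sketch (PyTorch) of the two fusion manners named above: voxel-wise feature addition versus channel concatenation. The tensor layout (N, C, D, H, W) and the spatial sizes are assumptions for illustration.

```python
import torch

t1    = torch.randn(1, 1, 48, 240, 240)   # first-modality image (e.g. T1)
t1_ir = torch.randn(1, 1, 48, 240, 240)   # second-modality image (e.g. T1_IR)

fused_add = t1 + t1_ir                     # feature (voxel-wise) addition
fused_cat = torch.cat([t1, t1_ir], dim=1)  # channel concatenation -> 2 channels
```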
It should be noted that steps 102 and 103 may be performed in any order.
105. Fuse the mask and the initial segmentation result, to obtain the segmentation result corresponding to the to-be-segmented image group.
For example, the mask may be multiplied voxel-wise with the initial segmentation result (element-wise multiplication), to obtain the segmentation result corresponding to the to-be-segmented image group. Specifically, this may be as follows (as sketched below):
obtain the value of each voxel on the mask and the value of each voxel in the initial segmentation result respectively, build a first matrix from the voxel values on the mask and a second matrix from the voxel values in the initial segmentation result, and perform an element-wise product of the elements in the first matrix and the elements in the second matrix, to obtain the segmentation result corresponding to the to-be-segmented image group.
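A minimal sketch of this voxel-wise (element-wise) multiplication: the stripped-skull mask zeroes out everything the initial segmentation marked outside the intracranial region. The shapes and values are assumptions for illustration.

```python
import torch

mask = torch.tensor([[1., 1.], [0., 1.]])          # 1 = intracranial, 0 = background
initial_seg = torch.tensor([[3., 0.], [2., 5.]])   # per-voxel class labels
final_seg = mask * initial_seg                     # element-wise product
# tensor([[3., 0.], [0., 5.]]) -- the false-positive label 2 is removed
```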
It can be seen from the above that, after obtaining the to-be-segmented image group, this embodiment can, on one hand, perform skull stripping according to the multi-modality images in the to-be-segmented image group to obtain a stripped-skull mask, and on the other hand, perform feature extraction and fusion on the multi-modality images separately and segment the intracranial tissue according to the fused features; the initial segmentation result obtained by the segmentation is then fused with the previously obtained mask, to obtain the final segmentation result. Because this scheme first extracts features from each modality image and then fuses them when performing the initial segmentation, the information contained in each modality can be retained as far as possible, improving the expressive power of the extracted features. Moreover, the mask obtained by skull separation can be used to eliminate false positives in the initial segmentation result (a false positive means that the type of a pixel has been misjudged, for example, a pixel that is actually background is taken as the target object during segmentation). Therefore, compared with a scheme that directly fuses the modality images, strips the skull, and then segments based on the stripping result, this scheme can improve both the feature expression capability and the segmentation accuracy.
The method described in the foregoing embodiment is further detailed below by way of example.
In this embodiment, description is given by using an example in which the brain image segmentation apparatus is integrated in a network device, the first-modality image is specifically a T1 image of the magnetic resonance modalities, the second-modality image is specifically a T1_IR image of the magnetic resonance modalities, and the third-modality image is specifically a T2 image of the magnetic resonance modalities. As shown in FIG. 7, the T1 image is generally used for tissue-region segmentation, the T1_IR image is generally used for intracranial-region identification (i.e., skull separation), and the T2 image is generally used for white-matter-lesion region identification.
(1) Training of the brain image segmentation model.
The brain image segmentation model may include a three-dimensional fully convolutional network and a multi-branch fully convolutional network, which may be trained as follows:
(1) Training of the three-dimensional fully convolutional network.
First, the network device may collect multiple first sample image groups, where each first sample image group may include images of the brain of the same object (for example, the same person) under multiple modalities (i.e., multi-modality images), such as a T1 image and a T1_IR image, and each modality image is annotated with ground-truth voxel types.
Second, after collecting the multiple first sample image groups, the network device may use part of the first sample image groups as a training data set and the other part as a validation data set; for example, the first sample image groups may be randomly divided into a training data set and a validation data set at a ratio of 4:1. The preset three-dimensional fully convolutional network may then be trained with the training data set and verified with the validation data set, to obtain the trained three-dimensional fully convolutional network.
The process in which the network device trains the preset three-dimensional fully convolutional network with the training data set may be as follows:
A1. The network device fuses the T1 image and the T1_IR image in a first sample image group, to obtain a fused image sample.
For example, the network device may fuse the T1 image and the T1_IR image by feature addition or channel concatenation, to obtain the fused image sample.
A2. The network device predicts the types of the voxels in the fused image by using the preset three-dimensional fully convolutional network, to obtain prediction values.
The encoder part of the three-dimensional fully convolutional network may be implemented by a three-dimensional residual network, and may specifically include a convolutional layer, multiple residual modules, and multiple down-sampling modules; the decoder part may be implemented by a classification network, and may specifically include multiple residual modules and multiple deconvolutional layers. In addition, the decoder part may further include a convolutional layer and multiple batch normalization (BN) layers and activation (ReLU) layers (BN and the activation function may also be implemented as a single layer, i.e., a batch normalization and activation layer); for details, refer to FIG. 3, FIG. 5 and FIG. 9.
For example, the network device may import the fused image into the three-dimensional residual network of the three-dimensional fully convolutional network for feature extraction to obtain feature information, and then process the feature information with the classification network of the three-dimensional fully convolutional network, to predict the type of each voxel in the fused image and obtain the prediction values of the voxel types in the fused image.
A3. The network device determines the ground-truth voxel types in the fused image according to the annotations of the modality images in the first sample image group.
A4. Converge the three-dimensional fully convolutional network according to the prediction values and ground-truth values by using a cross-entropy loss function, to obtain the trained three-dimensional fully convolutional network; a sketch of this step follows.
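A minimal PyTorch sketch of the convergence step A4: cross entropy between the predicted voxel types and the ground-truth annotation. The model, optimizer, class count, and shapes are all assumptions; the placeholder network stands in for the three-dimensional fully convolutional network, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

model = nn.Conv3d(2, 10, kernel_size=1)          # placeholder for the 3D FCN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()                # cross-entropy loss function

fused = torch.randn(1, 2, 8, 32, 32)             # fused T1 + T1_IR sample
truth = torch.randint(0, 10, (1, 8, 32, 32))     # ground-truth voxel types

logits = model(fused)                            # (1, 10, 8, 32, 32) predictions
loss = criterion(logits, truth)                  # compare with the annotation
loss.backward()
optimizer.step()                                 # converge the network
```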
Optionally, the trained three-dimensional fully convolutional network may also be verified and adjusted with the validation data set, to improve the accuracy of the segmentation model.
(2) Training of the multi-branch fully convolutional network.
First, the network device may collect multiple second sample image groups, where each second sample image group may include images of the brain of the same object (for example, the same person) under multiple modalities, such as a T1 image, a T1_IR image, and a T2 image, and each modality image is annotated with ground-truth voxel types.
Similarly to the first sample image groups, after collecting the multiple second sample image groups, the network device may use part of the second sample image groups as a training data set and the other part as a validation data set; for example, the second sample image groups may be randomly divided into a training data set and a validation data set at a ratio of 4:1. The preset multi-branch fully convolutional network may then be trained with the training data set and verified with the validation data set, to obtain the trained multi-branch fully convolutional network.
The process in which the network device trains the preset multi-branch fully convolutional network with the training data set may be as follows:
B1. The network device fuses the T1 image and the T1_IR image in a second sample image group, to obtain a fused image sample.
For example, the T1 image and the T1_IR image may be fused by feature addition or channel concatenation, to obtain the fused image sample.
B2. The network device separately performs feature extraction on the fused image sample and the third-modality image sample by using the preset multi-branch fully convolutional network, and fuses the extracted features.
The encoder part of the multi-branch fully convolutional network may include an upper-branch three-dimensional residual structure and a lower-branch three-dimensional residual structure, and the decoder part may be implemented by a classification network module; for the specific structure, refer to FIG. 4. The upper-branch three-dimensional residual structure performs feature extraction on the fused image sample to obtain upper-branch features; the lower-branch three-dimensional residual structure performs feature extraction on the T2 image to obtain lower-branch features; and the classification network module fuses the upper-branch features and the lower-branch features, and classifies and segments the fused features (i.e., implements step B3).
Optionally, the upper-branch three-dimensional residual structure and the lower-branch three-dimensional residual structure have similar network structures; the specific numbers of convolutional layers and residual modules and the network parameters of each layer may be set according to the requirements of the practical application. For example, referring to FIG. 4, FIG. 5 and FIG. 9, the structures may be as follows (a sketch of one such branch follows the two lists):
The upper-branch three-dimensional residual structure includes: an upper-branch convolution module, a first upper-branch residual module, a first upper-branch down-sampling module, a second upper-branch residual module, a second upper-branch down-sampling module, a third upper-branch residual module, and a third upper-branch down-sampling module.
The lower-branch three-dimensional residual structure includes: a lower-branch convolution module, a first lower-branch residual module, a first lower-branch down-sampling module, a second lower-branch residual module, a second lower-branch down-sampling module, a third lower-branch residual module, and a third lower-branch down-sampling module.
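A minimal sketch of one such branch: a convolution module followed by three (residual module, down-sampling module) stages. Channel widths, max pooling as the down-sampling choice, and the simplified residual block are assumptions; the patent leaves these to the practical application (the two-branch residual variant of FIG. 8 is sketched later).

```python
import torch
import torch.nn as nn

def residual_module(in_ch, out_ch):
    # simplified placeholder residual block
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))

class Branch(nn.Module):
    def __init__(self, in_ch=1, widths=(32, 64, 128)):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, widths[0], 3, padding=1)   # convolution module
        stages, prev = [], widths[0]
        for w in widths:
            stages += [residual_module(prev, w),                # residual module
                       nn.MaxPool3d(2)]                         # down-sampling module
            prev = w
        self.stages = nn.Sequential(*stages)

    def forward(self, x):
        return self.stages(self.conv(x))                        # branch features

upper = Branch(in_ch=2)   # fused T1 + T1_IR (channel concatenation)
lower = Branch(in_ch=1)   # T2 image
```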
For details about the feature extraction process of the upper-branch and lower-branch three-dimensional residual structures, refer to the foregoing embodiments; details are not repeated here.
Optionally, the structure of the classification network module may also be set according to the requirements of the practical application; for example, it may specifically include multiple convolutional layers, multiple residual modules, multiple deconvolutional layers, and multiple batch normalization and activation layers. For details, refer to the foregoing embodiments; details are not repeated here.
B3. The network device classifies the fused features, to obtain type prediction values.
For example, the network device may classify the fused features by using the classification network module in the multi-branch fully convolutional network, to obtain the type prediction values.
B4. The network device determines the type ground-truth values of the fused features according to the annotations of the modality images in the second sample image group.
The modality images may be annotated by medical personnel according to a gold standard; the so-called gold standard is a method used by the clinical medicine community for diagnosing diseases.
B5. The network device converges the multi-branch fully convolutional network according to the type prediction values and type ground-truth values by using a multi-classification loss function, to obtain the trained multi-branch fully convolutional network.
Considering that a common loss function such as cross entropy tends to miss white-matter-lesion regions, a multi-classification loss function may be used for the convergence. The specific parameters of the multi-classification loss function may be set according to the requirements of the practical application; for example, if there are N classes of brain regions to be segmented (for example, white-matter-lesion regions and grey-matter regions need to be segmented), the following formula may be used as the multi-classification loss function L(j):
Here i indexes voxels, j and k index voxel classes, p is the type prediction value, and g is the type ground-truth value (for example, the gold standard provided by a doctor). For example, p_ij denotes the type prediction value of voxel i for class j, and g_ij denotes the type ground-truth value of voxel i for class j; p_ik denotes the type prediction value of voxel i for class k, and g_ik denotes the type ground-truth value of voxel i for class k.
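A minimal reconstruction of the formula, assuming a multi-class soft-Dice form consistent with the symbol definitions above (this exact form is an assumption, not necessarily the patent's verbatim formula):

$$
L(j) = 1 - \frac{2\sum_{i} p_{ij}\, g_{ij}}{\sum_{i} p_{ij}^{2} + \sum_{i} g_{ij}^{2}}, \qquad L = \sum_{k=1}^{N} L(k)
$$

A Dice-style loss of this kind scores each of the N classes by its own overlap, so a small white-matter-lesion region contributes as much to the loss as the large tissue classes, which addresses the missed-detection weakness of cross entropy described above.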
Optionally, the trained multi-branch fully convolutional network may also be verified and adjusted with the validation data set, to improve the accuracy of the segmentation model.
In addition, it should be noted that, to further improve the expressive power of the features, each residual module in the three-dimensional fully convolutional network and the multi-branch fully convolutional network (for example, each residual module in the three-dimensional fully convolutional network; the first, second, and third upper-branch residual modules in the upper-branch three-dimensional residual structure; the first, second, and third lower-branch residual modules in the lower-branch three-dimensional residual structure; and each residual module in the classification network module) may use, besides the residual unit structure of 3D U-Net, the residual module structure shown in FIG. 8. As shown in FIG. 8, the residual module may include two branches: one branch may successively include a batch normalization layer (BN), a convolutional layer with a 3×3×3 kernel, a batch normalization and activation layer (BN, ReLU), another convolutional layer with a 3×3×3 kernel, and another batch normalization and activation layer; the other branch may include a convolutional layer with a 1×1×1 kernel. After the two branches separately process the input data of the residual module, the processing results of the two branches are fused, and yet another batch normalization and activation layer processes the fusion result, to produce the final output of the residual module.
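A minimal PyTorch sketch of the two-branch residual module of FIG. 8. The channel counts and fusion by addition are assumptions; branch one is BN, 3×3×3 conv, BN+ReLU, 3×3×3 conv, BN+ReLU, and branch two is a 1×1×1 conv, with a final BN+ReLU after fusion.

```python
import torch
import torch.nn as nn

class TwoBranchResidual(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.BatchNorm3d(in_ch),                     # BN
            nn.Conv3d(in_ch, out_ch, 3, padding=1),    # 3x3x3 convolution
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1),   # another 3x3x3 convolution
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        )
        self.branch2 = nn.Conv3d(in_ch, out_ch, 1)     # 1x1x1 shortcut branch
        self.out = nn.Sequential(nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        # fuse the two branch results, then apply the final BN + activation
        return self.out(self.branch1(x) + self.branch2(x))
```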
(2) The trained brain image segmentation model can be used to segment a to-be-segmented image group, to obtain the needed segmentation result; for details, refer to FIG. 9.
As shown in FIG. 9 and FIG. 10, a brain image segmentation method may proceed as follows:
201. The network device obtains a to-be-segmented image group, where the to-be-segmented image group may include multiple modality images, such as a T1 image, a T1_IR image, and a T2 image.
For example, a medical image acquisition device, such as a CT scanner or a magnetic resonance imager, may acquire images of the brain of a person requiring brain image detection, and the acquired multi-modality images of that brain, such as the T1 image, the T1_IR image, and the T2 image, may then be provided to the network device as a to-be-segmented image group.
202. The network device fuses the T1 image and the T1_IR image, to obtain a fused image.
There may be multiple specific fusion manners; for example, the network device may fuse the T1 image and the T1_IR image by feature addition or channel concatenation, to obtain the fused image.
203. The network device predicts the types of the voxels in the fused image by using the trained three-dimensional fully convolutional network, and then performs step 204.
For example, the network device performs feature extraction on the fused image by using the three-dimensional residual network in the trained three-dimensional fully convolutional network to obtain feature information, and then predicts the type of each voxel in the fused image according to the obtained feature information by using the classification network in the trained three-dimensional fully convolutional network. Taking the structure of the trained three-dimensional fully convolutional network shown in FIG. 9 as an example, the process may specifically be as follows:
(1) Encoder part (three-dimensional residual network):
As shown in FIG. 9, after fusing the T1 image and the T1_IR image, the network device may apply convolution to the fused image with convolutional layer 32, and use the convolution result as the input of residual module 32 for feature extraction. The feature map output by residual module 32 is down-sampled by a down-sampling module and then serves as the input of residual module 64 for feature extraction at another scale. Similarly, the feature map output by residual module 64 is down-sampled by another down-sampling module and serves as the input of residual module 128; after processing by residual module 128, yet another down-sampling module down-samples the feature map output by residual module 128, and so on. After feature extraction by residual modules of multiple sizes and the down-sampling operations, feature information is obtained, where the feature information includes the final output value of the three-dimensional residual network and feature maps at multiple scales, such as the feature maps produced by residual module 32, residual module 64, and residual module 128 in the residual network.
It should be noted that, in the above encoding process, the down-sampling operation may be implemented in multiple manners; for example, a max-pooling layer may be used, or other manners such as the structure shown in FIG. 6 may be used (sketched below). For details, refer to the foregoing embodiments; details are not repeated here.
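A minimal sketch of the alternative down-sampling manner mentioned above (and described again under step 206 below): two parallel convolutional layers with the same stride but different convolution kernels, whose results are fused and then batch-normalized. The kernel sizes, stride of 2, and fusion by addition are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConvDownsample(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # same stride, different kernels
        self.conv_a = nn.Conv3d(ch, ch, kernel_size=3, stride=2, padding=1)
        self.conv_b = nn.Conv3d(ch, ch, kernel_size=1, stride=2)
        self.bn = nn.BatchNorm3d(ch)

    def forward(self, x):
        # both paths halve the spatial size; fuse, then batch-normalize
        return self.bn(self.conv_a(x) + self.conv_b(x))
```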
(2) Decoder part (classification network):
As shown in FIG. 9, after obtaining the feature information, the network device may process the final output value of the three-dimensional residual network with residual module 256 and import the processing result into deconvolutional layer 128 for deconvolution. The deconvolution result is then fused with the feature map produced by residual module 128 in the residual network; after the fusion result is batch-normalized and a non-linear factor is introduced, it serves as the input of residual module 128 of the classification network and is processed in the same manner. After the output of residual module 128 is deconvolved by deconvolutional layer 64, the deconvolution result can be fused with the feature map produced by residual module 64 of the residual network. Similarly, after the fusion result is batch-normalized and a non-linear factor is introduced by a "batch normalization and activation layer", the output of that layer can serve as the input of residual module 64 of the classification network. After processing by residual module 64 and deconvolutional layer 32, the deconvolution result is fused with the feature map produced by residual module 32 of the residual network. Finally, after the fusion result passes through a "batch normalization and activation layer", residual module 32, and convolutional layer 2, the type of each voxel in the fused image is obtained; the corresponding type probabilities may also be obtained. For example, the type of voxel K1 may be "intracranial region" with a type probability of 80%, the type of voxel K2 may be "intracranial region" with a type probability of 60%, the type of voxel K3 may be "background region" with a type probability of 95%, and so on.
204. The network device screens out the voxels that do not belong to the intracranial region according to the predicted voxel types, to obtain a background voxel set, and then performs step 205.
For example, if in step 203 the type of voxel K3 is obtained as "background region" with a type probability of 95%, it can be determined that voxel K3 does not belong to the intracranial region, and voxel K3 can then be added to the background voxel set, and so on.
205. The network device shields the background voxel set in the fused image, to obtain a stripped-skull mask, and then performs step 209.
For example, the network device may set the values of the voxels in the intracranial region to 1 and the values of the voxels in the background region to 0, to obtain the stripped-skull mask; see FIG. 9.
For example, still taking voxels K1 and K2 as voxels of the intracranial region and voxel K3 as a voxel of the background region, the values of voxels K1 and K2 in the fused image may be set to 1 and the value of voxel K3 to 0; applying similar operations to the other voxels yields the stripped-skull mask (sketched below). In this way, when the mask is subsequently fused with the initial segmentation result (for example, by multiplication), the values of the intracranial-region image in the initial segmentation result remain unchanged, while the image values outside the intracranial region all become 0; that is, false positives in the initial segmentation result can be eliminated.
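A minimal sketch of building the stripped-skull mask from the predicted voxel types: intracranial voxels become 1 and background voxels become 0. The class index chosen for "background" is an assumption.

```python
import torch

BACKGROUND = 0
pred_types = torch.tensor([[0, 1], [1, 2]])   # predicted class per voxel
mask = (pred_types != BACKGROUND).float()     # 1 = intracranial, 0 = background
# tensor([[0., 1.], [1., 1.]])
```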
206. The network device performs feature extraction on the fused image (i.e., the fusion of the T1 image and the T1_IR image) by using the upper-branch three-dimensional residual structure in the trained multi-branch fully convolutional network to obtain upper-branch features, performs feature extraction on the T2 image by using the lower-branch three-dimensional residual structure to obtain lower-branch features, and then performs step 207. For example, this may specifically be as follows:
(1) Extraction of the upper-branch features.
As shown in FIG. 9, after the T1 image and the T1_IR image are fused, the network device may first apply convolution to the fused image with the upper-branch convolution module, then encode the output of the upper-branch convolution module with the first upper-branch residual module and down-sample the encoding result with the first upper-branch down-sampling module; next, encode the output of the first upper-branch down-sampling module with the second upper-branch residual module and down-sample the encoding result with the second upper-branch down-sampling module; and finally, encode the output of the second upper-branch down-sampling module with the third upper-branch residual module and down-sample the encoding result with the third upper-branch down-sampling module, to obtain the upper-branch features.
(2) Extraction of the lower-branch features.
As shown in FIG. 9, similarly to the extraction of the upper-branch features, the network device may first apply convolution to the T2 image with the lower-branch convolution module, then encode the output of the lower-branch convolution module with the first lower-branch residual module and down-sample the encoding result with the first lower-branch down-sampling module; next, encode the output of the first lower-branch down-sampling module with the second lower-branch residual module and down-sample the encoding result with the second lower-branch down-sampling module; and finally, encode the output of the second lower-branch down-sampling module with the third lower-branch residual module and down-sample the encoding result with the third lower-branch down-sampling module, to obtain the lower-branch features.
It should be noted that, in the extraction of the upper-branch and lower-branch features, the step of "down-sampling the encoding result" may be implemented in multiple manners. For example, a max-pooling layer may be used, or other manners may be used: for instance, two parallel convolutional layers with the same stride but different convolution kernels may convolve the encoding result, and the convolution results may then be batch-normalized by a batch normalization layer, to achieve the down-sampling. For details, refer to the foregoing embodiments; details are not repeated here.
It should be noted that, in this embodiment, steps 203 and 206 may be performed in any order; in addition, within step 206, the extraction of the upper-branch features and the extraction of the lower-branch features may also be performed in any order. Details are not repeated here.
207. The network device fuses the upper-branch features and the lower-branch features by using the classification network module in the trained multi-branch fully convolutional network, and then performs step 208.
For example, as shown in FIG. 9, the classification network module may fuse the upper-branch features and the lower-branch features by voxel-wise addition or multiplication.
208. The network device segments the intracranial tissue according to the fused features by using the classification network module in the trained multi-branch fully convolutional network, to obtain an initial segmentation result, and then performs step 209.
For example, the network device may classify the fused features with the classification network module and then segment the intracranial tissue based on the classification result, to obtain the initial segmentation result.
For example, if in the fused features the type of voxel S1 is "white matter lesion" with a type probability of 90%, the type of voxel S2 is "grey matter" with a type probability of 87%, the type of voxel S3 is "white matter" with a type probability of 79%, the type of voxel S4 is "white matter lesion" with a type probability of 88%, and the type of voxel S5 is "grey matter" with a type probability of 75%, then the voxels S1 and S4 of the "white matter lesion" type may be added to a "white-matter-lesion voxel set", the voxel S3 of the "white matter" type to a "white-matter voxel set", and the voxels S2 and S5 of the "grey matter" type to a "grey-matter voxel set". Region segmentation is then performed based on each of these voxel sets, yielding the corresponding "white-matter-lesion region", "white-matter region", and "grey-matter region", as sketched below.
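A minimal sketch of this grouping: take the most probable class per voxel and collect the voxels of each class into their own region. The class names, their indices, and the shapes are assumptions.

```python
import torch

CLASSES = ["background", "white matter lesion", "white matter", "grey matter"]
probs = torch.rand(4, 2, 2, 2)             # (num_classes, D, H, W) class scores
initial_seg = probs.argmax(dim=0)          # per-voxel class label

regions = {name: (initial_seg == idx)      # boolean region mask per class
           for idx, name in enumerate(CLASSES)}
```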
Here, white matter refers to the white matter of the brain, and grey matter to the grey matter of the brain (the cortex). As is well known, the brain is formed of tens of billions of neurons, and a neuron consists of a cell body and nerve fibers; the cell body contains the nucleus (dark in color), while the nerve fibers contain cytoplasm (light in color). Because the cell bodies gather at the surface layer of the brain, which therefore appears dark, this part is generally called the grey matter; the nerve fibers gather inside the brain, which appears light, and this part is generally called the white matter.
209. The network device fuses the mask obtained in step 205 and the initial segmentation result obtained in step 208, to obtain the segmentation result corresponding to the to-be-segmented image group.
For example, the network device may multiply the mask voxel-wise with the initial segmentation result (element-wise multiplication), to obtain the segmentation result corresponding to the to-be-segmented image group; specifically, this may be as follows:
The network device obtains the value of each voxel on the mask and the value of each voxel in the initial segmentation result respectively, builds a first matrix from the voxel values on the mask and a second matrix from the voxel values in the initial segmentation result, and then performs an element-wise product of the elements in the first matrix and the elements in the second matrix, to obtain the segmentation result corresponding to the to-be-segmented image group; see FIG. 11. The whole inference flow of steps 201 to 209 is sketched below.
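A minimal end-to-end sketch of steps 201 to 209 under the assumptions of the earlier snippets: `skull_fcn` stands in for the trained three-dimensional fully convolutional network, and `upper`, `lower`, and `classifier` for the two branches and the classification network module of the multi-branch network. None of these placeholders is the patent's exact architecture.

```python
import torch

def segment(t1, t1_ir, t2, skull_fcn, upper, lower, classifier):
    fused = torch.cat([t1, t1_ir], dim=1)             # 202: fuse T1 and T1_IR
    skull_logits = skull_fcn(fused)                   # 203: predict voxel types
    mask = (skull_logits.argmax(dim=1) != 0).float()  # 204-205: stripped-skull mask
    feats = upper(fused) + lower(t2)                  # 206-207: extract and fuse features
    initial = classifier(feats).argmax(dim=1)         # 208: initial segmentation
    return mask * initial                             # 209: element-wise product
```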
After the segmentation result is obtained, it may be provided to a corresponding user, such as medical personnel, for further operations. For example, the medical personnel may locate the "white-matter-lesion region" based on the segmentation result, to determine whether the segmentation is accurate, or to judge the patient's condition, such as whether the patient suffers from Parkinson's disease or multiple sclerosis and to what degree.
It can be seen from the above that, after obtaining the to-be-segmented image group, this embodiment can, on one hand, perform skull stripping according to the T1 image and T1_IR image in the to-be-segmented image group to obtain a stripped-skull mask, and on the other hand, perform feature extraction and fusion on the T1 image, T1_IR image, and T2 image separately and segment the intracranial tissue according to the fused features; the initial segmentation result obtained by the segmentation is then fused with the previously obtained mask, to obtain the final segmentation result. Because this scheme first extracts features from each modality image and then fuses them when performing the initial segmentation, the information contained in each modality can be retained as far as possible, improving the expressive power of the extracted features; moreover, the mask obtained by skull separation can be used to eliminate false positives in the initial segmentation result. Therefore, compared with a scheme that directly fuses the modality images, strips the skull, and then segments based on the stripping result, this scheme can improve both the feature expression capability and the segmentation accuracy.
Furthermore, because this scheme uses a three-dimensional fully convolutional network for skull separation, compared with schemes that use software tools for skull stripping, it does not require setting a large number of software parameters or performing complicated parameter tuning. It is therefore not only simpler to implement, but also avoids unsatisfactory stripping results caused by improper settings, which helps improve the segmentation accuracy.
To better implement the above method, an embodiment of the present invention further provides a brain image segmentation apparatus, which may be integrated in a network device; the network device may be a server, or a device such as a terminal.
For example, as shown in FIG. 12, the brain image segmentation apparatus may include an acquiring unit 301, a stripping unit 302, an extraction unit 303, a segmentation unit 304, and a fusion unit 305, as follows:
(1) Acquiring unit 301.
The acquiring unit 301 is configured to obtain a to-be-segmented image group, where the to-be-segmented image group includes multiple modality images of a brain.
For example, the acquiring unit 301 may be specifically configured to receive the to-be-segmented image group sent by a medical image acquisition device, such as a CT scanner or a magnetic resonance imager. Each to-be-segmented image group may be obtained by a medical image acquisition device imaging the same object, for example the brain of the same person.
(2) Stripping unit 302.
The stripping unit 302 is configured to perform skull stripping according to the multi-modality images, to obtain a stripped-skull mask.
Optionally, the skull separation may use all of the modality images, or only some of them. For example, taking a case in which the multi-modality images include a first-modality image, a second-modality image, and a third-modality image:
The stripping unit 302 may be specifically configured to perform skull stripping according to the first-modality image and the second-modality image, to obtain the stripped-skull mask.
For example, in some embodiments, the stripping unit 302 may be specifically configured to fuse the first-modality image and the second-modality image to obtain a fused image, predict the types of the voxels in the fused image by using the trained three-dimensional fully convolutional network, screen out the voxels that do not belong to the intracranial region according to the predicted voxel types to obtain a background voxel set, and shield the background voxel set in the fused image, to obtain the stripped-skull mask.
For example, the stripping unit 302 may be specifically configured to set the values of the voxels in the intracranial region to 1 and the values of the voxels in the background region to 0, to obtain the stripped-skull mask.
(3) Extraction unit 303.
The extraction unit 303 is configured to perform feature extraction on the multi-modality images separately, and fuse the extracted features.
For example, still taking a case in which the multi-modality images include a first-modality image, a second-modality image, and a third-modality image, the extraction unit 303 may be specifically configured to perform feature extraction on the first-modality image, the second-modality image, and the third-modality image separately and fuse the extracted features; for example, specifically as follows:
fuse the first-modality image and the second-modality image to obtain a fused image, separately perform feature extraction on the fused image and the third-modality image by using the trained multi-branch fully convolutional network, and fuse the extracted features.
It should be noted that, optionally, if the stripping unit 302 has fused the first-modality image and the second-modality image and saved the result, the extraction unit 303 may read the saved fused image directly, without fusing the first-modality image and the second-modality image again. That is:
The extraction unit 303 may be specifically configured to obtain the fused image, separately perform feature extraction on the fused image and the third-modality image by using the trained multi-branch fully convolutional network, and fuse the extracted features.
The structure of the trained multi-branch fully convolutional network may be set according to the requirements of the practical application. For example, the trained multi-branch fully convolutional network may include an upper-branch three-dimensional residual structure, a lower-branch three-dimensional residual structure, and a classification network module; in that case:
The extraction unit 303 may be specifically configured to perform feature extraction on the fused image by using the upper-branch three-dimensional residual structure to obtain upper-branch features; perform feature extraction on the third-modality image by using the lower-branch three-dimensional residual structure to obtain lower-branch features; and fuse the upper-branch features and the lower-branch features by the classification network module.
The upper-branch three-dimensional residual structure may include an upper-branch convolution module, a first upper-branch residual module, a first upper-branch down-sampling module, a second upper-branch residual module, a second upper-branch down-sampling module, a third upper-branch residual module, and a third upper-branch down-sampling module; in that case:
The extraction unit 303 may be specifically configured to apply convolution to the fused image with the upper-branch convolution module; encode the output of the upper-branch convolution module with the first upper-branch residual module, and down-sample the encoding result with the first upper-branch down-sampling module; encode the output of the first upper-branch down-sampling module with the second upper-branch residual module, and down-sample the encoding result with the second upper-branch down-sampling module; and encode the output of the second upper-branch down-sampling module with the third upper-branch residual module, and down-sample the encoding result with the third upper-branch down-sampling module, to obtain the upper-branch features.
The lower-branch three-dimensional residual structure includes a lower-branch convolution module, a first lower-branch residual module, a first lower-branch down-sampling module, a second lower-branch residual module, a second lower-branch down-sampling module, a third lower-branch residual module, and a third lower-branch down-sampling module; in that case:
The extraction unit 303 may be specifically configured to apply convolution to the third-modality image with the lower-branch convolution module; encode the output of the lower-branch convolution module with the first lower-branch residual module, and down-sample the encoding result with the first lower-branch down-sampling module; encode the output of the first lower-branch down-sampling module with the second lower-branch residual module, and down-sample the encoding result with the second lower-branch down-sampling module; and encode the output of the second lower-branch down-sampling module with the third lower-branch residual module, and down-sample the encoding result with the third lower-branch down-sampling module, to obtain the lower-branch features.
The network parameters of the upper-branch convolution module, the first, second, and third upper-branch residual modules, the lower-branch convolution module, and the first, second, and third lower-branch residual modules may be set according to the requirements of the practical application. For details, refer to the foregoing embodiments; details are not repeated here.
In addition, it should be noted that, in the embodiments of the present invention, the down-sampling operation may be implemented in multiple manners; for example, a max-pooling layer may be used. Optionally, other manners may also be used: for instance, two parallel convolutional layers with the same stride but different convolution kernels may convolve the data to be down-sampled, and the convolution results may then be batch-normalized by a batch normalization layer, to achieve the down-sampling. For details, refer to the foregoing method embodiments; details are not repeated here.
(4) Segmentation unit 304.
The segmentation unit 304 is configured to segment the intracranial tissue according to the fused features, to obtain an initial segmentation result.
For example, the segmentation unit 304 may be specifically configured to classify the fused features by using the classification network module, and segment the intracranial tissue based on the classification result, to obtain the initial segmentation result.
(5) Fusion unit 305.
The fusion unit 305 is configured to fuse the mask and the initial segmentation result, to obtain the segmentation result corresponding to the to-be-segmented image group.
For example, the fusion unit 305 may be specifically configured to obtain the value of each voxel on the mask and the value of each voxel in the initial segmentation result respectively, build a first matrix from the voxel values on the mask and a second matrix from the voxel values in the initial segmentation result, and perform an element-wise product of the elements in the first matrix and the elements in the second matrix, to obtain the segmentation result corresponding to the to-be-segmented image group.
Optionally, the trained three-dimensional fully convolutional network may be obtained by training on multiple first sample image groups. It may be trained by another device and then provided to the medical image segmentation apparatus, or it may be trained by the medical image segmentation apparatus itself. That is, as shown in FIG. 13, the brain image segmentation apparatus may further include a first acquisition unit 306 and a first training unit 307, as follows:
The first acquisition unit 306 may be configured to collect multiple first sample image groups, where each first sample image group includes image samples such as a first-modality image sample for tissue-region segmentation and a second-modality image sample for intracranial-region identification.
The first training unit 307 may be configured to fuse the first-modality image sample and the second-modality image sample to obtain a fused image sample; predict the types of the voxels in the fused image by using a preset three-dimensional fully convolutional network to obtain prediction values; obtain the ground-truth voxel types in the fused image; and converge the three-dimensional fully convolutional network according to the prediction values and ground-truth values by using a cross-entropy loss function, to obtain the trained three-dimensional fully convolutional network.
Similarly, the trained multi-branch fully convolutional network may be obtained by training on multiple second sample image groups. It may be trained by another device and then provided to the medical image segmentation apparatus, or it may be trained by the medical image segmentation apparatus itself. That is, as shown in FIG. 13, the brain image segmentation apparatus may further include a second acquisition unit 308 and a second training unit 309, as follows:
The second acquisition unit 308 may be configured to collect multiple second sample image groups, where each second sample image group includes a first-modality image sample for tissue-region segmentation, a second-modality image sample for intracranial-region identification, and a third-modality image sample for white-matter-lesion region identification.
The second training unit 309 may be configured to fuse the first-modality image sample and the second-modality image sample to obtain a fused image sample; separately perform feature extraction on the fused image sample and the third-modality image sample by using a preset multi-branch fully convolutional network; fuse the extracted features and classify the fused features to obtain type prediction values; obtain the type ground-truth values of the fused features; and converge the multi-branch fully convolutional network according to the type prediction values and type ground-truth values by using a multi-classification loss function, to obtain the trained multi-branch fully convolutional network.
In specific implementation, the above units may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities. For the specific implementation of each unit, refer to the foregoing method embodiments; details are not repeated here.
It can be seen from the above that, after the to-be-segmented image group is obtained, the stripping unit 302 of this embodiment can perform skull stripping according to the multi-modality images in the to-be-segmented image group to obtain a stripped-skull mask; meanwhile, the extraction unit 303 can perform feature extraction and fusion on the multi-modality images separately, the segmentation unit 304 can segment the intracranial tissue according to the fused features, and the fusion unit 305 can then fuse the initial segmentation result obtained by the segmentation with the previously obtained mask, to obtain the final segmentation result. Because this scheme first extracts features from each modality image and then fuses them when performing the initial segmentation, the information contained in each modality can be retained as far as possible, improving the expressive power of the extracted features; moreover, the mask obtained by skull separation can be used to eliminate false positives in the initial segmentation result. Therefore, compared with a scheme that directly fuses the modality images, strips the skull, and then segments based on the stripping result, this scheme can improve both the feature expression capability and the segmentation accuracy.
An embodiment of the present invention further provides a network device. FIG. 14 is a schematic structural diagram of the network device involved in this embodiment of the present invention. Specifically:
The network device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. A person skilled in the art will understand that the network device structure shown in FIG. 14 does not constitute a limitation on the network device; the network device may include more or fewer components than shown, or combine certain components, or use a different component arrangement. Specifically:
The processor 401 is the control center of the network device, and connects the various parts of the whole network device through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, it performs the various functions of the network device and processes data, thereby monitoring the network device as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may alternatively not be integrated into the processor 401.
The memory 402 may be configured to store software programs and modules. The processor 401 runs the software programs and modules stored in the memory 402 to perform various functional applications and data processing. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created during use of the network device, and the like. In addition, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 402 may further include a memory controller, to provide the processor 401 with access to the memory 402.
The network device further includes a power supply 403 that supplies power to the components. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 403 may further include any component such as one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
The network device may further include an input unit 404, which may be configured to receive input digit or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the network device may further include a display unit and the like; details are not repeated here. Specifically, in this embodiment, the processor 401 in the network device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402, to implement the following functions:
obtain a to-be-segmented image group, where the to-be-segmented image group includes multiple modality images of a brain; perform skull stripping according to the multi-modality images to obtain a stripped-skull mask; perform feature extraction on the multi-modality images separately and fuse the extracted features; segment the intracranial tissue according to the fused features to obtain an initial segmentation result; and fuse the mask and the initial segmentation result, to obtain the segmentation result corresponding to the to-be-segmented image group.
For example, the first-modality image and the second-modality image may specifically be fused to obtain a fused image. Then, on one hand, the types of the voxels in the fused image are predicted by using the trained three-dimensional fully convolutional network, the voxels that do not belong to the intracranial region are screened out according to the predicted voxel types to obtain a background voxel set, and the background voxel set is shielded in the fused image, to obtain the stripped-skull mask. On the other hand, feature extraction is performed on the fused image by using the upper-branch three-dimensional residual structure of the trained multi-branch fully convolutional network to obtain upper-branch features, and feature extraction is performed on the third-modality image by using the lower-branch three-dimensional residual structure of the trained multi-branch fully convolutional network to obtain lower-branch features; the upper-branch features and the lower-branch features are then fused by the classification network module of the trained multi-branch fully convolutional network, the fused features are classified by the classification network module, and the intracranial tissue is segmented based on the classification result, to obtain the initial segmentation result.
Optionally, the trained three-dimensional fully convolutional network may be obtained by training on multiple first sample image groups, and the trained multi-branch fully convolutional network may be obtained by training on multiple second sample image groups. Specifically, they may be trained by another device and then provided to the medical image segmentation apparatus, or they may be trained by the medical image segmentation apparatus itself; the processor 401 may also run the application programs stored in the memory 402 to implement the following functions:
collect multiple first sample image groups, where each first sample image group includes image samples such as a first-modality image sample for tissue-region segmentation and a second-modality image sample for intracranial-region identification; fuse the first-modality image sample and the second-modality image sample to obtain a fused image sample; predict the types of the voxels in the fused image by using a preset three-dimensional fully convolutional network to obtain prediction values; obtain the ground-truth voxel types in the fused image; and converge the three-dimensional fully convolutional network according to the prediction values and ground-truth values by using a cross-entropy loss function, to obtain the trained three-dimensional fully convolutional network;
and collect multiple second sample image groups, where each second sample image group includes a first-modality image sample for tissue-region segmentation, a second-modality image sample for intracranial-region identification, and a third-modality image sample for white-matter-lesion region identification; fuse the first-modality image sample and the second-modality image sample to obtain a fused image sample; separately perform feature extraction on the fused image sample and the third-modality image sample by using a preset multi-branch fully convolutional network; fuse the extracted features and classify the fused features to obtain type prediction values; obtain the type ground-truth values of the fused features; and converge the multi-branch fully convolutional network according to the type prediction values and type ground-truth values by using a multi-classification loss function, to obtain the trained multi-branch fully convolutional network.
For the specific implementation of each of the above operations, refer to the foregoing embodiments; details are not repeated here.
It can be seen from the above that, after obtaining the to-be-segmented image group, the network device of this embodiment can, on one hand, perform skull stripping according to the multi-modality images in the to-be-segmented image group to obtain a stripped-skull mask, and on the other hand, perform feature extraction and fusion on the multi-modality images separately and segment the intracranial tissue according to the fused features; the initial segmentation result obtained by the segmentation is then fused with the previously obtained mask, to obtain the final segmentation result. Because this scheme first extracts features from each modality image and then fuses them when performing the initial segmentation, the information contained in each modality can be retained as far as possible, improving the expressive power of the extracted features; moreover, the mask obtained by skull separation can be used to eliminate false positives in the initial segmentation result. Therefore, compared with a scheme that directly fuses the modality images, strips the skull, and then segments based on the stripping result, this scheme can improve both the feature expression capability and the segmentation accuracy.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by instructions, or by instructions controlling the relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps in any of the brain image segmentation methods provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
Obtaining an image group to be segmented, the image group including multiple modality images of a brain; performing skull stripping according to the multiple modality images to obtain a skull-stripping mask; performing feature extraction on the multiple modality images separately, and fusing the extracted features; segmenting the intracranial tissue according to the fused feature to obtain an initial segmentation result; and fusing the mask with the initial segmentation result to obtain the segmentation result corresponding to the image group to be segmented.
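Two of these steps reduce to simple array operations: the mask keeps intracranial voxels and zeroes out the background voxel set, and the final fusion is a voxel-wise (dot-product) combination of the mask with the initial segmentation, which is also how claim 10 below formulates it. A minimal NumPy sketch, where small 2D arrays stand in for 3D volumes and the function names are illustrative:

    import numpy as np

    BACKGROUND = 0  # illustrative label for voxels outside the intracranial region

    def make_mask(predicted_types):
        # Screen out voxels predicted as background: the skull-stripping mask
        # is 1 for intracranial voxels and 0 for the background voxel set.
        return (np.asarray(predicted_types) != BACKGROUND).astype(np.int64)

    def fuse(mask, initial_seg):
        # Element-wise product of the first matrix (mask values) and the
        # second matrix (initial segmentation values): false positives lying
        # outside the intracranial region are zeroed out.
        return np.asarray(mask) * np.asarray(initial_seg)

    predicted_types = np.array([[1, 1, 0], [1, 0, 0]])  # per-voxel type prediction
    initial_seg = np.array([[2, 0, 2], [1, 1, 2]])      # initial tissue labels
    print(fuse(make_mask(predicted_types), initial_seg))
    # [[2 0 0]
    #  [1 0 0]]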
For example, the first-modality image and the second-modality image may be fused to obtain a fused image. Then, on the one hand, the type of each voxel in the fused image is predicted using the trained 3D fully convolutional network, voxels that do not belong to the intracranial region are screened out according to the predicted voxel types to obtain a background voxel set, and the background voxel set is masked out in the fused image to obtain the skull-stripping mask. On the other hand, feature extraction is performed on the fused image using the upper-branch 3D residual structure of the trained multi-branch fully convolutional network to obtain an upper-branch feature, and on the third-modality image using the lower-branch 3D residual structure to obtain a lower-branch feature; the classification network module of the trained multi-branch fully convolutional network then fuses the upper-branch feature with the lower-branch feature and classifies the fused feature, and the intracranial tissue is segmented based on the classification result to obtain the initial segmentation result.
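For orientation, the following sketch assembles one such branch in PyTorch: a convolution module followed by three residual-module/downsampling-module stages, with each downsampling implemented as two parallel convolutions of equal stride but different kernel sizes followed by batch normalization, mirroring the branch layout recited in claims 6 to 8. All class names and channel sizes are illustrative assumptions, not the patented design.

    import torch
    import torch.nn as nn

    class DualKernelDownsample(nn.Module):
        # Downsampling as recited in claim 8: two parallel convolutions with
        # the same stride but different kernels, then batch normalization.
        def __init__(self, channels):
            super().__init__()
            self.conv_a = nn.Conv3d(channels, channels // 2, kernel_size=3,
                                    stride=2, padding=1)
            self.conv_b = nn.Conv3d(channels, channels // 2, kernel_size=1,
                                    stride=2)
            self.bn = nn.BatchNorm3d(channels)

        def forward(self, x):
            # Both convolutions halve the spatial size; their outputs are
            # concatenated along the channel axis and batch-normalized.
            return self.bn(torch.cat([self.conv_a(x), self.conv_b(x)], dim=1))

    class ResidualModule(nn.Module):
        # Illustrative 3D residual block standing in for the branch residual modules.
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return torch.relu(x + self.conv(x))  # identity shortcut

    class BranchEncoder(nn.Module):
        # Convolution module + three (residual module, downsampling module)
        # stages, as in the upper/lower-branch 3D residual structures.
        def __init__(self, in_channels, channels=16):
            super().__init__()
            stages = [nn.Conv3d(in_channels, channels, kernel_size=3, padding=1)]
            for _ in range(3):
                stages += [ResidualModule(channels), DualKernelDownsample(channels)]
            self.stages = nn.Sequential(*stages)

        def forward(self, x):
            return self.stages(x)  # branch feature at 1/8 of the input resolution

    feature = BranchEncoder(in_channels=2)(torch.randn(1, 2, 32, 32, 32))
    print(feature.shape)  # torch.Size([1, 16, 4, 4, 4])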
Optionally, the trained 3D fully convolutional network may be obtained by training with multiple first sample image groups, and the trained multi-branch fully convolutional network may be obtained by training with multiple second sample image groups. The training may be performed by another device that then provides the trained networks to the medical image segmentation apparatus, or the medical image segmentation apparatus may perform the training itself; that is, the instructions may also perform the following steps:
Acquiring multiple first sample image groups, where each first sample image group includes image samples such as a first-modality image sample used for tissue region segmentation and a second-modality image sample used for intracranial region identification; fusing the first-modality image sample with the second-modality image sample to obtain a fused image sample; predicting the type of each voxel in the fused image sample using a preset 3D fully convolutional network to obtain predicted values; acquiring the true values of the voxel types in the fused image sample; and converging the 3D fully convolutional network according to the predicted values and the true values by using a cross-entropy loss function, to obtain the trained 3D fully convolutional network.
And acquiring multiple second sample image groups, where each second sample image group includes a first-modality image sample used for tissue region segmentation, a second-modality image sample used for intracranial region identification, and a third-modality image sample used for white matter lesion region identification; fusing the first-modality image sample with the second-modality image sample to obtain a fused image sample; performing feature extraction on the fused image sample and the third-modality image sample separately by using a preset multi-branch fully convolutional network; fusing the extracted features, and classifying the fused feature to obtain type predicted values; acquiring the type true values of the fused feature; and converging the multi-branch fully convolutional network according to the type predicted values and the type true values by using a multi-class loss function, to obtain the trained multi-branch fully convolutional network.
The specific implementation of each of the above operations can be found in the foregoing embodiments and is not repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Because the instructions stored in the storage medium can perform the steps in any of the brain image segmentation methods provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any of those methods; for details, refer to the foregoing embodiments, which are not repeated here.
The brain image segmentation method, apparatus, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A brain image segmentation method, comprising:
obtaining an image group to be segmented, the image group to be segmented comprising multiple modality images of a brain;
performing skull stripping according to the multiple modality images to obtain a skull-stripping mask;
performing feature extraction on the multiple modality images separately, and fusing the extracted features;
segmenting intracranial tissue according to the fused feature to obtain an initial segmentation result; and
fusing the mask with the initial segmentation result to obtain a segmentation result corresponding to the image group to be segmented.
2. The method according to claim 1, wherein the multiple modality images comprise a first-modality image used for tissue region segmentation, a second-modality image used for intracranial region identification, and a third-modality image used for white matter lesion region identification;
performing skull stripping according to the multiple modality images to obtain the skull-stripping mask comprises: performing skull stripping according to the first-modality image and the second-modality image to obtain the skull-stripping mask; and
performing feature extraction on the multiple modality images separately comprises: performing feature extraction on the first-modality image, the second-modality image, and the third-modality image separately.
3. The method according to claim 2, wherein performing skull stripping according to the first-modality image and the second-modality image to obtain the skull-stripping mask comprises:
fusing the first-modality image with the second-modality image to obtain a fused image;
predicting the type of each voxel in the fused image using a trained three-dimensional (3D) fully convolutional network;
screening out voxels that do not belong to the intracranial region according to the predicted voxel types to obtain a background voxel set; and
masking out the background voxel set in the fused image to obtain the skull-stripping mask.
4. The method according to claim 2, wherein performing feature extraction on the first-modality image, the second-modality image, and the third-modality image separately, and fusing the extracted features, comprises:
fusing the first-modality image with the second-modality image to obtain a fused image; and
performing feature extraction on the fused image and the third-modality image separately using a trained multi-branch fully convolutional network, and fusing the extracted features.
5. The method according to claim 4, wherein the trained multi-branch fully convolutional network comprises an upper-branch 3D residual structure, a lower-branch 3D residual structure, and a classification network module, and performing feature extraction on the fused image and the third-modality image separately using the trained multi-branch fully convolutional network and fusing the extracted features comprises:
performing feature extraction on the fused image using the upper-branch 3D residual structure to obtain an upper-branch feature;
performing feature extraction on the third-modality image using the lower-branch 3D residual structure to obtain a lower-branch feature; and
fusing the upper-branch feature with the lower-branch feature through the classification network module.
6. The method according to claim 5, wherein the upper-branch 3D residual structure comprises an upper-branch convolution module, a first upper-branch residual module, a first upper-branch downsampling module, a second upper-branch residual module, a second upper-branch downsampling module, a third upper-branch residual module, and a third upper-branch downsampling module, and performing feature extraction on the fused image using the upper-branch 3D residual structure to obtain the upper-branch feature comprises:
performing convolution processing on the fused image using the upper-branch convolution module;
encoding the output of the upper-branch convolution module using the first upper-branch residual module, and performing a downsampling operation on the encoding result using the first upper-branch downsampling module;
encoding the output of the first upper-branch downsampling module using the second upper-branch residual module, and performing a downsampling operation on the encoding result using the second upper-branch downsampling module; and
encoding the output of the second upper-branch downsampling module using the third upper-branch residual module, and performing a downsampling operation on the encoding result using the third upper-branch downsampling module, to obtain the upper-branch feature.
7. The method according to claim 5, wherein the lower-branch 3D residual structure comprises a lower-branch convolution module, a first lower-branch residual module, a first lower-branch downsampling module, a second lower-branch residual module, a second lower-branch downsampling module, a third lower-branch residual module, and a third lower-branch downsampling module, and performing feature extraction on the third-modality image using the lower-branch 3D residual structure to obtain the lower-branch feature comprises:
performing convolution processing on the third-modality image using the lower-branch convolution module;
encoding the output of the lower-branch convolution module using the first lower-branch residual module, and performing a downsampling operation on the encoding result using the first lower-branch downsampling module;
encoding the output of the first lower-branch downsampling module using the second lower-branch residual module, and performing a downsampling operation on the encoding result using the second lower-branch downsampling module; and
encoding the output of the second lower-branch downsampling module using the third lower-branch residual module, and performing a downsampling operation on the encoding result using the third lower-branch downsampling module, to obtain the lower-branch feature.
8. The method according to claim 6 or 7, wherein performing a downsampling operation on the encoding result comprises:
performing convolution processing on the encoding result using two parallel convolutional layers that have the same stride but different convolution kernels; and
performing batch normalization on the convolution results through a batch normalization layer.
9. The method according to claim 8, wherein segmenting the intracranial tissue according to the fused feature to obtain the initial segmentation result comprises:
classifying the fused feature using the classification network module; and
segmenting the intracranial tissue based on the classification result to obtain the initial segmentation result.
10. The method according to any one of claims 1 to 7, wherein fusing the mask with the initial segmentation result to obtain the segmentation result corresponding to the image group to be segmented comprises:
obtaining the value of each voxel on the mask and the value of each voxel in the initial segmentation result, respectively;
establishing a first matrix according to the values of the voxels on the mask, and establishing a second matrix according to the values of the voxels in the initial segmentation result; and
performing a dot-product operation on the elements of the first matrix and the elements of the second matrix to obtain the segmentation result corresponding to the image group to be segmented.
11. The method according to claim 3, wherein before predicting the type of each voxel in the fused image using the trained 3D fully convolutional network, the method further comprises:
acquiring multiple first sample image groups, each first sample image group comprising a first-modality image sample used for tissue region segmentation and a second-modality image sample used for intracranial region identification;
fusing the first-modality image sample with the second-modality image sample to obtain a fused image sample;
predicting the type of each voxel in the fused image sample using a preset 3D fully convolutional network to obtain predicted values;
obtaining true values of the voxel types in the fused image sample; and
converging the 3D fully convolutional network according to the predicted values and the true values by using a cross-entropy loss function, to obtain the trained 3D fully convolutional network.
12. The method according to claim 4, wherein before performing feature extraction on the fused image and the third-modality image separately using the trained multi-branch fully convolutional network, the method comprises:
acquiring multiple second sample image groups, each second sample image group comprising a first-modality image sample used for tissue region segmentation, a second-modality image sample used for intracranial region identification, and a third-modality image sample used for white matter lesion region identification;
fusing the first-modality image sample with the second-modality image sample to obtain a fused image sample;
performing feature extraction on the fused image sample and the third-modality image sample separately using a preset multi-branch fully convolutional network;
fusing the extracted features, and classifying the fused feature to obtain type predicted values;
obtaining type true values of the fused feature; and
converging the multi-branch fully convolutional network according to the type predicted values and the type true values by using a multi-class loss function, to obtain the trained multi-branch fully convolutional network.
13. A brain image segmentation apparatus, comprising:
an obtaining unit, configured to obtain an image group to be segmented, the image group to be segmented comprising multiple modality images of a brain;
a stripping unit, configured to perform skull stripping according to the multiple modality images to obtain a skull-stripping mask;
an extraction unit, configured to perform feature extraction on the multiple modality images separately and fuse the extracted features;
a segmentation unit, configured to segment intracranial tissue according to the fused feature to obtain an initial segmentation result; and
a fusion unit, configured to fuse the mask with the initial segmentation result to obtain a segmentation result corresponding to the image group to be segmented.
14. A storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to perform the steps in the brain image segmentation method according to any one of claims 1 to 12.
CN201910070881.7A 2019-01-25 2019-01-25 Brain image segmentation method, device and storage medium Active CN109872328B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201910070881.7A CN109872328B (en) 2019-01-25 2019-01-25 Brain image segmentation method, device and storage medium
EP20744359.9A EP3916674B1 (en) 2019-01-25 2020-01-15 Brain image segmentation method, apparatus, network device and storage medium
PCT/CN2020/072114 WO2020151536A1 (en) 2019-01-25 2020-01-15 Brain image segmentation method, apparatus, network device and storage medium
US17/241,800 US11748889B2 (en) 2019-01-25 2021-04-27 Brain image segmentation method and apparatus, network device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910070881.7A CN109872328B (en) 2019-01-25 2019-01-25 Brain image segmentation method, device and storage medium

Publications (2)

Publication Number Publication Date
CN109872328A true CN109872328A (en) 2019-06-11
CN109872328B CN109872328B (en) 2021-05-07

Family

ID=66918101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910070881.7A Active CN109872328B (en) 2019-01-25 2019-01-25 Brain image segmentation method, device and storage medium

Country Status (4)

Country Link
US (1) US11748889B2 (en)
EP (1) EP3916674B1 (en)
CN (1) CN109872328B (en)
WO (1) WO2020151536A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951281B (en) * 2020-08-10 2023-11-28 中国科学院深圳先进技术研究院 Image segmentation method, device, equipment and storage medium
CN112016318B (en) * 2020-09-08 2023-11-21 平安科技(深圳)有限公司 Triage information recommendation method, device, equipment and medium based on interpretation model
CN112419332A (en) * 2020-11-16 2021-02-26 复旦大学 Skull stripping method and device for thick-layer MRI (magnetic resonance imaging) image
CN112699937B (en) * 2020-12-29 2022-06-21 江苏大学 Apparatus, method, device, and medium for image classification and segmentation based on feature-guided network
CN112801161B (en) * 2021-01-22 2024-06-14 桂林市国创朝阳信息科技有限公司 Small sample image classification method, device, electronic equipment and computer storage medium
CN112991350B (en) * 2021-02-18 2023-06-27 西安电子科技大学 RGB-T image semantic segmentation method based on modal difference reduction
CN113034518A (en) * 2021-04-16 2021-06-25 佛山市南海区广工大数控装备协同创新研究院 Liver focus segmentation method based on convolutional neural network
CN113744250A (en) * 2021-09-07 2021-12-03 金陵科技学院 Method, system, medium and device for segmenting brachial plexus ultrasonic image based on U-Net
CN114241278B (en) * 2021-12-29 2024-05-07 北京工业大学 Multi-branch pedestrian re-identification method and system
CN114742848B (en) * 2022-05-20 2022-11-29 深圳大学 Polyp image segmentation method, device, equipment and medium based on residual double attention
CN115578370B (en) * 2022-10-28 2023-05-09 深圳市铱硙医疗科技有限公司 Brain image-based metabolic region abnormality detection method and device
CN115661178A (en) * 2022-11-17 2023-01-31 博奥生物集团有限公司 Method and apparatus for segmenting an imprinted image
CN115760810B (en) * 2022-11-24 2024-04-12 江南大学 Medical image segmentation apparatus, method and computer-readable storage medium
CN115998337A (en) * 2022-12-02 2023-04-25 天津大学 Three-dimensional craniotomy ultrasonic imaging method based on linear residual decomposition
CN116434045B (en) * 2023-03-07 2024-06-14 中国农业科学院烟草研究所(中国烟草总公司青州烟草研究所) Intelligent identification method for tobacco leaf baking stage
CN116386027B (en) * 2023-04-03 2023-10-24 南方海洋科学与工程广东省实验室(珠海) Ocean three-dimensional vortex recognition system and method based on artificial intelligence algorithm
CN116630334B (en) * 2023-04-23 2023-12-08 中国科学院自动化研究所 Method, device, equipment and medium for real-time automatic segmentation of multi-segment blood vessel
CN117495893B * 2023-12-25 2024-03-19 南京筑卫医学科技有限公司 Skull stripping method based on active contour model

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8588486B2 (en) * 2009-06-18 2013-11-19 General Electric Company Apparatus and method for isolating a region in an image
EP2446418A4 (en) * 2009-06-23 2013-11-13 Agency Science Tech & Res A method and system for segmenting a brain image
US20150279084A1 (en) * 2014-04-01 2015-10-01 Yu Deuerling-Zheng Guided Noise Reduction with Streak Removal for High Speed C-Arm CT
EP2988270A1 (en) * 2014-08-22 2016-02-24 Siemens Aktiengesellschaft Method for volume evaluation of penumbra mismatch in acute ischemic stroke and system therefor
US9530206B2 (en) * 2015-03-31 2016-12-27 Sony Corporation Automatic 3D segmentation and cortical surfaces reconstruction from T1 MRI
US10037603B2 (en) * 2015-05-04 2018-07-31 Siemens Healthcare Gmbh Method and system for whole body bone removal and vascular visualization in medical image data
CN105701799B * 2015-12-31 2018-10-30 东软集团股份有限公司 Method and apparatus for segmenting pulmonary vessels from a lung mask image
US10402976B2 (en) * 2017-05-16 2019-09-03 Siemens Healthcare Gmbh Isolation of aneurysm and parent vessel in volumetric image data
CN108986106B (en) * 2017-12-15 2021-04-16 浙江中医药大学 Automatic segmentation method for retinal blood vessels for glaucoma
CN108257134B * 2017-12-21 2022-08-23 深圳大学 Automatic nasopharyngeal carcinoma lesion segmentation method and system based on deep learning
US11170508B2 (en) * 2018-01-03 2021-11-09 Ramot At Tel-Aviv University Ltd. Systems and methods for the segmentation of multi-modal image data
CN109872328B (en) * 2019-01-25 2021-05-07 腾讯科技(深圳)有限公司 Brain image segmentation method, device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180025489A1 (en) * 2016-07-25 2018-01-25 Case Western Reserve University Quantifying mass effect deformation with structural radiomics in brain tumor patients
CN107909588A * 2017-07-26 2018-04-13 广州慧扬健康科技有限公司 MRI subcortical partition system based on a three-dimensional fully convolutional neural network
CN109035263A (en) * 2018-08-14 2018-12-18 电子科技大学 Brain tumor image automatic segmentation method based on convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jens Kleesiek et al.: "Deep MRI brain extraction: A 3D convolutional neural network for skull stripping", NeuroImage *
L. Vidyaratne et al.: "Deep Learning and Texture-Based Semantic Label Fusion for Brain Tumor Segmentation", SPIE Medical Imaging *
Snehashis Roy et al.: "Multiple Sclerosis Lesion Segmentation from Brain MRI via Fully Convolutional Neural Networks", arXiv:1803.09172v1 [cs.CV] *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020151536A1 (en) * 2019-01-25 2020-07-30 腾讯科技(深圳)有限公司 Brain image segmentation method, apparatus, network device and storage medium
US11748889B2 (en) 2019-01-25 2023-09-05 Tencent Technology (Shenzhen) Company Limited Brain image segmentation method and apparatus, network device, and storage medium
CN110335217A * 2019-07-10 2019-10-15 东北大学 Medical image denoising method based on 3D residual encoding-decoding
CN110766051A (en) * 2019-09-20 2020-02-07 四川大学华西医院 Lung nodule morphological classification method based on neural network
WO2021058843A1 (en) * 2019-09-23 2021-04-01 Quibim, S.L. Method and system for the automatic segmentation of white matter hyperintensities in brain magnetic resonance images
ES2813777A1 * 2019-09-23 2021-03-24 Quibim S L Method and system for the automatic segmentation of white matter hyperintensities in brain magnetic resonance images (machine translation by Google Translate, not legally binding)
CN111144418B (en) * 2019-12-31 2022-12-02 北京交通大学 Railway track area segmentation and extraction method
CN111144418A (en) * 2019-12-31 2020-05-12 北京交通大学 Railway track area segmentation and extraction method
CN111291767A (en) * 2020-02-12 2020-06-16 中山大学 Fine granularity identification method, terminal equipment and computer readable storage medium
CN111291767B (en) * 2020-02-12 2023-04-28 中山大学 Fine granularity identification method, terminal equipment and computer readable storage medium
CN112435212A (en) * 2020-10-15 2021-03-02 杭州脉流科技有限公司 Brain focus region volume obtaining method and device based on deep learning, computer equipment and storage medium
CN112241955A (en) * 2020-10-27 2021-01-19 平安科技(深圳)有限公司 Method and device for segmenting broken bones of three-dimensional image, computer equipment and storage medium
CN112241955B (en) * 2020-10-27 2023-08-25 平安科技(深圳)有限公司 Broken bone segmentation method and device for three-dimensional image, computer equipment and storage medium
CN112950612A (en) * 2021-03-18 2021-06-11 西安智诊智能科技有限公司 Brain tumor image segmentation method based on convolutional neural network
CN112950774A (en) * 2021-04-13 2021-06-11 复旦大学附属眼耳鼻喉科医院 Three-dimensional modeling device, operation planning system and teaching system
CN115187819A (en) * 2022-08-23 2022-10-14 北京医准智能科技有限公司 Training method and device for image classification model, electronic equipment and storage medium

Also Published As

Publication number Publication date
US11748889B2 (en) 2023-09-05
EP3916674B1 (en) 2024-04-10
US20210248751A1 (en) 2021-08-12
CN109872328B (en) 2021-05-07
EP3916674A1 (en) 2021-12-01
WO2020151536A1 (en) 2020-07-30
EP3916674A4 (en) 2022-03-30

Similar Documents

Publication Publication Date Title
CN109872328A (en) 2019-06-11 Brain image segmentation method, device and storage medium
CN110110617B (en) Medical image segmentation method and device, electronic equipment and storage medium
CN109886986A (en) 2019-06-14 Dermoscopic image segmentation method based on multi-branch convolutional neural networks
CN110070540B (en) Image generation method and device, computer equipment and storage medium
CN110490881A (en) 2019-11-22 Medical image segmentation method and device, computer equipment, and readable storage medium
CN108615236A (en) 2018-10-02 Image processing method and electronic device
CN111047589A (en) Attention-enhanced brain tumor auxiliary intelligent detection and identification method
Hu et al. Automatic segmentation of intracerebral hemorrhage in CT images using encoder–decoder convolutional neural network
CN110246109B (en) Analysis system, method, device and medium fusing CT image and personalized information
CN109919932A (en) 2019-06-21 Target object identification method and device
CN114926396B (en) Mental disorder magnetic resonance image preliminary screening model construction method
CN116452618A (en) Three-input spine CT image segmentation method
CN113436128B (en) Dual-discriminator multi-mode MR image fusion method, system and terminal
CN114283406A (en) Cell image recognition method, device, equipment, medium and computer program product
Sun et al. Detection of breast tumour tissue regions in histopathological images using convolutional neural networks
Tian et al. Non-tumorous facial pigmentation classification based on multi-view convolutional neural network with attention mechanism
CN115965785A (en) Image segmentation method, device, equipment, program product and medium
KR101442728B1 (en) Method of classifying pulmonary arteries and veins
Rastgarpour et al. The status quo of artificial intelligence methods in automatic medical image segmentation
Kalaivani et al. A Deep Ensemble Model for Automated Multiclass Classification Using Dermoscopy Images
Dessai et al. A parallel segmentation of brain tumor from magnetic resonance images
Kumar et al. Medical image fusion based on type-2 fuzzy sets with teaching learning based optimization
Liang et al. CasCRNN-GL-Net: cascaded convolutional and recurrent neural networks with global and local pathways for classification of focal liver lesions in multi-phase CT images
Liu et al. Direct 3D model extraction method for color volume images
Pana et al. 3D Brain Tumor Volume Reconstruction and Quantification using MRI Multi-modalities Brain Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant