CN110390660A - Automatic classification method, device and storage medium for organs at risk in medical images - Google Patents

Automatic classification method, device and storage medium for organs at risk in medical images

Info

Publication number
CN110390660A
Authority
CN
China
Prior art keywords
convolutional neural
neural networks
depth convolutional
human body
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810335549.4A
Other languages
Chinese (zh)
Inventor
刘春雷
孙窈
崔德琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lianxin Medical Technology Co Ltd
Original Assignee
Beijing Lianxin Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lianxin Medical Technology Co Ltd filed Critical Beijing Lianxin Medical Technology Co Ltd
Priority to CN201810335549.4A priority Critical patent/CN110390660A/en
Publication of CN110390660A publication Critical patent/CN110390660A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the fields of medical imaging and computer technology, and relates to a method, device and storage medium for automatically classifying organs at risk in medical images. The classification method includes the following steps: preprocessing the medical image to be classified; inputting the preprocessed medical image into a trained CNN for classification prediction, where training the CNN includes sequentially dividing the human body from top to bottom into several regions; producing classification labels corresponding to the divided regions using one-hot encoding; and preprocessing and augmenting the training image data, inputting it into the CNN for training, and determining the network weights. The whole-body organ-at-risk automatic classification method based on a CNN provided by the invention has very high accuracy and can predict, in a very short time, the image slices in which the main whole-body organs at risk are located, which greatly improves physicians' working efficiency while also saving valuable time for the timely treatment of patients.

Description

Automatic classification method, device and storage medium for organs at risk in medical images
Technical field
The invention belongs to the fields of medical imaging and computer technology, and relates to a method, device and storage medium for automatically classifying organs at risk in medical images based on a deep convolutional neural network.
Background technique
During radiotherapy in hospitals, the delineation of target volumes is routinely required. At present, physicians mainly delineate target volumes manually, which is time-consuming and laborious; this reduces physicians' working efficiency and, more importantly, delays the timely treatment of patients.
With the development of deep learning, automatic medical image segmentation based on deep learning has become a popular research topic in the medical field. However, a medical image series, such as a CT or MRI series, usually contains many organs at risk of the human body, and before a target organ can be segmented automatically, the slices containing that organ must first be selected. When many organs at risk need to be segmented automatically, manually screening the slices containing each organ at risk is time-consuming and laborious. Therefore, automatic whole-body classification of organs at risk based on human anatomy is particularly important.
Summary of the invention
The object of the invention is to overcome the above defects of the prior art by providing a method, device and storage medium for automatically classifying organs at risk in medical images based on a deep convolutional neural network.
To achieve the above object, the invention adopts the following technical scheme:
The present invention divides human medical images (such as CT, MRI or PET images) into ten classes according to human anatomy (the division sequentially assigns nine classes from top to bottom, and medical images outside the human body form a tenth class). Labels for these ten classes are produced using one-hot encoding, so that each section of the human body has its own label. The deep convolutional neural network built by the invention is then trained, and the trained model and weights are saved. The saved model and weights can then be used to predict new medical images; the prediction result is the class to which the medical image belongs.
A method for automatically classifying organs at risk of the human body based on a deep convolutional neural network, suitable for execution in a computing device, includes the following steps:
(1) preprocessing the medical image to be classified;
(2) inputting the preprocessed medical image into a trained deep convolutional neural network for classification prediction;
wherein the training method of the deep convolutional neural network includes the following steps:
(i) sequentially dividing the human body from top to bottom into several regions;
(ii) producing human-body classification labels corresponding to the above divided regions using one-hot encoding;
(iii) applying interpolation to the data used for training;
(iv) cropping the training data to a fixed size according to the image position of the target organ;
(v) augmenting the cropped training data to enhance the generalization ability of the deep convolutional neural network model;
(vi) inputting the augmented training data into the deep convolutional neural network model for training, and obtaining the trained deep convolutional neural network model when the loss on the validation data set is less than or equal to a set threshold.
Further preferably, in step (1) the medical image is a CT image, an MRI image or a PET image.
In step (1), the preprocessing is interpolation and/or cropping the x and y dimensions of the medical image to be classified so that they are consistent with the x and y dimensions of the training images.
The deep convolutional neural network includes an input layer, convolutional layers, max-pooling layers, merge layers and an output layer, where the convolutional layers, max-pooling layers and merge layers are hidden layers.
In the deep convolutional neural network model, each convolutional layer has a weight initialization function and an activation function.
Further preferably, the weight initialization function is selected from the He_normal, Random_normal or Glorot_normal functions, and the activation function is selected from the SeLU, ReLU, PReLU or ELU functions.
The He_normal function is:

$$W^{(i)} \sim N\left(0,\ \frac{2}{n^{(i)}}\right)$$

i.e. the weights are drawn from a zero-mean Gaussian with variance 2/n^{(i)}, where i denotes the i-th layer of the neural network, W^{(i)} denotes the weights of the i-th layer, and n^{(i)} denotes the number of neurons in the i-th layer.
The SeLU activation function is:

$$\mathrm{selu}(x) = \lambda \begin{cases} x, & x > 0 \\ \alpha\,(e^{x}-1), & x \le 0 \end{cases}$$

where α = 1.6732632423543772848170429916717 and λ = 1.0507009873554804934193349852946.
The loss function of the deep convolutional neural network is the multi-class cross-entropy function, defined as:

$$a_{k} = \frac{e^{z_{k}}}{\sum_{j} e^{z_{j}}}, \qquad loss = -\sum_{k} y_{k}\log a_{k}$$

where a_k denotes the output value of the k-th neuron, z_k denotes the input of the k-th neuron, e denotes the natural constant, and y_k denotes the true value corresponding to the k-th neuron.
In step (iii), the interpolation uniformly resamples the x and y directions of each training image to a fixed size.
In step (v), the data augmentation includes rotation about the image center and translation along the x and y axes. For example, rotating the image about its center or jittering it along the x and y axes artificially creates new data, which enhances the generalization ability of the model so that it can still recognize data such as images with a deflected head.
In step (vi), the set loss threshold is less than or equal to 0.01.
In step (vi), the training process uses the AdaDelta or Adam optimization method.
In step (vi), the training includes forward propagation and backpropagation; one forward propagation plus one backpropagation constitutes one iteration. In the present invention the number of iterations is preferably at least 10, further preferably 10 to 100, and more preferably 20 to 50, at which point the accuracy of the trained deep convolutional neural network model levels off. The forward propagation and backpropagation of one iteration cover all hidden layers.
The present invention also provides a computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for the above medical-image organ-at-risk automatic classification method based on a deep convolutional neural network.
The present invention also provides a computer-readable storage medium storing one or more programs, the one or more programs including instructions adapted to be loaded by a memory and to execute the above medical-image organ-at-risk automatic classification method based on a deep convolutional neural network.
The present invention has the following technical effects:
Automatic whole-body classification of organs at risk is the basis for automatic segmentation of organs at risk. Manually screening one organ at risk takes a physician 1 to 3 minutes, wasting the physician's valuable time. The whole-body organ-at-risk automatic classification method based on a deep convolutional neural network provided by the invention needs only about 1 second to predict the slices containing the main whole-body organs at risk; compared with manual screening, the time is shortened by about 98%. This greatly improves physicians' working efficiency and at the same time saves valuable time for the timely treatment of patients. In addition, the automatic classification method provided by the invention also has very high accuracy.
Description of the drawings
Fig. 1 (a) is a flow chart of the medical-image organ-at-risk automatic classification method based on a deep convolutional neural network in a preferred embodiment of the invention;
Fig. 1 (b) is a flow chart of the deep convolutional neural network training method in a preferred embodiment of the invention.
Fig. 2 is a schematic diagram of the top-to-bottom division of the human body in a preferred embodiment of the invention.
Fig. 3 is a structure diagram of the deep convolutional neural network in a preferred embodiment of the invention.
Fig. 4 is a schematic diagram of the multi-convolution module of the deep convolutional neural network of Fig. 3 in a preferred embodiment of the invention.
Fig. 5 is a schematic diagram of the residual connection block of the deep convolutional neural network of Fig. 3 in a preferred embodiment of the invention.
Fig. 6 is a training image in a preferred embodiment of the invention.
Fig. 7 shows, for each class, the number of correct organ-at-risk predictions made by the deep convolutional neural network in a preferred embodiment of the invention.
Fig. 8 shows, for each class, the organ-at-risk prediction accuracy of the deep convolutional neural network in a preferred embodiment of the invention.
Specific embodiment
The present invention is further illustrated below in conjunction with drawings and examples.
The present invention provides a method for automatically classifying organs at risk of the human body based on a deep convolutional neural network. Human medical images (for example CT images) are divided into several classes according to human anatomy (for example, nine classes divided sequentially from top to bottom, with medical images outside the human body forming a tenth class). Labels for these ten classes are produced using one-hot encoding, so that each section of the human body has its own label. The deep convolutional neural network built by the invention is then trained, the trained model and weights are saved, and the saved model and weights can then be used to predict new medical images; the prediction result is the class to which the medical image belongs.
A method for automatically classifying organs at risk of the human body based on a deep convolutional neural network (as shown in Fig. 1 (a)), suitable for execution in a computing device, includes the following steps:
Preprocessing 210 is applied to the medical image to be classified.
Preferably, the preprocessing is interpolation; further preferably, the preprocessing also includes cropping the x and y dimensions of the medical image to be classified so that they are consistent with the x and y dimensions of the training images.
In the present embodiment, preferably, the x and y dimensions of the medical image to be classified are cropped to be consistent with the x and y dimensions of the training images.
The preprocessed medical image is input into the trained deep convolutional neural network for classification prediction 220; the medical image may be a CT image, an MRI image or a PET image.
Fig. 3 is the structure diagram of the deep convolutional neural network in an illustrative embodiment of the invention. The basic unit of the network is a single multi-convolution module (as shown in Fig. 4); a residual connection is made around every two multi-convolution modules, forming a residual connection block (as shown in Fig. 5). In the figures, arrows indicate the direction of data flow through the network and dashed lines indicate residual connections. The multi-convolution module of Fig. 4 uses 1*1, 3*3 and 5*5 convolution kernels simultaneously; different kernels provide different receptive fields and extract different image features, increasing the richness of the network. The residual connection block adds the input to the output obtained after two multi-convolution modules; neural networks typically suffer from vanishing gradients as the number of layers increases, and residual connections effectively alleviate this problem. The deep convolutional neural network of Fig. 3 contains 7 sequentially connected residual connection blocks; those skilled in the art will appreciate that the number of residual connection blocks in the network can be adjusted as needed, noting that the required memory grows with the number of residual blocks.
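The following is a minimal Keras-style sketch of the structure just described: a multi-convolution module with parallel 1*1, 3*3 and 5*5 kernels merged by concatenation, and a residual connection block that adds a skip connection around two such modules, using SeLU activations and He_normal initialization as in the embodiment. The filter counts, input size, pooling placement and softmax classification head are not given in the patent and are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def multi_conv_module(x, filters):
    """Parallel 1x1 / 3x3 / 5x5 convolutions merged along the channel axis (Fig. 4)."""
    c1 = layers.Conv2D(filters, 1, padding="same", activation="selu",
                       kernel_initializer="he_normal")(x)
    c3 = layers.Conv2D(filters, 3, padding="same", activation="selu",
                       kernel_initializer="he_normal")(x)
    c5 = layers.Conv2D(filters, 5, padding="same", activation="selu",
                       kernel_initializer="he_normal")(x)
    return layers.Concatenate()([c1, c3, c5])

def residual_block(x, filters):
    """Two multi-convolution modules with a skip connection around them (Fig. 5)."""
    y = multi_conv_module(x, filters)
    y = multi_conv_module(y, filters)
    # Match channel counts with a 1x1 convolution before the element-wise add.
    skip = layers.Conv2D(3 * filters, 1, padding="same",
                         kernel_initializer="he_normal")(x)
    return layers.Add()([skip, y])

def build_classifier(input_shape=(256, 256, 1), num_classes=10, num_blocks=7):
    """Stack of residual connection blocks followed by a softmax head (assumed)."""
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for _ in range(num_blocks):
        x = residual_block(x, filters=16)
        x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```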
The training method of the deep convolutional neural network (as shown in Fig. 1 (b)) includes the following steps:
The human body is sequentially divided from top to bottom into several regions 301. In an illustrative embodiment of the invention, human medical images are sequentially divided from top to bottom into ten classes according to human anatomy. The classes and their ranges are shown in Table 1 and Fig. 2, where Fig. 2 shows sagittal and coronal views of the human body used to illustrate the whole-body classification. The range above the crown, containing no human tissue, is class 1; from the crown to the top slice of the eyes is class 2; from the top slice to the bottom slice of the eyes is class 3; from the bottom slice of the eyes to the bottom slice of the cerebellum is class 4; from the bottom slice of the cerebellum to the last slice of the jaw is class 5; from the last slice of the jaw to the top of the lungs is class 6; from the top of the lungs to the top of the stomach is class 7; from the top of the stomach to the bottom of the kidneys is class 8; from the bottom of the kidneys to the top of the bladder is class 9; and from the top of the bladder to the feet is class 10.
Table 1
Class Range
1 Above the crown (no human tissue)
2 Crown to the top slice of the eyes
3 Top slice of the eyes to the bottom slice of the eyes
4 Bottom slice of the eyes to the bottom slice of the cerebellum
5 Bottom slice of the cerebellum to the last slice of the jaw
6 Last slice of the jaw to the top of the lungs
7 Top of the lungs to the top of the stomach
8 Top of the stomach to the bottom of the kidneys
9 Bottom of the kidneys to the top of the bladder
10 Top of the bladder to the feet
One-to-one classification labels 302 corresponding to the above divided regions are produced using one-hot encoding. The one-hot labels for each human body class in an illustrative embodiment of the invention are shown in Table 2; a minimal encoding sketch follows the table.
Table 2
Classification Label
1 1000000000
2 0100000000
3 0010000000
4 0001000000
5 0000100000
6 0000010000
7 0000001000
8 0000000100
9 0000000010
10 0000000001
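The following is a minimal sketch of producing the one-hot labels of Table 2; the 1-based class indices follow the embodiment, while the NumPy encoding itself is an illustrative assumption.

```python
import numpy as np

NUM_CLASSES = 10

def one_hot(class_id: int) -> np.ndarray:
    """Return the one-hot vector for a 1-based class id (1..10), as in Table 2."""
    label = np.zeros(NUM_CLASSES, dtype=np.float32)
    label[class_id - 1] = 1.0
    return label

print(one_hot(2))  # [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] -> class 2 in Table 2
```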
The data used for training is interpolated 303; the interpolation uniformly resamples the x and y directions of each training image to a fixed size of x0 mm by y0 mm (the x and y directions of a training image are shown in Fig. 6). Preferably, x0 and y0 are statistical results based on the data formats of multiple hospitals, taking the most common values across the hospitals.
According to the image position of the target organ, the training data is cropped to a fixed size 304.
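The following is a minimal sketch of steps 303 and 304, assuming a target in-plane spacing of 1.0 mm and a centre crop; the patent only states that the spacing x0, y0 comes from hospital statistics and that the crop window is placed according to the position of the target organ, so both choices are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_slice(img: np.ndarray, spacing_xy, target_xy=(1.0, 1.0)) -> np.ndarray:
    """Step 303: resample a 2-D slice (rows=y, cols=x) to a fixed pixel spacing in mm."""
    zoom_y = spacing_xy[1] / target_xy[1]
    zoom_x = spacing_xy[0] / target_xy[0]
    return zoom(img, (zoom_y, zoom_x), order=1)  # bilinear interpolation

def crop_center(img: np.ndarray, out_rows: int, out_cols: int) -> np.ndarray:
    """Step 304: cut a fixed-size window; a centre crop is assumed here."""
    r0 = max((img.shape[0] - out_rows) // 2, 0)
    c0 = max((img.shape[1] - out_cols) // 2, 0)
    return img[r0:r0 + out_rows, c0:c0 + out_cols]
```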
Data augmentation is applied to the cropped training data to enhance the generalization ability of the deep convolutional neural network model 305. Preferably, the data augmentation includes one or more of rotation about the image center and translation along the x and y axes. For example, rotating the image about its center or jittering it along the x and y axes artificially creates new data, which enhances the generalization ability of the model so that it can still recognize data such as images with a deflected head.
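The following is a minimal sketch of the data augmentation of step 305, i.e. rotation about the image centre and translation along the x and y axes; the angle and shift ranges are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly rotate about the image centre and translate along x and y."""
    angle = rng.uniform(-10.0, 10.0)           # degrees (assumed range)
    dy, dx = rng.uniform(-10.0, 10.0, size=2)  # pixels along y and x (assumed range)
    out = rotate(img, angle, reshape=False, order=1, mode="nearest")
    out = shift(out, (dy, dx), order=1, mode="nearest")
    return out
```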
The augmented training data is input into the deep convolutional neural network model for training; when the loss on the validation data set is less than or equal to the set threshold, the trained deep convolutional neural network model 306 is obtained.
Further, the set threshold for the validation data set loss is less than or equal to 0.01.
Further, the training process of the deep convolutional neural network may use the AdaDelta or Adam optimization method.
The training includes forward propagation and backpropagation; one forward propagation plus one backpropagation constitutes one iteration. In the present embodiment the number of iterations is preferably at least 10, further preferably 10 to 100, and more preferably 20 to 50, at which point the accuracy of the trained deep convolutional neural network model levels off. The forward propagation and backpropagation of one iteration cover all hidden layers.
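The following is a minimal Keras-style training sketch of step 306, assuming the model built in the earlier architecture sketch and preprocessed, augmented NumPy arrays as inputs; the Adam optimizer and the 0.01 validation-loss threshold follow the embodiment, while the batch size and epoch cap are illustrative assumptions (AdaDelta could be used instead of Adam).

```python
import tensorflow as tf

class StopAtValLoss(tf.keras.callbacks.Callback):
    """Stop training once the validation loss reaches the set threshold."""
    def __init__(self, threshold: float = 0.01):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get("val_loss", float("inf")) <= self.threshold:
            self.model.stop_training = True

def train(model: tf.keras.Model, x_train, y_train, x_val, y_val,
          loss_threshold: float = 0.01, max_epochs: int = 50):
    """Compile with Adam and multi-class cross-entropy, then fit until the threshold."""
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model.fit(x_train, y_train,
                     validation_data=(x_val, y_val),
                     epochs=max_epochs, batch_size=32,
                     callbacks=[StopAtValLoss(loss_threshold)])
```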
In an illustrative embodiment of the invention, the deep convolutional neural network includes an input layer, convolutional layers, max-pooling layers, merge layers and an output layer, where the convolutional layers, max-pooling layers and merge layers are hidden layers. In the deep convolutional neural network model, each convolutional layer has a weight initialization function and an activation function.
Further preferably, as will be understood by those skilled in the art, the weight initialization function may be selected from the He_normal, Random_normal or Glorot_normal functions, and the activation function may be selected from the SeLU, ReLU, PReLU or ELU functions.
The He_normal function is:

$$W^{(i)} \sim N\left(0,\ \frac{2}{n^{(i)}}\right)$$

i.e. the weights are drawn from a zero-mean Gaussian with variance 2/n^{(i)}, where i denotes the i-th layer of the neural network, W^{(i)} denotes the weights of the i-th layer, and n^{(i)} denotes the number of neurons in the i-th layer.
The SeLU activation function is:

$$\mathrm{selu}(x) = \lambda \begin{cases} x, & x > 0 \\ \alpha\,(e^{x}-1), & x \le 0 \end{cases}$$

where α = 1.6732632423543772848170429916717 and λ = 1.0507009873554804934193349852946.
The loss function of the deep convolutional neural network is the multi-class cross-entropy function, defined as:

$$a_{k} = \frac{e^{z_{k}}}{\sum_{j} e^{z_{j}}}, \qquad loss = -\sum_{k} y_{k}\log a_{k}$$

where a_k denotes the output value of the k-th neuron, z_k denotes the input of the k-th neuron, e denotes the natural constant, and y_k denotes the true value corresponding to the k-th neuron.
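The following is a minimal NumPy sketch of the three formulas above: He_normal initialization, the SeLU activation with the stated α and λ constants, and the softmax multi-class cross-entropy loss; the array shapes are illustrative assumptions.

```python
import numpy as np

ALPHA = 1.6732632423543772848170429916717
LAMBDA = 1.0507009873554804934193349852946

def he_normal(n_in: int, n_out: int, rng=np.random.default_rng()) -> np.ndarray:
    """W ~ N(0, 2/n_in): zero mean, variance scaled by the layer's fan-in."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

def selu(x: np.ndarray) -> np.ndarray:
    """SeLU activation with the constants given in the text."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

def softmax_cross_entropy(z: np.ndarray, y: np.ndarray) -> float:
    """z: neuron inputs z_k; y: one-hot true labels y_k."""
    a = np.exp(z - z.max()) / np.exp(z - z.max()).sum()  # a_k, shifted for stability
    return float(-(y * np.log(a + 1e-12)).sum())
```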
Fig. 7 and Fig. 8 show the per-class organ-at-risk prediction results obtained with the deep convolutional neural network of Fig. 3 in the present embodiment (Adam optimization, validation-set loss threshold set to 0.01). The ordinate is the true label and the abscissa is the predicted label; the diagonal of Fig. 7 gives the number of correct predictions. Taking class 2 as an example, 1289 (9 + 1269 + 11) CT images of class 2 were classified automatically; the network predicted 1269 of them as class 2, 9 as class 1 and 11 as class 3, so the prediction accuracy is 1269/1289 ≈ 0.98. Accordingly, the diagonal of Fig. 8 shows a correct-prediction ratio of 0.98 for class 2. It can be seen from the results in Fig. 7 and Fig. 8 that the lowest accuracy, for class 7, is 0.92, and the best results, for classes 1 and 10, reach 0.99. The total number of test images is 10483, of which 10061 are predicted correctly, giving an average prediction accuracy of 0.96.
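The following is a minimal sketch of how the accuracies quoted above are read off a confusion matrix whose rows are true labels and whose columns are predicted labels; only the class-2 row reuses the counts given in the text (9, 1269, 11), and the rest of the toy matrix is an illustrative assumption.

```python
import numpy as np

def class_accuracy(cm: np.ndarray, k: int) -> float:
    """Fraction of class-k samples (row k, 0-based) predicted as class k."""
    return cm[k, k] / cm[k].sum()

# Toy 3x3 confusion matrix; the middle row matches the class-2 counts in the text.
cm = np.array([[100,    2,  0],
               [  9, 1269, 11],
               [  0,    5, 90]])
print(round(class_accuracy(cm, 1), 2))    # 0.98  (1269 / 1289)
print(round(np.trace(cm) / cm.sum(), 2))  # overall accuracy of this toy matrix
```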
In addition, the CNN of the embodiment of the present invention can classify a medical image automatically in only about 1 s (hardware configuration: GPU: GEFORCE GTX 1080; CPU: Intel Xeon E3-1230 v5; memory: 16 GB).
Embodiment 2
A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for a medical-image organ-at-risk automatic classification method based on a deep convolutional neural network, the method comprising the following steps:
(1) preprocessing the medical image to be classified;
(2) inputting the preprocessed medical image into a trained deep convolutional neural network for classification prediction;
wherein the training method of the above deep convolutional neural network includes the following steps:
(i) sequentially dividing the human body from top to bottom into several regions;
(ii) producing human-body classification labels corresponding to the above divided regions using one-hot encoding;
(iii) applying interpolation to the data used for training;
(iv) cropping the training data to a fixed size according to the image position of the target organ;
(v) augmenting the cropped training data to enhance the generalization ability of the deep convolutional neural network model;
(vi) inputting the augmented training data into the deep convolutional neural network model for training to obtain the trained deep convolutional neural network model.
Embodiment 3
A computer-readable storage medium stores one or more programs, the one or more programs including instructions adapted to be loaded by a memory and to execute a medical-image organ-at-risk automatic classification method based on a deep convolutional neural network, the method comprising the following steps:
(1) preprocessing the medical image to be classified;
(2) inputting the preprocessed medical image into a trained deep convolutional neural network for classification prediction;
wherein the training method of the above deep convolutional neural network includes the following steps:
(i) sequentially dividing the human body from top to bottom into several regions;
(ii) producing human-body classification labels corresponding to the above divided regions using one-hot encoding;
(iii) applying interpolation to the data used for training;
(iv) cropping the training data to a fixed size according to the image position of the target organ;
(v) augmenting the cropped training data to enhance the generalization ability of the deep convolutional neural network model;
(vi) inputting the augmented training data into the deep convolutional neural network model for training to obtain the trained deep convolutional neural network model.
Those skilled in the art will understand that the modules in the devices of the embodiments can be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units or components of the embodiments can be combined into one module, unit or component, and furthermore they can be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
As used herein, unless specifically stated, the use of the ordinal numbers "first", "second", "third", etc. to describe an ordinary object merely denotes different instances of similar objects, and is not intended to imply that the objects so described must have a given order in time, space, ranking or any other manner.
In addition, those skilled in the art will understand that, although some embodiments described herein include certain features that are included in other embodiments but not other features, combinations of features of different embodiments fall within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
It should be understood that the various techniques described herein may be implemented in hardware or software or a combination thereof. Thus, the method and apparatus of the present invention, or certain aspects or parts thereof, may take the form of program code (i.e., instructions) embodied in a tangible medium, such as a floppy disk, CD-ROM, hard disk drive or any other machine-readable storage medium, wherein, when the program is loaded into a machine such as a computer and executed by the machine, the machine becomes an apparatus for practicing the invention.
By way of example and not limitation, computer-readable media include computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Any combination of the above is also included within the scope of computer-readable media.
The above description of the embodiments is intended to facilitate understanding and use of the invention by those skilled in the art. Those skilled in the art can obviously easily make various modifications to these embodiments and apply the general principles described herein to other embodiments without creative effort. Therefore, the invention is not limited to the embodiments herein; improvements and modifications made by those skilled in the art according to the disclosure of the invention without departing from the scope of the invention shall all fall within the protection scope of the invention.

Claims (10)

1. A method for automatically classifying organs at risk of the human body based on a deep convolutional neural network, suitable for execution in a computing device, characterized by comprising the following steps:
(1) preprocessing the medical image to be classified;
(2) inputting the preprocessed medical image into a trained deep convolutional neural network for classification prediction;
wherein the training method of the deep convolutional neural network comprises the following steps:
(i) sequentially dividing the human body from top to bottom into several regions;
(ii) producing human-body classification labels corresponding to the above divided regions using one-hot encoding;
(iii) applying interpolation to the data used for training;
(iv) cropping the training data to a fixed size according to the image position of the target organ;
(v) augmenting the cropped training data to enhance the generalization ability of the deep convolutional neural network model;
(vi) inputting the augmented training data into the deep convolutional neural network model for training, and obtaining the trained deep convolutional neural network model when the loss on the validation data set is less than or equal to a set threshold.
2. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 1, characterized in that: in step (1), the medical image is a CT image, an MRI image or a PET image;
or the preprocessing is interpolation, and/or cropping the x and y dimensions of the medical image to be classified so that they are consistent with the x and y dimensions of the training images.
3. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 1, characterized in that: the deep convolutional neural network comprises an input layer, convolutional layers, max-pooling layers, merge layers and an output layer, wherein the convolutional layers, max-pooling layers and merge layers are hidden layers.
4. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 1 or 3, characterized in that: in the deep convolutional neural network, each convolutional layer has a weight initialization function and an activation function.
5. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 4, characterized in that: the weight initialization function is selected from the He_normal, Random_normal or Glorot_normal functions;
the activation function is selected from the SeLU, ReLU, PReLU or ELU functions.
6. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 1 or 3, characterized in that: the loss function of the deep convolutional neural network is the multi-class cross-entropy function, defined as:

$$a_{k} = \frac{e^{z_{k}}}{\sum_{j} e^{z_{j}}}, \qquad loss = -\sum_{k} y_{k}\log a_{k}$$

where a_k denotes the output value of the k-th neuron, z_k denotes the input of the k-th neuron, e denotes the natural constant, and y_k denotes the true value corresponding to the k-th neuron.
7. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 1, characterized in that: in step (iii), the interpolation uniformly resamples the x and y directions of each training image to a fixed size;
or in step (v), the data augmentation comprises rotation about the image center and translation along the x and y axes.
8. The method for automatically classifying organs at risk of the human body based on a deep convolutional neural network according to claim 1, characterized in that: in step (vi), the loss threshold is less than or equal to 0.01;
or the training process uses the AdaDelta or Adam optimization method;
or the training comprises forward propagation and backpropagation, one forward propagation plus one backpropagation constituting one iteration, and the forward propagation and backpropagation of one iteration covering all hidden layers.
9. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for the medical-image organ-at-risk automatic classification method based on a deep convolutional neural network according to any one of claims 1-8.
10. A computer-readable storage medium storing one or more programs, the one or more programs including instructions adapted to be loaded by a memory and to execute the medical-image organ-at-risk automatic classification method based on a deep convolutional neural network according to any one of claims 1-8.
CN201810335549.4A 2018-04-16 2018-04-16 Automatic classification method, device and storage medium for organs at risk in medical images Pending CN110390660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810335549.4A CN110390660A (en) 2018-04-16 2018-04-16 Automatic classification method, device and storage medium for organs at risk in medical images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810335549.4A CN110390660A (en) 2018-04-16 2018-04-16 Automatic classification method, device and storage medium for organs at risk in medical images

Publications (1)

Publication Number Publication Date
CN110390660A true CN110390660A (en) 2019-10-29

Family

ID=68283735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810335549.4A Pending CN110390660A (en) 2018-04-16 2018-04-16 Automatic classification method, device and storage medium for organs at risk in medical images

Country Status (1)

Country Link
CN (1) CN110390660A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183718A (en) * 2020-08-31 2021-01-05 华为技术有限公司 Deep learning training method and device for computing equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609680A (en) * 2011-12-22 2012-07-25 中国科学院自动化研究所 Method for detecting human body parts by performing parallel statistical learning based on three-dimensional depth image information
CN105740892A (en) * 2016-01-27 2016-07-06 北京工业大学 High-accuracy human body multi-position identification method based on convolutional neural network
CN106204587A (en) * 2016-05-27 2016-12-07 孔德兴 Multiple organ dividing method based on degree of depth convolutional neural networks and region-competitive model
CN107194338A (en) * 2017-05-14 2017-09-22 北京工业大学 Traffic environment pedestrian detection method based on human body tree graph model
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method
US20180060652A1 (en) * 2016-08-31 2018-03-01 Siemens Healthcare Gmbh Unsupervised Deep Representation Learning for Fine-grained Body Part Recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609680A (en) * 2011-12-22 2012-07-25 中国科学院自动化研究所 Method for detecting human body parts by performing parallel statistical learning based on three-dimensional depth image information
CN105740892A (en) * 2016-01-27 2016-07-06 北京工业大学 High-accuracy human body multi-position identification method based on convolutional neural network
CN106204587A (en) * 2016-05-27 2016-12-07 孔德兴 Multiple organ dividing method based on degree of depth convolutional neural networks and region-competitive model
US20180060652A1 (en) * 2016-08-31 2018-03-01 Siemens Healthcare Gmbh Unsupervised Deep Representation Learning for Fine-grained Body Part Recognition
CN107194338A (en) * 2017-05-14 2017-09-22 北京工业大学 Traffic environment pedestrian detection method based on human body tree graph model
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ICTCXQ: "TensorFlow 多分类标签转换成One-hot", 《CSDN博客-HTTPS://BLOG.CSDN.NET/ICTCXQ/ARTICLE/DETAILS/78545282》 *
YUNCHAO WEI 等: "HCP: A Flexible CNN Framework for Multi-Label Image Classification", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183718A (en) * 2020-08-31 2021-01-05 华为技术有限公司 Deep learning training method and device for computing equipment
WO2022042713A1 (en) * 2020-08-31 2022-03-03 华为技术有限公司 Deep learning training method and apparatus for use in computing device
CN112183718B (en) * 2020-08-31 2023-10-10 华为技术有限公司 Deep learning training method and device for computing equipment

Similar Documents

Publication Publication Date Title
Salehi et al. A CNN model: earlier diagnosis and classification of Alzheimer disease using MRI
CN109522973A (en) Medical big data classification method and system based on production confrontation network and semi-supervised learning
CN107506797A (en) One kind is based on deep neural network and multi-modal image alzheimer disease sorting technique
CN107715314B (en) Deep learning-based radiotherapy system and method
CN106204587A (en) Multiple organ dividing method based on degree of depth convolutional neural networks and region-competitive model
CN110310287A (en) It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN108171711A (en) A kind of infant's brain Magnetic Resonance Image Segmentation method based on complete convolutional network
CN106682616A (en) Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning
CN107977969A (en) A kind of dividing method, device and the storage medium of endoscope fluorescence image
CN106815481A (en) A kind of life cycle Forecasting Methodology and device based on image group
CN107358600A (en) Automatic hook Target process, device and electronic equipment in radiotherapy planning
CN110070540A (en) Image generating method, device, computer equipment and storage medium
CN109589092A (en) Method and system are determined based on the Alzheimer's disease of integrated study
CN109472263A (en) A kind of brain magnetic resonance image dividing method of the global and local information of combination
CN109308477A (en) A kind of medical image automatic division method, equipment and storage medium based on rough sort
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
CN110363760A (en) The computer system of medical image for identification
CN114782350A (en) Multi-modal feature fusion MRI brain tumor image segmentation method based on attention mechanism
CN109215040A (en) A kind of tumor of breast dividing method based on multiple dimensioned weighting study
CN109461161A (en) A method of human organ in medical image is split based on neural network
CN110059656A (en) The leucocyte classification method and system for generating neural network are fought based on convolution
CN110223275A (en) A kind of cerebral white matter fiber depth clustering method of task-fMRI guidance
CN110097128A (en) Medical Images Classification apparatus and system
CN110232721A (en) A kind of crisis organ delineates the training method and device of model automatically
CN107958472A (en) PET imaging methods, device, equipment and storage medium based on sparse projection data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191029

RJ01 Rejection of invention patent application after publication