CN109816666A - Symmetric fully convolutional neural network model construction method, fundus image blood vessel segmentation method, apparatus, computer device and storage medium - Google Patents
- Publication number
- CN109816666A (application CN201910009415.8A)
- Authority
- CN
- China
- Prior art keywords
- fundus image
- sampling
- feature map
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide a symmetric fully convolutional neural network model construction method, a fundus image blood vessel segmentation method, an apparatus, a computer device, and a storage medium. The construction method includes: performing block partitioning and whitening on an original fundus image to obtain original fundus image patches; and inputting the original fundus image patches into a preset symmetric fully convolutional neural network for training, to obtain a preset symmetric fully convolutional neural network model, wherein each hidden layer of the model processes the feature maps input to that layer while also processing the feature maps output by all layers before it, so that the input is an original fundus image patch and the output is the fundus blood vessel segmentation result for each pixel corresponding to that patch. Embodiments of the present invention also provide a fundus image blood vessel segmentation method. The model constructed by the embodiments of the present invention avoids the overfitting problem, improves the generalization ability of the model, and improves the accuracy and precision of fundus image blood vessel segmentation.
Description
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a symmetric fully convolutional neural network model construction method, a fundus image blood vessel segmentation method, an apparatus, a computer device, and a storage medium.
Background technique
The health status of the retinal vessels in a fundus image is of great significance for doctors in the early diagnosis of diabetes, cardiovascular and cerebrovascular diseases, and a variety of ophthalmic diseases. However, owing to the complexity and particularity of the retinal vascular structure itself, retinal vessel feature extraction has always been a challenging task in the field of medical image processing. Manual segmentation of retinal vessels by medical staff is not only enormously labor-intensive but also highly subjective: different medical staff may produce different segmentation results for the vessels in the same fundus image. With the development of computer technology, automatic retinal vessel segmentation techniques have emerged.
Owing to the particularity of retinal vessels, vessel segmentation in retinal images currently faces several difficulties: 1) the contrast between vessels and background in fundus images is low, caused by uneven factors such as the fundus camera acquisition equipment and the illumination of the acquisition environment; 2) the structure of the vessels themselves is complex — retinal vessels vary in curvature and shape and form a tree-like distribution, which makes segmentation difficult; 3) the vessels obtained by most current segmentation methods contain breakpoints, which lowers segmentation precision.
Summary of the invention
Embodiments of the present invention provide a symmetric fully convolutional neural network model construction method, a fundus image blood vessel segmentation method, an apparatus, a computer device, and a storage medium, which can improve the generalization ability of the symmetric fully convolutional neural network model and, at the same time, the accuracy and precision of fundus image blood vessel segmentation.
In a first aspect, an embodiment of the present invention provides a symmetric fully convolutional neural network model construction method. The method includes:
performing block partitioning on an original fundus image; performing whitening on the block-partitioned original fundus image to obtain original fundus image patches; and inputting the original fundus image patches into a preset symmetric fully convolutional neural network for training, to obtain a preset symmetric fully convolutional neural network model, wherein each hidden layer of the preset symmetric fully convolutional neural network model processes the feature maps input to that layer while also processing the feature maps output by all layers before it, so that the input is an original fundus image patch and the output is the fundus blood vessel segmentation result for each pixel corresponding to the original fundus image patch.
In a second aspect, an embodiment of the present invention provides a fundus image blood vessel segmentation method. The method includes:
performing block partitioning on a target fundus image; performing whitening on the block-partitioned target fundus image to obtain target fundus image patches; inputting the target fundus image patches into the preset symmetric fully convolutional neural network model constructed by the method of the first aspect, to obtain the fundus blood vessel segmentation result for each pixel of each target fundus image patch; and re-stitching the per-pixel fundus blood vessel segmentation results of the target fundus image patches to obtain the fundus blood vessel segmentation result of the target fundus image.
In a third aspect, an embodiment of the present invention provides an apparatus, which includes units for performing the method of the first aspect, or units for performing the method of the second aspect.
In a fourth aspect, an embodiment of the present invention provides a computer device. The computer device includes a memory and a processor connected to the memory;
the memory is configured to store a computer program, and the processor is configured to run the computer program stored in the memory, so as to perform the method of the first aspect or the method of the second aspect.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the method of the first aspect or the method of the second aspect is implemented.
In the embodiments of the present invention, each hidden layer of the preset symmetric fully convolutional neural network processes the feature maps input to that layer while also processing the feature maps output by all layers before it, yielding a preset symmetric fully convolutional neural network model whose input is an original fundus image patch and whose output is the fundus blood vessel segmentation result for each pixel corresponding to that patch. Besides processing the previous layer's output, each hidden layer of the constructed model can also process the feature maps output by all preceding layers. This avoids the problem, found in other fully convolutional neural networks where the previous layer's output serves only as the next layer's input, that once the network grows beyond a certain depth, the connections between front and rear layers weaken as they lengthen, which may cause vanishing gradients. At the same time, the trained model avoids the problem of other neural networks whose classifier depends directly on the output of the last layer, which makes it hard to obtain a decision function with good generalization performance and thus produces overfitting; that is, the preset symmetric fully convolutional neural network model of the embodiments of the present invention solves the overfitting problem and improves the generalization ability of the model. Performing fundus image blood vessel segmentation with the preset symmetric fully convolutional neural network model constructed in the embodiments of the present invention can improve the accuracy and precision of fundus image blood vessel segmentation.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is the flow diagram of symmetrical full convolutional neural networks model building method provided in an embodiment of the present invention;
Fig. 2 is the sub-process schematic diagram of symmetrical full convolutional neural networks model building method provided in an embodiment of the present invention;
Fig. 3 is the sub-process schematic diagram of symmetrical full convolutional neural networks model building method provided in an embodiment of the present invention;
Fig. 4 is a comparison, provided by an embodiment of the present application, of a block-partitioned original fundus image and the corresponding whitened fundus image;
Fig. 5 is the structural schematic diagram of the full convolutional neural networks of symmetrical configuration provided in an embodiment of the present invention;
Fig. 6 is the structural schematic diagram of dense connection network provided in an embodiment of the present invention;
Fig. 7 is the structural schematic diagram of preset symmetrical full convolutional neural networks provided in an embodiment of the present invention;
Fig. 8 is the hidden layer decomposition diagram of preset symmetrical full convolutional neural networks provided in an embodiment of the present invention;
Fig. 9 is the sub-process schematic diagram of symmetrical full convolutional neural networks model building method provided in an embodiment of the present invention;
Figure 10 is the schematic diagram of Gamma correction provided in an embodiment of the present invention;
Figure 11 is the sub-process schematic diagram of symmetrical full convolutional neural networks model training provided in an embodiment of the present invention;
Figure 12 is the sub-process schematic diagram of Figure 11 provided in an embodiment of the present invention;
Figure 13 is the sub-process schematic diagram of Figure 11 provided in an embodiment of the present invention;
Figure 14 is the visualization schematic diagram of convolution operator provided in an embodiment of the present invention;
Figure 15 is the schematic flow chart of eye fundus image blood vessel segmentation method provided in an embodiment of the present invention;
Figure 16 is the schematic block diagram of symmetrical full convolutional neural networks model construction device provided in an embodiment of the present invention;
Figure 17 is the schematic block diagram of training unit provided in an embodiment of the present invention;
Figure 18 is the schematic block diagram of eye fundus image blood vessel segmentation device provided in an embodiment of the present invention;
Figure 19 is the schematic block diagram of computer equipment provided in an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of the symmetric fully convolutional neural network model construction method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes S101-S103.
S101: perform block partitioning on the original fundus image.
The original fundus image may be a color image or a grayscale image, etc. A color image is generally used, such as an RGB three-channel color image. The original fundus image may be obtained from a preset data set, such as a fundus image in the training set of the DRIVE database. In one embodiment, it may also be obtained by performing data augmentation on the fundus images in the preset data set, e.g., on the fundus images in the DRIVE training set.
In one embodiment, as shown in Fig. 2, step S101 includes the following steps S1011-S1013.
S1011: determine the data scale, e.g., select it through trial experiments — for example, the total number of patches into which all original images in the DRIVE training set are to be divided, such as 500,000 patches.
S1012: determine, according to the data scale, the number and size of the patches into which each original fundus image needs to be divided.
S1013: randomly divide the original fundus images, together with the preset standard images on which fundus blood vessel segmentation has been performed, according to the determined patch number and size. The preset standard images refer to the expert manually segmented standard images in the preset data set, such as the expert manually segmented standard images in the DRIVE training set. Each original fundus image corresponds to one such preset standard image, and the original fundus images and the preset standard images are divided in the same way.
For example, suppose the data scale is n, the original fundus image size is w*h, and the patch size is a*b. Then each original fundus image needs to be randomly divided into n/60 patches, and patches may overlap.
The center point of a patch should satisfy: (a/2) < x_center < (w - a/2)
(b/2) < y_center < (h - b/2)
After a center point is randomly selected, the range of the patch is:
patch = (x_center - a/2 : x_center + a/2, y_center - b/2 : y_center + b/2)
where x_center and y_center are the X-axis and Y-axis coordinates of the patch's center point; w and h are the length and width of the original fundus image; and a and b are the length and width of the patch.
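Under the constraints above, one patch extraction step can be sketched in NumPy (a sketch under the stated assumptions; the function names are illustrative, not from the patent):

```python
import numpy as np

def extract_random_patch(img, a, b, rng):
    """Randomly extract one a*b patch whose center satisfies
    a/2 < x_center < w - a/2 and b/2 < y_center < h - b/2,
    so the patch lies entirely inside the image."""
    h, w = img.shape[:2]
    x_center = rng.integers(a // 2, w - a // 2)  # X coordinate of patch center
    y_center = rng.integers(b // 2, h - b // 2)  # Y coordinate of patch center
    # patch = (x_center - a/2 : x_center + a/2, y_center - b/2 : y_center + b/2)
    return img[y_center - b // 2: y_center + b // 2,
               x_center - a // 2: x_center + a // 2]

def extract_patches(img, a, b, n_patches, seed=0):
    """Divide one image into n_patches randomly placed patches."""
    rng = np.random.default_rng(seed)
    return [extract_random_patch(img, a, b, rng) for _ in range(n_patches)]
```

Because each center is drawn independently, the extracted patches may overlap, matching the note above that there may be intersections between patches.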
S102: perform whitening on the block-partitioned original fundus image to obtain original fundus image patches.
The block-partitioned original fundus image consists of the corresponding patches. Under normal conditions, a fundus image is affected by factors such as the illumination intensity of the acquisition environment, central-line reflection, and the acquisition equipment; noise is easily introduced, and the contrast between vessels and background is reduced. To reduce the influence of these factors and extract the invariant information in the picture, the fundus image needs to be whitened, converting its pixel values to zero mean and unit variance.
In one embodiment, as shown in figure 3, step S102 includes the following steps S1021-S1022.
S1021: calculate the per-channel pixel mean and variance of the block-partitioned original fundus image.
For example, calculate the pixel mean μ and variance δ² of each channel of the block-partitioned original fundus image, where the mean and variance of each channel are calculated as:
μ_r = (1/R) Σ Z_r,  δ_r² = (1/R) Σ (Z_r − μ_r)²  (and similarly for the green and blue channels)
where R, G, B are the numbers of pixels in the red, green, and blue channels; r, g, b index the current pixel of the red, green, and blue channels; Z_r, Z_g, Z_b are the values of the current pixel; and μ_r, μ_g, μ_b are the pixel means of the red, green, and blue channels.
S1022: for each channel of the block-partitioned original fundus image, subtract the channel's pixel mean from each pixel value and divide by the channel's standard deviation, thereby obtaining the original fundus image patch, as in formula (7).
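The per-channel whitening of S1021-S1022 can be sketched in NumPy (the function name and the small epsilon guard against division by zero are illustrative, not from the patent):

```python
import numpy as np

def whiten(patch, eps=1e-8):
    """Per-channel whitening: subtract each channel's pixel mean and
    divide by its standard deviation, giving zero mean / unit variance."""
    patch = patch.astype(np.float64)
    mu = patch.mean(axis=(0, 1))          # pixel mean of each channel
    sigma = patch.std(axis=(0, 1)) + eps  # standard deviation of each channel
    return (patch - mu) / sigma
```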
Fig. 4 is a comparison, provided by an embodiment of the present application, of a block-partitioned original fundus image and the corresponding whitened fundus image. As shown in Fig. 4, Fig. 4a is the block-partitioned original fundus image, and Fig. 4b is the corresponding image after whitening. It can be seen that the contrast between vessels and background is clearly enhanced after processing. In some dimly lit or very noisy images, certain small vessels in the original fundus image are almost impossible to identify with the naked eye, but after whitening the vessels become much clearer, which plays an important role in later improving the segmentation precision of the vessels. Whitening enhances the contrast between tiny vessels and the background in dimly lit or lesioned pictures, so that the final model's precision on lesion images and on dim, unevenly illuminated images is improved.
S103: input the original fundus image patches into the preset symmetric fully convolutional neural network for training, to obtain the preset symmetric fully convolutional neural network model, wherein each hidden layer of the model processes the feature maps input to that layer while also processing the feature maps output by all layers before it, so that the input is an original fundus image patch and the output is the fundus blood vessel segmentation result for each pixel corresponding to that patch.
The preset symmetric fully convolutional neural network is a densely connected fully convolutional neural network with a symmetric structure.
Fig. 5 is a schematic structural diagram of the symmetrically structured fully convolutional neural network provided by an embodiment of the present invention. As shown in Fig. 5, the network includes an input layer, hidden layers, and an output layer; apart from the input and output layers, all other layers are hidden layers, and the hidden layers form a symmetric structure. The hidden layers are divided into a down-sampling part and an up-sampling part. The down-sampling part is formed by alternating convolutional layers and pooling layers, and during training it performs path contraction on the input image to capture global information. The up-sampling part is formed by alternating convolutional layers and deconvolutional (transposed convolutional) layers, and during training it performs path expansion on the down-sampled feature maps so as to accurately locate each pixel. Unlike a general convolutional neural network, the preset symmetric fully convolutional neural network contains no fully connected layers, only an output layer; the output layer applies a preset activation function, such as softmax, to perform two-class classification on each pixel of the up-sampled feature map (which has the same size as the original image), i.e., it calculates the probabilities that the pixel is a background point and a vessel point. The symmetrically structured fully convolutional neural network is an end-to-end network: the input is an image, and the output is a corresponding image of the same size.
Fig. 6 is a schematic structural diagram of the densely connected network provided by an embodiment of the present invention. It can be seen from Fig. 6 that each layer processes not only the feature maps output by the previous layer but also the feature maps output by all layers before it.
Introducing the dense connection mechanism on top of the symmetrically structured fully convolutional network prototype model has the following advantages:
(1) In the hidden layers of a deep learning network model, the previous layer's output generally serves as the next layer's input, so a network with N hidden layers has just N connections. But once the number of layers grows beyond a certain depth, the connections between front and rear layers weaken as they lengthen, which may cause the vanishing-gradient problem. Under the dense connection mechanism, the input of each hidden layer is the set of the outputs of all preceding layers, and the feature maps of the current layer are likewise passed directly to all subsequent layers as input; with N hidden layers, such a network has N*(N+1)/2 connections. This solves the vanishing-gradient problem well.
(2) The overfitting problem can be solved to a certain extent. The amount of data in the fundus image database used in this work is relatively scarce, so overfitting easily appears during network training. Each layer's features in a deep learning network are a nonlinear transformation of the input data; as the number of layers increases, the complexity of the transformation grows with it, and at the last layer of the network the complexity accumulates to a large degree. The classifier of a general neural network depends directly on the output of the last layer, which makes it hard to obtain a decision function with good generalization performance, and the overfitting problem arises accordingly. Under the dense connection mechanism, the network also draws on the low-complexity features used at the input layer, so it more easily obtains a decision function with better generalization performance.
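A toy NumPy sketch of the dense connection mechanism, counting connections as it goes (the per-layer transform is a stand-in for the real conv + batch-norm + ReLU layers, and all names are illustrative, not from the patent):

```python
import numpy as np

def dense_block(x, n_layers, growth=4):
    """Each layer consumes the channel-wise concatenation of the input
    and ALL previous layers' outputs (the dense connection mechanism),
    so N layers yield N*(N+1)/2 connections instead of N."""
    outputs = [x]
    n_connections = 0
    for _ in range(n_layers):
        inp = np.concatenate(outputs, axis=-1)  # all preceding feature maps
        n_connections += len(outputs)           # this layer connects to each of them
        # stand-in transform producing `growth` new channels
        out = np.tanh(inp.mean(axis=-1, keepdims=True).repeat(growth, axis=-1))
        outputs.append(out)
    return np.concatenate(outputs, axis=-1), n_connections
```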
Fig. 7 is a schematic structural diagram of the preset symmetric fully convolutional neural network provided by an embodiment of the present invention. The network as a whole presents a symmetric structure, and its hidden layers are likewise composed of a down-sampling part and an up-sampling part; the difference is that each layer's input is the superposition of all the feature maps output by the preceding layers, so that deeper network layers can reuse the features extracted in the front layers.
In the hidden layers of the model, the down-sampling part and the up-sampling part are each composed of multiple cycle units, as shown in Fig. 8, the hidden-layer decomposition diagram of the preset symmetric fully convolutional neural network provided by an embodiment of the present invention. The down-sampling part has three down-sampling cycle units, and the up-sampling part has three up-sampling cycle units; each down-sampling cycle unit corresponds to one hidden layer, and each up-sampling cycle unit likewise corresponds to one hidden layer.
In one embodiment, if the number of fundus images in the preset data set is insufficient, the number of fundus images in the data set needs to be increased. As shown in Fig. 1, before step S101 the method further includes:
S101a: obtain the fundus images in the preset data set and perform augmentation on them, to obtain the original fundus images.
In one embodiment, as shown in figure 9, step S101a includes the following steps S1011a-S1013a.
S1011a: rotate each fundus image in the preset data set by an angle.
S1012a: adjust the brightness of each rotated fundus image using Gamma correction.
S1013a: take each brightness-adjusted fundus image, together with the fundus images in the preset data set, as the original fundus images.
The preset data set may be the DRIVE training set. The formula of Gamma correction is: f(img_i) = img_i^γ, where img_i denotes the pixel value at a point i. Fig. 10 is a schematic diagram of Gamma correction provided by an embodiment of the present invention. From Fig. 10, the effect of Gamma correction is:
1) When γ < 1 (the dotted line in Fig. 10), the dynamic range in low-gray regions becomes larger, so the image contrast there is enhanced; in high-gray regions the dynamic range becomes smaller, while the overall gray value of the image increases. For example, 0.5 < γ < 1.
2) When γ > 1 (the solid line in Fig. 10), the dynamic range of low-gray regions becomes smaller and that of high-gray regions becomes larger, reducing the contrast in low-gray regions and improving it in high-gray regions, while the overall gray value of the image decreases. For example, 1 < γ < 1.5.
After augmentation, each picture in the training set is extended to three: the original image in the training set, the augmented image obtained with γ < 1, and the augmented image obtained with γ > 1. Assuming the training set contains 20 images, 60 images are obtained after augmentation for use in training the model.
In one embodiment, as shown in figure 11, step S103 includes the following steps S1031-S1038.
S1031: randomly select a preset proportion of the original fundus image patches as training samples.
For example, assume there are 570,000 image patches in total; in each training round of the model training stage, 90% of the data is randomly selected for training, and during training the data can be fed into the preset symmetric fully convolutional neural network in batches to reduce training time. The remaining 10% of the data is used for validation. Assuming the patch size is 48*48, and since the color-space information is retained to obtain the images of the different channels, the size of an image patch is 48*48*3.
S1032: input the acquired training samples into the multiple down-sampling cycle units in the preset symmetric fully convolutional neural network for processing, wherein each down-sampling cycle unit corresponds to one hidden layer of the preset symmetric fully convolutional neural network; each down-sampling cycle unit performs convolution on the feature maps input to that layer, performs convolution on the feature maps output by all layers before it, and then performs pooling on all the convolved feature maps.
Each down-sampling cycle unit corresponds to one hidden layer of the preset symmetric fully convolutional neural network. The step of inputting the feature maps processed by the multiple down-sampling cycle units into the up-sampling cycle units that mirror them includes: inputting the acquired training samples into the first down-sampling cycle unit for processing; inputting the feature maps processed by the first down-sampling cycle unit into the second down-sampling cycle unit; inputting the feature maps processed by the second down-sampling cycle unit into the third down-sampling cycle unit; and so on. The specific processing of every down-sampling cycle unit is the same: each performs convolution on the feature maps input to that layer, performs convolution on the feature maps output by all layers before it, and pools all the convolved feature maps.
Specifically, taking the step of inputting the acquired training samples into the first down-sampling cycle unit as an example, the specific processing of each down-sampling cycle unit is described; see Fig. 8 and Fig. 12. Each down-sampling cycle unit in Fig. 8 includes: conv2d (the first down-sampling convolutional layer), add (superposition), batch_normalization (standardization), activation (activation function), conv2d (the second down-sampling convolutional layer), and max_pooling (the down-sampling pooling layer).
As shown in Fig. 12, the step of inputting the acquired training samples into the first down-sampling cycle unit includes the following steps S1032a-S1032f.
S1032a: input the acquired training samples into the first down-sampling convolutional layer of the down-sampling cycle unit for convolution.
The first down-sampling convolutional layer extracts features from the feature maps output by the previous layer. The convolution kernel size may be, for example, 3*3.
S1032b: obtain the feature maps output by all layers before this layer, and superpose the acquired feature maps.
Superposition means treating the acquired feature maps as a whole and processing them together.
S1032c: the superimposed feature maps are standardized.
The purpose of standardization is, for each hidden neuron, to pull the input distribution, which would otherwise gradually drift toward the saturated ends of the value range after the nonlinear mapping, back to a standard normal distribution with mean 0 and variance 1, so that the input to the nonlinear activation function falls into the region where the function is more sensitive. This avoids the vanishing-gradient problem and speeds up the convergence of the network.
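The standardization step can be sketched in a few lines of Python (a minimal per-feature-map version, without the learnable scale and shift of a full batch-normalization layer; the function name is illustrative, not part of the claimed method):

```python
# Pull a list of activations back to mean 0 and variance 1, so that the
# inputs to the nonlinearity fall in its sensitive region (step S1032c).
def standardize(values, eps=1e-8):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in values]
```

The small eps guards against division by zero when a feature map is constant.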
S1032d: the standardized feature maps are activated using an activation function.
The activation function is the ReLU activation function, which is the most commonly used in current deep learning.
S1032e: the activated feature maps are input into the down-sampling second convolutional layer in the down-sampling cycle unit for convolution processing.
The second convolution extracts features from the superposition of all inputs from the preceding layers; the convolution kernel size may be, for example, 3×3.
S1032f: the feature map processed by the down-sampling first convolutional layer and the feature map processed by the down-sampling second convolutional layer are input into the down-sampling pooling layer in the down-sampling cycle unit for pooling processing. In this way, the processing of one down-sampling cycle unit is completed.
The down-sampling pooling layer performs pooling using the max-pooling method.
In each down-sampling cycle unit there are two convolution operations: the first convolution extracts features from the input of the previous layer, and the second convolution extracts features from the superposition of all inputs from the preceding layers. The convolution kernel size is 3×3. This is because the kernel size must be larger than 1 to enlarge the receptive field, and a kernel of even size cannot guarantee that the output feature map has the same size as the input feature map even when padding is added symmetrically on both sides of the feature map; therefore a 3×3 convolution kernel is generally chosen.
After each down-sampling operation, the feature map is reduced to 1/4 of its original size (each side is halved).
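The size bookkeeping in this passage can be checked with a short Python sketch (helper names are illustrative; the output-size formula assumes stride 1 convolutions):

```python
# Output side length of a convolution on an n-by-n input with kernel k,
# symmetric padding p and stride s: only odd kernels (e.g. 3x3 with p = 1)
# can keep the output the same size as the input.
def conv_out_size(n, k, p, s=1):
    return (n + 2 * p - k) // s + 1

# 2x2 max pooling with stride 2: each side is halved, so the feature map
# shrinks to 1/4 of its area after every down-sampling operation.
def max_pool_2x2(fmap):
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]
```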
S1033: the feature maps processed by the multiple down-sampling cycle units are input into the up-sampling cycle units, which are symmetrical to the down-sampling cycle units, for processing. Each up-sampling cycle unit corresponds to a hidden layer in the preset symmetrical fully convolutional neural network; each up-sampling cycle unit performs up-sampling on the feature map input to this layer, performs convolution on the up-sampled feature map, and performs convolution on the feature maps of all outputs before this layer.
Each up-sampling cycle unit corresponds to a hidden layer in the preset symmetrical fully convolutional neural network. The step of inputting the feature maps processed by the multiple down-sampling cycle units into the up-sampling cycle units, which are symmetrical to the down-sampling cycle units, for processing includes: inputting the feature maps processed by the multiple down-sampling cycle units into the first up-sampling cycle unit for processing; inputting the feature map processed by the first up-sampling cycle unit into the second up-sampling cycle unit for processing; inputting the feature map processed by the second up-sampling cycle unit into the third up-sampling cycle unit for processing; and so on. The specific processing procedure of each up-sampling cycle unit is the same: each up-sampling cycle unit performs up-sampling (which may also be called deconvolution processing) on the feature map input to this layer, performs convolution on the up-sampled feature map, and performs convolution on the feature maps of all outputs before this layer.
Specifically, the step of inputting the acquired feature maps into the first up-sampling cycle unit for processing is taken as an example to illustrate the specific processing procedure of each up-sampling cycle unit, which may be understood with reference to Fig. 8 and Fig. 13. Each up-sampling cycle unit in Fig. 8 includes: Up_sampling (up-sampling processing), conv2d (up-sampling first convolutional layer), add (superposition processing), Batch_normalization (standardization processing), activation (activation-function processing), and conv2d (up-sampling second convolutional layer).
As shown in Fig. 13, the step of inputting the acquired feature maps into the first up-sampling cycle unit for processing includes the following steps S1033a-S1033f.
S1033a: the acquired feature maps are up-sampled. Up-sampling may also be understood as deconvolution processing; it may use an interpolation algorithm, such as bilinear interpolation, to interpolate the acquired feature maps.
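As an illustrative sketch of the up-sampling step (using nearest-neighbour repetition rather than the bilinear interpolation mentioned above, to keep the example dependency-free; the function name is ours, not the patent's):

```python
# Double each side of a 2D feature map by repeating every pixel, the
# simplest interpolation scheme for the up-sampling step S1033a.
def upsample_nearest_2x(fmap):
    out = []
    for row in fmap:
        doubled = [v for v in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out
```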
S1033b: the acquired feature maps are input into the up-sampling first convolutional layer in the up-sampling cycle unit for convolution processing.
The up-sampling first convolutional layer extracts features from the feature map output by the previous layer. The convolution kernel size may be, for example, 3×3.
S1033c: the feature maps of all outputs before this layer are acquired, and the acquired feature maps are superimposed.
S1033d: the superimposed feature maps are standardized.
The purpose and effect of standardization are as described above and are not repeated here.
S1033e: the standardized feature maps are activated using an activation function.
The activation function is the ReLU activation function, which is the most commonly used in current deep learning.
S1033f: the activated feature maps are input into the up-sampling second convolutional layer in the up-sampling cycle unit for convolution processing. In this way, the processing of the first up-sampling cycle unit is completed.
The second convolution extracts features from the superposition of all inputs from the preceding layers; the convolution kernel size may be, for example, 3×3.
An up-sampling cycle unit corresponds to a down-sampling cycle unit, and their internal structures are similar. After each up-sampling operation, the feature map is enlarged to 4 times the size of the previous layer (each side is doubled).
For example, assume that the preset symmetrical fully convolutional neural network includes three down-sampling cycle units and three up-sampling cycle units, and the input patch size is 48×48. The patch size input to the first down-sampling cycle unit is then 48×48; the feature map obtained after the first down-sampling cycle unit is 24×24, i.e. the patch size input to the second down-sampling cycle unit is 24×24; the feature map obtained after the second down-sampling cycle unit is 12×12, i.e. the patch size input to the third down-sampling cycle unit is 12×12. Accordingly, when the feature maps processed by the three down-sampling cycle units are input to the first up-sampling cycle unit, the size of the input feature map is 12×12; the feature map obtained after the first up-sampling cycle unit is 24×24, i.e. the patch size input to the second up-sampling cycle unit is 24×24; the feature map obtained after the second up-sampling cycle unit is 48×48, i.e. the patch size input to the third up-sampling cycle unit is 48×48. The convolution kernel size of each convolutional layer in each down-sampling cycle unit and each up-sampling cycle unit is 3×3. The specific data are given in Table 1, which lists the input parameters of each hidden layer; Layer_1, Layer_2 and Layer_3 each correspond to one cycle unit.
Table 1: Hidden-layer input parameters
Down-sampling layer | Feature map size | Up-sampling layer | Feature map size | Convolution kernel size |
Layer_1 | 48×48 | Layer_1 | 12×12 | 3×3 |
Layer_2 | 24×24 | Layer_2 | 24×24 | 3×3 |
Layer_3 | 12×12 | Layer_3 | 48×48 | 3×3 |
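The size progression of Table 1 can be reproduced with a small Python sketch (assuming, as in the worked example, that each down-sampling halves the side, each up-sampling doubles it, and the last down-sampling cycle unit feeds the first up-sampling cycle unit; the function name is illustrative):

```python
# Input side length seen by each of the n down-sampling cycle units and
# by each of the n symmetrical up-sampling cycle units.
def unit_input_sizes(input_size, n_units):
    down = [input_size // (2 ** i) for i in range(n_units)]
    up = list(reversed(down))
    return down, up
```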
S1034: the feature maps processed by the multiple up-sampling cycle units are input into the output layer in the preset symmetrical fully convolutional neural network for processing, to obtain the predicted value corresponding to each pixel in the training samples.
S1035: the error is calculated according to the predicted value corresponding to each pixel in the training samples and the true label of each pixel of the training samples.
When calculating the error, the cross-entropy cost function is used.
The cross-entropy cost function is:

C = -(1/n) Σ_x [y ln a + (1 − y) ln(1 − a)]  (8)

a = σ(z)  (9)

z = Σ_j w_j x_j + b  (10)

where n is the total number of training samples, x is the input, w_j is the weight of the input, b is the bias, z is the weighted sum of the inputs, σ is the activation function, a is the actual output of the neuron, y is the desired output, and C is the cross-entropy cost function.
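Formulas (8)-(10) may be sketched directly in Python (illustrative names; a single sigmoid output per sample):

```python
import math

def sigmoid(z):                       # a = sigma(z), formula (9)
    return 1.0 / (1.0 + math.exp(-z))

# C = -(1/n) * sum over samples of [y*ln(a) + (1 - y)*ln(1 - a)], formula (8)
def cross_entropy_cost(outputs, targets):
    n = len(outputs)
    return -sum(y * math.log(a) + (1 - y) * math.log(1 - a)
                for a, y in zip(outputs, targets)) / n
```

The cost shrinks as the actual output a approaches the desired output y.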
S1036: it is judged whether the error has reached a minimum.
S1037: if the error has not reached a minimum, the network parameters in the preset symmetrical fully convolutional neural network are updated by a gradient-descent algorithm, and the symmetrical fully convolutional network with the updated network parameters is taken as the preset symmetrical fully convolutional neural network. The process then returns to step S1031.
During training of the neural network, w and b are updated by the gradient-descent algorithm, so the derivatives of the cost function with respect to w and b need to be calculated:

∂C/∂w_j = (1/n) Σ_x x_j (σ(z) − y)  (11)

∂C/∂b = (1/n) Σ_x (σ(z) − y)  (12)

Then w and b are updated, where η is the learning rate:

w_j → w_j − η ∂C/∂w_j  (13)

b → b − η ∂C/∂b  (14)

The weights of each layer of the network are then updated by back propagation, using the error δ^(l+1) of the next layer to express the error δ^l of the current layer:

δ^l = ((w^(l+1))^T δ^(l+1)) ⊙ σ'(z^l)  (15)

∂C/∂w^l_jk = a^(l−1)_k δ^l_j  (16)

Formulas (13)-(16) describe the process of updating the weights and biases; the meaning of each symbol in (13) and (14) is as above. Here l denotes the network layer and δ denotes the error, so δ^l denotes the error of layer l, and w^l_jk denotes the weight on the connection from the k-th neuron of layer (l−1) to the j-th neuron of layer l.
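For a single sigmoid neuron, formulas (11)-(14) amount to the following Python sketch (the function name and toy data are illustrative, not part of the claimed method; eta is the learning rate η):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One gradient-descent step: dC/dw = (1/n) * sum x*(a - y), formula (11);
# dC/db = (1/n) * sum (a - y), formula (12); then the updates (13), (14).
def gd_step(w, b, samples, eta):
    n = len(samples)
    dw = sum(x * (sigmoid(w * x + b) - y) for x, y in samples) / n
    db = sum(sigmoid(w * x + b) - y for x, y in samples) / n
    return w - eta * dw, b - eta * db
```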
S1038: if the error has reached a minimum, the symmetrical fully convolutional neural network model obtained by training is taken as the preset symmetrical fully convolutional neural network model.
In order to understand what exactly the preset symmetrical fully convolutional neural network model has learned, the learned convolution operators are visualized as shown in Fig. 14. Each convolution operator serves to extract one kind of image feature; the parameters inside the convolution kernel (the calculated w) are there so that, when the convolution operation is performed with the input values, uninteresting inputs are suppressed and interesting inputs are strengthened.
As can be seen from Fig. 14, the convolution operators roughly show the reticular structure of blood vessels: the white parts represent large weights, and the black parts represent small or negative weights. The convolution operators therefore tend to give larger weights to the pixels of vessel-shaped parts and to penalize background points. Finally, softmax is used to predict the probability that a pixel is a vessel point.
The above method embodiment realizes, through each hidden layer in the preset symmetrical fully convolutional neural network, the processing of the feature map input to this layer while also processing the feature maps of all outputs before this layer, so that the input is an original fundus image patch and the output is the fundus-vessel segmentation result for each pixel corresponding to the original fundus image patch. Each hidden layer constructed in the embodiment of the present invention can process not only the output of the previous layer but also the feature maps of all outputs before this layer. This avoids the situation in other fully convolutional neural networks where the output of one layer serves only as the input of the next layer, so that once the network becomes deep enough, the connection between front and rear layers weakens as the path lengthens, which may cause the vanishing-gradient problem. At the same time, the preset symmetrical fully convolutional neural network model obtained by training can solve the overfitting problem that arises in other neural networks because the classifier depends directly on the output of the last layer of the network, which makes it hard to obtain a decision function with good generalization performance. That is, the preset symmetrical fully convolutional neural network model in the embodiment of the present invention can solve the overfitting problem and improve the generalization ability of the model.
Fig. 15 is a schematic flow chart of the fundus image vessel segmentation method provided by an embodiment of the present invention. As shown in Fig. 15, the method includes S201-S204.
S201: block processing is performed on the target fundus image. The target fundus image may be a test image in a preset data set, for example an image in the test set of DRIVE. For example, the 20 original images in the test set of DRIVE are obtained and each image is divided into several patches in order, from left to right and from top to bottom; the patch size is the same as the patch size in step S101. There is no overlap between patches, so each full-size image is divided into n small patches, where:

n = (new_w / a) × (new_h / b)

in which new_w and new_h are respectively the padded width and height of the original image, and a and b are respectively the width and height of a patch.
Further, the values of new_w and new_h are determined by the following rule:
if w % a == 0, then new_w = w, where w is the width of the original image and % is the remainder operation;
else new_w = (w / a + 1) * a, where the division is floor division, i.e. the division operation takes only the integer part of the quotient.
Similarly,
if h % b == 0, then new_h = h, where h is the height of the original image;
else new_h = (h / b + 1) * b.
That is, the determination is as follows: if w % a == 0, then new_w = w; if w % a == 0 does not hold, then new_w = (w / a + 1) * a, where w / a is floor division. If h % b == 0, then new_h = h; if h % b == 0 does not hold, then new_h = (h / b + 1) * b.
The 20×n patches in total obtained after processing are put into an array in order.
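The new_w/new_h rule and the patch count n can be sketched in Python (function names are illustrative):

```python
# Round a side length up to the nearest multiple of the patch side
# (the new_w / new_h rule above; '/' in the rule is floor division).
def padded_size(w, a):
    return w if w % a == 0 else (w // a + 1) * a

# Number of non-overlapping a-by-b patches covering the padded image:
# n = (new_w / a) * (new_h / b).
def patch_count(w, h, a, b):
    return (padded_size(w, a) // a) * (padded_size(h, b) // b)
```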
S202: whitening processing is performed on the block-processed target fundus image to obtain target fundus image patches.
The block-processed target fundus image is the corresponding set of patches. The specific method of whitening is as described above and is not repeated here.
S203: the target fundus image patches are input into the constructed preset symmetrical fully convolutional neural network model to obtain the fundus-vessel segmentation result for each pixel of the target fundus image patches.
The constructed preset symmetrical fully convolutional neural network model may be the preset symmetrical fully convolutional neural network model constructed in any of the above embodiments.
S204: the fundus-vessel segmentation results for the pixels of the target fundus image patches are re-spliced to obtain the fundus-vessel segmentation result of the target fundus image.
The 20×n feature maps output by the preset symmetrical fully convolutional neural network model are stitched together again according to the size of the original images, so as to obtain the fundus-vessel segmentation result of the target fundus image.
In this method embodiment, the fundus-vessel segmentation result is obtained through the preset symmetrical fully convolutional neural network model obtained above. Since the precision and generalization ability of the preset symmetrical fully convolutional neural network model are improved, the accuracy and precision of fundus-vessel segmentation are improved.
Fig. 16 is a schematic block diagram of the symmetrical fully convolutional neural network model construction device provided by an embodiment of the present invention. As shown in Fig. 16, the device includes units for executing the above symmetrical fully convolutional neural network model construction method. Specifically, as shown in Fig. 16, the device includes a block processing unit 301, a whitening processing unit 302 and a training unit 303.
The block processing unit 301 is used for performing block processing on the original fundus image.
In an embodiment, the block processing unit 301 includes a scale determination unit, a patch-size determination unit and an image block processing unit. The scale determination unit is used for determining the data scale. The patch-size determination unit is used for determining, according to the data scale, the number and size of the patches into which each original fundus image needs to be divided. The image block processing unit is used for randomly dividing the original fundus image and the preset standard image on which fundus-vessel segmentation has been performed, according to the determined patch number and size.
The whitening processing unit 302 is used for performing whitening processing on the block-processed original fundus image to obtain original fundus image patches.
In an embodiment, the whitening processing unit 302 includes a mean-variance calculation unit and a mean-variance processing unit. The mean-variance calculation unit is used for calculating the pixel mean and variance under each channel of the block-processed original fundus image. The mean-variance processing unit is used for subtracting, from each pixel value under each channel of the block-processed original fundus image, the pixel mean under that channel, and dividing by the standard deviation under that channel, so as to obtain the original fundus image patches.
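The per-channel whitening performed by these units can be sketched in Python (an illustrative formulation in which each channel is a flat list of pixel values; the function name is ours):

```python
# Subtract the per-channel mean and divide by the per-channel standard
# deviation, as done by the mean-variance processing unit.
def whiten(image, eps=1e-8):
    out = {}
    for ch, pixels in image.items():
        n = len(pixels)
        mean = sum(pixels) / n
        std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
        out[ch] = [(p - mean) / (std + eps) for p in pixels]
    return out
```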
The training unit 303 is used for inputting the original fundus image patches into the preset symmetrical fully convolutional neural network for training, to obtain the preset symmetrical fully convolutional neural network model, wherein each hidden layer in the preset symmetrical fully convolutional neural network model processes the feature map input to this layer while also processing the feature maps of all outputs before this layer, so that the input is an original fundus image patch and the output is the fundus-vessel segmentation result for each pixel corresponding to the original fundus image patch.
In an embodiment, as shown in Fig. 16, the symmetrical fully convolutional neural network model construction device further includes an image augmentation unit 301a. The image augmentation unit 301a is used for acquiring the fundus images in the preset data set and performing data-augmentation processing on the acquired fundus images to obtain the original fundus images.
In an embodiment, the image augmentation unit 301a includes a rotation unit, a correction adjustment unit and an original-image determination unit. The rotation unit is used for randomly rotating each fundus image in the preset data set by an angle. The correction adjustment unit is used for adjusting the brightness of each rotated fundus image using gamma correction. The original-image determination unit is used for taking each brightness-adjusted fundus image together with the fundus images in the preset data set as the original fundus images.
In an embodiment, as shown in Fig. 17, the training unit 303 includes a sample acquisition unit 3031, a down-sampling unit 3032, an up-sampling unit 3033, an output unit 3034, a calculation unit 3035, a judgment unit 3036, an update unit 3037 and a model determination unit 3038. The sample acquisition unit 3031 is used for randomly selecting a preset proportion of fundus image patches from the original fundus image patches as training samples. The down-sampling unit 3032 is used for inputting the acquired training samples into the multiple down-sampling cycle units in the preset symmetrical fully convolutional neural network for processing, wherein each down-sampling cycle unit corresponds to a hidden layer in the preset symmetrical fully convolutional neural network; each down-sampling cycle unit performs convolution on the feature map input to this layer, performs convolution on the feature maps of all outputs before this layer, and performs pooling on all the feature maps obtained after convolution. The up-sampling unit 3033 is used for inputting the feature maps processed by the multiple down-sampling cycle units into the up-sampling cycle units, which are symmetrical to the down-sampling cycle units, for processing, wherein each up-sampling cycle unit corresponds to a hidden layer in the preset symmetrical fully convolutional neural network; each up-sampling cycle unit performs up-sampling on the feature map input to this layer, performs convolution on the up-sampled feature map, and performs convolution on the feature maps of all outputs before this layer. The output unit 3034 is used for inputting the feature maps processed by the multiple up-sampling cycle units into the output layer in the preset symmetrical fully convolutional neural network for processing, to obtain the predicted value corresponding to each pixel in the training samples. The calculation unit 3035 is used for calculating the error according to the predicted value corresponding to each pixel in the training samples and the true label of each pixel of the training samples. The judgment unit 3036 is used for judging whether the error has reached a minimum. The update unit 3037 is used for, if the error has not reached a minimum, updating the network parameters in the preset symmetrical fully convolutional neural network by a gradient-descent algorithm, taking the symmetrical fully convolutional network with the updated network parameters as the preset symmetrical fully convolutional neural network, and then triggering the sample acquisition unit 3031. The model determination unit 3038 is used for, if the error has reached a minimum, taking the symmetrical fully convolutional neural network model obtained by training as the preset symmetrical fully convolutional neural network model.
The down-sampling unit 3032 includes multiple down-sampling cycle units. One down-sampling cycle unit is used for processing the acquired training samples input into the first down-sampling cycle unit in the preset symmetrical fully convolutional neural network. The down-sampling cycle unit includes: a down-sampling first convolution unit, a first superposition unit, a first standardization unit, a first activation unit, a down-sampling second convolution unit and a pooling unit. The down-sampling first convolution unit is used for inputting the acquired training samples into the down-sampling first convolutional layer in the down-sampling cycle unit for convolution processing. The first superposition unit is used for acquiring the feature maps of all outputs before this layer and superimposing the acquired feature maps. The first standardization unit is used for standardizing the superimposed feature maps. The first activation unit is used for activating the standardized feature maps using an activation function. The down-sampling second convolution unit is used for inputting the activated feature maps into the down-sampling second convolutional layer in the down-sampling cycle unit for convolution processing. The pooling unit is used for inputting the feature map processed by the down-sampling first convolutional layer and the feature map processed by the down-sampling second convolutional layer into the down-sampling pooling layer in the down-sampling cycle unit for pooling processing, thereby completing the processing of one down-sampling cycle unit.
The up-sampling unit 3033 includes multiple up-sampling cycle units. One up-sampling cycle unit is used for processing the feature maps, processed by the multiple down-sampling cycle units, that are input into it. The up-sampling cycle unit includes: an up-sampling processing unit, an up-sampling first convolution unit, a second superposition unit, a second standardization unit, a second activation unit and an up-sampling second convolution unit. The up-sampling processing unit is used for up-sampling the acquired feature maps. The up-sampling first convolution unit is used for inputting the up-sampled feature maps into the up-sampling first convolutional layer in the up-sampling cycle unit for convolution processing. The second superposition unit is used for acquiring the feature maps of all outputs before this layer and superimposing the acquired feature maps. The second standardization unit is used for standardizing the superimposed feature maps. The second activation unit is used for activating the standardized feature maps using an activation function. The up-sampling second convolution unit is used for inputting the activated feature maps into the up-sampling second convolutional layer in the up-sampling cycle unit for convolution processing, thereby completing the processing of the first up-sampling cycle unit.
Fig. 18 is a schematic block diagram of the fundus image vessel segmentation device provided by an embodiment of the present invention. As shown in Fig. 18, the device includes units for executing the above fundus image vessel segmentation method. Specifically, as shown in Fig. 18, the device 40 includes a block processing unit 401, a whitening processing unit 402, a model using unit 403 and a splicing unit 404.
The block processing unit 401 is used for performing block processing on the target fundus image.
The whitening processing unit 402 is used for performing whitening processing on the block-processed target fundus image to obtain target fundus image patches.
The model using unit 403 is used for inputting the target fundus image patches into the constructed preset symmetrical fully convolutional neural network model to obtain the fundus-vessel segmentation result for each pixel of the target fundus image patches.
The splicing unit 404 is used for re-splicing the fundus-vessel segmentation results for the pixels of the target fundus image patches to obtain the fundus-vessel segmentation result of the target fundus image.
The above devices may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in Fig. 19.
Fig. 19 is a schematic block diagram of a computer device provided by an embodiment of the present invention. The device is a terminal or similar device, such as a mobile terminal, a PC terminal or an iPad. The device 50 includes a processor 502, and a memory and a network interface 503 connected through a system bus 501, wherein the memory may include a non-volatile storage medium 504 and an internal memory 505.
The non-volatile storage medium 504 can store an operating system 5041 and a computer program 5042. When the computer program 5042 stored in the non-volatile storage medium is executed by the processor 502, the symmetrical fully convolutional neural network model construction method described above can be implemented. The processor 502 is used for providing computing and control capability and supports the operation of the whole device. The internal memory 505 provides an environment for the running of the computer program in the non-volatile storage medium; when the computer program is executed by the processor 502, the processor 502 can be made to execute the symmetrical fully convolutional neural network model construction method described above. The network interface 503 is used for network communication. Those skilled in the art will understand that the structure shown in Fig. 19 is only a block diagram of part of the structure relevant to the solution of the present invention and does not constitute a limitation on the device to which the solution of the present invention is applied; a specific device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
The processor 502 is used for running the computer program stored in the memory, to implement any embodiment of the symmetrical fully convolutional neural network model construction method described above.
Another embodiment of the present invention further provides a schematic block diagram of a computer device. In this embodiment, the device is a terminal or similar device, such as a mobile terminal, a PC terminal or an iPad. Referring to Fig. 19, this device includes the same structure as the computer device shown in Fig. 19. The difference between this computer device and the computer device shown in Fig. 19 is that, when the computer program stored in the non-volatile storage medium in this computer device is executed by the processor 502, any embodiment of the fundus image vessel segmentation method described above can be implemented.
It should be appreciated that in embodiments of the present invention, alleged processor 502 can be central processing unit (Central
Processing Unit, CPU), which can also be other general processors, digital signal processor (Digital
Signal Processor, DSP), specific integrated circuit (application program lication Specific Integrated
Circuit, ASIC), ready-made programmable gate array (Field-Programmable Gate Array, FPGA) or other can
Programmed logic device, discrete gate or transistor logic etc..General processor can be microprocessor or the processor
It is also possible to any conventional processor etc..
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program. The computer program can be stored in a storage medium, and the storage medium can be a computer-readable storage medium. The computer program is executed by at least one processor in the computer system to realize the process steps of the embodiments of the above methods.
Therefore, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program which, when executed by a processor, realizes any embodiment of the symmetrical fully convolutional neural network model construction method described above.
Another embodiment of the present invention further provides a storage medium. The storage medium may be a computer-readable storage medium; the computer-readable storage medium stores a computer program which, when executed by a processor, realizes any embodiment of the fundus image vessel segmentation method described above.
The storage medium may be any of various computer-readable storage media that can store program code, such as a USB flash disk, a mobile hard disk, a Read-Only Memory (ROM), a magnetic disk or an optical disc.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units is only a logical functional division, and other divisions are possible in actual implementation. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus, device, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here. The foregoing is only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A symmetric fully convolutional neural network model construction method, the method comprising:
performing blocking processing on an original fundus image;
performing whitening processing on the blocked original fundus image to obtain original fundus image blocks;
inputting the original fundus image blocks into a preset symmetric fully convolutional neural network for training, to obtain a preset symmetric fully convolutional neural network model, wherein each hidden layer in the preset symmetric fully convolutional neural network model processes both the feature maps input to that layer and the feature maps output by all preceding layers, so that the input is an original fundus image block and the output is a fundus vessel segmentation result for each pixel of the original fundus image block.
2. The method according to claim 1, wherein inputting the original fundus image blocks into the preset symmetric fully convolutional neural network for training, to obtain the preset symmetric fully convolutional neural network model, comprises:
randomly selecting a preset proportion of fundus image blocks from the original fundus image blocks as training samples;
inputting the selected training samples into a plurality of down-sampling cycle units in the preset symmetric fully convolutional neural network for processing, wherein each down-sampling cycle unit corresponds to one hidden layer of the network: each down-sampling cycle unit performs convolution on the feature maps input to that layer, performs convolution on the feature maps output by all preceding layers, and applies pooling to all convolved feature maps;
inputting the feature maps processed by the plurality of down-sampling cycle units into up-sampling cycle units symmetric to the down-sampling cycle units for processing, wherein each up-sampling cycle unit corresponds to one hidden layer of the network: each up-sampling cycle unit up-samples the feature maps input to that layer, performs convolution on the up-sampled feature maps, and performs convolution on the feature maps output by all preceding layers;
inputting the feature maps processed by the plurality of up-sampling cycle units into the output layer of the preset symmetric fully convolutional neural network for processing, to obtain a predicted value for each pixel of the training samples;
computing an error from the predicted value of each pixel of the training samples and the true label of each pixel of the training samples;
judging whether the preset number of training iterations has been reached;
if the preset number of training iterations has not been reached, updating the network parameters of the preset symmetric fully convolutional neural network by a gradient descent algorithm, taking the symmetric fully convolutional network with updated parameters as the preset symmetric fully convolutional neural network, and returning to the step of randomly selecting a preset proportion of fundus image blocks from the original fundus image blocks as training samples;
if the preset number of training iterations has been reached, taking the trained symmetric fully convolutional neural network as the preset symmetric fully convolutional neural network model.
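The training loop of claim 2 can be sketched as follows. This is a hedged illustration only: a per-pixel logistic model stands in for the symmetric fully convolutional network, and the function name, learning rate, and iteration count are illustrative assumptions, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_patch_classifier(patches, labels, n_iters=200, lr=0.5):
    """Sketch of the claim-2 loop: sample a proportion of image blocks as
    training samples, predict a value for every pixel, measure the error
    against the true per-pixel labels, and update the parameters by
    gradient descent until a preset iteration count is reached.
    (A per-pixel logistic model stands in for the network.)"""
    n_channels = patches.shape[-1]
    w = np.zeros(n_channels)
    for _ in range(n_iters):
        # randomly select a preset proportion of blocks as training samples
        idx = rng.choice(len(patches), size=max(1, len(patches) // 2), replace=False)
        x = patches[idx].reshape(-1, n_channels)   # per-pixel features
        y = labels[idx].reshape(-1)                # per-pixel vessel labels
        p = 1.0 / (1.0 + np.exp(-(x @ w)))         # per-pixel predicted value
        grad = x.T @ (p - y) / len(y)              # cross-entropy gradient
        w -= lr * grad                             # gradient descent update
    return w
```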
3. The method according to claim 2, wherein inputting the selected training samples into one down-sampling cycle unit of the preset symmetric fully convolutional neural network for processing comprises:
inputting the selected training samples into a first down-sampling convolutional layer of the down-sampling cycle unit for convolution;
obtaining the feature maps output by all preceding layers, and superimposing the obtained feature maps;
standardizing the superimposed feature maps;
activating the standardized feature maps with an activation function;
inputting the activated feature maps into a second down-sampling convolutional layer of the down-sampling cycle unit for convolution;
inputting the feature maps processed by the first down-sampling convolutional layer and the feature maps processed by the second down-sampling convolutional layer into a down-sampling pooling layer of the down-sampling cycle unit for pooling, thereby completing the processing of one down-sampling cycle unit.
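The down-sampling cycle unit of claim 3 can be sketched in numpy as below. This is an assumption-laden illustration: a 1x1 channel-mixing matrix with fresh random weights stands in for a learned convolution, and the output channel count is arbitrary; only the ordering of steps (convolve, superimpose earlier outputs, standardize, activate, convolve again, pool both convolution outputs) follows the claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(x, out_ch):
    # stand-in for a learned convolution: 1x1 channel mixing (H x W x Cin -> H x W x out_ch)
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return x @ w

def standardize(x):
    # "standardize the superimposed feature maps" (batch-norm style, per channel)
    return (x - x.mean(axis=(0, 1), keepdims=True)) / (x.std(axis=(0, 1), keepdims=True) + 1e-8)

def max_pool_2x2(x):
    h, w_, c = x.shape
    return x[: h // 2 * 2, : w_ // 2 * 2].reshape(h // 2, 2, w_ // 2, 2, c).max(axis=(1, 3))

def down_sampling_cycle_unit(x, prev_maps, out_ch=8):
    f1 = conv(x, out_ch)                                    # first down-sampling convolution
    dense = np.concatenate([f1] + prev_maps, axis=-1)       # superimpose all earlier outputs
    f2 = conv(np.maximum(standardize(dense), 0.0), out_ch)  # standardize, activate, second convolution
    return max_pool_2x2(np.concatenate([f1, f2], axis=-1))  # pool both convolution outputs
```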
4. The method according to claim 2, wherein inputting the feature maps processed by the plurality of down-sampling cycle units into one up-sampling cycle unit for processing comprises:
up-sampling the obtained feature maps;
inputting the up-sampled feature maps into a first up-sampling convolutional layer of the up-sampling cycle unit for convolution;
obtaining the feature maps output by all preceding layers, and superimposing the obtained feature maps;
standardizing the superimposed feature maps;
activating the standardized feature maps with an activation function;
inputting the activated feature maps into a second up-sampling convolutional layer of the up-sampling cycle unit for convolution, thereby completing the processing of one up-sampling cycle unit.
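The symmetric up-sampling cycle unit of claim 4 can be sketched the same way. Again a hedged illustration: nearest-neighbour up-sampling and 1x1 random channel mixing stand in for the claimed up-sampling and learned convolutions; only the step ordering follows the claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(x, out_ch):
    # stand-in for a learned convolution: 1x1 channel mixing
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return x @ w

def standardize(x):
    return (x - x.mean(axis=(0, 1), keepdims=True)) / (x.std(axis=(0, 1), keepdims=True) + 1e-8)

def up_sampling_cycle_unit(x, prev_maps, out_ch=8):
    up = x.repeat(2, axis=0).repeat(2, axis=1)              # nearest-neighbour up-sampling
    f1 = conv(up, out_ch)                                   # first up-sampling convolution
    dense = np.concatenate([f1] + prev_maps, axis=-1)       # superimpose all earlier outputs
    f2 = conv(np.maximum(standardize(dense), 0.0), out_ch)  # standardize, activate, second convolution
    return f2
```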
5. The method according to claim 1, wherein performing whitening processing on the blocked original fundus image to obtain original fundus image blocks comprises:
computing the pixel mean and variance of each channel of the blocked original fundus image;
subtracting from each pixel value of each channel of the blocked original fundus image the pixel mean of that channel, and dividing by the standard deviation of that channel, to obtain the original fundus image blocks.
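The whitening of claim 5 is a per-channel standardization and can be written directly; the small epsilon is an added safeguard against a flat channel, not part of the claim.

```python
import numpy as np

def whiten_block(img):
    """Per-channel whitening of a fundus image block (H x W x C): subtract
    each channel's pixel mean and divide by its standard deviation."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + 1e-8)  # epsilon guards a constant channel
```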
6. The method according to claim 1, wherein before performing blocking processing on the original fundus image, the method further comprises:
obtaining fundus images from a preset data set, and performing data augmentation on the obtained fundus images to obtain the original fundus image.
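Claim 6 does not fix which augmentation operations are used; flips and 90-degree rotations are a common choice for fundus images and are shown here purely as an illustrative assumption.

```python
import numpy as np

def augment(img):
    # simple geometric augmentation: identity, horizontal/vertical flips,
    # and the three non-trivial 90-degree rotations (illustrative choices)
    return [img,
            np.fliplr(img), np.flipud(img),
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3)]
```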
7. A fundus image vessel segmentation method, the method comprising:
performing blocking processing on a target fundus image;
performing whitening processing on the blocked target fundus image to obtain target fundus image blocks;
inputting the target fundus image blocks into the preset symmetric fully convolutional neural network model constructed by the method according to any one of claims 1 to 6, to obtain the fundus vessel segmentation result of each pixel of the target fundus image blocks;
re-stitching the fundus vessel segmentation results of the pixels of the target fundus image blocks to obtain the fundus vessel segmentation result of the target fundus image.
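The blocking and re-stitching steps of claim 7 can be sketched as a simple non-overlapping split and reassembly; the block size and the divisibility assumption are illustrative, and the claim does not rule out overlapping blocks.

```python
import numpy as np

def split_into_blocks(img, bs):
    # non-overlapping blocking; image size assumed divisible by bs
    h, w = img.shape[:2]
    return [img[i:i + bs, j:j + bs]
            for i in range(0, h, bs) for j in range(0, w, bs)]

def stitch_blocks(blocks, h, w, bs):
    # reassemble per-block segmentation results into the full-image result
    out = np.zeros((h, w), dtype=blocks[0].dtype)
    k = 0
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            out[i:i + bs, j:j + bs] = blocks[k]
            k += 1
    return out
```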
8. A device, comprising units for executing the method according to any one of claims 1 to 6, or comprising units for executing the method according to claim 7.
9. A computer device, comprising a memory and a processor connected with the memory;
the memory is configured to store a computer program; the processor is configured to run the computer program stored in the memory, to execute the method according to any one of claims 1 to 6, or to execute the method according to claim 7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 6, or executes the method according to claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910009415.8A CN109816666B (en) | 2019-01-04 | 2019-01-04 | Symmetrical full convolution neural network model construction method, fundus image blood vessel segmentation device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109816666A true CN109816666A (en) | 2019-05-28 |
CN109816666B CN109816666B (en) | 2023-06-02 |
Family
ID=66604042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910009415.8A Active CN109816666B (en) | 2019-01-04 | 2019-01-04 | Symmetrical full convolution neural network model construction method, fundus image blood vessel segmentation device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109816666B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110399833A (en) * | 2019-07-25 | 2019-11-01 | 上海鹰瞳医疗科技有限公司 | Personal identification method, modeling method and equipment |
CN110796161A (en) * | 2019-09-18 | 2020-02-14 | 平安科技(深圳)有限公司 | Recognition model training method, recognition device, recognition equipment and recognition medium for eye ground characteristics |
CN111161240A (en) * | 2019-12-27 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Blood vessel classification method, computer device and readable storage medium |
CN111489364A (en) * | 2020-04-08 | 2020-08-04 | 重庆邮电大学 | Medical image segmentation method based on lightweight full convolution neural network |
CN113409201A (en) * | 2021-06-01 | 2021-09-17 | 平安科技(深圳)有限公司 | Image enhancement processing method, device, equipment and medium |
CN113763314A (en) * | 2020-06-03 | 2021-12-07 | 通用电气精准医疗有限责任公司 | System and method for image segmentation and classification using depth-reduced convolutional neural networks |
CN114612404A (en) * | 2022-03-04 | 2022-06-10 | 清华大学 | Blood vessel segmentation method, device, storage medium and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520522A (en) * | 2017-12-31 | 2018-09-11 | 南京航空航天大学 | Retinal fundus images dividing method based on the full convolutional neural networks of depth |
Non-Patent Citations (3)
Title |
---|
MAZHAR SHAIKH ET AL.: "Brain Tumor Segmentation Using Dense Fully Convolutional Neural Network", Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries * |
WU Chen et al.: "Retinal vessel image segmentation based on an improved convolutional neural network", Acta Optica Sinica * |
TANG Mingxuan et al.: "Automatic retinal vessel segmentation method based on Dense Connected deep convolutional neural networks", Journal of Chengdu University of Information Technology * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110399833A (en) * | 2019-07-25 | 2019-11-01 | 上海鹰瞳医疗科技有限公司 | Personal identification method, modeling method and equipment |
CN110399833B (en) * | 2019-07-25 | 2023-03-24 | 上海鹰瞳医疗科技有限公司 | Identity recognition method, modeling method and equipment |
CN110796161A (en) * | 2019-09-18 | 2020-02-14 | 平安科技(深圳)有限公司 | Recognition model training method, recognition device, recognition equipment and recognition medium for eye ground characteristics |
CN111161240A (en) * | 2019-12-27 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Blood vessel classification method, computer device and readable storage medium |
CN111161240B (en) * | 2019-12-27 | 2024-03-05 | 上海联影智能医疗科技有限公司 | Blood vessel classification method, apparatus, computer device, and readable storage medium |
CN111489364A (en) * | 2020-04-08 | 2020-08-04 | 重庆邮电大学 | Medical image segmentation method based on lightweight full convolution neural network |
CN111489364B (en) * | 2020-04-08 | 2022-05-03 | 重庆邮电大学 | Medical image segmentation method based on lightweight full convolution neural network |
CN113763314A (en) * | 2020-06-03 | 2021-12-07 | 通用电气精准医疗有限责任公司 | System and method for image segmentation and classification using depth-reduced convolutional neural networks |
CN113409201A (en) * | 2021-06-01 | 2021-09-17 | 平安科技(深圳)有限公司 | Image enhancement processing method, device, equipment and medium |
CN113409201B (en) * | 2021-06-01 | 2024-03-19 | 平安科技(深圳)有限公司 | Image enhancement processing method, device, equipment and medium |
CN114612404A (en) * | 2022-03-04 | 2022-06-10 | 清华大学 | Blood vessel segmentation method, device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109816666B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816666A (en) | Symmetrical full convolutional neural networks model building method, eye fundus image blood vessel segmentation method, apparatus, computer equipment and storage medium | |
JP2018171177A (en) | Fundus image processing device | |
CN110197493A (en) | Eye fundus image blood vessel segmentation method | |
CN112132817B (en) | Retina blood vessel segmentation method for fundus image based on mixed attention mechanism | |
CN108198184B (en) | Method and system for vessel segmentation in contrast images | |
Tian et al. | Multi-path convolutional neural network in fundus segmentation of blood vessels | |
CN110348541A (en) | Optical fundus blood vessel image classification method, device, equipment and storage medium | |
CN107657612A (en) | Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment | |
CN108961279A (en) | Image processing method, device and mobile terminal | |
CN108764342B (en) | Semantic segmentation method for optic discs and optic cups in fundus image | |
CN108986891A (en) | Medical imaging processing method and processing device, electronic equipment and storage medium | |
CN108021916A (en) | Deep learning diabetic retinopathy sorting technique based on notice mechanism | |
CN110348515A (en) | Image classification method, image classification model training method and device | |
WO2020022027A1 (en) | Learning device and learning method | |
CN110349147A (en) | Training method, the lesion recognition methods of fundus flavimaculatus area, device and the equipment of model | |
CN109919915A (en) | Retina fundus image abnormal region detection method and device based on deep learning | |
CN108198185A (en) | Dividing method and device, storage medium, the processor of eyeground lesion image | |
US9480925B2 (en) | Image construction game | |
WO2021058867A1 (en) | Image analysis in pathology | |
Firke et al. | Convolutional neural network for diabetic retinopathy detection | |
CN110059607A (en) | Living body multiple detection method, device, computer equipment and storage medium | |
JP2018114031A (en) | Fundus image processing device | |
CN113362360B (en) | Ultrasonic carotid plaque segmentation method based on fluid velocity field | |
CN110490138A (en) | A kind of data processing method and device, storage medium, electronic equipment | |
CN112101438A (en) | Left and right eye classification method, device, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||