CN110236483B - Method for detecting diabetic retinopathy based on depth residual error network - Google Patents

Method for detecting diabetic retinopathy based on depth residual error network

Info

Publication number
CN110236483B
CN110236483B (application CN201910520291.XA; also published as CN110236483A)
Authority
CN
China
Prior art keywords
layer
image
network
size
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910520291.XA
Other languages
Chinese (zh)
Other versions
CN110236483A (en)
Inventor
颜成钢
朱嘉凯
王兴政
陈子阳
孙垚棋
张继勇
张勇东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910520291.XA priority Critical patent/CN110236483B/en
Publication of CN110236483A publication Critical patent/CN110236483A/en
Application granted granted Critical
Publication of CN110236483B publication Critical patent/CN110236483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A61B 3/12 Apparatus for testing the eyes; instruments for examining the eyes; objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14 Arrangements specially adapted for eye photography
    • G06F 18/214 Pattern recognition; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Pattern recognition; classification techniques
    • G06N 3/045 Computing arrangements based on biological models; neural networks; combinations of networks
    • G06T 7/0012 Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06T 2207/20081 Indexing scheme for image analysis or image enhancement; training; learning
    • G06T 2207/20084 Indexing scheme for image analysis or image enhancement; artificial neural networks [ANN]
    • G06T 2207/30041 Indexing scheme for image analysis or image enhancement; subject of image; eye; retina; ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Evolutionary Biology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting diabetic retinopathy based on a depth residual error network. The method comprises the following steps: Step 1: screening a data set; Step 2: preprocessing the fundus images; Step 3: expanding the data set, namely performing image expansion processing on images in classes with few samples, the specific operations being image mirroring and rotation; Step 4: making data set labels; Step 5: constructing a training set and a test set; Step 6: building a convolutional neural network; Step 7: network training; Step 8: network testing. The invention achieves higher lesion detection accuracy while greatly shortening detection time.

Description

Method for detecting diabetic retinopathy based on depth residual error network
Technical Field
The invention relates to the field of deep learning computer vision, in particular to a method for detecting diabetic retinopathy.
Background Art
Diabetic retinopathy is a common ocular complication of diabetes that often causes vision loss or blindness. According to statistics, patients who have had diabetes for about 10 years develop the disease with a probability of about 50 percent, and for those who have had diabetes for more than 15 years the probability is as high as 80 percent. The more severe the diabetes and the older the patient, the higher the incidence of the disease. The disease is a consequence of diabetic microangiopathy: diabetes damages the retinal capillary walls and, in addition, the blood is in a hypercoagulable state, which easily causes thrombosis and blood stasis and can even lead to vessel rupture.
In 2015, Kaiming He's team proposed the deep residual network (ResNet), which upon its release won the ImageNet competitions in image classification, detection and localization. The residual network is easier to optimize than other deep learning networks and can improve accuracy by adding considerable depth. Its core is a residual learning structure that solves the degradation problem, a side effect of increasing depth, in which the training accuracy reaches a bottleneck as the number of layers of a neural network increases and then begins to fall. With the residual learning structure, network performance, i.e. training and testing accuracy, can be improved simply by increasing the network depth.
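For reference, the residual learning structure can be summarized by the relation H(x) = F(x) + x, where x is the module input, F(x) is the residual mapping fitted by the stacked convolution layers, and H(x) is the module output; the stacked layers therefore only need to learn the residual F(x) = H(x) - x, which is easier to optimize than learning H(x) directly. (This is the standard ResNet formulation, stated here for clarity rather than quoted from the original text.)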
Disclosure of Invention
The invention provides a method for detecting diabetic retinopathy based on a depth residual error network. The method can be used for detecting the degree of the diabetic retinopathy.
The traditional method for detecting the degree of diabetic retinopathy uses image feature extraction: the fundus images of patients are examined, by image processing, for symptoms such as microaneurysms, small bleeding spots, soft white exudates and bleeding spots; the number of each different lesion is counted, and a multilayer perceptron is used for classification to give the detection result.
The invention discloses a method for detecting diabetic retinopathy based on a depth residual error network, characterized in that the photographed fundus images are cropped to a uniform size, a blur-and-subtract operation is then applied to extract detail features, and the simply processed images are used as the input of a convolutional neural network for training. The method specifically comprises the following steps:
Step 1: screening the data set. The data set comes from the test set used in the diabetic retinopathy detection competition held by Kaggle in 2014. In this data set, the sponsor of the competition classifies diabetic retinopathy into 5 categories according to the patient's symptoms: normal, mild, moderate, severe and proliferative lesions. Considering that most fundus images in the competition are blurred and some are poorly focused, a simple screening is performed first, and sample images in the test set that are overexposed, underexposed or poorly focused are deleted.
Step 2: preprocessing of the fundus image, which comprises the following steps (a code sketch of these preprocessing steps is given after step 2-5):
2-1, loading the fundus image and estimating the radius of the eyeball;
(1) estimating the transverse radius of the eyeball: assuming the size of the fundus image is M × N, extract N transverse vectors of the fundus image;
(2) average the corresponding pixel values of the transverse vector, divide the average by 10, and compare the original pixel values with the result;
(3) pixels whose original value is larger than the calculated value are assigned the value 1; the number of values equal to 1 divided by 2 gives the estimate of the transverse eyeball radius;
(4) the longitudinal eyeball radius estimate is calculated by the same method, and the larger of the transverse and longitudinal estimates is selected as the final eyeball radius estimate.
2-2, cropping the original image according to the final eyeball radius estimate;
2-3, blurring the cropped image and subtracting the blurred image from the original image to obtain a simple feature-extraction image of the fundus;
2-4, eliminating the boundary effect caused by the blurring by removing the outer 10% of the fundus circle;
2-5, further cropping the image, where the cropped image is an RGB image of size 256 × 256.
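By way of illustration only, the preprocessing of steps 2-1 to 2-5 can be sketched in Python with OpenCV as follows. This is a sketch under assumptions: the patent does not name a library, the middle row and column are used as the transverse and longitudinal vectors, and the blur width, the weighting constants and the function and variable names are illustrative choices rather than values taken from the patent.

    import cv2
    import numpy as np

    def estimate_radius(img):
        # Step 2-1: estimate the eyeball radius (transverse and longitudinal, keep the larger).
        row = img[img.shape[0] // 2, :, :].sum(axis=1)   # middle transverse vector
        col = img[:, img.shape[1] // 2, :].sum(axis=1)   # middle longitudinal vector
        r_x = (row > row.mean() / 10).sum() / 2          # count values above mean/10, halve
        r_y = (col > col.mean() / 10).sum() / 2
        return max(r_x, r_y)

    def preprocess_fundus(path, target_radius=128, out_size=256):
        img = cv2.imread(path)
        # Step 2-2: rescale/crop the image so the eyeball has the target radius.
        s = target_radius / estimate_radius(img)
        img = cv2.resize(img, None, fx=s, fy=s)
        # Step 2-3: blur, then subtract the blurred image to keep local detail.
        blurred = cv2.GaussianBlur(img, (0, 0), target_radius / 10)
        img = cv2.addWeighted(img, 4, blurred, -4, 128)
        # Step 2-4: remove the outer 10% of the fundus circle with a circular mask.
        mask = np.zeros(img.shape, dtype=np.uint8)
        cv2.circle(mask, (img.shape[1] // 2, img.shape[0] // 2),
                   int(target_radius * 0.9), (1, 1, 1), thickness=-1)
        img = img * mask + 128 * (1 - mask)
        # Step 2-5: final crop/resize to a 256 x 256 RGB image.
        return cv2.resize(img, (out_size, out_size))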
Step 3: expanding the data set. The Kaggle data set contains few samples of patients with severe lesions and proliferative lesions, so, to keep the training samples balanced, image expansion processing is applied to the classes with few samples; the specific operations are image mirroring and rotation.
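A minimal sketch of these expansion operations, mirroring plus rotation (the detailed description later specifies angles of 90, 180 and 270 degrees), might look as follows; the OpenCV usage and the function name are assumptions.

    import cv2

    def expand_image(img):
        # One mirrored copy plus three rotated copies per original image.
        expanded = [cv2.flip(img, 1)]  # horizontal mirror
        for code in (cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_180, cv2.ROTATE_90_COUNTERCLOCKWISE):
            expanded.append(cv2.rotate(img, code))
        return expanded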
Step 4: making data set labels. In this method, mild, moderate and severe lesions are all regarded as non-proliferative lesions, so the images are labelled as follows: normal images are given label number 0, non-proliferative lesion images label number 1, and proliferative lesion images label number 2. During neural network training, the label numbers are one-hot encoded, i.e. 0 is encoded as 001, 1 as 010 and 2 as 100.
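The label mapping and the one-hot encoding of step 4 can be written, for example, as follows; the dictionary and function names are illustrative, and the bit order follows the encoding given above (0 to 001, 1 to 010, 2 to 100).

    import numpy as np

    LABELS = {"normal": 0, "non-proliferative": 1, "proliferative": 2}

    def one_hot(label_id, num_classes=3):
        # 0 -> [0, 0, 1], 1 -> [0, 1, 0], 2 -> [1, 0, 0], matching the encoding in step 4.
        vec = np.zeros(num_classes, dtype=np.float32)
        vec[num_classes - 1 - label_id] = 1.0
        return vec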
Step 5: construction of the training set and test set.
The data set is divided into a training set and a test set using the train_test_split() function in sklearn, where the training set accounts for 80% and the test set accounts for 20% of the data set.
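Step 5 corresponds directly to a call such as the following; the array names and the random_state value are assumptions, with placeholder arrays standing in for the preprocessed images and labels.

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.zeros((7240, 256, 256, 3), dtype=np.float32)   # preprocessed images (placeholder)
    y = np.zeros((7240, 3), dtype=np.float32)             # one-hot labels (placeholder)

    # 80% training set, 20% test set.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)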
Step 6: construction of the convolutional neural network, comprising the following specific steps (a Keras-style sketch of the resulting network is given after step 6-4):
6-1. Construction of the basic residual module: one basic residual module consists of 3 convolution layers, 2 activation function layers, 3 BN (batch normalization) layers and 1 skip connection, followed by a final activation function layer;
6-2. Construction of the residual module with a dimension-raising function: one such residual module consists of 2 convolution layers, 3 activation function layers, 3 BN (batch normalization) layers and 1 skip connection, where the skip connection comprises 1 convolution layer and 1 activation function layer, followed by a final activation function layer;
6-3. Construction of the front end of the convolutional neural network. The front end performs feature extraction on the image using multiple residual modules. The network input size is 256 × 256 × 3, followed by 1 zero-padding layer (padding 3 × 3), one convolution layer (64 filters, kernel size 7 × 7, stride 2 × 2), a BN layer (axis 3, momentum 0.99, epsilon 0.001), one activation function layer (relu) and one maximum pooling layer (pool size 3 × 3, stride 2 × 2, zero padding valid, output size 128 × 64). This is followed by 1 residual module with a dimension-raising function (output size 63 × 63 × 256) and 2 basic residual modules, 1 residual module with a dimension-raising function (output size 32 × 32 × 512) and 3 basic residual modules, 1 residual module with a dimension-raising function (output size 16 × 16 × 1024) and 5 basic residual modules, and 1 residual module with a dimension-raising function (output size 8 × 8 × 2048) and 2 basic residual modules. The front end ends with an activation function layer (relu) and an average pooling layer (pool size 7 × 7, stride 7 × 7, zero padding valid), and the size of the front-end output of the network is 1 × 1 × 2048;
6-4. Construction of the back end of the convolutional neural network. The back end of the network classifies the image using several fully connected layers: first a Flatten layer reduces the dimension of the feature map, followed by 1 fully connected layer (36 nodes, relu activation), 1 Dropout layer (rate 0.25), 1 fully connected layer (26 nodes, relu activation) and 1 fully connected layer (3 nodes, softmax activation); the final output size of the network is 3 × 1.
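As an illustration of steps 6-3 and 6-4, a Keras-style sketch of the network skeleton is given below. The patent does not name a framework (the .h5 weight file in step 8 only suggests Keras), so this is a sketch under that assumption; the four residual stages are marked by a placeholder comment here, and sketches of the two residual modules themselves are given later alongside FIG. 1 and FIG. 2.

    from tensorflow.keras import layers, models

    def build_network(input_shape=(256, 256, 3)):
        inputs = layers.Input(shape=input_shape)
        # Front-end stem (step 6-3).
        x = layers.ZeroPadding2D((3, 3))(inputs)
        x = layers.Conv2D(64, 7, strides=2, padding="valid")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Activation("relu")(x)
        x = layers.MaxPooling2D(3, strides=2, padding="valid")(x)
        # ... residual stages go here: 1 dimension-raising + 2 basic, 1 + 3, 1 + 5, 1 + 2 ...
        x = layers.Activation("relu")(x)
        x = layers.AveragePooling2D(7, strides=7, padding="valid")(x)
        # Back-end classifier (step 6-4).
        x = layers.Flatten()(x)
        x = layers.Dense(36, activation="relu")(x)
        x = layers.Dropout(0.25)(x)
        x = layers.Dense(26, activation="relu")(x)
        outputs = layers.Dense(3, activation="softmax")(x)
        return models.Model(inputs, outputs)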
Step 7: network training. The loss function used by the network is cross entropy, and the gradient optimization algorithm used is stochastic gradient descent with a learning rate of 0.01; the network is trained with the training set, the iteration number of the training is 250, and the number of batch samples is 4.
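Continuing the sketch, step 7 corresponds to a compile-and-fit call along the following lines, treating the 250 iterations as training epochs (an interpretation, since the patent only gives an iteration number); build_network, X_train and y_train are the names introduced in the sketches above.

    from tensorflow.keras.optimizers import SGD

    model = build_network()
    model.compile(optimizer=SGD(learning_rate=0.01),
                  loss="categorical_crossentropy",   # cross-entropy loss
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=250, batch_size=4)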
Step 8: network testing. The trained model is saved with the model.save() function, generating an .h5 model weight file, and the network is tested with the test set.
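Step 8 then reduces to saving and evaluating the model; the file name below is an assumption.

    model.save("dr_resnet50.h5")                      # .h5 model weight file
    loss, acc = model.evaluate(X_test, y_test, batch_size=4)
    print("test accuracy:", acc)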
The invention has the following beneficial effects:
The invention uses the data set from the diabetic retinopathy detection competition held by Kaggle in 2014. After preprocessing the fundus images in the data set, a fully connected classification network is appended to a 50-layer ResNet front end; compared with the traditional detection method, the classification accuracy is greatly improved and the detection time is greatly shortened.
The invention can obtain higher accuracy of detecting pathological changes, and the detection time can be greatly shortened.
After data expansion, 7240 fundus images are used for training and testing, of which 5792 are training images. The loss function used by the network is cross entropy, the gradient optimization algorithm is stochastic gradient descent with a learning rate of 0.01, the network is trained on the training set for 250 iterations with a batch size of 4, and testing on the remaining 1448 images gives a maximum test accuracy of 0.9427.
Drawings
FIG. 1 is a schematic diagram of basic residual modules;
FIG. 2 is a schematic diagram of a residual module with a dimension-raising function;
FIG. 3 is a diagram of image preprocessing steps and effects;
FIG. 4 is a flow chart of a lesion detection algorithm;
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the basic residual module. The module consists of 3 convolution layers, 2 activation function layers, 3 BN (batch normalization) layers and 1 skip connection, followed by a final activation function layer. The specific parameters are as follows: the image is input first; the skip connection on the left side does not process the image, while the main branch applies, in order, a convolution layer (kernel size 1 × 1, stride 1 × 1, padding valid), a BN layer (axis 3, momentum 0.99, epsilon 0.001), an activation function layer (relu), a convolution layer (kernel size 3 × 3, stride 1 × 1, padding same), a BN layer (axis 3, momentum 0.99, epsilon 0.001), an activation function layer (relu), a convolution layer (kernel size 1 × 1, stride 1 × 1, padding valid) and a BN layer (axis 3, momentum 0.99, epsilon 0.001); the output of the main branch is then added element-wise to the module input, and the sum is passed through the final activation function layer (relu) to give the module output.
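A Keras-style sketch of the basic residual module of FIG. 1 might look as follows; the function name and the per-stage filter counts are assumptions (in a ResNet-50-style network the three filter counts are typically f, f and 4f).

    from tensorflow.keras import layers

    def basic_residual_module(x, filters):
        f1, f2, f3 = filters                       # e.g. (64, 64, 256); values are assumptions
        shortcut = x                               # skip connection: input passed through unchanged
        x = layers.Conv2D(f1, 1, strides=1, padding="valid")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Activation("relu")(x)
        x = layers.Conv2D(f2, 3, strides=1, padding="same")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Activation("relu")(x)
        x = layers.Conv2D(f3, 1, strides=1, padding="valid")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Add()([x, shortcut])            # element-wise sum with the module input
        return layers.Activation("relu")(x)        # final activation function layer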
Fig. 2 is a schematic diagram of the residual module with a dimension-raising function. The module consists of 2 convolution layers, 3 activation function layers, 3 BN (batch normalization) layers and 1 skip connection, wherein the skip connection comprises 1 convolution layer and 1 activation function layer, followed by a final activation function layer. The specific parameters are as follows: the image is input first; the main branch applies, in order, a convolution layer (kernel size 1 × 1, stride 1 × 1, padding valid), a BN layer (axis 3, momentum 0.99, epsilon 0.001), an activation function layer (relu), a convolution layer (kernel size 3 × 3, stride 1 × 1, padding same), a BN layer (axis 3, momentum 0.99, epsilon 0.001), an activation function layer (relu), a convolution layer (kernel size 1 × 1, stride 1 × 1, padding valid) and a BN layer (axis 3, momentum 0.99, epsilon 0.001); the skip connection on the right side consists of 1 convolution layer (kernel size 1 × 1, stride 1 × 1, padding valid) and a BN layer (axis 3, momentum 0.99, epsilon 0.001); the outputs of the left and right branches are added element-wise, and the sum is passed through the final activation function layer (relu) to give the module output.
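Following the detailed walk-through of FIG. 2 above (a 1 × 1 convolution plus BN on the skip connection), the dimension-raising module can be sketched analogously. The optional strides argument is included because the stage outputs shrink from 63 to 32 to 16 to 8, which presumably requires a stride of 2 in some modules, although the FIG. 2 text itself lists strides of 1 × 1; this, the function name and the filter counts are assumptions.

    from tensorflow.keras import layers

    def dimension_raising_residual_module(x, filters, strides=1):
        f1, f2, f3 = filters
        # Skip connection (right side): 1x1 convolution + BN so the channel count can change.
        shortcut = layers.Conv2D(f3, 1, strides=strides, padding="valid")(x)
        shortcut = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(shortcut)
        # Main branch (left side): same 1x1 -> 3x3 -> 1x1 pattern as the basic module.
        x = layers.Conv2D(f1, 1, strides=strides, padding="valid")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Activation("relu")(x)
        x = layers.Conv2D(f2, 3, strides=1, padding="same")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Activation("relu")(x)
        x = layers.Conv2D(f3, 1, strides=1, padding="valid")(x)
        x = layers.BatchNormalization(axis=3, momentum=0.99, epsilon=0.001)(x)
        x = layers.Add()([x, shortcut])
        return layers.Activation("relu")(x)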
FIG. 3 is a diagram of the image preprocessing steps and their effect. Taking the data set image 103_right.jpeg as an example, the processing steps specifically include:
(1) inputting a fundus image;
(2) estimating the radius of an eyeball, taking the larger radius as a target radius, and clipping the original image according to the target radius;
(3) blurring the cropped image;
(4) subtracting the blurred image from the original image to obtain a simple characteristic extraction image of the fundus;
(5) eliminating the boundary effect caused by the blurring, removing the outer 10% of the fundus circle;
(6) further cropping the image, where the cropped image is an RGB image of size 256 × 256.
Fig. 4 is a flowchart of a lesion detection algorithm, and the processing steps specifically include:
(1) primarily screening data;
(2) preprocessing a sample image, wherein the preprocessing flow and effect are shown in figure 3;
(3) data expansion, wherein the used operations are mirror image and rotation respectively, and the rotation angles are 90 degrees, 180 degrees and 270 degrees respectively;
(4) dividing the data set into a training set and a test set with a division ratio of 4:1;
(5) constructing the diabetic retinopathy detection network, which mainly uses the basic residual modules and the residual modules with a dimension-raising function shown in FIG. 1 and FIG. 2;
(6) network training, wherein the loss function used by the network is cross entropy, the gradient optimization algorithm used is stochastic gradient descent, the learning rate set by the algorithm is 0.01, the network is trained with the training set, the iteration number of the training is 250, and the number of batch samples is 4;
(7) and (5) testing the network.

Claims (1)

1. A method for detecting diabetic retinopathy based on a depth residual error network is characterized by comprising the following steps:
step 1: screening a data set;
step 2: preprocessing the fundus images;
step 3: expanding the data set, namely performing image expansion processing on images in classes with few samples, the specific operations comprising mirroring and rotating the images;
step 4: making data set labels;
step 5: constructing a training set and a test set;
dividing the data set into a training set and a test set by using the train_test_split() function in sklearn, wherein the training set accounts for 80% of the data set and the test set accounts for 20%;
step 6: building a convolutional neural network;
step 7: network training;
step 8: network testing;
the screening of the data set in step 1 is specifically realized as follows:
the source of the data set is a test set used in a diabetic retinopathy detection competition; overexposed, underexposed and poorly focused sample images in the test set are deleted; in this data set, diabetic retinopathy is classified into 5 categories, which are: normal, mild, moderate, severe and proliferative lesions;
the fundus image preprocessing method in the step 2 comprises the following specific steps:
2-1, loading the fundus image and estimating the radius of the eyeball;
2-2, taking the larger radius as a target radius, and clipping the original image according to the target radius;
2-3, blurring the clipped image, and subtracting the blurred image from the original image to obtain a simple feature extraction image of the fundus;
2-4, eliminating the boundary effect caused by the blurring by removing the outer 10% of the fundus circle;
2-5, further cropping the image, wherein the cropped image is an RGB image with a size of 256 × 256;
the estimation of the eyeball radius in step 2-1 is realized as follows:
(1) estimating the transverse radius of the eyeball: assuming the size of the fundus image is M × N, extract N transverse vectors of the fundus image;
(2) average the corresponding pixel values of the transverse vector, divide the average by 10, and compare the original pixel values with the result;
(3) pixels whose original value is larger than the calculated value are assigned the value 1; the number of values equal to 1 divided by 2 gives the estimate of the transverse eyeball radius;
(4) the longitudinal eyeball radius estimate is calculated by the same method, and the larger of the transverse and longitudinal estimates is selected as the final eyeball radius estimate;
the manufacturing of the data set label in the step 4 is specifically realized as follows:
because supervised learning is adopted in the convolutional neural network, a category corresponding to each image in a data set after data expansion is given, in the method, mild lesions, moderate lesions and severe lesions are regarded as non-proliferative lesions, and therefore labels marked on the images are as follows: the label number corresponding to the normal image is 0, the label number corresponding to the non-proliferative lesion image is 1, and the label number corresponding to the proliferative lesion image is 2; during neural network training, performing one-hot coding on the corresponding label number, namely coding 0 into 001, coding 1 into 010 and coding 2 into 100;
and 6, building the convolutional neural network, specifically comprising the following steps:
6-1, constructing basic residual modules, wherein one basic residual module consists of 3 convolution layers, 2 activation function layers, 3 BN layers and 1 skip connection, followed by a final activation function layer;
6-2, constructing residual modules with a dimension-raising function, wherein one such residual module consists of 2 convolution layers, 3 activation function layers, 3 BN layers and 1 skip connection, the skip connection comprising 1 convolution layer and 1 activation function layer, followed by a final activation function layer;
6-3, constructing the front end of the convolutional neural network, wherein the front end of the convolutional neural network uses a plurality of residual modules to extract features from the image; the input size of the network is 256 × 256 × 3, followed by 1 zero-padding layer, one convolution layer, a BN layer, one activation function layer and one maximum pooling layer; then follow 1 residual module I with a dimension-raising function, 2 basic residual modules, 1 residual module II with a dimension-raising function, 3 basic residual modules, 1 residual module III with a dimension-raising function, 5 basic residual modules, 1 residual module IV with a dimension-raising function and 2 basic residual modules; finally the front end is composed of an activation function layer and an average pooling layer, and the size of the front-end output of the network is 1 × 1 × 2048;
wherein the zero-padding layer parameter is 3 × 3; the number of filters in the convolution layer is 64, the convolution kernel size is 7 × 7, and the step size is 2 × 2; in the BN layer, the axis value is 3, the momentum value is 0.99, and the epsilon value is 0.001; the convolution kernel size in the maximum pooling layer is 3 × 3, the step size is 2 × 2, zero padding is valid, and the output size is 128 × 64; the output size of residual module I is 63 × 256; the output size of residual module II is 32 × 512; the output size of residual module III is 16 × 1024; the output size of residual module IV is 8 × 2048; the convolution kernel size of the average pooling layer is 7 × 7, the step size is 7 × 7, and zero padding is valid;
6-4, constructing the back end of the convolutional neural network, wherein the back end classifies the images by using a plurality of fully connected layers: first a Flatten layer reduces the dimension of the feature map, followed by 1 fully connected layer I, 1 Dropout layer, 1 fully connected layer II and 1 fully connected layer III; the final output size of the network is 3 × 1;
the number of nodes of fully connected layer I is 36, and the activation function is relu; the rate of the Dropout layer is 0.25; the number of nodes of fully connected layer II is 26, and the activation function is relu; the number of nodes of fully connected layer III is 3, and the activation function is softmax;
in step 7, the loss function used by the network is cross entropy, the gradient optimization algorithm used is stochastic gradient descent, the learning rate set by the algorithm is 0.01, the network is trained by using the training set, the iteration number of the network training is 250, and the number of batch samples is 4;
in step 8, the trained model is stored by using the model.save() function to generate an .h5 model weight file, and the network is tested by using the test set.
CN201910520291.XA 2019-06-17 2019-06-17 Method for detecting diabetic retinopathy based on depth residual error network Active CN110236483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910520291.XA CN110236483B (en) 2019-06-17 2019-06-17 Method for detecting diabetic retinopathy based on depth residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910520291.XA CN110236483B (en) 2019-06-17 2019-06-17 Method for detecting diabetic retinopathy based on depth residual error network

Publications (2)

Publication Number Publication Date
CN110236483A CN110236483A (en) 2019-09-17
CN110236483B true CN110236483B (en) 2021-09-28

Family

ID=67887295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910520291.XA Active CN110236483B (en) 2019-06-17 2019-06-17 Method for detecting diabetic retinopathy based on depth residual error network

Country Status (1)

Country Link
CN (1) CN110236483B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110720888A (en) * 2019-10-12 2020-01-24 杭州电子科技大学 Method for predicting macular edema lesion of fundus image based on deep learning
CN110837803B (en) * 2019-11-07 2022-11-29 复旦大学 Diabetic retinopathy grading method based on depth map network
CN111222457B (en) * 2020-01-06 2023-06-16 电子科技大学 Detection method for identifying authenticity of video based on depth separable convolution
CN112052829B (en) * 2020-09-25 2023-06-30 中国直升机设计研究所 Pilot behavior monitoring method based on deep learning
CN112508884A (en) 2020-11-24 2021-03-16 江苏大学 Comprehensive detection device and method for cancerous region
CN112686855B (en) * 2020-12-28 2024-04-16 博奥生物集团有限公司 Information association method of eye image and symptom information
CN113188984B (en) * 2021-04-29 2022-06-24 青岛理工大学 Intelligent monitoring system and method for corrosion state of steel bar in concrete
CN113768460B (en) * 2021-09-10 2023-11-14 北京鹰瞳科技发展股份有限公司 Fundus image analysis system, fundus image analysis method and electronic equipment

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6331059B1 (en) * 2001-01-22 2001-12-18 Kestrel Corporation High resolution, multispectral, wide field of view retinal imager
CN105260717A (en) * 2015-10-16 2016-01-20 浙江工业大学 Eyeball tracking method utilizing iris center positioning based on convolution kernel and circle boundary calculus
CN108095683A (en) * 2016-11-11 2018-06-01 北京羽医甘蓝信息技术有限公司 The method and apparatus of processing eye fundus image based on deep learning
CN106934798A (en) * 2017-02-20 2017-07-07 苏州体素信息科技有限公司 Diabetic retinopathy classification stage division based on deep learning
CN107292887A (en) * 2017-06-20 2017-10-24 电子科技大学 A kind of Segmentation Method of Retinal Blood Vessels based on deep learning adaptive weighting
CN107437096A (en) * 2017-07-28 2017-12-05 北京大学 Image classification method based on the efficient depth residual error network model of parameter
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN108345911A (en) * 2018-04-16 2018-07-31 东北大学 Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics
CN108717869A (en) * 2018-05-03 2018-10-30 中国石油大学(华东) Diabetic retinopathy diagnosis aid system based on convolutional neural networks
CN108876775A (en) * 2018-06-12 2018-11-23 广州图灵人工智能技术有限公司 The rapid detection method of diabetic retinopathy
CN108960257A (en) * 2018-07-06 2018-12-07 东北大学 A kind of diabetic retinopathy grade stage division based on deep learning
CN109447962A (en) * 2018-10-22 2019-03-08 天津工业大学 A kind of eye fundus image hard exudate lesion detection method based on convolutional neural networks
CN109800789A (en) * 2018-12-18 2019-05-24 中国科学院深圳先进技术研究院 Diabetic retinopathy classification method and device based on figure network
CN109691979A (en) * 2019-01-07 2019-04-30 哈尔滨理工大学 A kind of diabetic retina image lesion classification method based on deep learning
CN110046604A (en) * 2019-04-25 2019-07-23 成都信息工程大学 A kind of single lead ECG arrhythmia detection classification method based on residual error network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An alternative reconstruction framework with optimal permission source; Defu Yang; Optics Communications; 2018-06-23; pp. 113-122 *
Automatic screening of diabetic retinopathy based on residual networks (基于残差网络的糖网病自动筛查); 邹北骥 (Zou Beiji); Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报); 2019-04-30; Vol. 31, No. 4; pp. 580-588 *

Also Published As

Publication number Publication date
CN110236483A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN110236483B (en) Method for detecting diabetic retinopathy based on depth residual error network
CN111259982B (en) Attention mechanism-based premature infant retina image classification method and device
CN110837803B (en) Diabetic retinopathy grading method based on depth map network
CN112132817B (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN111476283A (en) Glaucoma fundus image identification method based on transfer learning
CN112016626B (en) Uncertainty-based diabetic retinopathy classification system
CN109726743B (en) Retina OCT image classification method based on three-dimensional convolutional neural network
CN112017185B (en) Focus segmentation method, device and storage medium
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN111862009B (en) Classifying method of fundus OCT (optical coherence tomography) images and computer readable storage medium
KR102313143B1 (en) Diabetic retinopathy detection and severity classification apparatus Based on Deep Learning and method thereof
CN112101424B (en) Method, device and equipment for generating retinopathy identification model
CN112150476A (en) Coronary artery sequence vessel segmentation method based on space-time discriminant feature learning
CN109919915A (en) Retinal fundus images abnormal area detection method and equipment based on deep learning
CN112733929A (en) Improved method for detecting small target and shielded target of Yolo underwater image
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
Sallam et al. Diabetic retinopathy grading using resnet convolutional neural network
CN110610480A (en) MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN112102234B (en) Ear sclerosis focus detection and diagnosis system based on target detection neural network
Nurrahmadayeni et al. Analysis of deep learning methods in diabetic retinopathy disease identification based on retinal fundus image
CN114998300A (en) Corneal ulcer classification method based on multi-scale information fusion network
CN110969117A (en) Fundus image segmentation method based on Attention mechanism and full convolution neural network
Giroti et al. Diabetic Retinopathy Detection & Classification using Efficient Net Model
CN110720888A (en) Method for predicting macular edema lesion of fundus image based on deep learning
Safmi et al. The Augmentation Data of Retina Image for Blood Vessel Segmentation Using U-Net Convolutional Neural Network Method.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant