CN108230322B - Eye ground characteristic detection device based on weak sample mark - Google Patents


Info

Publication number
CN108230322B
CN108230322B (application CN201810080532.9A)
Authority
CN
China
Prior art keywords
fundus
characteristic
feature
module
type
Prior art date
Legal status
Active
Application number
CN201810080532.9A
Other languages
Chinese (zh)
Other versions
CN108230322A (en)
Inventor
吴健
林志文
郭若乾
吴边
陈为
吴福理
吴朝晖
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201810080532.9A
Publication of CN108230322A
Application granted
Publication of CN108230322B
Legal status: Active

Classifications

    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06F 18/2413: Pattern recognition; classification techniques based on distances to training or reference patterns
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 2207/10004: Image acquisition modality; still image, photographic image
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30041: Subject of image; biomedical image processing; eye, retina, ophthalmic
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Abstract

The invention discloses a fundus feature detection device based on weakly labeled samples, comprising: a feature extraction module for extracting the fundus features in an input fundus image and outputting a fundus feature map; a discriminative feature learning module for reducing the dimensionality of the input fundus feature map, calculating the center position of each category of fundus feature, calculating the distance from each fundus feature to the center of its category, and determining each category center by iterating with distance convergence as the goal; a sampling module for calculating the L2 distance from each feature vector corresponding to a background region in the dimension-reduced fundus feature map to the fundus feature category centers, deleting the feature vector if the L2 distance is smaller than a threshold, and outputting a sampled feature map; and a feature detection module for performing feature detection and classification on the sampled feature map and outputting the predicted category probability of each fundus feature and its corresponding position.

Description

Eye ground characteristic detection device based on weak sample mark
Technical Field
The invention belongs to the field of image processing, and in particular relates to a fundus feature detection device based on weakly labeled samples.
Background
Currently, some groups have begun to use deep learning algorithms to address the detection of diabetic retinopathy. Traditional automatic diabetic retinopathy detection methods that do not use deep learning generally perform poorly, while deep learning methods use more data and generalize better. Most deep network architectures are built on the VGG and GoogLeNet models. A deep neural network extracts features by itself, without hand-specified features, and the fully connected layers of the network classify the extracted features, so feature extraction and classification are combined and the training results surpass those of traditional methods. In addition, deep learning methods predict fundus features in less time than traditional methods, and a trained network can quickly judge its input.
Existing fundus feature detection models need a large number of completely labeled sample images in order to learn the labeled features and then predict feature positions and probabilities. If the training samples are labeled incorrectly or incompletely, the network learns wrong or incomplete features, or fails to learn the features at all, so the training effect is poor; existing models are therefore very sensitive to the correctness of sample labeling. At present, diabetic retinopathy is graded by counting, in the fundus image, circular spots 10 to 30 pixels in diameter (class 1 fundus features) and irregular dark-red regions 50 to 100 pixels in size (class 2 fundus features). However, because class 1 and class 2 fundus features are typically numerous and small in area, complete labeling is difficult. As a result, the model often fails to learn the correct features during training, which greatly reduces its performance.
Therefore, a fundus lesion detection device that can learn from weakly labeled samples has become an urgent need in both academia and industry.
Disclosure of Invention
The invention aims to provide a fundus feature detection device based on weakly labeled samples. It adds a discriminative feature learning module and uses that module's result as the basis for sampling the training samples, preventing unlabeled noisy data from participating in training and influencing the result, and thus solves the problem of poor model learning caused by incomplete sample labels.
In order to achieve the purpose, the invention provides the following technical scheme:
A fundus feature detection device based on weakly labeled samples, comprising:
a feature extraction module for extracting the fundus features in an input fundus image and outputting a fundus feature map;
a discriminative feature learning module for reducing the dimensionality of the input fundus feature map, calculating the center position of each category of fundus feature, calculating the distance from each fundus feature to the center of its category, and determining each category center by iterating with distance convergence as the goal;
a sampling module for calculating the Euclidean distance from each feature vector corresponding to a background region in the dimension-reduced fundus feature map to the fundus feature category centers, deleting the feature vector if the Euclidean distance is smaller than a threshold, and outputting a sampled feature map;
and a feature detection module for performing feature detection and classification on the sampled feature map and outputting the predicted category probability of each fundus feature and its corresponding position.
The feature extraction module adopts the VGG16 network model. Specifically, it comprises, in sequence, two convolution layers with kernel size 3 and 64 channels, two with kernel size 3 and 128 channels, three with kernel size 3 and 256 channels, three with kernel size 3 and 512 channels, and three with kernel size 3 and 512 channels. The VGG network model is a commonly used detection backbone and enables the extracted fundus feature map to accurately describe the feature information of the original fundus image.
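For illustration only (this sketch is not part of the patent text), the convolutional stack just described can be written in PyTorch as follows; the ReLU activations and the 2x2 pooling between groups are assumptions carried over from the standard VGG16 design:

    import torch.nn as nn

    def conv_group(in_ch, out_ch, n_convs):
        # n_convs 3x3 convolutions followed by 2x2 max pooling (the pooling
        # is assumed from standard VGG16; the text lists only convolutions)
        layers = []
        for _ in range(n_convs):
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        layers.append(nn.MaxPool2d(kernel_size=2))
        return layers

    # 2x64, 2x128, 3x256, 3x512, 3x512 channels, matching the sequence above
    feature_extractor = nn.Sequential(
        *conv_group(3, 64, 2),
        *conv_group(64, 128, 2),
        *conv_group(128, 256, 3),
        *conv_group(256, 512, 3),
        *conv_group(512, 512, 3),
    )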
The discriminative feature learning module comprises:
a fully connected layer for performing dimension reduction on the input fundus feature map and outputting a dimension-reduced fundus feature map; the dimension reduction decreases the network parameters, reduces the computation of the subsequent center loss function, lowers computational overhead, and improves efficiency;
a fundus feature center determination module for determining, from the network mapping relationship, the position in the original input fundus image that each feature vector in the dimension-reduced fundus feature map corresponds to, then judging from the labeled feature positions of the original sample whether each feature vector corresponds to a feature region or a background region; if it corresponds to a feature region, the feature vector is taken as a discriminative feature; finally, the mean of all discriminative features of each category of fundus feature is calculated and taken as the center position of that category, recorded as the fundus feature mean;
and a center loss calculation module for calculating, from the fundus feature means, the Euclidean distance between each discriminative feature in the dimension-reduced fundus feature map and the fundus feature mean of its category.
The loss function in the center loss calculation module is given by equation (1):

$$L_C = \frac{1}{2}\sum_{i=1}^{m}\left\lVert x_i - c_{y_i}\right\rVert_2^2 \tag{1}$$

where $x_i$ denotes the $i$-th discriminative feature of the sample and $c_{y_i}$ denotes the center feature of the category $y_i$ to which the $i$-th sample belongs. At each iteration, the update amount of each category center is given by equation (2):

$$\Delta c_j = \frac{\sum_{i=1}^{m}\delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{m}\delta(y_i = j)} \tag{2}$$

The center of each category of fundus feature is determined through successive iterations.
The fundus features are divided into two classes, class 1 and class 2: circular spots 10 to 30 pixels in diameter in the fundus image are class 1 fundus features, and irregular dark-red regions 50 to 100 pixels in size are class 2 fundus features. The centers determined in the fundus feature center determination module are accordingly the class 1 fundus feature center and the class 2 fundus feature center.
Specifically, the input of the sampling module is the fundus feature map, the dimension-reduced fundus feature map and the center of each category of fundus feature, and the output is the sampled feature map. In the sampling module the threshold is 10%, i.e., the 10% of background feature vectors closest to a fundus feature center are removed. This threshold removes, to the greatest extent, lesion regions whose labels were missed, so that the subsequent feature detection module does not learn wrong features, while not removing too many correct background features, making the device more robust.
The sampling module thus safeguards the learning of the feature detection module, ensuring that it learns only from reliable feature regions, avoiding confusion between background and feature regions during training, and improving detection accuracy.
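A sketch of the sampling rule under the same assumptions (PyTorch; the function name and the nearest-center reading of the distance are illustrative):

    import torch

    def sample_background(bg_vectors, centers, drop_ratio=0.10):
        # L2 distance of every background feature vector to its nearest
        # fundus feature category center
        dists = torch.cdist(bg_vectors, centers).min(dim=1).values
        # drop the drop_ratio fraction with the smallest distances: they
        # are the most lesion-like and most likely carry missing labels
        n_drop = int(drop_ratio * bg_vectors.size(0))
        order = torch.argsort(dists, descending=True)
        keep = order[: bg_vectors.size(0) - n_drop]
        return bg_vectors[keep]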
The feature detection module comprises, connected in sequence, a convolution layer with kernel size 3 and 1024 channels, a convolution layer with kernel size 1 and 256 channels, a convolution layer with kernel size 3 and 512 channels, a convolution layer with kernel size 1 and 128 channels, a convolution layer with kernel size 3 and 256 channels, and a convolution layer with kernel size 3 and 9 × (4+3) channels.
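The six convolution layers can be sketched as below; the input channel count of 512 (the VGG16 output), the padding, and the activation choices are assumptions:

    import torch.nn as nn

    NUM_ANCHORS, NUM_COORDS, NUM_CLASSES = 9, 4, 3

    feature_detector = nn.Sequential(
        nn.Conv2d(512, 1024, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(1024, 256, kernel_size=1),            nn.ReLU(inplace=True),
        nn.Conv2d(256, 512, kernel_size=3, padding=1),  nn.ReLU(inplace=True),
        nn.Conv2d(512, 128, kernel_size=1),             nn.ReLU(inplace=True),
        nn.Conv2d(128, 256, kernel_size=3, padding=1),  nn.ReLU(inplace=True),
        # final layer: 9 x (4+3) outputs per position, i.e. 4 box
        # coordinates plus 3 class scores for each of the 9 anchors
        nn.Conv2d(256, NUM_ANCHORS * (NUM_COORDS + NUM_CLASSES),
                  kernel_size=3, padding=1),
    )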
The loss function of the feature detection module is given by equation (3):

$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right) \tag{3}$$

where $\alpha$ weights the localization loss $L_{loc}$ against the classification loss $L_{conf}$ and is set to 10, and $N$ is the number of matched feature regions in the training samples.

$L_{loc}(x, l, g)$ is the localization loss of equation (4). Here $x_{ij}^{k} \in \{1, 0\}$ indicates whether the $i$-th prediction box matches the $j$-th ground-truth box for category $k$; $l_i^m$ denotes the offset of the $i$-th prediction box from its default box in the center coordinates (cx, cy), width (w) and height (h), e.g. $l_i^{cx}$ is the difference between the center abscissa of the $i$-th prediction box and that of its default box; $\hat{g}_j^m$ denotes the encoded offset of the $j$-th ground-truth box from the default box, e.g. $\hat{g}_j^{cx}$ is the encoded difference between the center abscissa of the $j$-th ground-truth box and that of the default box; $g_j^{cx}, g_j^{cy}, g_j^{w}, g_j^{h}$ are the center coordinates, width and height of the $j$-th ground-truth box, and $d_i^{cx}, d_i^{cy}, d_i^{w}, d_i^{h}$ those of the $i$-th default box, as shown in equation (4):

$$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k}\, \mathrm{smooth}_{L1}\!\left(l_i^m - \hat{g}_j^m\right) \tag{4}$$

$$\hat{g}_j^{cx} = \frac{g_j^{cx} - d_i^{cx}}{d_i^{w}}, \quad \hat{g}_j^{cy} = \frac{g_j^{cy} - d_i^{cy}}{d_i^{h}}, \quad \hat{g}_j^{w} = \log\frac{g_j^{w}}{d_i^{w}}, \quad \hat{g}_j^{h} = \log\frac{g_j^{h}}{d_i^{h}}$$

$L_{conf}(x, c)$ is the classification loss of equation (5). Here $x_{ij}^{p} \in \{1, 0\}$ indicates whether the $i$-th prediction box matches the $j$-th ground-truth box for category $p$; $c_i^p$ is the predicted probability that the $i$-th region belongs to class $p$, and $\hat{c}_i^p$ is its softmax-normalized form, as shown in equation (5):

$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log\left(\hat{c}_i^{p}\right) - \sum_{i \in Neg} \log\left(\hat{c}_i^{0}\right), \qquad \hat{c}_i^{p} = \frac{\exp(c_i^{p})}{\sum_{p} \exp(c_i^{p})} \tag{5}$$
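A compact sketch of equations (3) to (5), assuming the anchor matching and the offset encoding of ĝ have been performed upstream; all names are illustrative:

    import torch
    import torch.nn.functional as F

    def detection_loss(cls_logits, cls_targets, loc_preds, loc_targets,
                       pos_mask, alpha=10.0):
        # Eq. (5): softmax cross-entropy over all boxes; negatives are
        # assumed to carry the background class index 0 in cls_targets
        conf = F.cross_entropy(cls_logits, cls_targets, reduction="sum")
        # Eq. (4): smooth L1 between predicted offsets l and encoded
        # ground-truth offsets g-hat, over matched (positive) boxes only
        loc = F.smooth_l1_loss(loc_preds[pos_mask], loc_targets[pos_mask],
                               reduction="sum")
        n = pos_mask.sum().clamp(min=1)
        # Eq. (3): weighted sum with alpha = 10 as stated in the text
        return (conf + alpha * loc) / n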
specifically, before the fundus image is input into the detection device, the fundus image is preprocessed, and the specific processing procedures are as follows: the mean of all input fundus images was subtracted from the input fundus image and the variance of all fundus images was processed. By processing in this way, the distribution of the input fundus map can be made to approach the standard normal distribution, which is beneficial to the whole model learning.
Compared with the prior art, the invention has the following beneficial effect:
the device is designed to cope with incomplete sample labeling, so that fundus feature regions in weakly labeled test samples can be accurately identified, with fewer missed feature regions.
Drawings
Fig. 1 is a schematic structural diagram of a fundus feature detection apparatus based on a weak sample mark according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a schematic structural diagram of a fundus feature detection apparatus based on a weak sample mark according to an embodiment. As shown in fig. 1, the present embodiment provides a fundus feature detection apparatus including:
a feature extraction module 101, which extracts the fundus features in the input fundus image and outputs a fundus feature map;
the discriminative feature learning module 102 specifically includes:
the fully connected layer 1021, which performs dimension reduction on the input fundus feature map and outputs a dimension-reduced fundus feature map; the dimension reduction decreases the network parameters, reduces the computation of the subsequent center loss function, lowers computational overhead, and improves efficiency;
the fundus feature center determination module 1022, which determines, from the network mapping relationship, the position in the original input fundus image that each feature vector in the dimension-reduced fundus feature map corresponds to, then judges from the labeled feature positions of the original sample whether each feature vector corresponds to a feature region or a background region; if it corresponds to a feature region, the feature vector is taken as a discriminative feature; finally, the mean of all discriminative features of each category of fundus feature is calculated and taken as the center position of that category, recorded as the fundus feature mean;
the center loss calculation module 1023, which calculates, from the fundus feature means, the Euclidean distance between each discriminative feature in the dimension-reduced fundus feature map and the fundus feature mean of its category;
The loss function in the center loss calculation module is given by equation (1):

$$L_C = \frac{1}{2}\sum_{i=1}^{m}\left\lVert x_i - c_{y_i}\right\rVert_2^2 \tag{1}$$

where $x_i$ denotes the $i$-th discriminative feature of the sample and $c_{y_i}$ denotes the center feature of the category $y_i$ to which the $i$-th sample belongs. At each iteration, the update amount of each category center is given by equation (2):

$$\Delta c_j = \frac{\sum_{i=1}^{m}\delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{m}\delta(y_i = j)} \tag{2}$$

The center of each category of fundus feature is determined through successive iterations.
The sampling module 103 calculates the Euclidean distance from each feature vector corresponding to a background region in the dimension-reduced fundus feature map to the fundus feature category centers, deletes the feature vector if the Euclidean distance is smaller than a threshold, and outputs a sampled feature map.
The feature detection module 104 performs feature detection and classification on the sampled feature map and outputs the predicted category probability of each fundus feature and its corresponding position.
Before the raw data enter the modules, the raw input is preprocessed: the mean of the training data is subtracted and the result is divided by the variance of the training data, which brings the distribution of the training data close to the standard normal distribution.
The training data first enter the feature extraction module 101, which consists of all convolution layers and activation functions of the VGG16 network model. This choice is made because VGG16 is the most commonly used feature extraction model in detection networks and extracts features well. The module outputs a lesion feature map.
The fundus feature map then enters the discriminative feature learning module 102. The module first reduces the dimensionality of the feature map: the fundus feature map passes through a fully connected layer with feature dimension 128, yielding a dimension-reduced fundus feature map whose feature dimension drops from 512 to 128. The position in the original sample input image represented by each feature vector of the dimension-reduced map is determined from the network mapping relationship, and each feature vector is judged to belong to a feature region or a background region according to the labeled positions in the original sample. The mean of all feature vectors of each feature category is then calculated, giving the feature vector mean of each category of fundus feature, which is that category's feature center. Finally, the Euclidean distance between each feature-region vector in the dimension-reduced map and the feature vector mean of its category is calculated; this distance is the loss value of the discriminative feature learning module.
Next, in the sampling module 103, the Euclidean distance from each background-region feature vector in the dimension-reduced fundus feature map to each category's feature vector mean is calculated. All the resulting distances are sorted from small to large; the background feature vectors ranked in the top 10% are considered most likely to be feature regions whose labels were missed, because they are the most similar to the features, so the subsequent feature detection module does not use these regions as training data. The output of the sampling module is the sampled feature map.
Finally, the sampled feature map passes through the feature detection module 104, which predicts with anchors: rectangular regions of fixed size and aspect ratio in the input image. With 3 aspect ratios (1:1, 1:2, 2:1) and 3 fixed sizes (60, 120, 180), 9 anchors of different sizes and shapes are formed. The sampled lesion feature map passes through the sequentially connected convolutions of this module, so that every position of the map produces 9 × (4+3) outputs, containing the coordinate position of each anchor and the probability of belonging to each feature category. The positions whose predicted category probability exceeds 70% are finally taken as the positions of the predicted features.
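For illustration, the anchor set and the final 70% filtering might look like the following; the exact mapping from size and aspect ratio to anchor width and height is an assumption:

    import itertools

    ASPECT_RATIOS = [(1, 1), (1, 2), (2, 1)]   # width : height
    SIZES = [60, 120, 180]                     # fixed sizes in pixels

    def anchors_at(cx, cy):
        # the 9 (cx, cy, w, h) anchors centred at one feature-map position
        boxes = []
        for (rw, rh), s in itertools.product(ASPECT_RATIOS, SIZES):
            norm = (rw * rh) ** 0.5            # keep the anchor area near s^2
            boxes.append((cx, cy, s * rw / norm, s * rh / norm))
        return boxes

    def final_predictions(boxes_with_probs, threshold=0.70):
        # keep only predictions whose top category probability exceeds 70%
        return [(box, probs) for box, probs in boxes_with_probs
                if max(probs) > threshold]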
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (5)

1. A fundus feature detection device based on weakly labeled samples, comprising:
a feature extraction module for extracting the fundus features in an input fundus image and outputting a fundus feature map;
a discriminative feature learning module for reducing the dimensionality of the input fundus feature map, calculating the center position of each category of fundus feature, calculating the distance from each fundus feature to the center of its category, and determining each category center by iterating with distance convergence as the goal;
a sampling module for calculating the L2 distance from each feature vector corresponding to a background region in the dimension-reduced fundus feature map to the fundus feature category centers, deleting the feature vector if the L2 distance is smaller than a threshold, and outputting a sampled feature map;
and a feature detection module for performing feature detection and classification on the sampled feature map and outputting the predicted category probability of each fundus feature and its corresponding position.
2. The device of claim 1, wherein the feature extraction module adopts the VGG16 network model.
3. The fundus feature detection device based on weakly labeled samples according to claim 1, wherein the discriminative feature learning module comprises:
a fully connected layer for performing dimension reduction on the input fundus feature map and outputting a dimension-reduced fundus feature map, the dimension reduction decreasing the network parameters, reducing the computation of the subsequent center loss function, lowering computational overhead, and improving efficiency;
a fundus feature center determination module for determining, from the network mapping relationship, the position in the original input fundus image that each feature vector in the dimension-reduced fundus feature map corresponds to, then judging from the labeled feature positions of the original sample whether each feature vector corresponds to a feature region or a background region, taking the feature vector as a discriminative feature if it corresponds to a feature region, and finally calculating the mean of all discriminative features of each category of fundus feature, taking this mean as the center position of that category and recording it as the fundus feature mean;
and a center loss calculation module for calculating, from the fundus feature means, the L2 distance between each discriminative feature in the dimension-reduced fundus feature map and the fundus feature mean of its category;
wherein the loss function in the center loss calculation module is given by equation (1):

$$L_C = \frac{1}{2}\sum_{i=1}^{m}\left\lVert x_i - c_{y_i}\right\rVert_2^2 \tag{1}$$

where $x_i$ denotes the $i$-th discriminative feature of the sample and $c_{y_i}$ denotes the center feature of the category $y_i$ to which the $i$-th sample belongs, and at each iteration the update amount of each category center is given by equation (2):

$$\Delta c_j = \frac{\sum_{i=1}^{m}\delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{m}\delta(y_i = j)} \tag{2}$$

the center of each category of fundus feature being determined through successive iterations.
4. The fundus feature detection device based on weakly labeled samples according to claim 1, wherein, in the sampling module, the threshold is 10%, i.e., the 10% of background features closest to the fundus feature centers are removed.
5. The fundus feature detection device according to claim 1, wherein the feature detection module comprises, connected in sequence, a convolution layer with kernel size 3 and 1024 channels, a convolution layer with kernel size 1 and 256 channels, a convolution layer with kernel size 3 and 512 channels, a convolution layer with kernel size 1 and 128 channels, a convolution layer with kernel size 3 and 256 channels, and a convolution layer with kernel size 3 and 9 × (4+3) channels.
CN201810080532.9A 2018-01-28 2018-01-28 Eye ground characteristic detection device based on weak sample mark Active CN108230322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810080532.9A CN108230322B (en) 2018-01-28 2018-01-28 Eye ground characteristic detection device based on weak sample mark

Publications (2)

Publication Number Publication Date
CN108230322A CN108230322A (en) 2018-06-29
CN108230322B 2021-11-09

Family

ID=62667843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810080532.9A Active CN108230322B (en) 2018-01-28 2018-01-28 Eye ground characteristic detection device based on weak sample mark

Country Status (1)

Country Link
CN (1) CN108230322B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325942B (en) * 2018-09-07 2022-03-25 电子科技大学 Fundus image structure segmentation method based on full convolution neural network
CN110473192B (en) * 2019-04-10 2021-05-14 腾讯医疗健康(深圳)有限公司 Digestive tract endoscope image recognition model training and recognition method, device and system
CN110309810B (en) * 2019-07-10 2021-08-17 华中科技大学 Pedestrian re-identification method based on batch center similarity

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870838A (en) * 2014-03-05 2014-06-18 南京航空航天大学 Eye fundus image characteristics extraction method for diabetic retinopathy
CN104463215A (en) * 2014-12-10 2015-03-25 东北大学 Tiny aneurysm occurrence risk prediction system based on retina image processing
CN104573712A (en) * 2014-12-31 2015-04-29 浙江大学 Arteriovenous retinal blood vessel classification method based on eye fundus image
CN104573716A (en) * 2014-12-31 2015-04-29 浙江大学 Eye fundus image arteriovenous retinal blood vessel classification method based on breadth first-search algorithm
CN106408564A (en) * 2016-10-10 2017-02-15 北京新皓然软件技术有限责任公司 Depth-learning-based eye-fundus image processing method, device and system
CN106529598A (en) * 2016-11-11 2017-03-22 北京工业大学 Classification method and system based on imbalanced medical image data set
CN106599804A (en) * 2016-11-30 2017-04-26 哈尔滨工业大学 Retina fovea centralis detection method based on multi-feature model
CN106815853A (en) * 2016-12-14 2017-06-09 海纳医信(北京)软件科技有限责任公司 To the dividing method and device of retinal vessel in eye fundus image
CN106920227A (en) * 2016-12-27 2017-07-04 北京工业大学 Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107203758A (en) * 2017-06-06 2017-09-26 哈尔滨理工大学 Diabetes patient's retinal vascular images dividing method
CN107633513A (en) * 2017-09-18 2018-01-26 天津大学 The measure of 3D rendering quality based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793593B * 2013-11-15 2018-02-13 吴一兵 Method for obtaining objective quantitative indicators of brain states
US9836849B2 (en) * 2015-01-28 2017-12-05 University Of Florida Research Foundation, Inc. Method for the autonomous image segmentation of flow systems
US10405739B2 (en) * 2015-10-23 2019-09-10 International Business Machines Corporation Automatically detecting eye type in retinal fundus images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Kumar, S. J. J., et al. "An Improved Medical Decision Support System to Identify the Diabetic Retinopathy Using Fundus Images." Journal of Medical Systems 36 (2012): 3573-3581. *
Yang, Yi. "Research on Retinal Vessel Segmentation and Arteriovenous Classification Methods." China Master's Theses Full-text Database, Information Science and Technology, No. 2, Feb. 15, 2017: I138-4028. *

Also Published As

Publication number Publication date
CN108230322A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN106803247B (en) Microangioma image identification method based on multistage screening convolutional neural network
CN111611905B (en) Visible light and infrared fused target identification method
Marszalek et al. Accurate object localization with shape masks
Pan et al. A robust system to detect and localize texts in natural scene images
TW201926140A (en) Method, electronic device and non-transitory computer readable storage medium for image annotation
US10445602B2 (en) Apparatus and method for recognizing traffic signs
CN108230322B (en) Eye ground characteristic detection device based on weak sample mark
CN109472226B (en) Sleeping behavior detection method based on deep learning
CN105389593A (en) Image object recognition method based on SURF
CN109165658B (en) Strong negative sample underwater target detection method based on fast-RCNN
US8948517B2 (en) Landmark localization via visual search
CN111008576B (en) Pedestrian detection and model training method, device and readable storage medium
Li et al. Group-housed pig detection in video surveillance of overhead views using multi-feature template matching
CN110765814A (en) Blackboard writing behavior recognition method and device and camera
CN108509950A (en) Railway contact line pillar number plate based on probability characteristics Weighted Fusion detects method of identification
CN108921172B (en) Image processing device and method based on support vector machine
Wang et al. License plate recognition system
Mannan et al. Classification of degraded traffic signs using flexible mixture model and transfer learning
CN115861738A (en) Category semantic information guided remote sensing target detection active sampling method
CN111723852A (en) Robust training method for target detection network
CN113269038B (en) Multi-scale-based pedestrian detection method
CN114037886A (en) Image recognition method and device, electronic equipment and readable storage medium
CN102968622B (en) A kind of TV station symbol recognition method and TV station symbol recognition device
Montalvo et al. A novel threshold to identify plant textures in agricultural images by Otsu and Principal Component Analysis
CN112307894A (en) Pedestrian age identification method based on wrinkle features and posture features in community monitoring scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant