CN110223291B - Network method for training fundus lesion point segmentation based on loss function - Google Patents

Network method for training fundus lesion point segmentation based on loss function

Info

Publication number
CN110223291B
CN110223291B (application CN201910534317.6A)
Authority
CN
China
Prior art keywords
function
samples
segmentation
fundus
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910534317.6A
Other languages
Chinese (zh)
Other versions
CN110223291A (en)
Inventor
郭松
李涛
王恺
康宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University filed Critical Nankai University
Priority to CN201910534317.6A
Publication of CN110223291A
Application granted
Publication of CN110223291B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a method for training a fundus lesion point segmentation network based on a loss function. Each negative sample is retained or discarded according to the result of an indicator function: if the indicator function evaluates to 1 the negative sample is retained, otherwise it is discarded. Easily classified samples are discarded with high probability and hard-to-classify samples with low probability, which improves both the discriminative ability and the learning rate of the network; because the hard samples are kept, a large amount of sample-selection time is saved and the network concentrates on learning them. The method alleviates the frequent misclassification and low learning efficiency caused by the class-balanced cross-entropy loss function and segments fundus lesion points efficiently.

Description

Network method for training fundus lesion point segmentation based on loss function
Technical Field
The invention belongs to the technical field of neural networks, and particularly relates to a method for training a fundus lesion point segmentation network based on a loss function.
Background
Deep convolutional neural networks, as deep learning models, have achieved state-of-the-art performance on many computer vision tasks such as image classification, object detection and object segmentation. In recent years, semantic segmentation models based on deep learning have been widely studied and have obtained remarkable results. However, to our knowledge, most existing models focus on objects of normal size, such as animals and vehicles; semantic segmentation of small objects, such as fundus lesion point detection in the medical field, has not been fully studied. Segmenting small lesion points differs from segmenting normal-sized objects. Small-object segmentation always suffers from a class imbalance problem, which is common in medical images; for example, the proportion of lesion-point pixels in a fundus image may be as low as 0.1%. This extreme imbalance makes the loss functions used for large-object segmentation inapplicable to small-object segmentation, since it is easy to classify all pixels as background and obtain a meaningless accuracy of 99.9%. An intuitive solution to the class imbalance problem is to assign different weights to different classes, which we call the class-balanced cross-entropy loss function: pixels of the minority class are given a high weight and pixels of the majority class a low weight. However, this method does not consider weights between samples; all negative samples are treated equally and share the same weight. Consequently, with the class-balanced cross-entropy loss, negative samples tend to be misclassified as positive samples, because the loss of a misclassified background pixel is much smaller than the loss of a misclassified positive sample.
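For illustration, the class-balanced weighting described above can be written as a short sketch (a minimal NumPy illustration of the prior-art class-balanced cross-entropy loss, not of the loss proposed by this invention; the function and variable names are illustrative, not the patent's):

```python
import numpy as np

def class_balanced_cross_entropy(p, y, eps=1e-7):
    """Prior-art class-balanced cross-entropy over a probability map.

    p: predicted lesion probabilities, shape (H, W)
    y: binary ground truth, 1 = lesion pixel, 0 = background pixel
    The rare positive class is up-weighted by the fraction of negatives,
    and vice versa, so both classes contribute comparably to the loss.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pos, neg = (y == 1), (y == 0)
    beta = neg.sum() / float(y.size)          # weight given to positive pixels
    return (-beta * np.sum(np.log(p[pos]))
            - (1.0 - beta) * np.sum(np.log(1.0 - p[neg])))
```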
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
In view of the technical problems in the prior art, the invention provides a method for training a fundus lesion point segmentation network based on a loss function, which alleviates the frequent misclassification and low learning efficiency caused by the class-balanced cross-entropy loss function and segments fundus lesion points efficiently.
In order to solve the technical problems, the invention adopts the technical scheme that: a method for training a fundus lesion point segmentation network based on a loss function comprises the following steps:
step 1: preprocessing the IDRiD fundus data set, and selecting a certain number of fundus images as the training set and the test set respectively; down-sampling each image to a certain resolution; performing data enhancement on the training set; setting the hyper-parameters of the segmentation network;
step 2: initializing weights of each layer of the segmentation network;
step 3: randomly selecting a fundus image from the expanded training set and randomly cropping a region of a certain size from the image;
step 4: processing the fundus image with the processing module of the segmentation network to obtain the input of the loss function layer, namely a segmentation probability map p and the corresponding label y;
step 5: selecting a discard function according to the configuration;
step 6: for each negative sample (background pixel) in the image, determining whether it is retained or discarded according to the result of the indicator function;
step 7: calculating the weight factor β,
β = |Y-| / (|Y+| + |Y-|)
wherein |Y+| is the number of positive samples, i.e. the number of lesion pixels, and |Y-| is the number of retained negative samples, i.e. the number of retained background pixels;
step 8: calculating the forward-propagation loss,
the loss function is as follows:
Loss = -β Σ_{j∈Y+} log(p_j) - (1-β) Σ_{j∈Y-} 1(p_j) log(1-p_j)
wherein β is the number of retained negative samples divided by the sum of the number of retained negative samples and the number of positive samples, calculated with the formula in step 7, and 1(p_j) is the indicator function of step 6 (a minimal sketch of this computation appears after this list of steps);
step 9: calculating the respective gradient information of each sample in the image;
step 10: back-propagating the gradient of the loss layer and updating the weight parameters in the feature processing module of the segmentation network;
step 11: if the network has not converged or the maximum number of iterations has not been reached, returning to step 3;
step 12: after the network training is finished, performing microaneurysm segmentation on the fundus images of the test set and calculating a PR curve from the segmentation results.
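For orientation, the loss-layer forward pass of steps 7 and 8 can be sketched as follows (a minimal NumPy sketch, assuming a binary keep mask for the negative samples has already been drawn with the indicator function of step 6; the function names are illustrative and not part of the patent):

```python
import numpy as np

def lesion_loss_forward(p, y, keep_mask, eps=1e-7):
    """Forward loss of steps 7-8.

    p:         segmentation probability map, shape (H, W)
    y:         binary labels, 1 = lesion pixel, 0 = background pixel
    keep_mask: 1 where a negative sample is retained, 0 where it is discarded
               (drawn with the indicator function of step 6)
    """
    p = np.clip(p, eps, 1.0 - eps)
    pos = (y == 1)
    neg_kept = (y == 0) & (keep_mask == 1)

    n_pos = float(pos.sum())               # |Y+|, lesion pixels
    n_neg = float(neg_kept.sum())          # |Y-|, retained background pixels
    beta = n_neg / (n_pos + n_neg)         # step 7

    # Step 8: positives weighted by beta, retained negatives by (1 - beta);
    # discarded negatives contribute zero loss.
    loss = (-beta * np.sum(np.log(p[pos]))
            - (1.0 - beta) * np.sum(np.log(1.0 - p[neg_kept])))
    return loss, beta
```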
Preferably, in step 1, data enhancement is applied to the training set by rotation and mirroring.
Preferably, in step 5, the discard function maps the activation probability to a discard probability; according to the discard intensity and the computational cost, the loss function provides three discard functions, respectively as follows:
Linear discard function: p_drop(p_j) = 1.0 - p_j
Square discard function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic discard function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
The loss for a discarded negative sample is 0.
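The three discard functions can be sketched as follows; clipping the logarithmic variant to the range [0, 1] is an added assumption for numerical safety and is not stated in the text above:

```python
import numpy as np

def linear_drop(p):
    # Discard probability decreases linearly as the activation probability rises.
    return 1.0 - p

def square_drop(p):
    # Easy negatives (small p) are discarded even more aggressively.
    return (1.0 - p) ** 2

def log_drop(p, eps=1e-7):
    # 1 + log(1 - p); clipped to [0, 1] as an assumption, since the raw value
    # becomes negative once p exceeds 1 - 1/e.
    return np.clip(1.0 + np.log(np.clip(1.0 - p, eps, 1.0)), 0.0, 1.0)
```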
Preferably, in step 6, the indicator function is as follows:
1(p_j) = 1 if r ≥ p_drop(p_j), and 1(p_j) = 0 otherwise,
wherein r is a random number between 0 and 1;
p_drop(p_j) is the discard function; when the indicator equals 1 the negative sample is retained, otherwise the negative sample is discarded.
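Accordingly, the retain-or-discard decision of step 6 may be sketched as below; the comparison direction (a negative sample is kept when the random number is at least the discard probability) is inferred from the stated behaviour that easily classified negatives are discarded with high probability:

```python
import numpy as np

def keep_negative_mask(p, y, p_drop, rng=None):
    """Draw the indicator 1(p_j) for every background pixel.

    p: probability map; y: binary labels (0 = background); p_drop: one of the
    discard functions above. Returns a binary mask; positive samples are
    always marked as kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(p.shape)                     # r ~ U(0, 1), one draw per pixel
    keep = (r >= p_drop(p)).astype(np.uint8)    # keep when r >= p_drop(p_j)
    keep[y == 1] = 1                            # positive samples are never dropped
    return keep
```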
Preferably, in step 9, the gradient information calculation method is as follows:
initializing the gradients of all samples;
for a sample i, its gradient g_i is initialized as p_i - y_i, where p_i is the activation probability and y_i is the label;
for a positive sample, the gradient is updated as g_i = β × g_i;
for a retained negative sample, the gradient is updated as g_i = (1.0 - β) × g_i;
for a discarded negative sample, the gradient is set to 0.
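These rules translate directly into the following sketch of the backward pass (illustrative names; β and the keep mask are assumed to come from the forward computation of steps 6-8):

```python
import numpy as np

def lesion_loss_backward(p, y, keep_mask, beta):
    """Per-sample gradient of the loss with respect to the activations,
    following the rules of step 9."""
    g = p - y.astype(p.dtype)                         # initialisation: g_i = p_i - y_i
    g[y == 1] *= beta                                 # positive samples
    g[(y == 0) & (keep_mask == 1)] *= (1.0 - beta)    # retained negative samples
    g[(y == 0) & (keep_mask == 0)] = 0.0              # discarded negative samples
    return g
```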
Compared with the prior art, the invention has the following beneficial effects: it addresses the class imbalance problem in fundus lesion point segmentation, alleviates to a certain extent the misclassification of lesion points by the segmentation network, accelerates the learning of the network, and can be applied to various deep learning models.
Drawings
FIG. 1 is a segmented network training and testing flow diagram;
FIG. 2 is a schematic diagram of an alternative fundus lesion segmentation network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of three drop functions;
FIG. 4 is a PR curve comparison of fundus microaneurysm segmentation results for different methods;
FIG. 5 is a PR curve comparison of fundus hemorrhage point segmentation results for different methods;
FIG. 6a is an expert annotation view of a fundus microaneurysm;
FIG. 6b is a segmentation map corresponding to the class-balanced cross-entropy loss for fundus microaneurysms;
FIG. 6c is a segmentation map corresponding to a new loss function for a fundus microaneurysm;
FIG. 7a is an expert annotation of a fundus hemorrhage site;
FIG. 7b is a segmentation map corresponding to the class-balanced cross-entropy loss for fundus hemorrhage points;
FIG. 7c is a segmentation map corresponding to the new loss function for fundus hemorrhage points.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Example 1
In related techniques, such as class-balanced cross entropy loss functions, a small proportion of pixels are assigned a high weight and a high proportion of pixels are assigned a low weight. However, this method does not consider the weights between samples, and all negative samples are treated equally, sharing the same weight. Thus, in class-balanced cross-entropy loss, negative samples tend to be misclassified as positive samples. In addition, the training efficiency of such a loss function is not high. Therefore, in order to solve the technical problem, a new loss function is provided in this embodiment to train a lesion point segmentation network, so that the network focuses on learning of hard-to-segment samples, and the learning efficiency and the interference resistance of the network are improved.
Before describing the technical solution of this embodiment, the lesion point segmentation network used in the embodiments of the present invention is first defined. As shown in FIG. 2, an optional lesion point segmentation network is composed of three parts: an input module, a processing module and an output module. The input module pre-processes the training data, for example by scaling and expansion. The processing module comprises the convolution and pooling operations of the neural network. The output module stores and visualizes the lesion point segmentation results.
On the task of segmenting microaneurysms in fundus images, the segmentation network is trained with the proposed loss function as shown in FIG. 1; the specific method comprises the following steps:
step 1: the IDRiD fundus data set was preprocessed, 54 fundus images as a training set, and 27 images as a test set. Since the size of the image is too large to be handled by computer hardware, the resolution of each image is down-sampled to 1440 × 960. And performing data enhancement on the training set by adopting modes of rotation, mirror image and the like. Setting a hyper-parameter of a segmentation network;
step 2: initializing weights of each layer of the segmentation network;
step 3: randomly selecting a fundus image from the expanded training set and randomly cropping an 800 × 800 region from the image;
step 4: the fundus image is processed by the processing module of the segmentation network to obtain the input of the loss function layer, namely a segmentation probability map p and the corresponding label y;
step 5: depending on the configuration, one of the linear, square and logarithmic discard functions is selected; a schematic diagram of the three functions is shown in FIG. 3;
step 6: for each negative sample (background pixel) in the image, it is determined whether it is retained or discarded according to the result of the indicator function, which is as follows:
1(p_j) = 1 if r ≥ p_drop(p_j), and 1(p_j) = 0 otherwise,
wherein r is a random number between 0 and 1 and p_drop(p_j) is the discard function, whose role is to map the activation probability to a discard probability. According to the discard intensity and the computational cost, the loss function provides three discard functions, respectively defined as follows:
Linear discard function: p_drop(p_j) = 1.0 - p_j
Square discard function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic discard function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
The loss for a discarded negative sample is 0;
step 7: the weight factor β is calculated as
β = |Y-| / (|Y+| + |Y-|)
wherein |Y+| is the number of positive samples (microaneurysm pixels) and |Y-| is the number of retained negative samples (background pixels).
step 8: the forward-propagation loss is calculated,
the loss function is as follows:
Loss = -β Σ_{j∈Y+} log(p_j) - (1-β) Σ_{j∈Y-} 1(p_j) log(1-p_j)
wherein β is the number of retained negative samples divided by the sum of the number of retained negative samples and the number of positive samples, calculated with the formula in step 7, and 1(p_j) is the indicator function of step 6.
And step 9: calculating respective gradient information for each sample in the image;
step 9.1: the gradients of all samples are initialized.
For a certain sample i, its gradient giThe initialization is as follows: p is a radical ofi-yiWherein p isiTo activate the probability value, yiIs marked;
step 9.2: for a positive sample, its gradient is updated as: gi=β×gi
Step 9.3: for the remaining negative samples, the gradient is updated as: gi=(1.0-β)×gi
Step 9.4: for the negative samples discarded, the gradient is set to 0.
step 10: the gradient of the loss layer is back-propagated and the weight parameters in the feature processing module of the segmentation network are updated;
step 11: if the network has not converged or the maximum number of iterations has not been reached, return to step 3;
step 12: after the network training is finished, microaneurysm segmentation is performed on the fundus images of the test set and a PR curve is calculated from the segmentation results, as shown in FIG. 4. The microaneurysm segmentation probability map on the test set is shown in FIG. 6c.
Table 1: results of comparing microaneurysm segmentation effects
The results in Table 1 show that, compared with the prior art, the proposed loss function has a significant advantage in microaneurysm segmentation, and all three discard functions outperform the class-balanced cross-entropy loss function.
Example 2
On the task of segmenting hemorrhage points in fundus images, the segmentation network is trained with the proposed loss function as shown in FIG. 1; the specific method comprises the following steps:
step 1: the IDRiD fundus data set was preprocessed, 54 fundus images as a training set, and 27 images as a test set. Since the size of the image is too large to be handled by computer hardware, the resolution of each image is down-sampled to 1440 × 960. And performing data enhancement on the training set by adopting modes of rotation, mirror image and the like. Setting a hyper-parameter of a segmentation network;
step 2: initializing weights of each layer of the segmentation network;
step 3: randomly selecting a fundus image from the expanded training set and randomly cropping an 800 × 800 region from the image;
step 4: the fundus image is processed by the processing module of the segmentation network to obtain the input of the loss function layer, namely a segmentation probability map p and the corresponding label y;
step 5: depending on the configuration, one of the linear, square and logarithmic discard functions is selected; a schematic diagram of the three functions is shown in FIG. 3;
step 6: for each negative sample (background pixel) in the image, it is determined whether it is retained or discarded according to the result of the indicator function, which is as follows:
1(p_j) = 1 if r ≥ p_drop(p_j), and 1(p_j) = 0 otherwise,
wherein r is a random number between 0 and 1 and p_drop(p_j) is the discard function, whose role is to map the activation probability to a discard probability. According to the discard intensity and the computational cost, the loss function provides three discard functions, respectively defined as follows:
Linear discard function: p_drop(p_j) = 1.0 - p_j
Square discard function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic discard function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
The loss for a discarded negative sample is 0;
step 7: the weight factor β is calculated as
β = |Y-| / (|Y+| + |Y-|)
wherein |Y+| is the number of positive samples (hemorrhage point pixels) and |Y-| is the number of retained negative samples (background pixels).
step 8: the forward-propagation loss is calculated,
the loss function is as follows:
Loss = -β Σ_{j∈Y+} log(p_j) - (1-β) Σ_{j∈Y-} 1(p_j) log(1-p_j)
wherein β is the number of retained negative samples divided by the sum of the number of retained negative samples and the number of positive samples, calculated with the formula in step 7, and 1(p_j) is the indicator function of step 6.
And step 9: calculating respective gradient information for each sample in the image;
step 9.1: the gradients of all samples are initialized.
For a certain sample i, its gradient giThe initialization is as follows: p is a radical ofi-yiWherein p isiTo activate the probability value, yiIs marked;
step 9.2: for a positive sample, its gradient is updated as: gi=β×gi
Step 9.3: for the remaining negative samples, the gradient is updated as: gi=(1.0-β)×gi
Step 9.4: for the negative samples discarded, the gradient is set to 0.
step 10: the gradient of the loss layer is back-propagated and the weight parameters in the feature processing module of the segmentation network are updated;
step 11: if the network has not converged or the maximum number of iterations has not been reached, return to step 3;
step 12: after the network training is finished, hemorrhage point segmentation is performed on the fundus images of the test set and a PR curve is calculated from the segmentation results, as shown in FIG. 5. The hemorrhage point segmentation probability map on the test set is shown in FIG. 7c.
Table 2: result of comparing the segmentation effect of bleeding point
The results in Table 2 show that, compared with the prior art, the proposed loss function has obvious advantages in hemorrhage point segmentation, and all three discard functions outperform the class-balanced cross-entropy loss function.
The present invention has been described in detail with reference to the embodiments, but the description is merely illustrative and should not be construed as limiting the scope of the invention, which is defined by the claims. Equivalent changes and modifications made by those skilled in the art on the basis of the teaching of the present invention, as well as equivalent technical solutions designed to achieve the above technical effects, also fall within the scope of the present invention. It should be noted that, for clarity, parts of the description that have no direct connection with the scope of protection but involve components and processes known to those skilled in the art have been omitted.

Claims (4)

1. A method for training a fundus lesion point segmentation network based on a loss function is characterized by comprising the following steps:
step 1: preprocessing the IDRiD fundus data set, and selecting a certain number of fundus images as the training set and the test set respectively; down-sampling each image to a certain resolution; performing data enhancement on the training set; setting the hyper-parameters of the segmentation network;
step 2: initializing weights of each layer of the segmentation network;
step 3: randomly selecting a fundus image from the expanded training set and randomly cropping a region of a certain size from the image;
step 4: processing the fundus image with the processing module of the segmentation network to obtain the input of the loss function layer, namely a segmentation probability map p and the corresponding label y;
step 5: selecting a discard function according to the configuration;
step 6: for each negative sample in the image, determining whether it is retained or discarded according to the result of the indicator function, wherein the indicator function is as follows:
1(p_j) = 1 if r ≥ p_drop(p_j), and 1(p_j) = 0 otherwise,
wherein r is a random number between 0 and 1;
p_drop(p_j) is the discard function; when the indicator equals 1 the negative sample is retained, otherwise the negative sample is discarded;
step 7: calculating the weight factor β,
β = |Y-| / (|Y+| + |Y-|)
wherein |Y+| is the number of positive samples, i.e. the number of lesion pixels, and |Y-| is the number of retained negative samples, i.e. the number of retained background pixels;
step 8: calculating the forward-propagation loss,
the loss function is as follows:
Loss = -β Σ_{j∈Y+} log(p_j) - (1-β) Σ_{j∈Y-} 1(p_j) log(1-p_j)
wherein β is the number of retained negative samples divided by the sum of the number of retained negative samples and the number of positive samples, calculated with the formula in step 7; 1(p_j) is the indicator function described in step 6, and when 1(p_j) equals 1 the negative sample is retained, otherwise the negative sample is discarded;
step 9: calculating the respective gradient information of each sample in the image;
step 10: back-propagating the gradient of the loss layer and updating the weight parameters in the feature processing module of the segmentation network;
step 11: if the network has not converged or the maximum number of iterations has not been reached, returning to step 3;
step 12: after the network training is finished, performing microaneurysm segmentation on the fundus images of the test set and calculating a PR curve from the segmentation results.
2. The method for training the fundus lesion point segmentation network based on the loss function as claimed in claim 1, wherein, in step 1, data enhancement is applied to the training set by rotation and mirroring.
3. The method for training the fundus lesion point segmentation network based on the loss function as claimed in claim 1, wherein, in step 5, the discard function maps the activation probability to a discard probability, and, according to the discard intensity and the computational cost, the loss function provides three discard functions, respectively as follows:
Linear discard function: p_drop(p_j) = 1.0 - p_j
Square discard function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic discard function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
The loss for a discarded negative sample is 0.
4. The method for training the fundus lesion point segmentation network based on the loss function as claimed in claim 1, wherein, in step 9, the gradient information is calculated as follows:
initializing the gradients of all samples;
for a sample i, its gradient g_i is initialized as p_i - y_i, where p_i is the activation probability and y_i is the label;
for a positive sample, the gradient is updated as g_i = β × g_i;
for a retained negative sample, the gradient is updated as g_i = (1.0 - β) × g_i;
for a discarded negative sample, the gradient is set to 0.
CN201910534317.6A 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function Active CN110223291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910534317.6A CN110223291B (en) 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910534317.6A CN110223291B (en) 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function

Publications (2)

Publication Number Publication Date
CN110223291A CN110223291A (en) 2019-09-10
CN110223291B true CN110223291B (en) 2021-03-19

Family

ID=67814254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910534317.6A Active CN110223291B (en) 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function

Country Status (1)

Country Link
CN (1) CN110223291B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260665B (en) * 2020-01-17 2022-01-21 北京达佳互联信息技术有限公司 Image segmentation model training method and device
CN111666997B (en) * 2020-06-01 2023-10-27 安徽紫薇帝星数字科技有限公司 Sample balancing method and target organ segmentation model construction method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679557A (en) * 2017-09-19 2018-02-09 平安科技(深圳)有限公司 Driving model training method, driver's recognition methods, device, equipment and medium
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN109886965A (en) * 2019-04-09 2019-06-14 山东师范大学 The layer of retina dividing method and system that a kind of level set and deep learning combine
KR101953752B1 (en) * 2018-05-31 2019-06-17 주식회사 뷰노 Method for classifying and localizing images using deep neural network and apparatus using the same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268568B (en) * 2014-09-17 2018-03-23 电子科技大学 Activity recognition method based on Independent subspace network
US10748036B2 (en) * 2017-11-21 2020-08-18 Nvidia Corporation Training a neural network to predict superpixels using segmentation-aware affinity loss
CN108345846A (en) * 2018-01-29 2018-07-31 华东师范大学 A kind of Human bodys' response method and identifying system based on convolutional neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679557A (en) * 2017-09-19 2018-02-09 平安科技(深圳)有限公司 Driving model training method, driver's recognition methods, device, equipment and medium
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
KR101953752B1 (en) * 2018-05-31 2019-06-17 주식회사 뷰노 Method for classifying and localizing images using deep neural network and apparatus using the same
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN109886965A (en) * 2019-04-09 2019-06-14 山东师范大学 The layer of retina dividing method and system that a kind of level set and deep learning combine

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deeply supervised neural network with short connection for retinal vessel segmentation; Guosong et al.; arXiv; 2018-03-11; full text *
Fine segmentation of three-dimensional brain tumors based on a cascaded convolutional network; 褚晶辉 et al.; Laser & Optoelectronics Progress; 2019-05-31; Vol. 56, No. 10; full text *

Also Published As

Publication number Publication date
CN110223291A (en) 2019-09-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant