CN110276763B - Retina blood vessel segmentation map generation method based on credibility and deep learning - Google Patents


Info

Publication number
CN110276763B
CN110276763B (application CN201810213111.9A)
Authority
CN
China
Prior art keywords
image
blood vessel
credibility
region
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810213111.9A
Other languages
Chinese (zh)
Other versions
CN110276763A (en
Inventor
邹北骥
何骐
朱承璋
陈瑶
张子谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201810213111.9A priority Critical patent/CN110276763B/en
Publication of CN110276763A publication Critical patent/CN110276763A/en
Application granted granted Critical
Publication of CN110276763B publication Critical patent/CN110276763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/11 Region-based segmentation (under G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06T2207/20081 Training; Learning (under G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30041 Eye; Retina; Ophthalmic (under G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The invention discloses a retinal vessel segmentation map generation method based on credibility and deep learning, which comprises the following steps. Step 1: acquiring training data, and constructing a training set by using a preset credibility model and the training data. Step 2: selecting data from the training set and inputting it into a deep learning model based on a convolutional neural network for training to obtain a classifier. Step 3: acquiring an image to be detected, and performing image preprocessing on it. Step 4: inputting the preprocessed image to be detected into the classifier of step 2 to obtain the five prediction probability values of its pixel points in the five credibility regions. Step 5: generating a retinal vessel segmentation map according to the prediction probability values of the pixel points of the image to be detected in the five credibility regions obtained in step 4. The method can accurately segment both coarse and fine blood vessels.

Description

Retina blood vessel segmentation map generation method based on credibility and deep learning
Technical Field
The invention belongs to the technical field of retinal vessel segmentation in fundus images, and particularly relates to a retinal vessel segmentation image generation method based on credibility and deep learning.
Background
The morphological structure (width, branching and tortuosity, etc.) of retinal blood vessels is often an important biomarker for the diagnosis and assessment of various cardiovascular and ophthalmic diseases such as diabetes, hypertension and choroidal neovascularization. The best current approach to vessel segmentation is to have a trained expert manually calibrate the vessels, but this task is exceptionally tedious and time consuming. This has prompted the development of automatic vessel segmentation methods.
Many automated vessel segmentation methods have been proposed over the last two decades, but none has proven accurate enough to be adopted as a standard by the medical community. These methods fall into two broad categories: supervised and unsupervised. Supervised methods require a set of labeled training samples, consisting of calibrated pixels and their features, to learn a model; to improve segmentation results, researchers have mainly sought new features for training or tried better classifiers for pixel classification. The main advantage of unsupervised methods is that no manually calibrated training data are required; however, published studies show their results to be worse than those of supervised methods.
Deep learning has also been widely applied to retinal vessel segmentation, but the results of existing deep learning methods lose a large amount of information after thresholding: in the softmax (normalized exponential function) output of a deep vessel-segmentation network, the predicted values of fine vessels, coarse vessel edges, and noise often all fall below the threshold. If the threshold is lowered to retain the fine vessels, a large amount of noise detail and lesion information is retained as well, and the overall accuracy drops.
Disclosure of Invention
The invention aims to provide a retinal vessel segmentation map generation method based on credibility and deep learning, which overcomes the problem that the existing method is inaccurate in fine vessel segmentation, and can accurately segment both coarse vessels and fine vessels.
The invention provides a retinal vessel segmentation map generation method based on credibility and deep learning, which comprises the following steps:
step 1: acquiring training data, and constructing a training set by using a preset credibility model and the training data;
the training data comprises training images and gold standard images matched with the training images, and pixel points in the matched training images and the gold standard images are in one-to-one correspondence;
the preset credibility model is constructed by dividing the gold standard image into five credibility areas according to the credibility of the pixel points, wherein the five credibility areas are respectively as follows: a background central region, a background edge region, a thick blood vessel central region, a thick blood vessel edge region and a thin blood vessel region;
the training set comprises training images after image preprocessing and credibility regions to which pixel points in the training images belong, and the image preprocessing at least comprises gray processing;
step 2: selecting data from the training set and inputting the data into a deep learning model based on a convolutional neural network for training to obtain a classifier;
the input data of the classifier is an image, and the output data is five corresponding prediction probability values of pixel points in the image in the five credibility areas;
Step 3: acquiring an image to be detected, and performing image preprocessing on it;
Step 4: inputting the image to be detected after the image preprocessing of step 3 into the classifier of step 2 to obtain the five prediction probability values of its pixel points in the five credibility regions;
Step 5: generating a retinal vessel segmentation map according to the prediction probability values of the pixel points of the image to be detected in the five credibility regions obtained in step 4;
wherein, the execution process of the step 5 is as follows:
Step A: determining the main blood vessel region and the estimated thin-vessel region in the image to be detected according to the prediction probability values of its pixel points in the coarse vessel center region, the coarse vessel edge region and the thin vessel region;
Step B: extracting a blood vessel skeleton from the estimated thin-vessel region of step A to obtain the precise thin-vessel region, and combining this precise thin-vessel region with the main blood vessel region of step A to obtain the retinal vessel segmentation map of the image to be detected.
The method provided by the invention integrates credibility and deep learning to obtain the retinal vessel segmentation map. A credibility model yields the credibility region of each pixel point in the gold standard and training images; a classifier is trained from a deep learning model based on a convolutional neural network; the prediction probability values of each pixel point of the image to be detected in the five credibility regions can then be obtained quickly and accurately; and the retinal vessel segmentation map is constructed by obtaining the boundaries of the coarse and fine vessels through binarization.
The retinal vessel segmentation map obtained by the method shows clear advantages in accuracy, sensitivity and specificity, and mis-segmentation around the optic disc is greatly reduced.
Further preferably, the main vessel region in step A of step 5 is acquired as follows:
firstly, calculating the sum of the prediction probability values of each pixel point in the image to be detected in the center region of the thick blood vessel, the edge region of the thick blood vessel and the thin blood vessel region;
then, sequentially judging whether the sum of the prediction probability values of each pixel point in the central region of the thick blood vessel, the edge region of the thick blood vessel and the thin blood vessel region is greater than a preset threshold value, and if so, locating the pixel point in the main blood vessel region; otherwise, the pixel point is not in the main blood vessel region;
wherein the preset threshold is 0.5.
Denote the five prediction probability values of each pixel point in the five credibility regions by: the prediction probability value $a^{N}$ for the background center region, $a^{PN}$ for the background edge region, $a^{C_1}$ for the coarse vessel center region, $a^{PC_1}$ for the coarse vessel edge region, and $a^{C_2 \cup PC_2}$ for the thin vessel region (the superscript notation is adopted here because the original equation images are not reproduced). The sum of the prediction probability values of each pixel point of the image to be detected in the coarse vessel center region, the coarse vessel edge region and the thin vessel region is then:

$$s = a^{C_1} + a^{PC_1} + a^{C_2 \cup PC_2}$$
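The main-vessel binarization above can be sketched in a few lines of NumPy. The `(5, H, W)` channel layout and the function name are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def main_vessel_mask(probs, threshold=0.5):
    """Binarize the main vessel region from a (5, H, W) probability map.

    Assumed channel order: 0 = background center, 1 = background edge,
    2 = coarse-vessel center, 3 = coarse-vessel edge, 4 = thin vessel.
    A pixel belongs to the main vessel region when the summed probability
    of the three vessel-related channels exceeds the threshold (0.5 in
    the patent)."""
    vessel_sum = probs[2] + probs[3] + probs[4]
    return vessel_sum > threshold
```

For example, a pixel with channel probabilities 0.3 + 0.2 + 0.1 = 0.6 over the three vessel channels would be kept, while a pure-background pixel would not.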
Further preferably, in step A of step 5, the estimated region of the thin vessels is obtained as follows:

First, an enhancement-transformed value of the thin-vessel prediction probability of each pixel point of the image to be detected is calculated:

$$\hat{t} = g(t)$$

where $t$ is the prediction probability value that the pixel lies in the thin vessel region and $\hat{t}$ is its enhancement-transformed value; the explicit form of the enhancement transform $g$ appears only as an equation image in the source.
then, calculating the sum of the prediction probability value of each pixel point in the center region and the edge region of the coarse blood vessel and the enhancement transformation value of the same pixel point;
finally, sequentially judging whether the sum of the prediction probability value of each pixel point in the center region of the thick blood vessel and the edge region of the thick blood vessel and the enhancement transformation value of the same pixel point is greater than a preset threshold value, if so, locating the pixel point in the prediction region of the small blood vessel; otherwise, the pixel point is not in the estimated region of the small blood vessel;
wherein the preset threshold is 0.5.
The sum of the prediction probability values of each pixel point in the coarse vessel center and edge regions and the enhancement-transformed value of the same pixel point is:

$$s' = a^{C_1} + a^{PC_1} + \hat{t}$$

where $a^{C_1}$ and $a^{PC_1}$ are the pixel's prediction probability values for the coarse vessel center and edge regions, and $\hat{t}$ is the enhancement-transformed value of its thin-vessel prediction probability.
the invention determines the estimated areas of the main blood vessel area and the tiny blood vessels by a binarization means, extracts the blood vessel skeleton by the existing blood vessel skeleton extraction method, thereby obtaining the precise area of the tiny blood vessels, which is based on the precise tiny blood vessels included in the blood vessel skeleton, and finally combines the precise area of the tiny blood vessels with the main blood vessel area to obtain the retina blood vessel segmentation map which has the precise edge of the thick blood vessels and the precise position of the tiny blood vessels.
Further preferably, the credibility of a pixel point in the gold standard image is calculated according to the following formula:

$$P(x) = \frac{1}{|N_x|} \sum_{z \in N_x} P(1 \mid z)$$

where P(x) denotes the credibility of pixel point x in the gold standard image, N_x denotes the neighborhood of x, and z ranges over the pixels of neighborhood N_x. P(1|z) and P(0|z) denote the probabilities that the gold standard result c of pixel point z equals 1 and 0 respectively, and the probability P(c|z) is calculated according to:

$$P(1 \mid z) = y_z, \qquad P(0 \mid z) = 1 - y_z$$

where y_z denotes the class label of pixel point z in the gold standard image.
The gold standard refers to the blood vessel binarization result manually calibrated by an expert. Each pixel point in a gold standard image has a corresponding category label: if the pixel point lies in a blood vessel region, its label is 1; otherwise its label is 0. Vessels wider than 1 pixel are coarse vessels and belong to class 1, and vessels exactly 1 pixel wide are thin vessels and belong to class 2.
In the step 1, the credibility of each pixel point in the gold standard image is calculated according to the formula, and then the credibility area to which each pixel point belongs is identified based on the credibility. It should be understood that, since the gold-labeled image and the training image are matched, when the confidence level regions to which the pixels in the gold-labeled image belong are known, the confidence level regions to which the corresponding pixels in the training image belong are also known.
Preferably, the preset credibility model comprises credibility ranges corresponding to the five credibility areas;
wherein, the credibility value corresponding to the background central area is 0;
the confidence level range corresponding to the background edge area is (0, 1/2);
the confidence level range corresponding to the thin blood vessel region is (1/2,2/3 ];
the confidence level range corresponding to the edge region of the coarse blood vessel is (2/3, 1);
the confidence value corresponding to the central region of the coarse vessel is 1.
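Under a neighborhood-average reading of the credibility formula, the computation and the five-way region assignment can be sketched as follows; the region name strings and the 3×3 default neighborhood are assumptions of this sketch:

```python
import numpy as np

def credibility(gold, radius=1):
    """Per-pixel credibility as the mean gold-standard label over an
    odd-sized square neighborhood: P(x) = (1/|N_x|) * sum_z P(1|z)
    with P(1|z) = y_z (a reconstruction of the patent's formula)."""
    k = 2 * radius + 1
    padded = np.pad(gold.astype(float), radius, mode="edge")
    h, w = gold.shape
    out = np.zeros((h, w))
    for dy in range(k):            # box filter via shifted sums
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def credibility_region(p):
    """Map a credibility value to one of the five regions.  The value
    1/2 itself cannot occur for a 3x3 neighborhood (all values are
    multiples of 1/9), so the boundary handling below is harmless."""
    if p == 0.0:
        return "background center"      # credibility exactly 0
    if p < 0.5:
        return "background edge"        # range (0, 1/2)
    if p <= 2.0 / 3.0:
        return "thin vessel"            # range (1/2, 2/3]
    if p < 1.0:
        return "coarse vessel edge"     # range (2/3, 1)
    return "coarse vessel center"       # credibility exactly 1
```

A pixel deep inside a coarse vessel (all neighbors labeled 1) thus gets credibility 1, while a pixel far from any vessel gets 0, matching the region values above.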
More preferably, the classifier is obtained by training with a focus loss function and using the data of the training set as a reference until the focus loss function reaches a minimum value, and the deep learning model based on the convolutional neural network is as follows:
A=f(X,W)
in the formula, A represents the prediction probability value of an output pixel point belonging to a five-class credibility area, X represents input training set data, f represents a function set defined by a deep neural network, and W represents the weight obtained by training the function set f on the training set X;
wherein the weight W is determined according to a focus loss function as follows:
$$L_{fl} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{k} y_i^j \, (1 - a_i^j)^{\gamma} \, \log a_i^j$$

where $L_{fl}$ is the focal (focus) loss, N is the number of pixel points in the input image, k = 5 is the number of credibility region classes, $y_i^j$ indicates whether the i-th pixel point belongs to the j-th of the five credibility regions, $a_i^j$ is the prediction probability value that the i-th pixel point belongs to the j-th credibility region, and γ is the focusing parameter.
The input data of the classifier is an image, and the output data is the five prediction probability values of its pixel points in the five credibility regions. Training the deep learning model on the training set to obtain the classifier proceeds as follows: the training images and the credibility regions of their pixel points serve as input data; the model outputs, for each pixel point of a training image, five prediction probability values over the five credibility regions; the error between these predictions and the actual credibility regions of the input pixel points is computed; the network weights are then updated by stochastic gradient descent along the direction that reduces the loss function, until the loss function reaches a minimum and no longer changes, giving the target weights and completing the training of the classifier.
The value of the focusing parameter γ is obtained by parameter tuning; its range is [0.5, 3], and 2 is preferred in this scheme.
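A minimal NumPy rendering of the focal loss described above, assuming one-hot region labels of shape (N, k); setting γ = 0 recovers plain cross-entropy:

```python
import numpy as np

def focal_loss(y_onehot, probs, gamma=2.0, eps=1e-12):
    """Focal loss over N pixels and k = 5 credibility classes:
    L = -(1/N) * sum_i sum_j y_ij * (1 - a_ij)^gamma * log(a_ij).

    y_onehot : (N, k) one-hot ground-truth region indicators.
    probs    : (N, k) predicted class probabilities.
    """
    a = np.clip(probs, eps, 1.0)   # guard against log(0)
    n = y_onehot.shape[0]
    return -np.sum(y_onehot * (1.0 - a) ** gamma * np.log(a)) / n
```

The `(1 - a)^gamma` factor down-weights well-classified pixels, which is how the loss concentrates training on hard samples and counteracts the class imbalance between background and vessel regions.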
The classifier identifies pixel positions according to the gray values of the pixels in the image: during training it records the gray values and positions of the pixel points in the training images, and uses these as the basis for identifying the positions of pixel points in the image to be detected, so that the corresponding five prediction probability values in the five credibility regions can be computed accurately.
Further preferably, the process of selecting data from the training set in step 2 is as follows:
randomly sampling image squares of 48 × 48 pixels from the training set, each centered on a single pixel point, according to a preset sampling proportion;
the selected image square and the credibility region to which the pixel points in the image square belong are data to be input into the deep learning model;
the preset sampling proportion is the sampling proportion of five types of credibility areas, and the preset sampling proportion is as follows:
|C1|:|PC1|:|C2∪PC2|:|PN|:|N|=3:2:1:2:3
in the formula, C1Indicates the central region of the coarse vessel, PC1Indicates the area of the edge of the coarse vessel, C2∪PC2Represents a thin blood vessel region, PN represents a background edge region, and N represents a background center region.
The collected 48 × 48 pixel image square at least includes a single central pixel point, and also includes other pixel points, and the reliability regions of the other pixel points may be different from the reliability region to which the central pixel point belongs.
Sampling is performed according to the sampling ratio, for example, 11 image blocks are acquired in each round, and a total of 1000 rounds are acquired, so that 110,000 48 × 48 image blocks are obtained.
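The 3:2:1:2:3 stratified sampling can be sketched as follows; the dictionary keys and the `region_coords` input format are hypothetical conveniences of this sketch:

```python
import numpy as np

# Ratio |C1| : |PC1| : |C2 u PC2| : |PN| : |N| = 3:2:1:2:3,
# i.e. 11 patch centers per sampling round.
RATIO = {"C1": 3, "PC1": 2, "C2uPC2": 1, "PN": 2, "N": 3}

def sample_patch_centres(region_coords, rounds=1000, rng=None):
    """Draw patch-center coordinates per round according to the preset
    sampling ratio.  `region_coords` maps a region name to an (M, 2)
    array of candidate (row, col) center pixels."""
    rng = np.random.default_rng(rng)
    centres = []
    for _ in range(rounds):
        for region, count in RATIO.items():
            coords = region_coords[region]
            idx = rng.integers(0, len(coords), size=count)
            centres.extend((region, tuple(coords[i])) for i in idx)
    return centres
```

With 1000 rounds of 11 centers each, this yields the 110,000 patch centers mentioned in the text.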
Further preferably, the step 4 is performed as follows:
firstly, the image to be detected after image preprocessing is divided into image squares of 48 × 48 pixels with a stride of 5;
then, inputting all image squares with the size of 48 × 48 pixels of the image to be detected into the classifier in the step 2 to obtain five prediction probability values of each pixel point in each image square in the five types of credibility areas.
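The 48 × 48, stride-5 tiling of step 4 amounts to enumerating window origins (function name is this sketch's own):

```python
def patch_grid(height, width, patch=48, stride=5):
    """Top-left (row, col) coordinates of all patch x patch windows
    taken with the given stride over a height x width image."""
    tops = range(0, height - patch + 1, stride)
    lefts = range(0, width - patch + 1, stride)
    return [(t, l) for t in tops for l in lefts]
```

Each coordinate pair identifies one square to feed to the classifier; with stride 5 < 48 the windows overlap heavily, which is what makes the overlap-tiling averaging below possible.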
Further preferably, the method further comprises the steps of performing precision processing on the five prediction probability values of the pixel points obtained in the step 4 by using an overlapping tiling strategy, and then executing the step 5;
the overlapping tiling strategy is:
if the pixel points at the same position in the image to be detected are located in different image squares, calculating the average value of the prediction probability values of the pixel points at the same position in the same type of credibility area in the different image squares, and taking the average value as the prediction probability value of the pixel points in the corresponding type of credibility area;
if the pixel point at the same position in the image to be detected is located in one image square, the five prediction probability values of the pixel point in the five types of credibility areas are kept unchanged.
Because an image square comprises multiple pixel points, the same pixel point can lie in several different image squares, i.e., the same pixel point corresponds to several prediction probability values for the same credibility class; averaging these prediction probability values improves the precision and accuracy of the data.
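The overlap-tiling average reduces to accumulating per-pixel probability sums and coverage counts; the shapes and names here are assumptions of the sketch:

```python
import numpy as np

def overlap_tile_average(patch_probs, coords, shape, patch=48, classes=5):
    """Average per-pixel class probabilities over all overlapping patches.

    patch_probs : list of (classes, patch, patch) prediction arrays.
    coords      : matching list of (top, left) patch positions.
    shape       : (H, W) of the full image.
    Pixels covered by several patches get the mean of their predictions;
    pixels covered once keep their single prediction unchanged."""
    h, w = shape
    acc = np.zeros((classes, h, w))
    cnt = np.zeros((h, w))
    for p, (t, l) in zip(patch_probs, coords):
        acc[:, t:t + patch, l:l + patch] += p
        cnt[t:t + patch, l:l + patch] += 1
    cnt = np.maximum(cnt, 1)   # avoid division by zero for uncovered pixels
    return acc / cnt
```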
Further preferably, the image preprocessing further comprises brightness normalization, restrictive contrast histogram equalization, and gamma correction.
Advantageous effects
Compared with existing prediction methods, the method has the following advantages:
1. according to the method, the credibility and the deep learning are fused, so that the prediction result of the classifier is more accurate, on one hand, because the golden standard image is divided based on the credibility model, the thick and thin blood vessels in the golden standard image can be more meticulously and accurately distinguished, and the accuracy of the prediction result of the classifier is further improved; on the other hand, the classifier adopts the focus loss function, so that the problem that the sample class in the training set is extremely unbalanced can be effectively solved, and the weight of the sample which is easy to classify is reduced, so that the model is more concentrated on the sample which is difficult to classify during training, and the accuracy of the prediction result of the classifier is improved; in addition, the accurate edge of the coarse blood vessel and the accurate position of the fine blood vessel are determined after the prediction probability values of the pixel points in the predicted image in the five types of credibility regions are obtained through the classifier, and accurate blood vessel segmentation results can be obtained without trying different threshold values in the process. In conclusion, the method of the invention can accurately segment the coarse blood vessels and the fine blood vessels, and greatly improves the blood vessel segmentation accuracy of the retina blood vessel segmentation map.
2. The sample data for training the deep learning model is obtained by randomly sampling image squares of 48 multiplied by 48 pixels by taking a single pixel point as a center in a training set and collecting the sample data according to a preset sampling proportion, so that the collected sample is ensured to comprise the pixel points in five types of credibility areas, the data is expanded, and the accuracy of the prediction result of the classifier is improved.
3. Precision processing with an overlap-tiling strategy is applied to the five prediction probability values of the pixel points of the image to be detected in the five credibility regions, improving the precision and accuracy of the data.
Drawings
Fig. 1 is a block flow diagram of a retinal vessel segmentation map generation method based on reliability and deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a retinal vessel segmentation map generation method based on reliability and deep learning according to an embodiment of the present invention;
fig. 3 is a diagram illustrating the credibility region partition of the gold standard image according to an embodiment of the present invention, where (a) is a diagram illustrating a coarse blood vessel in the gold standard image, (b) is a diagram illustrating a credibility calculation result of the coarse blood vessel, (c) is a diagram illustrating the credibility partition of the coarse blood vessel in a simplified manner, (d) is a diagram illustrating a thin blood vessel in the gold standard image, (e) is a diagram illustrating a credibility calculation result of the thin blood vessel, and (f) is a diagram illustrating the credibility partition of the thin blood vessel in a simplified manner;
fig. 4 is a schematic diagram of a relationship between different reliability regions in a gold standard image according to an embodiment of the present invention, where (a) is a diagram illustrating a distribution diagram of reliability regions in the gold standard image, and (b) is a diagram illustrating a nesting relationship between five types of reliability regions in the present invention;
fig. 5 is a retinal vessel segmentation map, wherein (a) and (f) are fundus images, (b) and (g) are gold standard images, (c) and (h) are results of a class of prior art methods using VGGNet as a pre-training model, (d) and (i) are results of a class of prior art methods using a convolutional neural network with improved multiple fully-connected outputs, and (e) and (j) are results of the method of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples.
Referring to fig. 1 and fig. 2, a retinal vessel segmentation map generation method based on reliability and deep learning according to an embodiment of the present invention includes the following steps:
step 1: and acquiring training data, and constructing a training set by using a preset credibility model and the training data.
The training data comprises training images and gold standard images matched with the training images, and pixel points in the matched training images and the gold standard images are in one-to-one correspondence. The gold standard refers to a blood vessel binarization result manually calibrated by an expert, wherein x is set to represent a point in the fundus image, and y represents a gold standard result of x, namely a category label. The following formula is satisfied:
$$y = \begin{cases} 1, & x \text{ lies in a blood vessel region} \\ 0, & \text{otherwise} \end{cases}$$
the execution process of step 1 is as follows:
a: acquiring training data, and performing image preprocessing on a training image in the training data;
in this embodiment, the image preprocessing includes gray processing, brightness normalization, limiting contrast histogram equalization, and gamma correction.
b: dividing a gold standard image in the training data into five credibility areas according to a preset credibility model;
as shown in fig. 3, (a) is a diagram of a coarse blood vessel in a gold standard image, (b) is a diagram of a reliability calculation result of the coarse blood vessel, (c) is a diagram of simplified reliability classification of the coarse blood vessel, (d) is a diagram of a thin blood vessel in the gold standard image, (e) is a diagram of a reliability calculation result of the thin blood vessel, and (f) is a diagram of simplified reliability classification of the thin blood vessel. The graph demonstrates the process of dividing the credibility region of the gold standard image, namely, the credibility of the pixel points in the gold standard image is calculated first, and then the credibility region is divided according to the credibility of the pixel points.
The preset credibility model divides the gold standard image into five credibility regions according to the credibility of the pixel points: the background center region N, the background edge region PN, the coarse vessel center region C1, the coarse vessel edge region PC1, and the thin vessel region C2∪PC2. The principle underlying this division is that areas with similar credibility are merged and simplified into credible (C), probably credible (PC), probably not credible (PN), and not credible (N). In this embodiment, the gold standard image is divided into the five credibility regions according to this principle: the credibility value of the background center region is 0; the credibility range of the background edge region is (0, 1/2); the credibility range of the thin vessel region is (1/2, 2/3]; the credibility range of the coarse vessel edge region is (2/3, 1); and the credibility value of the coarse vessel center region is 1. In fig. 4, (a) shows the distribution of the credibility regions in a gold standard image, and (b) shows the nesting relationship among the five credibility regions of the invention.
The credibility of the pixel points in the gold standard image is calculated according to the following formula:
$$P(x) = \frac{1}{|N_x|} \sum_{z \in N_x} P(1 \mid z)$$

where P(x) denotes the credibility of pixel point x in the gold standard image, N_x denotes the neighborhood of x, and z ranges over the pixels of neighborhood N_x. P(1|z) and P(0|z) denote the probabilities that the gold standard result c of pixel point z equals 1 and 0 respectively, and the probability P(c|z) is calculated according to:

$$P(1 \mid z) = y_z, \qquad P(0 \mid z) = 1 - y_z$$

where y_z, the class label of pixel point z in the gold standard image, is regarded as a known parameter.
Step 2: and selecting data from the training set and inputting the data into a deep learning model based on a convolutional neural network for training to obtain a classifier.
Wherein, the process of selecting data from the training set in the step 2 is as follows:
image squares of 48 × 48 pixels, each centered on a single pixel point, are selected from the training set for random sampling according to a preset sampling proportion.
Specifically, the selected image squares and the credibility regions to which the pixel points in each square belong are the data input into the deep learning model. During training of the deep learning model, the gray values and positions of the pixel points in the image squares are recorded, and the pixel points in the image to be detected are later identified according to their gray values.
The preset sampling proportion is the sampling proportion of five types of credibility areas, and the preset sampling proportion is as follows:
|C1|:|PC1|:|C2∪PC2|:|PN|:|N|=3:2:1:2:3
where C1 denotes the coarse vessel center region, PC1 the coarse vessel edge region, C2∪PC2 the thin blood vessel region, PN the background edge region, and N the background center region. In this embodiment, 11 image squares are acquired in each round, and after all rounds a total of 110,000 image squares of 48 × 48 pixels is obtained.
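One sampling round under the 3:2:1:2:3 proportion might look as follows; the region-map encoding and the function name are illustrative, not from the patent:

```python
import numpy as np

def sample_patches(image, regions, rng, patch=48,
                   ratio={'C1': 3, 'PC1': 2, 'C2': 1, 'PN': 2, 'N': 3}):
    """One sampling round: draw patch centers per region type in the
    3:2:1:2:3 ratio, keeping only centers whose square fits in the image."""
    half = patch // 2
    h, w = image.shape[:2]
    out = []
    for label, n in ratio.items():
        ys, xs = np.nonzero(regions == label)
        ok = (ys >= half) & (ys < h - half) & (xs >= half) & (xs < w - half)
        ys, xs = ys[ok], xs[ok]
        if len(ys) == 0:
            continue  # no eligible centers for this region type
        for i in rng.choice(len(ys), size=min(n, len(ys)), replace=False):
            y, x = ys[i], xs[i]
            out.append(image[y - half:y + half, x - half:x + half])
    return out
```

Repeating the round until 110,000 squares are collected reproduces the sampling described above.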
The classifier is trained with a focal loss function, using the training set data as the reference, until the focal loss function reaches its minimum. The input of the classifier is an image, and the output is the five predicted probability values of each pixel point in the image for the five credibility regions. In this embodiment, the classifier adopts the U-net structure, a convolutional network widely used for biomedical image segmentation, consisting of a down-sampling path and an up-sampling path. Its specific structure is shown in Table 1 below, where: I — input layer, the entry point of the input image data; C — convolution layer, composed of neurons recording local features, which are extracted by the convolution operation of a filter; MP — pooling layer, for data dimensionality reduction so that the network can learn more global features; US — up-sampling, for raising feature dimensionality so that feature data of different dimensions can be aligned; M — merging layer, which concatenates feature data of the same size from different stages to enrich the feature representation; RS — matrix reshaping; P — matrix transposition; A — activation layer.
TABLE 1
Figure BDA0001597779860000101
It should be noted that the deep learning model based on the convolutional neural network adopted by the present invention is as follows:
A=f(X,W)
where A represents the predicted probability values of an output pixel point over the five credibility regions, X represents the input training set data, f represents the set of functions defined by the deep neural network, and W represents the weights obtained by training the function set f on the training set X;
wherein the weight W is determined according to a focal loss function. The loss function is the objective function of neural network optimization, and training a neural network is the process of minimizing the loss function. In a retinal image the ratio of blood vessel pixels to background pixels is approximately 1:10, which makes the training sample classes unbalanced. The main difference between the network design of the present method and a conventional U-net is that the focal loss (Focal Loss) replaces the standard cross entropy loss. The focal loss is designed for training on samples with extreme class imbalance. In experiments, cross entropy loss easily lets the more numerous class dominate the optimization direction of the model, whereas the focal loss down-weights easily classified samples so that training concentrates on hard-to-classify samples. The focal loss function is as follows:
Lfl = -(1/N) Σi=1..N Σj=1..k yij (1 - pij)^γ log(pij)

where Lfl denotes the focal loss function, N the number of pixel points in the input image, k the five types of credibility regions, yij indicates that the ith pixel point belongs to the jth of the five credibility regions, pij is the predicted probability that the ith pixel point belongs to the jth credibility region, and γ is the focusing parameter used to reduce the weight of easily classified samples.
Based on the above description of the convolutional-neural-network-based deep learning model, the process of constructing the classifier is briefly as follows. The training images in the training set, together with the credibility regions to which their pixel points belong, are used as input data. The deep learning model outputs, for each pixel point of a training image, the five predicted probability values corresponding to the five credibility regions. The error between these predicted values and the actual result is computed with reference to the credibility regions to which the pixel points in the input data belong, and the network weights W are then updated by stochastic gradient descent in the direction that reduces the loss function, until the loss function reaches its minimum and no longer changes; the resulting target weights complete the training of the classifier. In this embodiment, the 110,000 collected 48 × 48 image squares are used as training samples; 32 images are input at a time and the weights W are updated once per batch according to stochastic gradient descent, for 70,000 iterations in total (about 6 hours), after which the training of the model is complete.
And step 3: acquiring an image to be detected, and performing image preprocessing on the image to be detected.
The image preprocessing performed on the image to be detected is the same as the image preprocessing performed on the training image.
And 4, step 4: and (3) inputting the image to be detected after the image preprocessing in the step (3) into the classifier in the step (2) to obtain five prediction probability values of pixel points in the image to be detected in the five credibility areas.
Specifically, the execution process of step 4 is as follows:
C: dividing the image to be detected, after image preprocessing, into image squares of 48 × 48 pixels with a step length of 5;
D: inputting all 48 × 48 pixel image squares of the image to be detected into the classifier of step 2 to obtain the five predicted probability values of each pixel point in each image square for the five credibility regions.
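Splitting the preprocessed image into 48 × 48 squares at stride 5 can be sketched as follows; the border-clamping of the last window is an assumption added so that every pixel is covered (the patent does not describe how the border is handled):

```python
import numpy as np

def window_coords(h, w, patch=48, stride=5):
    """Top-left corners of patch x patch windows at the given stride,
    with a final window clamped to the border so the whole image is
    covered. Assumes h >= patch and w >= patch."""
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    if ys[-1] != h - patch:
        ys.append(h - patch)  # clamp last row of windows to the bottom edge
    if xs[-1] != w - patch:
        xs.append(w - patch)  # clamp last column of windows to the right edge
    return [(y, x) for y in ys for x in xs]
```

Each coordinate pair indexes one square to feed to the classifier; the stride of 5 makes neighbouring squares overlap heavily, which is what the overlap-tile averaging below exploits.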
Because the input of the classifier is an image and its output is the five predicted probability values of the pixel points in the image for the five credibility regions, inputting the image to be detected into the classifier yields, for each of its pixel points, five predicted probability values: the predicted probability of the background center region, of the background edge region, of the coarse vessel center region, of the thin blood vessel region, and of the coarse vessel edge region.
In this embodiment, preferably, between step 4 and step 5 the method further includes: refining the five predicted probability values of the pixel points obtained in step 4 by using an overlapping tiling strategy.
The overlapping tiling strategy is: if the pixel points at the same position in the image to be detected are located in different image squares, calculating the average value of the prediction probability values of the pixel points at the same position in the same type of credibility area in the different image squares, and taking the average value as the prediction probability value of the pixel points in the corresponding type of credibility area; if the pixel point at the same position in the image to be detected is located in one image square, the five prediction probability values of the pixel point in the five types of credibility areas are kept unchanged.
In other feasible embodiments, one of the prediction probability values in the same class of confidence level regions can be randomly selected as a final prediction probability value for the situation that the pixel point at the same position in the image to be detected is located in different image squares.
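The averaging step of the overlapping tiling strategy can be sketched as follows (coordinates are the top-left corners of the windows; the function name and array layout are illustrative):

```python
import numpy as np

def overlap_tile_average(pred_patches, coords, shape, patch=48):
    """Average the 5-class probability patches wherever windows overlap;
    pixels covered by a single window keep their single prediction."""
    acc = np.zeros(shape + (5,))
    cnt = np.zeros(shape)
    for (y, x), pp in zip(coords, pred_patches):
        acc[y:y + patch, x:x + patch] += pp
        cnt[y:y + patch, x:x + patch] += 1
    cnt = np.maximum(cnt, 1)  # avoid divide-by-zero for uncovered pixels
    return acc / cnt[..., None]
```

With a stride of 5 and 48 × 48 windows, most pixels appear in many windows, so this averaging noticeably smooths the probability maps.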
And 5: generating a retinal vessel segmentation map according to the prediction probability values of the pixel points in the image to be detected in the five credibility areas in the step 4;
wherein, the execution process of the step 5 is as follows:
a: determining a main blood vessel region and an estimated region of a fine blood vessel in the image to be detected according to the prediction probability values of pixel points in the image to be detected in a coarse blood vessel central region, a coarse blood vessel edge region and a fine blood vessel region;
b: and extracting a blood vessel skeleton from the estimated region of the small blood vessel in the A to obtain a precise region of the small blood vessel, and combining the obtained precise region of the small blood vessel with the main blood vessel region in the A to obtain a retina blood vessel segmentation map of the image to be detected.
The step of obtaining the main vessel region in the process A of the step 5 is as follows:
E: for each pixel point in the image to be detected, calculating the sum of its predicted probability values for the coarse vessel center region, the coarse vessel edge region, and the thin blood vessel region, i.e. p(C1) + p(PC1) + p(C2∪PC2).
f: sequentially judging whether the sum of the prediction probability values of each pixel point in the central region of the thick blood vessel, the edge region of the thick blood vessel and the thin blood vessel region is greater than a preset threshold value, and if so, locating the pixel point in the main blood vessel region; otherwise, the pixel point is not in the main blood vessel region. Wherein the preset threshold is 0.5.
The step of obtaining the estimated region of the tiny blood vessel in the process A of the step 5 comprises the following steps:
firstly, calculating an enhanced transformation value of a prediction probability value of each pixel point in a thin blood vessel region in an image to be detected according to the following formula;
Figure BDA0001597779860000131
where t denotes the predicted probability that the pixel point lies in the thin blood vessel region, and the result is the enhanced transformation value of that probability;
then, calculating the sum of the prediction probability value of each pixel point in the center region and the edge region of the coarse blood vessel and the enhancement transformation value of the same pixel point;
that is, p(C1) + p(PC1) plus the enhanced transformation value computed above for the same pixel point.
finally, sequentially judging whether the sum of the prediction probability value of each pixel point in the center region of the thick blood vessel and the edge region of the thick blood vessel and the enhancement transformation value of the same pixel point is greater than a preset threshold value, if so, locating the pixel point in the prediction region of the small blood vessel; otherwise, the pixel point is not in the estimated region of the small blood vessel.
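Steps E/F and the thin-vessel estimation can be combined into one sketch. The patent's enhancement transform is given only as an image in the source, so the square-root boost used here is a stand-in, and the channel ordering [N, PN, C1, C2∪PC2, PC1] is also an assumption:

```python
import numpy as np

def enhance(t, k=2.0):
    # Stand-in for the patent's enhancement transform: any monotone
    # boost of the thin-vessel probability (here t**(1/k)) fits the text.
    return t ** (1.0 / k)

def vessel_regions(p, thr=0.5):
    """p: (H, W, 5) probabilities ordered [N, PN, C1, C2∪PC2, PC1].
    Returns the main vessel mask and the estimated thin-vessel mask."""
    p_c1, p_thin, p_pc1 = p[..., 2], p[..., 3], p[..., 4]
    main = (p_c1 + p_pc1 + p_thin) > thr              # step E/F
    thin_est = (p_c1 + p_pc1 + enhance(p_thin)) > thr # enhanced thin-vessel test
    return main, thin_est
```

A pixel with a small thin-vessel probability can fail the main-vessel test yet pass the enhanced test, which is how the estimated thin-vessel region extends beyond the main vessels.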
Simulation and verification
The present invention can obtain a retinal vessel segmentation map by the above method, and in order to verify the advantages of the retinal vessel segmentation map obtained by the method of the present invention, the following verification is performed. The accuracy (Acc), sensitivity (Sen) and specificity (Spec) of the retinal vessel segmentation map obtained by the present invention were calculated according to the following formulas.
Acc = (TP + TN) / (TP + TN + FP + FN)

Sen = TP / (TP + FN)

Spec = TN / (TN + FP)

where TP is the number of correctly classified blood vessel pixel points, TN is the number of correctly classified background pixel points, FP is the number of pixel points misclassified as blood vessel, and FN is the number of pixel points misclassified as background.
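These three measures can be computed directly from the binary masks; a short sketch (the function name is assumed):

```python
import numpy as np

def segmentation_metrics(pred, gold):
    """Acc, Sen, Spec from binary vessel masks (1 = vessel)."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    tp = np.sum(pred & gold)     # vessel pixels classified as vessel
    tn = np.sum(~pred & ~gold)   # background pixels classified as background
    fp = np.sum(pred & ~gold)    # background pixels misclassified as vessel
    fn = np.sum(~pred & gold)    # vessel pixels misclassified as background
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, sen, spec
```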
The accuracy, sensitivity and specificity of the retinal vessel segmentation map obtained by the invention and the retinal vessel segmentation map obtained by the existing method are calculated according to the above calculation formula, and are shown in the following table 2:
TABLE 2
Method    Accuracy    Sensitivity    Specificity
DRIU [4]    0.9528    0.8330    0.9714
CNN [5]    0.9517    0.8295    0.9707
HED [6]    0.9462    0.8009    0.9688
The invention    0.9519    0.7761    0.9792
As shown in Fig. 5, the segmentation of minute blood vessels in the retinal vessel segmentation map obtained by the present invention is particularly fine, and the errors that easily occur around the optic disc are greatly reduced. In Fig. 5, images (a) and (f) are fundus images; images (b) and (g) are gold standard images; images (c) and (h) are results of an existing method that uses VGGNet as a pre-training model; images (d) and (i) are results of an existing method that uses a convolutional neural network with multiple improved fully-connected outputs; and images (e) and (j) are results of the method of the present invention. As can be seen from images (b)-(e), in the segmentation of small blood vessels the result (e) obtained by the present method is closer to the gold standard image (b) than the results (c) and (d) of the existing methods. From images (g)-(j) it can be seen that the results (h) and (i) of the existing methods segment vessels poorly in the optic disc region, with many falsely detected and many missed vessels, whereas the result (j) of the present method performs well in the optic disc region and is closer to the gold standard image (g).
Our method achieves very good results on both normal and diseased fundus images. It requires only a small amount of sampled data, yields accurate results, and attains very high accuracy on fine blood vessels, which is of great significance for assisting physicians' diagnoses in practice. Moreover, the credibility model innovatively proposed here differs from previous improvement ideas that look for features in the image: it derives its features from the gold standard data.
It should be emphasized that the examples described herein are illustrative and not restrictive, and thus the invention is not to be limited to the examples described herein, but rather to other embodiments that may be devised by those skilled in the art based on the teachings herein, and that various modifications, alterations, and substitutions are possible without departing from the spirit and scope of the present invention.

Claims (9)

1. A retinal vessel segmentation map generation method based on credibility and deep learning, characterized in that the method comprises the following steps:
step 1: acquiring training data, and constructing a training set by using a preset credibility model and the training data;
the training data comprises training images and gold standard images matched with the training images, and pixel points in the matched training images and the gold standard images are in one-to-one correspondence;
the preset credibility model is constructed by dividing the gold standard image into five credibility areas according to the credibility of the pixel points, wherein the five credibility areas are respectively as follows: a background central region, a background edge region, a thick blood vessel central region, a thick blood vessel edge region and a thin blood vessel region;
the training set comprises training images after image preprocessing and credibility regions to which pixel points in the training images belong, and the image preprocessing at least comprises gray processing;
step 2: selecting data from the training set and inputting the data into a deep learning model based on a convolutional neural network for training to obtain a classifier;
the input data of the classifier is an image, and the output data is five corresponding prediction probability values of pixel points in the image in the five credibility areas;
and step 3: acquiring an image to be detected, and performing image preprocessing on the image to be detected;
and 4, step 4: inputting the image to be detected after the image preprocessing in the step 3 into the classifier in the step 2 to obtain five prediction probability values of pixel points in the image to be detected in the five credibility areas;
and 5: generating a retinal vessel segmentation map according to the prediction probability values of the pixel points in the image to be detected in the five credibility areas in the step 4;
wherein, the execution process of the step 5 is as follows:
step A: determining a main blood vessel region and an estimated region of a fine blood vessel in the image to be detected according to the prediction probability values of pixel points in the image to be detected in a coarse blood vessel central region, a coarse blood vessel edge region and a fine blood vessel region;
and B: b, extracting a blood vessel skeleton from the estimated region of the fine blood vessel in the step A to obtain a fine blood vessel accurate region, and combining the obtained fine blood vessel accurate region with the main blood vessel region in the step A to obtain a retina blood vessel segmentation map of the image to be detected;
wherein, the process of selecting data from the training set in the step 2 is as follows:
selecting 48 multiplied by 48 pixel image squares for random sampling in the training set by taking a single pixel point as a center according to a preset sampling proportion;
the selected image square and the credibility region to which the pixel points in the image square belong are data to be input into the deep learning model;
the preset sampling proportion is the sampling proportion of five types of credibility areas, and the preset sampling proportion is as follows:
|C1|:|PC1|:|C2∪PC2|:|PN|:|N|=3:2:1:2:3
where C1 denotes the coarse vessel center region, PC1 the coarse vessel edge region, C2∪PC2 the thin blood vessel region, PN the background edge region, and N the background center region.
2. The method of claim 1, wherein: the steps of acquiring the main vessel region during step a of step 5 are as follows:
firstly, calculating the sum of the prediction probability values of each pixel point in the image to be detected in the center region of the thick blood vessel, the edge region of the thick blood vessel and the thin blood vessel region;
then, sequentially judging whether the sum of the prediction probability values of each pixel point in the central region of the thick blood vessel, the edge region of the thick blood vessel and the thin blood vessel region is greater than a preset threshold value, and if so, locating the pixel point in the main blood vessel region; otherwise, the pixel point is not in the main blood vessel region;
wherein the preset threshold is 0.5.
3. The method of claim 1, wherein: the step of obtaining the estimated region of the tiny blood vessel in the step A of the step 5 is as follows:
firstly, calculating an enhanced transformation value of a prediction probability value of each pixel point in a thin blood vessel region in an image to be detected according to the following formula;
Figure FDA0002969354090000021
where t denotes the predicted probability that the pixel point lies in the thin blood vessel region, and the result is the enhanced transformation value of that probability;
then, calculating the sum of the prediction probability value of each pixel point in the center region and the edge region of the coarse blood vessel and the enhancement transformation value of the same pixel point;
finally, sequentially judging whether the sum of the prediction probability value of each pixel point in the center region of the thick blood vessel and the edge region of the thick blood vessel and the enhancement transformation value of the same pixel point is greater than a preset threshold value, if so, locating the pixel point in the prediction region of the small blood vessel; otherwise, the pixel point is not in the estimated region of the small blood vessel;
wherein the preset threshold is 0.5.
4. The method of claim 1, wherein: the credibility of the pixel points in the gold standard image is calculated according to the following formula:
Figure FDA0002969354090000031
wherein P represents the credibility of pixel point x in the gold standard image, Nx represents the neighborhood of pixel point x, z represents a pixel point in neighborhood Nx, and P(1|z) and P(0|z) are the probabilities that the gold standard result c corresponding to pixel point z equals 1 and 0, respectively; the probability P(c|z) of the gold standard result c corresponding to pixel point z is calculated according to the following formulas:
Figure FDA0002969354090000032
Figure FDA0002969354090000033
wherein y represents the class label of the pixel point in the gold standard image.
5. The method of claim 1, further comprising: the preset credibility model comprises credibility ranges corresponding to the five credibility areas;
wherein, the credibility value corresponding to the background central area is 0;
the confidence level range corresponding to the background edge area is (0, 1/2);
the credibility range corresponding to the thin blood vessel region is (1/2, 2/3];
the confidence level range corresponding to the edge region of the coarse blood vessel is (2/3, 1);
the confidence value corresponding to the central region of the coarse vessel is 1.
6. The method of claim 1, wherein: the classifier is obtained by training by using a focus loss function and taking the data of the training set as a reference until the focus loss function reaches a minimum value, and the deep learning model based on the convolutional neural network is as follows:
A=f(X,W)
in the formula, A represents the prediction probability value of an output pixel point belonging to a five-class credibility area, X represents input training set data, f represents a function set defined by a deep neural network, and W represents the weight obtained by training the function set f on the training set X;
wherein the weight W is determined according to a focus loss function as follows:
Lfl = -(1/N) Σi=1..N Σj=1..k yij (1 - pij)^γ log(pij)

W = arg min Lfl

where Lfl represents the focal loss function, N represents the number of pixel points in the input image, k represents the five types of credibility regions, yij indicates that the ith pixel point belongs to the jth of the five credibility regions, pij represents the predicted probability that the ith pixel point belongs to the jth credibility region, and γ is the focusing parameter.
7. The method of claim 1, wherein: the execution process of step 4 is as follows:
firstly, dividing an image to be detected after image preprocessing into image squares with the size of 48 multiplied by 48 pixels by taking 5 as a step length;
then, inputting all image squares with the size of 48 × 48 pixels of the image to be detected into the classifier in the step 2 to obtain five prediction probability values of each pixel point in each image square in the five types of credibility areas.
8. The method of claim 7, wherein: performing precision processing on the five predicted probability values of the pixel points obtained in the step (4) by using an overlapping tiling strategy, and then executing a step (5);
the overlapping tiling strategy is:
if the pixel points at the same position in the image to be detected are located in different image squares, calculating the average value of the prediction probability values of the pixel points at the same position in the same type of credibility area in the different image squares, and taking the average value as the prediction probability value of the pixel points in the corresponding type of credibility area;
if the pixel point at the same position in the image to be detected is located in one image square, the five prediction probability values of the pixel point in the five types of credibility areas are kept unchanged.
9. The method of claim 1, further comprising: the image preprocessing further comprises brightness normalization, restrictive contrast histogram equalization and gamma correction.
CN201810213111.9A 2018-03-15 2018-03-15 Retina blood vessel segmentation map generation method based on credibility and deep learning Active CN110276763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810213111.9A CN110276763B (en) 2018-03-15 2018-03-15 Retina blood vessel segmentation map generation method based on credibility and deep learning


Publications (2)

Publication Number Publication Date
CN110276763A CN110276763A (en) 2019-09-24
CN110276763B true CN110276763B (en) 2021-05-11

Family

ID=67958361


Country Status (1)

Country Link
CN (1) CN110276763B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110650153B (en) * 2019-10-14 2021-04-23 北京理工大学 Industrial control network intrusion detection method based on focus loss deep neural network
CN112668710B (en) * 2019-10-16 2022-08-05 阿里巴巴集团控股有限公司 Model training, tubular object extraction and data recognition method and equipment
CN110910372B (en) * 2019-11-23 2021-06-18 郑州智利信信息技术有限公司 Deep convolutional neural network-based uniform light plate defect detection method
CN111145183B (en) * 2019-12-30 2022-06-07 中南大学 Segmentation system and method for transparent separation cavity ultrasonic image
CN111242933B (en) * 2020-01-15 2023-06-20 华南理工大学 Retinal image artery and vein classification device, apparatus, and storage medium
EP4099012A4 (en) * 2020-01-29 2023-08-23 JFE Steel Corporation Metal structure phase classification method, metal structure phase classification device, metal structure phase learning method, metal structure phase learning device, material property prediction method for metal material, and material property prediction device for metal material
CN111968127B (en) * 2020-07-06 2021-08-27 中国科学院计算技术研究所 Cancer focus area identification method and system based on full-section pathological image
CN113011514B (en) * 2021-03-29 2022-01-14 吉林大学 Intracranial hemorrhage sub-type classification algorithm applied to CT image based on bilinear pooling

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809480A (en) * 2015-05-21 2015-07-29 中南大学 Retinal vessel segmentation method of fundus image based on classification and regression tree and AdaBoost
CN106920227A (en) * 2016-12-27 2017-07-04 北京工业大学 Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
CN106934816A (en) * 2017-03-23 2017-07-07 中南大学 A kind of eye fundus image Segmentation Method of Retinal Blood Vessels based on ELM
US9757023B2 (en) * 2015-05-27 2017-09-12 The Regents Of The University Of Michigan Optic disc detection in retinal autofluorescence images
CN107292868A (en) * 2017-05-31 2017-10-24 瑞达昇科技(大连)有限公司 A kind of optic disk localization method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8351669B2 (en) * 2011-02-01 2013-01-08 Universidade Da Coruna-Otri Method, apparatus, and system for retinal image analysis
CN104537669B (en) * 2014-12-31 2017-11-07 浙江大学 The arteriovenous Segmentation Method of Retinal Blood Vessels of eye fundus image
CN104573712B (en) * 2014-12-31 2018-01-16 浙江大学 Arteriovenous retinal vessel sorting technique based on eye fundus image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Paweł Liskowski et al., "Segmenting Retinal Blood Vessels With Deep Neural Networks", IEEE Transactions on Medical Imaging, vol. 35, no. 11, Nov. 30, 2016, pp. 2369-2380 *


Similar Documents

Publication Publication Date Title
CN110276763B (en) Retina blood vessel segmentation map generation method based on credibility and deep learning
Li et al. Fully automated detection of retinal disorders by image-based deep learning
Seeböck et al. Unsupervised identification of disease marker candidates in retinal OCT imaging data
CN110197493B (en) Fundus image blood vessel segmentation method
CN107016681B (en) Brain MRI tumor segmentation method based on a fully convolutional network
CN112132817B (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN113344849B (en) Microemulsion head detection system based on YOLOv5
CN109345538A (en) Retinal blood vessel segmentation method based on convolutional neural networks
CN109685770B (en) Method for determining retinal vascular tortuosity
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN111079620B (en) White blood cell image detection and identification model construction method and application based on transfer learning
CN106340016A (en) DNA quantitative analysis method based on cell microscope image
CN112819821B (en) Cell nucleus image detection method
US20210278655A1 (en) Automated segmentation and guided correction of endothelial cell images
US20210052157A1 (en) Assessment of endothelial cells and corneas at risk from ophthalmological images
CN114359288B (en) Medical image cerebral aneurysm detection and positioning method based on artificial intelligence
CN114600155A (en) Weakly supervised multitask learning for cell detection and segmentation
CN114821189A (en) Lesion image classification and identification method based on fundus images
Sarhan et al. Transfer learning through weighted loss function and group normalization for vessel segmentation from retinal images
CN115661066A (en) Diabetic retinopathy detection method based on segmentation and classification fusion
CN111028230A (en) Fundus image optic disc and macula lutea positioning detection algorithm based on YOLO-V3
Mahapatra Retinal image quality classification using neurobiological models of the human visual system
Dong et al. Supervised learning-based retinal vascular segmentation by M-UNet fully convolutional neural network
CN110288041A (en) Chinese herbal medicine classification model construction method and system based on deep learning
CN112396580B (en) Method for detecting defects of round part

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant