CN111815562A - Retinal vessel segmentation method combining U-Net and self-adaptive PCNN


Info

Publication number
CN111815562A
Authority
CN
China
Prior art keywords
image
net
pcnn
segmentation
picture
Prior art date
Legal status
Granted
Application number
CN202010524251.5A
Other languages
Chinese (zh)
Other versions
CN111815562B (en)
Inventor
徐光柱
林文杰
陈莎
雷帮军
石勇涛
周军
刘蓉
王阳
Current Assignee
Chongqing Bio Newvision Medical Equipment Ltd
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202010524251.5A priority Critical patent/CN111815562B/en
Publication of CN111815562A publication Critical patent/CN111815562A/en
Application granted granted Critical
Publication of CN111815562B publication Critical patent/CN111815562B/en
Status: Active

Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06T 2207/10024 — Color image
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30041 — Eye; Retina; Ophthalmic
    • Y02T 10/40 — Engine management systems


Abstract

A retinal vessel segmentation method combining U-Net and a self-adaptive PCNN: data augmentation is performed on the fundus image database selected for the experiment; the data set pictures are converted to grayscale; CLAHE is applied to the data set pictures to increase the contrast between retinal vessels and the background; the images are divided into blocks; a U-Net neural network model is built, trained, and used to enhance the pictures; a self-adaptive PCNN neural network model is built; and vessel segmentation is performed with the self-adaptive PCNN. On the one hand, the invention provides an improved-U-Net-based, twice-iterated fundus vessel image enhancement method, which markedly suppresses the background, highlights the vessel regions, weakens noise interference, and increases picture contrast, thereby improving the picture quality of the data set. On the other hand, the invention provides a fundus vessel image segmentation method based on the self-adaptive PCNN: accurate parameters are estimated with the Otsu algorithm, and the output of the twice-iterated U-Net enhancement is fed into the self-adaptive PCNN, achieving effective segmentation of the complete fundus vasculature.

Description

Retinal vessel segmentation method combining U-Net and self-adaptive PCNN
Technical Field
The invention discloses a retinal vessel segmentation method combining U-Net and self-adaptive PCNN, which is used for accurate segmentation of retinal vessels.
Background
With the continuous improvement of living standards, the incidence of diabetes, cardiovascular and cerebrovascular diseases, and various ophthalmic diseases keeps rising, seriously threatening people's health. Morphological changes of retinal blood vessels are closely related to the occurrence of these diseases and can reflect them to a certain extent. However, owing to the particularity of retinal vessels, acquisition and manual segmentation are costly and data sets are scarce, and vessel segmentation in current retinal images faces several difficulties:
1) Low contrast between vessels and background in fundus images. Because of the acquisition equipment and environment, e.g. uneven illumination, noise interference is severe; combined with the influence of physiological changes on the vessels, this leads to low contrast and poor picture quality;
2) Complex vessel structure. Retinal vessels differ in length, width, tortuosity, and shape, and are distributed in a tree pattern, which makes segmentation difficult. Vessel images produced by most current segmentation methods contain broken vessels and misjudgments, and tiny vessels are hard to segment completely, so the segmentation effect is poor.
Therefore, it is important to design an accurate segmentation scheme for retinal blood vessels.
A commonly used approach among conventional retinal vessel segmentation methods is matched filtering. Document [1], Chinese patent "Segmentation method of retinal blood vessel image" (application No. 201611185885.2), exploits the fact that vessel cross-sections roughly follow a Gaussian distribution: retinal vessels are matched-filtered against Gaussian kernels in different directions, the responses are thresholded, the filter with the largest response is selected as the vessel output, and the retinal vessel image is extracted.
Document [2], Chinese patent "A retinal vessel segmentation method based on region-growing PCNN" (application No. 201910013381.X), likewise performs matched filtering in different directions with two-dimensional Gaussian and two-dimensional Gabor filters, fuses the two results, and segments the vessels with a region-growing PCNN. However, such methods cannot distinguish lesion features in the image, which can cause misjudgments. Moreover, the preprocessing is insufficient, and the processed picture quality remains low. Hence, although these methods can segment the coarse vessels completely, they tend to miss tiny-vessel information.
Deep learning has developed rapidly and offers great advantages in medical image processing, and many researchers have applied it to segmentation tasks on fundus images.
In document [3], Chinese patent "Retinal vessel segmentation method based on deep learning combined with conventional methods" (application No. 201611228597.0), CLAHE and Gaussian matched filtering are applied to the data set images, the images are fed into an FCN-HNED network to extract retinal vessels, and the two resulting vessel probability maps are averaged with weights.
In document [4], Chinese patent "Retinal vessel segmentation method and system for fundus images based on deep learning" (application No. 201610844032.9), after the picture is preprocessed, a deep convolutional neural network is trained for image segmentation and a random forest classifier is used for pixel classification; the two results are then fused into the final segmentation. These methods all have shortcomings: retinal image data sets vary widely in quality and are small, so an ordinary neural network model easily yields poor segmentation quality and overfitting, failing to meet the needs of medical personnel.
Among the many network models in deep learning, U-Net, described in document [5] Olaf Ronneberger, Philipp Fischer, Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI, 2015: 234-241, performs well on small medical image data sets and has therefore been used by researchers for retinal vessel segmentation.
In document [6], Chinese patent "A retinal vessel image segmentation method combining a multi-scale feature convolutional neural network" (application No. 201810635753.8), the picture is preprocessed, a multi-scale feature segmentation network is constructed, and spatial-pyramid atrous (dilated) convolution is introduced into a U-Net neural network to segment the retinal vessels.
In document [7], Chinese patent "Symmetrical fully convolutional neural network model construction method, fundus image vessel segmentation method, device, computer equipment and storage medium" (application No. 201910009415.8), a densely connected U-Net is used to extract the retinal vessel image: the original image is grayed and whitened for enhancement, dense connections are introduced into the U-Net to improve generalization, and the vessel image is segmented. However, the feature maps produced by these methods are grayscale images segmented directly with a hard threshold. Pixels close to 0 or 1 are classified very accurately, but some vessel pixels have gray values below 0.5, and vessels near 0.5 gray can be confused with the background, causing misclassification, noise, and reduced segmentation accuracy, as shown in Figs. 1(a)-1(e), where:
Fig. 1(a) is the expert annotation. Fig. 1(b) is the U-Net output. In Fig. 1(c), red marks U-Net hard-threshold errors in which vessels below 0.5 gray are misclassified as background; in Fig. 1(d), blue marks errors in which background above 0.5 gray is misclassified as vessel. Fig. 1(e) shows the misclassified-pixel histograms of Figs. 1(c) and 1(d): red is the histogram of vessel pixels classified as background, blue that of background pixels classified as vessel.
The Pulse Coupled Neural Network (PCNN) is an extension of the cat visual cortex model proposed by Eckhorn; it simulates the complex biological visual system and belongs to the third generation of neural networks. Unlike traditional neural networks, PCNN neurons exhibit multiplicative coupling and a dynamic pulse threshold, which makes the PCNN well suited to image segmentation.
Document [8], Chinese patent "Automatic fundus image vessel detection method based on adaptive PCNN" (application No. 201210458362.6), first preprocesses with CLAHE and a two-dimensional Gaussian matched filter, then uses the Laplacian energy of each pixel as the linking coefficient and combines maximum between-class variance with a PCNN to segment the vessels in the fundus image.
Document [9], Chinese patent "Retinal vessel image segmentation method and system based on PCNN" (application No. 201710145321.4), proposes a PCNN-based retinal vessel segmentation method: the green channel is extracted for CLAHE and Gaussian matched filtering, the CLAHE-processed image and the matched-filtered image are subtracted and negated, and a fast-linking PCNN combined with seeded region growing extracts the fundus vessels. However, because the preprocessing is insufficient, the contrast between vessels and background in the preprocessed fundus image is low, the subsequent PCNN processing is difficult, the segmentation result contains many broken vessels, and the accuracy is not high.
In short, using the U-Net neural network or the PCNN alone does not produce satisfactory segmentation. Segmenting vessels with the U-Net model alone and applying a hard threshold in post-processing confuses intermediate gray levels, i.e. the pixels where vessel and background gray values meet, causing misclassification and reduced accuracy. Using the PCNN alone, the preprocessing is weak, the enhanced picture quality is still uneven, noise interference is severe, and contrast is poor; even with the PCNN's multiplicative coupling and dynamic pulse threshold, the segmentation results remain unsatisfactory.
Disclosure of Invention
The invention combines U-Net and PCNN so that their strengths compensate for each other's weaknesses, and provides a retinal vessel segmentation method combining U-Net and self-adaptive PCNN. On the one hand, it provides an improved-U-Net-based, twice-iterated fundus vessel image enhancement method. This enhancement markedly suppresses the background, highlights the vessel regions, weakens noise interference, increases picture contrast, and thus improves the picture quality of the data set. Since U-Net is a deep convolutional neural network, the generalization ability is strong;
on the other hand, for the tendency of tiny fundus vessels to be lost or broken after hard-threshold segmentation of the U-Net output, the method provides a fundus vessel image segmentation method based on the self-adaptive PCNN. Accurate parameters are estimated with the Otsu algorithm, and the output of the twice-iterated U-Net enhancement is fed into the self-adaptive PCNN, achieving effective segmentation of the complete fundus vasculature.
The technical scheme adopted by the invention is as follows:
a retinal vessel segmentation method combining U-Net and self-adaptive PCNN, firstly, preprocessing an original color fundus image; then, training and enhancing the deep learning model by using the preprocessed data set; then, an improved U-Net model is used for secondary enhancement, a primary enhancement result is fused with an original color image, graying and CLAHE processing are carried out, and then the picture is input into the improved U-Net model to enhance the picture quality; and obtaining a target and background segmentation threshold value by an Otsu algorithm, obtaining related self-adaptive parameters by using a formula, and performing vessel segmentation by using PCNN.
A retinal vessel segmentation method combining U-Net and adaptive PCNN comprises the following steps:
Step 1: perform data augmentation on the fundus image database selected for the experiment;
Step 2: convert the data set pictures to grayscale;
Step 3: apply CLAHE to the data set pictures to increase the contrast between retinal vessels and the background;
Step 4: divide the images into blocks;
Step 5: build and train the U-Net neural network model and enhance the pictures;
Step 6: build the self-adaptive PCNN neural network model;
Step 7: perform vessel segmentation with the self-adaptive PCNN.
The invention discloses a retinal vessel segmentation method combining U-Net and self-adaptive PCNN, which has the following technical effects:
1: the invention provides a blood vessel image enhancement method aiming at the conditions of low picture contrast, strong noise interference and poor quality of a current data set. And fusing the data set image enhanced by the once improved U-Net model with the original color image, then carrying out CLAHE equalization processing, and then inputting the trained improved U-Net model. The secondary image enhancement processing is more suitable for PCNN segmentation than before, so that micro blood vessels difficult to be identified by naked eyes are shown, and the blood vessels identified by the naked eyes are further enhanced, as shown in figure 9.
2: aiming at the problems that the segmentation precision of the current algorithm is insufficient, the micro blood vessels are difficult to segment completely, and the hard threshold is used for segmenting the blood vessels for U-Net, so that pixels with the gray value about 0.5 are easy to classify by mistake, the invention uses the self-adaptive PCNN to segment the retina blood vessels of the preprocessed image. Setting the PCNN division result to be corresponding to the pixels above 0.7 and below 0.3 of the quadratic enhancement graph to be 1 and 0, and only allowing the pixels between 0.3 and 0.7 to carry out the PCNN division. And calculating a threshold value by using an Otsu algorithm to obtain PCNN related parameters, and then carrying out PCNN segmentation. Therefore, the microminiature blood vessels are specially divided aiming at the pixels between 0.3 and 0.7, the information of the microminiature blood vessels can be reserved, and the number of the pixels which are wrongly classified can be reduced.
Drawings
FIG. 1(a) is a label diagram;
FIG. 1(b) is an original U-Net segmentation image;
FIG. 1(c) is misclassification diagram I, in which red marks U-Net hard-threshold errors: vessels below 0.5 gray misclassified as background;
FIG. 1(d) is misclassification diagram II, in which blue marks U-Net hard-threshold errors: background above 0.5 gray misclassified as vessel;
fig. 1(e) shows the misclassified pixel histograms of fig. 1(c) and 1 (d).
FIG. 2 is a DRIVE data set augmentation graph;
FIG. 3(a) is a gray scale map before CLAHE processing;
fig. 3(b) is a diagram after CLAHE processing.
FIG. 4 is a block diagram of a test set image.
Fig. 5 is a diagram of a U-net network architecture.
FIG. 6 is a down-sampled exploded view;
FIG. 7 is an up-sampling exploded view;
fig. 8 is a diagram of an image enhancement process.
FIG. 9(a) is the first U-Net enhancement result;
FIG. 9(b) is the second U-Net enhancement result.
Fig. 10 is a diagram of a PCNN structure.
Fig. 11 is a 0.3-0.7 gray scale image.
FIG. 12(a) is a graph comparing the results of U-Net and the segmentation according to the present invention (U-Net + hard threshold segmentation results);
FIG. 12(b) is a graph comparing the results of U-Net and the segmentation according to the present invention (improved U-Net + hard threshold segmentation results);
FIG. 12(c) is a graph comparing the results of U-Net and the present invention segmentation (two times improved U-Net + hard threshold segmentation);
FIG. 12(d) is a graph comparing the results of U-Net and the present invention segmentation (two times improved U-Net + PCNN segmentation).
Fig. 13 is a graph comparing the results of four segmentations.
Detailed Description
The invention combines U-Net and PCNN so that their strengths compensate for each other's weaknesses, and provides a retinal vessel segmentation method combining U-Net and self-adaptive PCNN. The main flow is as follows: first, the original color fundus image is preprocessed; then the deep-learning model is trained with the preprocessed data set and a first enhancement is performed; next, the improved U-Net model performs a second enhancement: the first enhancement result is fused with the original color image, graying and CLAHE are applied, and the picture is fed into the improved U-Net model to enhance picture quality; finally, the target/background segmentation threshold is obtained with the Otsu algorithm, the related adaptive parameters are computed from the formulas, and vessel segmentation is performed with the PCNN.
The invention comprises the following steps:
Step 1: perform data augmentation on the fundus image database selected for the experiment;
Step 2: convert the data set pictures to grayscale: every picture's red, green, and blue channels are combined proportionally into X = 0.299R + 0.587G + 0.114B, as shown in Fig. 3(a);
Step 3: apply CLAHE to the data set pictures to increase the contrast between retinal vessels and the background;
Step 4: divide the images into blocks;
Step 5: build and train the U-Net neural network model and enhance the pictures;
Step 6: build the self-adaptive PCNN neural network model;
Step 7: perform vessel segmentation with the self-adaptive PCNN.
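Steps 2 and 3 can be sketched as follows. The graying weights come from the text; plain global histogram equalization stands in here for CLAHE (in practice a tiled, clip-limited equalizer such as OpenCV's `cv2.createCLAHE` would be used), so the routine is an illustrative approximation rather than the patent's exact preprocessing.

```python
import numpy as np

def to_gray(rgb):
    # Step 2: weighted grayscale conversion X = 0.299R + 0.587G + 0.114B.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def hist_equalize(gray, levels=256):
    # Stand-in for step 3 (CLAHE): global histogram equalization.
    # Real CLAHE equalizes per tile with a clip limit to boost local contrast.
    g = np.clip(gray, 0, levels - 1).astype(np.int64)
    hist = np.bincount(g.ravel(), minlength=levels)
    cdf = hist.cumsum() / g.size
    return (cdf[g] * (levels - 1)).astype(np.uint8)

rgb = np.random.rand(64, 64, 3) * 255
eq = hist_equalize(to_gray(rgb))
print(eq.shape)
```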
Further, the step 1 specifically comprises:
Step 1.1: horizontally flip each picture in the three data sets DRIVE, STARE, and CHASE_DB1;
Step 1.2: vertically flip the data set pictures;
Step 1.3: apply a Gamma transform to the data set pictures.
After augmentation, each picture in the data sets is expanded into four, giving 352 pictures in total for model training, testing, and picture enhancement.
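A minimal NumPy sketch of steps 1.1-1.3; the Gamma exponent is an illustrative choice, since the patent does not state the value used.

```python
import numpy as np

def augment(img, gamma=0.8):
    # Steps 1.1-1.3: horizontal flip, vertical flip, and a Gamma transform.
    # gamma=0.8 is an assumed value; the patent does not specify it.
    h_flip = img[:, ::-1]
    v_flip = img[::-1, :]
    gamma_t = (255.0 * (img / 255.0) ** gamma).astype(img.dtype)
    return h_flip, v_flip, gamma_t

img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
copies = augment(img)
# Each original picture is expanded into four: itself plus the three copies.
print(1 + len(copies))
```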
Further, the step 4 specifically includes:
Step 4.1: training-set blocks are 48 × 48 pictures randomly cropped from the pictures;
Step 4.2: test-set blocking takes an image block every 5 pixels, dividing the image into overlapping 48 × 48 blocks; where pixels are insufficient at the edges, 0-gray pixels are padded in. The blocking is shown in Fig. 4.
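Step 4.2 can be sketched as below: pad the edges with 0-gray pixels so a 48 × 48 window taken every 5 pixels always fits. The padding arithmetic is one reasonable reading of the step, not the patent's exact code.

```python
import numpy as np

def extract_patches(img, size=48, stride=5):
    """Overlapping size x size blocks every `stride` pixels; the bottom and
    right edges are padded with 0-gray pixels so every window is complete."""
    h, w = img.shape
    pad_h = (-(h - size) % stride) if h > size else size - h
    pad_w = (-(w - size) % stride) if w > size else size - w
    padded = np.pad(img, ((0, pad_h), (0, pad_w)), constant_values=0)
    patches = [
        padded[y:y + size, x:x + size]
        for y in range(0, padded.shape[0] - size + 1, stride)
        for x in range(0, padded.shape[1] - size + 1, stride)
    ]
    return np.stack(patches)

p = extract_patches(np.ones((100, 100), dtype=np.uint8))
print(p.shape)
```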
Further, the step 5 specifically includes:
Step 5.1: to address the uneven image quality of the data sets, the invention builds a U-Net model as the preprocessing neural network to improve picture quality. U-Net is a fully convolutional neural network with a U-shaped symmetrical structure; a schematic of the model is shown in Fig. 5. The left half is the down-sampling path and the right half is the up-sampling path. A down-sampling unit consists of a convolution layer, a batch-normalization layer, an activation layer, a regularization (dropout) layer, and a max-pooling layer; during training it contracts the path of the input image to capture global information, as shown in Fig. 6. Each pass through a down-sampling unit reduces the output image to 1/4 the area of the input. The activation function is the Leaky-ReLU commonly used in deep learning today. It evolved from the ReLU function; compared with ReLU, Leaky-ReLU gives all negative values a small non-zero slope, so the information on the negative axis is not entirely lost. An up-sampling unit consists of an up-sampling layer, a convolution layer, a batch-normalization layer, an activation layer, and a regularization layer, as shown in Fig. 7. Each pass through an up-sampling unit enlarges the output image to 4 times the area of the input. The convolution layer performs two-dimensional convolution, the batch-normalization layer rescales activations toward a normal distribution, the Leaky-ReLU function provides the nonlinear mapping, and the regularization layer discards neuron activations with a certain probability. The output-layer activation function is softmax.
TABLE 1 Hidden-layer parameters

Down-sampling layer | Feature map size | Up-sampling layer | Feature map size | Convolution kernel size
Layer_1 | 48 × 48 | Layer_1 | 6 × 6 | 3 × 3
Layer_2 | 24 × 24 | Layer_2 | 12 × 12 | 3 × 3
Layer_3 | 12 × 12 | Layer_3 | 24 × 24 | 3 × 3
Layer_4 | 6 × 6 | Layer_4 | 48 × 48 | 3 × 3
Compared with the original U-Net model, the method has the following improvements:
First, the up-sampling and down-sampling units of the invention each have 4 layers, each performing 3 lateral convolutions, whereas the standard U-Net model uses 2. This increases the depth of the U-Net, and the three lateral convolutions extract more detail information, which helps preserve tiny vessels.
Second, the batch-normalization layer pulls the activation-function inputs toward a standard normal distribution, so the inputs of the nonlinear transform fall in its sensitive region; this avoids the vanishing-gradient problem common in deep learning and accelerates network convergence.
Third, the invention uses a regularization layer. The neurons in the convolution layer discard activation values with a certain probability, reducing co-adaptation between nodes within each layer and preventing model overfitting. Used together, the batch-normalization and regularization layers speed up model training and strengthen the model's generalization ability and robustness.
Step 5.2: in the model training stage, 90% of data in the training set is selected for training in each training round, and the remaining 10% of data is used for verification. The error is calculated in the model by using a cross entropy cost function, then the cost is minimized by using a random gradient descent mode, and then the updating weight and the bias are propagated reversely.
Step 5.3: inputting the primarily processed test set picture into a trained improved U-Net model for first enhancement;
step 5.4: and performing secondary enhancement on the primary enhanced picture, wherein fig. 8 is an image enhancement process diagram.
First, the primary enhancement result (Gray) is fused with the green channel of the original image and the result is grayed, according to:
X = 0.299·R + 0.587·(0.99·G + 0.01·Gray) + 0.114·B    (1)
A second CLAHE pass is then applied, and the processed image is fed into the improved U-Net model again for the second enhancement. No retraining is needed; the trained model is used directly. Comparing the two U-Net enhancement results, Figs. 9(a) and 9(b) show that the second enhancement further brings out the tiny vessels in the image, increasing the contrast between vessels and background in the output and weakening noise interference. Enhancing picture quality with two rounds of the U-Net model therefore works better than ordinary image preprocessing and greatly reduces the difficulty of the subsequent PCNN vessel segmentation. Finally, the U-Net output blocks are recombined into an enhanced image of the original size.
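Equation (1) can be sketched directly; `first_enhancement` stands for the Gray term (the first U-Net enhancement result), a naming assumption for illustration.

```python
import numpy as np

def fuse_and_gray(rgb, first_enhancement):
    """Equation (1): blend the first enhancement result (Gray) into the
    green channel before the usual grayscale weighting."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * (0.99 * g + 0.01 * first_enhancement) + 0.114 * b

rgb = np.zeros((4, 4, 3)); rgb[..., 1] = 100.0  # pure-green test image
enh = np.full((4, 4), 200.0)                    # hypothetical enhancement map
out = fuse_and_gray(rgb, enh)
print(float(out[0, 0]))  # 0.587*(0.99*100 + 0.01*200) = 0.587*101
```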
Further, the step 6 specifically includes:
the invention constructs the self-adaptive PCNN aiming at the binary segmentation of the gray level image. PCNN was proposed by Eckhorn according to the mammalian visual cortex model, belonging to the third generation neural networks. The PCNN configuration is shown in fig. 10.
The PCNN model is composed of an accepting domain, a modulating domain and a pulse generating domain. The accepting field consists of a connection input and a feedback input; the modulation domain mainly generates internal activity items; the pulse generation domain is composed of a threshold regulator and a pulse generator. When the internal activity item is greater than the dynamic threshold, the neuron fires, 1. The mathematical expression of PCNN is:
L[n] = Σkl Wijkl·Ykl[n-1]    (2)
U[n] = F[n]·(1 + β·L[n])    (3)
Y[n] = 1 if U[n] > θ[n], else 0    (4)
θ[n] = e^(-αθ)·θ[n-1] + Vθ·Y[n]    (5)
where L is the linking input, U the internal activity, F the feedback input, Y the pulse output, θ the dynamic threshold, W the connection weight, β the linking-strength coefficient, αθ the threshold decay factor, Vθ the threshold magnitude coefficient, and n the iteration number. The self-adaptive PCNN divides the image into blocks, computes each block's target/background segmentation threshold T with the Otsu method, and computes the initial firing threshold θ0 and the linking coefficient β from equations (6) and (7):
θ0 = m0 + k1·σ0    (6)
β = (θ0 - T)/(k2·T)    (7)
where θ0 is the PCNN initial firing threshold, T the target/background segmentation threshold, m0 the target pixel mean, σ0 the target pixel standard deviation, and k1, k2 constant coefficients, usually k1 ∈ [1, 2], k2 ∈ [0, 1].
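A sketch of the adaptive parameter estimation: an Otsu threshold T computed on a block's gray values, then θ0 and β from equations (6) and (7). The values of k1 and k2 are illustrative choices inside the stated ranges, and the algebraic form used for equation (7) is an assumption reconstructed from the listed symbols, not confirmed by the patent.

```python
import numpy as np

def otsu_threshold(gray):
    # Maximum between-class variance (Otsu) on gray values in [0, 1),
    # quantized to 256 bins; returns the threshold back in [0, 1).
    hist = np.bincount((gray.ravel() * 255).astype(int), minlength=256)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return np.nanargmax(sigma_b) / 255.0

def adaptive_params(block, T, k1=1.5, k2=0.5):
    # Equation (6): theta0 = m0 + k1*sigma0 over the target pixels (> T).
    # Equation (7) is taken here as beta = (theta0 - T)/(k2*T); this form
    # and the k1, k2 values are assumptions for illustration.
    target = block[block > T]
    m0, s0 = target.mean(), target.std()
    theta0 = m0 + k1 * s0
    beta = (theta0 - T) / (k2 * T)
    return theta0, beta

block = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
T = otsu_threshold(block)
theta0, beta = adaptive_params(block, T)
print(T, theta0, beta)
```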
Further, the step 7 specifically includes:
step 7.1: dividing the gray-level image obtained by the secondary enhancement into a plurality of non-overlapping M × N image blocks;
step 7.2: only the pixel values between 0.3 and 0.7 in each image block are retained, and the other pixel values are set to 0, as shown in FIG. 11. The invention segments the pixels between 0.3 and 0.7 separately because hard thresholding confuses the pixel classes around the 0.5 gray level; moreover, too many 0-gray-level pixels in the vessel image would prevent the Otsu algorithm from finding the optimal threshold.
step 7.3: each image block removes its 0-gray-level pixels, then Otsu is used to calculate the respective target-background segmentation threshold T, from which the initial firing threshold θ0 and the initial connection coefficient β are calculated;
step 7.4: inputting the processed image blocks into the PCNN;
step 7.5: performing iterative segmentation using the initial firing threshold, the connection coefficient and equations (2)-(5);
step 7.6: after the iteration is finished, reconstructing the PCNN segmentation result to the original image size;
step 7.7: in the segmented image, the pixels at positions whose value in the secondary enhanced image is 0.7 or more are set to 1, and those at positions 0.3 or less are set to 0.
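A sketch of the per-block parameter computation in steps 7.2-7.3 follows. The Otsu routine is written out for self-containment; k1 = 1.5, k2 = 0.5, the kernel weight sum w_sum = 6 (matching an assumed 3 × 3 kernel), and the reconstructed form of equation (7) are all assumptions, not the patent's exact values:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method on a 1-D array of gray values in [0, 1]."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # class-0 probability up to each bin
    m = np.cumsum(p * centers)                 # cumulative first moment
    mg = m[-1]                                 # global mean
    var_between = (mg * w0 - m) ** 2 / (w0 * (1 - w0) + 1e-12)
    return centers[np.argmax(var_between)]     # threshold maximizing between-class variance

def adaptive_params(block, k1=1.5, k2=0.5, w_sum=6.0):
    """Equations (6)-(7): initial firing threshold theta0 and connection coefficient beta."""
    vals = block[block > 0]                    # step 7.3: remove the 0-gray-level pixels
    T = otsu_threshold(vals)                   # target-background segmentation threshold
    target = vals[vals >= T]                   # pixels classified as target by T
    m0, s0 = target.mean(), target.std()
    theta0 = m0 + k1 * s0                      # eq. (6)
    beta = (theta0 - T) / (k2 * T * w_sum)     # eq. (7) (reconstructed form, assumed)
    return theta0, beta
```

Each block's (θ0, β) pair would then parameterize the PCNN iteration of equations (2)-(5) for that block.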
The invention provides a retinal vessel segmentation method based on U-Net and self-adaptive PCNN; its structure is shown in figures 5 and 10.
(I): The method uses the improved U-Net model as a preprocessing neural network to enhance the retinal vessels twice; the model structure is shown in figure 5. The specific method comprises the following steps: 1) primary vessel enhancement: the original picture is grayed, CLAHE-equalized and blocked; an improved U-Net network model is built, in which each cyclic unit uses three transverse convolutions and introduces a batch normalization layer and a regularization layer to prevent gradient vanishing and overfitting and to accelerate training; the U-Net model is then trained and used to complete the first vessel enhancement; 2) secondary vessel enhancement: the data set image enhanced once by the improved U-Net model is fused with the original color image, CLAHE equalization is applied, and the result is input into the trained improved U-Net model again. Performing vessel enhancement twice with the U-Net model increases the contrast between the vessels and the background, makes microvessels invisible to the naked eye clearly visible even in dark areas, and yields a gray image with more detailed information, as shown in fig. 9.
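The graying, CLAHE and fusion steps of this two-pass scheme can be sketched as follows; `unet_enhance` stands for the trained improved U-Net of step 5, and the tile-wise equalization below is a simplified stand-in for CLAHE (a real pipeline would typically use OpenCV's `cv2.createCLAHE`), with illustrative tile count, clip limit, and fusion weights:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted graying of an RGB fundus image (H, W, 3) in [0, 1]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def tile_equalize(gray, tiles=8, clip=0.01):
    """Tile-wise histogram equalization with a clip limit: a rough CLAHE stand-in."""
    out = np.empty_like(gray)
    hs, ws = [np.array_split(np.arange(n), tiles) for n in gray.shape]
    for ri in hs:
        for ci in ws:
            tile = gray[np.ix_(ri, ci)]
            hist, _ = np.histogram(tile, bins=256, range=(0.0, 1.0))
            p = hist / tile.size
            p = np.minimum(p, clip)            # clip-limit the histogram
            p /= p.sum()
            cdf = np.cumsum(p)
            idx = np.clip((tile * 255).astype(int), 0, 255)
            out[np.ix_(ri, ci)] = cdf[idx]     # remap through the clipped CDF
    return out

def two_pass_enhance(rgb, unet_enhance):
    first = unet_enhance(tile_equalize(to_gray(rgb)))   # 1) primary vessel enhancement
    fused = 0.5 * first + 0.5 * to_gray(rgb)            # 2) fuse with the original image
    return unet_enhance(tile_equalize(fused))           # secondary vessel enhancement
```

The same preprocessing (gray + CLAHE) thus precedes both U-Net passes, with the fusion step injecting the original image's information back before the second pass.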
(II): The method combines U-Net and self-adaptive PCNN into a retinal vessel segmentation method. The improved U-Net model enhances the picture quality, and then adaptive PCNN segmentation is performed on the vessels. The adaptive PCNN only lets pixels between 0.3 and 0.7 enter the PCNN for segmentation: pixels outside this range have a high probability of being correctly classified, while pixels in between easily confuse the classes and are therefore segmented separately. The related PCNN parameters are calculated with the Otsu algorithm and the formulas, and PCNN segmentation is performed. Finally, in the segmentation result, pixels above 0.7 are set to 1 and pixels below 0.3 are set to 0, and the microvessel information is extracted.
Fig. 12(a) to 12(d) compare the four segmentation methods on one image of a data set. FIG. 13 shows the average performance of the four segmentation methods on a data set. Compared with the traditional method combining a U-Net network with a hard threshold, the improved U-Net, the twice U-Net enhancement, and the combination of twice U-Net enhancement with PCNN proposed by the invention significantly improve each index of the segmentation result.

Claims (7)

1. A retinal vessel segmentation method combining U-Net and adaptive PCNN, characterized in that: firstly, an original color fundus image is preprocessed; then the deep learning model is trained with the preprocessed data set; next, secondary enhancement is performed with the improved U-Net model: the primary enhancement result is fused with the original color image, grayed and CLAHE-processed, and the picture is input into the improved U-Net model to enhance the picture quality; the picture is input into the U-Net again to improve the overall picture quality; finally, the target and background segmentation threshold is obtained by the Otsu algorithm, the related adaptive parameters are obtained by formula, and vessel segmentation is performed with PCNN.
2. A retinal vessel segmentation method combining U-Net and adaptive PCNN is characterized by comprising the following steps:
step 1: data augmentation is carried out on an eye fundus image database selected in an experiment;
step 2: carrying out gray processing on the data set picture;
step 3: CLAHE processing is carried out on the data set pictures to increase the contrast between retinal vessels and the background;
step 4: image blocking;
step 5: building and training the U-Net neural network model and enhancing the pictures;
step 6: building the adaptive PCNN neural network model;
step 7: performing vessel segmentation using the adaptive PCNN.
3. The method of claim 2, wherein the method of retinal vessel segmentation by combination of U-Net and adaptive PCNN comprises: the step 1 comprises the following steps:
step 1.1: horizontally flipping each picture in the three data sets DRIVE, STARE and CHASE_DB1;
step 1.2: vertically turning over the data set picture;
step 1.3: performing Gamma transformation on the data set pictures.
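Steps 1.1-1.3 can be sketched as follows; the gamma values are illustrative assumptions (the claim does not fix them), and images are assumed to be float arrays normalized to [0, 1]:

```python
import numpy as np

def augment(image, gammas=(0.8, 1.2)):
    """Return the augmented variants of one fundus image (steps 1.1-1.3).

    Works for grayscale (H, W) or color (H, W, 3) arrays in [0, 1].
    """
    variants = [
        np.flip(image, axis=1),                          # step 1.1: horizontal flip
        np.flip(image, axis=0),                          # step 1.2: vertical flip
    ]
    variants += [np.power(image, g) for g in gammas]     # step 1.3: Gamma transformation
    return variants
```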
4. The method of claim 2, wherein the method of retinal vessel segmentation by combination of U-Net and adaptive PCNN comprises: the step 4 comprises the following steps:
step 4.1: the training set picture blocks are 48 × 48 patches randomly selected from the pictures;
step 4.2: the test set pictures are partitioned by taking one image block every 5 pixels, dividing each image into overlapping 48 × 48 image blocks; an insufficient number of pixels at the edge is prevented by padding with 0-gray-level pixels.
5. The method of claim 2, wherein the method of retinal vessel segmentation by combination of U-Net and adaptive PCNN comprises: the step 5 comprises the following steps:
step 5.1: aiming at the problem of uneven image quality in the data set, a U-Net model is built as a preprocessing neural network to improve the picture quality; the U-Net model is a fully convolutional neural network with a U-shaped symmetric structure, comprising a downsampling unit and an upsampling unit,
the downsampling unit consists of a first convolution layer, a first batch normalization layer, a first activation layer, a first regularization layer and a maximum pooling layer, and contracts the path of the input image during network training so as to capture global information; each pass through the downsampling unit shrinks the output image to 1/4 of the input image, and the activation function is the Leaky-ReLU function;
the upsampling unit consists of an upsampling layer, a second convolution layer, a second batch normalization layer, a second activation layer and a second regularization layer, and each pass through the upsampling unit enlarges the output image to 4 times the input image; the second batch normalization layer normalizes the output of the second convolution layer toward a normal distribution, the Leaky-ReLU activation function performs the nonlinear mapping, the second regularization layer discards neuron activation values with a certain probability, and the output-layer activation function is the softmax function;
step 5.2: in the U-Net model training stage, 90% of the data in the training set is used for training in each round and the remaining 10% for validation; the U-Net model calculates the error with a cross-entropy cost function, minimizes the cost by stochastic gradient descent, and updates the weights and biases by back propagation;
step 5.3: inputting the preliminarily processed test set pictures into the trained improved U-Net model for the first enhancement;
step 5.4: performing the secondary enhancement on the primarily enhanced pictures.
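Since the claim names no framework, one down/up level of the described units might be sketched in PyTorch as follows; the channel widths (16/32), dropout probability, and Leaky-ReLU slope are assumed values, and the real model would stack several such levels symmetrically:

```python
import torch
import torch.nn as nn

def conv_unit(c_in, c_out, p_drop=0.2):
    """Convolution + batch normalization + Leaky-ReLU + regularization (dropout)."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),       # normalize the conv output toward N(0, 1)
        nn.LeakyReLU(0.01),          # nonlinear mapping
        nn.Dropout2d(p_drop),        # discard activations with a certain probability
    )

class MiniUNet(nn.Module):
    """One down/up level of the improved U-Net, for illustration only."""
    def __init__(self):
        super().__init__()
        self.down = conv_unit(1, 16)
        self.pool = nn.MaxPool2d(2)             # output area shrinks to 1/4
        self.mid = conv_unit(16, 32)
        self.up = nn.Upsample(scale_factor=2)   # output area grows to 4x
        self.fuse = conv_unit(32 + 16, 16)      # skip connection from the down path
        self.out = nn.Conv2d(16, 2, 1)          # 2-class map for the softmax output

    def forward(self, x):
        d = self.down(x)
        m = self.up(self.mid(self.pool(d)))
        y = self.out(self.fuse(torch.cat([m, d], dim=1)))
        return torch.softmax(y, dim=1)          # output-layer activation: softmax
```

In training (step 5.2), the two-channel softmax output would be compared against the vessel labels with a cross-entropy cost and minimized by stochastic gradient descent, e.g. `torch.optim.SGD`.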
6. The method of claim 2, wherein the method of retinal vessel segmentation by combination of U-Net and adaptive PCNN comprises: the step 6 specifically comprises the following steps:
aiming at binary segmentation of a gray level image, a self-adaptive PCNN model is constructed, wherein the PCNN model consists of a receiving domain, a modulation domain and a pulse generation domain; the accept domain consists of a connection input L and a feedback input F; the modulation domain mainly generates an internal activity item U; the pulse generation domain consists of a threshold regulator and a pulse generator; when the internal activity item U is larger than the dynamic threshold theta, the neuron fires, and Y is 1; the mathematical expression of PCNN is:
L[n] = Σ_kl W_ijkl Y_kl[n−1]    (2)
U[n] = F[n](1 + βL[n])    (3)
Y[n] = 1, if U[n] > θ[n−1]; otherwise Y[n] = 0    (4)
θ[n] = e^(−αθ) θ[n−1] + Vθ Y[n]    (5)
the self-adaptive PCNN divides an image into a plurality of blocks, calculates the target background segmentation threshold T of each block by using an OTSU method, and calculates the initial ignition threshold theta by using formulas (6) and (7)0And a connection coefficient beta of the connection of the first and second,
θ0 = m0 + k1σ0    (6)
β = (θ0 − T) / (k2 T Σ_kl W_ijkl)    (7)
wherein k1, k2 are constant coefficients, usually k1 ∈ [1, 2], k2 ∈ [0, 1].
7. The method of claim 2, wherein the method of retinal vessel segmentation by combination of U-Net and adaptive PCNN comprises: the step 7 comprises the following steps:
step 7.1: dividing the gray-level image obtained by the secondary enhancement into a plurality of non-overlapping M × N image blocks;
step 7.2: only the pixel values between 0.3 and 0.7 in each image block are retained, and the other pixel values are set to 0;
step 7.3: each image block removes its 0-gray-level pixels, Otsu is used to calculate the respective target-background segmentation threshold T, and the initial firing threshold θ0 and the initial connection coefficient β are then calculated;
step 7.4: inputting the processed image blocks into the PCNN;
step 7.5: performing iterative segmentation using the initial firing threshold, the connection coefficient and equations (2)-(5);
step 7.6: after the iteration is finished, reconstructing the PCNN segmentation result to the original image size;
step 7.7: in the segmented image, the pixels at positions whose value in the secondary enhanced image is 0.7 or more are set to 1, and those at positions 0.3 or less are set to 0.
CN202010524251.5A 2020-06-10 2020-06-10 Retina blood vessel segmentation method combining U-Net and self-adaptive PCNN Active CN111815562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010524251.5A CN111815562B (en) 2020-06-10 2020-06-10 Retina blood vessel segmentation method combining U-Net and self-adaptive PCNN


Publications (2)

Publication Number Publication Date
CN111815562A true CN111815562A (en) 2020-10-23
CN111815562B CN111815562B (en) 2024-04-09

Family

ID=72845660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010524251.5A Active CN111815562B (en) 2020-06-10 2020-06-10 Retina blood vessel segmentation method combining U-Net and self-adaptive PCNN

Country Status (1)

Country Link
CN (1) CN111815562B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448006A (en) * 2018-11-01 2019-03-08 江西理工大学 A kind of U-shaped intensive connection Segmentation Method of Retinal Blood Vessels of attention mechanism
CN109727259A (en) * 2019-01-07 2019-05-07 哈尔滨理工大学 A kind of retinal images partitioning algorithm based on residual error U-NET network
CN109859146A (en) * 2019-02-28 2019-06-07 电子科技大学 A kind of colored eye fundus image blood vessel segmentation method based on U-net convolutional neural networks
CN110197493A (en) * 2019-05-24 2019-09-03 清华大学深圳研究生院 Eye fundus image blood vessel segmentation method


Non-Patent Citations (1)

Title
WU Chen; YI Benshun; ZHANG Yungang; HUANG Song; FENG Yu: "Retinal vessel image segmentation based on an improved convolutional neural network", Acta Optica Sinica, no. 11, pages 133 - 139 *

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN112258486A (en) * 2020-10-28 2021-01-22 汕头大学 Retinal vessel segmentation method for fundus image based on evolutionary neural architecture search
CN112716446A (en) * 2020-12-28 2021-04-30 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy
CN112716446B (en) * 2020-12-28 2023-03-24 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy
CN112884770A (en) * 2021-04-28 2021-06-01 腾讯科技(深圳)有限公司 Image segmentation processing method and device and computer equipment
CN112884770B (en) * 2021-04-28 2021-07-02 腾讯科技(深圳)有限公司 Image segmentation processing method and device and computer equipment
CN113191987A (en) * 2021-05-31 2021-07-30 齐鲁工业大学 Palm print image enhancement method based on PCNN and Otsu
CN116246067A (en) * 2023-01-12 2023-06-09 兰州交通大学 CoA Unet-based medical image segmentation method
CN116246067B (en) * 2023-01-12 2023-10-27 兰州交通大学 CoA Unet-based medical image segmentation method
CN116087036A (en) * 2023-02-14 2023-05-09 中国海洋大学 Device for identifying images of sediment plume of deep sea mining and image analysis method
CN116087036B (en) * 2023-02-14 2023-09-22 中国海洋大学 Device for identifying images of sediment plume of deep sea mining and image analysis method
CN116580008A (en) * 2023-05-16 2023-08-11 山东省人工智能研究院 Biomedical marking method based on local augmentation space geodesic
CN116580008B (en) * 2023-05-16 2024-01-26 山东省人工智能研究院 Biomedical marking method based on local augmentation space geodesic

Also Published As

Publication number Publication date
CN111815562B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN111815562B (en) Retina blood vessel segmentation method combining U-Net and self-adaptive PCNN
CN109345538B (en) Retinal vessel segmentation method based on convolutional neural network
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN111815574B (en) Fundus retina blood vessel image segmentation method based on rough set neural network
CN108986106B (en) Automatic segmentation method for retinal blood vessels for glaucoma
Lim et al. Integrated optic disc and cup segmentation with deep learning
CN112132817B (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN108268859A (en) A kind of facial expression recognizing method based on deep learning
CN106530283A (en) SVM (support vector machine)-based medical image blood vessel recognition method
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
Mahapatra et al. A CNN based neurobiology inspired approach for retinal image quality assessment
CN111815563B (en) Retina optic disc segmentation method combining U-Net and region growing PCNN
Chen et al. Cell nuclei detection and segmentation for computational pathology using deep learning
Chiem et al. A novel hybrid system for skin lesion detection
CN104463215A (en) Tiny aneurysm occurrence risk prediction system based on retina image processing
CN107229937A (en) A kind of retinal vessel sorting technique and device
Abiyev et al. Fuzzy neural networks for identification of breast cancer using images' shape and texture features
Ma et al. Retinal vessel segmentation based on generative adversarial network and dilated convolution
Milletari et al. Robust segmentation of various anatomies in 3d ultrasound using hough forests and learned data representations
Fan et al. Automated blood vessel segmentation in fundus image based on integral channel features and random forests
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
Sreng et al. Feature extraction from retinal fundus image for early detection of diabetic retinopathy
Upadhyay et al. Wavelet based fine-to-coarse retinal blood vessel extraction using U-net model
CN109272004B (en) Influenza strain egg embryo viability detection method based on convolutional neural network model
Ghosh et al. Classification of diabetic retinopathy using few-shot transfer learning from imbalanced data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20240201
Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Tongsheng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000
Applicant after: Shenzhen Wanzhida Enterprise Management Co.,Ltd.
Country or region after: China
Address before: 443002 No. 8, University Road, Xiling District, Yichang, Hubei
Applicant before: CHINA THREE GORGES University
Country or region before: China
TA01 Transfer of patent application right
Effective date of registration: 20240314
Address after: 400000, 2nd Floor, No. 27-5 Fengsheng Road, Jinfeng Town, Chongqing High tech Zone, Jiulongpo District, Chongqing
Applicant after: CHONGQING BIO NEWVISION MEDICAL EQUIPMENT Ltd.
Country or region after: China
Address before: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Tongsheng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000
Applicant before: Shenzhen Wanzhida Enterprise Management Co.,Ltd.
Country or region before: China
GR01 Patent grant