WO2021253939A1 - A rough set neural network method for fundus retinal blood vessel image segmentation - Google Patents

A rough set neural network method for fundus retinal blood vessel image segmentation

Info

Publication number
WO2021253939A1
WO2021253939A1 (PCT/CN2021/086437; CN2021086437W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
neural network
retinal blood
pixel
Prior art date
Application number
PCT/CN2021/086437
Other languages
English (en)
French (fr)
Inventor
丁卫平
孙颖
鞠恒荣
张毅
冯志豪
李铭
万杰
曹金鑫
Original Assignee
南通大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南通大学 (Nantong University)
Publication of WO2021253939A1


Classifications

    • G06T7/11 Region-based segmentation
    • G06T7/0012 Biomedical image inspection
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/008 Artificial life based on physical entities controlled by simulated intelligence, e.g. robots
    • G06N3/04 Neural networks: architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T7/143 Segmentation; edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30041 Eye; retina; ophthalmic
    • G06T2207/30101 Blood vessel; artery; vein; vascular

Definitions

  • The invention relates to the technical field of medical image processing, and in particular to a rough set neural network method for segmenting fundus retinal blood vessel images.
  • Retinal blood vessels in fundus images are of great significance and value to doctors in the early diagnosis of diabetic cardio-cerebrovascular diseases and various ophthalmic diseases.
  • Retinal blood vessels have a complex structure and are also susceptible to lighting conditions in the acquisition environment.
  • Clinically, manual segmentation of retinal blood vessels is not only a huge workload but also demands a high level of experience and skill from medical staff.
  • The present invention provides a rough set neural network method for segmenting fundus retinal blood vessel images. It reduces the workload of medical staff, avoids the discrepancies that differences in staff experience and skill introduce into segmentations of the same fundus image, and segments color fundus retinal blood vessel images effectively, achieving higher segmentation accuracy and efficiency.
  • A rough set neural network method for segmenting fundus retinal blood vessel images includes the following steps. S10, image preprocessing: each standard RGB color fundus retinal blood vessel image of size M×M×3 is preprocessed for image enhancement using rough set theory to obtain a rough-set-enhanced fundus retinal blood vessel image. S20, constructing a U-net neural network model: the rough-set-enhanced image is segmented to obtain a segmentation map, and the error between the segmentation map and the standard segmentation label map corresponding to the standard RGB image serves as the error function of the constructed U-net neural network, yielding the U-net neural network model. S30, optimization training of the U-net model with the particle swarm optimization (PSO) algorithm.
  • PSO: particle swarm optimization algorithm.
  • The rough-set-enhanced fundus retinal blood vessel images are used as particles; through continuous iteration of the particle swarm, the optimal population of particles is obtained, and the U-net parameters are adjusted by gradient descent to obtain the PSO-U-net neural network model. S40, the color fundus retinal blood vessel image to be tested is preprocessed for image enhancement using rough set theory, and the PSO-U-net neural network model then segments the image to be tested.
  • the U-net neural network model includes an input layer, a convolution layer, a ReLU nonlinear layer, a pooling layer, a deconvolution layer, and an output layer.
  • Step S10 includes the following steps. S11: each standard RGB color fundus retinal blood vessel image of size M×M×3 is stored as three matrices of size M×M, denoted R*, G*, and B*; each value in a matrix is the component value of one color channel at one pixel. The HSI model is built from R*, G*, and B*, where H denotes hue, S denotes saturation, and I denotes intensity (brightness).
  • S12: the intensity component I is equivalent to the grayscale image of the fundus retinal blood vessel image and is treated as an image information system; rough set theory is used for preprocessing, with the two-dimensional M×M fundus retinal image as the universe of discourse U.
  • Each pixel x in the fundus retinal image represents an object in U.
  • The gray value of pixel x is denoted f(m,n), where (m,n) indicates that pixel x lies at row m, column n.
  • avg(S_i,j) denotes the average pixel value of sub-block S_i,j; a pixel is noisy when the absolute difference between adjacent sub-block averages exceeds the noise threshold β.
  • S15: based on the two conditional attributes c1 and c2, the set to which each pixel belongs is determined and used as the basis for decision classification, and the original fundus retinal blood vessel image P is divided into sub-images.
  • According to the conditional attributes c1 and c2, the original image is divided into a brighter noise-free sub-image P1, a bright-region edge-noise sub-image P2, a darker noise-free sub-image P3, and a dark-region edge-noise sub-image P4.
  • The brighter noise-free sub-image P1 is completed: at all darker and noisy pixel positions it is filled with the gray threshold α and the noise threshold β, respectively, forming P1′.
  • The darker noise-free sub-image P3 is completed likewise: at all brighter and noisy pixel positions it is filled with the gray threshold α and the noise threshold β, respectively, forming P3′.
  • S16: P1′ and P3′ are each enhanced, with histogram equalization applied to P1′ and a histogram exponential transform to P3′,
  • and the two transformed images are overlaid to obtain the enhanced fundus retinal blood vessel image P′, which is then normalized.
  • A rough-set-enhanced fundus retinal blood vessel image is obtained, where x_i denotes the i-th pixel value of the image and min(x) and max(x) denote the minimum and maximum pixel values, respectively.
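The sub-image decision rules of steps S13-S15 can be sketched in code. A minimal NumPy illustration follows; it assumes 2×2 sub-blocks are compared with their right-hand neighbours (the text does not pin down which adjacent sub-block is used) and encodes the decision classes d1-d4 as integers 1-4:

```python
import numpy as np

def classify_pixels(gray, alpha, beta):
    """Assign each pixel a decision class d1..d4 (encoded 1..4) from the
    two conditional attributes of steps S13-S15.

    c1 = 1 if the gray value exceeds the gray threshold alpha (pixel is
    in the "brighter" set); c2 = 1 if the mean of the pixel's 2x2
    sub-block differs from the mean of an adjacent sub-block by more
    than the noise threshold beta (pixel is "edge noise").
    """
    m, n = gray.shape                       # m, n assumed even
    # mean gray value of each 2x2 sub-block
    blocks = gray.reshape(m // 2, 2, n // 2, 2).mean(axis=(1, 3))
    # compare each sub-block with its right neighbour (an assumed
    # convention; the last column reuses its left neighbour)
    right = np.roll(blocks, -1, axis=1)
    right[:, -1] = blocks[:, -2]
    noisy_block = np.abs(blocks - right) > beta
    c2 = np.repeat(np.repeat(noisy_block, 2, axis=0), 2, axis=1)
    c1 = gray > alpha
    # d1 bright/no-noise, d2 bright/edge-noise,
    # d3 dark/no-noise,   d4 dark/edge-noise
    return np.where(c1, np.where(c2, 2, 1), np.where(c2, 4, 3))
```

The four class masks directly give the sub-images P1 to P4; filling the complement of P1 with α and β as in step S15 then yields P1′.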
  • Step S20 includes the following steps. S21 extracts features from the rough-set-enhanced fundus retinal blood vessel image by down-sampling: a 3×3 convolution kernel performs 2 convolution operations on the input image, the ReLU activation function applies a nonlinear transformation, and a 2×2 pooling operation follows; this is repeated 4 times, and in the first 3×3 convolution after each pooling the number of 3×3 kernels doubles. Two further 3×3 convolution operations then complete the down-sampling feature extraction. The convolutional layer is computed as x_j^n = f( Σ_{i∈M_j} x_i^{n−1} ∗ k_{i,j}^n + b_j^n ) and the pooling layer as x_j^n = f( β·down(x_j^{n−1}) + b_j^n ), where:
  • M_j denotes the set of input feature maps;
  • f() denotes the activation function;
  • β is the weight constant of the down-sampling-layer feature map;
  • down() is the down-sampling function.
  • S22 operates by up-sampling: first, 2 deconvolution operations of size 3×3 are performed, the image of the max-pooling layer is copied and cropped, and it is concatenated with the deconvolved image; then a 3×3 convolution operation follows, repeated 4 times, and in the first 3×3 convolution after each concatenation the number of 3×3 kernels halves. Finally, two 3×3 convolutions and one 1×1 convolution complete the up-sampling process. S23: after the down-sampling and up-sampling processes, the segmentation map produced by the forward computation of the U-net neural network
  • undergoes an error calculation against the standard segmentation label map corresponding to the standard RGB color fundus retinal blood vessel image; the error function is E = (1/T)·Σ_{t=1}^{T} Σ_i ( y_out_t(i) − y_true_t(i) )², where:
  • T denotes the number of fundus image samples input to the U-net neural network;
  • y_out_t(i) denotes the gray value of the i-th pixel in the t-th fundus retinal image sample output by the U-net neural network;
  • y_true_t(i) denotes the gray value of the i-th pixel in the t-th fundus retinal image label.
  • Step S23 sets an error threshold of 0.1. When the error is not greater than the threshold, the required U-net neural network model is obtained; when the error is greater than the threshold, back-propagation per the gradient descent algorithm adjusts the network weights, and steps S21 to S22 repeat the forward computation until the error is not greater than the threshold.
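A small sketch of the error computation and the 0.1 stopping test of step S23. The mean-squared form of the error function is an assumption here, since the formula itself appears only as an image in the source, and the small arrays stand in for network outputs:

```python
import numpy as np

ERROR_THRESHOLD = 0.1  # per step S23

def segmentation_error(y_out, y_true):
    """Error between network outputs and label maps over T samples.

    A squared-error reading of the error function (assumed form):
    E = (1/T) * sum_t sum_i (y_out_t(i) - y_true_t(i))^2
    y_out, y_true: arrays of shape (T, num_pixels).
    """
    T = y_out.shape[0]
    return float(np.sum((y_out - y_true) ** 2) / T)

# two samples of two pixels each (toy values)
y_out = np.array([[0.9, 0.1],
                  [0.8, 0.2]])
y_true = np.array([[1.0, 0.0],
                   [1.0, 0.0]])
err = segmentation_error(y_out, y_true)
converged = err <= ERROR_THRESHOLD  # if False, backpropagate and repeat S21-S22
```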
  • S32: for each particle, the down-sampling and up-sampling processes are completed in the U-net model, the U-net neural network error function serves as the fitness function of the particle swarm, the fitness of each particle is computed and sorted in ascending order, and the best position pbest of each particle and the best position gbest of the entire swarm are obtained. S33: if the minimum of the error-threshold range has been reached, training has converged and the run stops; otherwise the position and velocity of each particle continue to be updated per formulas (8) and (9):
  • v′_in = w·v_in + σ1·rand()·(pbest_in − x_in) + σ2·rand()·(gbest_in − x_in)   (8)
  • x′_in = x_in + v′_in   (9)
  • v_in and x_in denote the current velocity and position of particle i;
  • v′_in and x′_in denote the updated velocity and position of particle i;
  • w is the inertia weight;
  • σ1 and σ2 are acceleration constants;
  • rand() is a random function over the interval [0,1].
  • S34 sends the updated particles back to the U-net neural network, updates the connection weights to be trained, performs the down-sampling and up-sampling processes again, and computes the error.
  • S35 splits the obtained best position gbest of the particle swarm and maps it onto the weights and thresholds of the U-net neural network model; the particle swarm optimization algorithm PSO thereby completes the whole process of optimizing the U-net weights.
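The particle updates of formulas (8) and (9) can be written directly in NumPy. A sketch, with illustrative values for w, σ1, and σ2 (the text does not fix them):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, sigma1=1.5, sigma2=1.5):
    """One swarm update per formulas (8) and (9):
        v' = w*v + sigma1*rand()*(pbest - x) + sigma2*rand()*(gbest - x)
        x' = x + v'
    x, v: (H, n) positions and velocities of the H particles; each
    particle encodes the n U-net connection weights and thresholds.
    """
    v_new = (w * v
             + sigma1 * rng.random() * (pbest - x)
             + sigma2 * rng.random() * (gbest - x))
    x_new = x + v_new
    return x_new, v_new
```

After each update the particles are sent back to the U-net (step S34), and gbest is finally split into the network's weights and thresholds (step S35).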
  • The rough set neural network method for fundus retinal blood vessel image segmentation of the present invention reduces the workload of medical staff, avoids the discrepancies that differences in staff experience and skill introduce into segmentations of the same fundus image, and effectively segments color fundus retinal blood vessel images, achieving higher segmentation accuracy and efficiency.
  • FIG. 1 is a flowchart of a rough set neural network method for segmentation of fundus retinal blood vessel images according to an embodiment of the present invention
  • FIG. 2 shows a detailed flowchart of a rough set neural network method for segmentation of fundus retinal blood vessel images according to an embodiment of the present invention
  • FIG. 3 is a structural diagram of a U-net neural network model according to an embodiment of the present invention.
  • This embodiment provides a rough set neural network method for segmenting fundus retinal blood vessel images, as shown in Figs. 1 and 2.
  • The standard RGB color fundus retinal blood vessel image is preprocessed for image enhancement using rough set theory, and a rough-set-based enhanced fundus retinal blood vessel image is obtained.
  • S20 constructs a U-net neural network model, performs segmentation on the rough-set-enhanced fundus retinal vascular image to obtain a segmentation map, and uses the error between the segmentation map and the standard segmentation label map corresponding to the standard RGB color fundus retinal vascular image as the error function of the constructed U-net neural network, obtaining the U-net neural network model.
  • S30 uses the particle swarm optimization (PSO) algorithm to optimize the training of the U-net model, takes the rough-set-enhanced fundus retinal blood vessel images as particles, and iterates the particle swarm continuously to obtain the optimal population of particles.
  • Gradient descent adjusts the parameters of the U-net neural network to obtain the PSO-U-net neural network model.
  • S40: the color fundus retinal blood vessel image to be tested is preprocessed for image enhancement using rough set theory, and the PSO-U-net neural network model then segments it.
  • Step S10 includes the following steps. S11 stores each standard RGB color fundus retinal blood vessel image of size M×M×3 as three M×M matrices, denoted R*, G*, and B*; each value in a matrix is the component value of one color channel at one pixel. The HSI model is built from R*, G*, and B*, where H denotes hue, S denotes saturation, and I denotes intensity:
  • S12: the intensity component I is equivalent to the grayscale image of the fundus retinal blood vessel image and is treated as an image information system; rough set theory is used for preprocessing, with the two-dimensional M×M fundus retinal image as the universe of discourse U.
  • Each pixel x in the fundus retinal image represents an object in U.
  • The gray value of pixel x is denoted f(m,n), where (m,n) indicates that pixel x lies at row m, column n.
  • Enhancement transforms are applied to P1′ and P3′: histogram equalization to P1′ and a histogram exponential transform to P3′; the two transformed images are overlaid to obtain the enhanced fundus retinal blood vessel image P′, which is normalized per formula (4) as follows:
  • A rough-set-enhanced fundus retinal blood vessel image is obtained, where x_i denotes the i-th pixel value of the image and min(x) and max(x) denote the minimum and maximum pixel values, respectively.
  • The U-net neural network model includes an input layer, convolution layers, ReLU nonlinear layers, pooling layers, deconvolution layers, and an output layer.
  • Step S20 includes the following steps. S21 extracts features from the rough-set-enhanced fundus retinal blood vessel image by down-sampling: a 3×3 convolution kernel performs 2 convolution operations on the input image, the ReLU activation function applies a nonlinear transformation, and a 2×2 pooling operation follows; this is repeated 4 times, and in the first 3×3 convolution after each pooling the number of 3×3 kernels doubles. Two further 3×3 convolution operations then complete the down-sampling feature extraction.
  • M_j denotes the set of input feature maps;
  • f() denotes the activation function;
  • β is the weight constant of the down-sampling-layer feature map;
  • down() is the down-sampling function.
  • S22 operates by up-sampling: first, 2 deconvolution operations of size 3×3 are performed, the image of the max-pooling layer is copied and cropped, and it is concatenated with the deconvolved image; then a 3×3 convolution operation follows, repeated 4 times, and in the first 3×3 convolution after each concatenation the number of 3×3 kernels halves. Finally, two 3×3 convolutions and one 1×1 convolution complete the up-sampling process.
  • S23: the segmentation map obtained by the forward computation of the U-net neural network undergoes an error calculation against the standard segmentation label map corresponding to the standard RGB color fundus retinal blood vessel image.
  • The error function is E = (1/T)·Σ_{t=1}^{T} Σ_i ( y_out_t(i) − y_true_t(i) )², where:
  • T denotes the number of fundus image samples input to the U-net neural network;
  • y_out_t(i) denotes the gray value of the i-th pixel in the t-th fundus retinal image sample output by the U-net neural network;
  • y_true_t(i) denotes the gray value of the i-th pixel in the t-th fundus retinal image label.
  • The U-net neural network error function is used as the fitness function of the particle swarm; the fitness of each particle is computed and sorted in ascending order to obtain the best position pbest of each particle and the best position gbest of the entire particle swarm.
  • v′_in = w·v_in + σ1·rand()·(pbest_in − x_in) + σ2·rand()·(gbest_in − x_in)   (8)
  • v_in and x_in denote the current velocity and position of particle i;
  • v′_in and x′_in denote the updated velocity and position of particle i;
  • w is the inertia weight;
  • σ1 and σ2 are acceleration constants;
  • rand() is a random function over the interval [0,1].
  • S34 sends the updated particles back to the U-net neural network, updates the connection weights to be trained, performs the down-sampling and up-sampling processes again, and computes the error.
  • S35 splits the obtained best position gbest of the particle swarm and maps it onto the weights and thresholds of the U-net neural network model; the particle swarm optimization algorithm PSO thereby completes the whole process of optimizing the U-net weights.


Abstract

The invention provides a rough set neural network method for fundus retinal blood vessel image segmentation, comprising the following steps: S10, image preprocessing to obtain a rough-set-enhanced fundus retinal blood vessel image; S20, constructing a U-net neural network model; S30, optimizing and training the U-net neural network model with the particle swarm optimization (PSO) algorithm to obtain a PSO-U-net neural network model; and S40, preprocessing the color fundus retinal blood vessel image to be tested for image enhancement using rough set theory and then segmenting it with the PSO-U-net neural network model. The rough set neural network method of the invention reduces the workload of medical staff, avoids the discrepancies that differences in staff experience and skill introduce into segmentations of the same fundus image, and effectively segments color fundus retinal blood vessel images with higher segmentation accuracy and efficiency.

Description

A rough set neural network method for fundus retinal blood vessel image segmentation. Technical field
The invention relates to the technical field of medical image processing, and in particular to a rough set neural network method for segmenting fundus retinal blood vessel images.
Background art
The health of the retinal blood vessels in fundus images is of great significance and value to doctors in the early diagnosis of diabetic cardio-cerebrovascular diseases and various ophthalmic diseases. However, retinal vessels have a complex structure and are easily affected by lighting conditions in the acquisition environment, so clinical manual segmentation of retinal vessels is not only a huge workload but also demands considerable experience and skill from medical personnel. Moreover, different medical personnel may segment the same fundus image differently, so manual segmentation can no longer meet clinical needs.
With the continuous development of computer technology, automatic segmentation of fundus retinal blood vessel images using artificial intelligence can effectively assist early diagnosis and decision-making for ophthalmic diseases, and it has become a research hotspot for scholars at home and abroad. Convolutional neural network models in deep learning have unique advantages in medical image processing thanks to their special structure of local perception and parameter sharing. Because image information has strong spatial complexity and correlation, and image processing encounters problems of incompleteness and uncertainty, applying rough set theory to image processing achieves better results than traditional methods in many situations.
Summary of the invention
To solve the above problems, the invention provides a rough set neural network method for fundus retinal blood vessel image segmentation, which reduces the workload of medical staff, avoids the discrepancies that differences in staff experience and skill introduce into segmentations of the same fundus image, and effectively segments color fundus retinal blood vessel images with higher segmentation accuracy and efficiency.
To achieve the above objectives, the invention adopts the following technical solution:
A rough set neural network method for fundus retinal blood vessel image segmentation, comprising the following steps. S10, image preprocessing: each standard RGB color fundus retinal blood vessel image of size M×M×3 is preprocessed for image enhancement using rough set theory to obtain a rough-set-enhanced fundus retinal blood vessel image. S20, constructing a U-net neural network model: the rough-set-enhanced image is segmented to obtain a segmentation map, and the error between the segmentation map and the standard segmentation label map corresponding to the standard RGB image serves as the error function of the constructed U-net neural network, yielding the U-net neural network model. S30, using the particle swarm optimization (PSO) algorithm to optimize and train the U-net neural network model: the rough-set-enhanced fundus retinal blood vessel images serve as particles; through continuous iteration of the particle swarm the optimal population of particles is obtained, and the U-net parameters are adjusted by gradient descent to obtain the PSO-U-net neural network model. S40, the color fundus retinal blood vessel image to be tested is preprocessed for image enhancement using rough set theory, and the PSO-U-net neural network model then segments it.
Further, the U-net neural network model includes an input layer, convolution layers, ReLU nonlinear layers, pooling layers, deconvolution layers, and an output layer.
Further, step S10 includes the following steps. S11: each standard RGB color fundus retinal blood vessel image of size M×M×3 is stored as three M×M matrices, denoted R*, G*, and B*; each value in a matrix is the component value of one color channel at one pixel. The HSI model is built from the matrices R*, G*, and B*, where H denotes hue, S denotes saturation, and I denotes intensity:
I = (R* + G* + B*)/3   (1)
S = 1 − 3·min(R*, G*, B*)/(R* + G* + B*)   (2)
H = θ, if B* ≤ G*; H = 360° − θ, if B* > G*   (3)
where
θ = arccos{ [(R* − G*) + (R* − B*)]/2 / [(R* − G*)² + (R* − B*)(G* − B*)]^(1/2) }
S12: the intensity component I is equivalent to the grayscale image of the fundus retinal blood vessel image and is treated as an image information system; rough set theory is used for image preprocessing, with the two-dimensional M×M fundus retinal image as the universe of discourse U. Each pixel x in the fundus retinal image represents an object in U, and the gray value of pixel x is denoted f(m,n), where (m,n) indicates that pixel x lies at row m, column n. Two conditional attributes c1 and c2 of the fundus retinal vessel grayscale image are determined, i.e. C = {c1, c2}, where c1 is the gray-value attribute of a pixel, with attribute values c1 = {0,1}; c2 is the noise attribute, the absolute value of the difference between the average gray levels of two adjacent sub-blocks, with attribute values c2 = {0,1}. The decision attribute D is the classification of the pixel, D = {d1, d2, d3, d4}, where d1 denotes the brighter noise-free region, d2 the bright-region edge-noise region, d3 the darker noise-free region, and d4 the dark-region edge-noise region, so that a fundus retinal blood vessel image information system (U, C∪D) is constructed.
S13 determines the gray threshold α. For the pixel x at row m, column n of U, with gray value f(m,n): if f(m,n) satisfies
f(m,n) > α,
then c1 = 1, indicating that the gray value of pixel x lies in [α+1, 255] and x falls into the equivalence class of the brighter set of the fundus retinal blood vessel image; otherwise c1 = 0, indicating that the gray value of pixel x lies in [0, α] and x falls into the equivalence class of the darker set.
S14 determines the noise threshold β and divides the fundus retinal blood vessel image into 2×2-pixel sub-blocks S_i,j. D(S_i,j) denotes the absolute value of the difference between the average gray level of a sub-block and that of its adjacent sub-block, i.e.
D(S_i,j) = |avg(S_i,j) − avg(S_i′,j′)|,
where avg(S_i,j) is the average pixel value of sub-block S_i,j and S_i′,j′ is the adjacent sub-block. If D(S_i,j) satisfies
D(S_i,j) > β,
then c2 = 1, indicating that pixel x is noisy and falls into the edge-noise equivalence class; otherwise c2 = 0, indicating that pixel x is noise-free and falls into the noise-free equivalence class.
S15: based on the two conditional attributes c1 and c2, the set to which each pixel belongs is determined and used as the basis for decision classification of the pixels, and the original fundus retinal blood vessel image P is divided into sub-images. According to the gray-value attribute c1 and the noise attribute c2, the original image is divided into a brighter noise-free sub-image P1, a bright-region edge-noise sub-image P2, a darker noise-free sub-image P3, and a dark-region edge-noise sub-image P4. The brighter noise-free sub-image P1 is completed: at all darker and noisy pixel positions it is filled with the gray threshold α and the noise threshold β, respectively, forming P1′. The darker noise-free sub-image P3 is completed: at all brighter and noisy pixel positions it is filled with the gray threshold α and the noise threshold β, respectively, forming P3′.
S16 applies enhancement transforms to P1′ and P3′ separately: histogram equalization to P1′ and a histogram exponential transform to P3′; the two histogram-transformed images are overlaid to obtain the enhanced fundus retinal blood vessel image P′, which is normalized per formula (4) as follows:
x_i′ = (x_i − min(x)) / (max(x) − min(x))   (4)
A rough-set-enhanced fundus retinal blood vessel image is obtained, where x_i denotes the i-th pixel value of the image and min(x) and max(x) denote the minimum and maximum pixel values of the image, respectively.
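The normalization of formula (4) is an ordinary min-max rescaling; a one-function sketch:

```python
import numpy as np

def normalize(p):
    """Formula (4): x_i' = (x_i - min(x)) / (max(x) - min(x)).

    Maps the enhanced image P' onto [0, 1]; assumes the image is not
    constant (max > min).
    """
    return (p - p.min()) / (p.max() - p.min())
```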
Further, step S20 includes the following steps. S21 extracts features from the rough-set-enhanced fundus retinal blood vessel image by down-sampling: a 3×3 convolution kernel performs 2 convolution operations on the input image, the ReLU activation function applies a nonlinear transformation, and a 2×2 pooling operation follows; this is repeated 4 times, and in the first 3×3 convolution after each pooling the number of 3×3 kernels doubles. Two further 3×3 convolution operations then complete the down-sampling feature extraction. The convolutional layer is computed as follows:
x_j^n = f( Σ_{i∈M_j} x_i^{n−1} ∗ k_{i,j}^n + b_j^n )   (5)
where M_j denotes the set of input feature maps, x_j^n denotes the j-th feature map of the n-th layer, k_{i,j}^n denotes the convolution kernel function, f() denotes the activation function (the ReLU function is used), and b_j^n is the bias parameter. The pooling layer is computed as follows:
x_j^n = f( β·down(x_j^{n−1}) + b_j^n )   (6)
where β is the weight constant of the down-sampling-layer feature map and down() is the down-sampling function. S22 operates by up-sampling: first, 2 deconvolution operations of size 3×3 are performed, the image of the max-pooling layer is copied and cropped, and it is concatenated with the deconvolved image; then a 3×3 convolution operation follows, repeated 4 times, and in the first 3×3 convolution after each concatenation the number of 3×3 kernels halves. Finally, two 3×3 convolutions and one 1×1 convolution complete the up-sampling process. S23: after the down-sampling and up-sampling processes, the segmentation map produced by the forward computation of the U-net neural network undergoes an error calculation against the standard segmentation label map corresponding to the standard RGB color fundus retinal blood vessel image; the error function is expressed as follows:
E = (1/T)·Σ_{t=1}^{T} Σ_i ( y_out_t(i) − y_true_t(i) )²   (7)
where T denotes the number of fundus image samples input to the U-net neural network, y_out_t(i) denotes the gray value of the i-th pixel in the t-th fundus retinal image sample output by the U-net neural network, and y_true_t(i) denotes the gray value of the i-th pixel in the t-th fundus retinal image label.
Further, step S23 sets an error threshold of 0.1. When the error is not greater than the threshold, the required U-net neural network model is obtained; when the error is greater than the threshold, back-propagation per the gradient descent algorithm adjusts the network weights, and steps S21 to S22 repeat the forward computation until the error is not greater than the threshold.
Further, step S30 includes the following steps. S31 randomly selects a small number H of fundus images from the rough-set-enhanced fundus retinal blood vessel image training set as reference images and represents the particle swarm Q as Q = (Q_1, Q_2, ..., Q_H), where H is the number of particles in Q and equals the number of selected fundus images. Each position of a particle represents one connection weight or threshold; the i-th particle Q_i is encoded as Q_i = {Q_i1, Q_i2, ..., Q_in}, where n is the total number of connection weights or thresholds. The acceleration constants σ1, σ2 and the initial value of the inertia weight w are initialized, and each particle position vector Y_i = {y_i1, y_i2, ..., y_in} and particle velocity vector V_i = {v_i1, v_i2, ..., v_in} is initialized to random numbers in the interval [0,1], where n is the number of parameters in the U-net model. S32: for each particle, the down-sampling and up-sampling processes are completed in the U-net model, the U-net error function serves as the fitness function of the particle swarm, the fitness of each particle is computed and sorted in ascending order, and the best position pbest of each particle and the best position gbest of the entire swarm are obtained. S33: if the minimum of the error-threshold range has been reached, training has converged and the run stops; otherwise the position and velocity of each particle continue to be updated per formulas (8) and (9):
v′_in = w·v_in + σ1·rand()·(pbest_in − x_in) + σ2·rand()·(gbest_in − x_in)   (8)
x′_in = x_in + v′_in   (9)
where v_in and x_in denote the current velocity and position of particle i, v′_in and x′_in denote the updated velocity and position of particle i, w is the inertia weight, σ1 and σ2 are acceleration constants, and rand() is a random function over the interval [0,1]. S34 sends the updated particles back to the U-net neural network, updates the connection weights to be trained, performs the down-sampling and up-sampling processes again, and computes the error. S35 splits the obtained best position gbest of the particle swarm and maps it onto the weights and thresholds of the U-net neural network model; the particle swarm optimization algorithm PSO thereby completes the whole process of optimizing the U-net weights.
Compared with the prior art, the above technical solution of the invention has the following advantages:
The rough set neural network method for fundus retinal blood vessel image segmentation of the invention reduces the workload of medical staff, avoids the discrepancies that differences in staff experience and skill introduce into segmentations of the same fundus image, and effectively segments color fundus retinal blood vessel images with higher segmentation accuracy and efficiency.
Brief description of the drawings
The technical solution of the invention and its beneficial effects will become apparent from the following detailed description of specific embodiments of the invention in conjunction with the accompanying drawings.
FIG. 1 is a flowchart of a rough set neural network method for fundus retinal blood vessel image segmentation according to an embodiment of the invention;
FIG. 2 is a detailed flowchart of a rough set neural network method for fundus retinal blood vessel image segmentation according to an embodiment of the invention;
FIG. 3 is a structural diagram of a U-net neural network model according to an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are obviously only some of the embodiments of the invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative work fall within the protection scope of the invention.
This embodiment provides a rough set neural network method for fundus retinal blood vessel image segmentation, as shown in Figs. 1 and 2, comprising the following steps. S10, image preprocessing: each standard RGB color fundus retinal blood vessel image of size M×M×3 is preprocessed for image enhancement using rough set theory to obtain a rough-set-enhanced fundus retinal blood vessel image. S20 constructs a U-net neural network model, segments the rough-set-enhanced image to obtain a segmentation map, and uses the error between the segmentation map and the standard segmentation label map corresponding to the standard RGB image as the error function of the constructed U-net neural network, obtaining the U-net neural network model. S30 uses the particle swarm optimization (PSO) algorithm to optimize and train the U-net model, takes the rough-set-enhanced fundus retinal blood vessel images as particles, iterates the particle swarm continuously to obtain the optimal population of particles, and adjusts the U-net parameters by gradient descent to obtain the PSO-U-net neural network model. S40: the color fundus retinal blood vessel image to be tested is preprocessed for image enhancement using rough set theory, and the PSO-U-net neural network model then segments it.
Step S10 includes the following steps. S11 stores each standard RGB color fundus retinal blood vessel image of size M×M×3 as three M×M matrices, denoted R*, G*, and B*; each value in a matrix is the component value of one color channel at one pixel. The HSI model is built from the matrices R*, G*, and B*, where H denotes hue, S denotes saturation, and I denotes intensity:
I = (R* + G* + B*)/3   (1)
S = 1 − 3·min(R*, G*, B*)/(R* + G* + B*)   (2)
H = θ, if B* ≤ G*; H = 360° − θ, if B* > G*   (3)
where
θ = arccos{ [(R* − G*) + (R* − B*)]/2 / [(R* − G*)² + (R* − B*)(G* − B*)]^(1/2) }
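Step S11's RGB-to-HSI conversion can be sketched as follows. Since the conversion formulas appear only as images in the source, the standard HSI conversion is assumed here (inputs scaled to [0, 1], with small epsilons guarding against division by zero):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion (assumed form of formulas (1)-(3)).
    r, g, b: arrays or scalars in [0, 1]. Returns (H in degrees, S, I)."""
    r, g, b = (np.asarray(c, dtype=float) for c in (r, g, b))
    i = (r + g + b) / 3.0                                   # intensity
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)              # hue
    return h, s, i
```

Only the intensity component I is carried forward into step S12, where it plays the role of the grayscale image.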
S12: the intensity component I is equivalent to the grayscale image of the fundus retinal blood vessel image and is treated as an image information system; rough set theory is used for image preprocessing, with the two-dimensional M×M fundus retinal image as the universe of discourse U. Each pixel x in the fundus retinal image represents an object in U, and the gray value of pixel x is denoted f(m,n), where (m,n) indicates that pixel x lies at row m, column n. Two conditional attributes c1 and c2 of the fundus retinal vessel grayscale image are determined, i.e. C = {c1, c2}, where c1 is the gray-value attribute of a pixel, with attribute values c1 = {0,1}; c2 is the noise attribute, the absolute value of the difference between the average gray levels of two adjacent sub-blocks, with attribute values c2 = {0,1}. The decision attribute D is the classification of the pixel, D = {d1, d2, d3, d4}, where d1 denotes the brighter noise-free region, d2 the bright-region edge-noise region, d3 the darker noise-free region, and d4 the dark-region edge-noise region, so that a fundus retinal blood vessel image information system (U, C∪D) is constructed.
S13 determines the gray threshold α. For the pixel x at row m, column n of U, with gray value f(m,n): if f(m,n) satisfies
f(m,n) > α,
then c1 = 1, indicating that the gray value of pixel x lies in [α+1, 255] and x falls into the equivalence class of the brighter set of the fundus retinal blood vessel image; otherwise c1 = 0, indicating that the gray value of pixel x lies in [0, α] and x falls into the equivalence class of the darker set.
S14 determines the noise threshold β and divides the fundus retinal blood vessel image into 2×2-pixel sub-blocks S_i,j. D(S_i,j) denotes the absolute value of the difference between the average gray level of a sub-block and that of its adjacent sub-block, i.e.
D(S_i,j) = |avg(S_i,j) − avg(S_i′,j′)|,
where avg(S_i,j) is the average pixel value of sub-block S_i,j and S_i′,j′ is the adjacent sub-block. If D(S_i,j) satisfies
D(S_i,j) > β,
then c2 = 1, indicating that pixel x is noisy and falls into the edge-noise equivalence class; otherwise c2 = 0, indicating that pixel x is noise-free and falls into the noise-free equivalence class.
S15: based on the two conditional attributes c1 and c2, the set to which each pixel belongs is determined and used as the basis for decision classification of the pixels, and the original fundus retinal blood vessel image P is divided into sub-images. According to the gray-value attribute c1 and the noise attribute c2, the original image is divided into a brighter noise-free sub-image P1, a bright-region edge-noise sub-image P2, a darker noise-free sub-image P3, and a dark-region edge-noise sub-image P4. The brighter noise-free sub-image P1 is completed: at all darker and noisy pixel positions it is filled with the gray threshold α and the noise threshold β, respectively, forming P1′. The darker noise-free sub-image P3 is completed: at all brighter and noisy pixel positions it is filled with the gray threshold α and the noise threshold β, respectively, forming P3′.
S16 applies enhancement transforms to P1′ and P3′ separately: histogram equalization to P1′ and a histogram exponential transform to P3′; the two histogram-transformed images are overlaid to obtain the enhanced fundus retinal blood vessel image P′, which is normalized per formula (4) as follows:
x_i′ = (x_i − min(x)) / (max(x) − min(x))   (4)
A rough-set-enhanced fundus retinal blood vessel image is obtained, where x_i denotes the i-th pixel value of the image and min(x) and max(x) denote the minimum and maximum pixel values of the image, respectively.
As shown in FIG. 3, the U-net neural network model includes an input layer, convolution layers, ReLU nonlinear layers, pooling layers, deconvolution layers, and an output layer. Step S20 includes the following steps. S21 extracts features from the rough-set-enhanced fundus retinal blood vessel image by down-sampling: a 3×3 convolution kernel performs 2 convolution operations on the input image, the ReLU activation function applies a nonlinear transformation, and a 2×2 pooling operation follows; this is repeated 4 times, and in the first 3×3 convolution after each pooling the number of 3×3 kernels doubles. Two further 3×3 convolution operations then complete the down-sampling feature extraction.
The convolutional layer is computed as follows:
x_j^n = f( Σ_{i∈M_j} x_i^{n−1} ∗ k_{i,j}^n + b_j^n )   (5)
where M_j denotes the set of input feature maps, x_j^n denotes the j-th feature map of the n-th layer, k_{i,j}^n denotes the convolution kernel function, f() denotes the activation function (the ReLU function is used), and b_j^n is the bias parameter.
The pooling layer is computed as follows:
x_j^n = f( β·down(x_j^{n−1}) + b_j^n )   (6)
where β is the weight constant of the down-sampling-layer feature map and down() is the down-sampling function.
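A minimal NumPy sketch of the layer computations of formulas (5) and (6); a 'valid' convolution and 2×2 max pooling as down() are assumed from the description in step S21:

```python
import numpy as np

def conv_layer(feature_maps, kernels, bias):
    """Formula (5): x_j^n = f( sum_{i in M_j} x_i^{n-1} * k_ij^n + b_j^n ),
    with f = ReLU. feature_maps: list of 2-D maps in M_j; kernels: one
    kernel per input map ('valid' convolution assumed)."""
    kh, kw = kernels[0].shape
    h, w = feature_maps[0].shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for x, k in zip(feature_maps, kernels):
        for r in range(out.shape[0]):
            for c in range(out.shape[1]):
                out[r, c] += np.sum(x[r:r + kh, c:c + kw] * k)
    return np.maximum(out + bias, 0.0)  # ReLU activation

def pool_layer(x, beta=1.0, bias=0.0):
    """Formula (6): x_j^n = f( beta * down(x_j^{n-1}) + b_j^n ), with
    down() taken as 2x2 max pooling per step S21."""
    h, w = x.shape
    down = x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
    return np.maximum(beta * down + bias, 0.0)
```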
S22采用上采样进行操作,首先进行2次3×3的反卷积操作,对最大池化层的图像进行复制和剪裁,并与反卷积所得图像进行拼接;然后进行3×3的卷积操作,重复4次,在每进行一次拼接之后的第一个3×3卷积操作,3×3卷积核数量成倍减少;最后进行2次3×3的卷积操作和1次1×1的卷积操作,此时完成上采样过程。以及
S23: After the downsampling and upsampling passes, the segmentation map obtained from a forward pass of the U-net neural network is compared against the standard segmentation label map corresponding to the standard RGB color fundus retinal blood vessel image, with the error function

E = (1/T) · Σ_{t=1}^{T} Σ_i ( y_out_t(i) − y_true_t(i) )²   (7)

where T is the number of fundus image samples input to the U-net neural network, y_out_t(i) is the gray value of the i-th pixel of the t-th fundus retinal image sample output by the network, and y_true_t(i) is the gray value of the i-th pixel of the t-th fundus retinal image label.
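The error function appears only as an image in the source; a natural reading consistent with the surrounding text is the summed squared pixel error averaged over the T samples, which can be sketched as:

```python
import numpy as np

def segmentation_error(y_out, y_true):
    """Eq. (7) read as the summed squared pixel error averaged over T samples.

    y_out, y_true: arrays of shape (T, ...) holding the network outputs and
    the corresponding label maps; the first axis indexes the T samples.
    """
    y_out = np.asarray(y_out, dtype=np.float64)
    y_true = np.asarray(y_true, dtype=np.float64)
    t = y_out.shape[0]                      # number of image samples T
    return float(np.sum((y_out - y_true) ** 2) / t)
```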
Step S30 comprises the following steps. S31: Randomly select a small number H of fundus images from the rough-set-enhanced fundus retinal blood vessel image training set as reference images, and represent the particle swarm Q as Q = (Q_1, Q_2, ..., Q_H), where H, the number of particles in Q, equals the number of selected images. Each position of a particle encodes one connection weight or threshold; the i-th particle Q_i is encoded as Q_i = {Q_i1, Q_i2, ..., Q_in}, where n is the total number of connection weights and thresholds. Initialize the acceleration constants σ_1, σ_2 and the inertia weight w, and initialize each particle position vector Y_i = {y_i1, y_i2, ..., y_in} and velocity vector V_i = {v_i1, v_i2, ..., v_in} to random numbers in [0, 1], where n is the number of parameters of the U-net model. S32: For each particle, complete the downsampling and upsampling passes in the U-net model, take the U-net error function as the swarm fitness function, compute each particle's fitness, and sort in ascending order to obtain each particle's best position pbest and the swarm's best position gbest. S33: If the minimum within the error-threshold range has been reached, training has converged and the run stops; otherwise update each particle's velocity and position according to Eqs. (8) and (9):

v′_in = w·v_in + σ_1·rand()·(pbest_in − x_in) + σ_2·rand()·(gbest_in − x_in)   (8)

x′_in = x_in + v′_in   (9)

where v_in and x_in are the current velocity and position of particle i, v′_in and x′_in are the updated velocity and position, w is the inertia weight, σ_1 and σ_2 are acceleration constants, and rand() is a random function over [0, 1]. S34: Pass the updated particles back to the U-net neural network, update the connection weights to be trained, perform the downsampling and upsampling passes again, and compute the error. S35: Split the obtained swarm best position gbest and map it onto the weights and thresholds of the U-net neural network model, completing the PSO optimization of the U-net neural network weights.
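One update step of Eqs. (8) and (9) can be sketched as follows; the function name, default constants, and the choice of drawing a fresh random number per dimension are assumptions of this sketch, not specified by the patent.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, sigma1=1.5, sigma2=1.5, rng=None):
    """One PSO update per Eqs. (8) and (9):

    v' = w*v + sigma1*rand()*(pbest - x) + sigma2*rand()*(gbest - x)
    x' = x + v'
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(np.shape(x))   # rand() drawn per dimension (an assumption)
    r2 = rng.random(np.shape(x))
    v_new = w * v + sigma1 * r1 * (pbest - x) + sigma2 * r2 * (gbest - x)
    return x + v_new, v_new
```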
The above are merely exemplary embodiments of the present invention and do not limit the scope of its patent protection. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (6)

  1. A rough set neural network method for fundus retinal blood vessel image segmentation, comprising the following steps:
    S10: image preprocessing, applying rough set theory to each standard RGB color fundus retinal blood vessel image of size M×M×3 for image-enhancement preprocessing, to obtain a rough-set-enhanced fundus retinal blood vessel image;
    S20: constructing a U-net neural network model, segmenting the rough-set-enhanced fundus retinal blood vessel image to obtain a segmentation map, and taking the error between the segmentation map and the standard segmentation label map corresponding to the standard RGB color fundus retinal blood vessel image as the error function of the constructed U-net neural network, to obtain the U-net neural network model;
    S30: optimizing and training the U-net neural network model with the particle swarm optimization (PSO) algorithm, taking the rough-set-enhanced fundus retinal blood vessel images as particles, iterating the swarm to obtain the optimal population particle, and adjusting the U-net neural network parameters by gradient descent, to obtain a PSO-U-net neural network model; and
    S40: applying rough-set image-enhancement preprocessing to a color fundus retinal blood vessel image to be tested, and segmenting the image to be tested with the PSO-U-net neural network model.
  2. The rough set neural network method for fundus retinal blood vessel image segmentation according to claim 1, wherein the U-net neural network model comprises an input layer, convolutional layers, ReLU non-linear layers, pooling layers, deconvolution layers, and an output layer.
  3. The rough set neural network method for fundus retinal blood vessel image segmentation according to claim 2, wherein step S10 comprises the following steps:
    S11: storing each standard RGB color fundus retinal blood vessel image of size M×M×3 as three matrices of size M×M, denoted R*, G* and B*, each value in a matrix being the component value of one color at one pixel of the three channels; and building an HSI model from R*, G* and B*, where H is hue, S is saturation, and I is intensity:
    H = θ, if B ≤ G;  H = 360° − θ, if B > G   (1)
    with θ = arccos( [ (R − G) + (R − B) ] / [ 2 · sqrt( (R − G)² + (R − B)(G − B) ) ] ),
    S = 1 − 3 · min(R, G, B) / (R + G + B)   (2)
    I = (R + G + B) / 3   (3)
    S12: treating the intensity component I, equivalent to a grayscale map of the fundus retinal blood vessel image, as an image information system and applying rough set theory for image preprocessing: the two-dimensional fundus retinal image of size M×M serves as the universe U; each pixel x is an object in U, and the gray value of pixel x is denoted f(m,n), where (m,n) indicates that x lies in row m, column n; two condition attributes c_1 and c_2 are defined, i.e. C = {c_1, c_2}, where c_1 is the gray-value attribute with values c_1 = {0, 1}, and c_2 is the noise attribute, denoting the absolute difference between the mean gray values of two adjacent sub-blocks, with values c_2 = {0, 1}; the decision attribute D gives the pixel classification, D = {d_1, d_2, d_3, d_4}, where d_1 denotes the brighter noise-free region, d_2 the bright-region edge-noise area, d_3 the darker noise-free region, and d_4 the dark-region edge-noise area, thereby constructing a fundus retinal blood vessel image information system (U, C∪D);
    S13: determining the gray-value threshold α: the gray value of the pixel x in row m, column n of U is f(m,n); if f(m,n) satisfies f(m,n) > α, then c_1 = 1, meaning the gray value of x lies in [α+1, 255] and x is assigned to the equivalence class with c_1 = 1, i.e. the brighter set of the image; otherwise c_1 = 0, meaning the gray value lies in [0, α] and x is assigned to the equivalence class with c_1 = 0, i.e. the darker set of the image;
    S14: determining the noise threshold β: partitioning the image into 2×2-pixel sub-blocks S_{i,j}, with D(S_{i,j}) denoting the absolute difference between the mean gray value of a sub-block and that of an adjacent sub-block S_{i′,j′}, i.e.
    D(S_{i,j}) = | avg(S_{i,j}) − avg(S_{i′,j′}) |
    where avg(S_{i,j}) is the mean pixel value of sub-block S_{i,j},
    avg(S_{i,j}) = (1/4) · Σ_{(m,n)∈S_{i,j}} f(m,n);
    if D(S_{i,j}) satisfies D(S_{i,j}) > β, then c_2 = 1, meaning pixel x is noisy and is assigned to the equivalence class with c_2 = 1, i.e. the edge-noise set; otherwise c_2 = 0, meaning x is noise-free and is assigned to the equivalence class with c_2 = 0, i.e. the noise-free set;
    S15: using the two condition attributes c_1 and c_2 to determine the sets each pixel belongs to as the decision basis, classifying the pixels and partitioning the original fundus retinal blood vessel image P into sub-images: according to the gray-value attribute c_1 and the noise attribute c_2, dividing the original image into a brighter noise-free sub-image P_1, a bright-region edge-noise sub-image P_2, a darker noise-free sub-image P_3, and a dark-region edge-noise sub-image P_4; completing the brighter noise-free sub-image P_1 by filling all darker and noisy pixel positions with the gray threshold α and the noise threshold β respectively, forming P_1′; and completing the darker noise-free sub-image P_3 by filling all brighter and noisy pixel positions with α and β respectively, forming P_3′; and
    S16: applying enhancement transforms to P_1′ and P_3′ separately: performing histogram equalization on P_1′ and a histogram exponential transform on P_3′, superimposing the two transformed images to obtain the enhanced fundus retinal blood vessel image P′, and normalizing P′ according to Eq. (4):
    x_i′ = ( x_i − min(x) ) / ( max(x) − min(x) )   (4)
    to obtain the rough-set-enhanced fundus retinal blood vessel image, where x_i is the value of the i-th pixel of the image and min(x) and max(x) are the minimum and maximum pixel values respectively.
  4. The rough set neural network method for fundus retinal blood vessel image segmentation according to claim 3, wherein step S20 comprises the following steps:
    S21: performing feature extraction on the rough-set-enhanced fundus retinal blood vessel image by downsampling: applying two convolution operations with 3×3 kernels to the input image, using the ReLU activation function for the non-linear transform, then performing a 2×2 pooling operation, repeated four times, the number of 3×3 kernels doubling at the first 3×3 convolution after each pooling; then applying two further 3×3 convolutions to complete the downsampling feature extraction;
    the convolutional layers being computed as
    x_j^n = f( Σ_{i∈M_j} x_i^{n−1} * k_{ij}^n + b_j^n )   (5)
    where M_j is the set of input feature maps, x_j^n is the j-th feature map of the n-th layer, k_{ij}^n is the convolution kernel, f(·) is the activation function (ReLU), and b_j^n is a bias parameter;
    the pooling layers being computed as
    x_j^n = f( β_j^n · down(x_j^{n−1}) + b_j^n )   (6)
    where β is a weighting constant of the downsampled feature map and down(·) is the downsampling function;
    S22: performing the upsampling pass: first applying two 3×3 deconvolution operations, copying and cropping the image from the corresponding max-pooling layer and concatenating it with the deconvolved image; then applying 3×3 convolutions, repeated four times, the number of 3×3 kernels halving at the first 3×3 convolution after each concatenation; finally applying two 3×3 convolutions and one 1×1 convolution to complete the upsampling pass; and
    S23: after the downsampling and upsampling passes, comparing the segmentation map obtained from a forward pass of the U-net neural network against the standard segmentation label map corresponding to the standard RGB color fundus retinal blood vessel image, with the error function
    E = (1/T) · Σ_{t=1}^{T} Σ_i ( y_out_t(i) − y_true_t(i) )²   (7)
    where T is the number of fundus image samples input to the U-net neural network, y_out_t(i) is the gray value of the i-th pixel of the t-th fundus retinal image sample output by the network, and y_true_t(i) is the gray value of the i-th pixel of the t-th fundus retinal image label.
  5. The rough set neural network method for fundus retinal blood vessel image segmentation according to claim 4, wherein step S23 sets an error threshold of 0.1; when the error is not greater than the error threshold, the required U-net neural network model is obtained; when the error is greater than the error threshold, the network weights are adjusted by back-propagation according to the gradient descent algorithm, and steps S21 to S22 are repeated for forward computation until the error is not greater than the error threshold.
  6. The rough set neural network method for fundus retinal blood vessel image segmentation according to claim 5, wherein step S30 comprises the following steps:
    S31: randomly selecting a small number H of fundus images from the rough-set-enhanced fundus retinal blood vessel image training set as reference images, and representing the particle swarm Q as Q = (Q_1, Q_2, ..., Q_H), where H, the number of particles in Q, equals the number of selected images; each position of a particle encoding one connection weight or threshold, the i-th particle Q_i being encoded as Q_i = {Q_i1, Q_i2, ..., Q_in}, where n is the total number of connection weights and thresholds; initializing the acceleration constants σ_1, σ_2 and the inertia weight w, and initializing each particle position vector Y_i = {y_i1, y_i2, ..., y_in} and velocity vector V_i = {v_i1, v_i2, ..., v_in} to random numbers in [0, 1], where n is the number of parameters of the U-net model;
    S32: for each particle, completing the downsampling and upsampling passes in the U-net model, taking the U-net error function as the swarm fitness function, computing each particle's fitness and sorting in ascending order, to obtain each particle's best position pbest and the swarm's best position gbest;
    S33: if the minimum within the error-threshold range has been reached, training has converged and the run stops; otherwise updating each particle's velocity and position according to Eqs. (8) and (9):
    v′_in = w·v_in + σ_1·rand()·(pbest_in − x_in) + σ_2·rand()·(gbest_in − x_in)   (8)
    x′_in = x_in + v′_in   (9)
    where v_in and x_in are the current velocity and position of particle i, v′_in and x′_in are the updated velocity and position, w is the inertia weight, σ_1 and σ_2 are acceleration constants, and rand() is a random function over [0, 1];
    S34: passing the updated particles back to the U-net neural network, updating the connection weights to be trained, performing the downsampling and upsampling passes again, and computing the error; and
    S35: splitting the obtained swarm best position gbest and mapping it onto the weights and thresholds of the U-net neural network model, completing the PSO optimization of the U-net neural network weights.
PCT/CN2021/086437 2020-06-18 2021-04-12 一种用于眼底视网膜血管图像分割的粗糙集神经网络方法 WO2021253939A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010558465.4 2020-06-18
CN202010558465.4A CN111815574B (zh) 2020-06-18 2020-06-18 一种基于粗糙集神经网络的眼底视网膜血管图像分割方法

Publications (1)

Publication Number Publication Date
WO2021253939A1 true WO2021253939A1 (zh) 2021-12-23

Family

ID=72844725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086437 WO2021253939A1 (zh) 2020-06-18 2021-04-12 一种用于眼底视网膜血管图像分割的粗糙集神经网络方法

Country Status (3)

Country Link
CN (1) CN111815574B (zh)
LU (1) LU500959B1 (zh)
WO (1) WO2021253939A1 (zh)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359104A (zh) * 2022-01-10 2022-04-15 北京理工大学 一种基于分级生成的白内障眼底图像增强方法
CN114494196A (zh) * 2022-01-26 2022-05-13 南通大学 基于遗传模糊树的视网膜糖尿病变深度网络检测方法
CN114612484A (zh) * 2022-03-07 2022-06-10 中国科学院苏州生物医学工程技术研究所 基于无监督学习的视网膜oct图像分割方法
CN115829883A (zh) * 2023-02-16 2023-03-21 汶上县恒安钢结构有限公司 一种异性金属结构件表面图像去噪方法
CN116228545A (zh) * 2023-04-04 2023-06-06 深圳市眼科医院(深圳市眼病防治研究所) 基于视网膜特征点的眼底彩色照相图像拼接方法及***
CN116342588B (zh) * 2023-05-22 2023-08-11 徕兄健康科技(威海)有限责任公司 一种脑血管图像增强方法
CN116580008A (zh) * 2023-05-16 2023-08-11 山东省人工智能研究院 基于局部增广空间测地线生物医学标记方法
CN116740203A (zh) * 2023-08-15 2023-09-12 山东理工职业学院 用于眼底相机数据的安全存储方法
CN117058468A (zh) * 2023-10-11 2023-11-14 青岛金诺德科技有限公司 用于新能源汽车锂电池回收的图像识别与分类***
CN117437350A (zh) * 2023-09-12 2024-01-23 南京诺源医疗器械有限公司 一种用于手术术前规划的三维重建***及方法
CN117611599A (zh) * 2023-12-28 2024-02-27 齐鲁工业大学(山东省科学院) 融合中心线图和增强对比度网络的血管分割方法及其***
CN117974692A (zh) * 2024-03-29 2024-05-03 贵州毅丹恒瑞医药科技有限公司 一种基于区域生长的眼科医学影像处理方法

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN111815574B (zh) * 2020-06-18 2022-08-12 南通大学 一种基于粗糙集神经网络的眼底视网膜血管图像分割方法
CN115409765B (zh) * 2021-05-28 2024-01-09 南京博视医疗科技有限公司 一种基于眼底视网膜图像的血管提取方法及装置
CN115187609A (zh) * 2022-09-14 2022-10-14 合肥安杰特光电科技有限公司 一种大米黄粒检测方法和***
CN116523877A (zh) * 2023-05-04 2023-08-01 南通大学 一种基于卷积神经网络的脑mri图像肿瘤块分割方法
CN117372284B (zh) * 2023-12-04 2024-02-23 江苏富翰医疗产业发展有限公司 眼底图像处理方法及***

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102254224A (zh) * 2011-07-06 2011-11-23 无锡泛太科技有限公司 一种基于粗糙集神经网络的图像识别的物联网电动汽车充电桩***
CN110232372A (zh) * 2019-06-26 2019-09-13 电子科技大学成都学院 基于粒子群优化bp神经网络的步态识别方法
WO2020056454A1 (en) * 2018-09-18 2020-03-26 MacuJect Pty Ltd A method and system for analysing images of a retina
CN111815574A (zh) * 2020-06-18 2020-10-23 南通大学 一种用于眼底视网膜血管图像分割的粗糙集神经网络方法

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
AU2013245862A1 (en) * 2012-04-11 2014-10-30 University Of Florida Research Foundation, Inc. System and method for analyzing random patterns
CN108615051B (zh) * 2018-04-13 2020-09-15 博众精工科技股份有限公司 基于深度学习的糖尿病视网膜图像分类方法及***
CN111091916A (zh) * 2019-12-24 2020-05-01 郑州科技学院 人工智能中基于改进粒子群算法的数据分析处理方法及***

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN102254224A (zh) * 2011-07-06 2011-11-23 无锡泛太科技有限公司 一种基于粗糙集神经网络的图像识别的物联网电动汽车充电桩***
WO2020056454A1 (en) * 2018-09-18 2020-03-26 MacuJect Pty Ltd A method and system for analysing images of a retina
CN110232372A (zh) * 2019-06-26 2019-09-13 电子科技大学成都学院 基于粒子群优化bp神经网络的步态识别方法
CN111815574A (zh) * 2020-06-18 2020-10-23 南通大学 一种用于眼底视网膜血管图像分割的粗糙集神经网络方法

Non-Patent Citations (2)

Title
MEI XUZHANG, JIANG HONG, SUN JUN: "Retinal Vessel Image Segmentation Based on Dense Attention Network", COMPUTER ENGINEERING, SHANGHAI JISUANJI XUEHUI, CN, vol. 46, no. 3, 15 March 2020 (2020-03-15), CN , XP055881352, ISSN: 1000-3428, DOI: 10.19678/j.issn.1000-3428.0054379 *
ZHANG KAI, YU HAN, JIAN CHEN, ZICHAO ZHANG, SHUBO WANG: "Semantic Segmentation for Remote Sensing based on RGB Images and Lidar Data using Model-Agnostic Meta-Learning and Partical Swarm Optimization", IFAC PAPERONLINE, vol. 53, 31 January 2020 (2020-01-31), pages 397 - 402, XP055881359, DOI: 10.1016/j.ifacol.2021.04.117 *

Cited By (21)

Publication number Priority date Publication date Assignee Title
CN114359104A (zh) * 2022-01-10 2022-04-15 北京理工大学 一种基于分级生成的白内障眼底图像增强方法
CN114494196A (zh) * 2022-01-26 2022-05-13 南通大学 基于遗传模糊树的视网膜糖尿病变深度网络检测方法
CN114494196B (zh) * 2022-01-26 2023-11-17 南通大学 基于遗传模糊树的视网膜糖尿病变深度网络检测方法
CN114612484A (zh) * 2022-03-07 2022-06-10 中国科学院苏州生物医学工程技术研究所 基于无监督学习的视网膜oct图像分割方法
CN114612484B (zh) * 2022-03-07 2023-07-07 中国科学院苏州生物医学工程技术研究所 基于无监督学习的视网膜oct图像分割方法
CN115829883A (zh) * 2023-02-16 2023-03-21 汶上县恒安钢结构有限公司 一种异性金属结构件表面图像去噪方法
CN116228545B (zh) * 2023-04-04 2023-10-03 深圳市眼科医院(深圳市眼病防治研究所) 基于视网膜特征点的眼底彩色照相图像拼接方法及***
CN116228545A (zh) * 2023-04-04 2023-06-06 深圳市眼科医院(深圳市眼病防治研究所) 基于视网膜特征点的眼底彩色照相图像拼接方法及***
CN116580008B (zh) * 2023-05-16 2024-01-26 山东省人工智能研究院 基于局部增广空间测地线生物医学标记方法
CN116580008A (zh) * 2023-05-16 2023-08-11 山东省人工智能研究院 基于局部增广空间测地线生物医学标记方法
CN116342588B (zh) * 2023-05-22 2023-08-11 徕兄健康科技(威海)有限责任公司 一种脑血管图像增强方法
CN116740203A (zh) * 2023-08-15 2023-09-12 山东理工职业学院 用于眼底相机数据的安全存储方法
CN116740203B (zh) * 2023-08-15 2023-11-28 山东理工职业学院 用于眼底相机数据的安全存储方法
CN117437350A (zh) * 2023-09-12 2024-01-23 南京诺源医疗器械有限公司 一种用于手术术前规划的三维重建***及方法
CN117437350B (zh) * 2023-09-12 2024-05-03 南京诺源医疗器械有限公司 一种用于手术术前规划的三维重建***及方法
CN117058468A (zh) * 2023-10-11 2023-11-14 青岛金诺德科技有限公司 用于新能源汽车锂电池回收的图像识别与分类***
CN117058468B (zh) * 2023-10-11 2023-12-19 青岛金诺德科技有限公司 用于新能源汽车锂电池回收的图像识别与分类***
CN117611599A (zh) * 2023-12-28 2024-02-27 齐鲁工业大学(山东省科学院) 融合中心线图和增强对比度网络的血管分割方法及其***
CN117611599B (zh) * 2023-12-28 2024-05-31 齐鲁工业大学(山东省科学院) 融合中心线图和增强对比度网络的血管分割方法及其***
CN117974692A (zh) * 2024-03-29 2024-05-03 贵州毅丹恒瑞医药科技有限公司 一种基于区域生长的眼科医学影像处理方法
CN117974692B (zh) * 2024-03-29 2024-06-07 贵州毅丹恒瑞医药科技有限公司 一种基于区域生长的眼科医学影像处理方法

Also Published As

Publication number Publication date
CN111815574B (zh) 2022-08-12
LU500959A1 (en) 2022-01-04
CN111815574A (zh) 2020-10-23
LU500959B1 (en) 2022-05-04

Similar Documents

Publication Publication Date Title
WO2021253939A1 (zh) 一种用于眼底视网膜血管图像分割的粗糙集神经网络方法
CN106920227B (zh) 基于深度学习与传统方法相结合的视网膜血管分割方法
CN108021916B (zh) 基于注意力机制的深度学习糖尿病视网膜病变分类方法
CN108648191B (zh) 基于贝叶斯宽度残差神经网络的害虫图像识别方法
Li et al. Accurate retinal vessel segmentation in color fundus images via fully attention-based networks
CN107610087B (zh) 一种基于深度学习的舌苔自动分割方法
US20190228268A1 (en) Method and system for cell image segmentation using multi-stage convolutional neural networks
CN106651899B (zh) 基于Adaboost的眼底图像微动脉瘤检测***
CN111476283A (zh) 基于迁移学习的青光眼眼底图像识别方法
CN107256550A (zh) 一种基于高效cnn‑crf网络的视网膜图像分割方法
CN106530283A (zh) 一种基于svm的医疗图像血管识别方法
CN110930416A (zh) 一种基于u型网络的mri图像***分割方法
CN109272107A (zh) 一种改进深层卷积神经网络的参数个数的方法
CN109902558A (zh) 一种基于cnn-lstm的人体健康深度学习预测方法
CN112150476A (zh) 基于时空判别性特征学习的冠状动脉序列血管分割方法
CN106683080A (zh) 一种视网膜眼底图像预处理方法
CN108053398A (zh) 一种半监督特征学习的黑色素瘤自动检测方法
CN114648806A (zh) 一种多机制自适应的眼底图像分割方法
Bhatkalkar et al. Automated fundus image quality assessment and segmentation of optic disc using convolutional neural networks
CN109087310A (zh) 睑板腺纹理区域的分割方法、***、存储介质及智能终端
CN113643297B (zh) 一种基于神经网络的计算机辅助牙龄分析方法
Bhuvaneswari et al. Contrast enhancement of retinal images using green plan masking and whale optimization algorithm
Zheng et al. Fruit tree disease recognition based on convolutional neural networks
Xiao et al. SE-MIDNet based on deep learning for diabetic retinopathy classification
CN116092667A (zh) 基于多模态影像的疾病检测方法、***、装置及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21825530

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21825530

Country of ref document: EP

Kind code of ref document: A1