CN109859139A - Blood vessel enhancement method for color fundus images - Google Patents

Blood vessel enhancement method for color fundus images Download PDF

Info

Publication number
CN109859139A
CN109859139A
Authority
CN
China
Prior art keywords
data
image
blood vessel
eye fundus
fundus image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910117094.3A
Other languages
Chinese (zh)
Other versions
CN109859139B (en
Inventor
邹北骥
陈瑶
朱承璋
陈昌龙
张子谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201910117094.3A priority Critical patent/CN109859139B/en
Publication of CN109859139A publication Critical patent/CN109859139A/en
Application granted granted Critical
Publication of CN109859139B publication Critical patent/CN109859139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a blood vessel enhancement method for color fundus images, comprising: obtaining training data and processing it; inputting the data into a generative model and training the model to obtain the final generative model; obtaining the data to be enhanced and processing it; and inputting the data into the final generative model to generate the vessel-enhanced color fundus image. Through the construction of a generative model, the invention uses a deep neural network to learn the vessel imaging characteristics of fluorescein angiography images, so it can learn information deeper than gray-scale texture and the like, making the vessel enhancement of the fundus image significantly better; and through the design of the loss function, the generated image is effectively brought closer to the target image. The method can therefore effectively generate a vessel-enhanced color fundus image from an existing color fundus image, and it offers high reliability, good safety, and a wide range of applications.

Description

Blood vessel enhancement method for color fundus images
Technical field
The present invention relates in particular to a blood vessel enhancement method for color fundus images.
Background technique
With economic and technological development and rising living standards, people's demands on health care have grown higher and higher. With the emergence, development, and spread of intelligent algorithms, more and more of them are being applied to assist the medical field, greatly helping medical care. The analysis of retinal images, in particular the accurate segmentation of retinal vessels and the analysis of the retinal vascular structure, can assist in screening for diabetic retinopathy. Retinal image analysis has therefore become one of the research hotspots of recent years.
The fundus vascular structure is usually obtained from retinal color fundus images captured by a fundus camera. However, because of uneven illumination during image acquisition, the retinal color fundus images produced by existing fundus cameras generally suffer from heavy noise and low contrast.
Compared with color fundus images, fundus fluorescein angiography images are more sensitive and accurate with respect to retinal vascular lesions, so fundus fluorescein angiography is one solution to the above problem. Acquiring such images, however, requires injecting a contrast agent, sodium fluorescein, which fluoresces and can enter the retinal and choroidal vessels, into the patient's vein. With fluorescein, some patients experience reactions such as mild nausea and vomiting, and a few patients may suffer allergic reactions or even fatal shock. Existing fundus fluorescein angiography is therefore very poor in safety and extremely limited in its scope of application.
Summary of the invention
The purpose of the present invention is to provide a blood vessel enhancement method for color fundus images that is highly reliable, safe, and widely applicable.
The blood vessel enhancement method for color fundus images provided by the invention comprises the following steps:
S1. Obtain training data; the training data comprise color fundus images of several patients, each paired with the corresponding fundus fluorescein angiography image of the same person;
S2. Process the training data obtained in step S1;
S3. Input the processed data from step S2 into a generative model and train the model, thereby obtaining the final generative model;
S4. Obtain the data to be enhanced;
S5. Process the data to be enhanced;
S6. Input the processed data from step S5 into the final generative model to generate the vessel-enhanced color fundus image.
The processing of the training data obtained in step S1, described in step S2, specifically comprises the following steps:
A. Normalize the training data obtained in step S1;
B. Crop the normalized training data.
The normalization described in step A specifically uses the following formula:

P(x, y) = (P̃(x, y) - min(P)) / (max(P) - min(P))

where P(x, y) is the pixel value at point (x, y) after normalization, P̃(x, y) is the original pixel value at point (x, y), P is the set of pixel values of all points in the image, max(P) is the maximum pixel value, and min(P) is the minimum pixel value.
The cropping of the normalized training data described in step B specifically cuts each normalized image into smaller images, which increases the number of training samples and improves the robustness of the model.
The generative model described in step S3 is specifically a cycle-consistent generative adversarial network.
The cycle-consistent generative adversarial network specifically comprises two generator models, two discriminator models, and three loss functions:
Generator models: G: X → Y and F: Y → X, where X is the original color fundus image, Y is the corresponding fundus fluorescein angiography image, X̂ = F(Y) is the generated color fundus image, and Ŷ = G(X) is the generated fundus fluorescein angiography image; G and F are the generator models;
Discriminator models: D_X and D_Y, where D_X distinguishes X from F(Y), and D_Y distinguishes Y from G(X);
Loss functions:
The adversarial loss functions L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X) are:

L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]

L_GAN(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]

where x is a sample from X, y is a sample from Y, y ~ p_data(y) indicates that y is drawn from Y, x ~ p_data(x) indicates that x is drawn from X, and E denotes expectation.
The cycle-consistency loss function L_cyc(G, F) is:

L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]

where || ||_1 denotes the L1 norm.
The regularization loss function L_L1 is:

L_L1 = L_L1(G) + L_L1(F)

where L_L1(G) = E_{x,y}[||y - G(x)||_1] and L_L1(F) = E_{y,x}[||x - F(y)||_1].
The processing of the data to be enhanced described in step S5 specifically normalizes the data to be enhanced.
The normalization specifically uses the following formula:

P(x, y) = (P̃(x, y) - min(P)) / (max(P) - min(P))

where P(x, y) is the pixel value at point (x, y) after normalization, P̃(x, y) is the original pixel value at point (x, y), P is the set of pixel values of all points in the image, max(P) is the maximum pixel value, and min(P) is the minimum pixel value.
The blood vessel enhancement method for color fundus images provided by the invention builds a generative model that uses a deep neural network to learn the vessel imaging characteristics of fluorescein angiography images; it can learn information deeper than gray-scale texture and the like, making the vessel enhancement of the fundus image significantly better, and the design of the loss function effectively brings the generated image closer to the target image. The method can therefore effectively generate a vessel-enhanced color fundus image from an existing color fundus image, with high reliability, good safety, and a wide range of applications.
Detailed description of the invention
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the structure of the generator models G and F of the method of the present invention.
Fig. 3 is a schematic diagram of the structure of the discriminator models D_X and D_Y of the method of the present invention.
Specific embodiment
Fig. 1 shows the flow chart of the method of the present invention. The blood vessel enhancement method for color fundus images provided by the invention comprises the following steps:
S1. Obtain training data; the training data comprise color fundus images of several patients, each paired with the corresponding fundus fluorescein angiography image of the same person;
S2. Process the training data obtained in step S1, specifically by the following steps:
A. Normalize the training data obtained in step S1 using the following formula:

P(x, y) = (P̃(x, y) - min(P)) / (max(P) - min(P))

where P(x, y) is the pixel value at point (x, y) after normalization, P̃(x, y) is the original pixel value at point (x, y), P is the set of pixel values of all points in the image, max(P) is the maximum pixel value, and min(P) is the minimum pixel value;
B. Crop the normalized training data; specifically, cut each normalized image into smaller images to increase the number of training samples and improve the robustness of the model. For example, a processed whole image of 576×720 pixels is cut into 30 small 128×128 images;
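As a rough illustration of steps A and B, min-max normalization followed by 128×128 patch extraction could look like the sketch below. The random placement of patches is an assumption made for illustration; the patent only states that a 576×720 image is cut into 30 images of 128×128.

```python
import numpy as np

# Min-max normalization (step A) and patch cropping (step B) as described
# above. Random patch positions are an assumption, not the patent's scheme.

def normalize(img):
    """Scale pixel values to [0, 1] using the image's own min and max."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min())

def random_patches(img, size=128, count=30, seed=0):
    """Extract `count` random size x size patches from an H x W x C image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    patches = []
    for _ in range(count):
        top = int(rng.integers(0, h - size + 1))
        left = int(rng.integers(0, w - size + 1))
        patches.append(img[top:top + size, left:left + size])
    return patches

fundus = np.random.default_rng(1).integers(0, 256, (576, 720, 3))
norm = normalize(fundus)
crops = random_patches(norm)
```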
S3. Input the processed data from step S2 into the generative model and train the model, thereby obtaining the final generative model;
The generative model is preferably a cycle-consistent generative adversarial network, which comprises two generator models, two discriminator models, and three loss functions:
Generator models: G: X → Y and F: Y → X, where X is the original color fundus image, Y is the corresponding fundus fluorescein angiography image, X̂ = F(Y) is the generated color fundus image, and Ŷ = G(X) is the generated fundus fluorescein angiography image; G and F are the generator models, whose structure is shown in Fig. 2;
The generator uses a U-Net structure. The U-Net consists of a left (contracting) path, which stacks convolution and down-sampling layers starting from the input image; a right (expanding) path, which stacks convolution and up-sampling layers; and skip connections that copy the feature maps of the left side and concatenate them to the positions of the same scale on the right side. The initial data size is a 128×128×3 feature block. The left-side down-sampling path begins with the input image data at the far left, followed by a 3×3 convolution layer with 3 filters (128×128×3); then a 2×2 max-pooling layer halves the data size (64×64×3); then a 3×3 convolution layer with 128 filters (64×64×128); then 6 further pooling layers and 7 further 3×3 convolution layers follow, the number of filters being the number annotated above or below each convolution layer in the figure (at this point the data size is 1×1×1024). The convolution layers are those indicated by the 3×3 convolution arrows in the figure. In the up-sampling operations on the right side, each up-sampling doubles the data size, the left-side feature layer of the same size is copied and concatenated as shown in the figure, and a 3×3 convolution is applied; the number of filters is annotated at the top of the concatenated map. After 6 such operations the data size is 64×64×128; one more up-sampling operation is then performed, followed by a 3×3 convolution with 3 kernels. The image size is then 128×128×3, i.e. the target image;
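The down/up-sampling arithmetic above can be checked with a small sketch: seven 2×2 poolings reduce a 128-pixel side to 1, and seven doublings bring it back to 128. This illustrates only the spatial sizes, not the patent's full layer list or filter counts:

```python
# Spatial sizes along the U-Net's contracting and expanding paths as
# described above: pooling halves each side, up-sampling doubles it.

def unet_spatial_sizes(input_size=128, depth=7):
    """Return (contracting_sizes, expanding_sizes) of the spatial resolution."""
    down = [input_size]
    for _ in range(depth):
        down.append(down[-1] // 2)  # 2x2 max pooling halves each side
    up = [down[-1]]
    for _ in range(depth):
        up.append(up[-1] * 2)       # up-sampling doubles each side
    return down, up

down, up = unet_spatial_sizes()
print(down)  # [128, 64, 32, 16, 8, 4, 2, 1]
print(up)    # [1, 2, 4, 8, 16, 32, 64, 128]
```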
Discriminator models: D_X and D_Y, where D_X distinguishes X from F(Y) and D_Y distinguishes Y from G(X); the network structure of the discriminator models is shown in Fig. 3. The image data (128×128×3) are input first; 6 convolutions are applied, followed by a max-pooling operation (2×2×1024), and then a further pooling operation (1×1×1024); finally, a sigmoid activation yields a confidence score between 0 and 1, which is used to judge whether the input image is a real image or a fake image produced by the generator;
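The discriminator's final scoring step can be sketched as follows: the 1×1×1024 feature block is reduced to a single logit and a sigmoid maps it to a confidence in (0, 1). The weight vector and the dot-product reduction are placeholders for illustration, not the patent's exact final layer:

```python
import numpy as np

# Final sigmoid scoring of the discriminator described above.
# `weights` and the linear reduction are illustrative stand-ins.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
features = rng.standard_normal(1024)               # stand-in for 1x1x1024 features
weights = rng.standard_normal(1024) / np.sqrt(1024)
logit = features @ weights
score = sigmoid(logit)   # confidence that the input is a real image
```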
Loss functions:
The adversarial loss functions L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X) make the distribution of the generated images consistent with that of the target images.
The adversarial loss function of G: X → Y and its corresponding discriminator D_Y is:

L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]

where x is a sample from X, y is a sample from Y, y ~ p_data(y) indicates that y is drawn from Y, x ~ p_data(x) indicates that x is drawn from X, and E denotes expectation.
The adversarial loss function of F: Y → X and its corresponding discriminator D_X is:

L_GAN(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]

The cycle-consistency loss function L_cyc(G, F) prevents G and F from contradicting each other; it enforces x → G(x) → F(G(x)) ≈ x and y → F(y) → G(F(y)) ≈ y:

L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]

where || ||_1 denotes the L1 norm.
The regularization loss function L_L1 makes the generated image as similar as possible to the target image:

L_L1 = L_L1(G) + L_L1(F)

where L_L1(G) = E_{x,y}[||y - G(x)||_1] and L_L1(F) = E_{y,x}[||x - F(y)||_1];
In a specific implementation, the sum of the adversarial loss functions, the cycle-consistency loss function, and the regularization loss function is used as the final loss function, and the network is optimized with the objective of minimizing the value of this loss function; the final model network is obtained when the loss function reaches its minimum;
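The terms of the combined objective above can be evaluated with a toy sketch. Identity functions stand in for the trained generators G and F, and constant scores stand in for the discriminators; all names and values here are illustrative, not the patent's trained networks (and with identity generators the cycle term is exactly zero):

```python
import numpy as np

# Toy evaluation of the combined objective: adversarial terms,
# cycle-consistency term, and L1 regularization term. G, F, D_X, D_Y
# are illustrative stand-ins, not trained models.

rng = np.random.default_rng(0)
x = rng.random((4, 8, 8, 3))   # batch of "color fundus" patches
y = rng.random((4, 8, 8, 3))   # batch of "angiography" patches

G = lambda a: a                          # stand-in generator X -> Y
F = lambda b: b                          # stand-in generator Y -> X
D_X = lambda im: np.full(len(im), 0.5)   # stand-in discriminators
D_Y = lambda im: np.full(len(im), 0.5)

def l1(a, b):
    """Mean per-sample L1 distance over a batch."""
    return np.abs(a - b).reshape(len(a), -1).sum(axis=1).mean()

adv_G = np.mean(np.log(D_Y(y))) + np.mean(np.log(1 - D_Y(G(x))))
adv_F = np.mean(np.log(D_X(x))) + np.mean(np.log(1 - D_X(F(y))))
cyc = l1(F(G(x)), x) + l1(G(F(y)), y)    # zero for identity generators
reg = l1(G(x), y) + l1(F(y), x)

total = adv_G + adv_F + cyc + reg
```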
S4. Obtain the data to be enhanced;
S5. Process the data to be enhanced; specifically, normalize the data to be enhanced using the following formula:

P(x, y) = (P̃(x, y) - min(P)) / (max(P) - min(P))

where P(x, y) is the pixel value at point (x, y) after normalization, P̃(x, y) is the original pixel value at point (x, y), P is the set of pixel values of all points in the image, max(P) is the maximum pixel value, and min(P) is the minimum pixel value;
S6. Input the processed data from step S5 into the final generative model to generate the vessel-enhanced color fundus image.

Claims (8)

1. A blood vessel enhancement method for color fundus images, comprising the following steps:
S1. obtaining training data, the training data comprising color fundus images of several patients, each paired with the corresponding fundus fluorescein angiography image of the same person;
S2. processing the training data obtained in step S1;
S3. inputting the processed data from step S2 into a generative model and training the model, thereby obtaining the final generative model;
S4. obtaining the data to be enhanced;
S5. processing the data to be enhanced;
S6. inputting the processed data from step S5 into the final generative model to generate the vessel-enhanced color fundus image.
2. The blood vessel enhancement method for color fundus images according to claim 1, characterized in that the processing of the training data obtained in step S1, described in step S2, specifically comprises the following steps:
A. normalizing the training data obtained in step S1;
B. cropping the normalized training data.
3. The blood vessel enhancement method for color fundus images according to claim 2, characterized in that the normalization described in step A specifically uses the following formula:

P(x, y) = (P̃(x, y) - min(P)) / (max(P) - min(P))

where P(x, y) is the pixel value at point (x, y) after normalization, P̃(x, y) is the original pixel value at point (x, y), P is the set of pixel values of all points in the image, max(P) is the maximum pixel value, and min(P) is the minimum pixel value.
4. The blood vessel enhancement method for color fundus images according to claim 2, characterized in that the cropping of the normalized training data described in step B specifically cuts each normalized image into smaller images, thereby increasing the number of training samples and improving the robustness of the model.
5. The blood vessel enhancement method for color fundus images according to any one of claims 1 to 4, characterized in that the generative model described in step S3 is specifically a cycle-consistent generative adversarial network.
6. The blood vessel enhancement method for color fundus images according to claim 5, characterized in that the cycle-consistent generative adversarial network specifically comprises two generator models, two discriminator models, and three loss functions:
generator models: G: X → Y and F: Y → X, where X is the original color fundus image, Y is the corresponding fundus fluorescein angiography image, X̂ = F(Y) is the generated color fundus image, and Ŷ = G(X) is the generated fundus fluorescein angiography image; G and F are the generator models;
discriminator models: D_X and D_Y, where D_X distinguishes X from F(Y), and D_Y distinguishes Y from G(X);
loss functions:
the adversarial loss functions L_GAN(G, D_Y, X, Y) and L_GAN(F, D_X, Y, X):

L_GAN(G, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 - D_Y(G(x)))]

L_GAN(F, D_X, Y, X) = E_{x~p_data(x)}[log D_X(x)] + E_{y~p_data(y)}[log(1 - D_X(F(y)))]

where x is a sample from X, y is a sample from Y, y ~ p_data(y) indicates that y is drawn from Y, x ~ p_data(x) indicates that x is drawn from X, and E denotes expectation;
the cycle-consistency loss function L_cyc(G, F):

L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1]

where || ||_1 denotes the L1 norm;
the regularization loss function L_L1:

L_L1 = L_L1(G) + L_L1(F)

where L_L1(G) = E_{x,y}[||y - G(x)||_1] and L_L1(F) = E_{y,x}[||x - F(y)||_1].
7. The blood vessel enhancement method for color fundus images according to claim 6, characterized in that the processing of the data to be enhanced described in step S5 specifically normalizes the data to be enhanced.
8. The blood vessel enhancement method for color fundus images according to claim 7, characterized in that the normalization specifically uses the following formula:

P(x, y) = (P̃(x, y) - min(P)) / (max(P) - min(P))

where P(x, y) is the pixel value at point (x, y) after normalization, P̃(x, y) is the original pixel value at point (x, y), P is the set of pixel values of all points in the image, max(P) is the maximum pixel value, and min(P) is the minimum pixel value.
CN201910117094.3A 2019-02-15 2019-02-15 Blood vessel enhancement method for color fundus image Active CN109859139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910117094.3A CN109859139B (en) 2019-02-15 2019-02-15 Blood vessel enhancement method for color fundus image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910117094.3A CN109859139B (en) 2019-02-15 2019-02-15 Blood vessel enhancement method for color fundus image

Publications (2)

Publication Number Publication Date
CN109859139A true CN109859139A (en) 2019-06-07
CN109859139B CN109859139B (en) 2022-12-09

Family

ID=66897971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910117094.3A Active CN109859139B (en) 2019-02-15 2019-02-15 Blood vessel enhancement method for color fundus image

Country Status (1)

Country Link
CN (1) CN109859139B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292270A (en) * 2020-02-18 2020-06-16 广州柏视医疗科技有限公司 Three-dimensional image blood vessel enhancement method based on deep learning network
CN111612856A (en) * 2020-05-25 2020-09-01 中南大学 Retina neovascularization detection method and imaging method for color fundus image
CN112037217A (en) * 2020-09-09 2020-12-04 南京诺源医疗器械有限公司 Intraoperative blood flow imaging method based on fluorescence imaging
CN117876242A (en) * 2024-03-11 2024-04-12 深圳大学 Fundus image enhancement method, fundus image enhancement device, fundus image enhancement apparatus, and fundus image enhancement program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2312581C1 (en) * 2006-06-23 2007-12-20 ГУ Научно-исследовательский институт глазных болезней РАМН Method for diagnosing pathological changes in macular region
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2312581C1 (en) * 2006-06-23 2007-12-20 ГУ Научно-исследовательский институт глазных болезней РАМН Method for diagnosing pathological changes in macular region
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292270A (en) * 2020-02-18 2020-06-16 广州柏视医疗科技有限公司 Three-dimensional image blood vessel enhancement method based on deep learning network
CN111612856A (en) * 2020-05-25 2020-09-01 中南大学 Retina neovascularization detection method and imaging method for color fundus image
CN111612856B (en) * 2020-05-25 2023-04-18 中南大学 Retina neovascularization detection method and imaging method for color fundus image
CN112037217A (en) * 2020-09-09 2020-12-04 南京诺源医疗器械有限公司 Intraoperative blood flow imaging method based on fluorescence imaging
CN117876242A (en) * 2024-03-11 2024-04-12 深圳大学 Fundus image enhancement method, fundus image enhancement device, fundus image enhancement apparatus, and fundus image enhancement program
CN117876242B (en) * 2024-03-11 2024-05-28 深圳大学 Fundus image enhancement method, fundus image enhancement device, fundus image enhancement apparatus, and fundus image enhancement program

Also Published As

Publication number Publication date
CN109859139B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN109859139A (en) The blood vessel Enhancement Method of colored eye fundus image
CN106920227B (en) The Segmentation Method of Retinal Blood Vessels combined based on deep learning with conventional method
CN107358605B (en) The deep neural network apparatus and system of diabetic retinopathy for identification
CN110197493A (en) Eye fundus image blood vessel segmentation method
CN109166126A (en) A method of paint crackle is divided on ICGA image based on condition production confrontation network
CN108172291A (en) Diabetic retinopathy identifying system based on eye fundus image
Esfahani et al. Classification of diabetic and normal fundus images using new deep learning method
CN109448006A (en) A kind of U-shaped intensive connection Segmentation Method of Retinal Blood Vessels of attention mechanism
CN106682389B (en) A kind of Eye disease for monitoring hypertension initiation is health management system arranged
CN111259982A (en) Premature infant retina image classification method and device based on attention mechanism
CN108806792A (en) Deep learning facial diagnosis system
CN108021916A (en) Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN109859172A (en) Based on the sugared net lesion of eyeground contrastographic picture deep learning without perfusion area recognition methods
CN108537282A (en) A kind of diabetic retinopathy stage division using extra lightweight SqueezeNet networks
CN108986106A (en) Retinal vessel automatic division method towards glaucoma clinical diagnosis
CN112017185B (en) Focus segmentation method, device and storage medium
CN109635618A (en) Visible images vein developing method based on convolutional neural networks
CN109726743A (en) A kind of retina OCT image classification method based on Three dimensional convolution neural network
Saleh et al. Transfer learning‐based platform for detecting multi‐classification retinal disorders using optical coherence tomography images
Nayak et al. Automatic identification of diabetic maculopathy stages using fundus images
Ram et al. The relationship between Fully Connected Layers and number of classes for the analysis of retinal images
CN106780439A (en) A kind of method for screening eye fundus image
Firke et al. Convolutional neural network for diabetic retinopathy detection
CN110013216A (en) A kind of artificial intelligence cataract analysis system
CN113887662A (en) Image classification method, device, equipment and medium based on residual error network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant