Fundus image quality evaluation method
Technical field
The present invention relates to image quality evaluation methods, and more particularly to a fundus image quality evaluation method.
Background art
A fundus image is captured by a dedicated fundus camera and contains the major physiological structures of the retina, such as the optic disc, the macula and the blood vessels; it is an important class of medical image. In a normal fundus image, the optic disc appears as an approximately circular light-colored region with the strongest contrast against the background, and it is the origin of the optic nerve and the blood vessels. Because the macula is rich in lutein, it appears in a normal fundus image as a dark region without vessel structure; the inwardly recessed region at the center of the macula is known as the fovea. The blood vessels start from the optic disc region and extend through the whole eyeball, presenting a tree-like distribution over the entire fundus image; the vessels in the optic disc region are the thickest and densest and extend roughly along the vertical direction.
Although fundus image quality evaluation methods are becoming increasingly mature, the quality evaluation of most fundus images still depends on the subjective judgement of medical researchers. While such subjective judgement has some reference value for evaluating the overall quality of a fundus image, it is difficult to evaluate subjectively how fundus image quality affects segmentation. Fundus images support applications such as extracting vessel information, locating the optic disc and locating the macular region, all of which provide a great deal of information for the diagnosis and treatment of ophthalmic diseases. During acquisition and transmission, however, a fundus image is inevitably corrupted by noise, which causes a loss of effective retinal information in the acquired image and in turn affects the doctor's diagnosis. It is therefore particularly important to study how distortion affects fundus image segmentation.
Summary of the invention
The technical problem to be solved by the present invention is to provide a fundus image quality evaluation method that can accurately and automatically evaluate fundus image quality and effectively improves the correlation between the objective evaluation results and subjective perception.
The technical solution adopted by the present invention to solve the above technical problem is a fundus image quality evaluation method, characterized by comprising a training stage and a test stage.
The specific steps of the training stage are as follows:
Step 1_1: Select N original fundus images together with the true vessel segmentation image of every original fundus image, and denote the true vessel segmentation image of the u-th original fundus image as M_u. Then apply to every original fundus image L different grades of blur distortion, L different grades of over-exposure distortion and L different grades of under-exposure distortion, so that every original fundus image yields 3L corresponding distorted fundus images (L blurred, L over-exposed and L under-exposed); denote the v-th distorted fundus image corresponding to the u-th original fundus image as S_u,v. Here N > 1, u is a positive integer with 1 ≤ u ≤ N, L > 1, v is a positive integer with 1 ≤ v ≤ 3L, and M_u and S_u,v both have width W and height H.
Step 1_2: Using a vessel segmentation method based on matched filtering, perform vessel segmentation on every distorted fundus image corresponding to every original fundus image, obtaining the vessel segmentation image of every distorted fundus image; denote the vessel segmentation image of S_u,v as Q_u,v. Then, according to the true vessel segmentation image of every original fundus image, calculate the segmentation accuracy of the vessel segmentation image of every distorted fundus image; denote the segmentation accuracy of Q_u,v as ρ_u,v, ρ_u,v = TP/(TP + FP + FN). Here Q_u,v has width W and height H; TP is the number of pixels detected as vessel in Q_u,v whose corresponding pixels in M_u are also vessel; FP is the number of pixels detected as vessel in Q_u,v whose corresponding pixels in M_u are non-vessel; FN is the number of pixels detected as non-vessel in Q_u,v whose corresponding pixels in M_u are vessel.
Step 1_3: Form a training set from all distorted fundus images together with the segmentation accuracies of their vessel segmentation images, denoted {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}. Here v' is a positive integer, 1 ≤ v' ≤ N×3L, N×3L is the total number of distorted fundus images, S_v' is the v'-th distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}, and ρ_v' is the segmentation accuracy of the vessel segmentation image of the v'-th distorted fundus image.
Step 1_4: Calculate the statistical feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the statistical feature vector of S_v' as F_v'^s, whose dimension is 5×1.
Step 1_5: Calculate the texture feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the texture feature vector of S_v' as F_v'^t, whose dimension is 4×1.
Step 1_6: Calculate the shape feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the shape feature vector of S_v' as F_v'^p, whose dimension is 2×1.
Step 1_7: For every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}, arrange its statistical feature vector, texture feature vector and shape feature vector in order to form its feature vector; denote the feature vector of S_v' as F_v', F_v' = [(F_v'^s)^T, (F_v'^t)^T, (F_v'^p)^T]^T. Here F_v' has dimension 11×1, and (F_v'^s)^T, (F_v'^t)^T and (F_v'^p)^T are the transposes of F_v'^s, F_v'^t and F_v'^p respectively.
Step 1_8: Form a training sample data set from the segmentation accuracies of the vessel segmentation images of all distorted fundus images in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L} together with their feature vectors; the training sample data set contains N×3L segmentation accuracies and N×3L feature vectors. Then, with support vector regression as the machine-learning method, train on all feature vectors in the training sample data set so that the error between the regression function value obtained by training and the segmentation accuracy is minimized, fitting an optimal weight vector w_opt and an optimal bias term b_opt. Finally, using w_opt and b_opt, construct the prediction model, denoted f(F), f(F) = (w_opt)^T φ(F) + b_opt. Here f(·) is the function form, F denotes the feature vector of a distorted fundus image and serves as the input vector of the prediction model, (w_opt)^T is the transpose of w_opt, and φ(F) is the linear function of F.
The specific steps of the test stage are as follows:
Step 2: Denote any distorted fundus image used for testing as S_test. Then, with the same operations as steps 1_4 to 1_7, obtain the feature vector of S_test, denoted F_test. Next, test F_test with the prediction model f(F) constructed in the training stage; prediction yields the value corresponding to F_test, which is taken as the segmentation accuracy value of S_test and denoted ρ_test, ρ_test = f(F_test) = (w_opt)^T φ(F_test) + b_opt. Here S_test has width W' and height H', F_test has dimension 11×1, and φ(F_test) is the linear function of F_test.
The statistical feature vector F_v'^s in step 1_4 is obtained as follows:
Step 1_4a: Calculate the mean, standard deviation, skewness, kurtosis and information entropy features of S_v', denoted f1, f2, f3, f4 and f5 respectively:
f1 = (1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} S_v'(x, y)
f2 = sqrt((1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} (S_v'(x, y) - f1)^2)
f3 = (1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} ((S_v'(x, y) - f1)/f2)^3
f4 = (1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} ((S_v'(x, y) - f1)/f2)^4
f5 = -Σ_{j=1..J} p[S_v'(j)] log2 p[S_v'(j)]
Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, S_v'(x, y) is the pixel value of the pixel at coordinate (x, y) in S_v', j is a positive integer, 1 ≤ j ≤ J, J is the total number of grey levels contained in S_v', S_v'(j) is the grey value of the j-th grey level in S_v', and p[S_v'(j)] = K_j/(W×H) is the probability that S_v'(j) occurs in S_v', where K_j is the total number of pixels in S_v' whose grey value equals S_v'(j).
Step 1_4b: Arrange f1, f2, f3, f4 and f5 in order to form the statistical feature vector of S_v': F_v'^s = [f1, f2, f3, f4, f5]^T, where [f1, f2, f3, f4, f5]^T is the transpose of [f1, f2, f3, f4, f5].
The texture feature vector F_v'^t in step 1_5 is obtained as follows:
Step 1_5a: Scan all pixels of S_v' along the horizontal 0° direction to obtain the grey-level co-occurrence matrix of S_v' in the horizontal 0° direction, denoted {p_0°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}. Here j1 and j2 are positive integers, 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J, J is the total number of grey levels contained in S_v', and p_0°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the horizontal 0° direction in S_v'.
Step 1_5b: From {p_0°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the horizontal 0° direction, denoted C_0°, R_0°, E_0° and H_0° respectively:
C_0° = Σ_{j1=1..J} Σ_{j2=1..J} (j1 - j2)^2 p_0°(j1, j2)
R_0° = Σ_{j1=1..J} Σ_{j2=1..J} (j1 - μ1,0°)(j2 - μ2,0°) p_0°(j1, j2)/(σ1,0° σ2,0°)
E_0° = Σ_{j1=1..J} Σ_{j2=1..J} (p_0°(j1, j2))^2
H_0° = Σ_{j1=1..J} Σ_{j2=1..J} p_0°(j1, j2)/(1 + |j1 - j2|)
Here μ1,0° = Σ_{j1=1..J} Σ_{j2=1..J} j1 p_0°(j1, j2) is the first mean of S_v' in the horizontal 0° direction, μ2,0° = Σ_{j1=1..J} Σ_{j2=1..J} j2 p_0°(j1, j2) is the second mean, σ1,0° = sqrt(Σ_{j1=1..J} Σ_{j2=1..J} (j1 - μ1,0°)^2 p_0°(j1, j2)) is the first standard deviation, σ2,0° = sqrt(Σ_{j1=1..J} Σ_{j2=1..J} (j2 - μ2,0°)^2 p_0°(j1, j2)) is the second standard deviation, and the symbol "| |" denotes absolute value.
Step 1_5c: Scan all pixels of S_v' along the right-diagonal 45° direction to obtain the grey-level co-occurrence matrix of S_v' in the right-diagonal 45° direction, denoted {p_45°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_45°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the right-diagonal 45° direction in S_v'.
Step 1_5d: From {p_45°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the right-diagonal 45° direction, denoted C_45°, R_45°, E_45° and H_45° respectively; they are computed as in step 1_5b with p_0°(j1, j2) replaced by p_45°(j1, j2), and with the first mean μ1,45°, second mean μ2,45°, first standard deviation σ1,45° and second standard deviation σ2,45° likewise computed from p_45°(j1, j2).
Step 1_5e: Scan all pixels of S_v' along the vertical 90° direction to obtain the grey-level co-occurrence matrix of S_v' in the vertical 90° direction, denoted {p_90°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_90°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the vertical 90° direction in S_v'.
Step 1_5f: From {p_90°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the vertical 90° direction, denoted C_90°, R_90°, E_90° and H_90° respectively; they are computed as in step 1_5b with p_0°(j1, j2) replaced by p_90°(j1, j2), and with the first mean μ1,90°, second mean μ2,90°, first standard deviation σ1,90° and second standard deviation σ2,90° likewise computed from p_90°(j1, j2).
Step 1_5g: Scan all pixels of S_v' along the left-diagonal 135° direction to obtain the grey-level co-occurrence matrix of S_v' in the left-diagonal 135° direction, denoted {p_135°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_135°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the left-diagonal 135° direction in S_v'.
Step 1_5h: From {p_135°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the left-diagonal 135° direction, denoted C_135°, R_135°, E_135° and H_135° respectively; they are computed as in step 1_5b with p_0°(j1, j2) replaced by p_135°(j1, j2), and with the first mean μ1,135°, second mean μ2,135°, first standard deviation σ1,135° and second standard deviation σ2,135° likewise computed from p_135°(j1, j2).
Step 1_5i: Calculate the contrast, correlation, angular second moment and homogeneity features of S_v', denoted f6, f7, f8 and f9 respectively, by averaging over the four directions:
f6 = (C_0° + C_45° + C_90° + C_135°)/4
f7 = (R_0° + R_45° + R_90° + R_135°)/4
f8 = (E_0° + E_45° + E_90° + E_135°)/4
f9 = (H_0° + H_45° + H_90° + H_135°)/4
Step 1_5j: Arrange f6, f7, f8 and f9 in order to form the texture feature vector of S_v': F_v'^t = [f6, f7, f8, f9]^T, where [f6, f7, f8, f9]^T is the transpose of [f6, f7, f8, f9].
The shape feature vector F_v'^p in step 1_6 is obtained as follows:
Step 1_6a: Calculate the horizontal gradient image and the vertical gradient image of S_v', denoted G_X and G_Y respectively; the pixel value of the pixel at coordinate (x, y) in G_X is denoted G_X(x, y) and is obtained by convolving S_v' with a horizontal gradient operator, and the pixel value of the pixel at coordinate (x, y) in G_Y is denoted G_Y(x, y) and is obtained by convolving S_v' with a vertical gradient operator. Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, the symbol ⊗ denotes the convolution operation, and S_v'(x, y) is the pixel value of the pixel at coordinate (x, y) in S_v'.
Step 1_6b: From G_X and G_Y, calculate the gradient magnitude image of S_v', denoted G; the pixel value of the pixel at coordinate (x, y) in G is denoted G(x, y), G(x, y) = sqrt(G_X(x, y)^2 + G_Y(x, y)^2).
Step 1_6c: Calculate the vessel gradient variance feature and the vessel tortuosity feature of S_v', denoted f10 and f11 respectively; f10 is the variance of G(x, y) over the n vessel pixels of S_v', and f11 = (1/(n-1)) Σ_{i=1..n-1} |θ_{i+1} - θ_i|. Here n is the number of vessel pixels in S_v', i is a positive integer, 1 ≤ i ≤ n-1, the symbol "| |" denotes absolute value, θ_i is the tangent angle of the i-th vessel pixel in S_v', and θ_{i+1} is the tangent angle of the (i+1)-th vessel pixel in S_v'.
Step 1_6d: Arrange f10 and f11 in order to form the shape feature vector of S_v': F_v'^p = [f10, f11]^T, where [f10, f11]^T is the transpose of [f10, f11].
Compared with the prior art, the advantages of the present invention are as follows:
The method of the present invention considers the influence of blur, over-exposure and under-exposure on fundus image segmentation precision. It extracts a statistical feature vector, a texture feature vector and a shape feature vector to form the feature vector, and then trains on the feature vectors of the distorted fundus images using support vector regression to construct a prediction model. In the test stage, the feature vector of the distorted fundus image under test is calculated and, with the prediction model constructed in the training stage, its segmentation accuracy value is predicted. Because the extracted feature vector reflects well how the distortion of a distorted fundus image changes the segmentation accuracy, the method effectively improves the correlation between the predicted segmentation accuracy value and the true segmentation accuracy value; it can accurately and automatically evaluate fundus image quality and effectively improves the correlation between the objective evaluation results and subjective perception.
Description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the drawings and embodiments.
The fundus image quality evaluation method proposed by the present invention, whose overall implementation block diagram is shown in Fig. 1, comprises a training stage and a test stage.
The specific steps of the training stage are as follows:
Step 1_1: Select N original fundus images together with the true vessel segmentation image of every original fundus image, and denote the true vessel segmentation image of the u-th original fundus image as M_u. Then apply to every original fundus image L different grades of blur distortion, L different grades of over-exposure distortion and L different grades of under-exposure distortion, so that every original fundus image yields 3L corresponding distorted fundus images (L blurred, L over-exposed and L under-exposed); denote the v-th distorted fundus image corresponding to the u-th original fundus image as S_u,v. Here N > 1 (N = 20 in this embodiment), u is a positive integer with 1 ≤ u ≤ N, L > 1 (L = 8 in this embodiment), v is a positive integer with 1 ≤ v ≤ 3L, and M_u and S_u,v both have width W and height H.
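Step 1_1 can be sketched as follows. The patent only states that L grades of each distortion are applied; the per-grade parameters below (Gaussian blur sigma step, brightness gain steps) are illustrative assumptions, not the patent's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_distortions(image, L=8):
    """Produce the 3*L distorted versions of one original fundus image
    (step 1_1): L grades of blur, L of over-exposure, L of under-exposure.
    `image` is a 2-D array with values in [0, 1]."""
    out = []
    for k in range(1, L + 1):
        out.append(gaussian_filter(image, sigma=0.5 * k))        # blur grade k
    for k in range(1, L + 1):
        out.append(np.clip(image * (1.0 + 0.15 * k), 0.0, 1.0))  # over-exposure grade k
    for k in range(1, L + 1):
        out.append(np.clip(image * (1.0 - 0.1 * k), 0.0, 1.0))   # under-exposure grade k
    return out
```

With L = 8 this yields the 24 distorted images per original image used in the embodiment.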
Step 1_2: Using an existing vessel segmentation method based on matched filtering, perform vessel segmentation on every distorted fundus image corresponding to every original fundus image, obtaining the vessel segmentation image of every distorted fundus image; denote the vessel segmentation image of S_u,v as Q_u,v. Then, according to the true vessel segmentation image of every original fundus image, calculate the segmentation accuracy of the vessel segmentation image of every distorted fundus image; denote the segmentation accuracy of Q_u,v as ρ_u,v, ρ_u,v = TP/(TP + FP + FN). Here Q_u,v has width W and height H; TP is the number of pixels detected as vessel in Q_u,v whose corresponding pixels in M_u are also vessel; FP is the number of pixels detected as vessel in Q_u,v whose corresponding pixels in M_u are non-vessel; FN is the number of pixels detected as non-vessel in Q_u,v whose corresponding pixels in M_u are vessel. TP, FP and FN are obtained by counting.
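The accuracy computation of step 1_2 can be sketched as below. The TP/FP/FN counts follow the definitions above; the overlap form ρ = TP/(TP + FP + FN) is the reconstruction assumed here.

```python
import numpy as np

def segmentation_accuracy(Q, M):
    """Segmentation accuracy of a binary vessel map Q against the
    ground-truth vessel map M (True = vessel), per step 1_2."""
    Q = np.asarray(Q, dtype=bool)
    M = np.asarray(M, dtype=bool)
    TP = int(np.sum(Q & M))    # vessel in Q, vessel in M
    FP = int(np.sum(Q & ~M))   # vessel in Q, non-vessel in M
    FN = int(np.sum(~Q & M))   # non-vessel in Q, vessel in M
    return TP / (TP + FP + FN)
```

A perfect segmentation gives ρ = 1; the value decreases as false positives and false negatives accumulate.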
Step 1_3: Form a training set from all distorted fundus images together with the segmentation accuracies of their vessel segmentation images, denoted {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}. Here v' is a positive integer, 1 ≤ v' ≤ N×3L, N×3L is the total number of distorted fundus images, S_v' is the v'-th distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}, and ρ_v' is the segmentation accuracy of the vessel segmentation image of the v'-th distorted fundus image.
Step 1_4: Calculate the statistical feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the statistical feature vector of S_v' as F_v'^s, whose dimension is 5×1.
In this embodiment, the statistical feature vector F_v'^s in step 1_4 is obtained as follows:
Step 1_4a: Calculate the mean, standard deviation, skewness, kurtosis and information entropy features of S_v', denoted f1, f2, f3, f4 and f5 respectively:
f1 = (1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} S_v'(x, y)
f2 = sqrt((1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} (S_v'(x, y) - f1)^2)
f3 = (1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} ((S_v'(x, y) - f1)/f2)^3
f4 = (1/(W×H)) Σ_{x=1..W} Σ_{y=1..H} ((S_v'(x, y) - f1)/f2)^4
f5 = -Σ_{j=1..J} p[S_v'(j)] log2 p[S_v'(j)]
Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, S_v'(x, y) is the pixel value of the pixel at coordinate (x, y) in S_v', j is a positive integer, 1 ≤ j ≤ J, J is the total number of grey levels contained in S_v', S_v'(j) is the grey value of the j-th grey level in S_v', and p[S_v'(j)] = K_j/(W×H) is the probability that S_v'(j) occurs in S_v', where K_j is the total number of pixels in S_v' whose grey value equals S_v'(j).
Step 1_4b: Arrange f1, f2, f3, f4 and f5 in order to form the statistical feature vector of S_v': F_v'^s = [f1, f2, f3, f4, f5]^T, where [f1, f2, f3, f4, f5]^T is the transpose of [f1, f2, f3, f4, f5].
Step 1_5: Calculate the texture feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the texture feature vector of S_v' as F_v'^t, whose dimension is 4×1.
In this embodiment, the texture feature vector F_v'^t in step 1_5 is obtained as follows:
Step 1_5a: Scan all pixels of S_v' along the horizontal 0° direction to obtain the grey-level co-occurrence matrix of S_v' in the horizontal 0° direction, denoted {p_0°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}. Here j1 and j2 are positive integers, 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J, J is the total number of grey levels contained in S_v', and p_0°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the horizontal 0° direction in S_v'.
Step 1_5b: From {p_0°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the horizontal 0° direction, denoted C_0°, R_0°, E_0° and H_0° respectively:
C_0° = Σ_{j1=1..J} Σ_{j2=1..J} (j1 - j2)^2 p_0°(j1, j2)
R_0° = Σ_{j1=1..J} Σ_{j2=1..J} (j1 - μ1,0°)(j2 - μ2,0°) p_0°(j1, j2)/(σ1,0° σ2,0°)
E_0° = Σ_{j1=1..J} Σ_{j2=1..J} (p_0°(j1, j2))^2
H_0° = Σ_{j1=1..J} Σ_{j2=1..J} p_0°(j1, j2)/(1 + |j1 - j2|)
Here μ1,0° = Σ_{j1=1..J} Σ_{j2=1..J} j1 p_0°(j1, j2) is the first mean of S_v' in the horizontal 0° direction, μ2,0° = Σ_{j1=1..J} Σ_{j2=1..J} j2 p_0°(j1, j2) is the second mean, σ1,0° = sqrt(Σ_{j1=1..J} Σ_{j2=1..J} (j1 - μ1,0°)^2 p_0°(j1, j2)) is the first standard deviation, σ2,0° = sqrt(Σ_{j1=1..J} Σ_{j2=1..J} (j2 - μ2,0°)^2 p_0°(j1, j2)) is the second standard deviation, and the symbol "| |" denotes absolute value.
Step 1_5c: Scan all pixels of S_v' along the right-diagonal 45° direction to obtain the grey-level co-occurrence matrix of S_v' in the right-diagonal 45° direction, denoted {p_45°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_45°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the right-diagonal 45° direction in S_v'.
Step 1_5d: From {p_45°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the right-diagonal 45° direction, denoted C_45°, R_45°, E_45° and H_45° respectively; they are computed as in step 1_5b with p_0°(j1, j2) replaced by p_45°(j1, j2), and with the first mean μ1,45°, second mean μ2,45°, first standard deviation σ1,45° and second standard deviation σ2,45° likewise computed from p_45°(j1, j2).
Step 1_5e: Scan all pixels of S_v' along the vertical 90° direction to obtain the grey-level co-occurrence matrix of S_v' in the vertical 90° direction, denoted {p_90°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_90°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the vertical 90° direction in S_v'.
Step 1_5f: From {p_90°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the vertical 90° direction, denoted C_90°, R_90°, E_90° and H_90° respectively; they are computed as in step 1_5b with p_0°(j1, j2) replaced by p_90°(j1, j2), and with the first mean μ1,90°, second mean μ2,90°, first standard deviation σ1,90° and second standard deviation σ2,90° likewise computed from p_90°(j1, j2).
Step 1_5g: Scan all pixels of S_v' along the left-diagonal 135° direction to obtain the grey-level co-occurrence matrix of S_v' in the left-diagonal 135° direction, denoted {p_135°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_135°(j1, j2) is the probability that a pixel with grey value j1 and a pixel with grey value j2 occur together along the left-diagonal 135° direction in S_v'.
Step 1_5h: From {p_135°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast, correlation, angular second moment and homogeneity features of S_v' in the left-diagonal 135° direction, denoted C_135°, R_135°, E_135° and H_135° respectively; they are computed as in step 1_5b with p_0°(j1, j2) replaced by p_135°(j1, j2), and with the first mean μ1,135°, second mean μ2,135°, first standard deviation σ1,135° and second standard deviation σ2,135° likewise computed from p_135°(j1, j2).
Step 1_5i: Calculate the contrast, correlation, angular second moment and homogeneity features of S_v', denoted f6, f7, f8 and f9 respectively, by averaging over the four directions:
f6 = (C_0° + C_45° + C_90° + C_135°)/4
f7 = (R_0° + R_45° + R_90° + R_135°)/4
f8 = (E_0° + E_45° + E_90° + E_135°)/4
f9 = (H_0° + H_45° + H_90° + H_135°)/4
Step 1_5j: Arrange f6, f7, f8 and f9 in order to form the texture feature vector of S_v': F_v'^t = [f6, f7, f8, f9]^T, where [f6, f7, f8, f9]^T is the transpose of [f6, f7, f8, f9].
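Steps 1_5a through 1_5j can be sketched as below. The contrast, correlation, angular second moment and homogeneity formulas are the standard grey-level co-occurrence matrix (GLCM) definitions, assumed here to match the patent; the number of quantized grey levels is an assumption.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized grey-level co-occurrence matrix of an integer image
    for the pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def texture_features(img, levels=8):
    """The four texture features f6..f9 of step 1_5i, averaged over the
    0, 45, 90 and 135 degree directions."""
    mx = float(np.max(img))
    q = (np.zeros_like(img, dtype=int) if mx == 0
         else (np.asarray(img, np.float64) / mx * (levels - 1)).astype(int))
    j1, j2 = np.indices((levels, levels))
    C = R = E = H = 0.0
    for dx, dy in [(1, 0), (1, -1), (0, -1), (-1, -1)]:  # 0, 45, 90, 135 degrees
        p = glcm(q, dx, dy, levels)
        mu1, mu2 = (j1 * p).sum(), (j2 * p).sum()        # first and second means
        s1 = np.sqrt(((j1 - mu1) ** 2 * p).sum())        # first standard deviation
        s2 = np.sqrt(((j2 - mu2) ** 2 * p).sum())        # second standard deviation
        C += ((j1 - j2) ** 2 * p).sum()                               # contrast
        R += ((j1 - mu1) * (j2 - mu2) * p).sum() / (s1 * s2 + 1e-12)  # correlation
        E += (p ** 2).sum()                                           # angular second moment
        H += (p / (1.0 + np.abs(j1 - j2))).sum()                      # homogeneity
    return np.array([C, R, E, H]) / 4.0
```

Both the angular second moment and the homogeneity averages lie in (0, 1] by construction.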
Step 1_6: Calculate the shape feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the shape feature vector of S_v' as F_v'^p, whose dimension is 2×1.
In this embodiment, the shape feature vector F_v'^p in step 1_6 is obtained as follows:
Step 1_6a: Calculate the horizontal gradient image and the vertical gradient image of S_v', denoted G_X and G_Y respectively; the pixel value of the pixel at coordinate (x, y) in G_X is denoted G_X(x, y) and is obtained by convolving S_v' with a horizontal gradient operator, and the pixel value of the pixel at coordinate (x, y) in G_Y is denoted G_Y(x, y) and is obtained by convolving S_v' with a vertical gradient operator. Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, the symbol ⊗ denotes the convolution operation, and S_v'(x, y) is the pixel value of the pixel at coordinate (x, y) in S_v'.
Step 1_6b: From G_X and G_Y, calculate the gradient magnitude image of S_v', denoted G; the pixel value of the pixel at coordinate (x, y) in G is denoted G(x, y), G(x, y) = sqrt(G_X(x, y)^2 + G_Y(x, y)^2).
Step 1_6c: Calculate the vessel gradient variance feature and the vessel tortuosity feature of S_v', denoted f10 and f11 respectively; f10 is the variance of G(x, y) over the n vessel pixels of S_v', and f11 = (1/(n-1)) Σ_{i=1..n-1} |θ_{i+1} - θ_i|. Here n is the number of vessel pixels in S_v', i is a positive integer, 1 ≤ i ≤ n-1, the symbol "| |" denotes absolute value, θ_i is the tangent angle of the i-th vessel pixel in S_v', and θ_{i+1} is the tangent angle of the (i+1)-th vessel pixel in S_v'.
Step 1_6d: Arrange f10 and f11 in order to form the shape feature vector of S_v': F_v'^p = [f10, f11]^T, where [f10, f11]^T is the transpose of [f10, f11].
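Steps 1_6a through 1_6d can be sketched as below. The central-difference gradient kernels and the mean-absolute-angle-difference tortuosity are assumptions; the patent names the quantities (gradient images, gradient magnitude, vessel gradient variance, vessel tortuosity) without fixing the kernels.

```python
import numpy as np

def shape_features(img, vessel_mask, tangent_angles):
    """The two shape features f10, f11 of step 1_6c: variance of the
    gradient magnitude over vessel pixels, and a tortuosity measure from
    consecutive vessel tangent angles."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient image G_X
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient image G_Y
    g = np.sqrt(gx ** 2 + gy ** 2)           # gradient magnitude image G
    f10 = g[vessel_mask].var()               # vessel gradient variance
    a = np.asarray(tangent_angles, dtype=np.float64)
    f11 = np.abs(np.diff(a)).mean() if a.size > 1 else 0.0  # tortuosity
    return np.array([f10, f11])              # the vector of step 1_6d
```

Here `vessel_mask` and `tangent_angles` are assumed inputs: a boolean vessel map from the segmentation of step 1_2, and the tangent angles θ_i traced along the vessel pixels.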
Step 1_7: For every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}, arrange its statistical feature vector, texture feature vector and shape feature vector in order to form its feature vector; denote the feature vector of S_v' as F_v', F_v' = [(F_v'^s)^T, (F_v'^t)^T, (F_v'^p)^T]^T. Here F_v' has dimension 11×1, and (F_v'^s)^T, (F_v'^t)^T and (F_v'^p)^T are the transposes of F_v'^s, F_v'^t and F_v'^p respectively.
Step 1_8: Form a training sample data set from the segmentation accuracies of the vessel segmentation images of all distorted fundus images in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L} together with their feature vectors; the training sample data set contains N×3L segmentation accuracies and N×3L feature vectors. Then, with support vector regression as the machine-learning method, train on all feature vectors in the training sample data set so that the error between the regression function value obtained by training and the segmentation accuracy is minimized, fitting an optimal weight vector w_opt and an optimal bias term b_opt. Finally, using w_opt and b_opt, construct the prediction model, denoted f(F), f(F) = (w_opt)^T φ(F) + b_opt. Here f(·) is the function form, F denotes the feature vector of a distorted fundus image and serves as the input vector of the prediction model, (w_opt)^T is the transpose of w_opt, and φ(F) is the linear function of F.
The specific steps of the test stage are as follows:
Step 2: Denote any distorted fundus image used for testing as S_test. Then, with the same operations as steps 1_4 to 1_7, obtain the feature vector of S_test, denoted F_test. Next, test F_test with the prediction model f(F) constructed in the training stage; prediction yields the value corresponding to F_test, which is taken as the segmentation accuracy value of S_test and denoted ρ_test, ρ_test = f(F_test) = (w_opt)^T φ(F_test) + b_opt. Here S_test has width W' and height H', where W' may or may not equal W and H' may or may not equal H, F_test has dimension 11×1, and φ(F_test) is the linear function of F_test.
To further illustrate the feasibility and effectiveness of the method of the present invention, the method was tested. In this embodiment, the method was tested on the fundus image database established by Ningbo University. The database contains 20 original fundus images; applying 8 different grades of blur distortion, 8 different grades of over-exposure distortion and 8 different grades of under-exposure distortion to every original fundus image yields 480 distorted fundus images. Every distorted fundus image is assigned a segmentation accuracy value in the range [0, 1], where 1 indicates that the segmentation quality is excellent and 0 indicates that the segmentation quality is poor.
In this embodiment, 4 objective parameters commonly used for assessing image quality evaluation methods are adopted as evaluation indices: the Pearson linear correlation coefficient (PLCC) under nonlinear regression conditions, the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective quality prediction values, while SROCC and KROCC reflect their monotonicity. Table 1 gives the correlation between the segmentation accuracy values predicted by the method of the present invention and the true segmentation accuracy values. As can be seen from Table 1, even when different proportions of the original fundus images, together with all their distorted fundus images and the segmentation accuracies of the corresponding vessel segmentation images, are used to form the training set, the correlation between the predicted segmentation accuracy values and the true segmentation accuracy values remains very high, which is sufficient to illustrate the effectiveness of the method of the present invention.
Table 1 Correlation between the segmentation accuracy values predicted by the method of the present invention and the true segmentation accuracy values

Index   80% of fundus images   60% of fundus images   40% of fundus images   20% of fundus images
PLCC    0.9633                 0.9562                 0.9424                 0.9104
SROCC   0.9554                 0.9506                 0.9417                 0.9210
KROCC   0.8342                 0.8129                 0.7921                 0.7512
RMSE    5.5214                 6.4437                 7.3802                 9.1348