CN109377472A - A fundus image quality evaluation method - Google Patents

A fundus image quality evaluation method

Info

Publication number
CN109377472A
CN109377472A (application CN201811059964.8A)
Authority
CN
China
Prior art keywords
fundus image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811059964.8A
Other languages
Chinese (zh)
Other versions
CN109377472B (en)
Inventor
邵枫
杨伟山
李福翠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunshi Information Technology Co ltd
Dragon Totem Technology Hefei Co ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201811059964.8A priority Critical patent/CN109377472B/en
Publication of CN109377472A publication Critical patent/CN109377472A/en
Application granted granted Critical
Publication of CN109377472B publication Critical patent/CN109377472B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a fundus image quality evaluation method that accounts for the influence of blur, over-exposure and under-exposure on fundus image segmentation accuracy. A statistical feature vector, a texture feature vector and a shape feature vector are extracted and concatenated into a feature vector; support vector regression is then trained on the feature vectors of all distorted fundus images to construct a prediction model. In the test stage, the feature vector of the distorted fundus image under test is computed and, with the prediction model constructed in the training stage, the segmentation accuracy value of the distorted fundus image under test is predicted. Because the extracted feature vector reflects well how the distortion of a distorted fundus image changes its segmentation accuracy, the correlation between the predicted segmentation accuracy value and the true segmentation accuracy value is effectively improved: the method can evaluate fundus image quality automatically and accurately, and effectively improves the correlation between objective evaluation results and subjective perception.

Description

A fundus image quality evaluation method
Technical field
The present invention relates to an image quality evaluation method, and more particularly to a fundus image quality evaluation method.
Background technique
A fundus image is captured by a dedicated fundus camera and contains the major physiological structures of the retina, such as the optic disc, the macula and the blood vessels; it is an important class of medical image. The optic disc appears in a normal fundus image as an approximately circular light-colored region with the strongest contrast against the background, and is the origin of the optic nerve and of the blood vessels. The macula, rich in lutein, appears in a normal fundus image as a dark region without vascular structure; the inward depression at its center is called the fovea. The blood vessels emanate from the optic disc region, extend through the whole eyeball and form a tree-like distribution over the fundus image; the vessels in the optic disc region are the thickest and densest, and run roughly in the vertical direction.
Although fundus image quality evaluation methods are increasingly mature, most of them still rely on the subjective judgement of medical researchers. Such subjective judgement has some reference value for the overall quality of a fundus image, but it is difficult to assess subjectively how fundus image quality affects segmentation. Fundus images are used to extract vessel information and to locate the optic disc and the macular region, applications that all provide valuable information for the diagnosis and treatment of ophthalmic diseases; yet during acquisition and transmission a fundus image is inevitably corrupted by noise, which causes loss of the effective retinal information collected and in turn affects the physician's diagnosis. Studying the influence of distortion on fundus image segmentation is therefore particularly important.
Summary of the invention
The technical problem to be solved by the invention is to provide a fundus image quality evaluation method that can evaluate fundus image quality automatically and accurately, and effectively improves the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is a fundus image quality evaluation method, characterized by comprising two processes, a training stage and a test stage;
The specific steps of the training stage process are as follows:
1._1. Select N original fundus images together with the ground-truth vessel segmentation image of each original fundus image, and denote the ground-truth vessel segmentation image of the u-th original fundus image as M_u. Then apply to each original fundus image L grades of blur distortion, L grades of over-exposure distortion and L grades of under-exposure distortion, so that each original fundus image yields 3L corresponding distorted fundus images (L with blur distortion, L with over-exposure distortion and L with under-exposure distortion); denote the v-th distorted fundus image corresponding to the u-th original fundus image as S_{u,v}. Here N > 1, u is a positive integer with 1 ≤ u ≤ N, L > 1, v is a positive integer with 1 ≤ v ≤ 3L, and M_u and S_{u,v} both have width W and height H;
1. _ 2, using the blood vessel segmentation method based on matched filtering, every width corresponding to every original eye fundus image is lost True eye fundus image carries out blood vessel segmentation, obtains the blood vessel segmentation of the corresponding every width distortion eye fundus image of every original eye fundus image Image, by Su,vBlood vessel segmentation image be denoted as Qu,v;Then according to the real blood vessels segmented image of every original eye fundus image, The segmentation accuracy for calculating the blood vessel segmentation image of the corresponding every width distortion eye fundus image of every original eye fundus image, by Qu,v Segmentation accuracy be denoted as ρu,v,Wherein, Qu,vWidth be W and height is H, TP indicates Qu,vIn Blood vessel and corresponding pixel points are detected as in MuIn also for blood vessel pixel number, FP indicate Qu,vIn be detected as blood vessel but right Answer pixel in MuIn for non-vascular pixel number, FN indicate Qu,vIn be detected as non-vascular and corresponding pixel points in MuIn It also is the number of the pixel of non-vascular;
1. _ 3, all distortion eye fundus images and its segmentation accuracy composing training collection of blood vessel segmentation image are denoted as {Sv'v'|1≤v'≤N×3L};Wherein, v' is positive integer, and 1≤v'≤N × 3L, N × 3L are the total width for being distorted eye fundus image Number, Sv'Indicate { Sv'v'| 1≤v'≤N × 3L } in v' width be distorted eye fundus image, ρv'Indicate { Sv'v'|1≤v'≤N× 3L } in v' width distortion eye fundus image blood vessel segmentation image segmentation accuracy;
1. _ 4, calculating { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image statistical nature vector, by Sv' Statistical nature vector be denoted asWherein,Dimension be 5 × 1;
1. _ 5, calculating { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image textural characteristics vector, by Sv' Textural characteristics vector be denoted asWherein,Dimension be 4 × 1;
1. _ 6, calculating { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image shape feature vector, by Sv' Shape feature vector be denoted asWherein,Dimension be 2 × 1;
1. _ 7, by { Sv'v'| 1≤v'≤N × 3L } in every width distortion statistical nature vector of eye fundus image, texture it is special Levy vector, shape feature vector arranged in sequence constitutes { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image spy Vector is levied, by Sv'Characteristic vector be denoted as Fv',Wherein, Fv'Dimension be 11 × 1,ForTransposition,ForTransposition,ForTransposition,ForTransposition;
1. _ 8, by { Sv'v'| 1≤v'≤N × 3L } in all distortion eye fundus images respective blood vessel segmentation images Segmentation accuracy and characteristic vector composing training sample data sets include N × 3L segmentation standard in training sample data set Exactness and N × 3L characteristic vector;Then method of the support vector regression as machine learning is used, to training sample data collection All characteristic vectors in conjunction are trained, so that by the error between the obtained regression function value of training and segmentation accuracy Minimum, fitting obtain optimal weight vector woptWith optimal bias term bopt;Followed by optimal weight vector woptMost Excellent bias term bopt, structure forecast model is denoted as f (F),Wherein, f () is function representation Form, F are used to indicate the characteristic vector of distortion eye fundus image, and the input vector as prediction model, (wopt)TFor woptTurn It sets,For the linear function of F;
The specific steps of the test phase process are as follows:
2. Take any distorted fundus image as the image under test and denote it S_test. Then, following the same operations as steps 1._4 to 1._7, obtain the feature vector of S_test, denoted F_test. Then test F_test with the prediction model f(F) constructed in the training stage; the predicted value f(F_test) is taken as the segmentation accuracy value of S_test, denoted ρ_test = (w_opt)^T · φ(F_test) + b_opt. Here S_test has width W' and height H', F_test has dimension 11 × 1, and φ(F_test) is a linear function of F_test.
The step 1. _ 4 inAcquisition process are as follows:
1. _ 4a, calculating Sv'Characteristics of mean, standard deviation characteristic, skewness feature, kurtosis and Information Entropy Features, it is right F should be denoted as1、f2、f3、f4、f5, Wherein, 1≤x≤W, 1≤y≤H, Sv'(x, y) indicates Sv'Middle coordinate position is the pixel value of the pixel of (x, y), and j is positive integer, 1≤j≤J, J table Show Sv'Included in gray level total number, Sv'(j) S is indicatedv'In j-th of gray level gray value, p [Sv'(j)] it indicates Sv'(j) in Sv'The probability of middle appearance, Indicate Sv'Middle gray value is equal to Sv'(j) picture The total number of vegetarian refreshments;
1. _ 4b, by f1、f2、f3、f4、f5Arranged in sequence constitutes Sv'Statistical nature vector Wherein, [f1,f2,f3,f4,f5]TFor [f1,f2,f3,f4,f5] transposition.
The step 1. _ 5 inAcquisition process are as follows:
1. _ 5a, to Sv'In all pixels point be scanned in 0 ° of degree direction of level, obtain Sv'In 0 ° of degree direction of level Gray level co-occurrence matrixes, be denoted as { p(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, j1And j2It is positive integer, 1≤j1≤ J, 1≤j2≤ J, j1≠j2, J expression Sv'Included in gray level total number, p(j1,j2) indicate Sv'Middle gray value is j1 Pixel and gray value be j2Pixel 0 ° of level spend direction simultaneously occur probability;
1. _ 5b, basis { p(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'The contrast that direction is spent at 0 ° of level is special Sign, degree of correlation feature, angular second moment feature, homogeney feature, correspondence are denoted as C、R、EAnd H, Wherein,Indicate Sv'The first of direction is spent at 0 ° of level Mean value, Indicate Sv'Second mean value in direction is spent at 0 ° of level, Indicate Sv'First standard deviation in direction is spent at 0 ° of level, Indicate Sv'Second standard deviation in direction is spent at 0 ° of level,Symbol " | | " it is the symbol that takes absolute value;
1. _ 5c, to Sv'In all pixels point be scanned in right diagonal 45 ° of degree direction, obtain Sv'At right diagonal 45 ° The gray level co-occurrence matrixes for spending direction, are denoted as { p45°(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, p45°(j1,j2) indicate Sv'In Gray value is j1Pixel and gray value be j2The probability that occurs simultaneously in right diagonal 45 ° of degree direction of pixel;
1. _ 5d, basis { p45°(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'Comparison in right diagonal 45 ° of degree direction Feature, degree of correlation feature, angular second moment feature, homogeney feature are spent, correspondence is denoted as C45°、R45°、E45°And H45°, Wherein,Indicate Sv'In right diagonal 45 ° of degree direction The first mean value, Indicate Sv'The second mean value in right diagonal 45 ° of degree direction, Indicate Sv'The first standard deviation in right diagonal 45 ° of degree direction, Indicate Sv'The second standard deviation in right diagonal 45 ° of degree direction,
1. _ 5e, to Sv'In all pixels point be scanned in vertical 90 ° of degree direction, obtain Sv'In vertical 90 ° of degree sides To gray level co-occurrence matrixes, be denoted as { p90°(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, p90°(j1,j2) indicate Sv'Middle gray scale Value is j1Pixel and gray value be j2Pixel vertical 90 ° spend directions simultaneously occur probability;
1. _ 5f, basis { p90°(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'The contrast in direction is spent at vertical 90 ° Feature, degree of correlation feature, angular second moment feature, homogeney feature, correspondence are denoted as C90°、R90°、E90°And H90°, Wherein,Indicate Sv'The of direction is spent at vertical 90 ° One mean value, Indicate Sv'Second mean value in direction is spent at vertical 90 °, Indicate Sv'First standard deviation in direction is spent at vertical 90 °, Indicate Sv'Second standard deviation in direction is spent at vertical 90 °,
1. _ 5g, to Sv'In all pixels point be scanned in left diagonal 135 ° of degree direction, obtain Sv'Left diagonal The gray level co-occurrence matrixes in 135 ° of degree directions, are denoted as { p135°(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, p135°(j1,j2) table Show Sv'Middle gray value is j1Pixel and gray value be j2The probability that occurs simultaneously in left diagonal 135 ° of degree direction of pixel;
1. _ 5h, basis { p135°(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'Pair in left diagonal 135 ° of degree direction Than degree feature, degree of correlation feature, angular second moment feature, homogeney feature, correspondence is denoted as C135°、R135°、E135°And H135°, Wherein,Indicate Sv'In left diagonal 135 ° of degree sides To the first mean value, Indicate Sv'The second mean value in left diagonal 135 ° of degree direction, Indicate Sv'The first standard deviation in left diagonal 135 ° of degree direction, Indicate Sv'The second standard deviation in left diagonal 135 ° of degree direction,
1. _ 5i, calculating Sv'Contrast metric, degree of correlation feature, angular second moment feature, homogeney feature, correspondence be denoted as f6、f7、f8And f9,
1. _ 5j, by f6、f7、f8And f9Arranged in sequence constitutes Sv'Textural characteristics vector Wherein, [f6,f7,f8,f9]TFor [f6,f7,f8,f9] transposition.
The step 1. _ 6 inAcquisition process are as follows:
1. _ 6a, calculating Sv'Horizontal gradient image and vertical gradient image, correspondence be denoted as GXAnd GY, by GXMiddle coordinate position Pixel value for the pixel of (x, y) is denoted as GX(x, y),By GYMiddle coordinate position Pixel value for the pixel of (x, y) is denoted as GY(x, y),Wherein, 1≤x≤ W, 1≤y≤H, symbolFor convolution operation symbol, Sv'(x, y) indicates Sv'Middle coordinate position is the picture of the pixel of (x, y) Element value;
1. _ 6b, according to GXAnd GYCalculate Sv'Gradient magnitude image, be denoted as G, by coordinate position in G be (x, y) pixel The pixel value of point is denoted as G (x, y),
1. _ 6c, calculating Sv'Blood vessel gradient Variance feature and vascular bending degree feature, correspondence be denoted as f10And f11,Wherein, n table Show Sv'In for blood vessel pixel number, i is positive integer, 1≤i≤n-1, and symbol " | | " is the symbol that takes absolute value, θiIt indicates Sv'In for blood vessel ith pixel point tangential angle, θi+1Indicate Sv'In for blood vessel i+1 pixel the angle of contingence Degree;
1. _ 6d, by f10And f11Arranged in sequence constitutes Sv'Shape feature vector Wherein, [f10,f11]TFor [f10,f11] transposition.
Compared with the prior art, the advantages of the present invention are as follows:
The method of the present invention considers the influence of blur, over-exposure and under-exposure on fundus image segmentation accuracy. It extracts a statistical feature vector, a texture feature vector and a shape feature vector to form a feature vector, then trains on the feature vectors of the distorted fundus images with support vector regression to construct a prediction model. In the test stage, the feature vector of the distorted fundus image under test is computed and, with the prediction model constructed in the training stage, the segmentation accuracy value of the distorted fundus image under test is predicted. Since the extracted feature vector reflects well how the distortion of a distorted fundus image changes its segmentation accuracy, the correlation between the predicted segmentation accuracy value and the true segmentation accuracy value is effectively improved; the method can evaluate fundus image quality automatically and accurately, and effectively improves the correlation between objective evaluation results and subjective perception.
Detailed description of the invention
Fig. 1 is the overall implementation block diagram of the method of the present invention.
Specific embodiment
The present invention will be described in further detail below with reference to the drawings and embodiments.
The overall implementation block diagram of the fundus image quality evaluation method proposed by the present invention is shown in Fig. 1; the method comprises two processes, a training stage and a test stage;
The specific steps of the training stage process are as follows:
1._1. Select N original fundus images together with the ground-truth vessel segmentation image of each original fundus image, and denote the ground-truth vessel segmentation image of the u-th original fundus image as M_u. Then apply to each original fundus image L grades of blur distortion, L grades of over-exposure distortion and L grades of under-exposure distortion, so that each original fundus image yields 3L corresponding distorted fundus images (L with blur distortion, L with over-exposure distortion and L with under-exposure distortion); denote the v-th distorted fundus image corresponding to the u-th original fundus image as S_{u,v}. Here N > 1 (N = 20 in the present embodiment), u is a positive integer with 1 ≤ u ≤ N, L > 1 (L = 8 in the present embodiment), v is a positive integer with 1 ≤ v ≤ 3L, and M_u and S_{u,v} both have width W and height H.
1. _ 2, the blood vessel segmentation method using existing based on matched filtering, corresponding to every original eye fundus image Every width distortion eye fundus image carries out blood vessel segmentation, obtains the blood of the corresponding every width distortion eye fundus image of every original eye fundus image Pipe segmented image, by Su,vBlood vessel segmentation image be denoted as Qu,v;Then according to the real blood vessels of every original eye fundus image point Image is cut, the segmentation for calculating the blood vessel segmentation image of the corresponding every width distortion eye fundus image of every original eye fundus image is accurate Degree, by Qu,vSegmentation accuracy be denoted as ρu,v,Wherein, Qu,vWidth be W and height be H, TP table Show Qu,vIn be detected as blood vessel and corresponding pixel points in MuIn also for blood vessel pixel number, FP indicate Qu,vIn be detected as blood It manages but corresponding pixel points is in MuIn for non-vascular pixel number, FN indicate Qu,vIn be detected as non-vascular and respective pixel Point is in MuIn also for non-vascular pixel number, TP, FP and FN can by statistics obtain.
1. _ 3, all distortion eye fundus images and its segmentation accuracy composing training collection of blood vessel segmentation image are denoted as {Sv'v'|1≤v'≤N×3L};Wherein, v' is positive integer, and 1≤v'≤N × 3L, N × 3L are the total width for being distorted eye fundus image Number, Sv'Indicate { Sv'v'| 1≤v'≤N × 3L } in v' width be distorted eye fundus image, ρv'Indicate { Sv'v'|1≤v'≤N× 3L } in v' width distortion eye fundus image blood vessel segmentation image segmentation accuracy.
1. _ 4, calculating { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image statistical nature vector, by Sv' Statistical nature vector be denoted asWherein,Dimension be 5 × 1.
In the present embodiment, step 1. _ 4 inAcquisition process are as follows:
1. _ 4a, calculating Sv'Characteristics of mean, standard deviation characteristic, skewness feature, kurtosis and Information Entropy Features, it is right F should be denoted as1、f2、f3、f4、f5, Wherein, 1≤x≤W, 1≤y≤H, Sv'(x, y) indicates Sv'Middle coordinate position is the pixel value of the pixel of (x, y), and j is positive integer, 1≤j≤J, J table Show Sv'Included in gray level total number, Sv'(j) S is indicatedv'In j-th of gray level gray value, p [Sv'(j)] it indicates Sv'(j) in Sv'The probability of middle appearance, Indicate Sv'Middle gray value is equal to Sv'(j) The total number of pixel.
1. _ 4b, by f1、f2、f3、f4、f5Arranged in sequence constitutes Sv'Statistical nature vector Wherein, [f1,f2,f3,f4,f5]TFor [f1,f2,f3,f4,f5] transposition.
1. _ 5, calculating { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image textural characteristics vector, by Sv' Textural characteristics vector be denoted asWherein,Dimension be 4 × 1.
In the present embodiment, step 1. _ 5 inAcquisition process are as follows:
1. _ 5a, to Sv'In all pixels point be scanned in 0 ° of degree direction of level, obtain Sv'In 0 ° of degree direction of level Gray level co-occurrence matrixes, be denoted as { p(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, j1And j2It is positive integer, 1≤j1≤ J, 1≤j2≤ J, j1≠j2, J expression Sv'Included in gray level total number, p(j1,j2) indicate Sv'Middle gray value is j1 Pixel and gray value be j2Pixel 0 ° of level spend direction simultaneously occur probability.
1. _ 5b, basis { p(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'The contrast that direction is spent at 0 ° of level is special Sign, degree of correlation feature, angular second moment feature, homogeney feature, correspondence are denoted as C、R、EAnd H, Wherein,Indicate Sv'The first of direction is spent at 0 ° of level Mean value, Indicate Sv'Second mean value in direction is spent at 0 ° of level, Indicate Sv'First standard deviation in direction is spent at 0 ° of level, Indicate Sv'Second standard deviation in direction is spent at 0 ° of level,Symbol " | | " it is the symbol that takes absolute value.
1. _ 5c, to Sv'In all pixels point be scanned in right diagonal 45 ° of degree direction, obtain Sv'At right diagonal 45 ° The gray level co-occurrence matrixes for spending direction, are denoted as { p45°(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, p45°(j1,j2) indicate Sv'In Gray value is j1Pixel and gray value be j2The probability that occurs simultaneously in right diagonal 45 ° of degree direction of pixel.
1. _ 5d, basis { p45°(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'Comparison in right diagonal 45 ° of degree direction Feature, degree of correlation feature, angular second moment feature, homogeney feature are spent, correspondence is denoted as C45°、R45°、E45°And H45°, Wherein,Indicate Sv'In right diagonal 45 ° of degree direction The first mean value, Indicate Sv'The second mean value in right diagonal 45 ° of degree direction, Indicate Sv'The first standard deviation in right diagonal 45 ° of degree direction, Indicate Sv'The second standard deviation in right diagonal 45 ° of degree direction,
1. _ 5e, to Sv'In all pixels point be scanned in vertical 90 ° of degree direction, obtain Sv'In vertical 90 ° of degree sides To gray level co-occurrence matrixes, be denoted as { p90°(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, p90°(j1,j2) indicate Sv'Middle gray scale Value is j1Pixel and gray value be j2Pixel vertical 90 ° spend directions simultaneously occur probability.
1. _ 5f, basis { p90°(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'The contrast in direction is spent at vertical 90 ° Feature, degree of correlation feature, angular second moment feature, homogeney feature, correspondence are denoted as C90°、R90°、E90°And H90°, Wherein,Indicate Sv'The of direction is spent at vertical 90 ° One mean value, Indicate Sv'Second mean value in direction is spent at vertical 90 °, Indicate Sv'First standard deviation in direction is spent at vertical 90 °, Indicate Sv'Second standard deviation in direction is spent at vertical 90 °,
1. _ 5g, to Sv'In all pixels point be scanned in left diagonal 135 ° of degree direction, obtain Sv'Left diagonal The gray level co-occurrence matrixes in 135 ° of degree directions, are denoted as { p135°(j1,j2)|1≤j1≤J,1≤j2≤J};Wherein, p135°(j1,j2) table Show Sv'Middle gray value is j1Pixel and gray value be j2The probability that occurs simultaneously in left diagonal 135 ° of degree direction of pixel.
1. _ 5h, basis { p135°(j1,j2)|1≤j1≤J,1≤j2≤ J }, calculate Sv'Pair in left diagonal 135 ° of degree direction Than degree feature, degree of correlation feature, angular second moment feature, homogeney feature, correspondence is denoted as C135°、R135°、E135°And H135°, Wherein,Indicate Sv'In left diagonal 135 ° of degree sides To the first mean value, Indicate Sv'The second mean value in left diagonal 135 ° of degree direction, Indicate Sv'The first standard deviation in left diagonal 135 ° of degree direction, Indicate Sv'The second standard deviation in left diagonal 135 ° of degree direction,
1. _ 5i, calculating Sv'Contrast metric, degree of correlation feature, angular second moment feature, homogeney feature, correspondence be denoted as f6、f7、f8And f9,
1. _ 5j, by f6、f7、f8And f9Arranged in sequence constitutes Sv'Textural characteristics vector Wherein, [f6,f7,f8,f9]TFor [f6,f7,f8,f9] transposition.
1. _ 6, calculating { Sv'v'| 1≤v'≤N × 3L } in every width distortion eye fundus image shape feature vector, by Sv' Shape feature vector be denoted asWherein,Dimension be 2 × 1.
In the present embodiment, step 1. _ 6 inAcquisition process are as follows:
1. _ 6a, calculating Sv'Horizontal gradient image and vertical gradient image, correspondence be denoted as GXAnd GY, by GXMiddle coordinate position Pixel value for the pixel of (x, y) is denoted as GX(x, y),By GYMiddle coordinate position Pixel value for the pixel of (x, y) is denoted as GY(x, y),Wherein, 1≤x≤W, 1≤y≤H, symbolFor convolution operation symbol, Sv'(x, y) indicates Sv'Middle coordinate position is the pixel of the pixel of (x, y) Value.
1. _ 6b, according to GXAnd GYCalculate Sv'Gradient magnitude image, be denoted as G, by coordinate position in G be (x, y) pixel The pixel value of point is denoted as G (x, y),
1. _ 6c, calculating Sv'Blood vessel gradient Variance feature and vascular bending degree feature, correspondence be denoted as f10And f11,Wherein, n table Show Sv'In for blood vessel pixel number, i is positive integer, 1≤i≤n-1, and symbol " | | " is the symbol that takes absolute value, θiIt indicates Sv'In for blood vessel ith pixel point tangential angle, θi+1Indicate Sv'In for blood vessel i+1 pixel the angle of contingence Degree.
1. _ 6d, by f10And f11Arranged in sequence constitutes Sv'Shape feature vector Wherein, [f10,f11]TFor [f10,f11] transposition.
①_7. Arrange the statistical feature vector, texture feature vector and shape feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L} in sequence to form the feature vector of that image; the feature vector of S_v' is denoted F_v', F_v' = [(F_v'^st)^T, (F_v'^te)^T, (F_v'^sh)^T]^T, whose dimension is 11×1, where (F_v'^st)^T, (F_v'^te)^T and (F_v'^sh)^T are the transposes of F_v'^st, F_v'^te and F_v'^sh.
①_8. Use the segmentation accuracies of the respective blood vessel segmentation images and the feature vectors of all distorted fundus images in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L} to form a training sample data set; the training sample data set contains N×3L segmentation accuracies and N×3L feature vectors. Then take support vector regression as the machine-learning method and train on all feature vectors in the training sample data set, so that the error between the regression function values obtained by training and the segmentation accuracies is minimized; the fitting yields the optimal weight vector w_opt and the optimal bias term b_opt. Using w_opt and b_opt, construct the prediction model, denoted f(F), f(F) = (w_opt)^T F + b_opt, where f(·) is the function form, F denotes the feature vector of a distorted fundus image and serves as the input vector of the prediction model, (w_opt)^T is the transpose of w_opt, and (w_opt)^T F is a linear function of F.
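The patent fits w_opt and b_opt with support vector regression. As a rough sketch of the same linear prediction model f(F) = (w_opt)^T F + b_opt, ordinary least squares is used below in place of SVR; this is a hypothetical stand-in for brevity, not the patent's training procedure.

```python
import numpy as np

def fit_linear_model(features, accuracies):
    """Stand-in for the patent's support-vector-regression fit:
    ordinary least squares on [F; 1] gives a weight vector w and
    bias b playing the roles of w_opt and b_opt."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, accuracies, rcond=None)
    return coef[:-1], coef[-1]          # (w, b)

def predict(w, b, F):
    """Prediction model f(F) = w^T F + b."""
    return float(w @ F + b)
```

In the patent the rows of `features` would be the 11-dimensional vectors F_v' and `accuracies` the values ρ_v'; an SVR library (e.g. sklearn.svm.SVR with a linear kernel) would replace the least-squares call.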
The specific steps of the testing stage are as follows:
②. Take any distorted fundus image as the test image, denoted S_test. Then obtain the feature vector of S_test by the same operations as steps ①_4 to ①_7, denoted F_test. Next, test F_test with the prediction model f(F) constructed in the training stage; the prediction yields the predicted value corresponding to F_test, which is taken as the segmentation accuracy value of S_test, denoted ρ_test, ρ_test = f(F_test) = (w_opt)^T F_test + b_opt. Here the width of S_test is W' and its height is H' (W' may or may not equal W, and H' may or may not equal H), the dimension of F_test is 11×1, and (w_opt)^T F_test is a linear function of F_test.
To further illustrate the feasibility and validity of the method of the present invention, the method was tested experimentally.
In this embodiment, the method of the present invention is tested on the fundus image database established by Ningbo University. The database contains 20 original fundus images; applying 8 different grades of blur distortion, 8 different grades of over-exposure distortion and 8 different grades of under-exposure distortion to each original fundus image yields 480 distorted fundus images. Each distorted fundus image is assigned a segmentation accuracy value in the range [0, 1], where 1 indicates excellent segmentation quality and 0 indicates poor segmentation quality.
In this embodiment, 4 objective parameters commonly used for assessing image quality evaluation methods are taken as evaluation indices, namely, under nonlinear regression conditions, the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC) and the root mean squared error (RMSE). PLCC and RMSE reflect the accuracy of the objective quality prediction values, while SROCC and KROCC reflect their monotonicity. Table 1 gives the correlation between the segmentation accuracy values predicted by the method of the present invention and the true segmentation accuracy values. As can be seen from Table 1, even when different proportions of the original fundus images and the segmentation accuracies of their distorted fundus images' blood vessel segmentation images are used to form the training set, the correlation between the predicted and true segmentation accuracy values remains very high, which is sufficient to illustrate the validity of the method of the present invention.
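The evaluation indices above can be sketched with plain numpy; PLCC, SROCC (computed here as the Pearson coefficient of the ranks, without the tie correction a full implementation would apply) and RMSE are shown, and KROCC would follow analogously from pairwise concordance counts.

```python
import numpy as np

def plcc(a, b):
    """Pearson linear correlation coefficient."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def srocc(a, b):
    """Spearman rank-order correlation: PLCC of the ranks
    (no tie correction in this sketch)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return plcc(rank(a), rank(b))

def rmse(a, b):
    """Root mean squared error."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sqrt((d * d).mean()))
```

In the experiment these would be applied to the vector of predicted segmentation accuracy values against the vector of true segmentation accuracy values.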
Table 1. Correlation between the segmentation accuracy values predicted by the method of the present invention and the true segmentation accuracy values

Index   80% of fundus images   60% of fundus images   40% of fundus images   20% of fundus images
PLCC    0.9633                 0.9562                 0.9424                 0.9104
SROCC   0.9554                 0.9506                 0.9417                 0.9210
KROCC   0.8342                 0.8129                 0.7921                 0.7512
RMSE    5.5214                 6.4437                 7.3802                 9.1348

Claims (4)

1. A fundus image quality evaluation method, characterized by comprising two processes, a training stage and a testing stage;
The specific steps of the training stage are as follows:
①_1. Select N original fundus images and the true blood vessel segmentation image of every original fundus image; denote the true blood vessel segmentation image of the u-th original fundus image as M_u. Then apply to every original fundus image L different grades of blur distortion, L different grades of over-exposure distortion and L different grades of under-exposure distortion respectively, obtaining the 3L distorted fundus images corresponding to each original fundus image, comprising L blur-distorted fundus images, L over-exposure-distorted fundus images and L under-exposure-distorted fundus images; denote the v-th distorted fundus image corresponding to the u-th original fundus image as S_u,v. Here N > 1, u is a positive integer, 1 ≤ u ≤ N, L > 1, v is a positive integer, 1 ≤ v ≤ 3L, and the width of M_u and S_u,v is W and their height is H;
①_2. Using a blood vessel segmentation method based on matched filtering, perform blood vessel segmentation on every distorted fundus image corresponding to every original fundus image, obtaining the blood vessel segmentation image of every distorted fundus image; denote the blood vessel segmentation image of S_u,v as Q_u,v. Then, according to the true blood vessel segmentation image of every original fundus image, calculate the segmentation accuracy of the blood vessel segmentation image of every distorted fundus image; the segmentation accuracy of Q_u,v is denoted ρ_u,v, ρ_u,v = TP/(TP+FP+FN). Here the width of Q_u,v is W and its height is H; TP denotes the number of pixels detected as vessel in Q_u,v whose corresponding pixels in M_u are also vessel, FP denotes the number of pixels detected as vessel in Q_u,v whose corresponding pixels in M_u are non-vessel, and FN denotes the number of pixels detected as non-vessel in Q_u,v whose corresponding pixels in M_u are vessel;
①_3. Form the training set from all distorted fundus images and the segmentation accuracies of their blood vessel segmentation images, denoted {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}. Here v' is a positive integer, 1 ≤ v' ≤ N×3L, N×3L is the total number of distorted fundus images, S_v' denotes the v'-th distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}, and ρ_v' denotes the segmentation accuracy of the blood vessel segmentation image of the v'-th distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L};
①_4. Calculate the statistical feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the statistical feature vector of S_v' as F_v'^st, whose dimension is 5×1;
①_5. Calculate the texture feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the texture feature vector of S_v' as F_v'^te, whose dimension is 4×1;
①_6. Calculate the shape feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L}; denote the shape feature vector of S_v' as F_v'^sh, whose dimension is 2×1;
①_7. Arrange the statistical feature vector, texture feature vector and shape feature vector of every distorted fundus image in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L} in sequence to form the feature vector of that image; the feature vector of S_v' is denoted F_v', F_v' = [(F_v'^st)^T, (F_v'^te)^T, (F_v'^sh)^T]^T, whose dimension is 11×1, where (F_v'^st)^T, (F_v'^te)^T and (F_v'^sh)^T are the transposes of F_v'^st, F_v'^te and F_v'^sh;
①_8. Use the segmentation accuracies of the respective blood vessel segmentation images and the feature vectors of all distorted fundus images in {S_v', ρ_v' | 1 ≤ v' ≤ N×3L} to form a training sample data set; the training sample data set contains N×3L segmentation accuracies and N×3L feature vectors. Then take support vector regression as the machine-learning method and train on all feature vectors in the training sample data set, so that the error between the regression function values obtained by training and the segmentation accuracies is minimized; the fitting yields the optimal weight vector w_opt and the optimal bias term b_opt. Using w_opt and b_opt, construct the prediction model, denoted f(F), f(F) = (w_opt)^T F + b_opt, where f(·) is the function form, F denotes the feature vector of a distorted fundus image and serves as the input vector of the prediction model, (w_opt)^T is the transpose of w_opt, and (w_opt)^T F is a linear function of F;
The specific steps of the testing stage are as follows:
②. Take any distorted fundus image as the test image, denoted S_test; then obtain the feature vector of S_test by the same operations as steps ①_4 to ①_7, denoted F_test; next, test F_test with the prediction model f(F) constructed in the training stage, the prediction yielding the predicted value corresponding to F_test, which is taken as the segmentation accuracy value of S_test, denoted ρ_test, ρ_test = f(F_test) = (w_opt)^T F_test + b_opt; here the width of S_test is W' and its height is H', the dimension of F_test is 11×1, and (w_opt)^T F_test is a linear function of F_test.
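For illustration of the segmentation accuracy in step ①_2: with TP, FP and FN defined as in the claim, the accuracy is reconstructed here as the Jaccard-style ratio TP/(TP+FP+FN); this ratio is an assumption, since the claim's formula image is not reproduced in this text.

```python
import numpy as np

def segmentation_accuracy(Q, M):
    """Accuracy of a binary vessel segmentation Q against ground
    truth M, as TP/(TP+FP+FN); the exact formula in the patent's
    image is assumed to be this Jaccard-style ratio."""
    Q, M = np.asarray(Q, bool), np.asarray(M, bool)
    tp = np.count_nonzero(Q & M)   # vessel in both Q and M
    fp = np.count_nonzero(Q & ~M)  # vessel in Q only
    fn = np.count_nonzero(~Q & M)  # vessel in M only
    return tp / (tp + fp + fn)
```

A perfect segmentation gives 1 and a segmentation with no overlapping vessel pixels gives 0, matching the [0, 1] accuracy range used in the embodiment.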
2. The fundus image quality evaluation method according to claim 1, characterized in that the acquisition process of F_v'^st in step ①_4 is as follows:
①_4a. Calculate the mean feature, standard deviation feature, skewness feature, kurtosis feature and information entropy feature of S_v', denoted correspondingly as f1, f2, f3, f4 and f5:
f1 = (1/(W×H)) Σ_x Σ_y S_v'(x, y),
f2 = sqrt((1/(W×H)) Σ_x Σ_y (S_v'(x, y) − f1)^2),
f3 = (1/(W×H)) Σ_x Σ_y ((S_v'(x, y) − f1)/f2)^3,
f4 = (1/(W×H)) Σ_x Σ_y ((S_v'(x, y) − f1)/f2)^4,
f5 = −Σ_{j=1}^{J} p[S_v'(j)] log2 p[S_v'(j)].
Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, S_v'(x, y) denotes the pixel value of the pixel at coordinate (x, y) in S_v', j is a positive integer, 1 ≤ j ≤ J, J denotes the total number of gray levels contained in S_v', S_v'(j) denotes the gray value of the j-th gray level in S_v', and p[S_v'(j)] denotes the probability that S_v'(j) appears in S_v', p[S_v'(j)] = n_j/(W×H), where n_j denotes the total number of pixels in S_v' whose gray value equals S_v'(j);
①_4b. Arrange f1, f2, f3, f4 and f5 in sequence to form the statistical feature vector of S_v', F_v'^st = [f1, f2, f3, f4, f5]^T, where [f1, f2, f3, f4, f5]^T is the transpose of [f1, f2, f3, f4, f5].
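The five statistical features of step ①_4a can be sketched as follows; the moment formulas are the standard definitions of mean, standard deviation, skewness, kurtosis and entropy, assumed to match the patent's formula images.

```python
import numpy as np

def statistical_features(img):
    """f1..f5 of step (1)_4a: mean, standard deviation, skewness,
    kurtosis and information entropy over the gray levels."""
    x = np.asarray(img, float).ravel()
    f1 = x.mean()                            # mean
    f2 = x.std()                             # standard deviation
    z = (x - f1) / f2                        # standardized values
    f3 = (z ** 3).mean()                     # skewness
    f4 = (z ** 4).mean()                     # kurtosis
    _, counts = np.unique(x, return_counts=True)
    p = counts / x.size                      # gray-level probabilities
    f5 = -(p * np.log2(p)).sum()             # information entropy
    return np.array([f1, f2, f3, f4, f5])
```

For a two-level image with equal counts, the entropy is exactly 1 bit and the skewness is 0, a quick check of the formulas.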
3. The fundus image quality evaluation method according to claim 1 or 2, characterized in that the acquisition process of F_v'^te in step ①_5 is as follows:
①_5a. Scan all pixels in S_v' along the horizontal 0° direction to obtain the gray-level co-occurrence matrix of S_v' in the horizontal 0° direction, denoted {p_0°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}. Here j1 and j2 are positive integers, 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J, J denotes the total number of gray levels contained in S_v', and p_0°(j1, j2) denotes the probability that a pixel with gray value j1 and a pixel with gray value j2 co-occur in the horizontal 0° direction in S_v';
①_5b. From {p_0°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast feature, correlation feature, angular second moment feature and homogeneity feature of S_v' in the horizontal 0° direction, denoted correspondingly as C_0°, R_0°, E_0° and H_0°:
C_0° = Σ_{j1} Σ_{j2} (j1 − j2)^2 p_0°(j1, j2),
R_0° = Σ_{j1} Σ_{j2} (j1 − μ1_0°)(j2 − μ2_0°) p_0°(j1, j2)/(σ1_0° σ2_0°),
E_0° = Σ_{j1} Σ_{j2} (p_0°(j1, j2))^2,
H_0° = Σ_{j1} Σ_{j2} p_0°(j1, j2)/(1 + |j1 − j2|).
Here μ1_0° denotes the first mean of S_v' in the horizontal 0° direction, μ1_0° = Σ_{j1} j1 Σ_{j2} p_0°(j1, j2); μ2_0° denotes the second mean of S_v' in the horizontal 0° direction, μ2_0° = Σ_{j2} j2 Σ_{j1} p_0°(j1, j2); σ1_0° denotes the first standard deviation of S_v' in the horizontal 0° direction, σ1_0° = sqrt(Σ_{j1} (j1 − μ1_0°)^2 Σ_{j2} p_0°(j1, j2)); σ2_0° denotes the second standard deviation of S_v' in the horizontal 0° direction, σ2_0° = sqrt(Σ_{j2} (j2 − μ2_0°)^2 Σ_{j1} p_0°(j1, j2)); the symbol "| |" denotes absolute value;
①_5c. Scan all pixels in S_v' along the right-diagonal 45° direction to obtain the gray-level co-occurrence matrix of S_v' in that direction, denoted {p_45°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_45°(j1, j2) denotes the probability that a pixel with gray value j1 and a pixel with gray value j2 co-occur in the right-diagonal 45° direction in S_v';
①_5d. From {p_45°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast feature, correlation feature, angular second moment feature and homogeneity feature of S_v' in the right-diagonal 45° direction, denoted correspondingly as C_45°, R_45°, E_45° and H_45°; these are defined as in step ①_5b, with p_0° replaced by p_45° and with the first mean μ1_45°, second mean μ2_45°, first standard deviation σ1_45° and second standard deviation σ2_45° of S_v' in the right-diagonal 45° direction defined correspondingly;
①_5e. Scan all pixels in S_v' along the vertical 90° direction to obtain the gray-level co-occurrence matrix of S_v' in that direction, denoted {p_90°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_90°(j1, j2) denotes the probability that a pixel with gray value j1 and a pixel with gray value j2 co-occur in the vertical 90° direction in S_v';
①_5f. From {p_90°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast feature, correlation feature, angular second moment feature and homogeneity feature of S_v' in the vertical 90° direction, denoted correspondingly as C_90°, R_90°, E_90° and H_90°, defined as in step ①_5b with p_0° replaced by p_90° and with the corresponding means μ1_90°, μ2_90° and standard deviations σ1_90°, σ2_90°;
①_5g. Scan all pixels in S_v' along the left-diagonal 135° direction to obtain the gray-level co-occurrence matrix of S_v' in that direction, denoted {p_135°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, where p_135°(j1, j2) denotes the probability that a pixel with gray value j1 and a pixel with gray value j2 co-occur in the left-diagonal 135° direction in S_v';
①_5h. From {p_135°(j1, j2) | 1 ≤ j1 ≤ J, 1 ≤ j2 ≤ J}, calculate the contrast feature, correlation feature, angular second moment feature and homogeneity feature of S_v' in the left-diagonal 135° direction, denoted correspondingly as C_135°, R_135°, E_135° and H_135°, defined as in step ①_5b with p_0° replaced by p_135° and with the corresponding means μ1_135°, μ2_135° and standard deviations σ1_135°, σ2_135°;
①_5i. Calculate the contrast feature, correlation feature, angular second moment feature and homogeneity feature of S_v', denoted correspondingly as f6, f7, f8 and f9, by averaging over the four directions: f6 = (C_0° + C_45° + C_90° + C_135°)/4, f7 = (R_0° + R_45° + R_90° + R_135°)/4, f8 = (E_0° + E_45° + E_90° + E_135°)/4, f9 = (H_0° + H_45° + H_90° + H_135°)/4;
①_5j. Arrange f6, f7, f8 and f9 in sequence to form the texture feature vector of S_v', F_v'^te = [f6, f7, f8, f9]^T, where [f6, f7, f8, f9]^T is the transpose of [f6, f7, f8, f9].
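The four gray-level co-occurrence features of steps ①_5a to ①_5h can be sketched for one scan direction as follows; the offsets (1, 0), (1, −1), (0, −1) and (−1, −1) roughly correspond to the 0°, 45°, 90° and 135° scans, and the normalisation details are illustrative assumptions rather than the patent's exact formulation (scikit-image's `graycomatrix` offers a full implementation).

```python
import numpy as np

def glcm_features(img, dx, dy, levels):
    """Contrast (C), correlation (R), angular second moment (E)
    and homogeneity (H) for one co-occurrence offset (dx, dy)."""
    img = np.asarray(img)
    h, w = img.shape
    P = np.zeros((levels, levels), float)
    for y in range(h):                      # accumulate co-occurrence counts
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[img[y, x], img[y2, x2]] += 1
    P /= P.sum()                            # normalise to probabilities
    j1, j2 = np.indices(P.shape)
    contrast = ((j1 - j2) ** 2 * P).sum()            # C
    asm = (P ** 2).sum()                             # angular second moment E
    homog = (P / (1 + np.abs(j1 - j2))).sum()        # homogeneity H
    m1, m2 = (j1 * P).sum(), (j2 * P).sum()          # first/second means
    s1 = np.sqrt(((j1 - m1) ** 2 * P).sum())         # first std
    s2 = np.sqrt(((j2 - m2) ** 2 * P).sum())         # second std
    corr = ((j1 - m1) * (j2 - m2) * P).sum() / (s1 * s2)  # R
    return contrast, corr, asm, homog
```

Averaging each feature over the four offsets then gives f6 through f9 as in step ①_5i.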
4. The fundus image quality evaluation method according to claim 3, characterized in that the acquisition process of F_v'^sh in step ①_6 is as follows:
①_6a. Calculate the horizontal gradient image and the vertical gradient image of S_v', denoted correspondingly as G_X and G_Y. The pixel value of the pixel at coordinate (x, y) in G_X is denoted G_X(x, y) and is obtained by convolving S_v' with a horizontal gradient operator; the pixel value of the pixel at coordinate (x, y) in G_Y is denoted G_Y(x, y) and is obtained by convolving S_v' with a vertical gradient operator. Here 1 ≤ x ≤ W, 1 ≤ y ≤ H, the symbol ⊛ denotes the convolution operation, and S_v'(x, y) denotes the pixel value of the pixel at coordinate (x, y) in S_v';
①_6b. From G_X and G_Y, calculate the gradient magnitude image of S_v', denoted G; the pixel value of the pixel at coordinate (x, y) in G is denoted G(x, y), G(x, y) = sqrt((G_X(x, y))^2 + (G_Y(x, y))^2);
①_6c. Calculate the blood-vessel gradient variance feature and the blood-vessel tortuosity feature of S_v', denoted correspondingly as f10 and f11: f10 is the variance of the gradient magnitude G over the vessel pixels, and f11 = Σ_{i=1}^{n−1} |θ_{i+1} − θ_i|. Here n denotes the number of vessel pixels in S_v', i is a positive integer, 1 ≤ i ≤ n−1, the symbol "| |" denotes absolute value, θ_i denotes the tangent angle at the i-th vessel pixel in S_v', and θ_{i+1} denotes the tangent angle at the (i+1)-th vessel pixel in S_v';
①_6d. Arrange f10 and f11 in sequence to form the shape feature vector of S_v', F_v'^sh = [f10, f11]^T, where [f10, f11]^T is the transpose of [f10, f11].
CN201811059964.8A 2018-09-12 2018-09-12 Fundus image quality evaluation method Active CN109377472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811059964.8A CN109377472B (en) 2018-09-12 2018-09-12 Fundus image quality evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811059964.8A CN109377472B (en) 2018-09-12 2018-09-12 Fundus image quality evaluation method

Publications (2)

Publication Number Publication Date
CN109377472A true CN109377472A (en) 2019-02-22
CN109377472B CN109377472B (en) 2021-08-03

Family

ID=65405561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811059964.8A Active CN109377472B (en) 2018-09-12 2018-09-12 Fundus image quality evaluation method

Country Status (1)

Country Link
CN (1) CN109377472B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN113222996A (en) * 2021-03-03 2021-08-06 中南民族大学 Heart segmentation quality evaluation method, device, equipment and storage medium
CN113362354A (en) * 2021-05-07 2021-09-07 安徽国际商务职业学院 Method, system, terminal and storage medium for evaluating quality of tone mapping image
CN114882014A (en) * 2022-06-16 2022-08-09 深圳大学 Dual-model-based fundus image quality evaluation method and device and related medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102202227A (en) * 2011-06-21 2011-09-28 珠海世纪鼎利通信科技股份有限公司 No-reference objective video quality assessment method
CN105338343A (en) * 2015-10-20 2016-02-17 北京理工大学 No-reference stereo image quality evaluation method based on binocular perception
CN105894507A (en) * 2016-03-31 2016-08-24 西安电子科技大学 Image quality evaluation method based on image information content natural scenario statistical characteristics
US9552374B2 (en) * 2013-08-19 2017-01-24 Kodak Alaris, Inc. Imaging workflow using facial and non-facial features
US20170140518A1 (en) * 2015-11-12 2017-05-18 University Of Virginia Patent Foundation D/B/A/ University Of Virginia Licensing & Ventures Group System and method for comparison-based image quality assessment
CN106709916A (en) * 2017-01-19 2017-05-24 泰康保险集团股份有限公司 Image quality assessment method and device
US20180005412A1 (en) * 2016-06-29 2018-01-04 Siemens Medical Solutions Usa, Inc. Reconstruction quality assessment with local non-uniformity in nuclear imaging
CN107767363A (en) * 2017-09-05 2018-03-06 天津大学 It is a kind of based on natural scene without refer to high-dynamics image quality evaluation algorithm
CN107862678A (en) * 2017-10-19 2018-03-30 宁波大学 A kind of eye fundus image reference-free quality evaluation method
CN108364017A (en) * 2018-01-24 2018-08-03 华讯方舟科技有限公司 A kind of picture quality sorting technique, system and terminal device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANISH MITTAL等: "No-Reference Image Quality Assessment in the Spatial Domain", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
JINGTAO XU等: "Blind Image Quality Assessment Based on High Order Statistics Aggregation", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
FU, Zhenqi et al.: "No-reference stereoscopic image quality assessment based on deep feature learning", Journal of Optoelectronics·Laser *
WANG, Zhiming: "A survey of no-reference image quality assessment", Acta Automatica Sinica *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111489328B (en) * 2020-03-06 2023-06-30 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN113222996A (en) * 2021-03-03 2021-08-06 中南民族大学 Heart segmentation quality evaluation method, device, equipment and storage medium
CN113362354A (en) * 2021-05-07 2021-09-07 安徽国际商务职业学院 Method, system, terminal and storage medium for evaluating quality of tone mapping image
CN113362354B (en) * 2021-05-07 2024-04-30 安徽国际商务职业学院 Quality evaluation method, system, terminal and storage medium for tone mapping image
CN114882014A (en) * 2022-06-16 2022-08-09 深圳大学 Dual-model-based fundus image quality evaluation method and device and related medium
CN114882014B (en) * 2022-06-16 2023-02-03 深圳大学 Dual-model-based fundus image quality evaluation method and device and related medium

Also Published As

Publication number Publication date
CN109377472B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN109345469B (en) Speckle denoising method in OCT imaging based on condition generation countermeasure network
CN109377472A (en) A kind of eye fundus image quality evaluating method
CN110400289B (en) Fundus image recognition method, fundus image recognition device, fundus image recognition apparatus, and fundus image recognition storage medium
CN109658422A (en) A kind of retinal images blood vessel segmentation method based on multiple dimensioned deep supervision network
CN110517235B (en) OCT image choroid automatic segmentation method based on GCS-Net
CN112132817B (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
Wang et al. Human visual system-based fundus image quality assessment of portable fundus camera photographs
CN109345538A (en) A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
Lin et al. Retinal image quality assessment for diabetic retinopathy screening: A survey
CN108537282A (en) A kind of diabetic retinopathy stage division using extra lightweight SqueezeNet networks
CN105069304A (en) Machine learning-based method for evaluating and predicting ASD
CN107105223B (en) A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN109308692A (en) Based on the OCT image quality evaluating method for improving Resnet and SVR mixed model
CN112837805A (en) Deep learning-based eyelid topological morphology feature extraction method
TWI719587B (en) Pre-processing method and storage device for quantitative analysis of fundus image
CN109949277A (en) A kind of OCT image quality evaluating method based on sequence study and simplified residual error network
CN113576508A (en) Cerebral hemorrhage auxiliary diagnosis system based on neural network
CN112446860B (en) Automatic screening method for diabetic macular edema based on transfer learning
CN114694236A (en) Eyeball motion segmentation positioning method based on cyclic residual convolution neural network
CN104318565B (en) Interactive method for retinal vessel segmentation based on bidirectional region growing of constant-gradient distance
CN102567734A (en) Specific value based retina thin blood vessel segmentation method
CN110598652B (en) Fundus data prediction method and device
CN114093018B (en) Vision screening equipment and system based on pupil positioning
CN112365535A (en) Retinal lesion evaluation model establishing method and system
CN114372985A (en) Diabetic retinopathy focus segmentation method and system adapting to multi-center image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231109

Address after: 608, 5th floor, No. 17 Madian East Road, Haidian District, Beijing, 100000

Patentee after: Beijing Yunshi Information Technology Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20231109

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University

TR01 Transfer of patent right