CN109978848A - Method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model - Google Patents

Method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model Download PDF

Info

Publication number
CN109978848A
CN109978848A (application CN201910211745.5A)
Authority
CN
China
Prior art keywords
image
eye fundus
pixel
channel
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910211745.5A
Other languages
Chinese (zh)
Other versions
CN109978848B (en)
Inventor
孔轩
彭真明
王慧
范文澜
赵学功
曹兆洋
张文超
袁国慧
王卓然
蒲恬
何艳敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910211745.5A priority Critical patent/CN109978848B/en
Publication of CN109978848A publication Critical patent/CN109978848A/en
Application granted granted Critical
Publication of CN109978848B publication Critical patent/CN109978848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20072Graph-based image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model, belonging to the technical field of image processing. It solves the problem in the prior art that color correction under a single-light-source assumption leads to low hard-exudate detection accuracy. The method inputs an original fundus image and preprocesses it; applies a multi-illuminant color constancy algorithm to color-correct the preprocessed original fundus image; locates the optic disc using the vessel information in the color-corrected fundus image and segments the optic disc region by fast mean-shift segmentation to obtain an optic disc image; performs thresholding and morphological reconstruction based on the optic disc image and the preprocessed image to extract the hard-exudate candidate regions, obtaining a candidate-region image; extracts the color histogram, color constancy, and texture features of the candidate-region image; and performs detection with the extracted features to obtain the detection result. The invention is used for feature extraction and hard-exudate detection in fundus images.

Description

Method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model
Technical field
The invention relates to a method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model, used for feature extraction and hard-exudate detection in fundus images, and belongs to the technical field of image processing.
Background technique
In the prior art, hard-exudate detection in fundus images extracts shape features of the candidate regions such as area, perimeter, circularity, diameter, and average gradient, or extracts deep features with a deep neural network, and then performs the subsequent hard-exudate detection; however, with the features so extracted, the detection accuracy is not high.
The prior art also applies color correction before hard-exudate detection, but the color correction methods used assume a single light source, i.e. that the spatial distribution of the illuminant over the scene is roughly uniform. In real environments this assumption is difficult to satisfy, owing to non-uniform multi-illuminant lighting, the reflection characteristics of object surfaces, and inter-reflections. Since color correction directly affects the detection accuracy of hard exudates, the commonly used color correction methods lead to low hard-exudate detection accuracy and low detection efficiency.
Summary of the invention
Aiming at the above problems, the purpose of the present invention is to provide a method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model, solving two problems of the prior art: the features extracted by the existing feature-extraction schemes lead to a low subsequent hard-exudate detection rate, and color correction under a single-light-source assumption leads to low hard-exudate detection accuracy.
In order to achieve the above purpose, the present invention adopts the following technical solution:
A method for extracting fundus-image features based on a multi-illuminant color constancy model comprises the following steps:
Step 1: input an original fundus image and preprocess it, where the original fundus image is the fundus image to be analyzed;
Step 2: apply a multi-illuminant color constancy algorithm to color-correct the preprocessed original fundus image;
Step 3: locate the optic disc using the vessel information in the color-corrected fundus image, then segment the optic disc region from the color-corrected fundus image by fast mean-shift segmentation to obtain an optic disc image;
Step 4: perform thresholding and morphological reconstruction based on the optic disc image and the preprocessed image to extract the hard-exudate candidate regions, obtaining a candidate-region image;
Step 5: extract the color histogram, color constancy, and texture features of the candidate-region image.
Further, the specific steps of Step 1 are as follows:
Step 1.1: input the original fundus image and select its R channel component map, which best reflects the illumination, to extract the ROI and obtain the ROI image. The extraction formula is:
I_mask = α_d(T_t(I_R))
t = 0.05 × t_max
where α_d denotes morphological erosion with a circular structuring element d and T_t denotes thresholding at threshold t. Since the maximum brightness outside the ROI is about 5% of the maximum brightness t_max inside the ROI, the threshold is chosen as 0.05 × t_max; thresholding followed by erosion with d yields the mask. I_R is the R channel component map of the original fundus image and I_mask is the ROI image;
Step 1.2: denoise the ROI image with a 3×3 adaptive median filter, i.e. a median filter that dynamically adjusts its window size according to preset conditions;
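A minimal sketch of the adaptive median filter of Step 1.2: the window starts at 3×3 and grows while the local median is itself an impulse; the maximum window size (7×7 here) is an assumption, since the patent only states the starting size:

```python
import numpy as np

def adaptive_median_filter(img, s_max=7):
    """Adaptive median filter: grow the window from 3x3 until the
    median is not an impulse (min < median < max); keep the pixel if
    it is not an impulse itself, otherwise replace it by the median."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            s = 3
            while s <= s_max:
                r = s // 2
                win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:              # median not impulsive
                    if not (zmin < img[y, x] < zmax):
                        out[y, x] = zmed            # replace impulsive pixel
                    break
                s += 2                              # grow the window
            else:
                out[y, x] = zmed                    # fall back to last median
    return out
```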
Step 1.3: apply a multi-scale top-hat transform to the RGB three-channel image of the denoised ROI image for contrast enhancement, and fuse the enhanced results of the channels to obtain the contrast-enhanced ROI image I_en, i.e. the preprocessed fundus image,
where γ and φ denote the morphological opening and closing operations respectively, S_i denotes the morphological structuring element of scale i, the intermediate terms denote the optimal bright regions, optimal bright details, optimal dark regions, and optimal dark details obtained by the morphological processing, and I_tn denotes any one channel of the denoised ROI image tn.
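The patent fuses optimal bright/dark regions and details over several structuring-element scales; the following simplified sketch keeps only the maximum white and black top-hats over a few assumed scales, which is the usual core of multi-scale top-hat contrast enhancement:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def multiscale_tophat_enhance(channel, scales=(3, 5, 7)):
    """Simplified multi-scale top-hat enhancement: add the maximum
    white top-hat (bright features) and subtract the maximum black
    top-hat (dark features) taken over the given scales."""
    ch = channel.astype(float)
    white = np.max([ch - grey_opening(ch, size=(s, s)) for s in scales], axis=0)
    black = np.max([grey_closing(ch, size=(s, s)) - ch for s in scales], axis=0)
    return np.clip(ch + white - black, 0, 255)
```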
Further, the specific steps of Step 2 are as follows:
Step 2.1: divide the preprocessed fundus image into multiple 10×10 regions and estimate the illuminant of the three channels of each region with the Grey-World algorithm. The illuminant estimation formula is:
(1/N) Σ_x f(x) = k · e
where f(x) is the pixel value at point x of the region, N is the number of pixels in the region, e is the illuminant, and k is the gain coefficient of the corresponding channel;
Step 2.2: cluster the illuminant estimates of the regions with the K-means clustering algorithm;
Step 2.3: convert the unknown illuminants obtained after the clustering of Step 2.2 into a standard illuminant through the Von Kries model to obtain the color-corrected fundus image. The conversion formula is as follows:
I_c = A_u I_en,
A_u = diag(e_c^R / e_u^R, e_c^G / e_u^G, e_c^B / e_u^B)
where I_en is the preprocessed fundus image, I_c is the fundus image under the standard illuminant c obtained after the diagonal-model transform, i.e. the color-corrected fundus image, the diagonal model refers to the Von Kries model, A_u is the diagonal matrix of the unknown illuminant u, and R, G, B denote the three channel components.
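Steps 2.1 to 2.3 can be sketched end to end: per-block Grey-World estimates, a minimal K-means over them, and a Von Kries diagonal correction mapping each cluster's illuminant to white. The number of clusters (k=2) and the canonical white illuminant are assumptions not fixed by the patent:

```python
import numpy as np

def block_illuminants(img, block=10):
    """Grey-World illuminant estimate (normalised mean RGB) per block."""
    h, w, _ = img.shape
    ests, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            e = img[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            ests.append(e / (np.linalg.norm(e) + 1e-12))
            coords.append((y, x))
    return np.array(ests), coords

def kmeans(data, k=2, iters=20, seed=0):
    """Minimal Lloyd's K-means over the illuminant estimates."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

def von_kries_correct(img, block=10, k=2):
    """Map each block's clustered illuminant to a canonical white
    with a Von Kries diagonal transform I_c = A_u I_en."""
    ests, coords = block_illuminants(img, block)
    labels, centers = kmeans(ests, k)
    white = np.ones(3) / np.sqrt(3)            # canonical illuminant (assumed)
    out = img.astype(float).copy()
    for (y, x), lab in zip(coords, labels):
        gain = white / (centers[lab] + 1e-12)  # diagonal of A_u
        out[y:y + block, x:x + block] *= gain
    return out
```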
Further, the specific steps of Step 3 are as follows:
Step 3.1: model the blood vessels in the color-corrected fundus image as a combination of piecewise-linear segments of length L and width 3σ, and build a Gaussian matched filter whose Gaussian curve simulates the grey-level profile of the vessel cross-section. The formula of the Gaussian matched filter is as follows:
K(x, y) = −exp(−x² / (2σ²)), for |y| ≤ L/2
Let A denote the number of pixels in the neighborhood N; the mean response of the Gaussian matched filter is then:
m = (1/A) Σ_{(x_i, y_i) ∈ N} K(x_i, y_i)
which finally yields the convolution mask K'(x, y) = K(x, y) − m. The convolution mask is convolved with the preprocessed fundus image to obtain the convolution result in one direction: I_k(x, y) = I_c(x, y) * K'_t(x, y).
The Gaussian matched filter is rotated in 15-degree steps from 0 to 180 degrees, giving matched filters in 12 directions; convolution is performed with each, and at every pixel the convolution result with the maximum response is taken as the final output, yielding the vessel map. The rotation matrix of the θ-th rotation is:
r_θ = [cos(15θ°) −sin(15θ°); sin(15θ°) cos(15θ°)]
where θ ranges from 1 to 12 and indexes the rotation;
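Step 3.1 can be sketched as follows: a zero-mean inverted-Gaussian kernel rotated in 15-degree steps, keeping the per-pixel maximum of the 12 convolution responses. The kernel length and σ are illustrative values, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_matched_kernel(sigma=1.0, length=7, angle_deg=0.0):
    """Zero-mean Gaussian matched-filter kernel: inverted Gaussian
    across the vessel, constant along its length, rotated by angle."""
    half = length // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    th = np.deg2rad(angle_deg)
    u = xx * np.cos(th) + yy * np.sin(th)       # across-vessel coordinate
    v = -xx * np.sin(th) + yy * np.cos(th)      # along-vessel coordinate
    k = -np.exp(-u**2 / (2 * sigma**2))
    k[np.abs(v) > length / 2] = 0               # limit the segment length
    return k - k[k != 0].mean() * (k != 0)      # subtract the mean response

def vessel_response(img, sigma=1.0, length=7):
    """Convolve with 12 kernels (0..165 deg, 15-deg steps) and keep
    the per-pixel maximum response (the vessel map)."""
    resp = [convolve(img.astype(float), gaussian_matched_kernel(sigma, length, a))
            for a in range(0, 180, 15)]
    return np.max(resp, axis=0)
```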
Step 3.2: based on the vessel map, and exploiting the feature that the blood vessels interconnect into a tree that converges at the single optic disc region, obtain the center point of the optic disc with a weighted vascular-network direction matched filter; the vascular-network direction matched filter is a 9×9 template. Specifically, a weight template is applied to the vascular-network direction matched filter: multiplying the filter template element-wise by the weight template gives the weighted vascular-network direction matched filter;
Step 3.3: take each pixel (x, y) of the color-corrected fundus image together with its three-channel pixel values I_c(x, y), where I_c(x, y) refers to the R, G, and B components, as a 5-dimensional joint feature space (x, y, I_c(x, y)); with a Gaussian function of standard deviation σ as the kernel function, compute the probability density of each pixel.
According to the obtained probability densities, search for the nearby point that maximizes the probability density and take the local maximum as the cluster center; each pixel is labeled with its cluster center to obtain the corresponding segmentation, i.e. multiple superpixels are obtained by fast mean-shift segmentation. The specific steps of obtaining the superpixels by fast mean-shift segmentation are:
Compare the probability density of each pixel with those of the pixels in its neighborhood; when P(x_ma, y_ma, I_c(x_ma, y_ma)) > P(x, y, I_c(x, y)), where (x_ma, y_ma) denotes the pixel whose probability density exceeds that of the other pixels in the neighborhood, mark the pixels (x, y) and (x_ma, y_ma), and take (x_ma, y_ma) as the parent-layer superpixel, forming a branch of the tree;
If the probability density of every pixel has been compared with those of the pixels in its neighborhood, build the pixels into a "tree" according to probability density and save the information of the superpixels at each layer; otherwise go to step S3.2.2;
Based on the obtained "trees", one per neighborhood, compute the distance between "tree" nodes within each tree;
For each "tree", starting from the lower-layer nodes, compare the distance between nodes with the threshold τ: if the distance exceeds the given threshold τ, mark the corresponding branch so that it forms a subtree, a "local mode"; otherwise merge the branch whose distance is less than or equal to τ into the local mode. The computation proceeds from the lower layers to the upper layers until the points belonging to the same mode constitute one superpixel;
Step 3.4: find the superpixel corresponding to the center point of the optic disc, which is the optic disc region. After obtaining the optic disc region, first dilate it morphologically, then remove it from the color-corrected fundus image; after removal, take the R channel component map as the marker image and perform morphological reconstruction with the R channel component map of the color-corrected fundus image as the mask image, obtaining an image without the optic disc region; the difference between the R channel component map and the reconstructed image gives the complete optic disc image.
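The marker/mask reconstruction used in Step 3.4 can be sketched with an iterative grey-scale reconstruction by dilation; the 3×3 dilation element and the blanked-marker construction are assumptions consistent with the step's description:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def reconstruct(marker, mask, max_iter=200):
    """Morphological reconstruction by dilation: repeatedly dilate the
    marker and clip it under the mask until it stabilises."""
    m = np.minimum(marker, mask).astype(float)
    for _ in range(max_iter):
        prev = m
        m = np.minimum(grey_dilation(m, size=(3, 3)), mask)
        if np.array_equal(m, prev):
            break
    return m

def remove_bright_region(channel, region_mask):
    """Suppress a bright region (e.g. the optic disc) by reconstructing
    the channel from a marker with the region blanked out; the
    difference image then isolates the region."""
    marker = channel.astype(float).copy()
    marker[region_mask] = 0                  # remove the region from the marker
    recon = reconstruct(marker, channel.astype(float))
    return recon, channel - recon            # recon lacks the region; diff isolates it
```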
Further, the specific steps of Step 4 are as follows:
Step 4.1: morphologically dilate the extracted ROI image and subtract the original ROI image from the dilated one to obtain the image border; add the optic disc image to the image border to obtain the marker image; perform morphological reconstruction with this marker image and with the G channel component map of the color-corrected fundus image as the mask image; the difference between the G channel component map and the reconstructed image gives the preliminary candidate regions;
Step 4.2: first dilate the preliminary candidate regions morphologically, then remove them from the color-corrected fundus image; after removal take the G channel component map as the marker image and perform morphological reconstruction with the G channel component map of the color-corrected fundus image as the mask image, obtaining an image without the hard-exudate regions; subtract the reconstructed image from the G channel component map, normalize the resulting difference image, and threshold it to obtain a binary image; use the binary image to extract the hard-exudate candidate regions from the original fundus image, i.e. multiply the binary image by the original fundus image to obtain the candidate regions.
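The final step of Step 4.2 (normalize the difference image, threshold it, and multiply the binary map with the original image) can be sketched as follows; the fixed threshold of 0.5 is an assumption, since the patent does not state the thresholding rule:

```python
import numpy as np

def candidate_regions(diff, original, thresh=0.5):
    """Normalise the reconstruction difference image to [0, 1],
    threshold it to a binary map, and multiply with the original
    image to obtain the candidate-region image."""
    lo, hi = diff.min(), diff.max()
    d = (diff - lo) / (hi - lo + 1e-12)      # normalisation
    binary = d > thresh                      # thresholding
    return binary, original * binary         # candidate-region image
```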
Further, the specific steps of Step 5 are as follows:
Step 5.1: extract color histogram features from the hard-exudate candidate regions I_h of the original fundus image;
Step 5.2: using the binary image of Step 4.2, extract the hard-exudate candidate regions I'_h from the color-corrected fundus image, and extract color constancy features in these regions; the color constancy features comprise the luminance mean, luminance standard deviation, entropy, contrast, edge strength, and region compactness of the R, G, and B channels. The specific calculation formulas are as follows:
Luminance mean of the R, G, B channels: μ_n = (1/|Ω|) Σ_{(x,y)∈Ω} I_h^n(x, y), n ∈ {R, G, B}, where I_h^n denotes the brightness of each pixel of the candidate-region image in channel n and Ω denotes the pixel set of the candidate-region image;
Luminance standard deviation of the R, G, B channels: σ_n = sqrt((1/|Ω|) Σ_{(x,y)∈Ω} (I_h^n(x, y) − μ_n)²), n ∈ {R, G, B};
Entropy of the R, G, B channels: E_n = −Σ_{i=1}^{256} (f_n(i)/|Ω|) log₂(f_n(i)/|Ω|), n ∈ {R, G, B}, where f_n(i) denotes the number of pixels in channel n with pixel value i, and i ranges from 1 to 256;
Contrast of the R, G, B channels: Con_n = Σ δ(i, j)² P_δ(i, j), n ∈ {R, G, B}, where δ(i, j) is the grey-level difference between adjacent pixels and P_δ(i, j) is the distribution probability of adjacent-pixel pairs whose grey-level difference is δ;
Edge strength of the R, G, B channels: extract the edges of each channel with the Sobel operator and compute the luminance mean of the edge pixels;
Region compactness of the R, G, B channels: computed from the region perimeter C_n and the region area S_n of channel n, n ∈ {R, G, B};
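The per-channel statistics of Step 5.2 can be sketched as below. The compactness normalisation C²/(4πS) is an assumption (the patent only names the perimeter C_n and area S_n), and the perimeter is approximated by boundary-pixel counting:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def channel_stats(img, mask):
    """Luminance mean, standard deviation, and entropy per channel
    over the candidate-region pixels selected by mask."""
    feats = {}
    for n, name in enumerate("RGB"):
        vals = img[..., n][mask].astype(int)
        hist = np.bincount(vals, minlength=256)[:256]
        p = hist / hist.sum()
        p = p[p > 0]                          # drop empty bins for the entropy
        feats[name] = (vals.mean(), vals.std(), -(p * np.log2(p)).sum())
    return feats

def compactness(mask):
    """Region compactness C^2 / (4*pi*S) from the perimeter pixel
    count C and the area S (normalisation assumed)."""
    perimeter = mask & ~binary_erosion(mask)  # boundary pixels
    c, s = perimeter.sum(), mask.sum()
    return c**2 / (4 * np.pi * s)
```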
Step 5.3: extract LBP texture features from the hard-exudate candidate regions I_h of the original fundus image, where, in the extraction formula, p denotes the p-th pixel in the 3×3 window excluding the central pixel, I(c_n) denotes the grey value of the central pixel c_n, and I(p) denotes the grey value of the p-th pixel in the neighborhood.
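A minimal sketch of the basic 3×3 LBP described in Step 5.3: each of the 8 neighbours is compared with the centre pixel and the comparison bits are packed into an 8-bit code (the bit ordering is an assumption):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: threshold each 3x3 neighbourhood at the
    centre pixel and pack the 8 comparison bits into a code."""
    g = gray.astype(float)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(g[1:-1, 1:-1], dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neigh >= g[1:-1, 1:-1]).astype(np.uint8) << bit)
    return codes
```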
A method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model comprises the following steps:
S1: input an original fundus image and preprocess it, where the original fundus image is the fundus image to be analyzed;
S2: apply a multi-illuminant color constancy algorithm to color-correct the preprocessed original fundus image;
S3: locate the optic disc using the vessel information in the color-corrected fundus image, then segment the optic disc region from the color-corrected fundus image by fast mean-shift segmentation to obtain an optic disc image;
S4: perform thresholding and morphological reconstruction based on the optic disc image and the preprocessed image to extract the hard-exudate candidate regions, obtaining a candidate-region image;
S5: extract the color histogram, color constancy, and texture features of the candidate-region image;
S6: input the color histogram, color constancy, and texture features of the candidate-region image into a trained support vector machine to obtain the final detection result.
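The classification step S6 can be sketched with a standard SVM; the kernel choice and the synthetic features are assumptions, since the patent only names a trained support vector machine:

```python
import numpy as np
from sklearn.svm import SVC

def train_exudate_classifier(features, labels):
    """Train an SVM on concatenated candidate-region features
    (colour histogram + colour-constancy stats + LBP histogram);
    the RBF kernel is an assumed choice."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(features, labels)
    return clf
```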
Further, the specific steps of step S1 are as follows:
S1.1: input the original fundus image and select its R channel component map, which best reflects the illumination, to extract the ROI and obtain the ROI image. The extraction formula is:
I_mask = α_d(T_t(I_R))
t = 0.05 × t_max
where α_d denotes morphological erosion with a circular structuring element d and T_t denotes thresholding at threshold t. Since the maximum brightness outside the ROI is about 5% of the maximum brightness t_max inside the ROI, the threshold is chosen as 0.05 × t_max; thresholding followed by erosion with d yields the mask. I_R is the R channel component map of the original fundus image and I_mask is the ROI image;
S1.2: denoise the ROI image with a 3×3 adaptive median filter, i.e. a median filter that dynamically adjusts its window size according to preset conditions;
S1.3: apply a multi-scale top-hat transform to the RGB three-channel image of the denoised ROI image for contrast enhancement, and fuse the enhanced results of the channels to obtain the contrast-enhanced ROI image I_en, i.e. the preprocessed fundus image,
where γ and φ denote the morphological opening and closing operations respectively, S_i denotes the morphological structuring element of scale i, the intermediate terms denote the optimal bright regions, optimal bright details, optimal dark regions, and optimal dark details obtained by the morphological processing, and I_tn denotes any one channel of the denoised ROI image tn.
Further, the specific steps of step S2 are as follows:
S2.1: divide the preprocessed fundus image into multiple 10×10 regions and estimate the illuminant of the three channels of each region with the Grey-World algorithm. The illuminant estimation formula is:
(1/N) Σ_x f(x) = k · e
where f(x) is the pixel value at point x of the region, N is the number of pixels in the region, e is the illuminant, and k is the gain coefficient of the corresponding channel;
S2.2: cluster the illuminant estimates of the regions with the K-means clustering algorithm;
S2.3: convert the unknown illuminants obtained after the clustering of step S2.2 into a standard illuminant through the Von Kries model to obtain the color-corrected fundus image. The conversion formula is as follows:
I_c = A_u I_en,
A_u = diag(e_c^R / e_u^R, e_c^G / e_u^G, e_c^B / e_u^B)
where I_en is the preprocessed fundus image, I_c is the fundus image under the standard illuminant c obtained after the diagonal-model transform, i.e. the color-corrected fundus image, the diagonal model refers to the Von Kries model, A_u is the diagonal matrix of the unknown illuminant u, and R, G, B denote the three channel components;
The specific steps of step S3 are as follows:
S3.1: model the blood vessels in the color-corrected fundus image as a combination of piecewise-linear segments of length L and width 3σ, and build a Gaussian matched filter whose Gaussian curve simulates the grey-level profile of the vessel cross-section. The formula of the Gaussian matched filter is as follows:
K(x, y) = −exp(−x² / (2σ²)), for |y| ≤ L/2
Let A denote the number of pixels in the neighborhood N; the mean response of the Gaussian matched filter is then:
m = (1/A) Σ_{(x_i, y_i) ∈ N} K(x_i, y_i)
which finally yields the convolution mask K'(x, y) = K(x, y) − m. The convolution mask is convolved with the preprocessed fundus image to obtain the convolution result in one direction: I_k(x, y) = I_c(x, y) * K'_t(x, y).
The Gaussian matched filter is rotated in 15-degree steps from 0 to 180 degrees, giving matched filters in 12 directions; convolution is performed with each, and at every pixel the convolution result with the maximum response is taken as the final output, yielding the vessel map. The rotation matrix of the θ-th rotation is:
r_θ = [cos(15θ°) −sin(15θ°); sin(15θ°) cos(15θ°)]
where θ ranges from 1 to 12 and indexes the rotation;
S3.2: based on the vessel map, and exploiting the feature that the blood vessels interconnect into a tree that converges at the single optic disc region, obtain the center point of the optic disc with a weighted vascular-network direction matched filter; the vascular-network direction matched filter is a 9×9 template. Specifically, a weight template is applied to the vascular-network direction matched filter: multiplying the filter template element-wise by the weight template gives the weighted vascular-network direction matched filter;
S3.3: take each pixel (x, y) of the color-corrected fundus image together with its three-channel pixel values I_c(x, y), where I_c(x, y) refers to the R, G, and B components, as a 5-dimensional joint feature space (x, y, I_c(x, y)); with a Gaussian function of standard deviation σ as the kernel function, compute the probability density of each pixel.
According to the obtained probability densities, search for the nearby point that maximizes the probability density and take the local maximum as the cluster center; each pixel is labeled with its cluster center to obtain the corresponding segmentation, i.e. multiple superpixels are obtained by fast mean-shift segmentation. The specific steps of obtaining the superpixels by fast mean-shift segmentation are:
Compare the probability density of each pixel with those of the pixels in its neighborhood; when P(x_ma, y_ma, I_c(x_ma, y_ma)) > P(x, y, I_c(x, y)), where (x_ma, y_ma) denotes the pixel whose probability density exceeds that of the other pixels in the neighborhood, mark the pixels (x, y) and (x_ma, y_ma), and take (x_ma, y_ma) as the parent-layer superpixel, forming a branch of the tree;
If the probability density of every pixel has been compared with those of the pixels in its neighborhood, build the pixels into a "tree" according to probability density and save the information of the superpixels at each layer; otherwise go to step S3.2.2;
Based on the obtained "trees", one per neighborhood, compute the distance between "tree" nodes within each tree;
For each "tree", starting from the lower-layer nodes, compare the distance between nodes with the threshold τ: if the distance exceeds the given threshold τ, mark the corresponding branch so that it forms a subtree, a "local mode"; otherwise merge the branch whose distance is less than or equal to τ into the local mode. The computation proceeds from the lower layers to the upper layers until the points belonging to the same mode constitute one superpixel;
S3.4: find the superpixel corresponding to the center point of the optic disc, which is the optic disc region. After obtaining the optic disc region, first dilate it morphologically, then remove it from the color-corrected fundus image; after removal, take the R channel component map as the marker image and perform morphological reconstruction with the R channel component map of the color-corrected fundus image as the mask image, obtaining an image without the optic disc region; the difference between the R channel component map and the reconstructed image gives the complete optic disc image.
The specific steps of step S4 are as follows:
S4.1: morphologically dilate the extracted ROI image and subtract the original ROI image from the dilated one to obtain the image border; add the optic disc image to the image border to obtain the marker image; perform morphological reconstruction with this marker image and with the G channel component map of the color-corrected fundus image as the mask image; the difference between the G channel component map and the reconstructed image gives the preliminary candidate regions;
S4.2: first dilate the preliminary candidate regions morphologically, then remove them from the color-corrected fundus image; after removal take the G channel component map as the marker image and perform morphological reconstruction with the G channel component map of the color-corrected fundus image as the mask image, obtaining an image without the hard-exudate regions; subtract the reconstructed image from the G channel component map, normalize the resulting difference image, and threshold it to obtain a binary image; use the binary image to extract the hard-exudate candidate regions from the original fundus image, i.e. multiply the binary image by the original fundus image to obtain the candidate regions;
The specific steps of the step S5 are:
S5.1. Extract color histogram features from the hard-exudate candidate regions Ih of the original fundus image;
S5.2. Using the binary image from S4.2, extract the hard-exudate candidate regions I'h from the color-corrected fundus image and compute color constancy features over these regions. The color constancy features comprise the luminance mean, luminance standard deviation, entropy, contrast, edge strength and region compactness of the R, G and B channels, computed as follows:
Luminance mean of the R, G and B channels: μn = (1/|Ω|) Σ_{(x,y)∈Ω} In(x, y), n ∈ {R, G, B}, where In(x, y) is the brightness of each pixel of the candidate-region image in channel n and Ω is the pixel set of the candidate-region image;
Luminance standard deviation of the R, G and B channels: σn = ((1/|Ω|) Σ_{(x,y)∈Ω} (In(x, y) − μn)²)^(1/2), n ∈ {R, G, B};
Entropy of the R, G and B channels: En = −Σ_{i=1}^{256} pn(i) log pn(i) with pn(i) = fn(i)/|Ω|, n ∈ {R, G, B}, where fn(i) is the number of pixels of channel n with value i and i ranges from 1 to 256;
Contrast of the R, G and B channels: Conn = Σ_δ δ(i, j)² Pδ(i, j), n ∈ {R, G, B}, where δ(i, j) is the gray-level difference between adjacent pixels and Pδ(i, j) is the distribution probability that the gray-level difference between adjacent pixels equals δ;
Edge strength of the R, G and B channels: extract the edges of each channel with the Sobel operator and compute the mean luminance of the edge pixels;
Region compactness of the R, G and B channels: ρn = Cn²/(4πSn), n ∈ {R, G, B}, where Cn is the region perimeter and Sn the region area in channel n;
S5.3. Extract LBP texture features from the hard-exudate candidate regions Ih of the original fundus image with the formula
LBP = Σ_{p=0}^{7} s(I(p) − I(cn)) · 2^p, where s(x) = 1 for x ≥ 0 and 0 otherwise,
and where p indexes the eight pixels of the 3×3 window other than the central pixel, I(cn) is the gray value of the central pixel cn, and I(p) is the gray value of the p-th neighborhood pixel.
Further, the specific steps of the step S6 are:
S6.1. Divide the acquired original data set into a training set and a test set at a 7:3 ratio, segment the candidate-region images from the original data set, and extract their color histogram, color constancy and texture features;
S6.2. Apply PCA (principal component analysis) to the extracted color histogram, color constancy and texture feature data to reduce the original 107 dimensions to 27 dimensions;
S6.3. Train a support vector machine (SVM) on the dimensionality-reduced features of the training set;
S6.4. Traverse every candidate-region image of the test set, feed the dimensionality-reduced features of each candidate-region image into the SVM, and determine whether the candidate-region image is a hard exudate. After the traversal, if the classification accuracy is below 90%, adjust the SVM parameters and repeat steps S6.3–S6.4; otherwise the trained SVM is obtained;
S6.5. Feed the color histogram, color constancy and texture features extracted from the original fundus image into the trained SVM for classification, obtaining the final detection result.
Compared with the prior art, the present invention has the following advantages:
1. After color-correcting the fundus image with a color constancy algorithm, the invention extracts color constancy features of the candidate regions, including the luminance mean, luminance standard deviation, entropy, contrast, edge strength and region compactness of the R, G and B channels, which describe hard-exudate regions well; supplemented by color histogram and texture features, this improves the hard-exudate detection accuracy;
2. The invention color-corrects the fundus image with a multi-illuminant color constancy algorithm so that it is unaffected by scene illumination, reflection and similar factors, greatly reducing the hard exudates missed because of uneven illumination and effectively improving the detection accuracy of the hard-exudate candidate regions;
3. By extracting color constancy features and applying them to the separation of hard-exudate and non-exudate regions among the candidate regions, the invention effectively improves the detection accuracy, sensitivity and specificity for hard-exudate regions;
4. The method of the invention is simple to compute and fast, and therefore well suited to real-time use.
Description of the drawings
Fig. 1 is a flowchart of the method of the present invention.
Fig. 2 is an original fundus image of an embodiment of the present invention.
Fig. 3 is the preprocessed original fundus image of the embodiment.
Fig. 4 is the color-corrected fundus image of the embodiment.
Fig. 5 is the vessel map extracted in the embodiment.
Fig. 6 is the optic disc image extracted in the embodiment.
Fig. 7 is the hard-exudate candidate-region image extracted in the embodiment.
Fig. 8 is the image obtained in the embodiment after separating hard exudates from non-exudate regions and marking them on the original fundus image.
Specific embodiment
The invention is further described below with reference to the drawings and specific embodiments.
A method for detecting hard exudates in fundus images based on a multi-illuminant color constancy model comprises the following steps:
S1: Input the original fundus image, i.e. the fundus image to be analyzed, and preprocess it. Specific steps:
S1.1. Input the original fundus image and select its R-channel component map, which best reflects the illumination, to extract the region of interest (ROI) and obtain the ROI image:
Imask = αd(T(IR)),
t = 0.05 · tmax,
where α denotes morphological erosion and T threshold segmentation; since the maximum brightness outside the ROI bears a 5% ratio to the maximum brightness tmax inside it, 0.05 · tmax is chosen as the threshold, after which erosion with a circular structuring element d gives the mask; IR is the R-channel component map of the original fundus image and Imask the ROI image;
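The ROI extraction of step S1.1 can be sketched as follows; this is a minimal illustration, and the structuring-element radius is an assumed parameter, since the patent only specifies a circular element d:

```python
import numpy as np
from scipy import ndimage

def extract_roi_mask(red_channel, disk_radius=5):
    """Threshold the R channel at 5% of its maximum brightness, then
    erode the mask with a circular structuring element (step S1.1)."""
    t_max = red_channel.max()
    mask = red_channel > 0.05 * t_max          # threshold t = 0.05 * t_max
    # build the circular structuring element d
    yy, xx = np.mgrid[-disk_radius:disk_radius + 1,
                      -disk_radius:disk_radius + 1]
    disk = (xx ** 2 + yy ** 2) <= disk_radius ** 2
    return ndimage.binary_erosion(mask, structure=disk)
```

The erosion trims the ragged boundary left by the threshold so the mask hugs the circular fundus field.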
S1.2. Denoise the ROI image with a 3×3 adaptive median filter, i.e. a median filter that dynamically changes its window size according to a preset condition; the adaptive window balances the denoising effect against the preservation of detail;
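A minimal sketch of the adaptive median filter of step S1.2 follows. The patent only states that the window grows on a preset condition; the standard impulse test (is the median strictly between the window minimum and maximum?) and the maximum window size are assumptions:

```python
import numpy as np

def adaptive_median_filter(img, w_init=3, w_max=7):
    """3x3 adaptive median filter: enlarge the window while the median
    itself looks like an impulse, then either keep the pixel (detail)
    or replace it with the median (impulse)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = w_init
            while win <= w_max:
                r = win // 2
                patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                z_min, z_max, z_med = patch.min(), patch.max(), np.median(patch)
                if z_min < z_med < z_max:        # median is not an impulse
                    if z_min < img[y, x] < z_max:
                        out[y, x] = img[y, x]    # keep genuine detail
                    else:
                        out[y, x] = z_med        # replace the impulse
                    break
                win += 2                         # grow the window and retry
            else:
                out[y, x] = z_med                # fall back to the median
    return out
```

On a flat region with a single salt impulse, the impulse is replaced by the surrounding value while clean pixels are untouched.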
S1.3. Enhance the contrast of the three RGB channel images of the denoised ROI image with multiscale top-hat transforms and fuse the enhanced results of the channels to obtain the contrast-enhanced ROI image Ien, i.e. the preprocessed fundus image,
where γ and φ denote the mathematical-morphology opening and closing operations, Si is the morphological structuring element of scale i, the bright-region and bright-detail terms are the optimal bright regions among the bright regions w and the optimal bright details among the bright details w obtained by the morphological processing, the dark-region and dark-detail terms the corresponding optima within the dark regions b and dark details b, and Itn is any channel of the denoised ROI image tn.
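One common form of the multiscale top-hat enhancement of step S1.3 can be sketched as below. The specific scales and the max-based fusion of bright/dark features are assumptions, since the patent's formulas are only described qualitatively here:

```python
import numpy as np
from scipy import ndimage

def multiscale_tophat_enhance(channel, scales=(3, 5, 7)):
    """Enhance one channel: bright features from white top-hats
    (image minus opening), dark features from black top-hats
    (closing minus image), fused over scales by a pixelwise maximum."""
    channel = channel.astype(float)
    bright = np.zeros_like(channel)
    dark = np.zeros_like(channel)
    for s in scales:
        opened = ndimage.grey_opening(channel, size=(s, s))
        closed = ndimage.grey_closing(channel, size=(s, s))
        bright = np.maximum(bright, channel - opened)   # white top-hat
        dark = np.maximum(dark, closed - channel)       # black top-hat
    # add bright features and subtract dark features
    return channel + bright - dark
```

A bright spot on a flat background is amplified while the background is left unchanged, which is the behavior the patent relies on to make exudates stand out.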
S2: Color-correct the preprocessed fundus image with a multi-illuminant color constancy algorithm. Specific steps:
S2.1. Divide the preprocessed fundus image into 10×10 regions and estimate the illuminant of the three channels of each region with the Grey-world algorithm:
(1/|X|) Σ_{x∈X} fc(x) = kc ec, c ∈ {R, G, B},
where f(x) is the pixel value at point x of the region, e the illuminant, and k the gain coefficient, one per channel;
S2.2. Cluster the per-region illuminant estimates with the K-means clustering algorithm;
S2.3. Convert the clustered unknown illuminants obtained in step S2.2 into the canonical illuminant through the Von Kries model, obtaining the color-corrected fundus image:
Ic = Au Ien,
where Ien is the preprocessed fundus image, Ic the fundus image under the canonical illuminant c obtained by the diagonal-model transform, i.e. the color-corrected fundus image, the diagonal model is the Von Kries model, Au the diagonal matrix of the unknown illuminant u, and R, G, B the three channel components.
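Steps S2.1 and S2.3 can be sketched as follows for a single region; a white canonical illuminant is an assumption (the patent does not fix the canonical illuminant), and the gain k is normalized away:

```python
import numpy as np

def grey_world_illuminant(region):
    """Grey-world estimate for one region (step S2.1): the illuminant is
    taken proportional to the per-channel mean, then normalized."""
    e = region.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

def von_kries_correct(region, e_unknown, e_canon=np.ones(3) / np.sqrt(3)):
    """Von Kries diagonal correction (step S2.3): I_c = A_u * I_en with
    A_u = diag(e_canon / e_unknown), applied per pixel by broadcasting."""
    a_u = e_canon / e_unknown
    return region * a_u
```

Applying the correction with the region's own grey-world estimate removes a uniform color cast, i.e. a reddish region becomes achromatic.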
S3: Locate the optic disc using the vessel information of the color-corrected fundus image, then segment the optic disc region from the color-corrected fundus image by quick-shift (fast mean-shift) segmentation to obtain the optic disc image. Specific steps:
S3.1. Model the vessels in the preprocessed image as a combination of piecewise parallel segments of length L and width 3σ, and build a Gaussian matched filter whose Gaussian profile models the gray-level curve of a vessel cross-section:
K(x, y) = −exp(−u²/2σ²).
With A the number of pixels in the neighborhood N, the mean response of the Gaussian matched filter is mi = (1/A) Σ_{(x,y)∈N} K(x, y), giving the convolution mask K'i(x, y) = K(x, y) − mi. Convolving the mask with the preprocessed fundus image gives the response in one direction: Ik(x, y) = Ic(x, y) * K'θ(x, y).
Since the vessels in a fundus image are oriented and stretch in random directions, and the matched filter peaks in the direction perpendicular to a vessel, the filter is rotated in 15-degree steps from 0 to 180 degrees, giving Gaussian matched filters in 12 directions. Each is convolved with the image, and the convolution result with the maximum response is taken as the final output, yielding the vessel map. The rotation matrix for angle θi is ri = [cos θi, −sin θi; sin θi, cos θi] with θi = 15°·i,
where i ranges from 1 to 12 and indexes the rotation;
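The 12-orientation matched-filter bank of step S3.1 can be sketched as below; the kernel size and σ are assumed values, and the rotation is implemented by rotating the sampling coordinates rather than the kernel image:

```python
import numpy as np
from scipy import ndimage

def matched_filter_bank(sigma=1.0, length=9, n_angles=12):
    """Build zero-mean Gaussian matched-filter kernels rotated in
    15-degree steps (step S3.1)."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for i in range(n_angles):
        theta = np.deg2rad(i * 15)
        u = xs * np.cos(theta) + ys * np.sin(theta)    # across the vessel
        v = -xs * np.sin(theta) + ys * np.cos(theta)   # along the vessel
        mask = np.abs(v) <= half                       # segment length L
        k = np.where(mask, -np.exp(-u ** 2 / (2 * sigma ** 2)), 0.0)
        k[mask] -= k[mask].mean()                      # zero-mean mask K'
        kernels.append(k)
    return kernels

def vessel_response(img, kernels):
    """Maximum response over all orientations (the vessel map)."""
    responses = [ndimage.convolve(img.astype(float), k) for k in kernels]
    return np.max(responses, axis=0)
```

Because each mask is zero-mean, flat background gives (near) zero response, while a dark line aligned with one of the 12 orientations gives a strong positive peak.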
S3.2. Based on the vessel map, and exploiting the fact that the vessels interconnect into a tree and converge on the single optic disc region, obtain the center point of the optic disc with a weighted vessel-direction matched filter. The vessel-direction matched filter is a 9×9 template (chosen because the vessels inside and around the optic disc diverge), specifically:
The weight template used to weight the vessel-direction matched filter is:
Multiplying the vessel-direction matched-filter template by the weight template gives the weighted vessel-direction matched filter;
S3.3. Take each pixel (x, y) of the color-corrected fundus image together with its three-channel pixel value Ic(x, y) as a 5-dimensional joint feature space (x, y, Ic(x, y)) and, with a Gaussian kernel of standard deviation σ, obtain the probability density of each pixel.
From the resulting probability densities, search for the nearby point that maximizes the density and take the local maxima as cluster centers; mark each pixel according to its cluster center to obtain the corresponding segmentation, i.e. obtain multiple superpixels by quick-shift segmentation. The specific steps of obtaining the superpixels are:
Compare the probability density of each pixel with those of the pixels in its neighborhood. When P(xma, yma, Ic(xma, yma)) > P(x, y, Ic(x, y)), where (xma, yma) denotes the neighborhood pixel whose probability density is greater than that of the other pixels, mark the pixels (x, y) and (xma, yma) and take (xma, yma) as the parent-layer superpixel, forming a branch of the tree;
Once the probability density of every pixel has been compared with those of its neighborhood pixels, build the pixels into a "tree" ordered by probability density and save the information of each superpixel layer; otherwise return to step S3.2.2;
For each of the resulting "trees" (as many as there are neighborhoods), compute the distance between the nodes of the "tree";
In every "tree", starting from the lower-level nodes, compare the node distance with the threshold τ. If the distance exceeds the given threshold τ, mark the corresponding branch so that it forms a subtree, a "local mode"; otherwise merge the branches whose distance is at most τ into the local mode. Proceed from the lower layers to the upper layers until the points belonging to the same mode constitute one superpixel;
S3.4. Find the superpixel corresponding to the center point of the optic disc; it is the optic disc region. After obtaining the optic disc region, first dilate it morphologically, then remove it from the color-corrected fundus image and take the resulting R-channel component map as the marker image; with the R-channel component map of the color-corrected fundus image as the mask image, perform morphological reconstruction to obtain an image without the optic disc region. Subtracting the reconstructed image from the R-channel component map yields the complete optic disc image.
S4: Based on the optic disc image and the preprocessed image, perform threshold segmentation and morphological reconstruction to extract the hard-exudate candidate regions and obtain the candidate-region images. Specific steps:
S4.1. Dilate the extracted ROI image morphologically and subtract the original ROI image from the dilated one to obtain the image border; add the optic disc image to the image border to obtain the marker image. With the G-channel component map of the color-corrected fundus image as the mask image, perform morphological reconstruction and subtract the reconstructed image from the G-channel component map to obtain the preliminary candidate regions;
S4.2. First dilate the preliminary candidate regions morphologically, then remove them from the color-corrected fundus image and take the resulting G-channel component map as the marker image. With the G-channel component map of the color-corrected fundus image as the mask image, perform morphological reconstruction to obtain an image without hard-exudate regions. Subtract the reconstructed image from the G-channel component map, normalize the difference image, and threshold it to obtain a binary image; use the binary image to extract the hard-exudate candidate regions from the original fundus image, i.e. multiply the binary image by the original fundus image to obtain the candidate regions.
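The greyscale morphological reconstruction used throughout steps S4.1 and S4.2 can be sketched as iterative geodesic dilation; the 3×3 structuring element is an assumption:

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask):
    """Greyscale morphological reconstruction: repeatedly dilate the
    marker and clip it under the mask until the result is stable."""
    marker = np.minimum(marker, mask).astype(float)
    while True:
        dilated = ndimage.grey_dilation(marker, size=(3, 3))
        nxt = np.minimum(dilated, mask)
        if np.array_equal(nxt, marker):
            return marker
        marker = nxt
```

Subtracting the reconstruction from the mask image, as the patent does, keeps only the bright structures that are not connected to the marker — exactly how the candidate exudate blobs survive the difference.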
S5: Extract the color histogram, color constancy and texture features of the candidate-region images. Specific steps:
S5.1. Extract color histogram features from the hard-exudate candidate regions Ih of the original fundus image;
S5.2. Using the binary image from S4.2, extract the hard-exudate candidate regions I'h from the color-corrected fundus image and compute color constancy features over these regions. The color constancy features comprise the luminance mean, luminance standard deviation, entropy, contrast, edge strength and region compactness of the R, G and B channels, computed as follows:
Luminance mean of the R, G and B channels: μn = (1/|Ω|) Σ_{(x,y)∈Ω} In(x, y), n ∈ {R, G, B}, where In(x, y) is the brightness of each pixel of the candidate-region image in channel n and Ω is the pixel set of the candidate-region image;
Luminance standard deviation of the R, G and B channels: σn = ((1/|Ω|) Σ_{(x,y)∈Ω} (In(x, y) − μn)²)^(1/2), n ∈ {R, G, B};
Entropy of the R, G and B channels: En = −Σ_{i=1}^{256} pn(i) log pn(i) with pn(i) = fn(i)/|Ω|, n ∈ {R, G, B}, where fn(i) is the number of pixels of channel n with value i and i ranges from 1 to 256;
Contrast of the R, G and B channels: Conn = Σ_δ δ(i, j)² Pδ(i, j), n ∈ {R, G, B}, where δ(i, j) is the gray-level difference between adjacent pixels and Pδ(i, j) is the distribution probability that the gray-level difference between adjacent pixels equals δ;
Edge strength of the R, G and B channels: extract the edges of each channel with the Sobel operator and compute the mean luminance of the edge pixels;
Region compactness of the R, G and B channels: ρn = Cn²/(4πSn), n ∈ {R, G, B}, where Cn is the region perimeter and Sn the region area in channel n;
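A minimal sketch of the per-channel features of step S5.2 follows; pixel values are assumed to be integers in 1..256 as in the patent, and the boundary-pixel-count perimeter in the compactness is an assumption:

```python
import numpy as np
from scipy import ndimage

def constancy_features(channel, mask):
    """Luminance mean, standard deviation and entropy of one channel
    over the candidate-region pixel set Omega (given as `mask`)."""
    vals = channel[mask]
    mean = vals.mean()
    std = vals.std()
    hist = np.bincount(vals, minlength=257)[1:257].astype(float)
    p = hist[hist > 0] / vals.size           # p_n(i) = f_n(i) / |Omega|
    entropy = -(p * np.log2(p)).sum()
    return mean, std, entropy

def compactness(mask):
    """Region compactness C^2 / (4*pi*S), with the perimeter taken as
    the number of boundary pixels of the region."""
    eroded = ndimage.binary_erosion(mask)
    perimeter = int((mask & ~eroded).sum())
    area = int(mask.sum())
    return perimeter ** 2 / (4.0 * np.pi * area)
```

A perfectly uniform region has zero standard deviation and zero entropy, and a roughly convex blob has compactness near 1, which matches the intuition that exudate patches are compact and bright.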
S5.3. Extract LBP texture features from the hard-exudate candidate regions Ih of the original fundus image with the formula
LBP = Σ_{p=0}^{7} s(I(p) − I(cn)) · 2^p, where s(x) = 1 for x ≥ 0 and 0 otherwise,
and where p indexes the eight pixels of the 3×3 window other than the central pixel, I(cn) is the gray value of the central pixel cn, and I(p) is the gray value of the p-th neighborhood pixel.
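The LBP code of step S5.3 for a single 3×3 window can be sketched as below; the clockwise neighbour ordering starting at the top-left corner is an assumption, since the patent does not fix the ordering:

```python
import numpy as np

def lbp_3x3(patch):
    """LBP code of a 3x3 patch: threshold the 8 neighbours at the centre
    value and pack the resulting sign bits into one byte."""
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for p, (y, x) in enumerate(order):
        if patch[y, x] >= center:
            code |= 1 << p
    return code
```

The histogram of these codes over the candidate region is what serves as the texture descriptor.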
S6: Feed the color histogram, color constancy and texture features of the candidate-region images into the trained support vector machine to obtain the final detection result. Specific steps:
S6.1. Divide the acquired original data set into a training set and a test set at a 7:3 ratio, segment the candidate-region images from the original data set, and extract their color histogram, color constancy and texture features;
S6.2. Apply PCA (principal component analysis) to the extracted color histogram, color constancy and texture feature data to reduce the original 107 dimensions to 27 dimensions;
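The 107-to-27-dimension PCA of step S6.2 can be sketched via an SVD of the centred data matrix; the function name and interface are illustrative, not from the patent:

```python
import numpy as np

def pca_reduce(features, n_components=27):
    """Project feature vectors onto their top principal components:
    centre the data, take the SVD, and keep the first rows of V^T."""
    x = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T
```

The same projection matrix fitted on the training set must be reused for the test set and at detection time, otherwise the SVM sees inconsistent coordinates.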
S6.3. Train a support vector machine (SVM) on the dimensionality-reduced features of the training set;
S6.4. Traverse every candidate-region image of the test set, feed the dimensionality-reduced features of each candidate-region image into the SVM, and determine whether the candidate-region image is a hard exudate. After the traversal, if the classification accuracy is below 90%, adjust the SVM parameters and repeat steps S6.3–S6.4; otherwise the trained SVM is obtained;
S6.5. Feed the color histogram, color constancy and texture features extracted from the original fundus image into the trained SVM for classification, obtaining the final detection result.
Embodiment
The publicly available fundus image data set DIARETDB1, downloaded online and comprising 89 fundus images, serves as the original data set. It is divided at a 7:3 ratio into a training set of 62 images and a test set of 27 images; the candidate-region images are segmented from the original data set and their color histogram, color constancy and texture features extracted;
PCA (principal component analysis) reduces the extracted color histogram, color constancy and texture feature data from the original 107 dimensions to 27 dimensions;
The training set contains 747 candidate-region images; the SVM is trained on the dimensionality-reduced features of all of them;
The test set contains 364 candidate-region images. Each is traversed and its dimensionality-reduced features are fed into the SVM, which decides whether the region is a hard exudate. The classification accuracy is 91.2% (332/364). Marking the hard-exudate regions on the corresponding original fundus images completes the classification and detection; Fig. 8 shows the final result for one fundus image of the test set.
After the trained SVM is obtained, the original fundus image, i.e. the image to be analyzed, is processed; its color histogram, color constancy and texture features are extracted and fed into the trained SVM for detection, yielding the final classification result: hard exudates, when present, are detected and can be extracted, and when there are none, the result is simply that no hard exudate is found.
The above is only a representative embodiment among the numerous specific applications of the present invention and does not limit its scope of protection. All technical solutions formed by transformation or equivalent replacement fall within the scope of protection of the present invention.

Claims (10)

1. A method for extracting fundus image features based on a multi-illuminant color constancy model, characterized by comprising the following steps:
Step 1: inputting an original fundus image, i.e. the fundus image to be analyzed, and preprocessing it;
Step 2: color-correcting the preprocessed original fundus image with a multi-illuminant color constancy algorithm;
Step 3: locating the optic disc using the vessel information of the color-corrected fundus image, then segmenting the optic disc region from the color-corrected fundus image by quick-shift segmentation to obtain the optic disc image;
Step 4: performing threshold segmentation and morphological reconstruction based on the optic disc image and the preprocessed image to extract the hard-exudate candidate regions and obtain candidate-region images;
Step 5: extracting the color histogram, color constancy and texture features of the candidate-region images.
2. The method for extracting fundus image features based on a multi-illuminant color constancy model according to claim 1, characterized in that the specific steps of step 1 are:
Step 1.1: inputting the original fundus image and selecting its R-channel component map, which best reflects the illumination, to extract the ROI and obtain the ROI image:
Imask = αd(T(IR)),
t = 0.05 · tmax,
where α denotes morphological erosion and T threshold segmentation; since the maximum brightness outside the ROI bears a 5% ratio to the maximum brightness tmax inside it, 0.05 · tmax is chosen as the threshold, after which erosion with a circular structuring element d gives the mask; IR is the R-channel component map of the original fundus image and Imask the ROI image;
Step 1.2: denoising the ROI image with a 3×3 adaptive median filter, i.e. a median filter that dynamically changes its window size according to a preset condition;
Step 1.3: enhancing the contrast of the three RGB channel images of the denoised ROI image with multiscale top-hat transforms and fusing the enhanced results of the channels to obtain the contrast-enhanced ROI image Ien, i.e. the preprocessed fundus image,
where γ and φ denote the mathematical-morphology opening and closing operations, Si is the morphological structuring element of scale i, the bright-region and bright-detail terms are the optimal bright regions among the bright regions w and the optimal bright details among the bright details w obtained by the morphological processing, the dark-region and dark-detail terms the corresponding optima within the dark regions b and dark details b, and Itn is any channel of the denoised ROI image tn.
3. The method for extracting fundus image features based on a multi-illuminant color constancy model according to claim 2, characterized in that the specific steps of step 2 are:
Step 2.1: dividing the preprocessed fundus image into 10×10 regions and estimating the illuminant of the three channels of each region with the Grey-world algorithm:
(1/|X|) Σ_{x∈X} fc(x) = kc ec, c ∈ {R, G, B},
where f(x) is the pixel value at point x of the region, e the illuminant, and k the gain coefficient, one per channel;
Step 2.2: clustering the per-region illuminant estimates with the K-means clustering algorithm;
Step 2.3: converting the clustered unknown illuminants obtained in step 2.2 into the canonical illuminant through the Von Kries model to obtain the color-corrected fundus image:
Ic = Au Ien,
where Ien is the preprocessed fundus image, Ic the fundus image under the canonical illuminant c obtained by the diagonal-model transform, i.e. the color-corrected fundus image, the diagonal model is the Von Kries model, Au the diagonal matrix of the unknown illuminant u, and R, G, B the three channel components.
4. a kind of method based on multiple light courcess color constancy model extraction eye fundus image feature according to claim 2 or 3, It is characterized in that, the specific steps in the step 3 are as follows:
Step 3.1, the combination that the blood vessel in the eye fundus image after color correction is set as to multistage parallel zone, set length as L, Width is 3 σ, obtains a Gaussian curve based on Gauss matched filtering device, Gaussian curve is used to the ash on simulated blood vessel cross section It writes music line, the formula of Gauss matched filtering device is as follows:
K (x, y)=- exp (- u2/2σ2)
If A represents the number of the pixel in neighborhood N, the average response of Gauss matched filtering device is obtained are as follows:
Finally obtain convolution exposure mask are as follows: K 'i(x, y)=K (x, y)-mi Eyeground will be obtained after convolution exposure mask and pretreatment Image carries out convolution, obtains the convolution results on a direction, formula are as follows: Ik(x, y)=Ic(x, y) * K 't(x, y);
Gauss matched filtering device is primary from 0 degree to the every 15 degree of rotations of 180 degree, obtains the Gauss matched filtering device on 12 directions, Then convolution is carried out respectively, and a convolution results for choosing peak response are exported as final response, obtain vessel graph, are rotated The spin matrix of θ are as follows:
Wherein, θ value range is 1 to 12, represents which time of rotation;
Step 3.2 is based on vessel graph, is interconnected into tree according to blood vessel and comes together in the spy in single optic disk region Point, the central point of optic disk is obtained using the rete vasculosum Direction matching filtering device after weighting, and rete vasculosum Direction matching filtering device is The template of 9*9, specifically:
The Weight template that rete vasculosum Direction matching filtering device is weighted are as follows:
Rete vasculosum Direction matching filtering device template is multiplied with Weight template to get the rete vasculosum Direction matching filtering to after weighting Device;
Step 3.3, each pixel (x, y) and its triple channel pixel value I by the eye fundus image after color correctionc(x, y), Ic (x, y) refers toUnion feature space (x, y, I as 5 dimensionsc(x, y)), it is the Gauss of σ with standard deviation Function is kernel function, obtains the probability density of each pixel, probability density formula are as follows:
According to obtained probability density, the maximized point of proximity of the probability density is searched, using local maximum as in cluster Each pixel is marked according to its cluster centre to obtain corresponding segmentation for the heart, i.e., is divided by Quick and equal displacement To multiple super-pixel;Quick and equal displacement segmentation obtains the specific steps of multiple super-pixel are as follows:
The probability density for comparing pixel in each pixel and neighborhood, as P (xma, yma, Ic(xma, yma)) > P (x, y, Ic(x, y)) when, (xma, yma) indicate that probability density is greater than the pixel of the probability density of other pixels in neighborhood, pixel (x, y) and (xma, yma) into Line flag, by (xma, yma) as father's layer super-pixel, form the branch of tree;
If the probability density of each pixel and the probability density of pixel in neighborhood all compared with, pixel is constructed according to probability density size At " tree ", and the information of each layer super-pixel is saved, otherwise goes to step S3.2.2;
Based on the obtained "trees", whose number equals the number of neighborhoods, the distance between "tree" nodes is calculated within each tree by the formula:
wherein each "tree" is traversed from its lower-level nodes, and the distance between "tree" nodes is compared with a threshold τ; if the distance exceeds the given threshold τ, the corresponding branch is marked and forms a subtree, a "local mode"; otherwise the branches whose distance is less than or equal to the threshold τ are merged into the local mode; the calculation proceeds from the lower layers to the upper layers, and the points belonging to the same mode form one superpixel;
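The tree-based fast mean shift described above can be sketched in plain numpy as follows. This is an illustrative sketch, not the patented implementation: the kernel density is estimated over 3×3 neighbourhoods only, and the distance-threshold τ merging of "local modes" is omitted for brevity. The names `fast_mean_shift_labels` and `sigma` are chosen here for illustration.

```python
import numpy as np

def fast_mean_shift_labels(img, sigma=2.0):
    """Link each pixel to its densest neighbour in the joint (x, y,
    intensity) feature space; the roots of the resulting forest are the
    density modes, i.e. one superpixel per mode."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    feat = np.dstack([xs, ys, img.astype(float)])  # joint feature space

    # Gaussian-kernel density estimated over the 3x3 neighbourhood
    density = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d2 = float(np.sum((feat[y, x] - feat[ny, nx]) ** 2))
                        density[y, x] += np.exp(-d2 / (2 * sigma ** 2))

    # Parent of each pixel = neighbour with strictly higher density
    # (ties resolve to the pixel itself, so chains always terminate)
    parent = np.empty((h, w, 2), dtype=int)
    for y in range(h):
        for x in range(w):
            cands = [(y + dy, x + dx)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            best = max(cands, key=lambda p: density[p])
            parent[y, x] = (y, x) if density[y, x] >= density[best] else best

    # Follow parent links to the root -> one label per density mode
    labels = np.full((h, w), -1, dtype=int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            path, cy, cx = [], y, x
            while tuple(parent[cy, cx]) != (cy, cx) and labels[cy, cx] < 0:
                path.append((cy, cx))
                cy, cx = parent[cy, cx]
            if labels[cy, cx] < 0:
                labels[cy, cx] = next_label
                next_label += 1
            for p in path:
                labels[p] = labels[cy, cx]
    return labels
```

On a two-region image, pixels of the two regions end up in disjoint label sets, since parent links only ascend the density and never cross the low-density boundary.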
Step 3.4: the superpixel corresponding to the center point of the optic disc, i.e., the optic disc region, is searched; after the optic disc region is obtained, morphological dilation is first applied, and the optic disc region is then removed from the color-corrected fundus image; the R channel component image after removal serves as the marker image, and with the R channel component image of the color-corrected fundus image as the mask image, morphological reconstruction is performed to obtain an image without the optic disc region; the difference between the R channel component image and the morphological reconstruction result yields the complete optic disc image.
5. The method for extracting fundus image features based on a multi-light-source color constancy model according to claim 4, characterized in that the specific steps of step 4 are:
Step 4.1: morphological dilation is applied to the extracted ROI image, and the original ROI image is subtracted from the dilated ROI image to obtain the image border; the optic disc image is added to the image border to obtain the marker image; with the marker image and with the G channel component image of the color-corrected fundus image as the mask image, morphological reconstruction is performed, and the difference between the G channel component image and the morphological reconstruction result yields the preliminary candidate regions;
Step 4.2: morphological dilation is first applied to the preliminary candidate regions, which are then removed from the color-corrected fundus image; the G channel component image after removal serves as the marker image, and with the G channel component image of the color-corrected fundus image as the mask image, morphological reconstruction is performed to obtain an image without hard exudate regions; the difference between the G channel component image and the morphological reconstruction result is taken, the resulting difference image is normalized and then thresholded to obtain a binary image, and the hard exudate candidate regions are extracted from the original fundus image using this binary image, i.e., the binary image is multiplied by the original fundus image to obtain the candidate regions.
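The marker/mask reconstruction used throughout steps 3.4 to 4.2 can be sketched with numpy only; grayscale reconstruction is iterated geodesic dilation of the marker under the mask. The synthetic G channel, blob position and 0.5 threshold below are illustrative assumptions, not values from the claim.

```python
import numpy as np

def _dilate3(a):
    # 3x3 grayscale dilation; borders padded with -inf so they never win
    p = np.pad(a, 1, mode="constant", constant_values=-np.inf)
    h, w = a.shape
    return np.max([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

def reconstruct(marker, mask):
    """Grayscale morphological reconstruction: repeatedly dilate the
    marker and clip it under the mask until it stabilises."""
    r = np.minimum(marker, mask).astype(float)
    while True:
        nxt = np.minimum(_dilate3(r), mask)
        if np.array_equal(nxt, r):
            return r
        r = nxt

# Candidate extraction in the spirit of step 4.2: remove a suspected
# bright region from the G channel, reconstruct the background, and
# threshold the normalized residue.
G = np.full((20, 20), 50.0)
G[8:13, 8:13] = 200.0                 # synthetic bright "exudate"
marker = G.copy()
marker[8:13, 8:13] = 0.0              # candidate region removed
residue = G - reconstruct(marker, G)  # difference image
binary = residue / residue.max() > 0.5
candidates = binary * G               # binary image times original image
```

The reconstruction fills the removed blob from its 50-valued surroundings, so the residue is nonzero exactly on the bright region.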
6. The method for extracting fundus image features based on a multi-light-source color constancy model according to claim 1, characterized in that the specific steps of step 5 are:
Step 5.1: color histogram features are extracted from the hard exudate candidate regions I_h of the original fundus image;
Step 5.2: using the binary image of S4.2, the hard exudate candidate regions I′_h are extracted from the color-corrected fundus image, and color constancy features are extracted within these regions; the color constancy features comprise the luminance mean of the R, G, B channels, the luminance standard deviation of the R, G, B channels, the entropy of the R, G, B channels, the contrast of the R, G, B channels, the edge strength of the R, G, B channels, and the region compactness of the R, G, B channels, calculated as follows:
Luminance mean of the R, G, B channels: m_n = (1/|Ω|)·Σ_{(x,y)∈Ω} h_n(x, y), where h_n(x, y) denotes the brightness value of each pixel of the candidate region image h in channel n, and Ω denotes the pixel set of the candidate region image;
Luminance standard deviation of the R, G, B channels: s_n = sqrt((1/|Ω|)·Σ_{(x,y)∈Ω} (h_n(x, y) − m_n)²);
Entropy of the R, G, B channels: E_n = −Σ_{i=1}^{256} p_n(i)·log p_n(i), with p_n(i) = f_n(i)/Σ_j f_n(j), where f_n(i) denotes the number of pixels of channel n whose pixel value is i, and i ranges from 1 to 256;
Contrast of the R, G, B channels: Con_n = Σ_δ δ(i, j)²·P_δ(i, j), wherein δ(i, j) is the gray-level difference between adjacent pixels, and P_δ(i, j) is the distribution probability of adjacent pixel pairs whose gray-level difference is δ;
Edge strength of the R, G, B channels: the edges of each channel are extracted with the Sobel operator, and the luminance mean of the edge pixels is calculated;
Region compactness of the R, G, B channels: ρ_n = C_n²/(4π·S_n), where C_n is the region perimeter of channel n and S_n is the region area of channel n;
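A minimal per-channel feature extractor following the definitions above can be sketched as below. Two details are assumptions, since the claim's formula images were lost: the contrast is computed over horizontally adjacent pixel pairs only, and compactness uses the common perimeter²/(4π·area) form.

```python
import numpy as np

def _conv3(a, k):
    # valid 3x3 convolution (no padding)
    h, w = a.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * a[i:i + h - 2, j:j + w - 2]
    return out

def channel_features(ch, region):
    """Statistics of one candidate region in one colour channel.

    ch     : channel as float array with values in 0..255
    region : boolean mask of the candidate region (the set Omega)
    """
    px = ch[region]
    mean = float(px.mean())
    std = float(px.std())
    hist = np.bincount(px.astype(int), minlength=256)
    p = hist / hist.sum()
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    # contrast over horizontally adjacent pairs (adjacency is assumed)
    d = np.abs(np.diff(ch, axis=1)).ravel()
    vals, counts = np.unique(d, return_counts=True)
    contrast = float(np.sum(vals ** 2 * counts / counts.sum()))
    # Sobel edge strength: mean gradient magnitude over edge pixels
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    mag = np.hypot(_conv3(ch, kx), _conv3(ch, kx.T))
    edge_strength = float(mag[mag > 0].mean()) if np.any(mag > 0) else 0.0
    # compactness = perimeter^2 / (4*pi*area), the usual definition
    pad = np.pad(region, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    perimeter = int(np.sum(region & ~interior))
    compact = perimeter ** 2 / (4 * np.pi * region.sum())
    return {"mean": mean, "std": std, "entropy": entropy,
            "contrast": contrast, "edge": edge_strength, "compact": compact}
```

On a perfectly flat region the mean equals the constant value and the standard deviation, entropy, contrast and edge strength all vanish, which is a quick sanity check.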
Step 5.3: LBP texture features are extracted from the hard exudate candidate regions I_h of the original fundus image, by the formula LBP = Σ_{p=0}^{7} s(I(p) − I(c_n))·2^p, with s(x) = 1 for x ≥ 0 and s(x) = 0 otherwise,
wherein p denotes the p-th pixel of the 3×3 window excluding the central pixel, I(c_n) denotes the gray value of the central pixel c_n, and I(p) denotes the gray value of the p-th pixel in the neighborhood.
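The LBP code of a single 3×3 window can be computed as below; the clockwise-from-top-left bit ordering is a convention chosen for illustration, since the claim does not fix it.

```python
import numpy as np

def lbp_code(window):
    """8-bit LBP of a 3x3 window: each of the 8 neighbours contributes
    the bit s(I(p) - I(c)) with s(x) = 1 for x >= 0, ordered clockwise
    starting at the top-left pixel."""
    c = window[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    return sum(int(window[p] >= c) << i for i, p in enumerate(order))
```

A uniform window yields 255 (all neighbours equal the centre, so every bit is set), while a centre that strictly dominates its neighbours yields 0.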
7. A method for detecting hard exudates in fundus images based on a multi-light-source color constancy model, characterized by comprising the following steps:
S1: an original fundus image, i.e., the fundus image to be analyzed, is input and preprocessed;
S2: color correction is applied to the preprocessed original fundus image using a multi-light-source color constancy algorithm;
S3: optic disc localization is performed by combining the vessel information of the color-corrected fundus image, and the optic disc region is then segmented from the color-corrected fundus image by fast mean shift, obtaining the optic disc image;
S4: based on the optic disc image and the preprocessed image, threshold segmentation and morphological reconstruction are performed to extract the hard exudate candidate regions, obtaining the candidate region image;
S5: the color histogram features, color constancy features and texture features of the candidate region image are extracted;
S6: the color histogram features, color constancy features and texture features of the candidate region image are input to a trained support vector machine to obtain the final detection result.
8. The method for detecting hard exudates in fundus images based on a multi-light-source color constancy model according to claim 7, characterized in that the specific steps of step S1 are:
S1.1: the original fundus image is input; the R channel component image of the original fundus image, which best reflects the illumination conditions, is selected, and ROI extraction is carried out to obtain the ROI image, by the formulas:
I_mask = α_d(T_t(I_R))
t = 0.05·t_max
wherein α denotes the morphological erosion operation and T denotes threshold segmentation; since the maximum brightness value outside the ROI and the maximum brightness value inside the ROI have a 5% ratio relation, 5% of the maximum brightness value t_max, i.e., 0.05·t_max, is chosen as the threshold, and the erosion operation is then carried out with a circular structuring element d to obtain the mask; I_R is the R channel component image of the original fundus image, and I_mask is the ROI image;
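The ROI extraction of S1.1 can be sketched as threshold-then-erode; the disc radius below is an assumed value, since the claim only specifies "a circular structuring element d".

```python
import numpy as np

def extract_roi_mask(I_R, d_radius=3, ratio=0.05):
    """Threshold the R channel at t = 0.05 * t_max, then erode the
    binary result with a circular structuring element of the given
    radius."""
    t = ratio * I_R.max()
    binary = I_R > t
    yy, xx = np.mgrid[-d_radius:d_radius + 1, -d_radius:d_radius + 1]
    disc = (yy ** 2 + xx ** 2) <= d_radius ** 2
    h, w = binary.shape
    pad = np.pad(binary, d_radius, constant_values=False)
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            # keep a pixel only if every pixel under the disc is foreground
            out[y, x] = bool(np.all(pad[y:y + 2 * d_radius + 1,
                                        x:x + 2 * d_radius + 1][disc]))
    return out
```

Erosion with a radius-3 disc shrinks a bright square by three pixels on each side, which the test below checks on a synthetic R channel.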
S1.2: the ROI image is denoised with a 3×3 adaptive median filter, i.e., the adaptive median filter dynamically changes the window size of the median filter according to preset conditions while denoising the ROI image;
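The window-growing behaviour of S1.2 matches the classic two-level (Level A / Level B) adaptive median filter, sketched below; the maximum window size `s_max` is an assumed parameter.

```python
import numpy as np

def adaptive_median(img, s_max=7):
    """Two-level adaptive median filter: for each pixel the window
    grows from 3x3 up to s_max x s_max while the local median itself
    looks like an impulse; the pixel is replaced by the median only if
    it is itself an impulse."""
    h, w = img.shape
    out = img.astype(float).copy()
    pad = s_max // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    for y in range(h):
        for x in range(w):
            for s in range(3, s_max + 1, 2):
                r = s // 2
                win = p[y + pad - r:y + pad + r + 1,
                        x + pad - r:x + pad + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:                 # median is not an impulse
                    if not (zmin < img[y, x] < zmax):  # pixel is an impulse
                        out[y, x] = zmed
                    break
            else:
                out[y, x] = zmed                       # window reached s_max
    return out
```

A single 255-valued impulse on a constant 50-valued image is removed while every other pixel is left at 50.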
S1.3: the contrast of the RGB three-channel images of the denoised ROI image is enhanced by the multi-scale top-hat transform, and the enhanced results of the channels are merged, finally obtaining the contrast-enhanced ROI image I_en, i.e., the preprocessed fundus image:
wherein γ and the accompanying operator denote the mathematical morphology opening and closing operations, respectively; S_i denotes the morphological structuring element of scale i; the four region/detail terms denote, respectively, the optimal bright region of region r among the bright regions w obtained after morphological processing, the optimal bright detail of detail d among the bright details w obtained after morphological processing, the optimal bright region of region r among the dark regions b obtained after morphological processing, and the optimal dark detail of detail d among the dark details b obtained after morphological processing; I_tn denotes any channel tn of the denoised ROI image.
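One common form of multi-scale top-hat enhancement, in the spirit of S1.3, adds the strongest bright (white top-hat) detail found across the scales and subtracts the strongest dark (black top-hat) detail; the exact "optimal region/detail" combination in the claim may differ, so this is a sketch under that assumption, with square structuring elements and scales (1, 2, 3) chosen for illustration.

```python
import numpy as np

def _erode(a, r):
    p = np.pad(a, r, mode="edge")
    out = np.empty_like(a)
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            out[y, x] = p[y:y + 2 * r + 1, x:x + 2 * r + 1].min()
    return out

def _dilate(a, r):
    p = np.pad(a, r, mode="edge")
    out = np.empty_like(a)
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            out[y, x] = p[y:y + 2 * r + 1, x:x + 2 * r + 1].max()
    return out

def multiscale_tophat(ch, scales=(1, 2, 3)):
    """Enhance one channel: original + max white top-hat - max black
    top-hat over the given structuring-element scales."""
    ch = ch.astype(float)
    bright = np.zeros_like(ch)
    dark = np.zeros_like(ch)
    for r in scales:
        opening = _dilate(_erode(ch, r), r)   # removes bright details
        closing = _erode(_dilate(ch, r), r)   # removes dark details
        bright = np.maximum(bright, ch - opening)
        dark = np.maximum(dark, closing - ch)
    return ch + bright - dark
```

A single bright dot on a flat background is amplified (its top-hat response is added back on top of it) while the background is left unchanged.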
9. a kind of side based on hard exudate in multiple light courcess color constancy model inspection eye fundus image according to claim 8 Method, which is characterized in that the specific steps of the step S2 are as follows:
S2.1: the preprocessed fundus image is divided into multiple 10×10 regions, and light source estimation is performed on the three channels of each region using the Grey-world algorithm, by the light source estimation formula:
wherein f(x) is the pixel value at point x of the region, e is the light source, and k is the gain coefficient; the gain coefficients of the three channels are respectively:
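The per-region Grey-world estimate can be sketched as follows: under the grey-world assumption the average reflectance of a region is achromatic, so the per-channel mean of each block is proportional to the light source colour there. Normalising each estimate to unit length is an illustrative choice before the K-means clustering of S2.2.

```python
import numpy as np

def greyworld_light_sources(img, block=10):
    """One unit-norm Grey-world light source estimate per
    block x block region of an H x W x 3 image."""
    h, w, _ = img.shape
    ests = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block].reshape(-1, 3)
            e = patch.astype(float).mean(axis=0)  # per-channel mean
            n = np.linalg.norm(e)
            ests.append(e / n if n > 0 else e)
    return np.array(ests)
```

For a 20×20 image lit uniformly, all four 10×10 blocks return the same normalised estimate.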
S2.2: the light source estimates of the regions are clustered using the K-means clustering algorithm;
S2.3: through the Von-Kries model, the unknown light sources obtained after the clustering of step S2.2 are converted into the standard light source, i.e., the color-corrected fundus image is obtained, by the conversion formula:
I_c = A_u I_en,
wherein I_en is the preprocessed fundus image, I_c is the fundus image under the standard light source c obtained after the diagonal-model conversion, i.e., the color-corrected fundus image; the diagonal model refers to the Von-Kries model, A_u is the diagonal matrix of the unknown light source u, and R, G, B denote the three channel components;
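The diagonal-model conversion can be sketched as below, under the usual Von-Kries assumption that A_u = diag(e_c / e_u) maps the estimated unknown source e_u onto the standard source e_c (taken here as white for illustration).

```python
import numpy as np

def von_kries_correct(img, e_u, e_c=(1.0, 1.0, 1.0)):
    """Apply I_c = A_u I_en per pixel with A_u = diag(e_c / e_u):
    each channel is rescaled from the estimated unknown light source
    e_u to the standard light source e_c."""
    A_u = np.diag(np.asarray(e_c, float) / np.asarray(e_u, float))
    flat = img.reshape(-1, 3).astype(float)
    return (flat @ A_u.T).reshape(img.shape)
```

A pixel (100, 50, 25) seen under the source (2, 1, 0.5) maps to the achromatic (50, 50, 50), confirming the cast is removed.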
The specific steps of step S3 are:
S3.1: the blood vessels in the color-corrected fundus image are modeled as a combination of multiple piecewise parallel segments of length L and width 3σ; based on the Gaussian matched filter, a Gaussian curve is obtained and used to simulate the gray-level profile of the vessel cross-section; the formula of the Gaussian matched filter is:
K(x, y) = −exp(−u²/2σ²)
With A denoting the number of pixels in the neighborhood N, the mean response of the Gaussian matched filter is obtained as:
Finally, the convolution mask K′_i(x, y) = K(x, y) − m_i is obtained; the convolution mask is convolved with the preprocessed fundus image to obtain the convolution result in one direction, by the formula I_k(x, y) = I_c(x, y) * K′_t(x, y);
The Gaussian matched filter is rotated in 15-degree steps from 0 to 180 degrees, giving Gaussian matched filters in 12 directions; convolution is carried out with each, and the convolution result with the maximum response is taken as the final output, yielding the vessel map; the rotation matrix for the θ-th rotation is:
wherein θ ranges from 1 to 12 and indicates which rotation is applied;
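The 12-direction filter bank can be sketched as below; the kernel size L = 9 and σ = 1.5 are illustrative values, and the mean m_i is subtracted over the kernel support so that each mask has zero mean, matching K′_i(x, y) = K(x, y) − m_i.

```python
import numpy as np

def matched_filter_bank(L=9, sigma=1.5, n_dirs=12):
    """Zero-mean Gaussian matched-filter kernels rotated every
    180/n_dirs degrees; the vessel map takes the maximum response
    over all directions after convolving with each kernel."""
    half = L // 2
    kernels = []
    for k in range(n_dirs):
        theta = k * np.pi / n_dirs
        c, s = np.cos(theta), np.sin(theta)
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        u = xx * c + yy * s            # coordinate across the vessel
        v = -xx * s + yy * c           # coordinate along the vessel
        k2d = np.where(np.abs(v) <= L / 2,
                       -np.exp(-u ** 2 / (2 * sigma ** 2)), 0.0)
        support = k2d != 0
        k2d[support] -= k2d[support].mean()  # subtract mean response m_i
        kernels.append(k2d)
    return kernels
```

Each kernel sums to zero over its support, so flat image regions produce zero response and only vessel-like cross-sections respond strongly.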
S3.2: based on the vessel map, and exploiting the property that retinal blood vessels interconnect into a tree converging at a single optic disc region, the center point of the optic disc is located using a weighted vessel-network direction matched filter; the vessel-network direction matched filter is a 9×9 template, specifically:
The weighting template applied to the vessel-network direction matched filter is:
The vessel-network direction matched filter template is multiplied element-wise by the weighting template to obtain the weighted vessel-network direction matched filter;
S3.3: for each pixel (x, y) of the color-corrected fundus image and its three-channel pixel value I_c(x, y), where I_c(x, y) refers to the three channel components, the 5-dimensional joint feature space (x, y, I_c(x, y)) is formed; with a Gaussian function of standard deviation σ as the kernel function, the probability density of each pixel is obtained by the probability density formula:
According to the obtained probability density, the nearby point maximizing the probability density is searched, and the local maxima are taken as cluster centers; each pixel is labeled according to its cluster center to obtain the corresponding segmentation, i.e., multiple superpixels are obtained by fast mean-shift segmentation; the specific steps of obtaining multiple superpixels by fast mean-shift segmentation are:
The probability density of each pixel is compared with those of the pixels in its neighborhood; when P(x_ma, y_ma, I_c(x_ma, y_ma)) > P(x, y, I_c(x, y)), where (x_ma, y_ma) denotes the neighborhood pixel whose probability density exceeds that of the other pixels in the neighborhood, the pixels (x, y) and (x_ma, y_ma) are marked, and (x_ma, y_ma) is taken as the parent-layer superpixel, forming a branch of the tree;
If the probability density of every pixel has been compared with the probability densities of the pixels in its neighborhood, a "tree" of pixels is constructed according to probability density magnitude and the information of each layer of superpixels is saved; otherwise go to step S3.2.2;
Based on the obtained "trees", whose number equals the number of neighborhoods, the distance between "tree" nodes is calculated within each tree by the formula:
wherein each "tree" is traversed from its lower-level nodes, and the distance between "tree" nodes is compared with a threshold τ; if the distance exceeds the given threshold τ, the corresponding branch is marked and forms a subtree, a "local mode"; otherwise the branches whose distance is less than or equal to the threshold τ are merged into the local mode; the calculation proceeds from the lower layers to the upper layers, and the points belonging to the same mode form one superpixel;
S3.4: the superpixel corresponding to the center point of the optic disc, i.e., the optic disc region, is searched; after the optic disc region is obtained, morphological dilation is first applied, and the optic disc region is then removed from the color-corrected fundus image; the R channel component image after removal serves as the marker image, and with the R channel component image of the color-corrected fundus image as the mask image, morphological reconstruction is performed to obtain an image without the optic disc region; the difference between the R channel component image and the morphological reconstruction result yields the complete optic disc image.
The specific steps of step S4 are:
S4.1: morphological dilation is applied to the extracted ROI image, and the original ROI image is subtracted from the dilated ROI image to obtain the image border; the optic disc image is added to the image border to obtain the marker image; with the marker image and with the G channel component image of the color-corrected fundus image as the mask image, morphological reconstruction is performed, and the difference between the G channel component image and the morphological reconstruction result yields the preliminary candidate regions;
S4.2: morphological dilation is first applied to the preliminary candidate regions, which are then removed from the color-corrected fundus image; the G channel component image after removal serves as the marker image, and with the G channel component image of the color-corrected fundus image as the mask image, morphological reconstruction is performed to obtain an image without hard exudate regions; the difference between the G channel component image and the morphological reconstruction result is taken, the resulting difference image is normalized and then thresholded to obtain a binary image, and the hard exudate candidate regions are extracted from the original fundus image using this binary image, i.e., the binary image is multiplied by the original fundus image to obtain the candidate regions;
The specific steps of step S5 are:
S5.1: color histogram features are extracted from the hard exudate candidate regions I_h of the original fundus image;
S5.2: using the binary image of S4.2, the hard exudate candidate regions I′_h are extracted from the color-corrected fundus image, and color constancy features are extracted within these regions; the color constancy features comprise the luminance mean of the R, G, B channels, the luminance standard deviation of the R, G, B channels, the entropy of the R, G, B channels, the contrast of the R, G, B channels, the edge strength of the R, G, B channels, and the region compactness of the R, G, B channels, calculated as follows:
Luminance mean of the R, G, B channels: m_n = (1/|Ω|)·Σ_{(x,y)∈Ω} h_n(x, y), where h_n(x, y) denotes the brightness value of each pixel of the candidate region image h in channel n, and Ω denotes the pixel set of the candidate region image;
Luminance standard deviation of the R, G, B channels: s_n = sqrt((1/|Ω|)·Σ_{(x,y)∈Ω} (h_n(x, y) − m_n)²);
Entropy of the R, G, B channels: E_n = −Σ_{i=1}^{256} p_n(i)·log p_n(i), with p_n(i) = f_n(i)/Σ_j f_n(j), where f_n(i) denotes the number of pixels of channel n whose pixel value is i, and i ranges from 1 to 256;
Contrast of the R, G, B channels: Con_n = Σ_δ δ(i, j)²·P_δ(i, j), wherein δ(i, j) is the gray-level difference between adjacent pixels, and P_δ(i, j) is the distribution probability of adjacent pixel pairs whose gray-level difference is δ;
Edge strength of the R, G, B channels: the edges of each channel are extracted with the Sobel operator, and the luminance mean of the edge pixels is calculated;
Region compactness of the R, G, B channels: ρ_n = C_n²/(4π·S_n), where C_n is the region perimeter of channel n and S_n is the region area of channel n;
S5.3: LBP texture features are extracted from the hard exudate candidate regions I_h of the original fundus image, by the formula LBP = Σ_{p=0}^{7} s(I(p) − I(c_n))·2^p, with s(x) = 1 for x ≥ 0 and s(x) = 0 otherwise,
wherein p denotes the p-th pixel of the 3×3 window excluding the central pixel, I(c_n) denotes the gray value of the central pixel c_n, and I(p) denotes the gray value of the p-th pixel in the neighborhood.
10. The method for detecting hard exudates in fundus images based on a multi-light-source color constancy model according to claim 9, characterized in that the specific steps of step S6 are:
S6.1: the acquired original data set is divided into a training set and a test set at a ratio of 7:3; the candidate region images of the original data set are segmented, and the color histogram features, color constancy features and texture features of the candidate region images are extracted;
S6.2: dimensionality reduction is applied to the extracted color histogram features, color constancy features and texture features using PCA, i.e., principal component analysis, reducing the original 107-dimensional features to 27-dimensional features;
S6.3: a support vector machine is trained with the dimension-reduced features of the training set;
S6.4: each candidate region image of the test set is traversed; the dimension-reduced features of each candidate region image are input to the support vector machine for testing, to determine whether the candidate region image to which the features belong is a hard exudate; after the traversal is completed, if the determination accuracy is below 90%, the parameters of the support vector machine are adjusted and steps S6.3 to S6.4 are re-executed; otherwise the trained support vector machine is obtained;
S6.5: the color histogram features, color constancy features and texture features extracted from the original fundus image are input to the trained support vector machine for classification, obtaining the final detection result.
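The dimensionality reduction of S6.2 can be sketched with a plain SVD-based PCA; in practice scikit-learn's `PCA(n_components=27)` followed by `svm.SVC()` would be the usual off-the-shelf equivalent of steps S6.2 and S6.3, but the numpy version below keeps the sketch self-contained.

```python
import numpy as np

def pca_reduce(X, n_components=27):
    """PCA via SVD of the centred feature matrix: project the samples
    onto the top principal components (107-D features -> 27-D in the
    claim). Returns the reduced features, the projection matrix and
    the feature mean (needed to project new samples)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T        # d x n_components projection matrix
    return Xc @ W, W, mu
```

A new sample x is reduced with `(x - mu) @ W`; the columns of W are orthonormal, which the test below verifies on random 107-dimensional features.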
CN201910211745.5A 2019-03-19 2019-03-19 Method for detecting hard exudation in fundus image based on multi-light-source color constancy model Active CN109978848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910211745.5A CN109978848B (en) 2019-03-19 2019-03-19 Method for detecting hard exudation in fundus image based on multi-light-source color constancy model


Publications (2)

Publication Number Publication Date
CN109978848A true CN109978848A (en) 2019-07-05
CN109978848B CN109978848B (en) 2022-11-04

Family

ID=67079704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910211745.5A Active CN109978848B (en) 2019-03-19 2019-03-19 Method for detecting hard exudation in fundus image based on multi-light-source color constancy model

Country Status (1)

Country Link
CN (1) CN109978848B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291706A (en) * 2020-02-24 2020-06-16 齐鲁工业大学 Retina image optic disc positioning method
CN111369478A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Face image enhancement method and device, computer equipment and storage medium
CN112164079A (en) * 2020-09-29 2021-01-01 东北电力大学 Sonar image segmentation method
CN112837388A (en) * 2021-02-01 2021-05-25 清华大学深圳国际研究生院 Multi-light-source picture generation method
CN112906469A (en) * 2021-01-15 2021-06-04 上海至冕伟业科技有限公司 Fire-fighting sensor and alarm equipment identification method based on building plan
CN113256717A (en) * 2021-05-08 2021-08-13 华南师范大学 Cell smear auxiliary analysis method and system
CN114424513A (en) * 2019-11-05 2022-04-29 谷歌有限责任公司 Highlight recovery for image processing pipelines
CN117876801A (en) * 2024-03-13 2024-04-12 中国人民解放军总医院第一医学中心 Method for predicting diabetic nephropathy based on fundus blood vessel characteristics and artificial intelligence

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140314288A1 (en) * 2013-04-17 2014-10-23 Keshab K. Parhi Method and apparatus to detect lesions of diabetic retinopathy in fundus images
CN106204662A (en) * 2016-06-24 2016-12-07 电子科技大学 A kind of color of image constancy method under multiple light courcess environment
CN106296620A (en) * 2016-08-14 2017-01-04 遵义师范学院 A kind of color rendition method based on rectangular histogram translation
CN106651899A (en) * 2016-12-09 2017-05-10 东北大学 Fundus image micro-aneurysm detection system based on Adaboost
CN106651795A (en) * 2016-12-03 2017-05-10 北京联合大学 Method of using illumination estimation to correct image color
CN107180421A (en) * 2016-03-09 2017-09-19 中兴通讯股份有限公司 A kind of eye fundus image lesion detection method and device
WO2018080575A1 (en) * 2016-09-12 2018-05-03 Elc Management Llc System and method for correcting color of digital image based on the human sclera and pupil
CN108961280A (en) * 2018-06-29 2018-12-07 电子科技大学 A kind of eyeground optic disk fine segmentation method based on SLIC super-pixel segmentation
CN109146983A (en) * 2018-08-30 2019-01-04 天津科技大学 A kind of multiple light courcess color of image constancy calculating method
CN109155071A (en) * 2017-06-30 2019-01-04 华为技术有限公司 A kind of method and terminal of color detection
CN109166117A (en) * 2018-08-31 2019-01-08 福州依影健康科技有限公司 A kind of eye fundus image automatically analyzes comparison method and a kind of storage equipment
CN109472781A (en) * 2018-10-29 2019-03-15 电子科技大学 A kind of diabetic retinopathy detection system based on serial structure segmentation


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
AVULA BENZAMIN ET AL.: "Detection of Hard Exudates in Retinal Fundus Images Using Deep Learning", ICIVPR *
LILI XU ET AL.: "Support Vector Machine Based Method for Identifying Hard Exudates in Retinal Images", 2009 IEEE Youth Conference on Information, Computing and Telecommunication *
LI JUPENG: "Research on Key Issues in Fundus Image Processing and Analysis", China Doctoral Dissertations Full-text Database, Information Science and Technology *
WANG YUE: "Applied Research on Hard Exudate Recognition in Retinopathy Images", China Master's Theses Full-text Database, Medicine and Health Sciences *
CHEN XIANG: "Research on Automatic Detection Algorithms for Exudates in Diabetic Retinopathy Images", China Master's Theses Full-text Database, Medicine and Health Sciences *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114424513B (en) * 2019-11-05 2024-04-26 谷歌有限责任公司 System and method for performing highlight restoration and computer storage medium
CN114424513A (en) * 2019-11-05 2022-04-29 谷歌有限责任公司 Highlight recovery for image processing pipelines
CN111291706A (en) * 2020-02-24 2020-06-16 齐鲁工业大学 Retina image optic disc positioning method
CN111291706B (en) * 2020-02-24 2022-11-22 齐鲁工业大学 Retina image optic disc positioning method
CN111369478B (en) * 2020-03-04 2023-03-21 腾讯科技(深圳)有限公司 Face image enhancement method and device, computer equipment and storage medium
CN111369478A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Face image enhancement method and device, computer equipment and storage medium
CN112164079A (en) * 2020-09-29 2021-01-01 东北电力大学 Sonar image segmentation method
CN112164079B (en) * 2020-09-29 2024-03-29 东北电力大学 Sonar image segmentation method
CN112906469A (en) * 2021-01-15 2021-06-04 上海至冕伟业科技有限公司 Fire-fighting sensor and alarm equipment identification method based on building plan
CN112837388A (en) * 2021-02-01 2021-05-25 清华大学深圳国际研究生院 Multi-light-source picture generation method
CN112837388B (en) * 2021-02-01 2023-04-28 清华大学深圳国际研究生院 Multi-light source picture generation method
CN113256717B (en) * 2021-05-08 2022-01-21 华南师范大学 Cell smear auxiliary analysis method and system
CN113256717A (en) * 2021-05-08 2021-08-13 华南师范大学 Cell smear auxiliary analysis method and system
CN117876801A (en) * 2024-03-13 2024-04-12 中国人民解放军总医院第一医学中心 Method for predicting diabetic nephropathy based on fundus blood vessel characteristics and artificial intelligence
CN117876801B (en) * 2024-03-13 2024-05-28 中国人民解放军总医院第一医学中心 Method for predicting diabetic nephropathy based on fundus blood vessel characteristics and artificial intelligence

Also Published As

Publication number Publication date
CN109978848B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN109978848A (en) Method based on hard exudate in multiple light courcess color constancy model inspection eye fundus image
CN109522908B (en) Image significance detection method based on region label fusion
Zhang et al. Object-oriented shadow detection and removal from urban high-resolution remote sensing images
CN107680054B (en) Multi-source image fusion method in haze environment
Zhou et al. Multiscale water body extraction in urban environments from satellite images
Li et al. Change detection based on Gabor wavelet features for very high resolution remote sensing images
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
CN108765465B (en) Unsupervised SAR image change detection method
Miao et al. An object-based method for road network extraction in VHR satellite images
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN107092871B (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN107330875B (en) Water body surrounding environment change detection method based on forward and reverse heterogeneity of remote sensing image
CN110021024B (en) Image segmentation method based on LBP and chain code technology
Zhang et al. Region-of-interest extraction based on saliency analysis of co-occurrence histogram in high spatial resolution remote sensing images
CN108319973A (en) Citrusfruit detection method on a kind of tree
Asokan et al. Machine learning based image processing techniques for satellite image analysis-a survey
CN104217221A (en) Method for detecting calligraphy and paintings based on textural features
Zhao et al. A systematic extraction approach for mapping glacial lakes in high mountain regions of Asia
CN112734761B (en) Industrial product image boundary contour extraction method
Zhang et al. Automatic and unsupervised water body extraction based on spectral-spatial features using GF-1 satellite imagery
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN108492288B (en) Random forest based multi-scale layered sampling high-resolution satellite image change detection method
CN111597930A (en) Coastline extraction method based on remote sensing cloud platform
CN116071339A (en) Product defect identification method based on improved whale algorithm optimization SVM
CN115033721A (en) Image retrieval method based on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant