CN116309633A - Retinal blood vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering - Google Patents

Retinal blood vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering Download PDF

Info

Publication number
CN116309633A
Authority
CN
China
Prior art keywords
fundus image
blood vessel
clustering
color fundus
vessel segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310095253.0A
Other languages
Chinese (zh)
Inventor
徐国路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202310095253.0A priority Critical patent/CN116309633A/en
Publication of CN116309633A publication Critical patent/CN116309633A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a retinal blood vessel segmentation method, device, electronic device, readable storage medium and computer program product based on kernel intuitionistic fuzzy C-means clustering. The method specifically comprises the following steps: feature extraction is performed on a preprocessed color fundus image to obtain a three-dimensional input feature vector; the normalized feature vectors are clustered by a kernel intuitionistic fuzzy C-means clustering method to obtain a retinal blood vessel segmentation map; and the retinal blood vessel segmentation map is denoised by a region-connectivity area-threshold method to obtain a retinal blood vessel segmentation result map. The invention uses the Gaussian kernel distance to measure the distance from a sample point to a cluster center, which resolves the linear inseparability of some low-contrast sample points, and describes the relationship between a pixel sample point and a cluster center jointly through the three attributes of membership, non-membership and hesitation, giving better detection of fine retinal vessels. The method can be widely applied in the field of image segmentation.

Description

Retinal blood vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering
Technical Field
The invention relates to the technical field of image segmentation, in particular to a retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering.
Background
The width, angle and branching pattern of retinal fundus blood vessels are important in the diagnosis of diabetes, glaucoma, hypertension and other diseases, so retinal blood vessel segmentation has attracted extensive attention from researchers. Retinal vessel segmentation methods are broadly divided into supervised and unsupervised methods. Supervised methods train a classifier with labeled pixel class information and use the trained classifier to classify the pixels of a fundus image. Unsupervised methods do not need a training set with known class labels, saving the time an ophthalmologist would spend manually annotating fundus images.
The existing retinal vessel segmentation methods have the following problems:
1. Retinal vessel segmentation based on fuzzy C-means clustering uses the Euclidean distance to measure the distance from a sample point to a cluster center and uses the single attribute of membership to measure the degree to which a sample point belongs to a cluster center; as a result, vessels appear disconnected at crossings in the segmentation result and some low-contrast fine vessels are lost;
2. Retinal vessel segmentation based on intuitionistic fuzzy C-means clustering suffers from the problem that some low-contrast vessel sample points are linearly inseparable from background sample points;
3. Retinal vessel segmentation based on kernel fuzzy C-means clustering has insufficient capability to detect fine retinal vessels.
Disclosure of Invention
In view of the above, the embodiment of the invention provides an accurate retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering.
In one aspect, an embodiment of the present invention provides a retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering, including:
preprocessing the color fundus image;
extracting features of the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image;
processing the three-dimensional input feature vector to obtain a normalized feature vector;
clustering the normalized feature vectors by a kernel-based intuitionistic fuzzy C-means clustering method to obtain a clustering center and membership degrees of each pixel point belonging to the clustering center;
classifying the pixel points according to the membership degree of each pixel point belonging to the clustering center to obtain a retina blood vessel segmentation map;
denoising the retinal blood vessel segmentation map by an area threshold method of area connectivity to obtain a retinal blood vessel segmentation result map.
Optionally, performing feature extraction on the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image includes:
performing feature extraction on the preprocessed color fundus image with multi-scale Gaussian matched filtering to obtain the multi-scale Gaussian matched filtering characteristic response output of each target pixel in the color fundus image;
performing feature extraction on the preprocessed color fundus image with B-COSFIRE filtering to obtain the B-COSFIRE filtering characteristic response output of each target pixel in the color fundus image;
performing feature extraction on the preprocessed color fundus image with the multi-scale line operator to obtain the multi-scale line operator characteristic response output of each target pixel in the color fundus image;
and processing the characteristic response outputs of the multi-scale Gaussian matched filtering, the B-COSFIRE filtering and the multi-scale line operator to obtain the three-dimensional input feature vector.
Optionally, preprocessing the color fundus image includes:
extracting the green channel image of the color fundus image;
enhancing the green channel image by contrast-limited adaptive histogram equalization.
Optionally, in the step of processing the preprocessed color fundus image with the multi-scale Gaussian matched filtering to obtain the multi-scale Gaussian matched filtering characteristic response output of each target pixel in the color fundus image, the response function of the Gaussian matched filter is:

K(x, y) = -exp(-x²/(2σ²)) - m₀, |y| ≤ L/2

where K(x, y) represents the Gaussian matched filter response value, x represents the coordinate of the pixel point across the vessel, y represents the coordinate along the vessel direction, L represents the length of the piecewise line structure along the vessel direction, σ represents the standard deviation of the Gaussian function and the vessel scale, -exp(-x²/(2σ²)) is the Gaussian function, and m₀ is the gray-level mean of the Gaussian function.
Optionally, processing the preprocessed color fundus image with the multi-scale line operator to obtain the multi-scale line operator characteristic response output of each target pixel in the color fundus image includes:
defining a detection window, sliding it over the preprocessed color fundus image, and calculating the average gray value of the covered pixel points;
within 0 to 180 degrees, in steps of 15 degrees, taking 12 directions to match vessels of different orientations, calculating the response value in each direction, and selecting the maximum response value;
calculating the line operator response value from the average gray value and the maximum response value, where the line operator response value is calculated as:

R_N = Ī_max - Ī_N

where R_N is the line operator response value, Ī_max is the maximum response value over the 12 directions, Ī_N is the average gray value over the detection window, N is the detection window, and I is its center point;
changing the scale from 1 to 15 in steps of 2 to set 7 scales, computing the line operator response value at each of the 7 scales, and taking the maximum response value as the multi-scale line operator characteristic response output of the target pixel.
Optionally, in the step of clustering the normalized feature vectors by the kernel intuitionistic fuzzy C-means clustering method to obtain cluster centers and the membership degree of each pixel point to each cluster center, the objective function of the kernel intuitionistic fuzzy C-means clustering method is:

J_KIFCM = Σ_{i=1}^{C} Σ_{j=1}^{N} (u*_ij)^m ||φ(F(x_j)) - φ(v_i)||² + Σ_{i=1}^{C} π*_i e^(1 - π*_i)

where J_KIFCM denotes the objective function of the kernel intuitionistic fuzzy C-means clustering method, C denotes the number of cluster centers, N denotes the number of pixel points in the color fundus image, and m controls the fuzziness of the clustering; u*_ij is the updated membership degree after the intuitionistic fuzzy set is introduced; the term Σ_{i=1}^{C} π*_i e^(1 - π*_i) is the intuitionistic fuzzy entropy, used to measure the degree of fuzziness in the clustering; ||φ(F(x_j)) - φ(v_i)|| denotes the kernel-induced distance from the j-th pixel point to the i-th cluster center, where φ is the nonlinear continuous function that maps the sample points of the three-dimensional input feature vector into a high-dimensional feature space, and the kernel function converts dot-product operations in the feature space into kernel evaluations in the sample space; F(x_j) denotes the feature vector of the j-th pixel point and v_i denotes the i-th cluster center; π*_i is the average of the hesitation values of all pixels belonging to the i-th cluster center.
On the other hand, the embodiment of the invention also provides a retinal blood vessel segmentation device based on kernel intuitionistic fuzzy C-means clustering, which comprises:
the preprocessing module is used for preprocessing the color fundus image;
the feature extraction module is used for carrying out feature extraction on the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image, and processing the three-dimensional input feature vector to obtain a normalized feature vector;
the clustering module is used for clustering the normalized feature vectors through a kernel-based intuitionistic fuzzy C-means clustering method to obtain a clustering center and membership degrees of all pixel points belonging to the clustering center, and classifying the pixel points according to the membership degrees of all the pixel points belonging to the clustering center to obtain a retina blood vessel segmentation map;
and the post-processing module is used for denoising the retinal blood vessel segmentation graph through an area threshold method of area connectivity to obtain a retinal blood vessel segmentation result graph.
On the other hand, the embodiment of the invention also provides an electronic device comprising a processor and a memory, the memory being used to store a program, the processor executing the program to implement the above retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering.
In another aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a program, the program being executed by a processor to implement the above retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
Embodiments of the present invention have at least the following beneficial effects: the retinal blood vessel segmentation method using kernel intuitionistic fuzzy C-means clustering solves the problem that some low-contrast vessel sample points are linearly inseparable from background sample points, the segmented vessels have better integrity and continuity, and fine retinal vessels are better detected.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a retinal vessel segmentation method based on kernel intuitive fuzzy C-means clustering provided by an embodiment of the present invention;
FIG. 2 is a flowchart of steps for feature extraction of a color fundus image provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a B-COSFIRE filter according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of line operator detection retina provided by an embodiment of the present invention;
FIG. 5 is a flow chart of an implementation of an embodiment of the present invention;
FIG. 6 is a graph comparing results of retinal vessel segmentation based on a kernel-based intuitive fuzzy C-means clustering method and other methods provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of a device of a retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Aiming at the problems existing in the prior art, the embodiment of the invention provides a retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering, as shown in figure 1, which comprises steps 101 to 106:
step 101: preprocessing the color fundus image.
Firstly, a color retinal fundus image is acquired, for example from a public color fundus image database such as the STARE, DRIVE, MESSIDOR or ROC database; then the green channel image of the color fundus image is extracted with image processing software (OpenCV is commonly used for this); and then the green channel image is enhanced by contrast-limited adaptive histogram equalization.
The green channel image has high contrast between vessels and background, little noise, and a clearer vessel morphology, so it is selected for further processing and analysis. Contrast-limited adaptive histogram equalization equalizes the whole image while applying a contrast limit within each small block; this enhances local contrast and detail while avoiding excessive noise amplification, achieving the purpose of image enhancement.
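A minimal sketch of this preprocessing step, assuming OpenCV (named above as a common tool); the clip limit and tile grid size are illustrative choices, not values specified in the patent.

```python
import cv2

def preprocess_fundus(path):
    bgr = cv2.imread(path)                       # OpenCV loads color images in BGR order
    green = bgr[:, :, 1]                         # green channel: best vessel/background contrast
    # contrast-limited adaptive histogram equalization; parameters are illustrative
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(green)
```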
Step 102: and extracting the characteristics of the preprocessed color fundus image to obtain a three-dimensional input characteristic vector of the color fundus image.
Referring to fig. 2, the step 102 may specifically include steps 201 to 204.
Step 201: and carrying out feature extraction on the preprocessed color fundus image by adopting the multi-scale Gaussian matched filtering to obtain multi-scale Gaussian matched filtering feature response output of each target pixel in the color fundus image.
The blood vessels in a color fundus image can be represented by a piecewise line structure, and the gray-level profile of a vessel cross-section follows a Gaussian distribution; Gaussian matched filtering detects retinal fundus vessels based on these characteristics. The response function of the Gaussian matched filter is:

K(x, y) = -exp(-x²/(2σ²)) - m₀, |y| ≤ L/2

where K(x, y) is the Gaussian matched filter response value, x is the coordinate of the pixel point across the vessel, y is the coordinate along the vessel direction, L is the length of the piecewise line structure along the vessel direction (L = 9 is used), σ is the standard deviation of the Gaussian function and represents the vessel scale (the invention uses multi-scale Gaussian matched filtering with scale parameters 0.8, 1.2 and 2), exp() is the exponential function, and m₀ is the gray-level mean of the Gaussian function. Since vessels run in different directions, the Gaussian matched filter at each scale is convolved with the image using 12 Gaussian kernels of different orientations, and the maximum filter response over all scales and orientations is output as the characteristic response.
Step 202: and carrying out feature extraction on the preprocessed color fundus image by adopting the B-COSSFIRE filtering to obtain B-COSSFIRE filtering feature response output of each target pixel in the color fundus image.
B-COSFIRE filtering is a filter-response method based on a bar-selective combination of shifted filter responses: it takes as input the responses of collinearly arranged DoG (difference-of-Gaussians) filters at specific positions with respect to the center of the support region, and outputs the weighted geometric mean of the blurred and shifted responses of the DoG filters centered on the corresponding circles. The design principle of B-COSFIRE filtering is shown in fig. 3(a): each gray circle in fig. 3(a) represents the support region of a DoG filter, and the marked black dot is the filter center point. The B-COSFIRE filter response is obtained by taking the weighted geometric mean of a set of DoG filter responses, where the DoG filter response is calculated as:

DoG_s(x, y) = (1/(2πs²)) exp(-(x² + y²)/(2s²)) - (1/(2π(0.5s)²)) exp(-(x² + y²)/(2(0.5s)²))

where DoG_s(x, y) is the DoG filter response and s is the standard deviation of the Gaussian function. The DoG responses are taken on concentric circles around the center point, as shown in fig. 3(b): point 1 is the center point, and the points with the strongest response, i.e. the points of greatest intensity change, are the key points; in fig. 3(b), points 2-5 form the set of tuples

S = {(s_i, ρ_i, φ_i) | i = 1, ..., n}

which represents the key-point information, where n is the number of DoG filters, set according to practical conditions (the invention sets n = 5), s_i is the standard deviation of the DoG filter with maximum response, and (ρ_i, φ_i) are the polar coordinates with respect to the filter center point. To improve the tolerance to the position of each key point, blurring and shifting operations are applied to the DoG filter responses, and the weighted geometric mean of the resulting response values is then taken, giving the B-COSFIRE filter response r_S(x, y):

r_S(x, y) = | ( Π_{i=1}^{n} (s_{s_i,ρ_i,φ_i}(x, y))^{ω_i} )^{1/Σ_{i=1}^{n} ω_i} |_t        (3)

where r_S(x, y) is the response output of the B-COSFIRE filter, s_{s_i,ρ_i,φ_i}(x, y) is the blurred and shifted DoG response for the i-th tuple, ω_i is its weight, and |·|_t denotes thresholding the response at threshold t, with 0 ≤ t ≤ 1. Since vessels run in different directions, the filter is rotated to detect vessels of different orientations: the responses in 12 directions, 15 degrees apart, are calculated according to formula (3), and the maximum of the 12 directional responses is taken as the response output.

The B-COSFIRE filter detects the band-like structure of vessels well; in the fundus vessel detection process, the symmetric B-COSFIRE filter shown in fig. 3(b) is used to detect continuous vessels and the asymmetric B-COSFIRE filter shown in fig. 3(c) is used to detect vessel endings.
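The sketch below is a deliberately simplified approximation of the B-COSFIRE idea, not the published B-COSFIRE algorithm or the patent's implementation: a rectified DoG response is blurred, shifted to positions along a bar, and combined with an unweighted geometric mean. The parameter values (s, the ρ offsets, the blur growth rate) are assumptions; in practice a reference B-COSFIRE implementation would normally be used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def dog_response(img, s):
    img = img.astype(np.float64)
    resp = gaussian_filter(img, s) - gaussian_filter(img, 0.5 * s)   # difference of Gaussians
    return np.maximum(resp, 0.0)                                     # keep responses to dark line structures

def bar_response(img, theta, s=2.4, rhos=(-8, -6, -4, -2, 0, 2, 4, 6, 8), sigma0=0.5, alpha=0.1):
    dog = dog_response(img, s)
    parts = []
    for rho in rhos:                                                 # sample points along a bar through the centre
        blurred = gaussian_filter(dog, sigma0 + alpha * abs(rho))    # positional tolerance grows with distance
        dr, dc = -rho * np.sin(theta), -rho * np.cos(theta)          # shift the bar point onto the centre
        parts.append(nd_shift(blurred, (dr, dc), order=1) + 1e-12)
    return np.prod(np.stack(parts), axis=0) ** (1.0 / len(parts))    # geometric mean of the shifted responses

def bcosfire_like_feature(img, n_angles=12):
    # maximum over 12 bar orientations, 15 degrees apart
    return np.max(np.stack([bar_response(img, np.deg2rad(15.0 * k)) for k in range(n_angles)]), axis=0)
```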
Step 203: and carrying out feature extraction on the preprocessed color fundus image by adopting the multi-scale line operator to obtain the feature response output of the multi-scale line operator of each target pixel in the color fundus image.
The principle of line-operator detection of retinal vessels is shown in fig. 4(a): a detection window N of size 15×15 is defined and slid over the preprocessed fundus image, and the average gray value Ī_N of all pixel points covered by the window, with I as the center point and 15 as the scale, is calculated; within 0 to 180 degrees, at 15-degree intervals, 12 directions are taken to match vessels of different orientations, the response value in each direction is calculated, and the maximum of these values, denoted Ī_max, is taken; at this point the line direction coincides with the vessel direction, the direction-matching principle being shown in fig. 4(b). The line operator response value is then calculated from the average gray value and the maximum response value as:

R_N = Ī_max - Ī_N

where R_N is the line operator response value, Ī_max is the maximum response value over the 12 directions, and Ī_N is the average gray value.

The above calculates the response value for a single-scale line operator, which is unfavorable for detecting piecewise line-structured vessels of different lengths. A multi-scale line operator feature extraction method is therefore proposed on the basis of the single-scale line operator: the scale is varied from 1 to 15 in steps of 2, giving 7 scales; the line operator response value is computed at each of the 7 scales, and the maximum response value is taken as the multi-scale line operator characteristic response output of the target pixel.
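An illustrative sketch of the multi-scale line operator described above (15×15 window, 12 orientations, line lengths from 1 to 15 in steps of 2); the discretization of the line and the assumption that vessels are brighter than the background in the input are implementation choices, not details from the patent.

```python
import numpy as np
import cv2

def line_kernel(length, theta, window=15):
    kern = np.zeros((window, window), dtype=np.float64)
    c = window // 2
    for t in np.linspace(-length / 2.0, length / 2.0, num=4 * max(length, 1)):
        r = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= r < window and 0 <= col < window:
            kern[r, col] = 1.0
    return kern / kern.sum()                        # averages the gray values along the line

def multiscale_line_response(img, window=15, n_angles=12, lengths=range(1, 16, 2)):
    # assumes vessels are brighter than the background; invert the image first if they are darker
    img = img.astype(np.float64)
    window_mean = cv2.blur(img, (window, window))   # average gray value over the N x N window
    best = np.full(img.shape, -np.inf)
    for L in lengths:                               # line lengths 1, 3, ..., 15
        line_max = np.full(img.shape, -np.inf)
        for k in range(n_angles):                   # 12 orientations, 15 degrees apart
            kern = line_kernel(L, np.deg2rad(15.0 * k), window)
            line_max = np.maximum(line_max, cv2.filter2D(img, -1, kern))
        best = np.maximum(best, line_max - window_mean)   # R_N = max line response minus window mean
    return best
```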
Step 204: and processing the characteristic response output of the multi-scale Gaussian matched filtering, the B-COSFIE filter and the multi-scale line operator to obtain the three-dimensional input characteristic vector.
Obtaining f through steps 201, 202 and 203 1 (x j )、f 2 (x j )、f 3 (x j ) Representing the output of the characteristic response of Gaussian matched filtering, B-COSFIE filtering and multi-scale line operator extraction respectively, the three-dimensional characteristic vector is expressed as F 3D (x j )=(f 1 (x j ),f 2 (x j ),f 3 (x j ) And), F 3D (x j ) Representing the three-dimensional feature vector.
Step 103: and processing the three-dimensional input feature vector to obtain a normalized feature vector.
In order to eliminate the influence of differing dimensions among the feature data and bring all feature data into the same interval, the three characteristic response outputs are normalized with a linear (min-max) normalization formula, i.e. the original data are linearly transformed and mapped into the range (0, 1), scaling the original data proportionally. The linear normalization formula is:

f_i' = (f_i - min(f_i)) / (max(f_i) - min(f_i))

where f_i is the i-th dimension characteristic response output, with i taking 1, 2, 3, f_i' is the normalized characteristic response value, and max(f_i) and min(f_i) are respectively the maximum and minimum of the i-th dimension characteristic response output.

From the three normalized characteristic response outputs, the normalized feature vector is expressed as F(x_j) = (f₁'(x_j), f₂'(x_j), f₃'(x_j)), where F(x_j) is the normalized feature vector and f₁'(x_j), f₂'(x_j), f₃'(x_j) are the normalized characteristic response values of the three characteristic responses.
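A sketch of assembling and normalizing the feature vectors as just described; f1, f2 and f3 stand for the three response maps from steps 201 to 203.

```python
import numpy as np

def build_feature_vectors(f1, f2, f3):
    # stack the three response maps into an (N, 3) matrix of per-pixel feature vectors
    feats = np.stack([f.ravel().astype(np.float64) for f in (f1, f2, f3)], axis=1)
    f_min, f_max = feats.min(axis=0), feats.max(axis=0)
    return (feats - f_min) / (f_max - f_min + 1e-12)     # linear (min-max) normalization per feature
```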
Step 104: and clustering the normalized feature vectors by a kernel-based intuitional fuzzy C-means clustering method to obtain a clustering center and membership degrees of each pixel point belonging to the clustering center.
First, the pixel sample points of the normalized feature vectors are mapped into a high-dimensional feature space by a Gaussian kernel function; in this space, some of the pixel sample points that were linearly inseparable become linearly separable.
The objective function of the kernel intuitionistic fuzzy C-means clustering is:

J_KIFCM = Σ_{i=1}^{C} Σ_{j=1}^{N} (u*_ij)^m ||φ(F(x_j)) - φ(v_i)||² + Σ_{i=1}^{C} π*_i e^(1 - π*_i)        (5)

where J_KIFCM is the objective function and C is the number of cluster centers (the invention divides the pixel points into a vessel class and a background class, so C = 2); N is the number of pixel points in the color fundus image; m controls the fuzziness of the clustering (set to 2 in the invention); u*_ij is the updated membership degree after the intuitionistic fuzzy set is introduced; the term Σ_{i=1}^{C} π*_i e^(1 - π*_i) is the intuitionistic fuzzy entropy, used to measure the degree of fuzziness in the clustering, where π*_i = (1/N) Σ_{j=1}^{N} π_ij is the average of the hesitation values of all pixels belonging to the i-th cluster center; ||φ(F(x_j)) - φ(v_i)|| is the kernel-induced distance from the j-th pixel point to the i-th cluster center, where φ is the nonlinear continuous function that maps the sample points of the normalized feature vectors into a high-dimensional feature space, and the kernel function converts dot-product operations in the feature space into kernel evaluations in the sample space, the kernel function k and the mapping φ satisfying k(x, y) = φ(x)·φ(y); F(x_j) is the normalized feature vector of the j-th pixel point and v_i is the i-th cluster center.
The kernel function is a Gaussian kernel function, which maps each sample point into an infinite-dimensional feature space so that data that are originally linearly inseparable become linearly separable. The method measures the distance from a sample point to a cluster center with the Gaussian kernel distance and maps the pixel points into the high-dimensional feature space with the Gaussian kernel function; in the high-dimensional feature space, some of the linearly inseparable pixel points become linearly separable, which improves the classification accuracy of the algorithm on linearly inseparable pixel points.
The invention describes the relationship between a pixel point and a cluster center jointly through the three attributes of membership, non-membership and hesitation, which are related as follows:

u*_ij = u_ij + π_ij        (6)
π_ij = 1 - u_ij - γ_ij        (7)
γ_ij = (1 - (u_ij)^α)^(1/α)        (8)

where u*_ij is the updated membership degree after the intuitionistic fuzzy set is introduced; u_ij is the membership value of the j-th pixel sample point to the i-th cluster center in the feature space; π_ij is the hesitation value of the j-th pixel sample point with respect to the i-th cluster center in the feature space, the hesitation attribute describing the uncertainty of the sample point; γ_ij is the non-membership, i.e. the degree to which, in the feature space, the j-th sample point does not belong to the i-th cluster center; and α is a parameter controlling the non-membership degree, taken as 0.8.
Since the kernel function and the nonlinear continuous function φ satisfy k(x, y) = φ(x)·φ(y), the squared distance ||φ(x_j) - φ(v_i)||² can be expressed as:

||φ(x_j) - φ(v_i)||² = k(x_j, x_j) + k(v_i, v_i) - 2k(x_j, v_i)        (9)

where the Gaussian kernel function is k(x_j, v_i) = exp(-||x_j - v_i||²/(2σ²)) and σ is the standard deviation of the Gaussian kernel function. The degree to which the Gaussian kernel correlates with the distance between a sample point and a cluster center is controlled by adjusting σ: as σ increases, k(x_j, v_i) varies more smoothly and the discrimination decreases; as σ decreases, k(x_j, v_i) varies more sharply and the discrimination increases. In the invention σ is taken as 1.28. Since the Gaussian kernel satisfies k(x_j, x_j) = 1 and k(v_i, v_i) = 1, substituting into equations (5) and (9) gives the transformed objective function of the kernel intuitionistic fuzzy C-means clustering method:

J_KIFCM = 2 Σ_{i=1}^{C} Σ_{j=1}^{N} (u*_ij)^m (1 - k(F(x_j), v_i)) + Σ_{i=1}^{C} π*_i e^(1 - π*_i)        (10)
Minimizing the objective function of formula (10) with the Lagrange multiplier method yields the membership update function (11) and the cluster center update function (12) for the clustering process:

u_ij = (1 - k(F(x_j), v_i))^(-1/(m-1)) / Σ_{l=1}^{C} (1 - k(F(x_j), v_l))^(-1/(m-1))        (11)

v_i = Σ_{j=1}^{N} (u*_ij)^m k(F(x_j), v_i) F(x_j) / Σ_{j=1}^{N} (u*_ij)^m k(F(x_j), v_i)        (12)
The membership, hesitation and non-membership of the target pixel points are calculated according to formulas (6), (7) and (8). For pixel points distributed in the boundary region between vessel and background in the retinal fundus image, the membership degrees to the vessel and background cluster centers are relatively close and relatively small, and the hesitation is large; in low-contrast fine-vessel regions, the membership and non-membership of a sample point with respect to the vessel cluster center are close in value and hard to distinguish, and the hesitation is relatively large. The calculation of the memberships and cluster centers is therefore optimized according to formulas (11) and (12), adjusting the positions of the cluster centers and improving the sensitivity of fine-vessel detection.
In summary, the algorithm steps based on the kernel intuitionistic fuzzy C-means clustering are summarized as follows:
determining the number of cluster centers C, the membership fuzziness parameter m, the non-membership control parameter α, the iteration termination condition ε and the upper limit on the number of iterations;
initializing a clustering center;
updating the membership according to formula (11);
updating the cluster center according to formula (12);
checking the iteration termination condition: if |J_m - J_{m-1}| < ε or the number of iterations reaches the upper limit (ε and the upper limit are chosen according to practical conditions and are not limited by the invention; for example ε may be 0.03 and the upper limit 100), the algorithm terminates; otherwise it returns to formula (11) to update the memberships.
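Putting the pieces together, the sketch below implements the iteration just listed under the stated parameter choices (C = 2, m = 2, α = 0.8, Gaussian-kernel σ = 1.28, ε = 0.03, at most 100 iterations). The update steps follow the standard kernel-FCM form used in the reconstruction of formulas (11) and (12), so this is an interpretation of the method rather than the patent's exact code; the random centre initialization is also an assumption.

```python
import numpy as np

def gaussian_kernel(X, V, sigma=1.28):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)   # squared distances, shape (N, C)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kifcm(X, C=2, m=2.0, alpha=0.8, eps=0.03, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(X.shape[0], size=C, replace=False)]      # initialize cluster centres
    prev_J = np.inf
    for _ in range(max_iter):
        K = gaussian_kernel(X, V)                             # k(F(x_j), v_i)
        dist = np.clip(1.0 - K, 1e-12, None)                  # kernel-induced distance term
        u = dist ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                     # membership update, formula (11)
        gamma = (1.0 - u ** alpha) ** (1.0 / alpha)           # non-membership, formula (8)
        pi = 1.0 - u - gamma                                  # hesitation, formula (7)
        u_star = u + pi                                       # updated membership, formula (6)
        w = (u_star ** m) * K
        V = (w.T @ X) / w.sum(axis=0)[:, None]                # centre update, formula (12)
        pi_bar = pi.mean(axis=0)                              # average hesitation per cluster
        J = 2.0 * ((u_star ** m) * dist).sum() + (pi_bar * np.exp(1.0 - pi_bar)).sum()
        if abs(J - prev_J) < eps:                             # |J_m - J_{m-1}| < eps
            break
        prev_J = J
    return u_star, V
```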
Step 105: classifying the pixel points according to the membership degree of each pixel point belonging to the clustering center to obtain a retina blood vessel segmentation map.
Two cluster centers, corresponding to vessel and background respectively, are obtained by the kernel intuitionistic fuzzy C-means clustering method. For each target pixel point, the membership degrees with respect to the two cluster centers obtained in the last iteration are compared, and the pixel point is assigned to the class with the larger membership degree. This completes the classification of the target pixel points in the fundus image and yields the retinal blood vessel segmentation map.
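A short sketch of this classification step; which of the two clusters corresponds to vessels is decided outside this function (the patent only states that the two clusters are vessel and background).

```python
import numpy as np

def memberships_to_mask(u_star, image_shape, vessel_cluster):
    labels = u_star.argmax(axis=1).reshape(image_shape)   # assign each pixel to its highest-membership cluster
    return (labels == vessel_cluster).astype(np.uint8) * 255
```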
Step 106: denoising the retinal blood vessel segmentation map by an area threshold method of area connectivity to obtain a retinal blood vessel segmentation result map.
First, the connected regions in the retinal blood vessel segmentation map are determined, a connected region being a set of adjacent pixels with the same pixel value; then the area of each connected region is calculated; finally, regions whose area is smaller than a first threshold are removed to obtain the retinal blood vessel segmentation result map. In the invention the first threshold is 25.
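A sketch of the region-connectivity area-threshold post-processing with the stated threshold of 25, using OpenCV's connected-component analysis; 8-connectivity is an assumption.

```python
import cv2
import numpy as np

def remove_small_regions(binary_mask, min_area=25):
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_mask.astype(np.uint8), connectivity=8)
    out = np.zeros_like(binary_mask, dtype=np.uint8)
    for i in range(1, n):                                  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255                         # keep regions at or above the area threshold
    return out
```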
Referring to fig. 5, the implementation and application of the retinal vessel segmentation method based on the kernel intuitionistic fuzzy C-means clustering according to the embodiment of the present invention will be described below.
1. Preprocessing the color fundus image. A color retinal fundus image is acquired from the DRIVE database, its green channel image is extracted with OpenCV, and the green channel image is enhanced by contrast-limited adaptive histogram equalization.
2. Feature extraction on the preprocessed color fundus image. Multi-scale Gaussian matched filtering, B-COSFIRE filtering and the multi-scale line operator are used to extract pixel features from the fundus image, giving the three-dimensional input feature vector, which is then normalized to obtain the normalized feature vector.
3. Clustering the normalized feature vectors. The normalized feature vectors are clustered with the kernel intuitionistic fuzzy C-means clustering method to obtain the cluster centers of the fundus image and the membership degree of each pixel point to each cluster center; the pixel points are classified according to these membership degrees, and retinal vessel segmentation of the fundus image is performed to obtain the retinal vessel segmentation map.
4. Post-processing the retinal vessel segmentation map. The segmentation map is denoised with the region-connectivity area-threshold method, removing isolated noise points from the image to obtain the retinal vessel segmentation result map.
As shown in fig. 6, a comparison of the segmentation results of the present method and other methods is provided. The two rows of pictures in fig. 6 are two locally enlarged views of the segmentation results in the experiment; in the comparison experiment only the clustering method differs and all other steps are the same. Fig. 6(a) is the segmentation result of the fuzzy C-means clustering algorithm, fig. 6(b) of the kernel fuzzy C-means clustering algorithm, fig. 6(c) of the intuitionistic fuzzy C-means clustering algorithm, fig. 6(d) of the kernel intuitionistic fuzzy C-means clustering, and fig. 6(e) is the expert manual segmentation. From these comparisons it can be seen that the invention is superior to the other methods in fine-vessel detection and in vessel continuity and integrity.
In summary, the retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering provided by the embodiment of the invention has the following advantages:
1. The invention uses the Gaussian kernel distance to measure the distance from a sample point to a cluster center, which better resolves the linear inseparability of some low-contrast vessel sample points from background sample points and the breaks at vessel crossings in the segmentation result, so the segmented vessels have better integrity and continuity.
2. The invention describes the relationship between pixel sample points and the cluster centers jointly through the three attributes of membership, non-membership and hesitation, giving better detection of fine retinal vessels.
Referring to fig. 7, the embodiment of the invention further provides a retinal vessel segmentation device based on the kernel intuitionistic fuzzy C-means clustering, which comprises:
a preprocessing module 701, configured to preprocess a color fundus image;
the feature extraction module 702 is configured to perform feature extraction on the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image, and process the three-dimensional input feature vector to obtain a normalized feature vector;
the clustering module 703 is configured to cluster the normalized feature vectors by using a kernel-based intuitive fuzzy C-means clustering method to obtain a clustering center and membership degrees of each pixel point belonging to the clustering center, and classify the pixel points according to the membership degrees of each pixel point belonging to the clustering center to obtain a retinal vessel segmentation map;
and the post-processing module 704 is used for denoising the retinal blood vessel segmentation map through an area threshold method of area connectivity to obtain a retinal blood vessel segmentation result map.
The embodiment of the invention also provides an electronic device that can perform the retinal blood vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering: the color fundus image is first preprocessed; feature extraction is then performed on the preprocessed color fundus image to obtain its three-dimensional input feature vector; the feature vectors are clustered by the kernel intuitionistic fuzzy C-means clustering method to obtain cluster centers and the membership degree of each pixel point to each cluster center; the pixel points are classified according to these membership degrees to obtain the retinal blood vessel segmentation map; and finally the segmentation map is denoised with the region-connectivity area-threshold method to obtain the retinal blood vessel segmentation result map. The invention uses the Gaussian kernel distance to measure the distance from a sample point to a cluster center, which resolves the linear inseparability of some low-contrast vessel sample points from background sample points and the breaks at vessel crossings in the segmentation result, so the segmented vessels have better integrity and continuity; the invention also describes the relationship between pixel sample points and the cluster centers jointly through the three attributes of membership, non-membership and hesitation, giving better detection of fine retinal vessels.
The embodiment of the invention also provides a readable storage medium, wherein the storage medium stores a program, and the program is executed by a processor to realize the retina blood vessel segmentation method based on the kernel intuitionistic fuzzy C-means clustering.
The embodiment of the invention also provides a computer program product, which comprises a computer program, wherein the computer program realizes the retina blood vessel segmentation method based on the kernel intuitionistic fuzzy C-means clustering when being executed by a processor.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A retinal blood vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering, characterized by comprising the following steps:
preprocessing the color fundus image;
extracting features of the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image;
processing the three-dimensional input feature vector to obtain a normalized feature vector;
clustering the normalized feature vectors by a kernel-based intuitionistic fuzzy C-means clustering method to obtain a clustering center and membership degrees of each pixel point belonging to the clustering center;
classifying the pixel points according to the membership degree of each pixel point belonging to the clustering center to obtain a retina blood vessel segmentation map;
denoising the retinal blood vessel segmentation map by an area threshold method of area connectivity to obtain a retinal blood vessel segmentation result map.
2. The retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering of claim 1, wherein the feature extraction performed on the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image comprises:
performing feature extraction on the preprocessed color fundus image with multi-scale Gaussian matched filtering to obtain the multi-scale Gaussian matched filtering characteristic response output of each target pixel in the color fundus image;
performing feature extraction on the preprocessed color fundus image with B-COSFIRE filtering to obtain the B-COSFIRE filtering characteristic response output of each target pixel in the color fundus image;
performing feature extraction on the preprocessed color fundus image with the multi-scale line operator to obtain the multi-scale line operator characteristic response output of each target pixel in the color fundus image;
and processing the characteristic response outputs of the multi-scale Gaussian matched filtering, the B-COSFIRE filtering and the multi-scale line operator to obtain the three-dimensional input feature vector.
3. The retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering according to claim 1, wherein the preprocessing of the color fundus image comprises:
extracting the green channel image of the color fundus image;
enhancing the green channel image by contrast-limited adaptive histogram equalization.
4. The retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering according to claim 2, wherein in the step of processing the preprocessed color fundus image with the multi-scale Gaussian matched filtering to obtain the multi-scale Gaussian matched filtering characteristic response output of each target pixel in the color fundus image, the response function of the Gaussian matched filter is:

K(x, y) = -exp(-x²/(2σ²)) - m₀, |y| ≤ L/2

wherein K(x, y) represents the Gaussian matched filter response value, x represents the coordinate of the pixel point across the vessel, y represents the coordinate along the vessel direction, L represents the length of the piecewise line structure along the vessel direction, σ represents the standard deviation of the Gaussian function and the vessel scale, -exp(-x²/(2σ²)) is the Gaussian function, and m₀ is the gray-level mean of the Gaussian function.
5. The retinal vessel segmentation method based on kernel intuitionistic fuzzy C-means clustering according to claim 2, wherein processing the preprocessed color fundus image with the multi-scale line operator to obtain the multi-scale line operator characteristic response output of each target pixel in the color fundus image comprises:
defining a detection window, sliding it over the preprocessed color fundus image, and calculating the average gray value of the covered pixel points;
within 0 to 180 degrees, in steps of 15 degrees, taking 12 directions to match vessels of different orientations, calculating the response value in each direction, and selecting the maximum response value;
calculating the line operator response value from the average gray value and the maximum response value, wherein the line operator response value is calculated as:

R_N = Ī_max - Ī_N

wherein R_N is the line operator response value, Ī_max is the maximum response value over the 12 directions, Ī_N is the average gray value over the detection window, N is the detection window, and I is its center point;
changing the scale from 1 to 15 in steps of 2 to set 7 scales, computing the line operator response value at each of the 7 scales, and taking the maximum response value as the multi-scale line operator characteristic response output of the target pixel.
6. The retinal vessel segmentation method based on the kernel-intuitive fuzzy C-means clustering according to claim 2, wherein in the step of clustering the normalized feature vectors by the kernel-intuitive fuzzy C-means clustering method to obtain a cluster center and a membership degree of each pixel point belonging to the cluster center, an expression of an objective function based on the kernel-intuitive fuzzy C-means clustering method is as follows:
J_KIFCM = Σ_{i=1}^{C} Σ_{j=1}^{N} (u*_{ij})^m · ‖Φ(f(x_j)) − Φ(v_i)‖^2 + Σ_{i=1}^{C} π*_i · e^{1 − π*_i}

wherein J_KIFCM represents the objective function of the kernel intuitionistic fuzzy C-means clustering method; C represents the number of clustering centers; N represents the number of pixel points in the color fundus image; m is the fuzziness exponent that controls the clustering; u*_{ij} is the updated membership degree obtained after introducing the intuitionistic fuzzy set; π*_i · e^{1 − π*_i} is the intuitionistic fuzzy entropy term, used to measure the degree of fuzziness in the clustering; ‖Φ(f(x_j)) − Φ(v_i)‖^2 represents the kernel-induced distance from the j-th pixel point to the i-th clustering center, where Φ is a nonlinear continuous mapping of the sample points of the normalized feature vectors into a high-dimensional feature space, and the kernel function converts the dot product operation in the feature space into an evaluation of the kernel function in the sample space; f(x_j) represents the normalized feature vector of the j-th pixel point; v_i represents the i-th clustering center; and π*_i is the average of the hesitation degrees of all pixel points belonging to the i-th clustering center.
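The excerpt states only the objective function of claim 6, not the update equations. The sketch below therefore fills in one common kernel-FCM iteration with a Gaussian kernel and a Yager-type intuitionistic correction (hesitation added back to the membership); these update rules, the kernel width, and the generator parameter are assumptions and may differ from the patent's exact derivation.

```python
import numpy as np

def kifcm(features, n_clusters=2, m=2.0, sigma=1.0, alpha=2.0,
          n_iter=100, tol=1e-5, seed=0):
    """Sketch of a kernel intuitionistic fuzzy C-means iteration.
    features: (N, D) array of normalized feature vectors."""
    rng = np.random.default_rng(seed)
    N = features.shape[0]
    centers = features[rng.choice(N, n_clusters, replace=False)]

    def gaussian_kernel(x, v):
        # K(x, v) = exp(-||x - v||^2 / sigma^2); kernel distance is 2 * (1 - K).
        d2 = ((x[:, None, :] - v[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / sigma ** 2)

    for _ in range(n_iter):
        K = gaussian_kernel(features, centers)             # (N, C)
        dist = np.maximum(1.0 - K, 1e-12)                  # proportional to kernel distance
        # Standard kernel-FCM membership update.
        w = dist ** (-1.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)
        # Intuitionistic correction (Yager generator): hesitation degree is
        # added back to the membership, as in common IFCM variants.
        pi = 1.0 - u - (1.0 - u ** alpha) ** (1.0 / alpha)
        u_star = np.clip(u + pi, 0.0, 1.0)
        # Kernel-weighted clustering center update.
        wgt = (u_star ** m) * K                            # (N, C)
        new_centers = (wgt.T @ features) / wgt.sum(axis=0)[:, None]
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return u_star, centers
```

In use, each pixel would be assigned to the cluster of maximum updated membership, and the cluster whose center corresponds to the stronger vessel-like feature response would be taken as the vessel class.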
7. A retinal vessel segmentation device based on kernel intuitionistic fuzzy C-means clustering, comprising:
the preprocessing module is used for preprocessing the color fundus image;
the feature extraction module is used for carrying out feature extraction on the preprocessed color fundus image to obtain a three-dimensional input feature vector of the color fundus image, and processing the three-dimensional input feature vector to obtain a normalized feature vector;
the clustering module is used for clustering the normalized feature vectors through the kernel intuitionistic fuzzy C-means clustering method to obtain the clustering centers and the membership degree of each pixel point to each clustering center, and classifying the pixel points according to their membership degrees to obtain a retinal blood vessel segmentation map;
and the post-processing module is used for denoising the retinal blood vessel segmentation map through an area-threshold method based on region connectivity to obtain the retinal blood vessel segmentation result map.
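A short sketch of the connected-component area-threshold denoising performed by the post-processing module; the minimum-area value is an illustrative assumption.

```python
import numpy as np
import cv2

def remove_small_regions(binary_mask, min_area=30):
    """Drop connected components smaller than min_area pixels from a binary vessel map."""
    mask = (binary_mask > 0).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for label in range(1, num):                      # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == label] = 1
    return cleaned
```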
8. An electronic device comprising a processor and a memory, the memory for storing a program, the processor executing the program to implement the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the storage medium stores a program that is executed by a processor to implement the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method of any of claims 1 to 6.
CN202310095253.0A 2023-02-06 2023-02-06 Retina blood vessel segmentation method based on nuclear intuitionistic fuzzy C-means clustering Pending CN116309633A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310095253.0A CN116309633A (en) 2023-02-06 2023-02-06 Retina blood vessel segmentation method based on nuclear intuitionistic fuzzy C-means clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310095253.0A CN116309633A (en) 2023-02-06 2023-02-06 Retina blood vessel segmentation method based on nuclear intuitionistic fuzzy C-means clustering

Publications (1)

Publication Number Publication Date
CN116309633A true CN116309633A (en) 2023-06-23

Family

ID=86817717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310095253.0A Pending CN116309633A (en) 2023-02-06 2023-02-06 Retina blood vessel segmentation method based on nuclear intuitionistic fuzzy C-means clustering

Country Status (1)

Country Link
CN (1) CN116309633A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372284A (en) * 2023-12-04 2024-01-09 江苏富翰医疗产业发展有限公司 Fundus image processing method and fundus image processing system
CN117372284B (en) * 2023-12-04 2024-02-23 江苏富翰医疗产业发展有限公司 Fundus image processing method and fundus image processing system

Similar Documents

Publication Publication Date Title
Garcia-Lamont et al. Segmentation of images by color features: A survey
AU2018225145B2 (en) Detection of prostate cancer in multi-parametric MRI using random forest with instance weighting and MR prostate segmentation by deep learning with holistically-nested networks
CN109003279B (en) Fundus retina blood vessel segmentation method and system based on K-Means clustering labeling and naive Bayes model
Kovács et al. A self-calibrating approach for the segmentation of retinal vessels by template matching and contour reconstruction
Zhang et al. Robust unsupervised small area change detection from SAR imagery using deep learning
Zhang et al. Automated segmentation of overlapped nuclei using concave point detection and segment grouping
Khowaja et al. A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification
CN101950364A (en) Remote sensing image change detection method based on neighbourhood similarity and threshold segmentation
Plissiti et al. A review of automated techniques for cervical cell image analysis and classification
CN115661467A (en) Cerebrovascular image segmentation method, device, electronic equipment and storage medium
Wu et al. Image segmentation
Sharma et al. Fuzzy c-means, anfis and genetic algorithm for segmenting astrocytoma-a type of brain tumor
CN111815563A (en) Retina optic disk segmentation method combining U-Net and region growing PCNN
Garcia-Gonzalez et al. A multiscale algorithm for nuclei extraction in pap smear images
Sule et al. Enhanced convolutional neural networks for segmentation of retinal blood vessel image
CN116309633A (en) Retina blood vessel segmentation method based on nuclear intuitionistic fuzzy C-means clustering
CN112183237A (en) Automatic white blood cell classification method based on color space adaptive threshold segmentation
Han et al. Segmenting images with complex textures by using hybrid algorithm
Saha et al. SRM superpixel merging framework for precise segmentation of cervical nucleus
Lyu et al. HRED-net: high-resolution encoder-decoder network for fine-grained image segmentation
Zhang et al. A robust image segmentation framework based on total variation spectral transform
Li et al. Scale selection for supervised image segmentation
Saha et al. Segmentation of cervical nuclei using SLIC and pairwise regional contrast
CN110211106B (en) Mean shift SAR image coastline detection method based on segmented Sigmoid bandwidth
Koyun et al. Adversarial nuclei segmentation on H&E stained histopathology images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination