CN113469270B - Semi-supervised intuitive clustering method based on decomposition multi-target differential evolution superpixel - Google Patents


Info

Publication number
CN113469270B
Authority
CN
China
Prior art keywords
pixel
super
image
superpixel
representing
Prior art date
Legal status
Active
Application number
CN202110806823.3A
Other languages
Chinese (zh)
Other versions
CN113469270A (en)
Inventor
赵凤
张莉阳
刘汉强
Current Assignee
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications
Priority to CN202110806823.3A
Publication of CN113469270A
Application granted
Publication of CN113469270B


Classifications

    • G06F18/2321 Pattern recognition; clustering techniques; non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/251 Pattern recognition; fusion techniques of input or preprocessed data
    • G06T7/11 Image analysis; segmentation; region-based segmentation
    • G06T7/13 Image analysis; segmentation; edge detection
    • G06T2207/10024 Image acquisition modality; color image
    • G06T2207/20021 Special algorithmic details; dividing image into blocks, subimages or windows
    • G06T2207/20192 Image enhancement details; edge enhancement; edge preservation
    • Y02T10/40 Engine management systems (Y-section cross-reference tag)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semi-supervised intuitionistic clustering method based on decomposition multi-objective differential-evolution superpixels, which mainly addresses the poor image-segmentation quality and the poor runtime efficiency of prior-art algorithms. The scheme comprises the following steps: input the image to be segmented and set the initial parameters; perform superpixel segmentation of the image based on decomposition multi-objective differential evolution, and define the edges of the superpixel regions as the weak edges of the image; extract the strong edges of the image with the Canny operator, and merge superpixel regions based on the strong and weak edges of the image; extract a representative feature from each merged superpixel region, and perform decomposition multi-objective evolutionary fuzzy clustering on the image; and perform class-label correction on the clustering result to obtain the final image segmentation result. By fusing the region information of the image with partial supervision information and by optimizing the fitness functions with a decomposition evolutionary strategy, the method effectively improves image-segmentation performance and alleviates the poor runtime efficiency of multi-objective evolutionary fuzzy clustering algorithms.

Description

Semi-supervised intuitionistic clustering method based on decomposition multi-objective differential-evolution superpixels
Technical Field
The invention belongs to the technical field of image processing and further relates to a semi-supervised intuitionistic clustering method, in particular a semi-supervised intuitionistic clustering method based on decomposition multi-objective differential-evolution superpixels, which can be used for natural-image recognition.
Background
Image segmentation is the process of assigning the same label to similar pixels of an image, and the quality of the result matters greatly for subsequent image analysis. Image segmentation methods fall broadly into five categories: threshold-based, clustering-based, edge-detection-based, region-based, and methods incorporating specific theories. Clustering-based image segmentation is widely used because its principle is simple and its segmentation quality is good. Common clustering methods include the K-means algorithm, fuzzy clustering algorithms, spectral clustering, hierarchical clustering, and others. Because things in the real world often carry ambiguity and uncertainty, fuzzy clustering can analyze them more objectively, and it has therefore attracted the attention of many researchers at home and abroad. However, applying a conventional fuzzy clustering algorithm to image segmentation has several drawbacks: (1) it is sensitive to the initial cluster centers and easily falls into local optima; (2) it is sensitive to noise, so if the image contains heavy noise an ideal segmentation cannot be obtained; (3) only a single objective function is considered during clustering, which cannot satisfy the differing requirements of users. Many scholars have therefore studied these problems intensively in recent years.
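For orientation, the baseline fuzzy C-means (FCM) update that the fuzzy clustering algorithms discussed here extend can be sketched as follows; this is a minimal NumPy illustration of generic FCM with fuzzifier m = 2 on invented toy data, not the patented method:

```python
import numpy as np

def fcm_step(X, V, m=2.0):
    """One fuzzy C-means update: memberships from centers, then centers from memberships.
    X: (n, d) data; V: (c, d) cluster centers; m: fuzzifier (> 1)."""
    # squared distance from every point to every center, shape (n, c)
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
    # membership update: u_ik proportional to d_ik^(-2/(m-1)); each row sums to 1
    w = d2 ** (-1.0 / (m - 1.0))
    U = w / w.sum(axis=1, keepdims=True)
    # center update: weighted means with weights u^m
    Um = U ** m
    V = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, V

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # toy cluster near (0, 0)
               rng.normal(3.0, 0.1, (20, 2))])  # toy cluster near (3, 3)
V = np.array([[0.5, 0.5], [2.5, 2.5]])          # deliberately rough initialization
for _ in range(10):
    U, V = fcm_step(X, V)
```

The dependence on the initialization of V and the single objective being minimized are exactly drawbacks (1) and (3) listed above.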
In 2016, Liu Hanqiang et al., in "Local-search adaptive kernel fuzzy clustering method [J], Computer Engineering and Science, 38(8): 1735-1740", proposed a local-search adaptive kernel fuzzy clustering method: a kernel function is introduced to improve the separability of the data as far as possible, a kernel-based local-search procedure is then designed, and the initial cluster centers are sought by locally searching part of the sample data. Although this method alleviates to some extent the sensitivity of traditional clustering algorithms to the initial cluster centers, the added kernel-based local search gives it a high time complexity.
In 2019, Lan Rong et al., in "Suppressed non-local spatial intuitionistic fuzzy C-means image segmentation algorithm [J], Journal of Electronics and Information Technology, 41(6): 1472-1479", proposed a suppressed non-local spatial intuitionistic fuzzy C-means image segmentation algorithm. By computing non-local spatial information of the pixels, the algorithm improves robustness to noise, overcomes the limitation that traditional fuzzy clustering considers only the gray-level feature of a single pixel, and improves segmentation accuracy; however, it does not consider the region information of the image during clustering and ignores the similarity between neighboring pixels, so the final segmentation quality remains poor.
In 2011, Mukhopadhyay et al., in "A multiobjective approach to MR brain image segmentation [J], Applied Soft Computing, 11(1): 872-880", adopted a non-dominated sorting genetic algorithm to optimize two fitness functions, a fuzzy compactness function and a fuzzy separation function, and applied it successfully to brain medical-image segmentation, realizing image segmentation under multiple clustering criteria. However, when applied to image segmentation the algorithm ignores the region information of the image, so the segmentation quality is poor; in addition, the multi-objective evolution process takes much time, so the runtime efficiency of the algorithm is poor.
Disclosure of Invention
The invention aims to provide, in view of the shortcomings of the prior art, a semi-supervised intuitionistic clustering method based on decomposition multi-objective differential-evolution superpixels. The segmentation performance is improved by fusing the region information of the image with partial supervision information, and the speed of the multi-objective evolutionary fuzzy clustering algorithm is raised by optimizing the two fitness functions with a Kriging-assisted, reference-vector-guided decomposition evolutionary strategy. The image-segmentation performance is thus effectively improved, and the poor runtime efficiency of multi-objective evolutionary fuzzy clustering algorithms is alleviated.
The invention realizes the above purpose as follows:
(1) Inputting a color image to be segmented;
(2) Set the parameters: the number of superpixels is 500, the superpixel fuzziness index is 25, the self-defined positive integer H is 5, the maximum number of superpixel iterations is t_max = 10, the neighborhood size is 10, the differential-evolution mutation factor is 0.5, and the differential-evolution crossover factor is 0.9; the clustering population size is 50, the maximum number of clustering iterations is w_max = 100, the number of individuals used to update the Kriging model is 5, the fixed number of iterations before the Kriging model is updated is 20, the binary crossover probability is 0.9, and the polynomial mutation probability is 0.1;
(3) Perform superpixel segmentation of the color image based on decomposition multi-objective differential evolution, and define the edges of the segmented superpixel regions as the weak edges of the image. The specific steps are as follows:
(3.1) Encode the core-point offset components with the decomposition multi-objective differential evolution method to obtain an initial population P:
Given an image with N pixel points, divide it into K superpixel regions of uniform size; the side length of each region is about S = √(N/K). The initial population P = [p_{i,1}, p_{i,2}, …, p_{i,D}] is generated with the following random strategy:
p_{i,j} = -S/2 + rand × S,
where p_{i,j} denotes the j-th component of the i-th individual of the initial population; the rand function generates a random number in [0,1]; i = 1,2,…,pop, j = 1,2,…,D, and D = 2K. The population size pop is calculated from H and M, where M = 3 is the number of superpixel criterion functions and H is a self-defined positive integer;
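Step (3.1) can be sketched as follows; the Berkeley-style image size 481×321 is only an example, and the population-size formula pop = C(H+M-1, M-1), the usual simplex-lattice count in decomposition-based methods, is an assumption, since the patent's own pop formula is not reproduced in this text:

```python
import numpy as np
from math import comb

def init_population(N, K, H, M=3, seed=0):
    """Initialize core-point offset individuals p_ij uniformly in [-S/2, S/2),
    where S = sqrt(N/K) is the side length of a uniform superpixel region."""
    S = np.sqrt(N / K)
    pop = comb(H + M - 1, M - 1)  # simplex-lattice weight-vector count (assumed)
    D = 2 * K                     # one (dx, dy) offset pair per superpixel
    rng = np.random.default_rng(seed)
    P = -S / 2 + rng.random((pop, D)) * S   # p_ij = -S/2 + rand * S
    return P, S

P, S = init_population(N=481 * 321, K=500, H=5)   # example image size (assumed)
```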
(3.2) Randomly select a point inside each uniform superpixel region of the image as a core point, and then obtain the seed point s_{i,k} of each superpixel of the image from the core point and the offset decoded from the individual:
where q = 0.1, c_{i,k} denotes the core point of the k-th superpixel of the image corresponding to the i-th individual, k = 1,2,…,K, and Λ denotes a set of K×K diagonal matrices;
(3.3) Take the 3S×3S neighborhood of seed point s_{i,k} and obtain the superpixel label matrix L_i by judging the distances between the pixels in the neighborhood and the seed points;
(3.4) For the superpixel label matrix L_i, design three superpixel criterion functions: the intra-superpixel mean square error f_1(s_i, L_i), the superpixel edge-gradient criterion function f_2(L_i), and the region-regularization-term superpixel criterion function f_3(L_i);
The intra-superpixel mean square error f_1(s_i, L_i) is calculated as
f_1(s_i, L_i) = Σ_{n=1}^{N} d(I_n, s_{i,L_i(n)}),
where I_n is the 5-dimensional feature vector of the n-th pixel point, n = 1,2,…,N; L_i(n) is the label of the n-th pixel point in the superpixel label matrix L_i; and d denotes the pixel distance;
The superpixel edge-gradient criterion function f_2(L_i) is calculated as follows:
where ΔI(n) is the gradient feature of the n-th pixel of the image; L_i(n) is the label of the n-th pixel point in L_i; and δ(·) is a conditional indicator function that returns 1 when the condition in brackets holds and 0 otherwise. If the label of the n-th pixel differs from the labels of pixels in its neighborhood W_n, the pixel lies at the junction of two superpixel regions.
The region-regularization-term superpixel criterion function f_3(L_i) is calculated as follows:
where N_{i,k} denotes the number of pixels in the k-th superpixel region of the superpixel label matrix L_i corresponding to the i-th individual;
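Of the three criterion functions, the region-regularization term f_3 is the simplest to illustrate: it penalizes superpixel sizes that deviate from the uniform size N/K. The normalized squared deviation below is an assumed concrete form for illustration only; the patent's exact formula is not reproduced in this text:

```python
import numpy as np

def f3_region_regularizer(labels, K):
    """Region-regularization term: penalize superpixel region sizes that deviate
    from the uniform size N/K (normalized squared deviation, an assumed form)."""
    N = labels.size
    counts = np.bincount(labels.ravel(), minlength=K).astype(float)
    return float(((counts - N / K) ** 2).sum() / N)

uniform = np.repeat(np.arange(4), 25)               # 4 regions of 25 pixels each
skewed = np.repeat(np.arange(4), [70, 10, 10, 10])  # one oversized region
```

Under this form the perfectly uniform labeling scores 0 and the skewed one scores higher, matching the "smaller is better" reading of f_3.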
(3.5) Decompose the three superpixel criterion functions with the Tchebycheff MOEA/D method, as follows:
(3.5.1) Initialize the weight-vector matrix λ = [λ_1, λ_2, …, λ_i, …, λ_pop]; by computing the Euclidean distances between λ_i and the other weight vectors, obtain the T neighborhood weight vectors λ_{i1}, λ_{i2}, …, λ_{iT} of λ_i;
(3.5.2) Decompose the three superpixel criterion functions with the Tchebycheff method, giving
g(s_i, L_i | λ', z*) = max_{1≤e≤M} { λ_e · |f_e - z*_e| },
where λ' = [λ_1, λ_2, …, λ_M] is a weight vector in λ and M = 3 is the number of superpixel criterion functions; for each e = 1,2,…,M, λ_e ≥ 0; and z* denotes the reference point, computed as z*_e = min_{1≤i≤pop} f_e;
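The Tchebycheff scalarization of step (3.5.2) can be illustrated directly. The function below follows the standard MOEA/D form g(x | λ, z*) = max_e λ_e · |f_e(x) - z*_e| with z* taken as the componentwise minimum over the population; it is a generic sketch, not the patent's code:

```python
import numpy as np

def tchebycheff(F, lam, z_star):
    """Tchebycheff aggregation g(x | lam, z*) = max_e lam_e * |f_e(x) - z*_e|.
    F: (M,) objective values of one individual; lam: (M,) weight vector;
    z_star: (M,) ideal reference point (componentwise best found so far)."""
    return float(np.max(lam * np.abs(F - z_star)))

F_pop = np.array([[3.0, 1.0, 2.0],   # objective vectors of two individuals
                  [1.0, 2.0, 3.0]])
z_star = F_pop.min(axis=0)           # z*_e = min_i f_e(x_i)
lam = np.array([0.5, 0.3, 0.2])
g0 = tchebycheff(F_pop[0], lam, z_star)  # max(0.5*2, 0.3*0, 0.2*0)
g1 = tchebycheff(F_pop[1], lam, z_star)  # max(0.5*0, 0.3*1, 0.2*1)
```

Under this weight vector the second individual aggregates to a smaller value, so it would be preferred for this subproblem.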
(3.6) Apply crossover, mutation, and selection to the individuals, obtain the final population and the optimal solution through iterative updating, and thereby obtain the superpixel region segmentation result of the image;
(4) Extract the strong edges of the image with the Canny edge-detection operator, and merge superpixel regions based on the strong and weak edges of the image;
(5) Extract the representative feature r_k of the k-th merged superpixel region:
where Y_α denotes the red-green-blue (RGB) feature values of pixel point α in the superpixel region, Y_β denotes the RGB feature values of the median pixel point β of the superpixel region, and w(Y_α, Y_β) denotes the weight between pixel points α and β:
w(Y_α, Y_β) = Q_αβ × U_αβ,
where Q_αβ denotes the position weight, which is larger the closer pixel point α is to β, and U_αβ denotes the color weight, which is larger the closer the color information of pixel points α and β. Q_αβ and U_αβ are calculated respectively as follows:
where (x, y) denotes the coordinates of a pixel point in the superpixel region, num denotes the number of pixel points in the superpixel, and σ denotes the color-feature variance of the superpixel region;
Taking k = 1,2,…,G in turn, the representative feature set R = {r_1, r_2, …, r_k, …, r_G} of all superpixel regions is obtained by this calculation;
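Step (5) can be sketched as below. The description above fixes only the structure w(Y_α, Y_β) = Q_αβ × U_αβ with "closer means higher weight"; the Gaussian kernels, the bandwidths sigma_s and sigma_c, the choice of the reference pixel, and the final weighted average are all assumptions for illustration, not the patent's exact formulas:

```python
import numpy as np

def representative_feature(colors, coords, sigma_s=5.0, sigma_c=10.0):
    """Representative RGB feature of one superpixel region: a weighted average of
    the region's pixels, with position weight Q and color weight U taken relative
    to a reference pixel beta (Gaussian kernels are an assumed concrete choice)."""
    beta = len(colors) // 2                      # stand-in for the median pixel
    Q = np.exp(-((coords - coords[beta]) ** 2).sum(axis=1) / (2 * sigma_s ** 2))
    U = np.exp(-((colors - colors[beta]) ** 2).sum(axis=1) / (2 * sigma_c ** 2))
    w = Q * U                                    # w(Y_a, Y_b) = Q_ab * U_ab
    return (w[:, None] * colors).sum(axis=0) / w.sum()

colors = np.array([[100., 0., 0.], [102., 0., 0.], [101., 0., 0.], [250., 0., 0.]])
coords = np.array([[0., 0.], [0., 1.], [1., 0.], [9., 9.]])
r = representative_feature(colors, coords)   # distant off-color pixel is downweighted
```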
(6) Obtain partial supervision information from the user's marking of the image;
(7) Initializing a reference vector, randomly initializing a population and encoding chromosomes in the population;
(8) Construct a semi-supervised intuitionistic fuzzy compactness function J fusing the superpixel region information:
where C denotes the number of clusters of the image, G denotes the number of superpixel regions, m denotes the clustering fuzziness index, κ denotes the weighting index (set κ = 2), r_k denotes the representative feature of the k-th superpixel region, v_ρ denotes the cluster center of the ρ-th class, and the distance term denotes the Euclidean distance from r_k to v_ρ under the intuitionistic fuzzy set:
where μ(·), ν(·), and π(·) denote the membership, non-membership, and hesitation degrees of the intuitionistic fuzzy set, respectively:
where τ is a fixed parameter of the generator of the non-membership function, and ū_ρk denotes the supervised membership of r_k to v_ρ:
where ū_ρk takes the value 1 when a labeled superpixel region belongs to the ρ-th class and 0 for unlabeled superpixel regions, and u_ρk denotes the membership degree of the k-th superpixel region to the ρ-th cluster center:
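The intuitionistic fuzzy triple used in step (8) can be illustrated as follows. The Sugeno-type generator ν = (1 - μ)/(1 + τμ) is an assumed choice for the τ-parameterized non-membership generator; whatever generator is used, membership μ, non-membership ν, and hesitation π satisfy μ + ν + π = 1:

```python
def intuitionistic_parts(mu, tau=2.0):
    """Split a fuzzy membership mu into an intuitionistic triple (mu, nu, pi)
    with mu + nu + pi = 1. The Sugeno-type non-membership generator
    nu = (1 - mu) / (1 + tau * mu) is an assumed illustrative choice."""
    nu = (1.0 - mu) / (1.0 + tau * mu)
    pi = 1.0 - mu - nu   # hesitation degree: whatever is left over
    return mu, nu, pi

mu, nu, pi = intuitionistic_parts(0.7)   # nu = 0.3 / 2.4 = 0.125, pi = 0.175
```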
(9) Construct an intuitionistic fuzzy separation function CS fusing the superpixel region information:
where μ_γρ denotes the membership of v_ρ relative to v_γ.
(10) According to the function expressions constructed in steps (8) and (9), calculate the two fitness function values J and 1/CS of each individual in the initial population, train a Kriging model with the initial-population individuals and their fitness values, and set t = 0 and w = 0, where t denotes the current superpixel iteration number and w denotes the current clustering iteration number;
(11) Generate the offspring population by binary crossover and polynomial mutation, predict the objective function values of the offspring individuals with the Kriging model, and merge the parent and offspring individuals;
(12) Select the new population with the angle-penalized-distance (APD) selection strategy, update the reference vectors, and set w = w + 1;
(13) Judge whether the Kriging model needs to be updated: if w > w_max, update the Kriging model, set w = 0, and execute step (14); otherwise return to step (11);
(14) Judge whether the maximum number of superpixel iterations is reached: if t > t_max, terminate the iteration, obtain the final-generation non-dominated solution set, and execute step (15); otherwise set t = t + 1 and return to step (11);
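The control flow of steps (10) to (14) can be sketched as a surrogate-assisted loop. The skeleton below is a deliberate simplification for illustration only: a 1-nearest-neighbour predictor stands in for the Kriging model, random perturbation stands in for binary crossover and polynomial mutation, and truncation selection on a single aggregated value stands in for the APD strategy; the point is only the alternation between surrogate-screened generations and periodic re-evaluation with the true fitness:

```python
import numpy as np

def surrogate_loop(true_f, lb, ub, pop=10, dim=4, w_max=5, t_max=3, seed=0):
    """Skeleton of a surrogate-assisted evolutionary loop: offspring are screened
    with a cheap stand-in surrogate for w_max generations, then the archive of
    fitness values is refreshed with true evaluations (steps (10)-(14) analogue)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)
    y = np.array([true_f(x) for x in X])                 # true evaluations
    for t in range(t_max):
        for w in range(w_max):
            C = np.clip(X + rng.normal(0.0, 0.1, X.shape), lb, ub)  # variation
            # surrogate prediction: value of the nearest archived parent
            d = ((C[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
            y_hat = y[d.argmin(axis=1)]
            all_X, all_y = np.vstack([X, C]), np.concatenate([y, y_hat])
            keep = np.argsort(all_y)[:pop]               # truncation selection
            X, y = all_X[keep], all_y[keep]
        y = np.array([true_f(x) for x in X])             # refresh with true fitness
    return X, y

sphere = lambda x: float((x ** 2).sum())   # toy single objective for the demo
X, y = surrogate_loop(sphere, lb=-1.0, ub=1.0)
```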
(15) Construct an optimal-solution selection index SI for semi-supervised intuitionistic fuzzy clustering fusing the superpixel region information:
where E_C denotes the within-class compactness measure, E_1 denotes the compactness when all samples are grouped into a single class, and F_C denotes the maximum between-class separation measure;
(16) Select the best individual from the final-generation non-dominated solution set with the optimal-solution selection index SI to obtain the optimal cluster centers;
(17) According to the optimal cluster centers, assign a label to each superpixel region so as to obtain the labels of all pixel points of the image, yielding the image clustering result;
(18) Perform class-label correction on the image clustering result to obtain the final image segmentation result.
Compared with the prior art, the invention has the following beneficial technical effects:
First, the invention performs superpixel segmentation of the image under multiple criteria and then extracts the representative features of the superpixel regions for multi-objective evolutionary fuzzy clustering, so the region information of the image is fully considered during image clustering and the final segmentation quality is markedly improved;
Second, the invention preprocesses the image with the superpixel technique before image clustering, which raises the speed of the multi-objective evolutionary fuzzy clustering algorithm;
Third, the invention constructs the semi-supervised intuitionistic fuzzy compactness function fusing superpixel region information and the intuitionistic fuzzy separation function fusing superpixel region information as the fitness functions to be optimized, and constructs the optimal-solution selection index of semi-supervised intuitionistic fuzzy clustering fusing superpixel region information to select the optimal cluster centers, thereby improving the image-segmentation performance.
Drawings
FIG. 1 is a flow chart of an implementation of the method of the present invention;
FIG. 2 is a comparative graph of the results of simulated segmentation of image number 253036 in the Berkeley image database using the present invention and prior methods;
FIG. 3 is a comparison of the results of simulated segmentation of image number 113334665744 in a Weizmann image database using the present invention and prior methods;
fig. 4 is a schematic diagram of super-pixel region merging based on image strong and weak edges in the present invention, where (a) is a schematic diagram of image strong and weak edges and (b) is a schematic diagram of region merging result.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Embodiment one, referring to fig. 1, the implementation steps of the present invention are as follows:
step A: the color image to be segmented is input and initial parameter values are set.
Input the color image to be segmented;
Set the parameters: the number of superpixels is 500, the superpixel fuzziness index is 25, the self-defined positive integer H is 5, the maximum number of superpixel iterations is 10, the neighborhood size is 10, the differential-evolution mutation factor is 0.5, and the differential-evolution crossover factor is 0.9; the clustering population size is 50, the maximum number of clustering iterations is 100, the number of individuals used to update the Kriging model is 5, the fixed number of iterations before the Kriging model is updated is 20, the binary crossover probability is 0.9, and the polynomial mutation probability is 0.1;
and (B) step (B): the color image is subjected to super-pixel segmentation based on decomposition multi-target differential evolution, and the edges of the super-pixel region are defined as weak edges of the image.
2.1) Initialization:
2.1.1) Given an image with N pixel points divided into K superpixel regions of uniform size, the side length of each region is about S = √(N/K). Select the 5-dimensional features of the center point of each superpixel region to obtain the initialized core points c = {c_{i,k}}, where c_{i,k} is the core point of the k-th superpixel of the image corresponding to the i-th individual, c_{i,k} = [l_{i,k}, a_{i,k}, b_{i,k}, x_{i,k}, y_{i,k}], with l_{i,k}, a_{i,k}, b_{i,k} its Lab color features and x_{i,k}, y_{i,k} its spatial features;
2.1.2) Calculate the superpixel population size pop according to the formula and use it to initialize the weight-vector matrix λ = [λ_1, λ_2, …, λ_pop]; by computing the Euclidean distances between λ_i and the other weight vectors, obtain the T neighborhood weight vectors λ_{i1}, λ_{i2}, …, λ_{iT} of λ_i, and set B(i) = [i1, i2, …, iT], i = 1,2,…,pop;
2.1.3) Initialize the population P = [p_{i,1}, p_{i,2}, …, p_{i,D}]:
p_{i,j} = -S/2 + rand × S,
where the rand function generates a random number in [0,1], i = 1,2,…,pop, j = 1,2,…,D, and D = 2K;
2.1.4) Use the individual p_i to perturb the core points c, generating a group of superpixel seed points s_i and a superpixel label matrix L_i:
where q = 0.1, k = 1,2,…,K, and s_{i,k} denotes the 5-dimensional features of the k-th superpixel seed point generated by the i-th individual p_i perturbing the core points c. Based on the obtained superpixel seed points, take the 3S×3S neighborhood of each seed point and obtain the superpixel label matrix by judging the distances between the pixels in the neighborhood and the seed points. Given any two pixel points α and β, the distance between them is:
where d_c(α, β) denotes the color distance, d_s(α, β) denotes the spatial distance, and m′ denotes the superpixel fuzziness index.
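The pixel-to-seed distance of step 2.1.4) can be illustrated under the assumption that it follows the standard SLIC-style combination d = sqrt(d_c² + (d_s/S)² · m′²), in which the index m′ trades color similarity off against spatial proximity; the patent's exact formula is not reproduced in this text:

```python
import numpy as np

def pixel_distance(fa, fb, S, m_prime=25.0):
    """Distance between two 5-D pixel features [l, a, b, x, y]: Lab color distance
    d_c and spatial distance d_s, combined SLIC-style (an assumed concrete form)."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    d_c = np.linalg.norm(fa[:3] - fb[:3])   # color distance in Lab space
    d_s = np.linalg.norm(fa[3:] - fb[3:])   # spatial distance in (x, y)
    return float(np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m_prime ** 2))

near = pixel_distance([50, 0, 0, 10, 10], [50, 0, 0, 11, 10], S=18.0)
far = pixel_distance([50, 0, 0, 10, 10], [50, 0, 0, 40, 10], S=18.0)
```

With identical colors, the farther pixel receives the larger distance, which is what confines each pixel to a nearby seed.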
2.1.5) Compute the objective function values: the intra-superpixel mean square error f_{i,1}, the superpixel edge-gradient criterion function f_{i,2}, and the region-regularization term f_{i,3}. f_1 represents the mean square error of assigning each pixel of the image to the nearest superpixel seed point; the smaller its value, the more accurate the assignment. The specific formula is:
where I_n is the 5-dimensional feature vector of the n-th pixel point, l = L_i(n) is the label of the n-th pixel point in the superpixel label matrix L_i, and s_{i,l} denotes the 5-dimensional feature vector of the l-th superpixel seed point of s_i corresponding to the i-th individual. f_2 represents the superpixel edge-gradient criterion, the basis for judging boundary strength; the larger its value, the better. The specific formula is:
where ΔI(n) is the gradient feature of the n-th pixel of the image; L_i(n) is the label of the n-th pixel point in L_i; and δ(·) is a conditional indicator function that returns 1 when the condition in brackets holds and 0 otherwise. By the definition of the formula, if the label of the n-th pixel differs from the labels of pixels in its neighborhood W_n, the pixel lies at the junction of two superpixel regions. f_3 indicates how much the size of each superpixel region deviates from the desired size; the smaller, the better. Specifically:
where N_{i,k} denotes the number of pixels in the k-th superpixel region of the superpixel label matrix L_i corresponding to the i-th individual.
2.1.6) Initialize the reference point z*, computed componentwise as z*_e = min_{1≤i≤pop} f_{i,e}, e = 1,2,3;
2.2) Update the neighborhood solutions as follows: for each individual p_i and each h ∈ B(i), compute the aggregation function values g(s_i, L_i, λ_h, z*) and g(s_h, L_h, λ_h, z*); if g(s_i, L_i, λ_h, z*) ≤ g(s_h, L_h, λ_h, z*), set p_h = p_i, f_{h,1} = f_{i,1}, f_{h,2} = f_{i,2}, f_{h,3} = f_{i,3}, and O(h) = g(s_i, L_i, λ_h, z*); otherwise set O(h) = g(s_h, L_h, λ_h, z*), where O(h) denotes the aggregation function value corresponding to the h-th group of superpixel seed points;
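The neighborhood update of step 2.2) can be sketched as follows, with the Tchebycheff value playing the role of the aggregation function g. The sketch operates directly on stored objective vectors and returns which neighbors were replaced; the bookkeeping of p_h, f_h, and O(h) is compressed for brevity:

```python
import numpy as np

def update_neighborhood(F_new, F_pop, lambdas, B_i, z_star):
    """MOEA/D-style neighborhood update: for each neighbor index h in B(i),
    replace the neighbor if the new solution's aggregation value under its
    weight vector lambda_h is no worse. Returns the replaced indices."""
    g = lambda F, lam: float(np.max(lam * np.abs(F - z_star)))
    replaced = []
    for h in B_i:
        if g(F_new, lambdas[h]) <= g(F_pop[h], lambdas[h]):
            F_pop[h] = F_new          # a full implementation also copies p_h, O(h)
            replaced.append(h)
    return replaced

z_star = np.array([0.0, 0.0])
lambdas = np.array([[0.5, 0.5], [0.9, 0.1]])
F_pop = np.array([[2.0, 2.0], [4.0, 1.0]])
out = update_neighborhood(np.array([1.0, 1.0]), F_pop, lambdas, [0, 1], z_star)
```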
2.3 Selecting an individual corresponding to the minimum aggregation function value to combine with the core points to generate an initial optimal label matrix;
2.4) Iterative update:
2.4.1) Set gen = 1;
2.4.2) Update the core points c;
2.4.3) Update the objective function values f_{i,1}, f_{i,2}, f_{i,3} of each individual p_i and update the reference point z* at the same time; then perform step 2.2) to update the neighborhood solutions, i = 1,2,…,pop;
2.4.4) Select the individual with the minimum aggregation function value, combine it with the core points c to generate the optimal label matrix L, and set gen = gen + 1;
2.4.5) Judge whether the maximum iteration number is reached: if gen > gen_max, output L to obtain the superpixel segmentation result of the image; otherwise execute step 2.4.2);
step C: and extracting the strong edge of the image by using a canny operator, and merging the super-pixel areas of the image based on the strong and weak edges.
Step D: and extracting representative features of each super-pixel region after merging, and decomposing the image to perform multi-target evolutionary fuzzy clustering.
Step E: and performing class label correction on the clustering result to obtain a final image segmentation result.
In embodiment two, the implementation steps of the invention are described in further detail:
step 1: inputting a color image to be segmented;
Step 2: set the parameters: the number of superpixels is 500, the superpixel fuzziness index is 25, the self-defined positive integer H is 5, the maximum number of superpixel iterations is t_max = 10, the neighborhood size is 10, the differential-evolution mutation factor is 0.5, and the differential-evolution crossover factor is 0.9; the clustering population size is 50, the maximum number of clustering iterations is w_max = 100, the number of individuals used to update the Kriging model is 5, the fixed number of iterations before the Kriging model is updated is 20, the binary crossover probability is 0.9, and the polynomial mutation probability is 0.1;
Step 3: perform superpixel segmentation of the color image based on decomposition multi-objective differential evolution, and define the edges of the segmented superpixel regions as the weak edges of the image. Superpixel region segmentation of an image is, in general, the process of grouping pixels that are adjacent in position and similar in features into small regions. The invention considers the superpixel segmentation of the image from three aspects: the intra-superpixel mean square error, the superpixel edge-gradient criterion, and the region-regularization term. To optimize these three objectives simultaneously, a decomposition multi-objective differential evolution method is adopted: first, the core-point offset components are encoded to obtain an initialized population, and different seed points are obtained from the individuals of the population and the core points; further, the superpixel criterion functions based on the intra-superpixel mean square error, the superpixel edge-gradient criterion, and the region-regularization term are decomposed with the Tchebycheff method; then the individuals are updated with crossover, mutation, and selection strategies, the final population and the optimal solution are obtained by continual iteration, and finally the superpixel region segmentation result of the image is obtained.
The super-pixel segmentation based on the decomposition multi-target differential evolution is carried out on the color image, and the specific steps are as follows:
(3.1) in order to obtain a super-pixel region of an image, the difference between the pixels of the image and the seed points of the region needs to be measured from the spatial position and the characteristic angle; in order to obtain the appropriate superpixel seed points, the core point offset component of each superpixel needs to be encoded here, so the core point offset component is encoded by adopting a decomposition multi-objective differential evolution method to obtain an initial population P:
Suppose an image has N pixel points, divided into K super-pixel regions of uniform size; the side length of each region is then about S = √(N/K). Each individual P_i = [p_{i,1}, p_{i,2}, …, p_{i,D}] of the initial population P is generated by the following random strategy:
p_{i,j} = −S/2 + rand × S,
wherein p_{i,j} denotes the j-th component of the i-th individual; the rand function generates a random number in [0,1]; i = 1, 2, …, pop; j = 1, 2, …, D, with D = 2K; the population size pop is determined by pop = C_{H+M−1}^{M−1}, where M = 3 is the number of super-pixel criterion functions and H is a user-defined positive integer;
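The initialization above can be sketched in code. The function name and the example image size below are our own, and the binomial population-size formula pop = C(H+M−1, M−1) follows the usual MOEA/D weight-vector construction, assumed here in place of the patent's elided formula.

```python
import math
import random

def init_population(N, K, H=5, M=3):
    """Sketch of step (3.1): encode one (dx, dy) offset pair per superpixel.

    N: number of pixels; K: number of superpixels;
    H: user-defined positive integer; M: number of criterion functions.
    """
    S = math.sqrt(N / K)                  # approximate side length of a uniform region
    D = 2 * K                             # dimensionality: two offset components per superpixel
    pop = math.comb(H + M - 1, M - 1)     # population size (MOEA/D weight-vector count)
    # each component p_ij = -S/2 + rand * S, i.e. uniform in [-S/2, S/2)
    P = [[-S / 2 + random.random() * S for _ in range(D)] for _ in range(pop)]
    return P, S, pop

P, S, pop = init_population(N=240 * 160, K=500, H=5)
```

With H = 5 and M = 3, as in the parameter settings, this gives pop = C(7, 2) = 21 individuals of length D = 1000.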
(3.2) since the individuals encode the core point offset components, at the initial stage of the decomposition-based multi-objective differential evolution method the core points need to be initialized randomly and uniformly over the image, i.e. one core point is selected within each uniform super-pixel region; the seed points of the super-pixels of the image are then obtained from the core points and the offsets decoded from the individuals.
A point within each uniform super-pixel region of the image is selected at random as a core point, and the seed point s_{i,k} of each super-pixel of the image is obtained from the core point and the offset decoded from the individual;
wherein q = 0.1; c_{i,k} denotes the core point of the k-th super-pixel of the image corresponding to the i-th individual, k = 1, 2, …, K; Λ denotes a set of K × K diagonal matrices;
(3.3) taking the seed point s_{i,k} as the center, the super-pixel label matrix L_i is obtained by comparing the distances between the pixels in its 3S × 3S neighborhood and the seed points; the distance between a pixel and a seed point is calculated as follows:
given any two pixel points α and β, the distance d(α, β) between them is defined as follows:
wherein m′ denotes the super-pixel fuzzy index; d_c(α, β) denotes the color distance and d_s(α, β) denotes the spatial distance;
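The combined distance formula itself is not reproduced above; as a hedged illustration, the sketch below assumes the standard SLIC-style combination d = √(d_c² + (d_s/S)²·m′²), with m′ the super-pixel fuzzy index (set to 25 in the parameter settings).

```python
import math

def slic_distance(color_a, color_b, pos_a, pos_b, S, m=25.0):
    """Pixel-to-seed distance, assuming the standard SLIC form
    d = sqrt(d_c^2 + (d_s/S)^2 * m^2); the patent's exact combination
    of d_c and d_s is not shown, so this is a stand-in sketch."""
    d_c = math.dist(color_a, color_b)   # color distance
    d_s = math.dist(pos_a, pos_b)       # spatial distance
    return math.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)
```

The spatial term is normalized by the region side length S, so m trades off color similarity against spatial compactness.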
(3.4) based on the super-pixel label matrix L_i, three super-pixel criterion functions are designed: the intra-super-pixel mean square error f_1(s_i, L_i), the super-pixel edge gradient criterion function f_2(L_i), and the region regularization term super-pixel criterion function f_3(L_i);
The intra-super-pixel mean square error f_1(s_i, L_i) is calculated as follows:
wherein I_n is the 5-dimensional feature vector of the n-th pixel, n = 1, 2, …, N; L_i(n) is the label of the n-th pixel point in the super-pixel label matrix L_i; d denotes the pixel distance;
The super-pixel edge gradient criterion function f_2(L_i) is calculated as follows:
wherein ΔI(n) is the gradient feature corresponding to the n-th pixel of the image; L_i(n) is the label of the n-th pixel point in L_i; δ(·) is a conditional judgment function that returns 1 when the condition in brackets is true and 0 otherwise; if the label of the n-th pixel differs from the labels of the pixels in its neighborhood W_n, the pixel lies at the junction of two super-pixel regions.
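The δ(·) boundary test described above can be illustrated as follows; accumulating the gradient feature over boundary pixels is our reading of f_2, and any normalization the patent applies is omitted.

```python
def edge_gradient_criterion(labels, grad):
    """Accumulate the gradient feature of pixels whose 4-neighborhood W_n
    contains a different label, i.e. pixels at the junction of two
    superpixel regions (the delta(.) condition). Illustrative sketch only."""
    h, w = len(labels), len(labels[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            neighbors = []
            if y > 0:
                neighbors.append(labels[y - 1][x])
            if y < h - 1:
                neighbors.append(labels[y + 1][x])
            if x > 0:
                neighbors.append(labels[y][x - 1])
            if x < w - 1:
                neighbors.append(labels[y][x + 1])
            if any(n != labels[y][x] for n in neighbors):  # delta(.) = 1
                total += grad[y][x]
    return total
```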
The region regularization term super-pixel criterion function f_3(L_i) is calculated as follows:
wherein the count term denotes the number of pixels in the k-th super-pixel region of the super-pixel label matrix L_i corresponding to the i-th individual;
(3.5) in order to optimize the three super-pixel criterion functions simultaneously, the invention adopts the Chebyshev (Tchebycheff) approach of MOEA/D to decompose them into a number of scalar subproblems, which are used to evaluate the quality of the individuals in the population. The three super-pixel criterion functions are decomposed by the Chebyshev approach of MOEA/D as follows:
(3.5.1) initialize the weight vector matrix λ = [λ_1, λ_2, …, λ_i, …, λ_pop]; by computing the Euclidean distances between λ_i and the other weight vectors, the T neighborhood weight vectors λ_{i1}, λ_{i2}, …, λ_{iT} of λ_i are obtained;
(3.5.2) the three super-pixel criterion functions are decomposed by the Chebyshev method according to:
g(s_i, L_i | λ′, z*) = max_{1≤e≤M} λ_e · | f_e − z*_e |,
wherein λ′ = [λ_1, λ_2, …, λ_M] is a set of weight vectors in λ and M = 3 is the number of super-pixel criterion functions; for each e = 1, 2, …, M, λ_e ≥ 0; z* = (z*_1, …, z*_M) denotes the reference point, where z*_e is the minimum value of the e-th criterion function over the current population;
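The Tchebycheff scalarization used here is the standard MOEA/D form g(x | λ′, z*) = max_e λ_e·|f_e(x) − z*_e|; a minimal sketch:

```python
def tchebycheff(f, lam, z_star):
    """Standard MOEA/D Chebyshev (Tchebycheff) scalarization of an
    M-objective vector f under weight vector lam and reference point z*."""
    return max(l * abs(fe - ze) for l, fe, ze in zip(lam, f, z_star))
```

Here f would be the triple (f_1, f_2, f_3) of super-pixel criterion values of an individual; smaller scalarized values indicate better individuals for that subproblem.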
(3.6) applying crossover, mutation and selection to the individuals and updating them iteratively yields the final population and the optimal solution, and finally the super-pixel region segmentation result of the image; the crossover, mutation and selection of the individuals proceed as follows:
(3.6.1) the decomposition results of all individuals under the three criterion functions are obtained by the Chebyshev method, the optimal individual p_i is selected, and the corresponding super-pixel seed points s_{i,k} of the image are taken as the new core points;
(3.6.2) generating new individuals by crossover and mutation operations:
wherein χ, τ ∈ B(i), B(i) = [i1, i2, …, iT], χ ≠ τ ≠ i; FR′ is the mutation factor, CR′ is the crossover factor, and rand is a random number in [0,1];
(3.6.3) elements of an individual that are greater than the maximum value or less than the minimum value are defined as illegal values and repaired to the nearest boundary value;
(3.6.4) generating new individuals using gaussian mutation operators:
wherein the normal distribution takes the current element value as its mean and S/20 as its standard deviation; pm is the mutation probability, defined as pm = 1/D.
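The crossover and mutation steps can be sketched as below. The patent's exact mutation expression is not reproduced above, so we assume the common DE/rand/1/bin variant restricted to the neighborhood B(i), combined with the boundary repair of step (3.6.3); all function names are ours.

```python
import random

def de_offspring(P, i, B_i, FR=0.5, CR=0.9, lower=None, upper=None):
    """DE/rand/1/bin offspring for individual i using neighbors B(i)
    (a sketch under stated assumptions, not the patent's exact operator)."""
    r1, chi, tau = random.sample([b for b in B_i if b != i], 3)
    D = len(P[i])
    jrand = random.randrange(D)  # guarantee at least one mutated gene
    child = []
    for j in range(D):
        if random.random() < CR or j == jrand:       # binomial crossover
            v = P[r1][j] + FR * (P[chi][j] - P[tau][j])  # differential mutation
        else:
            v = P[i][j]
        # step (3.6.3): repair illegal values to the nearest boundary
        if lower is not None:
            v = max(v, lower)
        if upper is not None:
            v = min(v, upper)
        child.append(v)
    return child
```

The Gaussian mutation of step (3.6.4) would then perturb each gene of the child with probability pm = 1/D using a normal distribution with standard deviation S/20.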
Step 4: extracting the strong edges of the image with the canny edge detection operator, and merging super-pixel regions based on the strong and weak edges of the image. The super-pixel region merging based on strong and weak edges proceeds as follows: first, the strong edge information E_edge of the image is obtained with the canny edge detection operator; then the spatial position features of the center points of the super-pixel regions are selected to construct the set cen = [cen_1, cen_2, …, cen_K], and the spatial distance between any two center points is calculated; finally, for each super-pixel region it is judged whether a strong edge lies on the line connecting the center points of adjacent 8-neighborhood super-pixels: if so, the regions are not merged; if not, the two super-pixel regions are merged. This realizes super-pixel region merging based on the strong and weak edges of the image and yields G merged super-pixel regions R = [R_1, R_2, …, R_G]. Referring to fig. 4, u, v, w in diagram (a) are the center points of the three super-pixel regions R1, R2, R3 in the image; the dotted lines represent the boundaries between super-pixel regions, i.e. the weak edges of the image, and the line L represents a strong edge of the image. There is no strong edge of the image on the line between u and v, so the super-pixel regions R1 and R2 are merged into the region R4 of diagram (b); there are strong edges of the image on the lines between u and w and between v and w, so R1 and R3, and R2 and R3, are not merged.
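The merge test of step 4 — no strong edge on the segment joining two adjacent region centers — can be sketched as follows; the uniform sampling of the segment is our own simplification of a proper line rasterization.

```python
def has_strong_edge_between(edge_map, c1, c2, samples=100):
    """Check whether the strong-edge map (e.g. a canny result) contains an
    edge pixel on the segment between two superpixel center points (x, y)."""
    (x1, y1), (x2, y2) = c1, c2
    for t in range(samples + 1):
        a = t / samples
        x = round(x1 + a * (x2 - x1))
        y = round(y1 + a * (y2 - y1))
        if edge_map[y][x]:
            return True
    return False

def should_merge(edge_map, c1, c2):
    # merge two 8-neighborhood-adjacent regions only if no strong edge
    # lies on the line joining their centers (the rule of step 4)
    return not has_strong_edge_between(edge_map, c1, c2)
```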
Step 5: extracting the representative feature r_k of the k-th merged super-pixel region:
wherein Y_α denotes the red, green and blue (RGB) feature values of pixel point α in the super-pixel region, Y_β denotes the RGB feature values of the median pixel point β in the super-pixel region, and w(Y_α, Y_β) denotes the weight between pixel points α and β:
w(Y_α, Y_β) = Q_αβ × U_αβ,
wherein Q_αβ denotes the position weight: the closer pixel point α is to β, the higher the weight; U_αβ denotes the color weight: the closer the color information of pixel points α and β, the higher the weight; Q_αβ and U_αβ are calculated respectively as:
wherein (x, y) denotes the coordinates of a pixel point in the super-pixel region, num denotes the number of pixel points in the super-pixel region, and σ denotes the color feature variance of the super-pixel region;
taking k = 1, 2, …, G, the representative feature set r = {r_1, r_2, …, r_k, …, r_G} of the super-pixel regions is obtained;
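A sketch of the representative-feature computation: since the formulas for r_k, Q_αβ and U_αβ are not reproduced above, we assume a weighted average of the region's RGB values with Gaussian kernels on spatial and color distance to the median pixel, with σ the region's color-feature deviation as in the text. The function name, the kernel forms and the spatial scale sigma_s are all assumptions.

```python
import math

def representative_feature(colors, positions, median_idx, sigma_s=10.0, sigma_c=None):
    """Hypothetical sketch of r_k: average of the region's RGB values,
    weighted by w = Q * U relative to the median pixel beta."""
    yb, pb = colors[median_idx], positions[median_idx]
    if sigma_c is None:  # color feature deviation of the region (sigma in the text)
        mean = [sum(c[i] for c in colors) / len(colors) for i in range(3)]
        sigma_c = math.sqrt(sum(math.dist(c, mean) ** 2 for c in colors) / len(colors)) or 1.0
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for ya, pa in zip(colors, positions):
        Q = math.exp(-math.dist(pa, pb) ** 2 / (2 * sigma_s ** 2))  # position weight
        U = math.exp(-math.dist(ya, yb) ** 2 / (2 * sigma_c ** 2))  # color weight
        w = Q * U
        den += w
        for i in range(3):
            num[i] += w * ya[i]
    return [v / den for v in num]
```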
Step 6: obtaining partial supervision information from the user's marking information on the image. Adopting the semi-supervised strategy, each class in the image is marked manually with a line, i.e. the marking information, and the feature information (RGB feature values) of the pixel points on the lines is obtained; this is referred to as the partial supervision information.
Step 7: initializing the reference vectors, a technical term of the Kriging-model-based algorithm; randomly initializing a population and encoding the chromosomes in the population, i.e. encoding the RGB feature values of the cluster centers; if the image is to be clustered into C classes, each individual is a vector of length C × 3.
Step 8: constructing a semi-supervised intuitionistic fuzzy compactibility function J fusing the information of the super-pixel region:
wherein C denotes the number of image clusters, G denotes the number of super-pixel regions, m denotes the clustering fuzzy index, κ denotes the weighting index, set to κ = 2; r_k denotes the representative feature of the k-th super-pixel region, v_ρ denotes the cluster center of the ρ-th class, and the distance term denotes the Euclidean distance from r_k to v_ρ under the intuitionistic fuzzy set:
wherein μ(·), ν(·) and π(·) denote the membership, non-membership and hesitation degrees in the intuitionistic fuzzy set, respectively:
wherein τ is a fixed parameter that generates the non-membership function; the supervised membership of r_k to v_ρ is defined as:
wherein the prior membership of r_k to v_ρ is 1 if the marked super-pixel region belongs to the corresponding class, and takes the default value for unlabeled super-pixel regions; u_ρk denotes the membership degree of the k-th super-pixel region to the ρ-th cluster center:
step 9: constructing an intuitive fuzzy separation function CS fusing the information of the super pixel region:
wherein μ_γρ denotes the membership of v_ρ relative to v_γ.
Step 10: the fitness function value of each individual in the initial population, namely J and 1/CS, is calculated according to the function expressions constructed in the steps 8 and 9, wherein the fitness function is a generic term in a multi-objective evolutionary clustering algorithm and consists of a plurality of objective functions. The fitness function of the present invention refers to the function J constructed in step 8 and the function CS constructed in step 9. Training a Kriging model by using individuals in the initial population and fitness function values thereof, and setting t=0 and w=0, wherein t represents the current iteration times of super pixels, and w represents the current iteration times of clusters;
step 11: generating a child population by utilizing binary crossover and polynomial variation, predicting an objective function value of a child individual by utilizing a Kriging model, and combining a parent individual and the child individual;
step 12: after the original population is divided into several sub-populations, one elite individual is selected from each sub-population into the next generation; the invention adopts a selection strategy based on the angle penalized distance (APD), and selecting the individual with the smallest APD better balances diversity and convergence.
A new population is selected with the APD-based selection strategy, the reference vectors are updated, and w = w + 1 is set. The selection strategy based on the angle penalized distance is specifically: first, the APD values of the population individuals are calculated, and then the individual with the smallest APD value is selected to balance diversity and convergence; the APD value is calculated according to the following formula:
wherein the norm term denotes the Euclidean distance from the translated objective vector to the origin, θ_{t,i,e} denotes the angle between the objective vector and the reference vector v_{t,e} to which it belongs, and P(θ_{t,i,e}) denotes the penalty function, calculated as:
wherein β denotes a parameter controlling the rate of change of the penalty, and the angle term denotes the minimum angle between the reference vector v_{t,e} and the other reference vectors in the current generation;
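The APD computation matches an RVEA-style selection; below is a sketch assuming the common form P(θ) = M·(t/t_max)^β·θ/γ_v, where γ_v is the minimum angle between v_{t,e} and the other reference vectors, and M is the number of objectives (2 here, for J and 1/CS). Names and the exact penalty form are assumptions.

```python
import math

def apd(f_translated, v, t, t_max, gamma_v, M=2, beta=2.0):
    """Angle-penalized distance: APD = (1 + P(theta)) * ||f'||, with the
    penalty growing with the generation counter t (RVEA-style sketch)."""
    norm = math.sqrt(sum(x * x for x in f_translated))
    v_norm = math.sqrt(sum(b * b for b in v))
    cos_t = sum(a * b for a, b in zip(f_translated, v)) / (norm * v_norm)
    theta = math.acos(max(-1.0, min(1.0, cos_t)))      # angle to the reference vector
    P = M * (t / t_max) ** beta * theta / gamma_v      # penalty function P(theta)
    return (1 + P) * norm
```

An objective vector aligned with its reference vector (θ = 0) is penalized only by its distance to the origin, which is how the strategy trades convergence against diversity.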
step 13: judging whether the Kriging model needs to be updated: if w > w_max, the Kriging model is updated, w = 0 is set, and step 14 is executed; otherwise, return to step 11;
step 14: judging whether the maximum super-pixel iteration count is reached: if t > t_max, the iteration is terminated, the final-generation non-dominated solution set is obtained, and step 15 is executed; otherwise, let t = t + 1 and return to step 11;
step 15: constructing a semi-supervised intuitive fuzzy clustering optimal solution selection index SI fusing the information of the super-pixel region:
wherein E_C denotes the within-class compactness measure, E_1 denotes the compactness when all samples are grouped into a single class, and F_C denotes the maximum between-class separation measure;
step 16: selecting optimal individuals from the final generation non-dominant solution set by using an optimal solution selection index SI to obtain an optimal clustering center;
step 17: according to the optimal clustering center, label distribution is carried out on each super-pixel area so as to obtain labels of all pixel points in the image, and an image clustering result is obtained;
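The label distribution of step 17 can be sketched as: each merged super-pixel region takes the label of its nearest optimal cluster center (Euclidean distance on RGB representative features), and every pixel inherits the label of its region. Function and variable names are ours.

```python
import math

def assign_labels(region_features, centers, region_of_pixel):
    """Step 17 sketch: label each merged superpixel region by its nearest
    cluster center, then propagate the region label to its pixels."""
    region_label = [
        min(range(len(centers)), key=lambda c: math.dist(r, centers[c]))
        for r in region_features
    ]
    return [region_label[g] for g in region_of_pixel]
```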
step 18: and performing class label correction on the image clustering result to obtain a final image segmentation result.
The technical effects of the invention are further described below in conjunction with simulation experiments:
1. simulation conditions:
the simulation experiments are carried out in the MATLAB R2018a software environment on a computer with an Intel(R) Core(TM) i5-6500M 3.20 GHz CPU and 8 GB of memory.
2. The simulation content:
simulation 1, selecting the image numbered 253036 in the Berkeley image database and segmenting it with the method of the invention and with the existing FCM, KFCM, IFCM, MOVGA, K-MOVGA and RE-MSSFC methods, respectively; the results are shown in fig. 2, wherein:
(a) Is the original image of 253036;
(b) Is a standard segmentation map of 253036 images;
(c) Is a decomposition-based multi-target differential evolution super-pixel segmentation result graph of 253036 images;
(d) Is a region merging map of 253036 images;
(e) Is a supervision information mark graph of 253036 images;
(f) The segmentation result of 253036 images by using the prior FCM method;
(g) The segmentation result of the 253036 image by the existing KFCM method;
(h) The segmentation result of 253036 images by using the existing IFCM method;
(i) The segmentation result of 253036 images by using the existing MOVGA method;
(j) The segmentation result of 253036 images is obtained by using the existing K-MOVGA method;
(k) The segmentation result of 253036 images is obtained by using the existing RE-MSSFC method;
(l) The invention is used for segmenting 253036 images;
as can be seen from FIG. 2, the invention clearly separates the object from the background; its segmentation effect on the Berkeley gallery is therefore better than that of the existing FCM, KFCM, IFCM, MOVGA, K-MOVGA and RE-MSSFC methods.
Simulation 2, selecting the image numbered 113334665744 in the Weizmann image database and segmenting it with the method of the invention and with the existing FCM, KFCM, IFCM, MOVGA, K-MOVGA and RE-MSSFC methods, respectively; the results are shown in fig. 3, wherein:
(a) Is the original image of 113334665744;
(b) Is a standard segmentation map of 113334665744 images;
(c) Is a decomposition-based multi-target differential evolution super-pixel segmentation result graph of 113334665744 images;
(d) Is a region merging map of 113334665744 images;
(e) Is a supervision information mark graph of 113334665744 images;
(f) The segmentation result of 113334665744 images by using the prior FCM method;
(g) The segmentation result of the 113334665744 image by the existing KFCM method;
(h) The segmentation result of 113334665744 images by using the existing IFCM method;
(i) The segmentation result of 113334665744 images by using the existing MOVGA method;
(j) The segmentation result of 113334665744 images is obtained by using the existing K-MOVGA method;
(k) The segmentation result of 113334665744 images is obtained by using the existing RE-MSSFC method;
(l) The invention is used for segmenting 113334665744 images;
as can be seen from FIG. 3, the invention can segment the target completely and separate it clearly from the background, so the segmentation effect of the invention on the Weizmann gallery is superior to that of the existing FCM, KFCM, IFCM, MOVGA, K-MOVGA and RE-MSSFC methods.
The simulation analysis proves the correctness and effectiveness of the method provided by the invention.
The non-detailed description of the invention is within the knowledge of a person skilled in the art.
The foregoing description of the preferred embodiment of the invention is not intended to limit the invention; it will be apparent to those skilled in the art that various modifications and changes in form and detail may be made without departing from the principles and structure of the invention, but such modifications and changes based on the idea of the invention still fall within the scope of the appended claims.

Claims (5)

1. A semi-supervised intuitive clustering method based on decomposition multi-target differential evolution superpixel is characterized by comprising the following steps:
(1) Inputting a color image to be segmented;
(2) Setting parameters: the number of super-pixels is 500, the super-pixel fuzzy index is 25, the user-defined positive integer is 5, the maximum super-pixel iteration count t_max = 10, the number of neighbors is 10, the differential evolution mutation factor is 0.5, and the differential evolution crossover factor is 0.9; the clustering population size is 50, the maximum clustering iteration count w_max = 100, the number of individuals used to update the Kriging model is 5, the fixed number of iterations before updating the Kriging model is 20, the binary crossover probability is 0.9 and the polynomial mutation probability is 0.1;
(3) Performing super-pixel segmentation on the color image based on decomposition multi-target differential evolution, and defining the edge of the segmented super-pixel area as a weak edge of the image; the method comprises the following specific steps of:
(3.1) encoding the core point offset component by adopting a decomposition multi-target differential evolution method to obtain an initial population P:
Suppose an image has N pixel points, divided into K super-pixel regions of uniform size; the side length of each region is then about S = √(N/K); each individual P_i = [p_{i,1}, p_{i,2}, …, p_{i,D}] of the initial population P is generated by the following random strategy:
p_{i,j} = −S/2 + rand × S,
wherein p_{i,j} denotes the j-th component of the i-th individual; the rand function generates a random number in [0,1]; i = 1, 2, …, pop; j = 1, 2, …, D, D = 2K; the population size pop is determined by pop = C_{H+M−1}^{M−1}, M = 3 being the number of super-pixel criterion functions and H a user-defined positive integer;
(3.2) a point within each uniform super-pixel region of the image is selected at random as a core point, and the seed point s_{i,k} of each super-pixel of the image is then obtained from the core point and the offset decoded from the individual;
wherein q = 0.1; c_{i,k} denotes the core point of the k-th super-pixel of the image corresponding to the i-th individual, k = 1, 2, …, K; Λ denotes a set of K × K diagonal matrices;
(3.3) taking the seed point s_{i,k} as the center, the super-pixel label matrix L_i is obtained by comparing the distances between the pixels in its 3S × 3S neighborhood and the seed points;
(3.4) based on the super-pixel label matrix L_i, three super-pixel criterion functions are designed: the intra-super-pixel mean square error f_1(s_i, L_i), the super-pixel edge gradient criterion function f_2(L_i), and the region regularization term super-pixel criterion function f_3(L_i);
the intra-super-pixel mean square error f_1(s_i, L_i) is calculated as follows:
wherein I_n is the 5-dimensional feature vector of the n-th pixel, n = 1, 2, …, N; L_i(n) is the label of the n-th pixel point in the super-pixel label matrix L_i; d denotes the pixel distance;
the super-pixel edge gradient criterion function f_2(L_i) is calculated as follows:
wherein ΔI(n) is the gradient feature corresponding to the n-th pixel of the image; L_i(n) is the label of the n-th pixel point in L_i; δ(·) is a conditional judgment function that returns 1 when the condition in brackets is true and 0 otherwise; if the label of the n-th pixel differs from the labels of the pixels in its neighborhood W_n, the pixel lies at the junction of two super-pixel regions,
the region regularization term super-pixel criterion function f_3(L_i) is calculated as follows:
wherein the count term denotes the number of pixels in the k-th super-pixel region of the super-pixel label matrix L_i corresponding to the i-th individual;
(3.5) the three super-pixel criterion functions are decomposed by the Chebyshev method of MOEA/D as follows:
(3.5.1) initialize the weight vector matrix λ = [λ_1, λ_2, …, λ_i, …, λ_pop]; by computing the Euclidean distances between λ_i and the other weight vectors, the T neighborhood weight vectors λ_{i1}, λ_{i2}, …, λ_{iT} of λ_i are obtained;
(3.5.2) the three super-pixel criterion functions are decomposed by the Chebyshev method according to:
wherein λ′ = [λ_1, λ_2, …, λ_M] is a set of weight vectors in λ and M = 3 is the number of super-pixel criterion functions; for each e = 1, 2, …, M, λ_e ≥ 0; z* denotes the reference point, calculated as the minimum value of each criterion function over the current population;
(3.6) applying crossover, mutation and selection to the individuals and updating them iteratively yields the final population and the optimal solution, and finally the super-pixel region segmentation result of the image;
(4) Extracting strong edges of the image by utilizing an image edge detection canny operator, and merging super-pixel areas based on the strong and weak edges of the image;
(5) Extracting the representative feature r_k of the k-th merged super-pixel region:
wherein Y_α denotes the red, green and blue (RGB) feature values of pixel point α in the super-pixel region, Y_β denotes the RGB feature values of the median pixel point β in the super-pixel region, and w(Y_α, Y_β) denotes the weight between pixel points α and β:
w(Y_α, Y_β) = Q_αβ × U_αβ,
wherein Q_αβ denotes the position weight: the closer pixel point α is to β, the higher the weight; U_αβ denotes the color weight: the closer the color information of pixel points α and β, the higher the weight; Q_αβ and U_αβ are calculated respectively as:
wherein (x, y) denotes the coordinates of a pixel point in the super-pixel region, num denotes the number of pixel points in the super-pixel region, and σ denotes the color feature variance of the super-pixel region;
taking k = 1, 2, …, G, the representative feature set r = {r_1, r_2, …, r_k, …, r_G} of the super-pixel regions is obtained;
(6) Obtaining partial supervision information by using marking information of the user on the image
(7) Initializing a reference vector, randomly initializing a population and encoding chromosomes in the population;
(8) Constructing a semi-supervised intuitionistic fuzzy compactibility function J fusing the information of the super-pixel region:
wherein C denotes the number of image clusters, G denotes the number of super-pixel regions, m denotes the clustering fuzzy index, κ denotes the weighting index, set to κ = 2; r_k denotes the representative feature of the k-th super-pixel region, v_ρ denotes the cluster center of the ρ-th class, and the distance term denotes the Euclidean distance from r_k to v_ρ under the intuitionistic fuzzy set:
wherein μ(·), ν(·) and π(·) denote the membership, non-membership and hesitation degrees in the intuitionistic fuzzy set, respectively:
wherein τ is a fixed parameter that generates the non-membership function; the supervised membership of r_k to v_ρ is defined as:
wherein the prior membership of r_k to v_ρ is 1 if the marked super-pixel region belongs to the corresponding class, and takes the default value for unlabeled super-pixel regions; u_ρk denotes the membership degree of the k-th super-pixel region to the ρ-th cluster center:
(9) Constructing an intuitive fuzzy separation function CS fusing the information of the super pixel region:
wherein μ_γρ denotes the membership of v_ρ relative to v_γ,
(10) According to the functions constructed in steps (8) and (9), the fitness function values of each individual in the initial population, namely J and 1/CS, are calculated respectively; a Kriging model is trained with the individuals of the initial population and their fitness function values, and t = 0 and w = 0 are set, wherein t denotes the current super-pixel iteration count and w denotes the current clustering iteration count;
(11) Generating a child population by utilizing binary crossover and polynomial variation, predicting an objective function value of a child individual by utilizing a Kriging model, and combining a parent individual and the child individual;
(12) Selecting a new population by adopting a selection strategy APD based on the angle punishment distance, updating a reference vector, and setting w=w+1;
(13) Judging whether the Kriging model needs to be updated: if w > w_max, the Kriging model is updated, w = 0 is set, and step (14) is executed; otherwise, return to step (11);
(14) Judging whether the maximum super-pixel iteration count is reached: if t > t_max, the iteration is terminated, the final-generation non-dominated solution set is obtained, and step (15) is executed; otherwise, let t = t + 1 and return to step (11);
(15) Constructing a semi-supervised intuitive fuzzy clustering optimal solution selection index SI fusing the information of the super-pixel region:
wherein E_C denotes the within-class compactness measure, E_1 denotes the compactness when all samples are grouped into a single class, and F_C denotes the maximum between-class separation measure;
(16) Selecting optimal individuals from the final generation non-dominant solution set by using an optimal solution selection index SI to obtain an optimal clustering center;
(17) According to the optimal clustering center, label distribution is carried out on each super-pixel area so as to obtain labels of all pixel points in the image, and an image clustering result is obtained;
(18) And performing class label correction on the image clustering result to obtain a final image segmentation result.
2. The method according to claim 1, characterized in that: the distance between a pixel and a seed point in step (3.3) is calculated as follows:
given any two pixel points α and β, the distance d(α, β) between them is defined as follows:
wherein m′ denotes the super-pixel fuzzy index; d_c(α, β) denotes the color distance and d_s(α, β) denotes the spatial distance.
3. The method according to claim 1, characterized in that: crossing, mutating and selecting the individual in the step (3.6), specifically as follows:
(3.6.1) the decomposition results of all individuals under the three criterion functions are obtained by the Chebyshev method, the optimal individual p_i is selected, and the corresponding super-pixel seed points s_{i,k} of the image are taken as the new core points;
(3.6.2) generating new individuals by crossover and mutation operations:
wherein χ, τ ∈ B(i), B(i) = [i1, i2, …, iT], χ ≠ τ ≠ i; FR′ is the mutation factor, CR′ is the crossover factor, and rand is a random number in [0,1];
(3.6.3) defining elements present in the individual that are greater than a maximum value or less than a minimum value as illegal values and repairing them as adjacent boundary values;
(3.6.4) generating new individuals using gaussian mutation operators:
wherein the normal distribution takes the current element value as its mean and S/20 as its standard deviation; pm is the mutation probability, defined as pm = 1/D.
4. The method according to claim 1, characterized in that: the super-pixel region merging based on strong and weak edges in step (4) is specifically: the strong edge information E_edge of the image is obtained with the canny edge detection operator; the spatial position features of the center points of the super-pixel regions are then selected to construct the set cen = [cen_1, cen_2, …, cen_K], and the spatial distance between any two center points is calculated; finally, for each super-pixel region it is judged whether a strong edge lies on the line connecting the center points of adjacent 8-neighborhood super-pixels: if so, the regions are not merged; if not, the two super-pixel regions are merged; this realizes super-pixel region merging based on the strong and weak edges of the image and yields G merged super-pixel regions R = [R_1, R_2, …, R_G].
5. The method according to claim 1, characterized in that: the selection strategy based on the angle penalized distance in step (12) is specifically: first, the APD values of the population individuals are calculated, and then the individual with the smallest APD value is selected to balance diversity and convergence; the APD value is calculated according to the following formula:
wherein the norm term denotes the Euclidean distance from the translated objective vector to the origin, θ_{t,i,e} denotes the angle between the objective vector and the reference vector v_{t,e} to which it belongs, and P(θ_{t,i,e}) denotes the penalty function, calculated as:
wherein β denotes a parameter controlling the rate of change of the penalty, and the angle term denotes the minimum angle between the reference vector v_{t,e} and the other reference vectors in the current generation.
CN202110806823.3A 2021-07-16 2021-07-16 Semi-supervised intuitive clustering method based on decomposition multi-target differential evolution superpixel Active CN113469270B (en)

Publications (2)

Publication Number Publication Date
CN113469270A CN113469270A (en) 2021-10-01
CN113469270B true CN113469270B (en) 2023-08-11



Citations (2)

Publication number Priority date Publication date Assignee Title
CN103839261A (en) * 2014-02-18 2014-06-04 西安电子科技大学 SAR image segmentation method based on decomposition evolution multi-objective optimization and FCM
CN108596244A (en) * 2018-04-20 2018-09-28 湖南理工学院 A kind of high spectrum image label noise detecting method based on spectrum angle density peaks

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9638678B2 (en) * 2015-01-30 2017-05-02 AgriSight, Inc. System and method for crop health monitoring

Non-Patent Citations (1)

Title
Hybrid label propagation semi-supervised intuitionistic fuzzy clustering for image segmentation with fused symmetry characteristics; Zhao Feng; Lin Xiaojuan; Liu Hanqiang; Journal of Signal Processing (Issue 09); full text *

Similar Documents

Publication Publication Date Title
Liu et al. CNN-enhanced graph convolutional network with pixel-and superpixel-level feature fusion for hyperspectral image classification
CN109858390B (en) Human skeleton behavior identification method based on end-to-end space-time diagram learning neural network
Chen et al. Automatic graph learning convolutional networks for hyperspectral image classification
Zhao et al. Optimal-selection-based suppressed fuzzy c-means clustering algorithm with self-tuning non local spatial information for image segmentation
Nakane et al. Application of evolutionary and swarm optimization in computer vision: a literature survey
CN107633226B (en) Human body motion tracking feature processing method
CN110188763B (en) Image significance detection method based on improved graph model
CN110443257B (en) Significance detection method based on active learning
Yu et al. A re-balancing strategy for class-imbalanced classification based on instance difficulty
Kandhway et al. Spatial context cross entropy function based multilevel image segmentation using multi-verse optimizer
Yang et al. Color texture segmentation based on image pixel classification
Yang et al. High-resolution remote sensing image classification using associative hierarchical CRF considering segmentation quality
CN113807176A (en) Small sample video behavior identification method based on multi-knowledge fusion
CN114723037A (en) Heterogeneous graph neural network computing method for aggregating high-order neighbor nodes
Liu et al. Multiobjective fuzzy clustering with multiple spatial information for Noisy color image segmentation
Cheng et al. Leveraging semantic segmentation with learning-based confidence measure
Teng et al. BiSeNet-oriented context attention model for image semantic segmentation
CN113469270B (en) Semi-supervised intuitive clustering method based on decomposition multi-target differential evolution superpixel
CN111325259A (en) Remote sensing image classification method based on deep learning and binary coding
CN108921853B (en) Image segmentation method based on super-pixel and immune sparse spectral clustering
Wang et al. Salient object detection by robust foreground and background seed selection
CN113989256A (en) Detection model optimization method, detection method and detection device for remote sensing image building
CN113723449A (en) Preference information-based agent-driven multi-objective evolutionary fuzzy clustering method
CN105678798A (en) Multi-target fuzzy clustering image segmentation method combining local spatial information
CN111259938B (en) Manifold learning and gradient lifting model-based image multi-label classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant