CN111860755A - Improved particle swarm algorithm based on regression of support vector machine - Google Patents

Improved particle swarm algorithm based on regression of support vector machine

Info

Publication number
CN111860755A
CN111860755A
Authority
CN
China
Prior art keywords
particle
optimal
regression
support vector
vector machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010714392.3A
Other languages
Chinese (zh)
Inventor
何鸿天
李先允
倪喜军
王书征
张效言
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202010714392.3A priority Critical patent/CN111860755A/en
Publication of CN111860755A publication Critical patent/CN111860755A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an improved particle swarm algorithm based on support vector machine regression. Points are taken at random near the current globally optimal particle position, support vector machine regression predicts which random point is best, that point replaces a non-optimal particle, and the next iteration proceeds. As the accumulated regression information grows, the prediction becomes increasingly accurate. The analogy is a flock of birds searching for food: the flock always advances toward the best known position, with some probability of not doing so, and now contains one intelligent bird that can judge, from the points the flock has already passed through, a more advantageous position near the current optimum. The random value range of the original particle swarm algorithm is changed from [0,1] to [-1,1]; after the optimal particle position is computed in each iteration, values are taken at random near it and the best one is predicted by support vector machine regression. This strengthens both the global and the local search ability of the swarm and effectively enhances the global and local optimizing capability. The method is applicable in particular to complex optimization problems such as function optimization, planning problems, pattern recognition and image processing.

Description

Improved particle swarm algorithm based on regression of support vector machine
Technical Field
The invention belongs to the technical field of evolutionary algorithms, relates to a particle swarm optimization algorithm, and particularly relates to an improved particle swarm algorithm based on support vector machine regression.
Background
Particle Swarm Optimization (PSO) is a bio-inspired method in the field of computational intelligence and belongs to the class of swarm intelligence optimization algorithms. It derives from Kennedy and Eberhart's observational studies of the social behavior of bird flocks. Because PSO is simple to operate and converges quickly, it is widely applied to neural network training, semiconductor device synthesis, decision scheduling and other fields. However, PSO handles discrete optimization problems poorly and easily falls into local optima. The main prior-art developments of the particle swarm algorithm are:
(1) Adjusting the parameters of PSO to balance the algorithm's global exploration and local exploitation abilities. For example, Shi and Eberhart introduced an inertia weight into the velocity term of the PSO update and adjusted it dynamically, linearly or nonlinearly, according to the iterative process and the particles' flight, so as to balance search globality and convergence speed. Based on a stability analysis of the position expectation and variance of the standard particle swarm algorithm, the influence of the acceleration factors on these quantities has been studied and a better-performing set of acceleration factor values obtained.
(2) Designing different types of topological structures and changing the particles' learning mode to improve population diversity; Kennedy et al. studied the influence of different topologies on PSO performance. To address PSO's tendency toward premature convergence and low optimizing precision, a simplified particle swarm form was proposed in 2003: the bare-bones particle swarm algorithm (Bare Bones PSO, BBPSO).
(3) Combining PSO with other optimization algorithms or strategies to form hybrid PSO algorithms. For example, Zengyi et al. embedded a pattern search algorithm into the PSO algorithm, so that the local search ability of pattern search and the global optimization ability of PSO complement each other.
Nevertheless, these algorithms still fall into local optima easily when optimizing complex high-dimensional multimodal functions.
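The linearly decreasing inertia weight mentioned in (1) is easy to state concretely. A minimal sketch of the common linear schedule follows; the 0.9 to 0.4 range matches the experiments described later in this document, everything else is an illustrative assumption:

```python
# Linearly decreasing inertia weight in the style of Shi and Eberhart:
# w starts high to favor global exploration and decays toward w_end,
# favoring local exploitation late in the run.
def inertia_weight(k, k_max, w_start=0.9, w_end=0.4):
    """Inertia weight at iteration k of k_max (linear schedule)."""
    return w_start - (w_start - w_end) * k / k_max

# Example: weight at the start, middle and end of a 500-iteration run
weights = [inertia_weight(k, 500) for k in (0, 250, 500)]
```

A nonlinear variant would replace the linear term with, e.g., a quadratic in k / k_max; the balancing idea is the same.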
Summary of the invention:
In order to solve the above technical problems, the invention provides a particle swarm optimization algorithm based on support vector machine regression, applicable in particular to solving complex optimization problems such as function optimization, energy storage planning, pattern recognition and image processing.
The technical scheme adopted by the invention is as follows: a particle swarm optimization algorithm based on support vector machine regression is characterized by comprising the following steps:
step 1: initializing parameters, population positions and speeds of a regression improved particle swarm algorithm based on a support vector machine;
step 2: calculating the fitness value of each particle to obtain an individual optimal fitness value and a group optimal fitness value, recording the position of each particle and the corresponding fitness value, and respectively using the positions and the corresponding fitness values as a training set and a training label of the support vector machine regression;
Step 3: randomly selecting a plurality of positions near the particle position corresponding to the optimal fitness value of the group;
Step 4: inputting the selected position parameters, as a verification set, into the support vector machine regression to obtain the corresponding regression fitness values;
Step 5: selecting the optimal regression fitness value from step 4 and its corresponding position;
Step 6: calculating the actual fitness value of the position selected in step 5;
Step 7: replacing the position of a randomly chosen particle that does not hold the optimal fitness with the position evaluated in step 6, forming a new particle;
Step 8: calculating the fitness value of the new particle and updating the individual optimal fitness values and the group optimal fitness value;
Step 9: updating the speed and position of all particles;
Step 10: judging whether the improved particle swarm algorithm has converged or reached the maximum number of iterations; if so, outputting the position of the global optimal solution; otherwise, returning to step 2.
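The ten steps above can be sketched as follows. This is a minimal illustrative implementation under stated assumptions, not the patented implementation itself: scikit-learn's SVR stands in for the ε-SVR, the search bounds and swarm size are arbitrary, c1 = c2 = 1.49445 and the linearly decreasing inertia weight are taken from the experiments described later, and fitness is minimised.

```python
import numpy as np
from sklearn.svm import SVR  # stand-in for the eps-SVR described in the text

def improved_pso(f, dim=2, n=20, iters=30, t=10, seed=0,
                 xlim=(-5.0, 5.0), vlim=(-1.0, 1.0)):
    """Illustrative sketch of steps 1-10 (minimisation)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(*xlim, (n, dim))             # step 1: positions
    V = rng.uniform(*vlim, (n, dim))             # step 1: velocities
    fit = np.array([f(x) for x in X])            # step 2: fitness values
    P, pfit = X.copy(), fit.copy()               # individual bests
    g, gfit = X[fit.argmin()].copy(), float(fit.min())  # group best
    arch_X, arch_y = [X.copy()], [fit.copy()]    # SVR training set / labels
    for k in range(iters):
        svr = SVR(kernel="rbf", epsilon=0.1)     # step 2: fit the regressor
        svr.fit(np.vstack(arch_X), np.concatenate(arch_y))
        cand = g + g * rng.uniform(-1.0, 1.0, (t, dim))      # step 3
        best_cand = cand[int(np.argmin(svr.predict(cand)))]  # steps 4-5
        j = int(rng.integers(n))                 # step 7: a random particle,
        if j == int(np.argmin(fit)):             # but never the best one
            j = (j + 1) % n
        X[j], fit[j] = best_cand, f(best_cand)   # step 6: true fitness
        r1, r2 = rng.uniform(-1.0, 1.0, 2)       # step 9: r1, r2 in [-1, 1]
        w = 0.9 - 0.5 * k / iters                # linearly decreasing inertia
        V = np.clip(w * V + 1.49445 * r1 * (P - X)
                    + 1.49445 * r2 * (g - X), *vlim)
        X = np.clip(X + V, *xlim)                # out-of-range -> boundary
        fit = np.array([f(x) for x in X])        # step 8: refresh the bests
        better = fit < pfit
        P[better], pfit[better] = X[better], fit[better]
        if pfit.min() < gfit:
            g, gfit = P[pfit.argmin()].copy(), float(pfit.min())
        arch_X.append(X.copy()); arch_y.append(fit.copy())
    return g, gfit                               # step 10 (fixed budget here)
```

Step 10's convergence test is replaced here by a fixed iteration budget for brevity.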
Preferably, in step 1, the parameters include the population size n, the maximum iteration number k, the inertia weight w, the learning factors c1 and c2, the random numbers r1 and r2 drawn from [-1,1], the number of random positions t, and the random-position random number Z drawn from [-1,1].
Preferably, step 2 further includes calculating the corresponding support vector machine penalty coefficient c and gamma value using pso-svm, wherein the support vector machine regression type is ε-SVR, the kernel function is the RBF kernel, and the loss parameter p in the ε-SVR is 0.1.
Preferably, in step 2, the number of recorded particle positions and corresponding fitness values grows with the number of iterations; for example, thirty-one particle positions with their thirty-one fitness values are known after the first generation and sixty-two after the second, after which the support vector machine penalty coefficient c and gamma value are recalculated by the original particle swarm algorithm. Through the support vector machine regression, the globality of the improved algorithm is strengthened and local convergence becomes faster and more accurate.
Preferably, step 3 is implemented by setting the optimal position as X_n and taking adjacent random positions X_n' = X_n + X_n Z, where Z is a random number in [-1,1] and t is the number of positions selected. This increases the globality of the improved algorithm and effectively avoids local optima.
Preferably, in step 9, the particle swarm velocity update equation is:
V_{iD}^{k+1} = w V_{iD}^{k} + c_1 r_1 (P_{iD} - X_{iD}^{k}) + c_2 r_2 (P_{gD} - X_{iD}^{k});
and the position update equation is:
X_{iD}^{k+1} = X_{iD}^{k} + V_{iD}^{k+1};
where w is the inertia weight, k is the iteration number, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers uniformly distributed in the interval [-1,1]. The position and velocity in the d-th dimension vary within [X_min, X_max] and [V_min, V_max], respectively.
The particle swarm contains n particles searching a D-dimensional space; the position and velocity of the i-th particle are x_i and v_i:
x_i = (x_{i1}, ..., x_{id}, ..., x_{iD});
v_i = (v_{i1}, ..., v_{id}, ..., v_{iD});
The position of the particle with the best fitness in the population is recorded as:
P_g = (P_{g1}, P_{g2}, ..., P_{gD});
and the best position in the solution space experienced by the i-th particle is:
P_i = (P_{i1}, P_{i2}, ..., P_{iD}).
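The update equations above transcribe directly into code. In this sketch, r1 and r2 are redrawn from [-1,1] at every update and out-of-range values are set to the boundaries; the parameter defaults and bounds are illustrative assumptions, not values fixed by the method:

```python
import numpy as np

def pso_step(x, v, p_i, p_g, w=0.7, c1=1.49445, c2=1.49445,
             vlim=(-1.0, 1.0), xlim=(-5.0, 5.0), rng=None):
    """One velocity/position update for a single particle."""
    rng = rng if rng is not None else np.random.default_rng()
    r1, r2 = rng.uniform(-1.0, 1.0, 2)      # r1, r2 drawn from [-1, 1]
    v_new = w * v + c1 * r1 * (p_i - x) + c2 * r2 * (p_g - x)
    v_new = np.clip(v_new, *vlim)           # values beyond the allowed
    x_new = np.clip(x + v_new, *xlim)       # range become boundary values
    return x_new, v_new
```

Note the enlarged random range [-1,1]: unlike standard PSO's [0,1], a negative r1 or r2 can push a particle away from its attractors, which is the source of the added globality claimed above.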
the invention has the beneficial effects that:
the invention randomly takes points beside the current globally optimal particle position, uses a support vector machine to carry out regression prediction on which random point is optimal, then replaces a non-optimal particle, carries out next iteration, and can carry out more and more accurate prediction along with the continuous increase of input regression information. Analogize when the bird crowd seeks food, the bird crowd always will advance to known optimal position, and the bird crowd has certain probability not to advance to optimal position now and has had an intelligence inside, can foresee the position of more advantage near optimal position according to the point that the bird crowd passed through. The invention simulates the behaviors, changes the random value range of the original particle swarm algorithm from [0,1] to [ -1,1], randomly takes values beside the optimal value after the optimal particle position is calculated each time, and predicts which point is optimal by regression of a support vector machine, thereby enhancing the global property and the local property of the swarm and effectively enhancing the global and local optimizing capability. The method can be particularly applied to the solving process of complex optimization problems such as function optimization, energy storage planning problems, mode identification and image processing problems and the like.
Drawings
FIG. 1: convergence curve for the Sphere Model function in the embodiment of the invention;
FIG. 2: convergence curve for the Generalized Rastrigin's function in the embodiment of the invention;
FIG. 3: is a flow chart of an embodiment of the present invention.
Detailed Description
To facilitate understanding and implementation by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the embodiments described herein are merely illustrative and explanatory and are not restrictive of the invention.
As shown in fig. 3, the particle swarm optimization algorithm based on support vector machine regression of the present invention includes the following steps:
Step 1: initializing the parameters of the support-vector-machine-regression improved particle swarm algorithm, including the population size n, the maximum iteration number k, the inertia weight w, the learning factors c_1 and c_2, the random numbers r_1 and r_2 drawn from [-1,1], the number of random positions t, and the random-position random number Z drawn from [-1,1];
Step 2: initializing the population positions and velocities; calculating the fitness value of each particle to obtain the individual optimal fitness values and the group optimal fitness value; recording the position of each particle and its fitness value and using them, respectively, as the training set and training labels of the support vector machine regression; and calculating the corresponding support vector machine penalty coefficient c and gamma value using pso-svm, where the regression type is ε-SVR, the kernel function is the RBF kernel, and the loss parameter p in the ε-SVR is 0.1.
Step 3: randomly taking a plurality of positions beside the particle position corresponding to the optimal fitness value of the group;
Step 4: inputting the selected position parameters, as a verification set, into the support vector machine regression to obtain the corresponding regression fitness values;
Step 5: selecting the optimal regression fitness value from step 4 and its corresponding position;
Step 6: calculating the actual fitness value of the position selected in step 5;
Step 7: replacing the position of a randomly chosen particle that does not hold the optimal fitness with the position evaluated in step 6;
Step 8: calculating the fitness value of the new particle and updating the individual optimal fitness values and the group optimal fitness value;
Step 9: updating the speed and position of all particles;
Step 10: judging whether the improved particle swarm algorithm has converged or reached the maximum number of iterations; if so, outputting the position of the global optimal solution as the solution of the optimization problem; otherwise, returning to step 2.
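Step 2's use of particle positions as the training set and fitness values as the training labels can be shown concretely. A minimal sketch using scikit-learn's SVR as the ε-SVR (an assumption; any ε-SVR implementation works), with placeholder C and gamma values rather than the pso-svm-tuned ones:

```python
import numpy as np
from sklearn.svm import SVR  # assumed stand-in for the eps-SVR

# Recorded particle positions form the training set; their fitness
# values form the training labels, as step 2 describes.
rng = np.random.default_rng(0)
positions = rng.uniform(-5.0, 5.0, (30, 2))    # training set
fitness = (positions ** 2).sum(axis=1)         # training labels (sphere here)

# eps-SVR with RBF kernel and loss parameter p = 0.1; C and gamma are
# placeholders, not the pso-svm-tuned values.
svr = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.1)
svr.fit(positions, fitness)
predicted = svr.predict(positions[:3])         # regression fitness values
```

Each generation, the newly recorded positions and fitness values would be appended to `positions` and `fitness` before refitting, which is why the prediction improves as iterations accumulate.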
Preferably, in step 2, the number of recorded particle positions and corresponding fitness values grows with the number of iterations; for example, thirty-one particle positions with their thirty-one fitness values are known after the first generation and sixty-two after the second, after which the support vector machine penalty coefficient c and gamma value are recalculated by the original particle swarm algorithm. Through the support vector machine regression, the globality of the improved algorithm is strengthened and local convergence becomes faster and more accurate.
Preferably, the specific implementation of step 3 is as follows: the optimal position is X_n, and adjacent random positions X_n' = X_n + X_n Z are taken, where Z is a random number in [-1,1] and t positions are drawn in total. This increases the globality of the improved algorithm and effectively avoids local optima.
The specific implementation mode is as follows:
Assume that the particle swarm contains n particles, the search space is D-dimensional, and the position and velocity of the i-th particle are x_i and v_i; then:
x_i = (x_{i1}, ..., x_{id}, ..., x_{iD})    (1)
v_i = (v_{i1}, ..., v_{id}, ..., v_{iD})    (2)
The position of the particle with the best fitness in the population is recorded as:
P_g = (P_{g1}, P_{g2}, ..., P_{gD})    (3)
The best position in the solution space experienced by the i-th particle is:
P_i = (P_{i1}, P_{i2}, ..., P_{iD})    (4)
The velocity and position update equations of the particle swarm optimization are:
V_{iD}^{k+1} = w V_{iD}^{k} + c_1 r_1 (P_{iD} - X_{iD}^{k}) + c_2 r_2 (P_{gD} - X_{iD}^{k})    (5)
X_{iD}^{k+1} = X_{iD}^{k} + V_{iD}^{k+1}    (6)
where w is the inertia weight, k is the iteration number, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers uniformly distributed in the interval [-1,1]. The position and velocity in the d-th dimension vary within [X_min, X_max] and [V_min, V_max], respectively; if a value computed by equation (5) or (6) exceeds its range, it is set to the boundary value.
After the first iteration the current P_g is known; t points are then taken at random around it, with Z a random number in [-1,1]:
P_g' = P_g + P_g Z    (7)
This corresponds to adding an intelligent bird to the flock that can predict the optimum from all previously visited positions: after the optimal value is computed in each iteration, values are taken at random beside it and support vector machine regression predicts which point is optimal. The regression-predicted optimal point then replaces an arbitrary non-optimal particle position, its fitness value is calculated, and it replaces the previous fitness value of the replaced particle.
The next iteration is then performed.
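The sampling of equation (7) together with the replacement step can be sketched as follows. `predict` is any callable mapping positions to predicted fitness values (the fitted SVR in the method; the true sphere function is used below only to keep the example self-contained), and the helper name is hypothetical:

```python
import numpy as np

def replace_with_predicted(X, fit, p_g, t, predict, rng=None):
    """Take t points around p_g per eq. (7), pick the predicted-best one,
    and overwrite one randomly chosen non-best particle with it."""
    rng = rng if rng is not None else np.random.default_rng(0)
    cand = p_g + p_g * rng.uniform(-1.0, 1.0, (t, X.shape[1]))  # eq. (7)
    best = cand[int(np.argmin(predict(cand)))]   # predicted-optimal point
    j = int(rng.integers(len(X)))
    if j == int(np.argmin(fit)):                 # never replace the best one
        j = (j + 1) % len(X)
    X = X.copy()
    X[j] = best
    return X, j

X = np.array([[3.0, 3.0], [0.1, 0.1], [-2.0, 4.0]])
fit = (X ** 2).sum(axis=1)                       # best particle is index 1
X2, j = replace_with_predicted(X, fit, X[1], t=5,
                               predict=lambda c: (c ** 2).sum(axis=1))
```

The replaced particle's true fitness would then be evaluated before the swarm update, as the text describes.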
To evaluate the performance of the improved algorithm, seven standard test functions were optimized with it; the test functions are listed in Table 1. The experimental environment was Matlab R2016a on an i7-10700K 3.8 GHz CPU.
Table 1: test functions
[Table 1 appears only as an image in the original document; its contents are not reproduced here.]
In Table 1, the Sphere Model function is unimodal; Schwefel's Problem 2.22, Alpine, Generalized Rastrigin's and Generalized Griewank are multimodal. In these functions N is the dimension; their minimum value is 0 and the optimal solution is 0 (for any N). The Eggholder function is a two-dimensional multimodal test function that attains its minimum f(x) = -959.640669 at x = (512, 404.2319). Michalewicz is a multimodal test function of selectable dimension (2, 5 or 10); here the dimension is 5 and m = 10 (m is a steepness coefficient: the larger it is, the harder the optimization), with minimum value -4.687658. The improved PSO, PSO and GA were used to optimize the above functions. In all three algorithms the dimension N = 15 was selected (except for the two fixed-dimension functions above), the population size was 40, and the maximum number of iterations was 500. In PSO, c1 = c2 = 1.49445 and the inertia weight w decreased linearly from 0.9 to 0.4 with the iterations; the other parameters of the improved PSO were set as in PSO. The GA parameters were: crossover probability 0.7, mutation probability 0.01, with a roulette-wheel selection mechanism. Each function's optimization experiment was run 30 times; the optimal value, mean and variance of the 30 solutions are shown in Table 2.
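Two of the listed benchmarks can be written out with their standard textbook definitions (the table itself is only an image in the original, so these formulas are the conventional ones, not copied from it):

```python
import numpy as np

def sphere(x):
    """Sphere Model: sum of squares, minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float((x ** 2).sum())

def rastrigin(x):
    """Generalized Rastrigin's: highly multimodal, minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float((x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0).sum())
```

These are the two functions whose convergence curves are analyzed in FIGS. 1 and 2.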
Table 2: test data (values smaller than 10^-10 are recorded as 0)
[Table 2 appears only as an image in the original document; its contents are not reproduced here.]
As can be seen from Table 2, the optimal solutions of the improved PSO on all functions are better than those of GA and PSO, so the global search ability of the improved PSO is better than that of the other two algorithms. The mean and variance of the improved PSO are better than those of PSO and GA on all functions except the Generalized Griewank function. Therefore, the overall search performance of the improved PSO is superior to that of the other two algorithms.
Convergence curves were used to evaluate the three algorithms; without loss of generality, the unimodal Sphere Model function and the multimodal Generalized Rastrigin's function were selected for analysis. The convergence curves are shown in FIGS. 1 and 2. They show that the improved PSO has better global search ability and convergence than the other two algorithms: it needs fewer iterations than GA to find the global optimum, and although its convergence rate is lower than that of PSO, its precision is higher. The improved PSO therefore shows a significant improvement on the optimized functions, and its overall performance is good.
The invention provides an improved particle swarm algorithm based on support vector machine regression. It first enlarges the range of the random numbers to [-1,1], strengthening the algorithm's global search ability; it then uses support vector machine regression to select a probably-better point beside the current optimum and substitutes it for a random non-optimal position, increasing the local search ability. The two mechanisms complement each other. Tests on standard benchmark functions show that the algorithm solves continuous and discrete optimization problems efficiently.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. An improved particle swarm algorithm based on support vector machine regression is characterized by comprising the following steps:
step 1: initializing parameters, population positions and speeds of a regression improved particle swarm algorithm based on a support vector machine;
step 2: calculating the fitness value of each particle to obtain an individual optimal fitness value and a group optimal fitness value, recording the position of each particle and the corresponding fitness value, and respectively using the positions and the corresponding fitness values as a training set and a training label of the support vector machine regression;
Step 3: randomly selecting a plurality of positions near the particle position corresponding to the optimal fitness value of the group;
Step 4: inputting the selected position parameters, as a verification set, into the support vector machine regression to obtain the corresponding regression fitness values;
Step 5: selecting the optimal regression fitness value from step 4 and its corresponding position;
Step 6: calculating the actual fitness value of the position selected in step 5;
Step 7: replacing the position of a randomly chosen particle that does not hold the optimal fitness with the position evaluated in step 6, forming a new particle;
Step 8: calculating the fitness value of the new particle and updating the individual optimal fitness values and the group optimal fitness value;
Step 9: updating the speed and position of all particles;
Step 10: judging whether the improved particle swarm algorithm has converged or reached the maximum number of iterations; if so, outputting the position of the global optimal solution; otherwise, returning to step 2.
2. The improved particle swarm algorithm based on support vector machine regression as claimed in claim 1, wherein: in step 1, the parameters include the population size n, the maximum iteration number k, the inertia weight w, the learning factors c_1 and c_2, the random numbers r_1 and r_2 drawn from [-1,1], the number of random positions t, and the random-position random number Z drawn from [-1,1].
3. The improved particle swarm algorithm based on support vector machine regression as claimed in claim 1, wherein: step 2 further comprises calculating the corresponding support vector machine penalty coefficient c and gamma value using pso-svm, the support vector machine regression type being ε-SVR, the kernel function being the RBF kernel, and the loss parameter p in the ε-SVR being 0.1.
4. The improved particle swarm algorithm based on support vector machine regression as claimed in claim 1, wherein: in step 2, the number of recorded particle positions and corresponding fitness values increases with the number of iterations, after which the support vector machine penalty coefficient c and gamma value are calculated by the original particle swarm algorithm.
5. The support vector machine regression-based particle swarm optimization algorithm of claim 1, wherein: the specific implementation of step 3 is that the optimal position is X_n and adjacent random positions X_n' = X_n + X_n Z are taken, where Z is a random number in [-1,1] and t is the number of positions selected.
6. The support vector machine regression-based particle swarm optimization algorithm of claim 1, wherein: in step 9, the particle swarm velocity update equation is:
V_{iD}^{k+1} = w V_{iD}^{k} + c_1 r_1 (P_{iD} - X_{iD}^{k}) + c_2 r_2 (P_{gD} - X_{iD}^{k});
the position update equation is:
X_{iD}^{k+1} = X_{iD}^{k} + V_{iD}^{k+1};
where w is the inertia weight, k is the iteration number, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers uniformly distributed in the interval [-1,1]; the position and velocity in the d-th dimension vary within [X_min, X_max] and [V_min, V_max], respectively; the particle swarm contains n particles, the search space is D-dimensional, and the position and velocity of the i-th particle are x_i and v_i:
x_i = (x_{i1}, ..., x_{id}, ..., x_{iD});
v_i = (v_{i1}, ..., v_{id}, ..., v_{iD});
the position of the particle with the best fitness in the population is recorded as:
P_g = (P_{g1}, P_{g2}, ..., P_{gD});
and the best position in the solution space experienced by the i-th particle is:
P_i = (P_{i1}, P_{i2}, ..., P_{iD}).
CN202010714392.3A 2020-07-22 2020-07-22 Improved particle swarm algorithm based on regression of support vector machine Pending CN111860755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010714392.3A CN111860755A (en) 2020-07-22 2020-07-22 Improved particle swarm algorithm based on regression of support vector machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010714392.3A CN111860755A (en) 2020-07-22 2020-07-22 Improved particle swarm algorithm based on regression of support vector machine

Publications (1)

Publication Number Publication Date
CN111860755A true CN111860755A (en) 2020-10-30

Family

ID=72949680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010714392.3A Pending CN111860755A (en) 2020-07-22 2020-07-22 Improved particle swarm algorithm based on regression of support vector machine

Country Status (1)

Country Link
CN (1) CN111860755A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819062A (en) * 2021-01-26 2021-05-18 淮阴工学院 Fluorescence spectrum quadratic characteristic selection method based on mixed particle swarm and continuous projection

Similar Documents

Publication Publication Date Title
Alswaitti et al. Density-based particle swarm optimization algorithm for data clustering
Gharehchopogh et al. A comprehensive survey: Whale Optimization Algorithm and its applications
Wang et al. Stud krill herd algorithm
CN108304316B (en) Software defect prediction method based on collaborative migration
Nayak et al. Hybrid chemical reaction based metaheuristic with fuzzy c-means algorithm for optimal cluster analysis
CN112232413B (en) High-dimensional data feature selection method based on graph neural network and spectral clustering
Tran et al. Overview of particle swarm optimisation for feature selection in classification
Rashno et al. Particle ranking: An efficient method for multi-objective particle swarm optimization feature selection
CN110991518B (en) Two-stage feature selection method and system based on evolutionary multitasking
Cao et al. A PSO-based cost-sensitive neural network for imbalanced data classification
CN108985515A (en) A kind of new energy based on independent loops neural network goes out force prediction method and system
CN110287985B (en) Depth neural network image identification method based on variable topology structure with variation particle swarm optimization
Zhai et al. Instance selection for time series classification based on immune binary particle swarm optimization
Shu et al. A modified hybrid rice optimization algorithm for solving 0-1 knapsack problem
CN104091038A (en) Method for weighting multiple example studying features based on master space classifying criterion
CN113255873A (en) Clustering longicorn herd optimization method, system, computer equipment and storage medium
Ghorpade-Aher et al. PSO based multidimensional data clustering: A survey
CN111860755A (en) Improved particle swarm algorithm based on regression of support vector machine
Yang et al. Feature selection using memetic algorithms
Mohamad et al. Particle swarm optimization for gene selection in classifying cancer classes
Urade et al. Study and analysis of particle swarm optimization: a review
Mangat Survey on particle swarm optimization based clustering analysis
Oloruntoba et al. Clan-based cultural algorithm for feature selection
Mohamad et al. Particle swarm optimization with a modified sigmoid function for gene selection from gene expression data
Liu et al. Swarm intelligence for classification of remote sensing data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination