CN105678401A - Global optimization method based on strategy adaptability differential evolution - Google Patents


Publication number
CN105678401A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN201511010201.0A
Other languages
Chinese (zh)
Inventor
张贵军
周晓根
俞旭锋
郝小虎
徐东伟
李章维
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201511010201.0A priority Critical patent/CN105678401A/en
Publication of CN105678401A publication Critical patent/CN105678401A/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/12: Computing arrangements based on biological models using genetic models
    • G06N 3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The invention relates to a global optimization method based on strategy-adaptive differential evolution. The method first computes the distance between each individual in the population and the best individual of the current population, and ranks the whole population by distance and by objective function value respectively. It then uses the mean error between the distance ranking and the objective-value ranking to judge how the individuals of the current population are distributed, and thereby to determine which search state the algorithm is in, namely global detection or local search; several different mutation strategies are provided for each search state. Finally, for each individual in the population, one mutation strategy is randomly selected from the strategy pool of the current state to generate a new individual, thereby balancing the algorithm's global detection ability and local enhancement ability. The method effectively prevents an improper strategy choice from degrading the algorithm's performance and improves optimization performance.

Description

A global optimization method based on strategy-adaptive differential evolution
Technical field
The present invention relates to the fields of intelligent optimization and computer applications, and in particular to a global optimization method based on strategy-adaptive differential evolution.
Background technology
Global optimization problems are frequently encountered in fields such as economics, science, and engineering. In global optimization, an algorithm must find the globally optimal solution among numerous locally optimal ones, and the greatest difficulty of global optimization methods is precisely that they may become trapped in a local optimum and fail to reach the global optimum. As engineering optimization grows increasingly sophisticated, the objective functions of optimization problems also become more and more complex: they are often discontinuous, non-differentiable, and highly nonlinear, lack an explicit analytical expression, and exhibit multi-modal and multi-objective features. Traditional optimization methods (such as gradient-based methods) therefore cannot be used to solve such challenging problems.
In recent years, evolutionary algorithms (EAs) have been widely applied in many fields as global optimization techniques. Following the mechanisms of natural evolution and survival of the fittest, evolutionary algorithms use a cooperative learning process among the individuals of a population to guide evolution, generate offspring through stochastic operations (such as mutation and recombination), and then use a selection operation to retain the individuals with better fitness.
The differential evolution (DE) algorithm, a stochastic algorithm, has proved to be one of the simplest yet most powerful global optimization methods among evolutionary algorithms. Like other evolutionary algorithms, DE comprises three operations: mutation, crossover, and selection. A new individual is produced by combining a mutant with the corresponding parent individual, where the mutant is generated from the distribution of solutions in the current population. If the fitness of the new individual is better than that of the parent, the new individual replaces the parent. DE is general and problem-independent, simple in principle, easy to implement, remembers individual optimal solutions, shares information within the population, and has strong global convergence ability. DE has therefore shown unique advantages in wide-ranging applications in fields such as communications, power systems, optics, chemical engineering, and mechanical engineering.
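As a reference point for the strategy-adaptive scheme described below, the classic DE loop (mutation, binomial crossover, greedy selection) summarized above can be sketched as follows. This is an illustrative minimal sketch of the standard DE/rand/1/bin variant, not the invention itself; the function and parameter names are assumptions, and bound handling is omitted for brevity.

```python
import random

def de_rand_1_bin(f, bounds, np_=30, F=0.5, CR=0.9, max_evals=3000, seed=1):
    """Classic DE/rand/1/bin: mutation, binomial crossover, greedy selection.

    Minimizes f; bound handling after mutation is omitted for brevity.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    evals = np_
    while evals < max_evals:
        for i in range(np_):
            # mutation: three mutually distinct individuals, all different from i
            a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one gene from the mutant
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() <= CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            evals += 1
            if ft <= fit[i]:  # selection: keep the better of trial and parent
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a simple test such as a 5-dimensional sphere function, this loop quickly drives the best objective value toward zero.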
Although DE has been widely applied in many fields, it also exposes some weaknesses in theory and application. In DE, each mutation strategy has different characteristics: some mutation strategies have strong global detection ability but weak local search ability, which makes the algorithm converge slowly in the late stage; others have weak global detection ability but strong local search ability, which easily traps the algorithm in a local optimum and causes premature convergence. For a specific problem, how to choose the most suitable strategy among numerous mutation strategies is therefore directly related to the success of the solution. Moreover, as evolution proceeds, the algorithm may search in different regions, switching constantly between the global-detection and local-search states, and different regions may require different strategies.
To address the difficulty of selecting a DE mutation strategy, many scholars have proposed approaches. Zamuda et al. assign a fixed selection probability to each mutation strategy and then use a random parameter to decide which strategy to select; Xie et al. use a neural network to adaptively update the weight of each mutation strategy based on its earlier success rate; Qin et al. provide multiple mutation strategies in the DE algorithm and dynamically update the selection probability of each strategy according to its earlier success rate; Wang et al. set up a strategy pool in the algorithm and generate new individuals through competition among the strategies. These methods achieve a certain effect, but for some large-scale problems, strategy selection remains a difficult problem.
The existing global optimization methods based on differential evolution therefore have defects in strategy selection and need to be improved.
Summary of the invention
To overcome the deficiency in strategy selection of existing global optimization methods based on differential evolution, the present invention judges the state of each individual during evolution from its fitness information and distance information, and then selects a suitable strategy according to each individual's state. It thus proposes a global optimization method based on strategy-adaptive differential evolution that effectively prevents an improper strategy choice from degrading the algorithm's performance and improves optimization performance.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A global optimization method based on strategy-adaptive differential evolution, the optimization method comprising the following steps:
1) Initialization: set the population size Np, the initial crossover probability CR, and the initial scaling factor F;
2) Randomly generate the initial population P = {x^{1,g}, x^{2,g}, ..., x^{Np,g}} and evaluate the objective function of each individual, where g is the generation index and x^{i,g}, i = 1, 2, ..., Np, denotes the i-th individual of generation g; g = 0 denotes the initial population;
3) Sort the individuals in descending order of their objective function values f(x^{i,g}), record the rank F_{i,g} of each individual, and find the best individual x^{best,g} of the current population, where F_{i,g} is the objective-value rank of the i-th individual of generation g;
4) Compute the distance d_{i,g} between each individual and the best individual x^{best,g} according to formula (1):

d_{i,g} = sqrt( Σ_{j=1}^{N} (x_j^{i,g} − x_j^{best,g})² )    (1)

where d_{i,g} is the distance between the i-th individual of generation g and the best individual x^{best,g}, x_j^{i,g} is the j-th element of individual x^{i,g}, x_j^{best,g} is the j-th element of the best individual x^{best,g}, N is the problem dimension, and Np is the population size;
5) Sort the individuals in descending order of their distances d_{i,g} to the best individual and record the rank D_{i,g} of each individual, where D_{i,g} is the distance rank of the i-th individual of generation g;
6) Compute the mean error E_g between the objective-value ranking and the distance ranking of each generation according to formula (2):

E_g = (1/Np) Σ_{i=1}^{Np} |F_{i,g} − D_{i,g}|    (2)

where E_g is the mean error of generation g;
7) Normalize the mean error E_g according to formula (3):

Ē_g = (E_g − E_min^g) / (E_max^g − E_min^g)    (3)

where Ē_g is the normalized value of E_g; E_min^g is the minimum of E_g, whose value is always 0; and E_max^g is the maximum of E_g, with E_max^g = Np/2 when the population size Np is even and E_max^g = (Np² − 1)/(2Np) when Np is odd;
8) Determine the current state of the evolutionary process, and mutate each individual in the population with a randomly selected mutation strategy:
8.1) If Ē_g ≥ rand(0,1), the algorithm is in the global-detection stage, and mutation is performed according to formula (4):

v_j^{i,g} = x_j^{pbest,g} + F·(x_j^{a,g} − x_j^{b,g}),  if randn(1,3) = 1
v_j^{i,g} = x_j^{a,g} + F·(x_j^{pbest,g} − x_j^{a,g}) + F·(x_j^{b,g} − x_j^{c,g}),  if randn(1,3) = 2
v_j^{i,g} = x_j^{i,g} + F·(x_j^{pbest,g} − x_j^{i,g}) + F·(x_j^{a,g} − x_j^{b,g}),  otherwise    (4)

8.2) If Ē_g < rand(0,1), the algorithm is in the local-search stage, and mutation is performed according to formula (5):

v_j^{i,g} = x_j^{best,g} + F·(x_j^{a,g} − x_j^{b,g}),  if randn(1,3) = 1
v_j^{i,g} = x_j^{a,g} + F·(x_j^{best,g} − x_j^{a,g}) + F·(x_j^{b,g} − x_j^{c,g}),  if randn(1,3) = 2
v_j^{i,g} = x_j^{i,g} + F·(x_j^{best,g} − x_j^{i,g}) + F·(x_j^{a,g} − x_j^{b,g}),  otherwise    (5)

In 8.1) and 8.2), rand(0,1) is a random decimal drawn from the interval [0,1]; j = 1, 2, ..., N, where N is the problem dimension; g is the generation index; randn(1,3) is a random integer drawn from the interval [1,3]; a, b, c ∈ {1, 2, ..., Np} with a ≠ b ≠ c ≠ i, where i is the index of the current target individual; v_j^{i,g} is the j-th element of the mutant of the i-th target individual of generation g; x_j^{a,g}, x_j^{b,g}, x_j^{c,g} are the j-th elements of individuals a, b, c of generation g; x_j^{pbest,g} is the j-th element of a best individual randomly selected from the top 0.5·Np·randb(0,1) individuals, where randb(0,1) is a random decimal between 0 and 1; x_j^{best,g} is the j-th element of the best individual of the current generation g; and F is the scaling factor;
9) Cross each mutant with its target individual according to formula (6) to generate a new individual trial^{i,g}:

trial_j^{i,g} = v_j^{i,g},  if randb(0,1) ≤ CR or j = rnbr(j);  trial_j^{i,g} = x_j^{i,g},  otherwise    (6)

where j = 1, 2, ..., N; trial_j^{i,g} is the j-th element of the new individual trial^{i,g} corresponding to the i-th target individual of generation g; randb(0,1) is a random decimal between 0 and 1; rnbr(j) is a random integer between 1 and N; and CR is the crossover probability;
10) Update the population with each new individual according to formula (7):

x^{i,g+1} = trial^{i,g},  if f(trial^{i,g}) ≤ f(x^{i,g});  x^{i,g+1} = x^{i,g},  otherwise    (7)

where trial^{i,g} = (trial_1^{i,g}, trial_2^{i,g}, ..., trial_N^{i,g}), x^{i,g+1} = (x_1^{i,g+1}, x_2^{i,g+1}, ..., x_N^{i,g+1}), and x^{i,g} = (x_1^{i,g}, x_2^{i,g}, ..., x_N^{i,g}); formula (7) states that if the new individual is better than the target individual, the new individual replaces the target individual; otherwise the target individual is kept unchanged;
11) Check whether the termination condition is satisfied; if so, save the result and exit, otherwise return to step 3).
Further, in step 11), the termination condition is a maximum number of function evaluations; other termination conditions may of course be used.
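The ranking-and-error machinery of steps 3) through 8) can be sketched as follows. The function names are illustrative; comparing the normalized error against rand(0,1) is one plausible reading of the state test in step 8), since rand(0,1) is defined there and used nowhere else, and larger normalized error indicates a more disordered population and hence exploration.

```python
import math
import random

def ranks_desc(values):
    """Descending ranks (1 = largest value), as used for F_{i,g} and D_{i,g}."""
    order = sorted(range(len(values)), key=values.__getitem__, reverse=True)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def search_state(pop, fvals):
    """Decide global-detection vs local-search from the rank-error measure."""
    np_ = len(pop)
    best = min(range(np_), key=fvals.__getitem__)  # best individual (minimization)
    # distance of every individual to the current best (step 4)
    dist = [math.sqrt(sum((xi - xb) ** 2 for xi, xb in zip(x, pop[best])))
            for x in pop]
    F_rank = ranks_desc(fvals)  # objective-value ranking (step 3)
    D_rank = ranks_desc(dist)   # distance ranking (step 5)
    E = sum(abs(fr - dr) for fr, dr in zip(F_rank, D_rank)) / np_  # formula (2)
    # formula (3): E_min is always 0; E_max depends on population parity
    Emax = np_ / 2 if np_ % 2 == 0 else (np_ ** 2 - 1) / (2 * np_)
    E_norm = E / Emax
    # assumed threshold rule: higher normalized error -> explore
    return ("global" if E_norm >= random.random() else "local"), E_norm
```

When the distance ranking exactly matches the objective-value ranking, the error is 0 and the algorithm leans toward local search; when the rankings are fully reversed, the normalized error reaches 1.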
The technical conception of the present invention is as follows. First, the distance between each individual in the population and the best individual of the current population is computed, and the whole population is ranked by distance and by objective function value respectively. Then, the mean error between the distance ranking and the objective-value ranking is used to judge the distribution of individuals in the current population and hence the search state of the algorithm, namely global detection or local search, and several different mutation strategies are provided for each search state. Finally, for each individual in the population, one mutation strategy is randomly selected from the strategy pool of the corresponding state to generate a new individual, thereby balancing the algorithm's global detection ability and local enhancement ability and improving its overall performance.
Beneficial effects of the present invention: the search state of the algorithm is judged from the mean error between the distance ranking and the objective-value ranking of all individuals in the population, and several suitable mutation strategies are provided for each state, so that a different mutation strategy can be randomly selected in each iteration to generate new individuals. This avoids degrading the algorithm's performance through an improper strategy choice and effectively achieves a smooth transition of the algorithm from global detection to local search.
Brief description of the drawings
Fig. 1 is the basic flowchart of the global optimization method based on strategy-adaptive differential evolution.
Fig. 2 is the mean convergence curve of the global optimization method based on strategy-adaptive differential evolution when solving the 30-dimensional Schaffer2 optimization problem.
Detailed description of the invention
The invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1 and Fig. 2, a global optimization method based on strategy-adaptive differential evolution comprises the following steps:
1) Initialization: set the population size Np, the initial crossover probability CR, and the initial scaling factor F;
2) Randomly generate the initial population P = {x^{1,g}, x^{2,g}, ..., x^{Np,g}} and evaluate the objective function of each individual, where g is the generation index and x^{i,g}, i = 1, 2, ..., Np, denotes the i-th individual of generation g; g = 0 denotes the initial population;
3) Sort the individuals in descending order of their objective function values f(x^{i,g}), record the rank F_{i,g} of each individual, and find the best individual x^{best,g} of the current population, where F_{i,g} is the objective-value rank of the i-th individual of generation g;
4) Compute the distance d_{i,g} between each individual and the best individual x^{best,g} according to formula (1):

d_{i,g} = sqrt( Σ_{j=1}^{N} (x_j^{i,g} − x_j^{best,g})² )    (1)

where d_{i,g} is the distance between the i-th individual of generation g and the best individual x^{best,g}, x_j^{i,g} is the j-th element of individual x^{i,g}, x_j^{best,g} is the j-th element of the best individual x^{best,g}, N is the problem dimension, and Np is the population size;
5) Sort the individuals in descending order of their distances d_{i,g} to the best individual and record the rank D_{i,g} of each individual, where D_{i,g} is the distance rank of the i-th individual of generation g;
6) Compute the mean error E_g between the objective-value ranking and the distance ranking of each generation according to formula (2):

E_g = (1/Np) Σ_{i=1}^{Np} |F_{i,g} − D_{i,g}|    (2)

where E_g is the mean error of generation g;
7) Normalize the mean error E_g according to formula (3):

Ē_g = (E_g − E_min^g) / (E_max^g − E_min^g)    (3)

where Ē_g is the normalized value of E_g; E_min^g is the minimum of E_g, whose value is always 0; and E_max^g is the maximum of E_g, with E_max^g = Np/2 when the population size Np is even and E_max^g = (Np² − 1)/(2Np) when Np is odd;
8) Determine the current state of the evolutionary process, and mutate each individual in the population with a randomly selected mutation strategy:
8.1) If Ē_g ≥ rand(0,1), the algorithm is in the global-detection stage, and mutation is performed according to formula (4):

v_j^{i,g} = x_j^{pbest,g} + F·(x_j^{a,g} − x_j^{b,g}),  if randn(1,3) = 1
v_j^{i,g} = x_j^{a,g} + F·(x_j^{pbest,g} − x_j^{a,g}) + F·(x_j^{b,g} − x_j^{c,g}),  if randn(1,3) = 2
v_j^{i,g} = x_j^{i,g} + F·(x_j^{pbest,g} − x_j^{i,g}) + F·(x_j^{a,g} − x_j^{b,g}),  otherwise    (4)

8.2) If Ē_g < rand(0,1), the algorithm is in the local-search stage, and mutation is performed according to formula (5):

v_j^{i,g} = x_j^{best,g} + F·(x_j^{a,g} − x_j^{b,g}),  if randn(1,3) = 1
v_j^{i,g} = x_j^{a,g} + F·(x_j^{best,g} − x_j^{a,g}) + F·(x_j^{b,g} − x_j^{c,g}),  if randn(1,3) = 2
v_j^{i,g} = x_j^{i,g} + F·(x_j^{best,g} − x_j^{i,g}) + F·(x_j^{a,g} − x_j^{b,g}),  otherwise    (5)

In 8.1) and 8.2), rand(0,1) is a random decimal drawn from the interval [0,1]; j = 1, 2, ..., N, where N is the problem dimension; g is the generation index; randn(1,3) is a random integer drawn from the interval [1,3]; a, b, c ∈ {1, 2, ..., Np} with a ≠ b ≠ c ≠ i, where i is the index of the current target individual; v_j^{i,g} is the j-th element of the mutant of the i-th target individual of generation g; x_j^{a,g}, x_j^{b,g}, x_j^{c,g} are the j-th elements of individuals a, b, c of generation g; x_j^{pbest,g} is the j-th element of a best individual randomly selected from the top 0.5·Np·randb(0,1) individuals, where randb(0,1) is a random decimal between 0 and 1; x_j^{best,g} is the j-th element of the best individual of the current generation g; and F is the scaling factor;
9) Cross each mutant with its target individual according to formula (6) to generate a new individual trial^{i,g}:

trial_j^{i,g} = v_j^{i,g},  if randb(0,1) ≤ CR or j = rnbr(j);  trial_j^{i,g} = x_j^{i,g},  otherwise    (6)

where j = 1, 2, ..., N; trial_j^{i,g} is the j-th element of the new individual trial^{i,g} corresponding to the i-th target individual of generation g; randb(0,1) is a random decimal between 0 and 1; rnbr(j) is a random integer between 1 and N; and CR is the crossover probability;
10) Update the population with each new individual according to formula (7):

x^{i,g+1} = trial^{i,g},  if f(trial^{i,g}) ≤ f(x^{i,g});  x^{i,g+1} = x^{i,g},  otherwise    (7)

where trial^{i,g} = (trial_1^{i,g}, trial_2^{i,g}, ..., trial_N^{i,g}), x^{i,g+1} = (x_1^{i,g+1}, x_2^{i,g+1}, ..., x_N^{i,g+1}), and x^{i,g} = (x_1^{i,g}, x_2^{i,g}, ..., x_N^{i,g}); formula (7) states that if the new individual is better than the target individual, the new individual replaces the target individual; otherwise the target individual is kept unchanged;
11) Check whether the termination condition is satisfied; if so, save the result and exit, otherwise return to step 3).
Further, in step 11), the termination condition is a maximum number of function evaluations; other termination conditions may of course be used.
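The two three-strategy mutation pools of formulas (4) and (5) differ only in the guide individual: a randomly chosen p-best individual for the global-detection pool versus the single best individual for the local-search pool. A minimal sketch follows, with illustrative names; `rng.randint(1, 3)` plays the role of randn(1,3), and `pbest_pool` stands in for the top-0.5·Np·randb(0,1) index set.

```python
import random

def mutate(pop, i, best, pbest_pool, F=0.5, explore=True, rng=random):
    """One mutant vector from the three-strategy pool of formula (4) or (5).

    explore=True uses a random p-best guide (formula (4));
    explore=False uses the single best individual (formula (5)).
    """
    np_, dim = len(pop), len(pop[0])
    # three mutually distinct individuals, all different from the target i
    a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
    guide = rng.choice(pbest_pool) if explore else best
    xg, xa, xb, xc, xi = pop[guide], pop[a], pop[b], pop[c], pop[i]
    strat = rng.randint(1, 3)  # stands in for randn(1,3)
    if strat == 1:   # guide/1
        return [xg[j] + F * (xa[j] - xb[j]) for j in range(dim)]
    if strat == 2:   # rand-to-guide/2
        return [xa[j] + F * (xg[j] - xa[j]) + F * (xb[j] - xc[j])
                for j in range(dim)]
    # current-to-guide/1
    return [xi[j] + F * (xg[j] - xi[j]) + F * (xa[j] - xb[j])
            for j in range(dim)]
```

Each call returns one mutant vector v^{i,g}; the crossover of formula (6) and the selection of formula (7) are then applied exactly as in classic DE.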
The present embodiment takes the classical 30-dimensional Schaffer2 function as an example. A global optimization method based on strategy-adaptive differential evolution comprises the following steps:
1) Initialization: set the population size Np = 50, the initial crossover probability CR = 0.5, and the initial scaling factor F = 0.5;
2) Randomly generate the initial population P = {x^{1,g}, x^{2,g}, ..., x^{Np,g}} and evaluate the objective function of each individual, where g is the generation index and x^{i,g}, i = 1, 2, ..., Np, denotes the i-th individual of generation g; g = 0 denotes the initial population;
3) Sort the individuals in descending order of their objective function values f(x^{i,g}), record the rank F_{i,g} of each individual, and find the best individual x^{best,g} of the current population, where F_{i,g} is the objective-value rank of the i-th individual of generation g;
4) Compute the distance d_{i,g} between each individual and the best individual x^{best,g} according to formula (1):

d_{i,g} = sqrt( Σ_{j=1}^{N} (x_j^{i,g} − x_j^{best,g})² )    (1)

where d_{i,g} is the distance between the i-th individual of generation g and the best individual x^{best,g}, x_j^{i,g} is the j-th element of individual x^{i,g}, x_j^{best,g} is the j-th element of the best individual x^{best,g}, N is the problem dimension, and Np is the population size;
5) Sort the individuals in descending order of their distances d_{i,g} to the best individual and record the rank D_{i,g} of each individual, where D_{i,g} is the distance rank of the i-th individual of generation g;
6) Compute the mean error E_g between the objective-value ranking and the distance ranking of each generation according to formula (2):

E_g = (1/Np) Σ_{i=1}^{Np} |F_{i,g} − D_{i,g}|    (2)

where E_g is the mean error of generation g;
7) Normalize the mean error E_g according to formula (3):

Ē_g = (E_g − E_min^g) / (E_max^g − E_min^g)    (3)

where Ē_g is the normalized value of E_g; E_min^g is the minimum of E_g, whose value is always 0; and E_max^g is the maximum of E_g, with E_max^g = Np/2 when the population size Np is even and E_max^g = (Np² − 1)/(2Np) when Np is odd;
8) Determine the current state of the evolutionary process, and mutate each individual in the population with a randomly selected mutation strategy:
8.1) If Ē_g ≥ rand(0,1), the algorithm is in the global-detection stage, and mutation is performed according to formula (4):

v_j^{i,g} = x_j^{pbest,g} + F·(x_j^{a,g} − x_j^{b,g}),  if randn(1,3) = 1
v_j^{i,g} = x_j^{a,g} + F·(x_j^{pbest,g} − x_j^{a,g}) + F·(x_j^{b,g} − x_j^{c,g}),  if randn(1,3) = 2
v_j^{i,g} = x_j^{i,g} + F·(x_j^{pbest,g} − x_j^{i,g}) + F·(x_j^{a,g} − x_j^{b,g}),  otherwise    (4)

8.2) If Ē_g < rand(0,1), the algorithm is in the local-search stage, and mutation is performed according to formula (5):

v_j^{i,g} = x_j^{best,g} + F·(x_j^{a,g} − x_j^{b,g}),  if randn(1,3) = 1
v_j^{i,g} = x_j^{a,g} + F·(x_j^{best,g} − x_j^{a,g}) + F·(x_j^{b,g} − x_j^{c,g}),  if randn(1,3) = 2
v_j^{i,g} = x_j^{i,g} + F·(x_j^{best,g} − x_j^{i,g}) + F·(x_j^{a,g} − x_j^{b,g}),  otherwise    (5)

In 8.1) and 8.2), rand(0,1) is a random decimal drawn from the interval [0,1]; j = 1, 2, ..., N, where N is the problem dimension; g is the generation index; randn(1,3) is a random integer drawn from the interval [1,3]; a, b, c ∈ {1, 2, ..., Np} with a ≠ b ≠ c ≠ i, where i is the index of the current target individual; v_j^{i,g} is the j-th element of the mutant of the i-th target individual of generation g; x_j^{a,g}, x_j^{b,g}, x_j^{c,g} are the j-th elements of individuals a, b, c of generation g; x_j^{pbest,g} is the j-th element of a best individual randomly selected from the top 0.5·Np·randb(0,1) individuals, where randb(0,1) is a random decimal between 0 and 1; x_j^{best,g} is the j-th element of the best individual of the current generation g; and F is the scaling factor;
9) Cross each mutant with its target individual according to formula (6) to generate a new individual trial^{i,g}:

trial_j^{i,g} = v_j^{i,g},  if randb(0,1) ≤ CR or j = rnbr(j);  trial_j^{i,g} = x_j^{i,g},  otherwise    (6)

where j = 1, 2, ..., N; trial_j^{i,g} is the j-th element of the new individual trial^{i,g} corresponding to the i-th target individual of generation g; randb(0,1) is a random decimal between 0 and 1; rnbr(j) is a random integer between 1 and N; and CR is the crossover probability;
10) Update the population with each new individual according to formula (7):

x^{i,g+1} = trial^{i,g},  if f(trial^{i,g}) ≤ f(x^{i,g});  x^{i,g+1} = x^{i,g},  otherwise    (7)

where trial^{i,g} = (trial_1^{i,g}, trial_2^{i,g}, ..., trial_N^{i,g}), x^{i,g+1} = (x_1^{i,g+1}, x_2^{i,g+1}, ..., x_N^{i,g+1}), and x^{i,g} = (x_1^{i,g}, x_2^{i,g}, ..., x_N^{i,g}); formula (7) states that if the new individual is better than the target individual, the new individual replaces the target individual; otherwise the target individual is kept unchanged;
11) Check whether the number of objective function evaluations has reached 60000; if so, save the result and exit, otherwise return to step 3).
For the 30-dimensional Schaffer2 function, the average success rate over 30 independent runs is 100% (a run is counted as successful when the accuracy of the optimal solution found by the algorithm within 150000 objective function evaluations reaches 0.00001); the mean of the solutions obtained within 60000 function evaluations is 2.36E-15, with a standard deviation of 2.53E-15.
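The patent text does not reproduce the benchmark function itself. A commonly used form for a 30-dimensional Schaffer-type benchmark is the expanded Schaffer F6 function, which sums the two-variable Schaffer F6 term over consecutive coordinate pairs and has global minimum 0 at the origin; the definition below is an assumption about the test function, not taken from the patent.

```python
import math

def expanded_schaffer_f6(x):
    """Expanded Schaffer F6 benchmark (assumed form); global minimum 0 at the origin."""
    def g(a, b):
        # two-variable Schaffer F6 term
        s = a * a + b * b
        return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1.0 + 0.001 * s) ** 2
    n = len(x)
    # ring formulation over pairs (x1,x2), (x2,x3), ..., (xN,x1)
    return sum(g(x[i], x[(i + 1) % n]) for i in range(n))
```

Under this assumed definition, the reported accuracies of order 1E-15 correspond to points essentially at the global optimum.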
The above description shows the good optimization effect of the embodiment provided by the present invention. Obviously, the present invention is not only suitable for the above embodiment but can also be applied to various fields of practical engineering (for example, optimization problems such as protein structure prediction, power systems, and path planning), and various modifications may be made to it without departing from the essential spirit of the present invention or going beyond its substantive content.

Claims (2)

1. A global optimization method based on strategy-adaptive differential evolution, characterized in that the optimization method comprises the following steps:
1) initialize: population scale N is setP, initial crossover probability CR, initial gain constant F;
2) stochastic generation initial population P={x1,g,x2,g,...,xNp,g, and calculate the target function value of each individuality, wherein, g is evolutionary generation, xi,g, i=1,2 ..., Np represents that g is individual for the i-th in population, if g=0, then it represents that initial population;
3) according to each individual xi,gTarget function value f (xi,g) each individuality carried out descending, and write down the ranking F of each individualityi,g, and find out the optimum individual x in current populationbest,g, wherein, Fi,gRepresent that g is for the target function value ranking of i-th individuality in population;
4) each individuality and optimum individual x in initial population are calculated according to formula (1)best,gBetween distance di,g;
d i , g = ( Σ i = 1 N p Σ k = i + 1 N P Σ j = 1 N ( x j i , g - x j b e s t , g ) 2 ) / ( N p ( N p - 1 ) / 2 ) - - - ( 1 )
Wherein, di,gRepresent that g is in populationiIndividuality and optimum individual xbest,gBetween distance,Represent that g is in populationiIndividual xi,gJth dimension element,Represent that g is in populationOptimumIndividual xbest,gJth dimension element, N is problem dimension, NPFor population scale;
5) according to the distance d between each individuality and optimum individuali,gCarry out descending, and write down the ranking D of each individualityi,g, Di,gRepresent that g is for the distance ranking of i-th individuality in population;
6) Compute the mean error Eg between the objective-value ranks and the distance ranks of the current generation according to formula (2):

$$E_g = \frac{1}{N_p} \sum_{i=1}^{N_p} \left| F_{i,g} - D_{i,g} \right| \qquad (2)$$

where E_g denotes the mean error value of the g-th generation population;
7) Normalize the mean error E_g according to formula (3):

$$\bar{E}_g = \frac{E_g - E_{min}^g}{E_{max}^g - E_{min}^g} \qquad (3)$$

where \bar{E}_g denotes the normalized value of the mean error E_g; E_{min}^g denotes the minimum of E_g, whose value is always 0; and E_{max}^g is the maximum of E_g: when the population size N_p is even, E_{max}^g = N_p/2, and when N_p is odd, E_{max}^g = (N_p^2 - 1)/(2 N_p);
8) Judge the state of the evolutionary process and apply a randomly selected mutation strategy to each individual in the population:
8.1) If \bar{E}_g ≥ rand(0,1), the algorithm is in the global detection phase, and mutation is carried out according to formula (4):

$$v_j^{i,g} = \begin{cases} x_j^{pbest,g} + F \cdot \left( x_j^{a,g} - x_j^{b,g} \right), & \text{if } randn(1,3) = 1 \\ x_j^{a,g} + F \cdot \left( x_j^{pbest,g} - x_j^{a,g} \right) + F \cdot \left( x_j^{b,g} - x_j^{c,g} \right), & \text{if } randn(1,3) = 2 \\ x_j^{i,g} + F \cdot \left( x_j^{pbest,g} - x_j^{i,g} \right) + F \cdot \left( x_j^{a,g} - x_j^{b,g} \right), & \text{otherwise} \end{cases} \qquad (4)$$
8.2) If \bar{E}_g < rand(0,1), the algorithm is in the local search phase, and mutation is carried out according to formula (5):

$$v_j^{i,g} = \begin{cases} x_j^{best,g} + F \cdot \left( x_j^{a,g} - x_j^{b,g} \right), & \text{if } randn(1,3) = 1 \\ x_j^{a,g} + F \cdot \left( x_j^{best,g} - x_j^{a,g} \right) + F \cdot \left( x_j^{b,g} - x_j^{c,g} \right), & \text{if } randn(1,3) = 2 \\ x_j^{i,g} + F \cdot \left( x_j^{best,g} - x_j^{i,g} \right) + F \cdot \left( x_j^{a,g} - x_j^{b,g} \right), & \text{otherwise} \end{cases} \qquad (5)$$
In 8.1) and 8.2), rand(0,1) denotes a random decimal generated in the interval [0,1]; j = 1, 2, ..., N, where N is the problem dimension; g is the evolutionary generation; randn(1,3) denotes a random integer generated in the interval [1,3]; a, b, c ∈ {1, 2, ..., Np} with a ≠ b ≠ c ≠ i, where i is the index of the current target individual; v_j^{i,g} is the j-th dimension element of the mutant individual of the i-th target individual in the g-th generation population; x_j^{a,g}, x_j^{b,g}, x_j^{c,g} are respectively the j-th dimension elements of the a-th, b-th, and c-th individuals of the g-th generation population; x_j^{pbest,g} is the j-th dimension element of a best individual randomly selected from the top 0.5·Np·randb(0,1) individuals, where randb(0,1) denotes a random decimal generated between 0 and 1; x_j^{best,g} is the j-th dimension element of the best individual in the current g-th generation population; and F denotes the gain constant;
9) Perform crossover on each mutant individual according to formula (6) to generate a new individual trial_{i,g}:

$$trial_j^{i,g} = \begin{cases} v_j^{i,g}, & \text{if } randb(0,1) \le CR \text{ or } j = rnbr(j) \\ x_j^{i,g}, & \text{otherwise} \end{cases} \qquad (6)$$

where j = 1, 2, ..., N; trial_j^{i,g} denotes the j-th dimension element of the new individual trial_{i,g} corresponding to the i-th target individual of the g-th generation population; randb(0,1) denotes a random decimal generated between 0 and 1; rnbr(j) denotes a random integer generated between 1 and N; and CR denotes the crossover probability;
10) Perform the population update on each new individual according to formula (7):

$$x_{i,g+1} = \begin{cases} trial_{i,g}, & \text{if } f(trial_{i,g}) \le f(x_{i,g}) \\ x_{i,g}, & \text{otherwise} \end{cases} \qquad (7)$$

where trial_{i,g} = (trial_1^{i,g}, trial_2^{i,g}, ..., trial_N^{i,g}) and x_{i,g+1} = (x_1^{i,g+1}, x_2^{i,g+1}, ..., x_N^{i,g+1}). Formula (7) states that if the new individual is better than the target individual, the new individual replaces the target individual; otherwise the target individual is kept unchanged;
11) Judge whether the termination condition is satisfied; if it is, save the result and exit, otherwise return to step 3).
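The ranking-error machinery of steps 3) to 7), which decides the search phase, can be sketched as follows. This is a minimal Python illustration of formulas (2) and (3), not the patented implementation; the function name and the NumPy ranking idiom are ours.

```python
import numpy as np

def normalized_rank_error(fitness, distances):
    """Formulas (2)-(3): mean absolute error between the objective-value
    ranking F and the distance-to-best ranking D, normalized by the
    theoretical maximum E_max (E_min is always 0)."""
    n = len(fitness)
    # Descending ranks: rank 0 goes to the largest value.
    F = np.argsort(np.argsort(-np.asarray(fitness, dtype=float)))
    D = np.argsort(np.argsort(-np.asarray(distances, dtype=float)))
    E = np.mean(np.abs(F - D))                            # formula (2)
    # Max of mean |F - D| over permutations: n/2 if n even, (n^2-1)/(2n) if odd
    E_max = n / 2 if n % 2 == 0 else (n * n - 1) / (2 * n)
    return E / E_max                                      # formula (3)
```

When the two rankings agree the normalized error is 0 (ordered population, local search phase); when they are exactly reversed it is 1 (disordered population, global detection phase).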
2. The global optimization method based on strategy adaptability differential evolution as claimed in claim 1, characterized in that in step 11) the termination condition is a maximum number of function evaluations.
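Assembled end to end, claim 1 amounts to a differential evolution loop that switches between a pbest-guided (global) and a best-guided (local) strategy set. The sketch below is our hedged reading of the claims on a sphere test function, not the patented implementation: the switch rule comparing the normalized error with rand(0,1) is an assumption where the claim's formula did not survive extraction, the per-individual distance is taken as a plain Euclidean norm rather than the averaged form printed in formula (1), and the bound clipping is an added boundary-handling convenience.

```python
import numpy as np

def strategy_adaptive_de(f, bounds, np_size=30, F=0.5, CR=0.9,
                         max_evals=6000, rng=None):
    """Minimal sketch of the strategy-adaptive DE of claim 1 (minimization)."""
    rng = np.random.default_rng(rng)
    low, high = map(np.asarray, bounds)
    N = len(low)
    pop = rng.uniform(low, high, (np_size, N))            # step 2)
    fit = np.apply_along_axis(f, 1, pop)
    evals = np_size
    e_max = np_size / 2 if np_size % 2 == 0 else (np_size**2 - 1) / (2 * np_size)
    while evals < max_evals:                              # step 11)
        best = pop[np.argmin(fit)].copy()                 # step 3)
        dist = np.linalg.norm(pop - best, axis=1)         # step 4), Euclidean
        Fr = np.argsort(np.argsort(-fit))                 # step 3) ranks
        Dr = np.argsort(np.argsort(-dist))                # step 5) ranks
        e_bar = np.mean(np.abs(Fr - Dr)) / e_max          # steps 6)-7)
        for i in range(np_size):                          # step 8)
            a, b, c = rng.choice([k for k in range(np_size) if k != i],
                                 3, replace=False)
            if e_bar >= rng.random():                     # assumed switch rule
                # global phase: pbest = random pick among the best fraction
                top = max(1, int(0.5 * np_size * rng.random()))
                guide = pop[np.argsort(fit)[rng.integers(top)]].copy()
            else:
                guide = best                              # local phase
            r = rng.integers(1, 4)                        # randn(1,3)
            if r == 1:
                v = guide + F * (pop[a] - pop[b])
            elif r == 2:
                v = pop[a] + F * (guide - pop[a]) + F * (pop[b] - pop[c])
            else:
                v = pop[i] + F * (guide - pop[i]) + F * (pop[a] - pop[b])
            mask = rng.random(N) <= CR                    # step 9), formula (6)
            mask[rng.integers(N)] = True                  # j = rnbr(j) clause
            trial = np.clip(np.where(mask, v, pop[i]), low, high)
            ft = f(trial)
            evals += 1
            if ft <= fit[i]:                              # step 10), formula (7)
                pop[i], fit[i] = trial, ft
    i_best = np.argmin(fit)
    return pop[i_best], fit[i_best]
```

A typical call minimizes the 5-dimensional sphere function: `strategy_adaptive_de(lambda x: np.sum(x * x), (np.full(5, -5.0), np.full(5, 5.0)))` returns a point near the origin with a small objective value.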
CN201511010201.0A 2015-12-29 2015-12-29 Global optimization method based on strategy adaptability differential evolution Pending CN105678401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511010201.0A CN105678401A (en) 2015-12-29 2015-12-29 Global optimization method based on strategy adaptability differential evolution


Publications (1)

Publication Number Publication Date
CN105678401A true CN105678401A (en) 2016-06-15

Family

ID=56297736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511010201.0A Pending CN105678401A (en) 2015-12-29 2015-12-29 Global optimization method based on strategy adaptability differential evolution

Country Status (1)

Country Link
CN (1) CN105678401A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106779153A (en) * 2016-11-15 2017-05-31 浙江工业大学 Optimization method is distributed in a kind of intelligent three-dimensional warehouse goods yard
CN106779153B (en) * 2016-11-15 2021-08-03 浙江工业大学 Intelligent stereoscopic warehouse goods space allocation optimization method
CN107704985A (en) * 2017-08-11 2018-02-16 浙江工业大学 A kind of differential evolution Flexible Workshop Optimization Scheduling of dynamic strategy
CN108564592A (en) * 2018-03-05 2018-09-21 华侨大学 Based on a variety of image partition methods for being clustered to differential evolution algorithm of dynamic
CN108564592B (en) * 2018-03-05 2021-05-11 华侨大学 Image segmentation method based on dynamic multi-population integration differential evolution algorithm
CN108565857A (en) * 2018-05-07 2018-09-21 江南大学 A kind of Economic Dispatch method based on information interchange strategy ACS in continuous space
CN108565857B (en) * 2018-05-07 2021-01-22 江南大学 Electric power system scheduling method based on information exchange strategy continuous domain ant colony algorithm
CN108808667A (en) * 2018-06-22 2018-11-13 江苏师范大学 A kind of Economic Dispatch method based on the tactful dynamic difference evolution algorithm of change
CN112598189A (en) * 2020-12-29 2021-04-02 浙江工业大学 Multi-path multi-target emergency material distribution path selection method based on SHADE algorithm
CN113435596A (en) * 2021-06-16 2021-09-24 暨南大学 Micro-ring resonant wavelength searching method based on differential evolution
CN117969044A (en) * 2024-03-29 2024-05-03 山东大学 DFB laser spectral parameter extraction method based on improved differential evolution algorithm

Similar Documents

Publication Publication Date Title
CN105678401A (en) Global optimization method based on strategy adaptability differential evolution
Fan et al. Self-adaptive differential evolution algorithm with discrete mutation control parameters
Shao et al. A novel discrete water wave optimization algorithm for blocking flow-shop scheduling problem with sequence-dependent setup times
Narayanan et al. Quantum-inspired genetic algorithms
CN102413029B (en) Method for partitioning communities in complex dynamic network by virtue of multi-objective local search based on decomposition
US11831505B2 (en) Method and system of hybrid data-and-model-driven hierarchical network reconfiguration
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
Ding et al. A new hierarchical ranking aggregation method
CN107275801A (en) A kind of array element arrangement method based on the inheritance of acquired characters of L-type array antenna
CN104268629A (en) Complex network community detecting method based on prior information and network inherent information
CN105550749A (en) Method for constructing convolution neural network in novel network topological structure
CN102663514A (en) Constrained optimization evolution algorithm based on feasible equilibrium mechanism
CN105740949A (en) Group global optimization method based on randomness best strategy
CN110110434A (en) A kind of initial method that Probabilistic Load Flow deep neural network calculates
Chen et al. The Evolutionary Algorithm to Find Robust Pareto‐Optimal Solutions over Time
CN109993205A (en) Time Series Forecasting Methods, device, readable storage medium storing program for executing and electronic equipment
Chen et al. MOGA-based fuzzy data mining with taxonomy
CN106326988A (en) Improved genetic algorithm for complex computing based on fast matching mechanism
CN102915407A (en) Prediction method for three-dimensional structure of protein based on chaos bee colony algorithm
Wang Opposition‐Based Barebones Particle Swarm for Constrained Nonlinear Optimization Problems
CN105760929A (en) Layered global optimization method based on DFP algorithm and differential evolution
Huang et al. A novel modified gravitational search algorithm for the real world optimization problem
Xiao et al. A locating method for reliability-critical gates with a parallel-structured genetic algorithm
Zhu et al. An Efficient Hybrid Feature Selection Method Using the Artificial Immune Algorithm for High‐Dimensional Data
Bureva et al. Hierarchical generalized net model of the process of selecting a method for clustering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160615