CN113657589B - Method, system, device and storage medium for solving optimization problem - Google Patents

Method, system, device and storage medium for solving optimization problem

Info

Publication number
CN113657589B
CN113657589B (application CN202110775130.2A)
Authority
CN
China
Prior art keywords
objective function
particle
initial
original
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110775130.2A
Other languages
Chinese (zh)
Other versions
CN113657589A (en)
Inventor
吕超 (Lyu Chao)
史玉回 (Shi Yuhui)
孙立君 (Sun Lijun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southern University of Science and Technology
Original Assignee
Southern University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southern University of Science and Technology filed Critical Southern University of Science and Technology
Priority to CN202110775130.2A priority Critical patent/CN113657589B/en
Publication of CN113657589A publication Critical patent/CN113657589A/en
Application granted granted Critical
Publication of CN113657589B publication Critical patent/CN113657589B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/086 - Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/30 - Circuit design
    • G06F30/32 - Circuit design at the digital level
    • G06F30/337 - Design optimisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks


Abstract

The invention discloses a method, system, device and storage medium for solving an optimization problem. The method comprises the following steps: acquiring an original problem and solution-space sampling data of the original problem; training an artificial neural network on the solution-space sampling data to determine model training parameters; constructing a corresponding new problem from the original problem; determining a new objective function for the new problem from the model training parameters; optimizing the new objective function with a particle swarm algorithm to determine population initial parameters; and, starting from the population initial parameters, optimizing the original objective function of the original problem with the particle swarm algorithm to obtain the optimal solution of the original problem. By simplifying the optimization problem through learned modeling rather than manual modeling, the method greatly improves the efficiency with which existing evolutionary algorithms search for the global optimum.

Description

Method, system, device and storage medium for solving optimization problem
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method, a system, a device and a storage medium for solving an optimization problem.
Background
Optimization problems abound in science, engineering and related fields, yet the problems that traditional optimization techniques can handle are limited: for irreducibly high-dimensional problems with many local optima, finding the optimal solution is difficult. Optimization is a persistent problem facing human development. In general, optimization means finding the optimal solution of a problem, where the optimal solution consists of a number of decision variables taking specific values, and "optimal" means the problem is solved best under a specific evaluation index (the objective function). In brief, optimization is the process of finding, for each decision variable, a specific value that maximizes or minimizes the objective function.
In the related art, evolutionary optimization is a family of optimization algorithms that solve optimization problems using the principles of evolutionary computation. It includes genetic algorithms, evolutionary programming, evolution strategies, and various newer meta-heuristic algorithms such as particle swarm optimization and brain storm optimization, whose original design inspiration comes from the inheritance and evolution of natural species. Evolutionary optimization generally adopts population-based search: the solution of a problem is first represented as an individual, a certain number of individuals form a population, new individuals are then continually generated by a series of evolutionary operators, and survival of the fittest is enforced by a fitness evaluation mechanism that updates the population, thereby realizing an efficient search for the optimal solution in the problem's solution space.
The design of two-dimensional infinite impulse response digital filters is also commonly treated as an optimization problem. Designing such a filter requires obtaining its optimal parameters; however, in two-dimensional infinite impulse response digital filter design the solution space is very complex and contains a large number of local minima. That is, there are many deceptive "locally optimal solutions" that easily interfere with the design and thereby increase the difficulty of obtaining the optimal parameters. Therefore, how to reduce the difficulty of obtaining the optimal parameters of a two-dimensional infinite impulse response digital filter has become an urgent technical problem.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the prior art. To this end, the invention provides a method, system, device and storage medium for solving an optimization problem, which can reduce the difficulty of optimization and better exploit the optimization capability of evolutionary algorithms.
According to an embodiment of the first aspect of the present application, a method for solving an optimization problem includes:
acquiring an original problem and solution-space sampling data of the original problem;
training an artificial neural network on the solution-space sampling data to determine model training parameters;
constructing a corresponding new problem according to the original problem, the new problem being obtained by modifying the original problem;
determining a new objective function corresponding to the new problem according to the model training parameters;
optimizing the new objective function according to a particle swarm algorithm to determine population initial parameters;
and optimizing an original objective function corresponding to the original problem according to the particle swarm algorithm, based on the population initial parameters, to obtain an optimal solution of the original problem.
The solving method of the optimization problem according to the embodiment of the invention has at least the following beneficial effects:
According to the invention, solution-space sampling data is obtained by sampling the solution space of the original optimization problem; the sampling data is used to train an artificial neural network, determining the model training parameters; a new objective function corresponding to the new problem is determined from the model training parameters; the new objective function is optimized with a particle swarm algorithm to determine the population initial parameters; and, starting from those initial parameters, the original objective function of the original problem is optimized with the particle swarm algorithm to obtain its optimal solution. The optimization problem is thus simplified by the learned model rather than by manual modeling, greatly improving the efficiency with which existing evolutionary algorithms search for the global optimum.
According to some embodiments of the application, there is provided:
acquiring an initial artificial neural network;
inputting the solution-space sampling data into the artificial neural network for training;
and determining the model training parameters from the training.
According to some embodiments of the application, there is provided:
Constructing a new problem;
and determining a new objective function corresponding to the new problem according to the model training parameters.
According to some embodiments of the application, there is provided:
acquiring an initial population, the population comprising at least an initial particle and a plurality of particles;
determining a historical optimal position of the initial particle according to calculation over the population;
and calculating the plurality of particles based on the historical optimal position of the initial particle to determine the population initial parameters.
According to some embodiments of the application, there is provided:
acquiring vector parameters of the initial particle, the vector parameters including at least a position vector and a velocity vector;
inputting the position vector of the initial particle into the new objective function for calculation to obtain an initial particle fitness value;
determining a global guiding particle according to the initial particle fitness value;
and obtaining the historical optimal position of the initial particle based on the global guiding particle.
According to some embodiments of the application, there is provided:
calculating a plurality of particle position vectors and particle velocity vectors based on the combination of the historical optimal position of the initial particle and the vector parameters of the initial particle;
inputting the plurality of particle position vectors into the new objective function for calculation to obtain a plurality of particle fitness values;
determining a plurality of particle historical optimal positions according to the plurality of particle fitness values;
and determining the population initial parameters based on the plurality of particle historical optimal positions and the historical optimal position of the initial particle.
According to some embodiments of the application, there is provided:
determining an original objective function corresponding to the original problem according to the solution-space sampling data of the original problem;
and optimizing the original objective function according to the particle swarm algorithm, based on the population initial parameters, to obtain the optimal solution of the original problem.
A system for solving an optimization problem according to an embodiment of the second aspect of the present application includes:
the acquisition module is used for acquiring the original problem and the solution space sampling data of the original problem;
the training module is used for training the solution space sampling data according to the artificial neural network and determining model training parameters;
The construction module is used for constructing a corresponding new problem according to the original problem;
the determining module is used for determining a new objective function corresponding to the new problem according to the model training parameters, the new problem being constructed by modifying the original problem;
the optimizing module is used for optimizing the new objective function according to the particle swarm algorithm and determining initial parameters of the population;
And the solving module is used for optimizing the original objective function corresponding to the original problem according to the particle swarm algorithm based on the population initial parameters to obtain the optimal solution of the original problem.
An apparatus for solving an optimization problem according to an embodiment of a third aspect of the present application includes:
A processor;
a memory for storing an executable program;
wherein, when the executable program is executed by the processor, the apparatus implements the method for solving an optimization problem according to the first aspect of the application.
The computer-readable storage medium according to the embodiment of the fourth aspect of the present application stores executable instructions that can be executed by a computer to cause the computer to execute the solving method of the optimization problem as in the first aspect of the present application.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The invention is further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic flow chart of a method for solving an optimization problem according to the present invention;
FIG. 2 is a schematic diagram of a specific flow of step S200 in the method for solving an optimization problem according to the present invention;
FIG. 3 is a schematic flowchart of step S400 in the method for solving an optimization problem according to the present invention;
FIG. 4 is a schematic flowchart of step S500 in the method for solving an optimization problem according to the present invention;
FIG. 5 is a flowchart illustrating a step S520 in the method for solving an optimization problem according to the present invention;
FIG. 6 is a flowchart illustrating a step S530 in the method for solving an optimization problem according to the present invention;
FIG. 7 is a schematic flowchart of step S600 in the method for solving an optimization problem according to the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, "several" means one or more, "a plurality" means two or more, and "greater than", "less than", "exceeding", etc. are understood to exclude the stated number, while "above", "below", "within", etc. are understood to include it. Descriptions such as "first" and "second" are only for distinguishing technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present invention, the descriptions of the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In some embodiments, the hardware environment for implementing the method of the invention is a personal computer with an Intel Core series CPU, at least 4 GB of memory and at least a 256 GB hard disk; the operating system may be any of Windows, Linux and Mac OS. The method can be implemented in Python, with PyCharm as the development and runtime environment.
An artificial neural network (ANN), also known as a neural network (NN) or connection model, is an algorithmic mathematical model that mimics the behavioral characteristics of animal neural networks and performs distributed parallel information processing. Such a network depends on the complexity of the system and processes information by adjusting the interconnections among a large number of internal nodes.
An artificial neural network is a model formed by connecting many artificial neurons. It can fit a wide variety of complex real-world models, works well for high-dimensional nonlinear models, and is widely used for classification and regression problems. An artificial neural network typically consists of an input layer, a hidden layer and an output layer: the independent variables (decision variables) of the model enter the network at the input layer, are propagated through the hidden layer, and the corresponding target value is output by the output layer.
As shown in FIG. 1, which is a schematic flowchart of a method for solving an optimization problem according to an embodiment of the present application, the method may include, but is not limited to, steps S100 to S600.
S100, acquiring an original problem and solution-space sampling data of the original problem;
S200, determining model training parameters by training an artificial neural network on the solution-space sampling data;
S300, constructing a corresponding new problem according to the original problem;
S400, determining a new objective function corresponding to the new problem according to the model training parameters, the new problem being constructed by modifying the original problem;
S500, optimizing the new objective function according to a particle swarm algorithm to determine population initial parameters;
S600, optimizing an original objective function corresponding to the original problem according to the particle swarm algorithm, based on the population initial parameters, to obtain an optimal solution of the original problem.
In step S100 of some embodiments, the original problem and its solution-space sampling data are obtained. Suppose there is a single-objective, unconstrained, continuous optimization problem P as the original problem, where P contains n decision variables, n is called the dimension of P, the range of each decision variable is the interval [l_b, u_b], and f(X) is the objective function of P. To obtain the solution-space sampling data, N feasible solutions are randomly generated: each feasible solution X_i (i = 1, 2, …, N) is represented as a vector X_i = [x_1, x_2, …, x_n], where each decision variable x_j is a value randomly generated in the interval [l_b, u_b]. Substituting each vector X_i into the objective function f(X) of the original problem gives the objective function value Y_i = f(X_i) (i = 1, 2, …, N). The generated data Y = {Y_1, Y_2, …, Y_N} and X = {X_1, X_2, …, X_N} form a training data set D_T = {X, Y}; all feasible solutions form an n-dimensional decision space R_p, and each pair {X_i, Y_i} is called one solution-space sample in R_p. The training data set D_T = {X, Y} is the solution-space sampling data.
In some embodiments of the present invention, the number of samples N is related to the dimension n; N = 200 × n is generally used.
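As a concrete illustration, the sampling step above can be sketched in Python (the implementation language mentioned later in the description). The sphere objective, bounds and dimension below are placeholder assumptions for demonstration, not the patent's actual test problem:

```python
import numpy as np

def sample_solution_space(f, n, lb, ub, N=None, seed=0):
    """Randomly generate N feasible solutions of an n-dimensional problem
    in [lb, ub]^n and evaluate the original objective f on each one."""
    rng = np.random.default_rng(seed)
    if N is None:
        N = 200 * n  # sampling budget suggested in the text
    X = rng.uniform(lb, ub, size=(N, n))  # each row is one feasible solution X_i
    Y = np.apply_along_axis(f, 1, X)      # objective value Y_i = f(X_i)
    return X, Y                           # training set D_T = {X, Y}

# Toy example: 2-D sphere objective on [-5, 5]^2 (hypothetical stand-in for f).
X, Y = sample_solution_space(lambda x: float(np.sum(x**2)), n=2, lb=-5.0, ub=5.0)
```

With n = 2 this produces N = 400 sample pairs, which together constitute the training data set D_T fed to the network in step S200.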
In step S200 of some embodiments, the model training parameters are determined by training an artificial neural network on the solution-space sampling data: an initial artificial neural network is acquired first, the sampling data obtained in step S100 is then input to the network for training, and the training parameters of the network model are determined from that training, so that the new objective function corresponding to the new problem can later be derived from them.
In some embodiments, referring to fig. 2, the method of solving the optimization problem may include, but is not limited to, steps S210 to S230.
S210, acquiring an initial artificial neural network;
S220, inputting the solution space sampling data into an artificial neural network for training;
S230, determining model training parameters according to training.
In step S210 of some embodiments, an initial artificial neural network is obtained. In embodiments of the invention, the artificial neural network may be a fully connected feedforward neural network comprising an input layer, several hidden layers and an output layer, with a plurality of input nodes and one output node. After the independent variables (decision variables) of the model enter the network at the input layer, they are propagated through the hidden layers, and the output layer finally outputs the corresponding target value.
In step S220 of some embodiments, the solution-space sampling data is input to the artificial neural network for training; specifically, the sampling data obtained in step S100 is trained with the artificial neural network constructed in step S210.
In step S230 of some embodiments, the model training parameters are determined from the training: the training data set D_T = {X, Y} is fed to the artificial neural network M, and training minimizes the mean squared error between the network output and the data Y, thereby determining the optimal connection parameters between the neurons of the network, i.e., the required model training parameters.
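The patent does not fix a network architecture or training algorithm beyond "minimize the mean squared error". As a hedged sketch, the following assumes a one-hidden-layer tanh network trained by full-batch gradient descent on a toy sphere objective; the trained weights play the role of the "model training parameters" and the network itself plays the role of the new objective f_M:

```python
import numpy as np

# Toy training set D_T = {X, Y}: 2-D sphere objective sampled on [-5, 5]^2.
rng = np.random.default_rng(0)
n = 2
X = rng.uniform(-5.0, 5.0, size=(200 * n, n))
Y = np.sum(X**2, axis=1, keepdims=True)

# One-hidden-layer feedforward network y = tanh(x @ W1 + b1) @ W2 + b2,
# trained by gradient descent to minimize mean squared error (step S230).
h = 32
W1 = rng.normal(0, 0.5, (n, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
lr = 1e-3
for _ in range(3000):
    A = np.tanh(X @ W1 + b1)          # hidden activations
    P = A @ W2 + b2                   # network output
    G = 2 * (P - Y) / len(X)          # d(MSE)/dP
    gW2 = A.T @ G; gb2 = G.sum(0)
    GA = (G @ W2.T) * (1 - A**2)      # back-propagate through tanh
    gW1 = X.T @ GA; gb1 = GA.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))

def f_M(x):
    """Surrogate objective: the input-output mapping of the trained network."""
    return float(np.tanh(x @ W1 + b1) @ W2 + b2)
```

Any fully connected feedforward network and any MSE-minimizing optimizer would serve the same role; the hidden width, learning rate and iteration count here are illustrative assumptions.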
In step S300 of some embodiments, a corresponding new problem is constructed according to the original problem. The new problem is identical to the original problem except for its objective function; the input-output mapping of the constructed new problem, i.e., the new objective function, is comparatively simple and has few local optima.
In step S400 of some embodiments, the new objective function corresponding to the new problem is determined according to the model training parameters: the new problem is constructed by modifying the original problem, and its new objective function is determined from the model training parameters obtained in step S230.
In some embodiments, referring to fig. 3, the method of solving the optimization problem may include, but is not limited to, steps S410 through S420.
S410, constructing a new problem;
S420, determining a new objective function corresponding to the new problem according to the model training parameters.
In step S410 of some embodiments, a new problem P_M is first constructed; the new problem is obtained by modifying some parameters of the original problem P. The only difference between the new problem and the original problem lies in the objective function.
In step S420 of some embodiments, the new objective function corresponding to the new problem is determined according to the model training parameters. Since the model training parameters were determined in step S200 by training on the solution-space sampling data, the trained artificial neural network M can be regarded as a surrogate model of the original objective function y = f(X): each input node of M represents one decision variable x_i, and the output node of M represents the corresponding objective function value y. The input-output mapping of M can therefore be written as y = f_M(X), which is the new objective function. Training on the solution-space sampling data thus yields a new objective function for the new problem with a reduced number of local optima.
In step S500 of some embodiments, the new objective function is optimized according to a particle swarm algorithm to determine the population initial parameters: an initial population is obtained first, calculation over the population then determines the historical optimal position of the initial particle, and the remaining particles are then calculated based on that position to determine the population initial parameters.
Particle swarm optimization (PSO), also rendered as the particle swarm algorithm, is a population-based stochastic search algorithm developed by simulating the foraging behavior of bird flocks.
The particle swarm algorithm is initialized with a population of random particles (random solutions) and then finds the optimal solution by iteration. In each iteration, every particle updates itself by tracking two "extrema": the first is the best solution the particle itself has found, called the individual extremum pBest; the other is the best solution found so far by the whole population, the global extremum gBest. Alternatively, instead of the whole population, only a neighborhood of each particle may be used, in which case the best value among all neighbors is a local extremum.
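The pBest/gBest iteration just described can be sketched as follows. The inertia and acceleration coefficients are common textbook defaults, not values prescribed by the patent, and the sphere objective is a placeholder:

```python
import numpy as np

def pso(f, n, lb, ub, n_particles=30, iters=100, seed=0,
        inertia=0.7, c1=1.5, c2=1.5):
    """Minimize f over [lb, ub]^n with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_particles, n))   # positions
    V = np.zeros((n_particles, n))              # velocities
    pbest = X.copy()                            # individual extrema pBest
    pcost = np.apply_along_axis(f, 1, X)
    g = pbest[pcost.argmin()].copy()            # global extremum gBest
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n))
        V = inertia * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lb, ub)
        cost = np.apply_along_axis(f, 1, X)
        better = cost < pcost                   # update each pBest
        pbest[better], pcost[better] = X[better], cost[better]
        g = pbest[pcost.argmin()].copy()        # update gBest
    return g, float(pcost.min())

best, val = pso(lambda x: float(np.sum(x**2)), n=2, lb=-5.0, ub=5.0)
```

On this toy problem the swarm converges quickly to the neighborhood of the global minimum at the origin.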
In some embodiments, referring to fig. 4, the solving method of the optimization problem may include, but is not limited to, steps S510 to S530.
S510, obtaining an initial population; the population comprises at least an initial particle and a plurality of particles;
S520, determining the historical optimal position of the initial particle according to calculation over the population.
S530, calculating the plurality of particles based on the historical optimal position of the initial particle to determine the population initial parameters.
In step S510 of some embodiments, an initial population is obtained. The solution of the problem is generally represented as an individual, and a certain number of individuals form a "population"; a series of evolutionary operators then continually generates new individuals, a fitness evaluation mechanism decides which of them survive, and the population is updated accordingly, realizing an efficient search for the optimal solution in the problem's solution space.
The population includes an initial population P = {p_1, p_2, …, p_w} containing w individuals, each individual p_i (i = 1, 2, …, w) being called a particle. The initial particle is denoted p_1, but the role of initial particle is not fixed to p_1: for example, p_1 is the initial particle relative to p_2, while relative to p_3 it is p_2 that serves as the initial particle. "A plurality of particles" means at least two particles.
In step S520 of some embodiments, the historical optimal position of the initial particle is determined by calculation over the population: first, the vector parameters of the initial particle are obtained; the position vector of the initial particle is then input into the new objective function to compute the initial particle fitness value; the particle with the largest fitness value is then selected as the global guiding particle; and finally the historical optimal position of the initial particle is obtained.
In step S530 of some embodiments, the plurality of particles are calculated based on the historical optimal position of the initial particle to determine the population initial parameters: first, the position vectors and velocity vectors of the plurality of particles are calculated from the combination of the historical optimal position of the initial particle and the vector parameters of the initial particle; the position vectors are then input into the new objective function to obtain the fitness values of the particles; the historical optimal positions of the particles are determined from those fitness values; and finally, the population initial parameters are determined from the historical optimal positions of the plurality of particles together with that of the initial particle.
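Steps S500 and S600 together form a two-stage, warm-started search: the swarm is first run on the new (surrogate) objective, and its resulting positions serve as the "population initial parameters" for a second run on the original objective. The sketch below is a hedged illustration of that idea only: the smooth f_new stands in for the trained surrogate f_M, and the multimodal f_orig is an invented toy problem, not the patent's filter-design objective:

```python
import numpy as np

def pso_run(f, X0, lb, ub, iters=60, seed=0, w=0.7, c1=1.5, c2=1.5):
    """One PSO run from initial positions X0; returns (final pBests, gBest, best cost)."""
    rng = np.random.default_rng(seed)
    X = X0.copy(); V = np.zeros_like(X)
    pbest = X.copy(); pcost = np.apply_along_axis(f, 1, X)
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2,) + X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lb, ub)
        cost = np.apply_along_axis(f, 1, X)
        imp = cost < pcost
        pbest[imp], pcost[imp] = X[imp], cost[imp]
        g = pbest[pcost.argmin()].copy()
    return pbest, g, float(pcost.min())

rng = np.random.default_rng(1)
n, lb, ub = 2, -5.0, 5.0
f_orig = lambda x: float(np.sum(x**2) + 0.5 * np.sum(np.sin(5 * x) ** 2))  # many local minima
f_new  = lambda x: float(np.sum(x**2))   # smooth stand-in for the surrogate f_M

# Stage 1 (S500): optimize the new objective; the resulting particle
# positions are the "population initial parameters".
X0 = rng.uniform(lb, ub, (30, n))
init_pop, _, _ = pso_run(f_new, X0, lb, ub)

# Stage 2 (S600): optimize the original objective from that warm start.
_, best, val = pso_run(f_orig, init_pop, lb, ub)
```

Because stage 1 already concentrates the swarm near the promising region, stage 2 starts close to the global optimum instead of from random positions.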
In some embodiments, referring to fig. 5, the method of solving the optimization problem may further include, but is not limited to, steps S521 to S524.
S521, obtaining vector parameters of initial particles; the vector parameters include at least: a position vector and a velocity vector;
S522, inputting the initial particle position vector into a new objective function for calculation to obtain an initial particle fitness value;
S523, determining global guide particles according to the initial particle fitness value;
S524, obtaining the history optimal position of the initial particle based on the global guide particle.
In step S521 of some embodiments, the vector parameters of the initial particle are obtained, where the vector parameters include a position vector and a velocity vector. Obtaining the vector parameters of the initial particle follows the same procedure as obtaining the solution space sampling data in step S100: a single particle is generated and its vector parameters are initialized, yielding the position vector and the velocity vector of the initial particle.
Generally, a position vector is a directed line segment from the coordinate origin to the location of a moving particle at a given moment.
The velocity vector describes how that position changes within the effective time, i.e., the rate of change, at a given moment, of the directed line segment from the coordinate origin to the particle's position.
In step S522 of some embodiments, the initial particle position vector is input into the new objective function for calculation to obtain the initial particle fitness value; that is, the position vector p_i (i = 1, 2, …, w) of each initial particle is fed into the artificial neural network M to calculate the corresponding objective function value y_i = f_M(p_i), which is the initial particle fitness value.
In step S523 of some embodiments, the global guiding particle is determined according to the initial particle fitness values; specifically, the initial particle with the largest fitness value is selected as the global guiding particle p_g.
In step S524 of some embodiments, the historical optimal positions of the initial particles are obtained based on the global guiding particle: after the global guiding particle is determined as the initial particle with the maximum fitness value, the historical optimal position of each remaining initial particle is initialized to its current position, h_i = p_i.
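Steps S521 to S524 can be sketched as follows. This is an illustrative Python sketch, not part of the patent text; `surrogate` stands in for the trained artificial neural network f_M, and the bounds `lb`, `ub` correspond to the decision variable interval from step S100.

```python
import numpy as np

def initialize_swarm(w, n, lb, ub, surrogate):
    """Steps S521-S524 as a sketch: random position/velocity vectors,
    fitness under the surrogate objective f_M, the global guiding
    particle p_g, and the initial historical best h_i = p_i."""
    positions = np.random.uniform(lb, ub, size=(w, n))     # position vectors p_i
    velocities = np.zeros((w, n))                          # velocity vectors v_i
    fitness = np.array([surrogate(p) for p in positions])  # y_i = f_M(p_i)
    history_best = positions.copy()                        # h_i = p_i
    guide = positions[np.argmax(fitness)]                  # p_g: largest fitness
    return positions, velocities, fitness, history_best, guide
```

Fitness is maximized here, matching the text's choice of the largest fitness value as the global guiding particle.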
In some embodiments, referring to fig. 6, the solving method of the optimization problem may further include, but is not limited to, steps S531 to S534.
S531, calculating a plurality of particle position vectors and a plurality of particle velocity vectors based on the combination of the initial particle history optimal position and the vector parameters of the initial particles;
S532, inputting a plurality of particle position vectors into a new objective function for calculation to obtain a plurality of particle fitness values;
S533, determining a plurality of particle history optimal positions according to the plurality of particle fitness values;
S534, determining population initial parameters based on the plurality of particle historical optimal positions and the initial particle historical optimal positions.
In step S531 of some embodiments, a plurality of particle position vectors and velocity vectors are calculated based on the combination of the initial particle historical optimal position and the vector parameters of the initial particles. After the initial particle historical optimal position is obtained, the method enters the evolution and update stage, in which the velocity vector and position vector of each particle among the plurality of particles are iteratively updated according to the following formulas:
v_i = ω*v_i + c_1*r*(p_g - p_i) + c_2*r*(h_i - p_i)
p_i = p_i + v_i
wherein ω, c_1 and c_2 are preset parameters, r is a random number in the interval [0, 1], h_i is the historical optimal position of particle i, and p_g is the global guiding particle.
In some embodiments, the main parameters required to implement the present invention are set as follows: ω, c_1 and c_2 are set to 0.6, 2.0 and 2.0, respectively.
According to these formulas, all particles in the population are updated, yielding a plurality of particle position vectors and velocity vectors. The position vectors are used to calculate the objective function value of each particle, from which the historical optimal position of the population (the global guiding particle) and, in turn, the historical optimal position of each particle are updated.
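One update iteration of the formulas above can be sketched as follows. This is an illustrative sketch: the text writes a single random factor r, whereas the sketch draws independent random numbers for the two attraction terms, as many particle swarm implementations do; fitness is maximized, as in the text.

```python
import numpy as np

def pso_step(pos, vel, hist, hist_fit, objective, omega=0.6, c1=2.0, c2=2.0):
    """One iteration of v_i = w*v_i + c1*r*(p_g - p_i) + c2*r*(h_i - p_i)
    and p_i = p_i + v_i, followed by fitness evaluation and the update of
    each particle's historical best h_i and the global guide p_g."""
    w, n = pos.shape
    guide = hist[np.argmax(hist_fit)]        # current global guiding particle p_g
    r1 = np.random.rand(w, n)                # random factors in [0, 1]
    r2 = np.random.rand(w, n)
    vel = omega * vel + c1 * r1 * (guide - pos) + c2 * r2 * (hist - pos)
    pos = pos + vel
    fit = np.array([objective(p) for p in pos])
    improved = fit > hist_fit                # maximization, as in the text
    hist = hist.copy()
    hist_fit = hist_fit.copy()
    hist[improved], hist_fit[improved] = pos[improved], fit[improved]
    return pos, vel, hist, hist_fit
```

Because historical bests are replaced only when the fitness improves, the recorded best values never worsen across iterations.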
In step S532 of some embodiments, a plurality of particle position vectors are input to a new objective function for calculation, resulting in a plurality of particle fitness values. The specific execution is the same as that of step S522 in which the initial particle position vector is input to the new objective function for calculation, and the initial particle fitness value is obtained.
In step S533 of some embodiments, a plurality of particle historical optimal positions are determined according to the plurality of particle fitness values: the fitness values obtained over multiple iterative cycles determine the historical optimal position of each particle.
In step S534 of some embodiments, the population initial parameters are determined based on the plurality of particle historical optimal positions and the initial particle historical optimal position, both of which are obtained through the calculations above.
The population initial parameters can be understood as follows: the new objective function of the new problem is first optimized by the particle swarm algorithm, which determines the local and global optimal solutions of the new objective function, namely the initial particle historical optimal position and the plurality of particle historical optimal positions. When the original problem is subsequently solved, the historical optimal position of each particle in the population is already known, so interference of local optima with the search for the global optimum can be avoided.
In step S600 of some embodiments, based on the population initial parameters, the original objective function corresponding to the original problem is optimized according to the particle swarm algorithm to obtain an optimal solution of the original problem. Specifically, the particle swarm algorithm is interrupted, the new objective function corresponding to the new problem is replaced by the original objective function corresponding to the original problem, and the particle swarm algorithm then continues the optimization. At this point, however, the initial position of each particle in the population is no longer random but is set to its position vector before the algorithm was interrupted; that is, the original objective function is optimized starting from the population initial parameters obtained in step S534, thereby obtaining the optimal solution of the problem.
In some embodiments, referring to fig. 7, the solving method of the optimization problem may include, but is not limited to, steps S610 to S620.
S610, determining an original objective function corresponding to the original problem according to the solution space sampling data of the original problem;
S620, optimizing the original objective function according to the particle swarm algorithm based on the population initial parameters to obtain an optimal solution of the original problem.
In step S610 of some embodiments, the original objective function corresponding to the original problem is determined according to the solution space sampling data of the original problem; this original objective function is Y = f(X) from step S100.
In step S620 of some embodiments, the original objective function is optimized according to the particle swarm algorithm based on the population initial parameters to obtain an optimal solution of the original problem. Specifically, the new objective function corresponding to the new problem processed by the particle swarm algorithm is replaced by the original objective function corresponding to the original problem; that is, the new objective function Y = f_M(X) of step S420 is replaced by the original objective function Y = f(X) of step S100, with everything else left unchanged. The particle swarm algorithm is then restarted to optimize the original objective function. At this point, the initial position of each particle in the population is not set randomly but is set to the particle's position vector before the particle swarm algorithm was interrupted. Each particle corresponding to the original problem in the population then undergoes multiple iterative cycles of updating and optimization, thereby obtaining the optimal solution of the original problem.
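The two-phase procedure of steps S610 to S620 can be sketched end to end as follows. This is an illustrative sketch under two stated assumptions: fitness is maximized, and when the objective is swapped the historical bests are re-evaluated under the original objective f, while positions and velocities carry over unchanged (the text only requires that the positions carry over).

```python
import numpy as np

def two_phase_pso(f_M, f, n, lb, ub, w=20, iters=(50, 50),
                  omega=0.6, c1=2.0, c2=2.0, seed=None):
    """Phase 1 maximizes the surrogate objective f_M; the objective is then
    swapped for the original f and the SAME swarm continues -- positions
    and velocities are kept, only the fitness bookkeeping is rebuilt."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, (w, n))
    vel = np.zeros((w, n))
    for obj, n_iter in zip((f_M, f), iters):
        fit = np.array([obj(p) for p in pos])
        best_pos, best_fit = pos.copy(), fit.copy()   # rebuild h_i under obj
        for _ in range(n_iter):
            g = best_pos[np.argmax(best_fit)]         # global guiding particle
            r1, r2 = rng.random((w, n)), rng.random((w, n))
            vel = omega * vel + c1 * r1 * (g - pos) + c2 * r2 * (best_pos - pos)
            pos = pos + vel
            fit = np.array([obj(p) for p in pos])
            better = fit > best_fit
            best_pos[better], best_fit[better] = pos[better], fit[better]
    return best_pos[np.argmax(best_fit)]
```

A usage example would pass the trained surrogate as `f_M` and the original objective as `f`; the returned vector is the best position found under the original objective.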
In some embodiments, example one: when analyzing the topology of a complex network, the network often needs to be divided into community structures. A common method is to model network community division as an optimization problem: the community structure of a network is characterized as a decision variable, and the optimal decision variable is obtained by maximizing the modularity index of the network, yielding the optimal community structure. However, the solution space of this optimization problem is very complex and contains a large number of local maxima, so it can be solved by the method proposed by the invention. First, the solution space of the problem is sampled: a certain number of feasible solutions are randomly generated and the modularity of the corresponding community divisions is calculated, forming training data. Then, an artificial neural network is trained on this data to fit the solution space of the original problem. Next, a particle swarm algorithm maximizes the output value of the neural network; after the algorithm has executed several generations of loops, the objective function is replaced with the objective function of the original problem, the current population is taken as the initial population, and the same algorithm optimizes the original problem for several more generations, so that the optimal individual in the population represents the optimal community division of the network.
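For example one, the modularity index serving as the objective function can be sketched as follows. The Newman modularity formula used here is a standard choice and an assumption, since the text does not specify which modularity variant is meant.

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a community assignment for an undirected
    graph with adjacency matrix `adj`; `labels[i]` is the community of
    node i. Maximizing Q over `labels` is the optimization problem."""
    m = adj.sum() / 2.0                     # number of edges
    k = adj.sum(axis=1)                     # node degrees
    same = np.equal.outer(labels, labels)   # True if nodes share a community
    return float(((adj - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m))
```

Placing every node in one community gives Q = 0 exactly, while a division that groups densely connected nodes together gives Q > 0, which is what the particle swarm would maximize.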
In some embodiments, example two: the design of two-dimensional infinite impulse response digital filters is also commonly treated as an optimization problem. A common approach is to characterize all parameters of the filter to be designed as one solution, and then minimize the mean square error between the response of the filter and that of the ideal filter at each discrete frequency domain point, ultimately yielding the optimal parameters of the filter. The solution space of this optimization problem is likewise very complex and contains a large number of local minima, so it can be solved by the method provided by the invention, in a manner similar to example one: first, an optimization problem similar to the original one is constructed using an artificial neural network, and the constructed problem is then optimized by a particle swarm algorithm. After the population has been updated for a certain number of generations, the optimized objective function is replaced with the objective function of the original problem and optimization continues for several more generations, so that the optimal individual in the population represents the optimal parameter combination of the filter.
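For example two, the objective to be minimized can be sketched as a mean-squared error over discrete frequency points. The function `response_fn`, the frequency grid, and the ideal response below are illustrative placeholders, since the text does not spell out the 2-D IIR transfer-function details.

```python
def filter_design_error(params, freq_grid, ideal, response_fn):
    """Mean-squared error between a candidate filter's magnitude response
    and the ideal magnitude response over discrete frequency points
    (w1, w2); `response_fn(params, w1, w2)` evaluates the candidate 2-D
    transfer function and is an illustrative placeholder."""
    total = 0.0
    for (w1, w2), h_ideal in zip(freq_grid, ideal):
        total += (abs(response_fn(params, w1, w2)) - h_ideal) ** 2
    return total / len(freq_grid)
```

In the two-phase scheme, this error (or its negation, for maximization) would play the role of the original objective function f, with the filter parameter vector as the decision variable.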
In some embodiments, a solution system for an optimization problem includes: the acquisition module is used for acquiring the original problem and the solution space sampling data of the original problem; the training module is used for training the solution space sampling data according to the artificial neural network and determining model training parameters; the construction module is used for constructing a corresponding new problem according to the original problem; the determining module is used for determining a new objective function corresponding to the new problem according to the model training parameters; constructing the new problem by modifying the original problem; the optimizing module is used for optimizing the new objective function according to a particle swarm algorithm and determining initial parameters of the population; and the solving module is used for optimizing the original objective function corresponding to the original problem according to the particle swarm algorithm based on the population initial parameters to obtain an optimal solution of the original problem.
In some embodiments, the solving means of the optimization problem includes: a processor and a memory, wherein the memory is for storing an executable program which, when executed, performs a method of solving the optimization problem as described above.
In some embodiments, the storage medium stores executable instructions that are executable by a computer.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present disclosure are for more clearly describing the technical solutions of the embodiments of the present disclosure, and do not constitute a limitation on the technical solutions provided by the embodiments of the present disclosure, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present disclosure are equally applicable to similar technical problems.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory RAM), a magnetic disk, or an optical disk, or other various media capable of storing a program.
Preferred embodiments of the disclosed embodiments are described above with reference to the accompanying drawings, and thus do not limit the scope of the claims of the disclosed embodiments. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present disclosure shall fall within the scope of the claims of the embodiments of the present disclosure.

Claims (10)

1. A solving method of an optimization problem, applied to the design of a two-dimensional infinite impulse response digital filter, characterized by comprising the following steps:
Acquiring an original problem and solution space sampling data of the original problem; the original problem is a design problem of a two-dimensional infinite impulse response digital filter, the original problem comprises n decision variables, n is the dimension of the original problem P, the value range of each decision variable is the interval [l_b, u_b], and f(X) is the objective function of the original problem P; wherein obtaining the solution space sampling data comprises: in the design of a two-dimensional infinite impulse response digital filter, all parameters of the filter to be designed are characterized as one solution, and N feasible solutions are randomly generated first: X_1, X_2, …, X_N; each feasible solution X_i (i = 1, 2, …, N) is characterized as a vector X_i = [x_1, x_2, …, x_n], where each decision variable x_j (j = 1, 2, …, n) is a value randomly generated within the decision variable interval [l_b, u_b]; the vector X_i = [x_1, x_2, …, x_n] characterizing each feasible solution is substituted into the objective function f(X) of the original problem to obtain the objective function value Y_i = f(X_i) (i = 1, 2, …, N) corresponding to each feasible solution; the generated data Y = {Y_1, Y_2, …, Y_N} and X = {X_1, X_2, …, X_N} form a training dataset DT = {X, Y}; all feasible solutions form an n-dimensional decision space R_p, and each data pair {X_i, Y_i} in the decision space R_p is called one piece of solution space sampling data; the training dataset DT = {X, Y} constitutes the solution space sampling data;
training the solution space sampling data according to an artificial neural network, and determining model training parameters to obtain a trained proxy model;
constructing a corresponding new problem according to the original problem;
Determining a new objective function corresponding to the new problem according to the model training parameters; each input node of the proxy model represents an input decision variable x_i, each output node of the proxy model represents the corresponding objective function value Y_i, and the input-output mapping relation thus obtained is determined as the new objective function, expressed as Y = f_M(X);
optimizing the new objective function according to a particle swarm algorithm, and determining initial parameters of the population;
based on the population initial parameters, optimizing an original objective function corresponding to the original problem according to a particle swarm algorithm to obtain an optimal solution of the original problem; wherein the optimal solution represents an optimal combination of parameters of the filter.
2. The method of claim 1, wherein the training the solution space sampling data from the artificial neural network to determine model training parameters comprises:
Acquiring an initial artificial neural network;
Inputting the solution space sampling data into the artificial neural network for training;
and determining the model training parameters according to the training.
3. The method for solving the optimization problem according to claim 1, wherein determining a new objective function corresponding to the new problem according to the model training parameters comprises:
constructing the new problem;
and determining a new objective function corresponding to the new problem according to the model training parameters.
4. The method of claim 1, wherein optimizing the new objective function according to a particle swarm algorithm, determining population initial parameters, comprises:
Acquiring an initial population; the population comprises at least an initial particle and a plurality of particles;
determining the historical optimal position of the initial particles according to the calculation of the population;
and calculating the plurality of particles based on the initial particle history optimal position of the population, and determining a population initial parameter.
5. The method of claim 4, wherein said determining initial particle historical optimal locations from computing the population comprises:
acquiring vector parameters of the initial particles; the vector parameters include at least: a position vector and a velocity vector;
inputting the initial particle position vector into the new objective function for calculation to obtain an initial particle fitness value;
determining global guide particles according to the initial particle fitness value;
and obtaining the history optimal position of the initial particle based on the global guide particle.
6. The method of claim 4, wherein the computing the plurality of particles based on the initial particle historical optimal position of the population to determine a population initial parameter comprises:
calculating a plurality of particle position vectors and a plurality of particle velocity vectors based on a combination of the initial particle historical optimal position and vector parameters of the initial particles;
Inputting the plurality of particle position vectors into the new objective function for calculation to obtain a plurality of particle fitness values;
determining a plurality of particle history optimal positions according to the plurality of particle fitness values;
The population initial parameters are determined based on the plurality of particle historical optimal positions and the initial particle historical optimal positions.
7. The method for solving the optimization problem according to any one of claims 1 to 6, wherein optimizing the original objective function corresponding to the original problem according to the particle swarm algorithm based on the population initial parameters to obtain an optimal solution of the original problem comprises:
determining an original objective function corresponding to the original problem according to the solution space sampling data of the original problem;
And optimizing the original objective function according to a particle swarm algorithm based on the population initial parameters to obtain an optimal solution of the original problem.
8. A solving system of an optimization problem, applied to the design of a two-dimensional infinite impulse response digital filter, characterized by comprising:
The acquisition module is used for acquiring an original problem and solution space sampling data of the original problem; the original problem is a design problem of a two-dimensional infinite impulse response digital filter, the original problem comprises n decision variables, n is the dimension of the original problem P, the value range of each decision variable is the interval [l_b, u_b], and f(X) is the objective function of the original problem P; wherein obtaining the solution space sampling data comprises: in the design of a two-dimensional infinite impulse response digital filter, all parameters of the filter to be designed are characterized as one solution, and N feasible solutions are randomly generated first: X_1, X_2, …, X_N; each feasible solution X_i (i = 1, 2, …, N) is characterized as a vector X_i = [x_1, x_2, …, x_n], where each decision variable x_j (j = 1, 2, …, n) is a value randomly generated within the decision variable interval [l_b, u_b]; the vector X_i = [x_1, x_2, …, x_n] characterizing each feasible solution is substituted into the objective function f(X) of the original problem to obtain the objective function value Y_i = f(X_i) (i = 1, 2, …, N) corresponding to each feasible solution; the generated data Y = {Y_1, Y_2, …, Y_N} and X = {X_1, X_2, …, X_N} form a training dataset DT = {X, Y}; all feasible solutions form an n-dimensional decision space R_p, and each data pair {X_i, Y_i} in the decision space R_p is called one piece of solution space sampling data; the training dataset DT = {X, Y} constitutes the solution space sampling data;
the training module is used for training the solution space sampling data according to the artificial neural network, determining model training parameters and obtaining a trained agent model;
the construction module is used for constructing a corresponding new problem according to the original problem;
The determining module is used for determining a new objective function corresponding to the new problem according to the model training parameters; the new problem is constructed by modifying the original problem; each input node of the proxy model represents an input decision variable x_i, each output node of the proxy model represents the corresponding objective function value Y_i, and the input-output mapping relation thus obtained is determined as the new objective function, expressed as Y = f_M(X);
The optimizing module is used for optimizing the new objective function according to a particle swarm algorithm and determining initial parameters of the population;
the solving module is used for optimizing the original objective function corresponding to the original problem according to the particle swarm algorithm based on the population initial parameters to obtain an optimal solution of the original problem; wherein the optimal solution represents an optimal combination of parameters of the filter.
9. An optimization problem solving apparatus, comprising:
A processor;
a memory for storing an executable program;
The solving apparatus of the optimization problem implements the solving method of the optimization problem according to any one of claims 1 to 7 when the executable program is executed by the processor.
10. A storage medium storing executable instructions executable by a computer to cause the computer to perform the method of solving an optimization problem according to any one of claims 1 to 7.
CN202110775130.2A 2021-07-08 2021-07-08 Method, system, device and storage medium for solving optimization problem Active CN113657589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110775130.2A CN113657589B (en) 2021-07-08 2021-07-08 Method, system, device and storage medium for solving optimization problem

Publications (2)

Publication Number Publication Date
CN113657589A CN113657589A (en) 2021-11-16
CN113657589B true CN113657589B (en) 2024-05-14

Family

ID=78489298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110775130.2A Active CN113657589B (en) 2021-07-08 2021-07-08 Method, system, device and storage medium for solving optimization problem

Country Status (1)

Country Link
CN (1) CN113657589B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114595641A (en) * 2022-05-09 2022-06-07 支付宝(杭州)信息技术有限公司 Method and system for solving combined optimization problem

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101740029A (en) * 2009-12-16 2010-06-16 深圳大学 Three-particle cooperative optimization method applied to vector quantization-based speaker recognition
CN102005135A (en) * 2010-12-09 2011-04-06 上海海事大学 Genetic algorithm-based support vector regression shipping traffic flow prediction method
CN112308288A (en) * 2020-09-29 2021-02-02 百维金科(上海)信息科技有限公司 Particle swarm optimization LSSVM-based default user probability prediction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Swarm Intelligence Optimization Algorithms; Shi Yuhui et al.; Journal of Zhengzhou University (Engineering Science); 2018-12-31; Vol. 39, No. 6; pp. 1-2 *

Similar Documents

Publication Publication Date Title
Wang et al. Exploring model-based planning with policy networks
Baker et al. Designing neural network architectures using reinforcement learning
Bloembergen et al. Evolutionary dynamics of multi-agent learning: A survey
WO2020024170A1 (en) Nash equilibrium strategy and social network consensus evolution model in continuous action space
JP6042274B2 (en) Neural network optimization method, neural network optimization apparatus and program
Liu et al. Feature selection and feature learning for high-dimensional batch reinforcement learning: A survey
Ibrahim et al. An improved runner-root algorithm for solving feature selection problems based on rough sets and neighborhood rough sets
Abd-Alsabour A review on evolutionary feature selection
Gu et al. Particle swarm optimized autonomous learning fuzzy system
Brajevic et al. Multilevel image thresholding selection based on the cuckoo search algorithm
Majhi et al. Oppositional Crow Search Algorithm with mutation operator for global optimization and application in designing FOPID controller
CN113657589B (en) Method, system, device and storage medium for solving optimization problem
Hafez et al. Topological Q-learning with internally guided exploration for mobile robot navigation
Liu et al. The eigenoption-critic framework
Ma et al. Opponent portrait for multiagent reinforcement learning in competitive environment
Dhivyaprabha et al. Computational intelligence based machine learning methods for rule-based reasoning in computer vision applications
Chen et al. Individual-level inverse reinforcement learning for mean field games
Hayes et al. Monte Carlo tree search algorithms for risk-aware and multi-objective reinforcement learning
Marwala et al. Handbook of machine learning: Volume 2: Optimization and decision making
Zivkovic et al. Chaotic binary ant lion optimizer approach for feature selection on medical datasets with covid-19 case study
Araya-López et al. Active learning of MDP models
Jia et al. Model gradient: unified model and policy learning in model-based reinforcement learning
Levner et al. Automated feature extraction for object recognition
Hwang et al. Induced states in a decision tree constructed by Q-learning
Retyk On Meta-Reinforcement Learning in task distributions with varying dynamics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant