CN108038538A - Multi-objective Evolutionary Algorithm based on reinforcement learning - Google Patents

Multi-objective Evolutionary Algorithm based on reinforcement learning

Info

Publication number
CN108038538A
Authority
CN
China
Prior art keywords
value
population
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711279238.2A
Other languages
Chinese (zh)
Inventor
郭宝龙
郭新兴
宁伟康
李�诚
安陆
闫允
闫允一
陈祖铭
李星星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201711279238.2A
Publication of CN108038538A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a multi-objective evolutionary algorithm based on reinforcement learning. An initial population is randomly generated from the search space and the obtained population is evaluated. For a population that does not satisfy the termination condition, new values are produced using the DE-variant operator and the T operator selected by the reinforcement-learning (RL) controller, and these values are crossed and mutated with the values of their neighborhoods to produce new solutions. Each new solution is compared with the corresponding solution of the original population, and the solution that gives the subproblem function its optimal value is selected to update the population. Using the generated new population, a new 5-dimensional observation vector and a return value R are computed, and the state of the RL controller is updated accordingly. The algorithm checks the termination condition and, if it is not satisfied, keeps iterating until the condition is met, then terminates. The present invention solves the problem that MOEA/D is insensitive to the adjustment of the neighborhood-size parameter T.

Description

Multi-objective Evolutionary Algorithm based on reinforcement learning
Technical field
The present invention relates to the field of science and engineering technology, and more particularly to a multi-objective evolutionary algorithm based on reinforcement learning.
Background technology
In science and engineering there exist a large number of multi-objective optimization problems (Multi-Objective Optimization Problem, MOP). Unlike a single-objective optimization problem (Single-objective Optimization Problem, SOP), the optimum of an MOP is a set of so-called Pareto-optimal solutions. Traditional multi-objective optimization methods include the weighting method, the constraint method, goal programming and Chebyshev approximation. These methods all convert the MOP into an SOP; their drawback is that they require sufficient prior knowledge, have difficulty handling objective noise, and have poor robustness. Because the objective functions and constraint functions of a multi-objective optimization problem may be nonlinear, non-differentiable or discontinuous, traditional mathematical programming methods are often inefficient, and they are sensitive to the weight values or the ordering of the objectives that they are given.
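As a standard illustration of how these methods convert an MOP into an SOP (the two scalarizations below are textbook forms and are not quoted from the patent), the weighting method replaces the MOP min_x F(x) = (f_1(x), ..., f_m(x)) by the single scalar problem

\min_{x} \sum_{i=1}^{m} w_i f_i(x), \qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1,

while the Chebyshev approximation instead minimizes \max_{1 \le i \le m} w_i \lvert f_i(x) - z_i^{*} \rvert for a chosen reference point z^*. In both cases the weights w_i must be fixed in advance, which is precisely the prior knowledge referred to above.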
2. An evolutionary algorithm (Evolutionary Algorithm, EA) is a stochastic global optimization method that simulates the process of natural evolution. An EA searches for solutions to a problem through collective search and information exchange among individuals of a population. Because of the inherent parallelism of EAs, multiple Pareto-optimal solutions can be found in a single simulation run. Compared with traditional algorithms, their advantages are: first, the evolutionary search is stochastic and is not easily trapped in local optima; second, an EA is inherently parallel and can evolve multiple solutions at the same time, which makes it well suited to multi-objective optimization; third, it can handle discontinuous, non-differentiable and non-convex Pareto fronts without requiring much prior knowledge.
3. Algorithms based on the Pareto-dominance mechanism use different fitness assignment strategies and selection mechanisms; various schemes are adopted to maintain population diversity and avoid premature convergence, so that the solutions obtained by the algorithm are distributed uniformly along the Pareto front.
4. Owing to these advantages of multi-objective evolutionary algorithms as efficient and robust multi-objective optimizers, MOEAs have been widely applied in many fields of science and engineering, including control engineering, system design, production scheduling and data mining.
5. MOEA/D decomposes an MOP into N scalar subproblems and solves all of them simultaneously by evolving a single population of solutions. In every generation, the population is the set of the best solutions found so far for each subproblem. The degree of relatedness between two neighboring subproblems is determined by the distance between their aggregation weight vectors; for two neighboring subproblems, the optimal solutions should be very similar. Each subproblem is optimized using only information from its neighboring subproblems.
MOEA/D has the following characteristics:
MOEA/D provides a simple yet effective way of incorporating decomposition methods into multi-objective evolutionary computation. Decomposition methods, which were mostly developed in the field of mathematical programming, can thus be embedded into an EA, and MOP problems can be solved within the MOEA/D framework. Because MOEA/D optimizes N scalar subproblems simultaneously rather than treating the MOP as a single whole, the difficulties of fitness assignment and diversity control faced by traditional MOEAs that are not based on decomposition are reduced within the MOEA/D framework.
6. However, MOEA/D still has a deficiency: it is insensitive to the adjustment of the neighborhood-size parameter T. When T is small the search lacks breadth, when T is large it lacks depth, and the adaptive adjustment capability is poor.
The content of the invention
The object of the present invention is to overcome the above shortcomings of the prior art and to provide a multi-objective evolutionary algorithm based on reinforcement learning, so as to solve the above technical problem.
To achieve the above object, the present invention adopts the following technical scheme:
A multi-objective evolutionary algorithm based on reinforcement learning includes the following steps:
Step 1: randomly generate an initial population from the search space;
Step 2: evaluate the obtained population according to the evaluation criterion;
Step 3: update the optimal value of the objective function found so far;
Step 4: compare the generated approximate solution Z* against the termination condition, and terminate if it is satisfied; for a population that does not satisfy the termination condition, use the DE-variant operator and the T operator selected by the reinforcement-learning (RL) controller to produce new values, and cross and mutate them with the values of their neighborhoods to produce new solutions;
Step 5: compare each new solution with the corresponding solution of the original population, select the solution that gives the subproblem function its optimal value, and use it to update the population;
Step 6: using the generated new population, compute a new 5-dimensional observation vector and the return value R, update the state of the RL controller accordingly, and check the termination condition; if it is not satisfied, keep iterating until the termination condition is met, then terminate.
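For orientation, the following compact, self-contained Python sketch shows how steps 1 to 6 interlock. It is illustrative only: the toy 2-objective function F, the epsilon-greedy tabular controller, the one-bit observation used as the controller state, and all parameter values are assumptions made for the sketch rather than details disclosed by the patent. The individual steps are expanded in the preferred embodiment below.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):                                    # toy 2-objective problem on [0, 1]^n (assumed)
    return np.array([x[0], 1.0 - np.sqrt(x[0]) + np.sum(x[1:] ** 2)])

def g_te(fx, lam, z):                        # Tchebycheff aggregation g^te(x | lambda, z*)
    return np.max(lam * np.abs(fx - z))

N, n, T_choices, gens, eps = 50, 5, (5, 15), 100, 0.1
w = np.linspace(1e-6, 1.0, N)
lams = np.stack([w, 1.0 - w], axis=1)                         # weight vectors lambda^i
dist = np.linalg.norm(lams[:, None] - lams[None, :], axis=2)  # pairwise Euclidean distances
pop = rng.random((N, n))                                      # step 1: initial population
fpop = np.array([F(x) for x in pop])                          # step 2: evaluate
z = fpop.min(axis=0)                                          # step 3: ideal point Z*
Q, state = np.zeros((2, len(T_choices))), 0                   # tabular RL controller

for gen in range(gens):                                       # step 4: iterate until done
    # controller picks the neighborhood size T (epsilon-greedy over Q)
    a = rng.integers(len(T_choices)) if rng.random() < eps else int(np.argmax(Q[state]))
    T = T_choices[a]
    B = np.argsort(dist, axis=1)[:, :T]                       # neighborhoods B(i)
    g_prev = np.array([g_te(fpop[i], lams[i], z) for i in range(N)])
    for i in range(N):
        h, k = rng.choice(B[i], 2, replace=False)
        child = np.clip(pop[i] + 0.5 * (pop[h] - pop[k]), 0.0, 1.0)   # DE-style variation
        mask = rng.random(n) < 0.1                                    # low-probability mutation
        child = np.clip(np.where(mask, child + rng.normal(0.0, 0.1, n), child), 0.0, 1.0)
        fc = F(child)
        z = np.minimum(z, fc)                                  # step 3: update Z*
        for j in B[i]:                                         # step 5: neighborhood update
            if g_te(fc, lams[j], z) <= g_te(fpop[j], lams[j], z):
                pop[j], fpop[j] = child, fc
    g_curr = np.array([g_te(fpop[i], lams[i], z) for i in range(N)])
    R = float(np.sum((g_prev - g_curr) / np.maximum(g_prev, 1e-12)))  # step 6: reward R
    nxt = int(R > 0)                                           # crude stand-in for the observation vector
    Q[state, a] += 0.1 * (R + 0.9 * np.max(Q[nxt]) - Q[state, a])     # controller update
    state = nxt

print("ideal point z* =", z)
```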
Preferably, step 1 specifically includes the following steps:
Step 1.1: compute the Euclidean distance between any two weight vectors and find, for each weight vector, the T weight vectors closest to it, where T is the number of weight vectors in each neighborhood; for each i = 1, ..., N, let B(i) = {i_1, ..., i_T}, where λ^{i_1}, ..., λ^{i_T} are the T weight vectors closest to λ^i;
Step 1.2: establish an external population EP for storing the non-dominated solutions found while searching for the optimal solution, and initialize EP as empty;
Step 1.3: generate, uniformly at random from the search space, solutions that optimize the objective function F(x) = (f_1(x), f_2(x), ..., f_m(x)) as the initial population, where i = 1, 2, ..., m, X is the set of decision vectors and x is the decision variable;
Step 1.4: using the Chebyshev method, decompose the objective function F(x) into N scalar subproblems of the form

g^{te}(x \mid \lambda^{j}, z^{*}) = \max_{1 \le i \le m} \lambda_{i}^{j} \, \lvert f_i(x) - z_i^{*} \rvert, \quad j = 1, \ldots, N,

where the neighborhood relation of the i-th subproblem is represented by the weight vector λ^i among all subproblems, and Z* is the best objective vector that can currently be found, also called the approximate solution, Z* = min{(f_1(x), f_2(x), ..., f_m(x))}.
Preferably, in step 4 the produced value and the values of its neighborhood are subjected to the following operations to produce a new solution. Step 4.1, selection: randomly select two indices h and k from B(i) and use the genetic operator to produce a new value from x^h and x^k, where x^h is the current best solution of the h-th subproblem and x^k is the current best solution of the k-th subproblem; compare the produced value with the values of its neighborhood, perform survival-of-the-fittest elimination, and keep the values with the highest fitness to pass on to the next generation;
Step 4.2, crossover: pair up individuals in the population and perform gene crossover to produce new individuals;
Step 4.3, mutation: apply a low-probability mutation operation to the gene values.
Preferably, in step 6 the return value R is obtained from the following formula:

R = \sum_{i=1}^{N} \frac{ g^{te}(x_{t-1}^{i} \mid \lambda^{i}, z^{*}) - g^{te}(x_{t}^{i} \mid \lambda^{i}, z^{*}) }{ g^{te}(x_{t-1}^{i} \mid \lambda^{i}, z^{*}) }
The beneficial effects of the invention are as follows: the invention introduces a reinforcement learning mechanism and uses an RL controller for continuous optimization, so that the parameter adjustment becomes adaptive; using the operators selected by the reinforcement-learning RL controller, the population is continuously optimized toward optimal values according to the maximum reward R and the 5-dimensional observation vector until the termination condition is met, which effectively solves the problem that MOEA/D is insensitive to the adjustment of the parameter T.
Brief description of the drawings
Fig. 1 is a flow diagram of the method of the present invention.
Fig. 2 shows the verification results of the present invention on the test problem UF3.
Fig. 3 shows the verification results of the present invention on the test problem UF7.
Embodiment
The present invention is further elaborated below with reference to the accompanying drawings and a specific embodiment.
As shown in Fig. 1, the multi-objective evolutionary algorithm based on reinforcement learning includes the following steps:
Step 1: randomly generate an initial population from the search space;
Step 1.1: compute the Euclidean distance between any two weight vectors and find, for each weight vector, the T weight vectors closest to it, where T is the number of weight vectors in each neighborhood; for each i = 1, ..., N, let B(i) = {i_1, ..., i_T}, where λ^{i_1}, ..., λ^{i_T} are the T weight vectors closest to λ^i;
Step 1.2: establish an external population EP for storing the non-dominated solutions found while searching for the optimal solution, and initialize EP as empty;
Step 1.3: generate, uniformly at random from the search space, solutions that optimize the objective function F(x) = (f_1(x), f_2(x), ..., f_m(x)) as the initial population, where i = 1, 2, ..., m, X is the set of decision vectors and x is the decision variable;
Step 1.4: using the Chebyshev method, decompose the objective function F(x) into N scalar subproblems of the form

g^{te}(x \mid \lambda^{j}, z^{*}) = \max_{1 \le i \le m} \lambda_{i}^{j} \, \lvert f_i(x) - z_i^{*} \rvert, \quad j = 1, \ldots, N,

where the neighborhood relation of the i-th subproblem is represented by the weight vector λ^i among all subproblems, and Z* is the best objective vector that can currently be found, also called the approximate solution, Z* = min{(f_1(x), f_2(x), ..., f_m(x))}.
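A minimal sketch of steps 1.1 and 1.4 for a 2-objective problem, assuming the Tchebycheff form of g^te written above; the helper names init_neighborhoods and tchebycheff are illustrative and do not come from the patent.

```python
import numpy as np

def init_neighborhoods(N, T):
    """Step 1.1: N evenly spread 2-objective weight vectors and their T nearest neighbors B(i)."""
    w = np.linspace(0.0, 1.0, N)
    weights = np.stack([w, 1.0 - w], axis=1)                                 # lambda^i, shape (N, 2)
    dist = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)  # pairwise Euclidean distances
    B = np.argsort(dist, axis=1)[:, :T]                                      # indices of the T closest weight vectors
    return weights, B

def tchebycheff(fx, lam, z_star):
    """Step 1.4: g^te(x | lambda^j, z*) = max_i lambda^j_i * |f_i(x) - z*_i|."""
    return np.max(lam * np.abs(fx - z_star))

weights, B = init_neighborhoods(N=300, T=20)
print(B[0])                                                                   # neighborhood of subproblem 0
print(tchebycheff(np.array([0.4, 0.9]), weights[10], np.zeros(2)))            # toy objective vector, z* = (0, 0)
```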
Step 2: evaluate the obtained population according to the evaluation criterion;
Step 3: update the optimal value of the objective function found so far;
Step 4: compare the generated approximate solution Z* against the termination condition, and terminate if it is satisfied; for a population that does not satisfy the termination condition, use the DE-variant operator and the T operator selected by the reinforcement-learning (RL) controller to produce new values, and cross and mutate them with the values of their neighborhoods to produce new solutions;
The produced value and the values of its neighborhood are subjected to the following operations to produce a new solution:
Step 4.1, selection: randomly select two indices h and k from B(i) and use the genetic operator to produce a new value from x^h and x^k, where x^h is the current best solution of the h-th subproblem and x^k is the current best solution of the k-th subproblem; compare the produced value with the values of its neighborhood, perform survival-of-the-fittest elimination, and keep the values with the highest fitness to pass on to the next generation;
Step 4.2, crossover: pair up individuals in the population and perform gene crossover to produce new individuals;
Step 4.3, mutation: apply a low-probability mutation operation to the gene values.
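A minimal sketch of the variation operators of steps 4.1 to 4.3. The patent names a DE-variant operator, gene crossover, and low-probability mutation but does not fix their exact formulas, so the rand/1-style differential variation, the binomial crossover rate CR, the scale factor F, and the Gaussian perturbation below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_variant(x_i, x_h, x_k, F=0.5):
    # differential variation built from the current solution and two neighborhood solutions
    return x_i + F * (x_h - x_k)

def crossover(parent, donor, CR=0.9):
    # binomial crossover: take each gene from the donor with probability CR
    mask = rng.random(parent.shape) < CR
    return np.where(mask, donor, parent)

def mutate(x, lower, upper, pm=0.1):
    # low-probability mutation: perturb each gene with probability pm, then clip to the bounds
    mask = rng.random(x.shape) < pm
    x = np.where(mask, x + rng.normal(0.0, 0.1, x.shape), x)
    return np.clip(x, lower, upper)

# toy usage on a 5-dimensional decision vector in [0, 1]^5
x_i, x_h, x_k = rng.random(5), rng.random(5), rng.random(5)
donor = de_variant(x_i, x_h, x_k)
child = mutate(crossover(x_i, donor), 0.0, 1.0)
print(child)
```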
Step 5: compare each new solution with the corresponding solution of the original population, select the solution that gives the subproblem function its optimal value, and use it to update the population;
Step 6: using the generated new population, compute a new 5-dimensional observation vector and the return value R, update the state of the RL controller accordingly, and check the termination condition; if it is not satisfied, keep iterating until the termination condition is met, then terminate.
The return value R is obtained from the following formula:

R = \sum_{i=1}^{N} \frac{ g^{te}(x_{t-1}^{i} \mid \lambda^{i}, z^{*}) - g^{te}(x_{t}^{i} \mid \lambda^{i}, z^{*}) }{ g^{te}(x_{t-1}^{i} \mid \lambda^{i}, z^{*}) }
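A minimal sketch of step 6. The reward R is computed exactly as in the formula above from the per-subproblem g^te values before and after the update; the RL controller itself is sketched as an epsilon-greedy tabular Q-learning agent choosing among candidate (operator, T) pairs, which is an assumption for illustration, since the patent does not disclose the exact controller update rule or the contents of the 5-dimensional observation vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(g_prev, g_curr):
    # R = sum_i (g^te_{t-1} - g^te_t) / g^te_{t-1}, per the formula above
    g_prev = np.asarray(g_prev, dtype=float)
    g_curr = np.asarray(g_curr, dtype=float)
    return float(np.sum((g_prev - g_curr) / g_prev))

class RLController:
    def __init__(self, n_states, actions, eps=0.1, alpha=0.1, gamma=0.9):
        self.actions = actions                       # e.g. candidate (operator, T) pairs
        self.Q = np.zeros((n_states, len(actions)))
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def select(self, state):
        # epsilon-greedy action selection
        if rng.random() < self.eps:
            return int(rng.integers(len(self.actions)))
        return int(np.argmax(self.Q[state]))

    def update(self, state, action, r, next_state):
        # one-step Q-learning update of the controller's state-action values
        target = r + self.gamma * np.max(self.Q[next_state])
        self.Q[state, action] += self.alpha * (target - self.Q[state, action])

# toy usage: 4 discretized states standing in for the 5-dimensional observation vector
ctrl = RLController(n_states=4, actions=[("DE-variant", 10), ("DE-variant", 30)])
a = ctrl.select(state=0)
R = reward(g_prev=[1.0, 0.8, 0.6], g_curr=[0.9, 0.7, 0.6])
ctrl.update(state=0, action=a, r=R, next_state=1)
print(a, R)
```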
As shown in Figs. 2 and 3, two standard test problems, UF3 and UF7, were chosen to verify the effectiveness of the algorithm. Both UF3 and UF7 are 2-objective optimization problems, and the population size was set to 300. The test results show that, with respect to the adjustment of the parameter T, the multi-objective optimization algorithm based on reinforcement learning outperforms the MOEA/D algorithm.
The invention introduces a reinforcement learning mechanism and uses an RL controller for continuous optimization, so that the parameter adjustment becomes adaptive; using the operators selected by the RL controller, the population is continuously optimized toward optimal values according to the maximum reward R and the 5-dimensional observation vector until the termination condition is met, which solves the problem that MOEA/D is insensitive to the adjustment of the parameter T.
The above is a preferred embodiment of the present invention. For those of ordinary skill in the art, changes, modifications, substitutions and variations made to the embodiment according to the teaching of the present invention, without departing from the principle and spirit of the present invention, still fall within the protection scope of the present invention.

Claims (4)

1. A multi-objective evolutionary algorithm based on reinforcement learning, characterized by comprising the following steps:
Step 1: randomly generate an initial population from the search space;
Step 2: evaluate the obtained population according to the evaluation criterion;
Step 3: update the optimal value of the objective function found so far;
Step 4: compare the generated approximate solution Z* against the termination condition, and terminate if it is satisfied; for a population that does not satisfy the termination condition, use the DE-variant operator and the T operator selected by the reinforcement-learning (RL) controller to produce new values, and cross and mutate them with the values of their neighborhoods to produce new solutions;
Step 5: compare each new solution with the corresponding solution of the original population, select the solution that gives the subproblem function its optimal value, and use it to update the population;
Step 6: using the generated new population, compute a new 5-dimensional observation vector and the return value R, update the state of the RL controller accordingly, and check the termination condition; if it is not satisfied, continue iterating until the termination condition is met, then terminate.
2. The multi-objective evolutionary algorithm based on reinforcement learning according to claim 1, characterized in that step 1 specifically comprises the following steps:
Step 1.1: compute the Euclidean distance between any two weight vectors and find, for each weight vector, the T weight vectors closest to it, where T is the number of weight vectors in each neighborhood; for each i = 1, ..., N, let B(i) = {i_1, ..., i_T}, where λ^{i_1}, ..., λ^{i_T} are the T weight vectors closest to λ^i;
Step 1.2: establish an external population EP for storing the non-dominated solutions found while searching for the optimal solution, and initialize EP as empty;
Step 1.3: generate, uniformly at random from the search space, solutions that optimize the objective function F(x) = (f_1(x), f_2(x), ..., f_m(x)) as the initial population, where i = 1, 2, ..., m, X is the set of decision vectors and x is the decision variable;
Step 1.4: using the Chebyshev method, decompose the objective function F(x) into N scalar subproblems of the form

g^{te}(x \mid \lambda^{j}, z^{*}) = \max_{1 \le i \le m} \lambda_{i}^{j} \, \lvert f_i(x) - z_i^{*} \rvert, \quad j = 1, \ldots, N,

where the neighborhood relation of the i-th subproblem is represented by the weight vector λ^i among all subproblems, and Z* is the best objective vector that can currently be found, also called the approximate solution, Z* = min{(f_1(x), f_2(x), ..., f_m(x))}.
3. The multi-objective evolutionary algorithm based on reinforcement learning according to claim 2, characterized in that in step 4 the produced value and the values of its neighborhood are subjected to the following operations to produce a new solution:
Step 4.1, selection: randomly select two indices h and k from B(i) and use the genetic operator to produce a new value from x^h and x^k, where x^h is the current best solution of the h-th subproblem and x^k is the current best solution of the k-th subproblem; compare the produced value with the values of its neighborhood, perform survival-of-the-fittest elimination, and keep the values with the highest fitness to pass on to the next generation;
Step 4.2, crossover: pair up individuals in the population and perform gene crossover to produce new individuals;
Step 4.3, mutation: apply a low-probability mutation operation to the gene values.
4. The multi-objective evolutionary algorithm based on reinforcement learning according to claim 3, characterized in that in step 6 the return value R is obtained from the following formula:
R = \sum_{i=1}^{N} \frac{ g^{te}(x_{t-1}^{i} \mid \lambda^{i}, z^{*}) - g^{te}(x_{t}^{i} \mid \lambda^{i}, z^{*}) }{ g^{te}(x_{t-1}^{i} \mid \lambda^{i}, z^{*}) }
CN201711279238.2A 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on intensified learning Pending CN108038538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711279238.2A CN108038538A (en) 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on intensified learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711279238.2A CN108038538A (en) 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on intensified learning

Publications (1)

Publication Number Publication Date
CN108038538A true CN108038538A (en) 2018-05-15

Family

ID=62095661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711279238.2A Pending CN108038538A (en) 2017-12-06 2017-12-06 Multi-objective Evolutionary Algorithm based on intensified learning

Country Status (1)

Country Link
CN (1) CN108038538A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830370A (en) * 2018-05-24 2018-11-16 东北大学 Based on the feature selection approach for enhancing learning-oriented flora foraging algorithm
CN108830370B (en) * 2018-05-24 2020-11-10 东北大学 Feature selection method based on reinforced learning type flora foraging algorithm
CN108805268A (en) * 2018-06-08 2018-11-13 中国科学技术大学 Deeply learning strategy network training method based on evolution algorithm
CN111045325A (en) * 2018-10-11 2020-04-21 富士通株式会社 Optimization device and control method of optimization device
CN110174118A (en) * 2019-05-29 2019-08-27 北京洛必德科技有限公司 Robot multiple-objective search-path layout method and apparatus based on intensified learning
CN110704959A (en) * 2019-08-19 2020-01-17 南昌航空大学 MOEAD (Metal oxide optical insulator deposition) optimization fixture layout method and device based on migration behavior
CN110704959B (en) * 2019-08-19 2022-04-08 南昌航空大学 MOEAD (Metal oxide optical insulator deposition) optimization fixture layout method and device based on migration behavior
CN110782016A (en) * 2019-10-25 2020-02-11 北京百度网讯科技有限公司 Method and apparatus for optimizing neural network architecture search
TWI741760B (en) * 2020-08-27 2021-10-01 財團法人工業技術研究院 Learning based resource allocation method, learning based resource allocation system and user interface

Similar Documents

Publication Publication Date Title
CN108038538A (en) Multi-objective Evolutionary Algorithm based on intensified learning
Reagen et al. A case for efficient accelerator design space exploration via bayesian optimization
Oduguwa et al. Bi-level optimisation using genetic algorithm
CN109932903A (en) The air-blower control Multipurpose Optimal Method of more parent optimization networks and genetic algorithm
Yu et al. Evolutionary fuzzy neural networks for hybrid financial prediction
CN109214449A (en) A kind of electric grid investment needing forecasting method
He et al. Optimising the job-shop scheduling problem using a multi-objective Jaya algorithm
CN105808426A (en) Path coverage test data generation method used for weak mutation test
Sato et al. Variable space diversity, crossover and mutation in MOEA solving many-objective knapsack problems
CN104616062A (en) Nonlinear system recognizing method based on multi-target genetic programming
CN110163743A (en) A kind of credit-graded approach based on hyperparameter optimization
CN105117326A (en) Test case set generation method based on combination chaotic sequence
CN110111606A (en) A kind of vessel traffic flow prediction technique based on EEMD-IAGA-BP neural network
CN108563875A (en) Analog circuit measuring point and frequency based on multiple-objection optimization combine preferred method
CN109886448A (en) Using learning rate changing BP neural network and the heat pump multiobjective optimization control method of NSGA-II algorithm
Huang et al. Multi-objective multi-generation Gaussian process optimizer for design optimization
CN103473465B (en) Land resource spatial configuration optimal method based on multiple target artificial immune system
Guo et al. Hybridizing cellular automata principles and NSGAII for multi-objective design of urban water networks
Sun et al. Solving interval multi-objective optimization problems using evolutionary algorithms with preference polyhedron
Zheng et al. Data-driven optimization based on random forest surrogate
CN112132259B (en) Neural network model input parameter dimension reduction method and computer readable storage medium
CN114004065A (en) Transformer substation engineering multi-objective optimization method based on intelligent algorithm and environmental constraints
Roeva et al. Generalized net model of selection operator of genetic algorithms
Zhou et al. Approximation model guided selection for evolutionary multiobjective optimization
Yasin et al. Optimal least squares support vector machines parameter selection in predicting the output of distributed generation

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2018-05-15)