CN113339499A - Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm - Google Patents

Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm

Info

Publication number
CN113339499A
CN113339499A (application CN202110760756.6A; granted publication CN113339499B)
Authority
CN
China
Prior art keywords
value
gear
gear shifting
action
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110760756.6A
Other languages
Chinese (zh)
Other versions
CN113339499B (en)
Inventor
张坤
郭洪强
崔庆虎
赵峰睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaocheng University
Original Assignee
Liaocheng University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaocheng University filed Critical Liaocheng University
Priority to CN202110760756.6A
Publication of CN113339499A
Application granted
Publication of CN113339499B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16H GEARING
    • F16H61/00 Control functions within control units of change-speed- or reversing-gearings for conveying rotary motion; control of exclusively fluid gearing, friction gearing, gearings with endless flexible members or other particular types of gearing
    • F16H61/02 … characterised by the signals used
    • F16H61/0202 … the signals being electric
    • F16H61/0204 … for gearshift control, e.g. control functions for performing shifting or generation of shift signal
    • F16H61/0213 … characterised by the method for generating shift signals
    • F16H2061/0075 … characterised by a particular control method
    • F16H2061/0087 Adaptive control, e.g. the control parameters adapted by learning
    • F16H2061/009 … using formulas or mathematic relations for calculating parameters
    • F16H2061/0223 Generating of new shift maps, i.e. methods for determining shift points for a schedule by taking into account driveline and vehicle conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Transmission Device (AREA)

Abstract

The invention discloses an intelligent gear shifting rule control method based on the Q-Learning reinforcement learning algorithm. The method controls upshifts and downshifts through reinforcement learning so that, after repeated training, the AMT (automated mechanical transmission) always operates in the gear with the best energy consumption and economy. It solves the problem that adaptive gear shifting control cannot be realized under dynamic, unknown working conditions, further reduces energy consumption, and improves the dynamic performance and economy of the whole vehicle.

Description

Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm
Technical Field
The invention relates to the field of AMT intelligent control, in particular to an intelligent gear shifting rule control method based on a Q-Learning reinforcement Learning algorithm.
Background
Extensive research shows that the shift schedule has a major influence on the dynamic performance and economy of the whole vehicle. Because its parameters are fixed, a traditional rule-based shift schedule cannot adapt to dynamic, uncertain traffic environments, nor can it realize dynamic adaptive gear shifting control for different road traffic conditions, so the dynamic performance and economy of the whole vehicle are difficult to bring to their optimum. An intelligent gear shifting rule control method based on reinforcement learning is therefore proposed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an intelligent gear shifting rule control method based on the Q-Learning reinforcement Learning algorithm that is suited to dynamic adaptive gear shifting control under different road traffic conditions, so that the dynamic performance and economy of the whole vehicle approach their optimum.
In order to solve the technical problems, the invention adopts the following technical means:
an intelligent gear shifting rule control method based on a Q-Learning reinforcement Learning algorithm comprises the following steps:
P1: setting the reinforcement learning action set A = {a1, a2, …, at} and state set S = {s1, s2, …, st}; for the action set, the action value a(t) at time t denotes the target gear selected at time t, i.e. a(t) ∈ {1, 2, 3, 4}; for the state set, the state s(t) at time t comprises the vehicle speed, acceleration, and gear at time t, i.e. s(t) = {v(t), acc(t), gear(t)};
P2: discretizing v and acc in the state set into 100 intervals each, with gear taking 4 values, so the state set contains 100 × 100 × 4 = 40000 state variables in total; then reducing the state space through the optimal Latin hypercube design;
P3: initializing the Q table constructed from the state set S and the action set A as an all-zero matrix;
P4: selecting an action value a(t) by the epsilon-greedy strategy according to the state s(t) at time t in the environment;
P5: performing the gear shift given by the selected action value a(t), calculating the corresponding gear and motor efficiency, and obtaining the reward value r(s(t), a(t)) for the action value a(t) selected at time t;
P6: updating the Q table according to the Q-learning formula, after which the environment transitions to the next state; the update formula is:
Q(s(t), a(t)) ← Q(s(t), a(t)) + η[r(s(t), a(t)) + γ·max_a Q(s(t+1), a) − Q(s(t), a(t))]

where η is the learning rate, 0 < η < 1, and γ is the discount factor, 0 < γ < 1;
P7: repeating P4 to P6 until the current episode ends, which completes one update pass of the Q table.
The further preferred technical scheme is as follows:
in step P2, based on the memory space limitation of the controller, the number of state variables is reduced from 40000 to 1000 by implementing the state space reduction through the optimal latin hypercube design, which is beneficial to the practical application of the algorithm to the whole vehicle controller.
In step P4, an epsilon-greedy strategy is used to select actions, with the exploration factor adjusted continually as training iterations accumulate, so that the closer training is to its end, the smaller the exploration probability and the larger the probability of selecting from the learned Q table.
In step P5, after an action value a(t) is selected through the epsilon-greedy strategy, it is used as the gear action to obtain the target gear of the next second and the motor efficiency value at the next moment, and the reward value r(s(t), a(t)) at time t is obtained, defined as:

r(s(t), a(t)) = η_m(t+1) − η_m(t)

where η_m(t+1) is the motor efficiency at the next moment, after the shift, and η_m(t) is the motor efficiency in the current gear before the shift; if the shift raises the motor efficiency a reward is given, otherwise a penalty is given.
Drawings
FIG. 1 is a schematic diagram of the effect of the optimal Latin hypercube algorithm.
Fig. 2 is a diagram of a reinforcement learning intelligent gear shift schematic.
FIG. 3 is a Q-learning reinforcement learning framework diagram.
Detailed Description
The present invention will be further described with reference to the following examples.
Referring to fig. 3, the intelligent gear shifting law control method based on the Q-Learning reinforcement Learning algorithm of the present invention includes the following steps:
P1: setting the reinforcement learning action set A = {a1, a2, …, at} and state set S = {s1, s2, …, st}; for the action set, the action value a(t) at time t denotes the target gear selected at time t, i.e. a(t) ∈ {1, 2, 3, 4}; for the state set, the state s(t) at time t comprises the vehicle speed, acceleration, and gear at time t, i.e. s(t) = {v(t), acc(t), gear(t)};
P2: discretizing v and acc in the state set into 100 intervals each, with gear taking 4 values, so the state set contains 100 × 100 × 4 = 40000 state variables in total; then reducing the state space through the optimal Latin hypercube design; as shown in fig. 1, the optimal Latin hypercube design is introduced for the state reduction, with a distance criterion used as the optimization criterion;
P3: initializing the Q table constructed from the state set S and the action set A as an all-zero matrix;
P4: selecting an action value a(t) by the epsilon-greedy strategy according to the state s(t) at time t in the environment;
P5: performing the gear shift given by the selected action value a(t), calculating the corresponding gear and motor efficiency, and obtaining the reward value r(s(t), a(t)) for the action value a(t) selected at time t;
P6: updating the Q table according to the Q-learning formula, after which the environment transitions to the next state; the update formula is:
Q(s(t), a(t)) ← Q(s(t), a(t)) + η[r(s(t), a(t)) + γ·max_a Q(s(t+1), a) − Q(s(t), a(t))]

where η is the learning rate, 0 < η < 1, and γ is the discount factor, 0 < γ < 1;
P7: repeating P4 to P6 until the current episode ends, which completes one update pass of the Q table. A minimal code sketch of this training loop follows.
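For illustration only, here is a minimal Python sketch of the P1–P7 loop. The environment object `env`, its `reset()`/`step()` interface, the integer state index produced after the Latin-hypercube reduction, and the particular values of η and γ are all assumptions for the sketch; the patent itself prescribes only the update rule, the action set, and the table dimensions.

```python
import numpy as np

# Dimensions taken from the patent: 1000 reduced states, 4 gear actions.
N_STATES, N_ACTIONS = 1000, 4
ETA, GAMMA = 0.1, 0.9  # learning rate, discount factor; patent only requires 0 < eta, gamma < 1

Q = np.zeros((N_STATES, N_ACTIONS))  # P3: Q table initialized as an all-zero matrix


def epsilon_greedy(state, epsilon):
    """P4: explore a random gear with probability epsilon, otherwise exploit the Q table."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))


def train_episode(env, epsilon):
    """One pass of P4-P7 over a driving cycle.

    `env` is a hypothetical stand-in for the vehicle model, assumed to expose
    reset() -> state and step(action) -> (next_state, reward, done), where the
    reward is the motor-efficiency return r(s(t), a(t)) of step P5.
    """
    state = env.reset()
    done = False
    while not done:
        action = epsilon_greedy(state, epsilon)      # P4: choose target gear a(t)
        next_state, reward, done = env.step(action)  # P5: shift and observe reward
        # P6: Q-learning update toward r + gamma * max_a Q(s', a)
        Q[state, action] += ETA * (reward + GAMMA * np.max(Q[next_state]) - Q[state, action])
        state = next_state                           # environment moves to the next state
```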
In step P2, given the memory space limitation of the controller, the optimal Latin hypercube design reduces the number of state variables from 40000 to 1000, which benefits the practical application of the algorithm in the whole vehicle controller.
As shown in fig. 1, the optimal Latin hypercube design (Opt LHD) effectively improves sampling uniformity and stability and makes the fit between the factors and the response more accurate.
The computational principle of Opt LHD can be explained as follows: assume an n × m matrix, where n is the number of test points and m the number of test factors. The row matrix X_i^T = [x_i1, x_i2, …, x_im] represents a single test point, and each column represents a test factor. Opt LHD can judge optimality by a distance criterion, an entropy criterion, or a deviation criterion; this patent adopts the distance criterion.
The distance criterion is a maximin criterion: among candidate designs, choose the one that maximizes the minimum pairwise distance,

max min_{1≤i<j≤n} d(x_i, x_j),

with the distance between two sample points defined as

d(x_i, x_j) = ( Σ_{k=1}^{m} |x_ik − x_jk|^q )^{1/q}, q = 1 or 2,

where d(x_i, x_j) denotes the distance between sample points x_i and x_j. Applying this criterion, the number of state variables is reduced from 40000 to 1000, realizing the state-space reduction. A code sketch of this selection follows.
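For illustration, a crude random-search sketch of picking a maximin Latin hypercube design in Python. A production Opt LHD optimizer would search far more efficiently (e.g. with column-swap heuristics); the candidate count, the q = 2 Euclidean choice, and the three normalized dimensions (v, acc, gear) are assumptions for the sketch.

```python
import numpy as np

def latin_hypercube(n, m, rng):
    """One random Latin hypercube: n points in m dimensions, one point per stratum."""
    strata = np.array([rng.permutation(n) for _ in range(m)]).T  # (n, m) cell indices
    return (strata + rng.random((n, m))) / n                     # jitter within each cell

def min_pairwise_distance(x):
    """Smallest d(x_i, x_j) over all point pairs (Euclidean case, q = 2)."""
    diff = x[:, None, :] - x[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)  # ignore zero self-distances
    return d.min()

def maximin_lhd(n=1000, m=3, candidates=20, seed=0):
    """Keep the candidate design that maximizes the minimum pairwise distance."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(candidates):
        x = latin_hypercube(n, m, rng)
        score = min_pairwise_distance(x)
        if score > best_score:
            best, best_score = x, score
    return best

# 1000 design points over the normalized (v, acc, gear) space; in practice each
# point would then be snapped to the discrete grid of step P2.
design = maximin_lhd()
```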
in step P4, an epsilon-greedy strategy is used to select actions, and the search factor is adjusted continuously according to the iteration of the training times, so that the closer the training times are to the end, the smaller the search probability and the larger the probability of selecting from the learned Q-table. Different from a random strategy and a greedy strategy, the method not only avoids iteration times increase caused by repeated action selection possibly generated by the random strategy, but also avoids the situation that the greedy strategy is locally optimal and lacks exploratory property.
In step P5, after an action value a(t) is selected through the epsilon-greedy strategy, it is used as the gear action to obtain the target gear of the next second and the motor efficiency value at the next moment, and the reward value r(s(t), a(t)) at time t is obtained, defined as:

r(s(t), a(t)) = η_m(t+1) − η_m(t)

where η_m(t+1) is the motor efficiency at the next moment, after the shift, and η_m(t) is the motor efficiency in the current gear before the shift; if the shift raises the motor efficiency a reward is given, otherwise a penalty is given. A sketch of this return computation follows.
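For illustration, a hedged sketch of computing this return from a motor-efficiency map. The `eff_map` lookup and the loss-free kinematic mapping from wheel operating point to motor operating point are hypothetical simplifications, not the patent's prescribed vehicle model.

```python
def shift_reward(eff_map, wheel_speed, torque_demand, ratio_now, ratio_next):
    """Return r(s(t), a(t)) as the change in motor efficiency across the shift.

    `eff_map(motor_speed, motor_torque)` is a hypothetical interpolated
    motor-efficiency map; driveline losses are ignored for simplicity.
    """
    eff_now = eff_map(wheel_speed * ratio_now, torque_demand / ratio_now)
    eff_next = eff_map(wheel_speed * ratio_next, torque_demand / ratio_next)
    return eff_next - eff_now  # positive: reward for a more efficient gear; negative: penalty
```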
The invention has the advantages that:
(1) Applying the optimal Latin hypercube design effectively improves sampling uniformity and stability and makes the fit between the factors and the response more accurate.
(2) Selecting the action value a(t) through the epsilon-greedy strategy avoids both the extra iterations caused by the repeated action choices a random strategy can produce and the local optimality and lack of exploration of a greedy strategy.
(3) Using the action value a(t) as the gear action yields the target gear for the next second and the motor efficiency value at the next moment, along with the reward value r(s(t), a(t)) at time t; this further reduces energy consumption and benefits energy saving and environmental protection.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the scope of the present invention, which is defined in the appended claims.

Claims (4)

1. An intelligent gear shifting rule control method based on a Q-Learning reinforcement Learning algorithm is characterized by comprising the following steps of:
P1: setting the reinforcement learning action set A = {a1, a2, …, at} and state set S = {s1, s2, …, st}; for the action set, the action value a(t) at time t denotes the target gear selected at time t, i.e. a(t) ∈ {1, 2, 3, 4}; for the state set, the state s(t) at time t comprises the vehicle speed, acceleration, and gear at time t, i.e. s(t) = {v(t), acc(t), gear(t)};
P2: discretizing v and acc in the state set into 100 intervals each, with gear taking 4 values, so the state set contains 100 × 100 × 4 = 40000 state variables in total; then reducing the state space through the optimal Latin hypercube design;
P3: initializing the Q table constructed from the state set S and the action set A as an all-zero matrix;
P4: selecting an action value a(t) by the epsilon-greedy strategy according to the state s(t) at time t in the environment;
P5: performing the gear shift given by the selected action value a(t), calculating the corresponding gear and motor efficiency, and obtaining the reward value r(s(t), a(t)) for the action value a(t) selected at time t;
P6: updating the Q table according to the Q-learning formula, after which the environment transitions to the next state; the update formula is:
Q(s(t), a(t)) ← Q(s(t), a(t)) + η[r(s(t), a(t)) + γ·max_a Q(s(t+1), a) − Q(s(t), a(t))]

wherein η represents the learning rate, 0 < η < 1, and γ represents the discount factor, 0 < γ < 1;
P7: repeating P4 to P6 until the current episode ends, which completes one update pass of the Q table.
2. The intelligent gear shift schedule control method based on the Q-Learning reinforcement Learning algorithm as claimed in claim 1, wherein: in step P2, given the memory space limitation of the controller, the state-space reduction through the optimal Latin hypercube design reduces the number of state variables from 40000 to 1000, which benefits the practical application of the algorithm in the whole vehicle controller.
3. The intelligent gear shift schedule control method based on the Q-Learning reinforcement Learning algorithm as claimed in claim 1, wherein: in step P4, an epsilon-greedy strategy is used to select actions, with the exploration factor adjusted continually as training iterations accumulate, so that the closer training is to its end, the smaller the exploration probability and the larger the probability of selecting from the learned Q table.
4. The intelligent gear shift schedule control method based on the Q-Learning reinforcement Learning algorithm as claimed in claim 1, wherein: in step P5, after an action value a(t) is selected through the epsilon-greedy strategy, it is used as the gear action to obtain the target gear of the next second and the motor efficiency value at the next moment, and the reward value r(s(t), a(t)) at time t is obtained, defined as:

r(s(t), a(t)) = η_m(t+1) − η_m(t)

where η_m(t+1) is the motor efficiency at the next moment, after the shift, and η_m(t) is the motor efficiency in the current gear before the shift; if the shift raises the motor efficiency a reward is given, otherwise a penalty is given.
CN202110760756.6A 2021-07-04 2021-07-04 Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm Active CN113339499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110760756.6A CN113339499B (en) 2021-07-04 2021-07-04 Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110760756.6A CN113339499B (en) 2021-07-04 2021-07-04 Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm

Publications (2)

Publication Number Publication Date
CN113339499A 2021-09-03
CN113339499B 2022-05-10

Family

ID=77482562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110760756.6A Active CN113339499B (en) 2021-07-04 2021-07-04 Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm

Country Status (1)

Country Link
CN (1) CN113339499B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008095620A1 (en) * 2007-02-06 2008-08-14 Fatec Fahrzeugtechnik Gmbh Method for optimizing an electronically controlled automatic transmission for a motor vehicle
US20190291736A1 (en) * 2018-03-26 2019-09-26 Hyundai Motor Company Integrated control method and integrated controller of powertrain
CN110985651A (en) * 2019-12-04 2020-04-10 北京理工大学 Automatic transmission multi-parameter fusion gear shifting strategy based on prediction
CN112429005A (en) * 2020-12-02 2021-03-02 西华大学 Pure electric vehicle personalized gear shifting rule optimization method considering transmission efficiency and application


Also Published As

Publication number Publication date
CN113339499B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN111267831B (en) Intelligent time-domain-variable model prediction energy management method for hybrid electric vehicle
CN110378439B (en) Single robot path planning method based on Q-Learning algorithm
CN111284489A (en) Intelligent networked automobile random prediction cruise control system
CN107085372A (en) A kind of sewage energy-efficient treatment optimal control method based on improvement glowworm swarm algorithm and least square method supporting vector machine
US20220242390A1 (en) Energy management method and system for hybrid electric vehicle
CN109978025B (en) Intelligent internet vehicle front vehicle acceleration prediction method based on Gaussian process regression
CN108730047B (en) Method and device for generating engine target torque map
CN111597750A (en) Hybrid electric vehicle energy management method based on BP neural network
JP2010134863A (en) Control input determination means of control object
CN110488759A (en) A kind of numerically-controlled machine tool feeding control compensation methods based on Actor-Critic algorithm
CN114103924A (en) Energy management control method and device for hybrid vehicle
CN115793445B (en) Hybrid electric vehicle control method based on multi-agent deep reinforcement learning
CN112277927A (en) Hybrid electric vehicle energy management method based on reinforcement learning
CN113339499B (en) Intelligent gear shifting rule control method based on Q-Learning reinforcement Learning algorithm
CN115534929A (en) Plug-in hybrid electric vehicle energy management method based on multi-information fusion
CN104616072A (en) Method for improving concentration of glutamic acid fermented product based on interval optimization
CN108407797B (en) Method for realizing automatic gear shifting of agricultural machinery based on deep learning
CN113479187B (en) Layered different-step-length energy management method for plug-in hybrid electric vehicle
CN114872711A (en) Driving planning method, system, device and medium based on intelligent networked vehicle
CN110469661B (en) CVT efficiency-based dynamic speed ratio optimization method and system
CN112084700A (en) Hybrid power system energy management method based on A3C algorithm
CN114219274A (en) Workshop scheduling method adapting to machine state based on deep reinforcement learning
CN112100909B (en) Parallel configurable intelligent optimization method based on collaborative optimization strategy
Qin et al. Electric vehicle shift strategy based on model predictive control
CN113537620A (en) Vehicle speed prediction method based on Markov model optimization and working condition recognition

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant