CN111638465A - Lithium battery health state estimation method based on convolutional neural network and transfer learning - Google Patents


Info

Publication number
CN111638465A
CN111638465A (application CN202010475482.1A; granted as CN111638465B)
Authority
CN
China
Prior art keywords
layer
model
neural network
value
parameter
Prior art date
Legal status
Granted
Application number
CN202010475482.1A
Other languages
Chinese (zh)
Other versions
CN111638465B (en)
Inventor
陶吉利 (Tao Jili)
李央 (Li Yang)
马龙华 (Ma Longhua)
白杨 (Bai Yang)
乔志军 (Qiao Zhijun)
谢亮 (Xie Liang)
Current Assignee
Zhejiang University of Science and Technology ZUST
Original Assignee
Zhejiang University of Science and Technology ZUST
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Science and Technology ZUST
Priority to CN202010475482.1A
Publication of CN111638465A
Application granted
Publication of CN111638465B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00 Arrangements for testing electric properties; arrangements for locating electric faults; arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/36 Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R 31/392 Determining battery ageing or deterioration, e.g. state of health
    • G01R 31/367 Software therefor, e.g. for battery testing using modelling or look-up tables
    • G01R 31/382 Arrangements for monitoring battery or accumulator variables, e.g. SoC
    • G01R 31/3842 Arrangements for monitoring battery or accumulator variables, e.g. SoC combining voltage and current measurements
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

The invention discloses a lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning. Using transfer learning, a base model is pre-trained offline with the complete cycle data of an accelerated aging experiment and the final 7.5% of cycle data from the life cycle of a retired battery; the parameters of the base model are then fine-tuned with only the first 15% of normal-rate aging cycle data of a new battery, so that the state of health of the battery can be estimated online at any time. Because the accelerated aging experiment greatly shortens the life of the battery, the final portion of cycle data of a retired battery is easy to obtain, and the first 15% of cycle data of a new battery is also easy to obtain, a large amount of training-data collection time is saved; in addition, the size of the model input is reduced, making the computation faster.

Description

Lithium battery health state estimation method based on convolutional neural network and transfer learning
Technical Field
The invention belongs to the technical field of automation, and relates to a lithium battery health state estimation method based on a convolutional neural network and transfer learning.
Background
Existing lithium battery state-of-health estimation methods fall into two main categories: model-based and data-driven. Model-based approaches place high demands on knowledge of the complex physical mechanisms inside the battery. Data-driven methods mainly extract important features manually from the battery's raw voltage, current, capacity and other data and use them as inputs to traditional machine learning models. A typical method converts the voltage plateau on the original charge-discharge voltage-capacity curve, which reflects the first-order phase change of the battery, into a ΔQ/ΔV peak that can be clearly identified on the incremental capacity curve, and then extracts features such as the ΔQ/ΔV peak and the corresponding voltage value as model inputs; research has shown that the physical changes of battery aging are correspondingly reflected in the voltage-capacity curve.
In recent years, deep learning has been applied in many fields; it can automatically extract important features from large amounts of data, overcoming the drawbacks of traditional manual feature extraction in machine learning, namely the loss of important information and a heavy workload. S. Shen et al., in "A deep learning method for online capacity estimation of lithium-ion batteries" (Journal of Energy Storage, vol. 25, p. 100817, 2019), first introduced deep learning into the study of battery state-of-health estimation; however, that method is based on 10 years of cycling experimental data, and collecting such data is rather time-consuming. The present invention aims to improve the accuracy of the state-of-health estimation model and to overcome the past over-reliance on large amounts of data.
Disclosure of Invention
In order to overcome defects of the prior art, such as the huge amount of data required to build a model, a method based on a convolutional neural network and transfer learning is provided. The transfer learning pre-trains a base model offline using the complete cycle data of an accelerated aging experiment and the final 7.5% of cycle data from the life cycle of a retired battery, and then fine-tunes the parameters of the base model using only the first 15% of normal-rate aging cycle data of a new battery, so that the state of health of the battery can be estimated at any time. Because the accelerated aging experiment greatly shortens the life of the battery, the final portion of cycle data of a retired battery is easy to obtain, and the first 15% of cycle data of a new battery is also easy to obtain; therefore a large amount of training-data collection time is saved, the size of the model input is reduced, and the computation is faster.
The technical scheme of the invention is a lithium battery state-of-health estimation method based on a convolutional neural network and transfer learning, established through data acquisition, model construction, fine-tuning and other means. The method can effectively improve the accuracy of battery state-of-health estimation.
The specific technical scheme of the invention is as follows:
the lithium battery health state estimation method based on the convolutional neural network and the transfer learning comprises the following steps:
s1: the method for acquiring the input data of the convolutional neural network comprises the following steps:
s11: selecting several brand-new lithium batteries of different models and performing an accelerated aging experiment on each to collect cycle data, continuously consuming the battery capacity through cycles of constant-current charging, constant-voltage charging and constant-current discharging until the state of health drops below 80%;
meanwhile, obtaining retired lithium batteries of the same models that are close to the end of their service life and performing a normal-rate aging experiment on each to collect cycle data, consuming the battery capacity through the constant-current charging, constant-voltage charging and constant-current discharging process until the state of health drops below 80%;
obtaining brand-new lithium batteries of the same models and performing a normal-rate aging experiment on each to collect cycle data, running charge-discharge cycles through the constant-current charging, constant-voltage charging and constant-current discharging process and collecting the first 15% of cycle data of the battery life;
s12: calculating the battery capacity from the voltage and current values of the constant-current charging stage collected in the different aging experiments of S11, and forming a matrix from the voltage, current and capacity values as the input data of the convolutional neural network;
s2: constructing a convolutional neural network model, the whole network comprising convolutional layers, pooling layers and fully connected layers, with a rectified linear unit selected as the activation function connected to the output of each convolutional layer and each pooling layer;
s3: pre-training the model constructed in S2, wherein the specific method is S31-S32:
s31: dividing the input data obtained from the accelerated aging experiment on brand-new lithium batteries in S1 into several mini-batches of training samples, feeding them batch by batch into the neural network constructed in S2, updating the parameters by stochastic gradient descent during iterative learning to obtain a first pre-trained model, and saving the parameter values of the first pre-trained model, including the convolution kernel values $k_{a,b,c,k}$, the biases $b_k$, and the fully connected layer weights $W^l$ and biases $b^l$;
S32: dividing the input data obtained from the normal-rate aging experiment on retired lithium batteries in S1 into several mini-batches of training samples, feeding them batch by batch into the network trained in S31, performing iterative learning starting from the parameter values saved in the first pre-trained model, further adjusting the parameters by stochastic gradient descent to obtain a second pre-trained model, and saving the parameter values of the second pre-trained model, including the new convolution kernel values $k'_{a,b,c,k}$ and biases $b'_k$ and the fully connected layer weights $W'^l$ and biases $b'^l$;
The forward propagation and parameter updates at this stage are:

$$C'_{i,j,k} = f\left(\sum_{a=1}^{h_k}\sum_{b=1}^{w_k}\sum_{c=1}^{c_k} k'_{a,b,c,k}\,x_{i',j',c} + b'_k\right) \quad (16)$$

$$a'^{\,l} = f(z'^{\,l}) = f(W'^{\,l} a'^{\,l-1} + b'^{\,l}) \quad (17)$$

$$\theta'_{j+1} = \theta'_j + \gamma\,\Delta\theta'_j - \frac{\alpha}{m}\sum_{i=1}^{m}\frac{\partial}{\partial\theta'_j}\left(\hat{y}_i(x)_j - y_i(x)_j\right)^2 \quad (18)$$

wherein: the model internal parameters $\theta'_j$ comprise the convolution kernel values $k'_{a,b,c,k}$ and biases $b'_k$ and the fully connected layer weights $W'^l$ and biases $b'^l$; the prime superscript on a parameter in equations (16)-(18) denotes the forward-propagation update value of that parameter in the pre-training stage;
s4: dividing the input data obtained from the normal-rate aging experiment on brand-new lithium batteries in S1 into several mini-batches of training samples, feeding them batch by batch into the pre-trained model obtained in S3 for iterative learning, and keeping the convolutional layer parameters of the pre-trained model fixed during iterative learning, i.e. keeping $k'_{a,b,c,k}$ and $b'_k$ unchanged while updating only the fully connected layer weights $W'^l$ and biases $b'^l$ to $W''^l$ and $b''^l$; saving the updated parameters yields the final estimation model;
the forward propagation and parameter updates at this stage are:

$$C''_{i,j,k} = f\left(\sum_{a=1}^{h_k}\sum_{b=1}^{w_k}\sum_{c=1}^{c_k} k'_{a,b,c,k}\,x_{i',j',c} + b'_k\right) \quad (19)$$

$$a''^{\,l} = f(z''^{\,l}) = f(W''^{\,l} a''^{\,l-1} + b''^{\,l}) \quad (20)$$

$$\theta''_{j+1} = \theta''_j + \gamma\,\Delta\theta''_j - \frac{\alpha}{m}\sum_{i=1}^{m}\frac{\partial}{\partial\theta''_j}\left(\hat{y}_i(x)_j - y_i(x)_j\right)^2 \quad (21)$$

wherein: the model internal parameters $\theta''_j$ comprise only the fully connected layer weights $W''^l$ and biases $b''^l$; the double-prime superscript on a parameter in equations (19)-(21) denotes the forward-propagation update value of that parameter in the fine-tuning stage;
s5: performing one constant-current charging experiment on the lithium battery to be estimated to obtain its voltage, current and capacity test values; the matrix formed from these three is used as the input X of the estimation model obtained in S4, and the parameters $k'_{a,b,c,k}$, $b'_k$, $W''^l$ and $b''^l$ saved in S4 are used in the forward-propagation computation of the network:

$$C'''_{i,j,k} = f\left(\sum_{a=1}^{h_k}\sum_{b=1}^{w_k}\sum_{c=1}^{c_k} k'_{a,b,c,k}\,x_{i',j',c} + b'_k\right) \quad (22)$$

$$a'''^{\,l} = f(z'''^{\,l}) = f(W''^{\,l} a'''^{\,l-1} + b''^{\,l}) \quad (23)$$

the triple-prime superscript on a quantity in equations (22)-(23) denotes the forward-propagation value of that quantity in the estimation stage;
finally, the estimation model outputs the state of health of the battery at that moment.
Preferably, in S1, the accelerated aging experiment refers to overcharging and overdischarging the battery, that is, setting a higher constant-current charging voltage upper limit and a lower constant-current discharging voltage lower limit.
Preferably, in S1, the normal-rate aging experiment on retired lithium batteries performs 35 to 40 charge-discharge cycles.
Preferably, in S1, the normal-rate aging experiment on brand-new lithium batteries performs 75 charge-discharge cycles.
Preferably, in S1, the data collected in the different aging experiments are each constructed into a model input X:

$$X = \begin{bmatrix} V_1 & I_1 & C_1 \\ V_2 & I_2 & C_2 \\ \vdots & \vdots & \vdots \\ V_k & I_k & C_k \end{bmatrix} \quad (1)$$

wherein: k is the number of sampling points in the constant-current charging stage, and $V_i$, $I_i$, $C_i$ are the voltage, current and capacity values at the i-th sampling point.
Preferably, in the convolutional neural network model, the forward propagation of a convolutional layer is computed as:

$$C_{i,j,k} = f\left(\sum_{a=1}^{h_k}\sum_{b=1}^{w_k}\sum_{c=1}^{c_k} k_{a,b,c,k}\,x_{i',j',c} + b_k\right) \quad (2)$$

$$i' = (i-1)\times h_s + a \quad (3)$$

$$j' = (j-1)\times w_s + b \quad (4)$$

where k indexes the convolution kernels in the convolutional layer, i.e. the channels of the output matrix, $C_{i,j,k}$ is the value at row i, column j of the k-th channel of the output matrix, $b_k$ is the bias value, $h_k$, $w_k$ and $c_k$ are respectively the height, width and number of channels of the convolution kernel, $w_s$ and $h_s$ are the strides of the convolution kernel in the width and height directions when scanning the input matrix, $x_{i',j',c}$ is the value at row i', column j' of channel c of the input matrix, $k_{a,b,c,k}$ is the value at row a, column b of channel c of the k-th convolution kernel, and f is the activation function;
the dimensions of the convolutional layer output are computed as:

$$w_{out} = \frac{w_{in} - w_k + 2w_p}{w_s} + 1 \quad (5)$$

$$h_{out} = \frac{h_{in} - h_k + 2h_p}{h_s} + 1 \quad (6)$$

wherein $w_k$ and $h_k$ are the width and height of the convolution kernel, $w_s$ and $h_s$ are the strides of the convolution kernel in the width and height directions when scanning the input matrix, $w_{in}$ and $w_{out}$ are the widths of the input and output matrices, $h_{in}$ and $h_{out}$ are the heights of the input and output matrices, and $w_p$ and $h_p$ are the numbers of zero elements symmetrically padded on the left-right and top-bottom sides of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions are applied repeatedly;
the forward propagation of a max pooling layer is computed as:

$$M_{i,j,k} = \max_{0\le\sigma_1<e_1,\;0\le\sigma_2<e_2} x^{l-1}_{e_1 i+\sigma_1,\;e_2 j+\sigma_2,\;k} \quad (7)$$

equation (7) above expresses splitting the feature map into i×j regions of size $e_1\times e_2$ and performing one max pooling operation on the feature points of each $e_1\times e_2$ region; wherein $M_{i,j,k}$ is the value at row i, column j of the k-th channel of the pooling layer output, $x^{l-1}_{e_1 i+\sigma_1,\,e_2 j+\sigma_2,\,k}$ is the value at row $e_1 i+\sigma_1$, column $e_2 j+\sigma_2$ of the k-th channel of the previous convolutional layer, and $(e_1 i,\,e_2 j)$ are the upper-left corner coordinates of the corresponding $e_1\times e_2$ region;
the forward propagation of a fully connected layer is computed as:

$$a^l = f(z^l) = f(W^l a^{l-1} + b^l) \quad (8)$$

$$f(x) = \max(0, x) \quad (9)$$

where f(x) is the activation function, here the rectified linear unit of equation (9), $W^l$ and $b^l$ are respectively the weight and bias of the l-th layer, and $a^{l-1}$ is the input to the l-th layer;
the back propagation of a convolutional layer is:

$$\delta^{l-1} = \delta^l * \operatorname{rot180}(k^l) \odot f'(z^{l-1}) \quad (10)$$

where rot180 denotes rotating the convolution kernel by 180 degrees, and $\delta^l$ denotes the differential of the objective function with respect to the output of the l-th layer;
the objective function J for establishing the neural network is:
Figure BDA0002515692820000062
wherein
Figure BDA0002515692820000063
Is the output of the output layer, yi(x) Is the true label value, n is the number of samples, and λ is the regularization parameter.
The parameters $\theta_j$ inside the network, comprising the weights W and biases b, are updated according to the objective function (12) as follows:

$$\Delta\theta_{j+1} = \gamma\,\Delta\theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\frac{\partial}{\partial\theta_j}\left(\hat{y}_i(x)_j - y_i(x)_j\right)^2 \quad (13)$$

$$\theta_{j+1} = \theta_j + \Delta\theta_{j+1} \quad (14)$$

where m is the number of samples in a mini-batch, $\hat{y}_i(x)_j$ is the output value of the i-th input of the mini-batch at the j-th iteration, $y_i(x)_j$ is the corresponding true value, $\theta_j$ is the internal parameter at the j-th iteration, α is the learning rate, and γ is the momentum value.
Preferably, a dropout strategy is added to the convolutional neural network model to prevent overfitting: a certain proportion of the hidden neurons in the network are temporarily and randomly dropped while the input and output neurons are kept unchanged; the input is propagated forward through the modified network, and the resulting loss is then propagated backward through the modified network; after a mini-batch of training samples has completed this process, the parameters of the neurons that were not dropped are updated by stochastic gradient descent.
Preferably, the convolutional neural network model adopts a piecewise-constant decay strategy: the learning rate is adjusted during training rather than kept fixed, so that the model converges quickly.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention uses the convolutional neural network to automatically extract features from the voltage, current and capacity data, saving the manual extraction step and avoiding the loss of important information that manual feature extraction may cause.
(2) Because the accelerated aging experiment greatly shortens the life of the battery, the final portion of cycle data of a retired battery is easy to obtain, and the first 15% of cycle data of a new battery is also easy to obtain; the method thus saves a large amount of training-data collection time, reduces the size of the model input, and makes the computation faster.
(3) The model built in the accelerated mode can be quickly transferred to the normal-rate mode and generalizes well.
(4) The invention takes only the data of the constant-current charging stage as input, reducing the size of the model input and speeding up the computation.
(5) The invention improves the accuracy of online lithium battery state-of-health estimation.
Drawings
FIG. 1 is a schematic diagram of a convolutional neural network architecture;
FIG. 2 is a schematic diagram of the voltage, current, and capacity variations during a single charge-discharge cycle;
FIG. 3 is a schematic diagram of transfer learning;
FIG. 4 is a graph of the online estimation results of experiment 2 in Table 1, corresponding to a SONY US18650VTC6 cell;
FIG. 5 is a graph of the online estimation results of experiment 4 in Table 1, corresponding to an FST2000 cell.
Detailed Description
The invention will be further elucidated and described below with reference to the drawings and the detailed description. The technical features of the embodiments of the present invention can be combined with one another provided they do not conflict.
The invention provides a lithium battery health state estimation method based on a convolutional neural network and transfer learning, which comprises the following steps of:
s1: the method for acquiring the input data of the convolutional neural network comprises the following steps:
s11: selecting several brand-new lithium batteries of different models and performing an accelerated aging experiment on each to collect cycle data, continuously consuming the battery capacity through cycles of constant-current charging, constant-voltage charging and constant-current discharging until the state of health drops below 80%. Accelerated aging refers to overcharging and overdischarging the battery, that is, setting a higher constant-current charging voltage upper limit and a lower constant-current discharging voltage lower limit; the same applies below.
Meanwhile, retired lithium batteries of the same models that are close to the end of their service life are obtained, and a normal-rate aging experiment is performed on each to collect cycle data; the battery capacity is consumed through the constant-current charging, constant-voltage charging and constant-current discharging process for approximately 35-40 cycles until the state of health drops below 80%. Since the full life of a cell under normal aging conditions is approximately 500 cycles, 35-40 cycles is around 7.5% of the full life.
Brand-new lithium batteries of the same models are obtained, and a normal-rate aging experiment is performed on each to collect cycle data; 75 charge-discharge cycles are run through the constant-current charging, constant-voltage charging and constant-current discharging process, yielding the first 15% of cycle data of the battery life. Since the full life of a cell under normal aging conditions is approximately 500 cycles, 75 cycles is about 15% of the full life.
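The 80% end-of-life threshold above is stated in terms of state of health. For reference, a minimal sketch of the commonly used capacity-ratio definition of SOH; the patent does not restate this formula, so the definition here is an assumption:

```python
def state_of_health(current_capacity_ah, rated_capacity_ah):
    """State of health as commonly defined: the ratio of the battery's
    current full-charge capacity to its rated capacity. A cell is treated
    as end-of-life once this ratio drops below 0.8 (80%)."""
    return current_capacity_ah / rated_capacity_ah
```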
S12: according to the voltage and current values of the constant-current charging stage collected in the different aging experiments of S11, the battery capacity is calculated by coulomb counting; the values of the three variables (voltage, current and capacity) form a matrix that serves as the input data of the convolutional neural network. The model input X has the form:

$$X = \begin{bmatrix} V_1 & I_1 & C_1 \\ V_2 & I_2 & C_2 \\ \vdots & \vdots & \vdots \\ V_k & I_k & C_k \end{bmatrix} \quad (1)$$

wherein: k is the number of sampling points in the constant-current charging stage, and $V_i$, $I_i$, $C_i$ are the voltage, current and capacity at the i-th sampling point.
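The construction of the input matrix X of equation (1) via coulomb counting can be sketched as follows; the sampling interval `dt` and the function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def build_model_input(voltage, current, dt=1.0):
    """Build the k x 3 input matrix X = [V_i, I_i, C_i] of equation (1).

    Capacity C_i is obtained by coulomb counting: accumulating the
    charging current over time (Ah = A * s / 3600). `voltage`/`current`
    are samples from the constant-current charging stage; `dt` is the
    sampling interval in seconds (assumed -- the patent does not state
    the sampling rate).
    """
    voltage = np.asarray(voltage, dtype=float)
    current = np.asarray(current, dtype=float)
    # Cumulative charge in ampere-hours at each sampling point.
    capacity = np.cumsum(current) * dt / 3600.0
    return np.column_stack([voltage, current, capacity])
```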
S2: a convolutional neural network model is constructed; the whole network comprises convolutional layers, pooling layers and fully connected layers, and a rectified linear unit is selected as the activation function connected to the output of each convolutional layer and each pooling layer.
The forward propagation of a convolutional layer is computed as:

$$C_{i,j,k} = f\left(\sum_{a=1}^{h_k}\sum_{b=1}^{w_k}\sum_{c=1}^{c_k} k_{a,b,c,k}\,x_{i',j',c} + b_k\right) \quad (2)$$

$$i' = (i-1)\times h_s + a \quad (3)$$

$$j' = (j-1)\times w_s + b \quad (4)$$

where k indexes the convolution kernels in the convolutional layer, i.e. the channels of the output matrix, $C_{i,j,k}$ is the value at row i, column j of the k-th channel of the output matrix, $b_k$ is the bias value, $h_k$, $w_k$ and $c_k$ are respectively the height, width and number of channels of the convolution kernel, $w_s$ and $h_s$ are the strides of the convolution kernel in the width and height directions when scanning the input matrix, $x_{i',j',c}$ is the value at row i', column j' of channel c of the input matrix, $k_{a,b,c,k}$ is the value at row a, column b of channel c of the k-th convolution kernel, and f is the activation function;
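A minimal NumPy sketch of the convolutional forward pass of equations (2)-(4), using the ReLU activation chosen in S2; zero padding is assumed to be applied beforehand, and the loops are written for clarity rather than efficiency:

```python
import numpy as np

def conv_forward(x, kernels, biases, hs=1, ws=1):
    """Forward propagation of a convolutional layer per equations (2)-(4).

    x       : input of shape (h_in, w_in, c), e.g. the V/I/C matrix.
    kernels : shape (h_k, w_k, c, K) -- one h_k x w_k x c kernel per
              output channel k.
    biases  : shape (K,).
    hs, ws  : strides in the height and width directions.
    """
    h_in, w_in, _ = x.shape
    h_k, w_k, _, K = kernels.shape
    h_out = (h_in - h_k) // hs + 1
    w_out = (w_in - w_k) // ws + 1
    out = np.zeros((h_out, w_out, K))
    for k in range(K):
        for i in range(h_out):
            for j in range(w_out):
                # 0-based form of eqs (3)-(4): i' = i*hs + a, j' = j*ws + b
                patch = x[i * hs:i * hs + h_k, j * ws:j * ws + w_k, :]
                out[i, j, k] = np.sum(patch * kernels[:, :, :, k]) + biases[k]
    return np.maximum(out, 0.0)  # ReLU activation f
```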
the dimensions of the convolutional layer output are computed as:

$$w_{out} = \frac{w_{in} - w_k + 2w_p}{w_s} + 1 \quad (5)$$

$$h_{out} = \frac{h_{in} - h_k + 2h_p}{h_s} + 1 \quad (6)$$

wherein $w_k$ and $h_k$ are the width and height of the convolution kernel, $w_s$ and $h_s$ are the strides of the convolution kernel in the width and height directions when scanning the input matrix, $w_{in}$ and $w_{out}$ are the widths of the input and output matrices, $h_{in}$ and $h_{out}$ are the heights of the input and output matrices, and $w_p$ and $h_p$ are the numbers of zero elements symmetrically padded on the left-right and top-bottom sides of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions are applied repeatedly;
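The size formulas (5)-(6) translate directly to code; integer division is used here, which assumes the stride divides the padded extent evenly:

```python
def conv_output_size(w_in, h_in, w_k, h_k, ws=1, hs=1, wp=0, hp=0):
    """Output width and height of a convolutional layer per equations
    (5)-(6): (input size - kernel size + 2 * padding) / stride + 1."""
    w_out = (w_in - w_k + 2 * wp) // ws + 1
    h_out = (h_in - h_k + 2 * hp) // hs + 1
    return w_out, h_out
```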
the forward propagation of a max pooling layer is computed as:

$$M_{i,j,k} = \max_{0\le\sigma_1<e_1,\;0\le\sigma_2<e_2} x^{l-1}_{e_1 i+\sigma_1,\;e_2 j+\sigma_2,\;k} \quad (7)$$

equation (7) above expresses splitting the feature map into i×j regions of size $e_1\times e_2$ and performing one max pooling operation on the feature points of each $e_1\times e_2$ region; wherein $M_{i,j,k}$ is the value at row i, column j of the k-th channel of the pooling layer output, $x^{l-1}_{e_1 i+\sigma_1,\,e_2 j+\sigma_2,\,k}$ is the value at row $e_1 i+\sigma_1$, column $e_2 j+\sigma_2$ of the k-th channel of the previous convolutional layer, and $(e_1 i,\,e_2 j)$ are the upper-left corner coordinates of the corresponding $e_1\times e_2$ region;
the forward propagation of a fully connected layer is computed as:

$$a^l = f(z^l) = f(W^l a^{l-1} + b^l) \quad (8)$$

$$f(x) = \max(0, x) \quad (9)$$

where f(x) is the activation function, here the rectified linear unit of equation (9), $W^l$ and $b^l$ are respectively the weight and bias of the l-th layer, and $a^{l-1}$ is the input to the l-th layer;
the back propagation of a convolutional layer is:

$$\delta^{l-1} = \delta^l * \operatorname{rot180}(k^l) \odot f'(z^{l-1}) \quad (10)$$

where rot180 denotes rotating the convolution kernel by 180 degrees, and $\delta^l$ denotes the differential of the objective function with respect to the output of the l-th layer;
the objective function J for establishing the neural network is:
Figure BDA0002515692820000101
wherein
Figure BDA0002515692820000102
Is the output of the output layer, yi(x) Is the true label value, n is the number of samples, and λ is the regularization parameter.
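The objective of equation (12), squared error plus L2 weight regularization, can be sketched as follows; the value of `lam` is illustrative only:

```python
import numpy as np

def objective(y_hat, y, weights, lam=1e-4):
    """Objective function J per equation (12): mean squared error over n
    samples plus an L2 regularization term over the weight matrices.
    `lam` is the regularization parameter lambda (value assumed here)."""
    n = len(y)
    mse = np.sum((np.asarray(y_hat) - np.asarray(y)) ** 2) / (2 * n)
    reg = 0.5 * lam * sum(np.sum(W ** 2) for W in weights)
    return mse + reg
```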
The parameters $\theta_j$ inside the network, comprising the weights W and biases b, are updated according to the objective function (12) as follows:

$$\Delta\theta_{j+1} = \gamma\,\Delta\theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\frac{\partial}{\partial\theta_j}\left(\hat{y}_i(x)_j - y_i(x)_j\right)^2 \quad (13)$$

$$\theta_{j+1} = \theta_j + \Delta\theta_{j+1} \quad (14)$$

where m is the number of samples in a mini-batch, $\hat{y}_i(x)_j$ is the output value of the i-th input of the mini-batch at the j-th iteration, $y_i(x)_j$ is the corresponding true value, $\theta_j$ is the internal parameter at the j-th iteration, α is the learning rate, and γ is the momentum value.
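One mini-batch update of stochastic gradient descent with momentum, as described above, can be sketched as follows (a generic momentum step; the exact form in the patent is only summarized here):

```python
import numpy as np

def sgd_momentum_step(theta, velocity, grad, alpha=0.01, gamma=0.9):
    """Momentum SGD update: the velocity accumulates gamma times the
    previous step minus alpha times the averaged mini-batch gradient,
    and the parameter moves by the new velocity.

    grad : gradient of the loss averaged over the m mini-batch samples.
    alpha: learning rate; gamma: momentum value."""
    velocity = gamma * velocity - alpha * grad
    theta = theta + velocity
    return theta, velocity
```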
S3: pre-training the model constructed in S2, wherein the specific method is S31-S32:
s31: dividing the input data obtained from the accelerated aging experiment on brand-new lithium batteries in S1 into several mini-batches of training samples, feeding them batch by batch into the neural network constructed in S2, updating the parameters by stochastic gradient descent during iterative learning to obtain a first pre-trained model, and saving the parameter values of the first pre-trained model, including the convolution kernel values $k_{a,b,c,k}$, the biases $b_k$, and the fully connected layer weights $W^l$ and biases $b^l$;
S32: dividing the input data obtained from the normal-rate aging experiment on retired lithium batteries in S1 into several mini-batches of training samples, feeding them batch by batch into the network trained in S31, performing iterative learning starting from the parameter values saved in the first pre-trained model, further adjusting the parameters by stochastic gradient descent to obtain a second pre-trained model, and saving the parameter values of the second pre-trained model, including the new convolution kernel values $k'_{a,b,c,k}$ and biases $b'_k$ and the fully connected layer weights $W'^l$ and biases $b'^l$;
The forward propagation and parameter updates at this stage are:

$$C'_{i,j,k} = f\left(\sum_{a=1}^{h_k}\sum_{b=1}^{w_k}\sum_{c=1}^{c_k} k'_{a,b,c,k}\,x_{i',j',c} + b'_k\right) \quad (16)$$

$$a'^{\,l} = f(z'^{\,l}) = f(W'^{\,l} a'^{\,l-1} + b'^{\,l}) \quad (17)$$

$$\theta'_{j+1} = \theta'_j + \gamma\,\Delta\theta'_j - \frac{\alpha}{m}\sum_{i=1}^{m}\frac{\partial}{\partial\theta'_j}\left(\hat{y}_i(x)_j - y_i(x)_j\right)^2 \quad (18)$$

wherein: the model internal parameters $\theta'_j$ comprise the convolution kernel values $k'_{a,b,c,k}$ and biases $b'_k$ and the fully connected layer weights $W'^l$ and biases $b'^l$; the prime superscript on a parameter in equations (16)-(18) denotes the forward-propagation update value of that parameter in the pre-training stage;
in the convolutional neural network model, a strategy is added in the network to prevent overfitting, a certain proportion of hidden neurons in the network are temporarily deleted randomly, and input and output neurons are kept unchanged; the input is propagated forward through the modified network, and then the obtained loss result is propagated backward through the modified network; after a batch of training samples completes the process, parameters are updated on the neurons which are not deleted according to a random gradient descent method.
In the convolutional neural network model, a piecewise-constant decay strategy is adopted: the learning rate is adjusted during training rather than kept fixed, so that the model converges quickly.
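A piecewise-constant decay schedule of the kind described can be sketched as below; the boundary epochs and rate values are purely illustrative, since the patent gives no concrete schedule:

```python
def piecewise_constant_lr(epoch, boundaries=(50, 100), rates=(1e-2, 1e-3, 1e-4)):
    """Piecewise-constant learning-rate decay: the rate stays constant
    within each interval and drops at each boundary epoch. `boundaries`
    and `rates` here are assumed example values, not from the patent."""
    for boundary, rate in zip(boundaries, rates):
        if epoch < boundary:
            return rate
    return rates[-1]
```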
S4: dividing the input data obtained from the normal-rate aging experiment on brand-new lithium batteries in S1 into several mini-batches of training samples, feeding them batch by batch into the pre-trained model obtained in S3 for iterative learning, and keeping the convolutional layer parameters of the pre-trained model fixed during iterative learning, i.e. keeping $k'_{a,b,c,k}$ and $b'_k$ unchanged while updating only the fully connected layer weights $W'^l$ and biases $b'^l$ to $W''^l$ and $b''^l$; saving the updated parameters yields the final estimation model;
the forward propagation and parameter update at this time are as follows:
C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (19)
a_l'' = f(z_l'') = f(W_l'' a_{l-1}'' + b_l'')  (20)
θ''_{j+1} = θ''_j − γv_j − (α/m) Σ_{i=1}^{m} ∇_{θ''_j} L(ŷ_i(x)_j, y_i(x)_j)  (21)
wherein: internal parameter of model theta'jWeight W including full connection layerlAnd bias bl"; the superscript "" of a parameter in equations (19) - (21) indicates that the parameter is propagating forward to update values during the fine-tuning phase;
S5: performing a constant-current charging experiment on the lithium battery to be estimated to obtain its voltage, current and capacity test values, and taking the matrix formed by the three as the input X of the estimation model obtained in S4; the parameters k'_{a,b,c,k}, b'_k, W_l'' and b_l'' saved in S4 are used in the forward-propagation calculation of the network, which proceeds as follows:
C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (22)
a_l''' = f(z_l''') = f(W_l'' a_{l-1}''' + b_l'')  (23)
the superscript ''' on a parameter in equations (22)-(23) denotes the value of that parameter during forward propagation in the estimation stage;
finally, the estimation model outputs the state of health of the battery at that time.
The method is applied to a specific embodiment to show the specific implementation process and technical effect.
Examples
In this embodiment, the specific steps are as follows:
Step (1): acquiring the input data of the convolutional neural network.
a. Three brand-new Sony US18650VTC6 and three brand-new FST2000 lithium batteries are selected; on each battery a complete overcharge/overdischarge aging experiment and 75 normal-speed aging cycles are carried out. One Sony US18650VTC6 and one FST2000 waste battery, both close to the end of their service life, are then selected and subjected to normal-speed aging experiments of 35-40 cycles of constant-current charging, constant-voltage charging and constant-current discharging, consuming battery capacity until the state of health drops below 80%. This yields 8 sets of data in total. The upper and lower cut-off voltages for normal-speed aging are 4.2 V and 2.75 V respectively; the upper cut-off voltage for overcharge is 4.4 V and the lower cut-off voltage for overdischarge is 2 V.
The voltage, current, and capacity changes during a single charge-discharge cycle are shown in fig. 2.
b. The voltage and current values of the battery during the constant-current charging stage of each charge-discharge cycle are collected, and the capacity is obtained by coulomb counting; the values of these three variables form a matrix used as the input X of the convolutional neural network, where X has dimension 4000 × 3.
X = [ V_1  I_1  C_1
      V_2  I_2  C_2
       ⋮     ⋮     ⋮
      V_k  I_k  C_k ]  (1)
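The construction of X from constant-current charging samples can be sketched as follows; the sample values and function names are illustrative assumptions, not data or identifiers from the patent. Capacity is accumulated by coulomb counting, i.e. numerically integrating the current over time.

```python
# Sketch: build the k x 3 input matrix [V_i, I_i, C_i] from constant-current
# charging samples. Capacity C_i is obtained by coulomb counting (cumulative
# integral of current over time). Sample values below are illustrative only.

def coulomb_count(currents_a, dt_s):
    """Cumulative capacity in Ah from current samples (A) at a fixed step dt (s)."""
    cap, caps = 0.0, []
    for i_a in currents_a:
        cap += i_a * dt_s / 3600.0  # ampere-seconds -> ampere-hours
        caps.append(cap)
    return caps

def build_input_matrix(voltages_v, currents_a, dt_s):
    """Stack voltage, current and coulomb-counted capacity into rows [V, I, C]."""
    caps = coulomb_count(currents_a, dt_s)
    return [[v, i, c] for v, i, c in zip(voltages_v, currents_a, caps)]

# Four illustrative samples of a 3 A constant-current charge at 1 s intervals:
X = build_input_matrix([3.60, 3.61, 3.62, 3.63], [3.0, 3.0, 3.0, 3.0], 1.0)
```

In the embodiment, k = 4000 such sampling points give the 4000 × 3 input.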
Step (2): designing the convolutional neural network algorithm.
a. The whole network comprises convolutional layers, pooling layers and fully connected layers. A rectified linear unit (ReLU) is selected as the activation function, connected to the output of each convolutional layer and pooling layer.
The forward propagation of the convolutional layer is calculated as:
C_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k_{a,b,c,k} x_{i',j',c} + b_k )  (2)
i' = (i-1)×h_s + a  (3)
j' = (j-1)×w_s + b  (4)
where k indexes the convolution kernels in the convolutional layer (equivalently, the channels of the output matrix), C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix, b_k is the bias value, h_k, w_k and c_k are respectively the height, width and number of channels of the convolution kernel, w_s and h_s are the strides of the convolution kernel in the width and height directions when scanning the input matrix, x_{i',j',c} is the value at row i', column j' of the c-th channel of the input matrix, k_{a,b,c,k} is the value at row a, column b of the c-th channel of the k-th convolution kernel, and f is the activation function.
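A minimal sketch of this convolution forward pass (0-indexed, without padding; pure Python, with names chosen for illustration rather than taken from the patent):

```python
# Sketch of the convolutional forward pass of Eqs. (2)-(4): each output element
# C[i][j][k] is the activation of the dot product between the k-th kernel and
# the input patch at stride (hs, ws). 0-indexed; padding omitted for brevity.

def relu(x):
    return x if x > 0.0 else 0.0

def conv_forward(x, kernels, biases, hs=1, ws=1, f=relu):
    """x: H x W x C nested lists; kernels: list of hk x wk x C kernels."""
    H, W, C = len(x), len(x[0]), len(x[0][0])
    hk, wk = len(kernels[0]), len(kernels[0][0])
    h_out = (H - hk) // hs + 1
    w_out = (W - wk) // ws + 1
    out = [[[0.0] * len(kernels) for _ in range(w_out)] for _ in range(h_out)]
    for k, (ker, bk) in enumerate(zip(kernels, biases)):
        for i in range(h_out):
            for j in range(w_out):
                s = bk
                for a in range(hk):
                    for b in range(wk):
                        for c in range(C):
                            s += ker[a][b][c] * x[i * hs + a][j * ws + b][c]
                out[i][j][k] = f(s)
    return out

# 3x3 single-channel input and one 2x2 averaging-style kernel with zero bias:
x = [[[1.0], [2.0], [3.0]], [[4.0], [5.0], [6.0]], [[7.0], [8.0], [9.0]]]
ker = [[[0.25], [0.25]], [[0.25], [0.25]]]
y = conv_forward(x, [ker], [0.0])  # 2x2x1 output
```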
The output dimensions of the convolutional layer are calculated as follows:
w_out = (w_in − w_k + 2w_p)/w_s + 1  (5)
h_out = (h_in − h_k + 2h_p)/h_s + 1  (6)
where w_k and h_k are the width and height of the convolution kernel, w_s and h_s are the strides of the kernel in the width and height directions when scanning the input matrix, w_in, w_out, h_in and h_out are the widths and heights of the input and output matrices respectively, and w_p and h_p are the numbers of zero elements symmetrically padded on the left/right and top/bottom of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions are repeated;
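The two size formulas can be checked directly against the first convolution of the embodiment (input 4000 × 3, kernel 5 × 2, stride 1). The value h_p = 0 is an assumption consistent with the stated 3996-row output:

```python
# Output-size formula of Eqs. (5)-(6): out = (in - kernel + 2*pad) / stride + 1.

def conv_out_size(n_in, n_k, n_p, n_s):
    assert (n_in - n_k + 2 * n_p) % n_s == 0, "sizes must divide evenly"
    return (n_in - n_k + 2 * n_p) // n_s + 1

# First convolutional layer of the embodiment: kernels of size 5 x 2,
# stride 1, padding w_p = 1 (stated) and h_p = 0 (assumed).
h_out = conv_out_size(4000, 5, 0, 1)  # -> 3996
w_out = conv_out_size(3, 2, 1, 1)     # -> 4
```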
calculation of the forward propagation of the maximum pooling layer:
Figure BDA0002515692820000134
wherein M isi,j,kIs the value of the ith row and jth column of the k-th layer output of the pooling layer,
Figure BDA0002515692820000135
the e + sigma of the k-th layer of the previous convolutional layer1Line ej + sigma2Column values, the above formula representing the value for each e in the feature map1×e2One maximum pooling operation is performed for each zone.
The forward propagation of the fully connected layer is calculated as:
a_l = f(z_l) = f(W_l a_{l-1} + b_l)  (8)
Figure BDA0002515692820000141
where f(x) is the activation function, W_l and b_l are respectively the weight and bias of the l-th layer, and a_{l-1} is the input to the l-th layer; this corresponds to the convolution calculation in the convolutional layer.
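A compact sketch of the fully connected forward pass with a ReLU activation (names illustrative):

```python
# Sketch of the fully connected forward pass of Eq. (8):
# a_l = f(W_l a_{l-1} + b_l), with ReLU as the rectified linear unit.

def fc_forward(W, a_prev, b, f=lambda v: max(v, 0.0)):
    return [f(sum(w * x for w, x in zip(row, a_prev)) + bi)
            for row, bi in zip(W, b)]

# Two inputs, two output neurons:
a = fc_forward([[1.0, -1.0], [0.5, 0.5]], [2.0, 3.0], [0.0, 1.0])  # -> [0.0, 3.5]
```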
The size of the convolution kernels in each convolutional layer and the dimension changes through the max-pooling and fully connected layers can be seen in Fig. 1. The input X has dimension 4000 × 3 × 1. It is first convolved with 6 kernels of size 5 × 2 × 1, taking w_p = 1 and h_p = 0, so by equations (5) and (6) the convolutional-layer output has dimension 3996 × 4 × 6. A max-pooling layer then performs one pooling operation on every 4 × 2 region, giving an output of dimension 999 × 2 × 6. After 16 convolution kernels of size 5 × 1 × 6 the dimension becomes 995 × 2 × 16, and max pooling over every 5 × 1 region gives an output of dimension 199 × 2 × 16. This is flattened into a column vector of dimension 6368, which then passes through 3 fully connected layers whose dimensions change successively to 80 and 40; finally the network outputs a single neuron.
The back propagation of the convolutional layer is:
δ_{l-1} = δ_l * rot180(k_l) ⊙ f'(z_{l-1})  (10)
where rot180 denotes rotating the convolution kernel by 180 degrees, δ_l denotes the derivative of the objective function with respect to the output of the l-th layer, * denotes convolution and ⊙ the element-wise product.
The objective function J is established as:
J(W, b) = (1/2n) Σ_{i=1}^{n} ‖ŷ_i(x) − y_i(x)‖² + (λ/2)‖W‖²  (12)
where W and b are the weights and biases inside the network, ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter; λ = 0.001 is taken.
The weights and biases are updated as follows:
v_{j+1} = γ v_j + (α/m) Σ_{i=1}^{m} ∇_{θ_j} L(ŷ_i(x)_j, y_i(x)_j)  (13)
θ_{j+1} = θ_j − v_{j+1}  (14)
where m is the number of samples contained in a mini-batch, ŷ_i(x)_j is the output value for the i-th input of the mini-batch at the j-th iteration, y_i(x)_j is the corresponding true value, L is the per-sample loss in (12), θ_j is the vector of internal parameters at the j-th iteration, α is the learning rate and γ is the momentum value; m = 64 is taken, the learning rate is set as described in step c, and γ = 0.9.
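The momentum update can be sketched on a toy one-parameter problem; the quadratic loss and its settings below are illustrative, not the patent's network loss, but the velocity/parameter recursion follows the description (learning rate α, momentum γ):

```python
# Sketch of the momentum SGD update: the velocity accumulates gamma-decayed
# gradients and the parameter moves against it. The toy loss (theta - 3)^2
# is illustrative only; its minimum is at theta = 3.

def sgd_momentum_step(theta, v, grad, alpha, gamma):
    v_new = gamma * v + alpha * grad
    return theta - v_new, v_new

theta, v = 0.0, 0.0
for _ in range(200):
    grad = 2.0 * (theta - 3.0)  # d/dtheta of (theta - 3)^2
    theta, v = sgd_momentum_step(theta, v, grad, alpha=0.05, gamma=0.9)
# theta converges toward 3.0
```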
Accuracy and root-mean-square error are adopted as the model evaluation indexes:
Figure BDA0002515692820000152
Figure BDA0002515692820000153
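The RMSE below follows the standard definition; because the accuracy formula itself appears only as an image in the text, the accuracy shown here (1 minus the mean relative error) is an assumed surrogate that yields values in the reported "99.x %" range, not necessarily the patent's exact definition:

```python
# Sketch of the two evaluation indexes. rmse() is the standard root-mean-square
# error; accuracy() is an ASSUMED surrogate (1 - mean relative error), since
# the original accuracy formula is not recoverable from the text.
import math

def rmse(y_hat, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y)) / len(y))

def accuracy(y_hat, y):
    return 1.0 - sum(abs(a - b) / b for a, b in zip(y_hat, y)) / len(y)

# Two illustrative SOH estimates against two true values:
r = rmse([0.95, 0.90], [0.96, 0.88])
acc = accuracy([0.95, 0.90], [0.96, 0.88])
```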
b. A dropout-style strategy is added on top of the network to prevent overfitting: a proportion p of the hidden neurons in the network is temporarily deleted at random while the input and output neurons are kept unchanged. The input is propagated forward through the modified network and the resulting loss is then propagated backward through the same modified network. After a mini-batch of training samples has completed this process, the parameters of the neurons that were not deleted are updated by stochastic gradient descent. Each hidden neuron stops computing with probability p = 0.5.
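A sketch of the dropout step (function names illustrative; the usual inverted-dropout rescaling by 1/(1-p) is omitted because the text does not mention it; the RNG is seeded only for repeatability):

```python
# Sketch of dropout with p = 0.5: each hidden activation is zeroed with
# probability p during training; input and output neurons are untouched.
import random

def dropout_mask(n, p, rng):
    """1.0 keeps the neuron, 0.0 temporarily deletes it."""
    return [0.0 if rng.random() < p else 1.0 for _ in range(n)]

def apply_dropout(activations, p, rng):
    mask = dropout_mask(len(activations), p, rng)
    return [a * m for a, m in zip(activations, mask)], mask

rng = random.Random(0)  # seeded for repeatability
out, mask = apply_dropout([1.0] * 10, 0.5, rng)
```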
c. A piecewise-constant decay strategy is adopted: the learning rate is adjusted during training rather than held fixed, so that the model converges quickly. The initial learning rate is 1 × 10⁻⁵ and it decays to 0.7 times its current value after every fixed number of iterations.
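The schedule can be sketched as follows; the decay period is left unspecified in the text, so period = 100 below is an assumed illustrative value:

```python
# Sketch of the piecewise-constant decay schedule: the learning rate starts
# at 1e-5 and is multiplied by 0.7 after every `period` iterations.
# The period is an ASSUMPTION; the patent does not state it.

def piecewise_constant_lr(iteration, base_lr=1e-5, factor=0.7, period=100):
    return base_lr * factor ** (iteration // period)

lrs = [piecewise_constant_lr(i) for i in (0, 99, 100, 250)]
```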
Step (3): pre-training the model.
a. Transfer learning first trains a base network on a base data set and base task, then transfers the learned features to a second, target network, which is fine-tuned and trained with the target data set and target task. Since battery aging under accelerated conditions has some similarity to aging at normal speed, a base model for normal-speed aging can be pre-trained using accelerated aging data. The specific transfer-learning strategy of the invention is shown in Fig. 3.
The accelerated aging experiments of step (1) yield cycle data covering the shortened service life of the batteries under overcharge and overdischarge conditions. All cycles are divided into a series of small-batch training samples, which are input into the network batch by batch, and the parameters are updated by stochastic gradient descent to obtain a preliminary pre-training model. After iterative learning the parameters in the network are fixed and their values are saved, namely: the convolution kernel values k_{a,b,c,k}, bias values b_k, and the fully connected layer weights W_l and biases b_l.
b. If only the accelerated aging data were used for pre-training and the resulting model were used directly in the fine-tuning and testing of step (4), the model would perform poorly on the last part of the battery's life cycle and the SOH estimate would be very inaccurate. This is probably because battery aging near the end of life differs markedly between accelerated and normal conditions, so the learned features do not generalize well. Therefore, after the pre-training stage on accelerated aging data, a second pre-training stage continues with normal-speed aging data from some of the waste batteries.
Similarly, the normal-speed aging experiment on the waste batteries in step (1) yields data for the last part of the cycles in their life cycle. These cycles are divided into a series of small-batch training samples and input into the network batch by batch; the parameters are updated by stochastic gradient descent, further adjusting them on top of the original preliminary pre-training model to obtain a new pre-training model. Specifically, the parameters saved in the previous stage are used to initialize the new model; after iterative learning the parameters in the network are fixed again and their values are saved, namely: k'_{a,b,c,k}, b'_k, W_l' and b_l'.
The forward propagation and parameter update at this time are as follows, where θ'_j comprises the convolution kernel values k'_{a,b,c,k} and biases b'_k and the fully connected layer weights W_l' and biases b_l':
C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (16)
a_l' = f(z_l') = f(W_l' a_{l-1}' + b_l')  (17)
θ'_{j+1} = θ'_j − γv_j − (α/m) Σ_{i=1}^{m} ∇_{θ'_j} L(ŷ_i(x)_j, y_i(x)_j)  (18)
c. Using the data obtained in step (1), 4 groups of experiments are designed; each group goes through the preliminary pre-training stage and the further pre-training stage in turn, and the data used in each stage are listed in Table 1.
Table 1: data used at each stage of 4 experiments
Step (4): fine-tuning and online estimation of the model.
a. The normal-speed aging experiment of step (1) yields the first 15% of the cycle data of the brand-new batteries. These cycles are divided into a series of small-batch training samples and input into the network batch by batch; the parameters are adjusted on the basis of the pre-training model obtained in step (3), and the fine-tuned model is used to estimate the battery state of health online at any moment. In the fine-tuning stage, not all parameters are updated as in the second pre-training stage; instead the convolutional-layer parameters are fixed, i.e. k'_{a,b,c,k} and b'_k are kept unchanged, and only the fully connected layer weights W_l' and biases b_l' are updated to W_l'' and b_l''. These parameter values are saved as the model for the final test. The data used in the fine-tuning stage are shown in Table 1.
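The fine-tuning rule of step (4), updating only the fully connected parameters while the convolutional parameters stay frozen at their pre-trained values, can be sketched with illustrative scalar parameters (a real implementation would hold tensors, e.g. by masking gradients or excluding frozen tensors from the optimizer):

```python
# Sketch of a fine-tuning step with frozen convolutional parameters: only the
# fully-connected entries receive the gradient update. Parameter names and
# scalar values are illustrative, not the patent's actual tensors.

def finetune_step(params, grads, lr, frozen=("conv_k", "conv_b")):
    return {name: (value if name in frozen else value - lr * grads[name])
            for name, value in params.items()}

params = {"conv_k": 0.30, "conv_b": 0.10, "fc_W": 0.50, "fc_b": 0.20}
grads = {"conv_k": 1.0, "conv_b": 1.0, "fc_W": 1.0, "fc_b": 1.0}
new = finetune_step(params, grads, lr=0.1)
# conv_k and conv_b are unchanged; fc_W and fc_b move against their gradients.
```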
The forward propagation and parameter update at this time are as follows, where θ''_j comprises the fully connected layer weights W_l'' and biases b_l'':
C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (19)
a_l'' = f(z_l'') = f(W_l'' a_{l-1}'' + b_l'')  (20)
θ''_{j+1} = θ''_j − γv_j − (α/m) Σ_{i=1}^{m} ∇_{θ''_j} L(ŷ_i(x)_j, y_i(x)_j)  (21)
b. For online estimation, a constant-current charging experiment is performed on the battery at a given moment; the matrix formed by its voltage, current and capacity is used as the model input X, and the parameters saved in the fine-tuning stage, k'_{a,b,c,k}, b'_k, W_l'' and b_l'', are used in the forward-propagation calculation of the network. The state of health of the battery at that moment is output after the calculations of equations (2)-(9); no back-propagation is needed during testing.
The forward propagation at this time is as follows:
C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (22)
a_l''' = f(z_l''') = f(W_l'' a_{l-1}''' + b_l'')  (23)
Figs. 4 and 5 show partial experimental results, corresponding respectively to experiments 2 and 4 in Table 1, in which the triangles represent the true SOH and the circles the online estimates. The results show that the accuracy of the invention reaches 99.56% and 99.01%, and the root-mean-square errors are 0.435% and 1.120%, respectively.
The above-described embodiments are merely preferred embodiments of the present invention, which should not be construed as limiting the invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.

Claims (8)

1. A lithium battery health state estimation method based on a convolutional neural network and transfer learning is characterized by comprising the following steps:
s1: the method for acquiring the input data of the convolutional neural network comprises the following steps:
s11: selecting a plurality of brand-new lithium batteries of different models, respectively carrying out an accelerated aging experiment to acquire cycle data, and continuously consuming the battery capacity according to the cycles of constant-current charging, constant-voltage charging and constant-current discharging until the health state is reduced to below 80%;
meanwhile, waste lithium batteries with the same type and close to the end of service life are obtained, normal speed aging experiments are respectively carried out to collect cycle data, and the battery capacity is consumed according to the processes of constant current charging, constant voltage charging and constant current discharging until the health state is reduced to be below 80%;
acquiring brand new lithium batteries with the same model, respectively carrying out a normal speed aging experiment to acquire cycle data, carrying out charge-discharge cycle according to the processes of constant current charging, constant voltage charging and constant current discharging, and acquiring the first 15% cycle data of the service life of the batteries;
s12: calculating to obtain the battery capacity according to the voltage and current values of the battery in the constant current charging stage in different aging experiments collected in S11, and forming a matrix by using the numerical values of the voltage, the current and the battery capacity as input data of the convolutional neural network;
s2: constructing a convolutional neural network model, wherein the whole network comprises convolutional layers, pooling layers and full-connection layers, and selecting a modified linear unit as an activation function to be connected with the output of each convolutional layer and each pooling layer;
s3: pre-training the model constructed in S2, wherein the specific method is S31-S32:
S31: dividing the input data obtained from the accelerated aging experiment on the brand-new lithium batteries in S1 into a number of small-batch training samples, inputting them batch by batch into the neural network constructed in S2, and updating the parameters by stochastic gradient descent during iterative learning to obtain a first pre-training model; the parameter values of the first pre-training model are saved, including the convolution kernel values k_{a,b,c,k}, bias values b_k, and the fully connected layer weights W_l and biases b_l;
S32: dividing the input data obtained from the normal-speed aging experiment on the waste lithium batteries in S1 into a number of small-batch training samples, inputting them batch by batch into the neural network trained in S31, performing iterative learning starting from the model parameter values saved for the first pre-training model, and further adjusting the parameters by stochastic gradient descent to obtain a second pre-training model; the parameter values of the second pre-training model are saved, including the new convolution kernel values k'_{a,b,c,k} and bias values b'_k, and the fully connected layer weights W_l' and biases b_l';
The forward propagation and parameter update at this time are as follows:
C'_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (16)
a_l' = f(z_l') = f(W_l' a_{l-1}' + b_l')  (17)
θ'_{j+1} = θ'_j − γv_j − (α/m) Σ_{i=1}^{m} ∇_{θ'_j} L(ŷ_i(x)_j, y_i(x)_j)  (18)
wherein: model internal parameter θ'jValue k 'comprising a convolution kernel'a,b,c,kAnd a bias value of b'kWeight W of full connection layerl' and offset bl'; the superscript "'" of the parameter in equations (16) - (18) indicates the forward propagation update value of the parameter in the pre-training phase;
S4: dividing the input data obtained from the normal-speed aging experiment on the brand-new lithium batteries in S1 into a number of small-batch training samples and inputting them batch by batch into the pre-training model obtained in S3 for iterative learning; during iterative learning the convolutional-layer parameters of the pre-training model are held fixed, i.e. k'_{a,b,c,k} and b'_k are kept unchanged, and only the fully connected layer weights W_l' and biases b_l' are updated to W_l'' and b_l''; the updated parameters are saved to obtain the final estimation model;
the forward propagation and parameter update at this time are as follows:
C''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (19)
a_l'' = f(z_l'') = f(W_l'' a_{l-1}'' + b_l'')  (20)
θ''_{j+1} = θ''_j − γv_j − (α/m) Σ_{i=1}^{m} ∇_{θ''_j} L(ŷ_i(x)_j, y_i(x)_j)  (21)
wherein: internal parameter of model theta'jWeight W including full connection layerlAnd bias bl"; the superscript "" of a parameter in equations (19) - (21) indicates that the parameter is propagating forward to update values during the fine-tuning phase;
S5: performing a constant-current charging experiment on the lithium battery to be estimated to obtain its voltage, current and capacity test values, and taking the matrix formed by the three as the input X of the estimation model obtained in S4; the parameters k'_{a,b,c,k}, b'_k, W_l'' and b_l'' saved in S4 are used in the forward-propagation calculation of the network, which proceeds as follows:
C'''_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k'_{a,b,c,k} x_{i',j',c} + b'_k )  (22)
a_l''' = f(z_l''') = f(W_l'' a_{l-1}''' + b_l'')  (23)
the superscript ''' on a parameter in equations (22)-(23) denotes the value of that parameter during forward propagation in the estimation stage;
finally, the estimation model outputs the state of health of the battery at that time.
2. The method for estimating the state of health of a lithium battery based on a convolutional neural network and transfer learning as claimed in claim 1, wherein in S1, the accelerated aging test refers to overcharging and overdischarging the battery by setting a higher upper constant-current charging voltage limit and a lower constant-current discharging voltage limit.
3. The method for estimating the health state of a lithium battery based on a convolutional neural network and transfer learning of claim 1, wherein in S1, the normal speed aging test of the waste lithium battery is performed for 35-40 charge and discharge cycles.
4. The method for estimating the health status of a lithium battery based on a convolutional neural network and transfer learning of claim 1, wherein in S1, the normal speed aging test of a brand-new lithium battery is performed for 75 charge and discharge cycles.
5. The lithium battery health state estimation method based on the convolutional neural network and the transfer learning of claim 1, wherein in S1, the data collected by different aging experiments are respectively constructed as model inputs X:
X = [ V_1  I_1  C_1
      V_2  I_2  C_2
       ⋮     ⋮     ⋮
      V_k  I_k  C_k ]  (1)
wherein: k is the number of sampling points in the constant-current charging stage, and V_i, I_i and C_i are respectively the voltage, current and capacity at the i-th sampling point.
6. The lithium battery state of health estimation method based on convolutional neural network and transfer learning of claim 1, wherein in the convolutional neural network model, the calculation of forward propagation of convolutional layer is as follows:
C_{i,j,k} = f( Σ_{c=1}^{c_k} Σ_{a=1}^{h_k} Σ_{b=1}^{w_k} k_{a,b,c,k} x_{i',j',c} + b_k )  (2)
i' = (i-1)×h_s + a  (3)
j' = (j-1)×w_s + b  (4)
where k indexes the convolution kernels in the convolutional layer (equivalently, the channels of the output matrix), C_{i,j,k} is the value at row i, column j of the k-th channel of the output matrix, b_k is the bias value, h_k, w_k and c_k are respectively the height, width and number of channels of the convolution kernel, w_s and h_s are the strides of the convolution kernel in the width and height directions when scanning the input matrix, x_{i',j',c} is the value at row i', column j' of the c-th channel of the input matrix, k_{a,b,c,k} is the value at row a, column b of the c-th channel of the k-th convolution kernel, and f is the activation function;
the output dimensions of the convolutional layer are calculated as follows:
w_out = (w_in − w_k + 2w_p)/w_s + 1  (5)
h_out = (h_in − h_k + 2h_p)/h_s + 1  (6)
where w_k and h_k are the width and height of the convolution kernel, w_s and h_s are the strides of the kernel in the width and height directions when scanning the input matrix, w_in and w_out are the widths of the input and output matrices, h_in and h_out are the heights of the input and output matrices, and w_p and h_p are respectively the numbers of zero elements symmetrically padded on the left/right and top/bottom of the input matrix, which prevents the boundary information of the matrix from being lost as convolutions are repeated;
the forward propagation of the max-pooling layer is calculated as:
M_{i,j,k} = max_{0≤σ_1<e_1, 0≤σ_2<e_2} x_{ei+σ_1, ej+σ_2, k}  (7)
equation (7) above indicates that the feature map is divided into i × j regions of size e_1 × e_2 and one max-pooling operation is performed on the feature points of each e_1 × e_2 region; wherein M_{i,j,k} is the value at row i, column j of the k-th channel of the pooling-layer output, x_{ei+σ_1, ej+σ_2, k} is the value at row ei+σ_1, column ej+σ_2 of the k-th channel of the preceding convolutional layer, and (ei, ej) is the upper-left coordinate of the e_1 × e_2 region corresponding to row i, column j of the feature map;
the forward propagation of the fully connected layer is calculated as:
a_l = f(z_l) = f(W_l a_{l-1} + b_l)  (8)
Figure FDA0002515692810000051
where f(x) is the activation function, W_l and b_l are respectively the weight and bias of the l-th layer, and a_{l-1} is the input to the l-th layer;
the back propagation of the convolutional layer is:
δ_{l-1} = δ_l * rot180(k_l) ⊙ f'(z_{l-1})  (10)
where rot180 denotes rotating the convolution kernel by 180 degrees, δ_l denotes the derivative of the objective function with respect to the output of the l-th layer, * denotes convolution and ⊙ the element-wise product;
the objective function J of the neural network is established as:
J(W, b) = (1/2n) Σ_{i=1}^{n} ‖ŷ_i(x) − y_i(x)‖² + (λ/2)‖W‖²  (12)
where ŷ_i(x) is the output of the output layer, y_i(x) is the true label value, n is the number of samples, and λ is the regularization parameter.
The network internal parameters θ_j, comprising the weights W and biases b, are updated according to the objective function (12):
v_{j+1} = γ v_j + (α/m) Σ_{i=1}^{m} ∇_{θ_j} L(ŷ_i(x)_j, y_i(x)_j)  (13)
θ_{j+1} = θ_j − v_{j+1}  (14)
where m is the number of samples contained in a mini-batch, ŷ_i(x)_j is the output value for the i-th input of the mini-batch at the j-th iteration, y_i(x)_j is the corresponding true value, L is the per-sample loss in (12), θ_j is the vector of internal parameters at the j-th iteration, α is the learning rate and γ is the momentum value.
7. The lithium battery health state estimation method based on a convolutional neural network and transfer learning according to claim 1, wherein in the convolutional neural network model a dropout-style strategy is added to the network to prevent overfitting: a certain proportion of hidden neurons in the network are temporarily deleted at random while the input and output neurons are kept unchanged; the input is propagated forward through the modified network, and the resulting loss is then propagated backward through the same modified network; after a batch of training samples has completed this process, the parameters of the neurons that were not deleted are updated by stochastic gradient descent.
8. The lithium battery health state estimation method based on a convolutional neural network and transfer learning according to claim 1, wherein a piecewise-constant decay strategy is adopted in the convolutional neural network model: the learning rate is adjusted during training rather than held fixed, so that the model converges quickly.
CN202010475482.1A 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning Active CN111638465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010475482.1A CN111638465B (en) 2020-05-29 2020-05-29 Lithium battery health state estimation method based on convolutional neural network and transfer learning


Publications (2)

Publication Number Publication Date
CN111638465A true CN111638465A (en) 2020-09-08
CN111638465B CN111638465B (en) 2023-02-28


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112083337A (en) * 2020-10-22 2020-12-15 重庆大学 Power battery health prediction method oriented to predictive operation and maintenance
CN112231975A (en) * 2020-10-13 2021-01-15 中国铁路上海局集团有限公司南京供电段 Data modeling method and system based on reliability analysis of railway power supply equipment
CN112345952A (en) * 2020-09-23 2021-02-09 上海电享信息科技有限公司 Power battery aging degree judging method
CN112444748A (en) * 2020-10-12 2021-03-05 武汉蔚来能源有限公司 Battery abnormality detection method, battery abnormality detection device, electronic apparatus, and storage medium
CN112666479A (en) * 2020-12-02 2021-04-16 西安交通大学 Battery service life prediction method based on charging cycle fusion
CN112666480A (en) * 2020-12-02 2021-04-16 西安交通大学 Battery life prediction method based on charging process characteristic attention
CN112684346A (en) * 2020-12-10 2021-04-20 西安理工大学 Lithium battery health state estimation method based on genetic convolutional neural network
CN112798960A (en) * 2021-01-14 2021-05-14 重庆大学 Battery pack residual life prediction method based on migration deep learning
CN112834945A (en) * 2020-12-31 2021-05-25 东软睿驰汽车技术(沈阳)有限公司 Evaluation model establishing method, battery health state evaluation method and related product
CN113406496A (en) * 2021-05-26 2021-09-17 广州市香港科大***研究院 Battery capacity prediction method, system, device and medium based on model migration
CN113536676A (en) * 2021-07-15 2021-10-22 重庆邮电大学 Lithium battery health condition monitoring method based on feature transfer learning
CN113612269A (en) * 2021-07-02 2021-11-05 国网山东省电力公司莱芜供电公司 Battery monomer charging and discharging control method and system for lead-acid storage battery energy storage station
CN113721151A (en) * 2021-11-03 2021-11-30 杭州宇谷科技有限公司 Battery capacity estimation model and method based on double-tower deep learning network
CN113740736A (en) * 2021-08-31 2021-12-03 哈尔滨工业大学 Electric vehicle lithium battery SOH estimation method based on deep network self-adaptation
CN113777499A (en) * 2021-09-24 2021-12-10 山东浪潮科学研究院有限公司 Lithium battery capacity estimation method based on convolutional neural network
CN114578250A (en) * 2022-02-28 2022-06-03 广东工业大学 Lithium battery SOH estimation method based on double-triangular structure matrix
CN114720882A (en) * 2022-05-20 2022-07-08 东南大学溧阳研究院 Reconstruction method of lithium ion battery maximum capacity fading curve based on neural network and migration model
CN115015760A (en) * 2022-05-10 2022-09-06 香港中文大学(深圳) Lithium battery health state evaluation method based on neural network and migration integrated learning
CN115184805A (en) * 2022-06-21 2022-10-14 东莞新能安科技有限公司 Battery health state acquisition method, device, equipment and computer program product
JP2023017480A (en) * 2021-07-26 2023-02-07 本田技研工業株式会社 Battery model construction method and battery degradation prediction device
CN117054892A (en) * 2023-10-11 2023-11-14 特变电工西安电气科技有限公司 Evaluation method, device and management method for battery state of energy storage power station

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034045A (en) * 2018-07-20 2018-12-18 中南大学 A kind of leucocyte automatic identifying method based on convolutional neural networks
CN109523013A (en) * 2018-10-15 2019-03-26 西北大学 A kind of air particle pollution level estimation method based on shallow-layer convolutional neural networks
CN109784480A (en) * 2019-01-17 2019-05-21 武汉大学 A kind of power system state estimation method based on convolutional neural networks
CN109918752A (en) * 2019-02-26 2019-06-21 华南理工大学 Mechanical failure diagnostic method, equipment and medium based on migration convolutional neural networks
US20200081070A1 (en) * 2017-11-20 2020-03-12 The Trustees Of Columbia University In The City Of New York Neural-network state-of-charge and state of health estimation

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
DANHUA ZHOU; ZHANYING LI; JIALI ZHU; HAICHUAN ZHANG; LIN HOU: "State of Health Monitoring and Remaining Useful Life Prediction of Lithium-Ion Batteries Based on Temporal Convolutional Network", IEEE Access, vol. 8, 16 March 2020 (2020-03-16), XP011780331, DOI: 10.1109/ACCESS.2020.2981261 *
MICROSTRONG0305: "A Survey of Convolutional Neural Networks (CNN)", CSDN Blog, 10 December 2018 (2018-12-10) *
SHENG SHEN; MOHAMMADKAZEM SADOUGHI; XIANGYI CHEN; MINGYI HONG; CHAO HU: "A deep learning method for online capacity estimation of lithium-ion batteries", Journal of Energy Storage, 31 October 2019 (2019-10-31), pages 2 *
YOHWAN CHOI; SEUNGHYOUNG RYU; KYUNGNAM PARK; HONGSEOK KIM: "Machine Learning-Based Lithium-Ion Battery Capacity Estimation Exploiting Multi-Channel Charging Profiles", IEEE Access, vol. 7, 5 June 2019 (2019-06-05) *
LI YANG: "Research on Energy Management of a Hybrid Power System with Lithium Batteries and Supercapacitors for Vehicles", China Masters' Theses Full-text Database, Engineering Science and Technology II *
LI YANG: "Research on Energy Management of a Hybrid Power System with Lithium Batteries and Supercapacitors for Vehicles", China Masters' Theses Full-text Database, Engineering Science and Technology II, vol. 2022, no. 1, 15 January 2022 (2022-01-15) *
XIAO RENXIN et al.: "State-of-Health Estimation Based on a Long Short-Term Memory Neural Network", Agricultural Equipment & Vehicle Engineering *
XIAO RENXIN et al.: "State-of-Health Estimation Based on a Long Short-Term Memory Neural Network", Agricultural Equipment & Vehicle Engineering, no. 04, 10 April 2020 (2020-04-10) *
蜜丝特湖: "The Pooling Layer and Its Formula Derivation", CSDN Blog, 14 June 2018 (2018-06-14) *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112345952A (en) * 2020-09-23 2021-02-09 上海电享信息科技有限公司 Power battery aging degree judging method
CN112444748A (en) * 2020-10-12 2021-03-05 武汉蔚来能源有限公司 Battery abnormality detection method, battery abnormality detection device, electronic apparatus, and storage medium
CN112231975A (en) * 2020-10-13 2021-01-15 中国铁路上海局集团有限公司南京供电段 Data modeling method and system based on reliability analysis of railway power supply equipment
CN112083337A (en) * 2020-10-22 2020-12-15 重庆大学 Power battery health prediction method oriented to predictive operation and maintenance
CN112666479B (en) * 2020-12-02 2023-05-16 西安交通大学 Battery life prediction method based on charge cycle fusion
CN112666479A (en) * 2020-12-02 2021-04-16 西安交通大学 Battery service life prediction method based on charging cycle fusion
CN112666480A (en) * 2020-12-02 2021-04-16 西安交通大学 Battery life prediction method based on charging process characteristic attention
CN112684346A (en) * 2020-12-10 2021-04-20 西安理工大学 Lithium battery health state estimation method based on genetic convolutional neural network
CN112684346B (en) * 2020-12-10 2023-06-20 西安理工大学 Lithium battery health state estimation method based on genetic convolutional neural network
CN112834945A (en) * 2020-12-31 2021-05-25 东软睿驰汽车技术(沈阳)有限公司 Evaluation model establishing method, battery health state evaluation method and related product
CN112798960A (en) * 2021-01-14 2021-05-14 重庆大学 Battery pack residual life prediction method based on migration deep learning
CN113406496A (en) * 2021-05-26 2021-09-17 广州市香港科大***研究院 Battery capacity prediction method, system, device and medium based on model migration
CN113406496B (en) * 2021-05-26 2023-02-28 广州市香港科大***研究院 Battery capacity prediction method, system, device and medium based on model migration
CN113612269B (en) * 2021-07-02 2023-06-27 国网山东省电力公司莱芜供电公司 Method and system for controlling charging and discharging of battery cells in a lead-acid battery energy storage station
CN113612269A (en) * 2021-07-02 2021-11-05 国网山东省电力公司莱芜供电公司 Battery cell charging and discharging control method and system for a lead-acid battery energy storage station
CN113536676A (en) * 2021-07-15 2021-10-22 重庆邮电大学 Lithium battery health condition monitoring method based on feature transfer learning
JP2023017480A (en) * 2021-07-26 2023-02-07 本田技研工業株式会社 Battery model construction method and battery degradation prediction device
JP7269999B2 (en) 2021-07-26 2023-05-09 本田技研工業株式会社 Battery model construction method and battery deterioration prediction device
CN113740736A (en) * 2021-08-31 2021-12-03 哈尔滨工业大学 Electric vehicle lithium battery SOH estimation method based on deep network self-adaptation
CN113740736B (en) * 2021-08-31 2024-04-02 哈尔滨工业大学 Electric vehicle lithium battery SOH estimation method based on deep network self-adaption
CN113777499A (en) * 2021-09-24 2021-12-10 山东浪潮科学研究院有限公司 Lithium battery capacity estimation method based on convolutional neural network
CN113721151A (en) * 2021-11-03 2021-11-30 杭州宇谷科技有限公司 Battery capacity estimation model and method based on double-tower deep learning network
CN114578250B (en) * 2022-02-28 2022-09-02 广东工业大学 Lithium battery SOH estimation method based on double-triangular structure matrix
CN114578250A (en) * 2022-02-28 2022-06-03 广东工业大学 Lithium battery SOH estimation method based on double-triangular structure matrix
CN115015760A (en) * 2022-05-10 2022-09-06 香港中文大学(深圳) Lithium battery health state evaluation method based on neural network and migration integrated learning
CN115015760B (en) * 2022-05-10 2024-06-14 香港中文大学(深圳) Lithium battery health state assessment method based on neural network and migration integrated learning
CN114720882B (en) * 2022-05-20 2023-02-17 东南大学溧阳研究院 Reconstruction method of maximum capacity fading curve of lithium ion battery
CN114720882A (en) * 2022-05-20 2022-07-08 东南大学溧阳研究院 Reconstruction method of lithium ion battery maximum capacity fading curve based on neural network and migration model
CN115184805A (en) * 2022-06-21 2022-10-14 东莞新能安科技有限公司 Battery health state acquisition method, device, equipment and computer program product
CN117054892A (en) * 2023-10-11 2023-11-14 特变电工西安电气科技有限公司 Evaluation method, device and management method for battery state of energy storage power station
CN117054892B (en) * 2023-10-11 2024-02-27 特变电工西安电气科技有限公司 Evaluation method, device and management method for battery state of energy storage power station

Also Published As

Publication number Publication date
CN111638465B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN111638465B (en) Lithium battery health state estimation method based on convolutional neural network and transfer learning
Yang et al. State-of-charge estimation of lithium-ion batteries based on gated recurrent neural network
Xiao et al. Accurate state-of-charge estimation approach for lithium-ion batteries by gated recurrent unit with ensemble optimizer
CN110888058B (en) Algorithm for joint estimation of power battery SOC and SOH
CN110068774A (en) Estimation method, device and the storage medium of lithium battery health status
CN115632179B (en) Intelligent quick charging method and system for lithium ion battery
CN110146822A (en) An online estimation method for vehicle power battery capacity based on the constant-current charging process
CN113702843B (en) Lithium battery parameter identification and SOC estimation method based on the coyote optimization algorithm
Li et al. CNN and transfer learning based online SOH estimation for lithium-ion battery
CN104156791A (en) Lithium-ion battery remaining life prediction method based on LS-SVM probabilistic ensemble learning
CN112163372B (en) SOC estimation method of power battery
CN112782594B (en) Method for estimating SOC (state of charge) of lithium battery by data-driven algorithm considering internal resistance
CN114726045B (en) Lithium battery SOH estimation method based on IPEA-LSTM model
CN113534938B (en) Method for estimating residual electric quantity of notebook computer based on improved Elman neural network
CN115015760B (en) Lithium battery health state assessment method based on neural network and migration integrated learning
CN113406503A (en) Lithium battery SOH online estimation method based on deep neural network
CN114779103A (en) Lithium ion battery SOC estimation method based on time-lag convolutional neural network
Hasan et al. Performance comparison of machine learning methods with distinct features to estimate battery SOC
CN114167295B (en) Lithium ion battery SOC estimation method and system based on multi-algorithm fusion
CN113917336A (en) Lithium ion battery health state prediction method based on segment charging time and GRU
CN115219906A (en) Multi-model fusion battery state of charge prediction method and system based on GA-PSO optimization
CN115963407A (en) ICGWO (intensive care unit) optimization ELM (element-based robust model) based lithium battery SOC estimation method
CN116643196A (en) Battery health state estimation method integrating mechanism and data driving model
CN116401770A (en) Quick charge strategy design method based on battery digital twin model and machine learning
CN110232432B (en) Lithium battery pack SOC prediction method based on artificial life model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant