CN113777496B - Lithium ion battery residual life prediction method based on time convolution neural network - Google Patents

Lithium ion battery residual life prediction method based on time convolution neural network

Info

Publication number
CN113777496B
CN113777496B (application CN202111039920.0A)
Authority
CN
China
Prior art keywords
training
battery
data
convolution
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111039920.0A
Other languages
Chinese (zh)
Other versions
CN113777496A (en)
Inventor
***
郭旭东
马波
宋浏阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Chemical Technology filed Critical Beijing University of Chemical Technology
Priority to CN202111039920.0A priority Critical patent/CN113777496B/en
Publication of CN113777496A publication Critical patent/CN113777496A/en
Application granted granted Critical
Publication of CN113777496B publication Critical patent/CN113777496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36 - Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/367 - Software therefor, e.g. for battery testing using modelling or look-up tables
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/36 - Arrangements for testing, measuring or monitoring the electrical condition of accumulators or electric batteries, e.g. capacity or state of charge [SoC]
    • G01R31/392 - Determining battery ageing or deterioration, e.g. state of health
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Secondary Cells (AREA)

Abstract

A lithium ion battery residual life prediction method based on a temporal convolutional neural network belongs to the field of lithium ion battery fault prediction and health management. The battery degradation process is a highly nonlinear, complex, multidimensional and strongly time-varying system; existing prediction algorithms require expert knowledge and prior knowledge, are time- and labor-consuming, and the prediction process is difficult and of low accuracy. The method relies on the strong sequence-modeling capability of the neural network to mine hidden patterns in the time series and automatically establishes a nonlinear mapping between the measured parameters and the service life. Because convolution computations are parallel, GPU computing can be used to accelerate training, making the computation faster. The invention also provides a calculation method for a parameter screening device that automatically screens out part of the parameters when there are too many invalid or redundant parameters, reducing the prediction workload and improving the training efficiency.

Description

Lithium ion battery residual life prediction method based on time convolution neural network
Technical Field
A lithium ion battery residual life prediction method based on a time convolution neural network belongs to the field of lithium ion battery fault prediction and health management.
Background
The lithium battery has the advantages of high output voltage, high energy density, low self-discharge rate, long cycle life and high reliability, and is widely applied to power and energy-storage systems such as hydropower, thermal power, wind power and solar power stations, as well as important fields such as transportation, military equipment and aerospace. If a battery failure occurs during use, the performance of the corresponding power equipment or system is likely to degrade or fail, increasing cost and even causing accidents such as fire and explosion. Therefore, accurately predicting the remaining life of the battery has great practical significance.
There are three typical lithium ion battery remaining useful life (RUL) prediction methods: experience-based methods, model-based methods and data-driven methods. Experience-based methods complete battery life prediction from rules in historical data; they are simple and fast but provide only a rough RUL estimate that does not meet current accuracy requirements. Model-based methods complete life prediction by analyzing the operating environment and internal material characteristics of the battery, but establishing an accurate model takes a great deal of time and is cumbersome. Data-driven methods are based on statistical analysis and achieve life prediction by mining the inherent correlation between inputs and response outputs, without requiring knowledge of specific material properties, structures or failure mechanisms. Although more convenient for modeling, they cannot clearly express the specific aging mechanism of the battery.
In recent years, deep learning, as one branch of machine learning, has been widely used for highly nonlinear, complex multidimensional systems because of its strong function-mapping capability. In time-series problems in particular, recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) networks and gated recurrent unit (GRU) networks, are widely used for their powerful feature-mining capabilities. However, recurrent networks and their variants cannot be computed in parallel, so the parallel computing capability of the GPU cannot be fully used to accelerate training, which limits computational efficiency. The invention uses a temporal convolutional network (TCN) for the sequence-modeling task; it integrates a one-dimensional fully convolutional network, causal convolution and dilated convolution, and has a flexible receptive field and stable gradients. Preliminary experimental evaluations of TCNs show that this simple convolutional structure performs better than standard recurrent networks (e.g., LSTMs) on different tasks and datasets while exhibiting longer effective memory.
Therefore, the temporal convolutional network has great promise when applied to remaining life prediction of lithium ion batteries.
Disclosure of Invention
In order to predict the remaining life of the lithium battery as accurately as possible, the invention provides a lithium battery remaining life prediction method based on a temporal convolutional network (TCN). By mining the hidden relationship between the monitored parameters and the remaining service life during battery use, a nonlinear time-series model is constructed to accurately predict the remaining service life of the battery. The method mainly comprises the following steps:
step 1: collecting historical data of the whole life cycle of the lithium ion battery;
step 2: and importing the acquired data into a parameter screening device for parameter screening. Preprocessing the screened data, and constructing a training data set sample and a test data set sample so as to facilitate inputting the TCN;
step 3: constructing a required TCN network model, randomly initializing model parameters, inputting a training data set, training the TCN model by using a gradient descent algorithm, and fixing optimal model parameters.
Step 4: and (3) importing the test data set into the optimal model obtained in the step (3), and calculating to obtain a predicted residual life value.
In step 1, the battery is subjected to repeated charge and discharge experiments at a room temperature of 24 °C, and the influence of battery aging on the internal parameters is collected. During charging, the battery is charged in constant-current mode at 1.5 A until the battery voltage reaches 4.2 V, and charging then continues in constant-voltage mode until the charging current drops to 20 mA. The battery is discharged at a constant current of 2 A until the voltage drops to 2.7 V, and the charge-discharge process is repeated until the rated capacity has faded from 2 Ah to 1.4 Ah, yielding the time series and remaining life index {(x_1, x_2, ..., x_n), y}, where x_1 to x_n are the life parameters and y is the life.
Step 2 comprises 4 steps:
The first step: the data are imported into a data screening device for screening, and invalid parameters are eliminated to reduce the amount of computation.
The second step: the selected parameters are normalized. The normalization method is max-min normalization, with mathematical expression x' = (x - x_min) / (x_max - x_min).
The third step: a training set for the TCN model is constructed from the training data. The data of the full data cycles are used as training data. One cycle is randomly selected from the last 50 cycles, and all data before that cycle are taken as the test data set. The battery capacity is used as the measure of battery life and is set as the label.
The fourth step: to facilitate training of the neural network, samples are produced by a sliding-window sampling method, yielding time-stamped training samples.
Step 3 comprises 3 steps.
The first step: a temporal convolutional network is constructed. The temporal convolutional network consists of several residual blocks. Each residual block contains two dilated causal convolution modules for the convolution operations, two ReLU layers that provide the nonlinearity, and two weight normalization and dropout layers for normalization and regularization. Finally, an additional 1×1 convolution is introduced to ensure that the input and output of the residual block have the same width.
The calculation formula of the convolution is as follows:
y_i^l = Σ_k w_k · x_{i+k}^l + b
where k indexes the trained parameters in the filter; w is the convolution kernel weight; x_i^l is the i-th input feature of the l-th layer; y_i^l is the i-th output feature of the l-th layer; and b is the bias.
In order to accelerate network training and keep the distribution of the hidden-layer activations consistent, batch normalization (BN) is used to standardize each batch of data. After the input data are standardized batch by batch, their distribution is normalized, so the network does not need further learning to adapt to different data distributions, which improves its generalization capability. The calculation formula of batch normalization is as follows:
y^(k) = γ^(k) · (x^(k) - E[x^(k)]) / sqrt(Var[x^(k)] + ε) + β^(k)
where y^(k) is the output response of the k-th neuron; γ^(k) is the trained scale reconstruction parameter; β^(k) is the trained shift reconstruction parameter; and ε is a very small constant that prevents the denominator from being zero. The initial values are random, and the optimal γ and β are obtained by repeating the iterative process.
The activation function is mainly used for the nonlinear transformation. The model adopts the ReLU as the activation function; during back propagation, when the input is less than 0 the weight is not updated. The ReLU is defined as follows:
f(x_i) = max{0, x_i}
where x_i is an input feature.
And a second step of: the mean square error is selected as the loss function, and the mathematical expression is
Wherein:outputting a predicted value; y is i Is an actual value; n is the data time series length.
The root mean square error (RMSE) is used for evaluation, as follows:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (ŷ_i - y_i)² )
where ŷ_i is the predicted output value, y_i is the actual value, and n is the length of the data time series.
And a third step of: training by adopting a random initialization model, inputting the training sample constructed in the step 2, training and iterating by using a back propagation algorithm, and retaining the model when the loss function value is not reduced within 10 periods.
Step 4: and (3) processing the test set data according to the step (2) to obtain a training set sample. And (3) inputting the training set sample into the model trained in the step (3), carrying out life pre-treatment, outputting the residual life value, and carrying out life assessment. If the RMSE value is lower than 10, the process is ended, and if the RMSE value is higher than 10, the training is conducted in the third step.
The invention mainly has the following advantages:
The battery degradation process is a highly nonlinear, complex, multidimensional and strongly time-varying system; existing prediction algorithms require expert knowledge and prior knowledge, are time- and labor-consuming, and the prediction process is difficult and of low accuracy. The method uses a neural network algorithm to predict the degradation process of the battery equipment, with no need to manually establish a complex mathematical physical model. Through the strong sequence-modeling capability of the neural network, hidden patterns in the time series are mined and a nonlinear mapping between the measured parameters and the service life is established automatically.
Existing neural-network methods for modeling time series mostly use recurrent neural networks and their variants. Here the recurrent structure is abandoned and a convolutional network is used for modeling; because convolution computations are parallel, GPU computing can be used to accelerate training, making the computation faster.
A calculation method for the parameter screening device is provided: when there are too many invalid and redundant parameters, part of the parameters can be screened out automatically, reducing the prediction workload and improving the training efficiency.
Drawings
FIG. 1 is a schematic diagram of a residual block of a time-series convolutional network
FIG. 2 is a schematic diagram of a dilated causal convolution
FIG. 3 is a process flow chart
Detailed Description
The specific embodiment of the TCN-based lithium ion battery residual life prediction method is as follows:
Step 1: the battery is subjected to repeated charge and discharge experiments at a room temperature of 24 °C, and the influence of battery aging on the internal parameters is collected. During charging, the battery is charged in constant-current mode at 1.5 A until the battery voltage reaches 4.2 V, and charging then continues in constant-voltage mode until the charging current drops to 20 mA. The battery is discharged at a constant current of 2 A until the voltage drops to 2.7 V, and the charge-discharge process is repeated until the rated capacity has faded from 2 Ah to 1.4 Ah. The operating-state parameter values of the lithium ion battery under different working conditions over the whole life cycle are collected, including battery terminal voltage, battery output current, battery temperature, load measurement current, load measurement voltage, battery capacity and ambient temperature (x_1, x_2, x_3, x_4, x_5, x_6, x_7) and the remaining life y; combining the parameters over time gives the time series {(x_1, x_2, x_3, x_4, x_5, x_6, x_7), y}. The invention uses the published NASA PCoE lithium battery dataset as experimental data.
Step 2: the acquired data are preprocessed to construct the training data set and the test data set for input to the TCN network.
The first step: data cleaning. Among multi-source sensor data acquired from the same system, some sensor parameters have little or no correlation with the equipment life and some data are duplicated; therefore suitable sensor parameters must be selected first, and then parameters that actually change must be selected, since parameters that never change are redundant. For battery life, parameters should be selected that are easy to measure and closely related to the state of health of the battery. On this principle, seven parameters in the dataset are selected as characteristic parameters for life prediction. The time series X = {(x_1, x_2, x_3, x_4, x_5, x_6, x_7), y} is imported into the parameter screening device for calculation. The calculation evaluates the partial derivative of the life y with respect to each parameter, ∂y/∂x_j (j = 1, 2, ..., 7).
In the NASA PCoE dataset, the partial derivative with respect to the ambient temperature is 0, so the ambient temperature parameter is removed, giving the time series {(x_1, x_2, x_3, x_4, x_5, x_6), y}. If the partial derivative with respect to the ambient temperature were not 0, the time series would be {(x_1, x_2, x_3, x_4, x_5, x_6, x_7), y}.
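As an illustration of this screening step, the sketch below removes any parameter channel whose numerically estimated partial derivative of the life with respect to it is zero over the whole record (for example, a channel that never changes, such as the ambient temperature above). The function name screen_parameters and the finite-difference estimate are assumptions made for illustration, not taken from the patent.

```python
import numpy as np

def screen_parameters(X, y, eps=1e-12):
    """Drop columns of X whose estimated partial derivative dy/dx_j is zero everywhere.

    X : (n_cycles, n_params) array of monitored parameters
    y : (n_cycles,) array of remaining-life (or capacity) values
    Returns the reduced array and the indices of the retained columns.
    """
    keep = []
    for j in range(X.shape[1]):
        dx = np.diff(X[:, j])
        dy = np.diff(y)
        valid = np.abs(dx) > eps              # steps where x_j actually changes
        deriv = np.zeros_like(dy, dtype=float)
        deriv[valid] = dy[valid] / dx[valid]  # finite-difference estimate of dy/dx_j
        if np.any(np.abs(deriv) > eps):       # keep the parameter if its derivative is ever non-zero
            keep.append(j)
    return X[:, keep], keep
```

A constant ambient-temperature column yields an all-zero derivative estimate and is therefore dropped, matching the behaviour described above.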
And a second step of: data normalization. In order to solve the problems of different sensor data sizes, characteristics, distribution differences and the like, so that the data are comparable, the method adopts a maximum and minimum normalization method to eliminate the differences. The mathematical expression is shown as:
where x is the sensor value, x min Is the minimum value, x max Is the maximum value and x' is the processed value.
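A minimal sketch of this max-min normalization, applied column-wise to each sensor channel; applying it per channel over the training data is an assumption.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each sensor channel (column) of X to [0, 1]: x' = (x - x_min) / (x_max - x_min)."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max - x_min == 0.0, 1.0, x_max - x_min)  # guard against constant channels
    return (X - x_min) / span
```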
The third step: the test data set and the training data set are divided. A training set for the TCN model is constructed from the training data. The data of the full data cycles are used as training data. One cycle is randomly selected from the last 50 cycles, and all data before that cycle are taken as the test data set. The battery capacity is used as the measure of battery life and is set as the label.
(1) The training set is denoted [x_train], [y_train], where V_m(i), A_m(i), T_m(i), A_l(i), V_l(i) and C(i) respectively denote the battery terminal voltage, battery output current, battery temperature, load measurement current, load measurement voltage and battery capacity of the i-th charge-discharge cycle of the lithium battery.
Fourth step: a remaining life label is obtained. After preprocessing the sensor data, a time series { (x) is obtained 1 ,x 2 ,x 3 ,...,x t ) In order to obtain training samples, a sliding window sampling method is adopted, the time window is the sensor data width b, and the length is the set length a, so that the sample matrix size is a×b. The sliding window slides from the beginning of the time sequence, is supplemented with 0 when no element is present, and is shifted one time step at a time to obtain a sample sequence { X ] 1 ,X 2 ,...,X L And L is the maximum life cycle of the battery.
Step 3: and constructing a TCN network prediction model, setting training parameters, and performing model training.
The first step: constructing a TCN model:
(1) The residual block of the TCN is shown in FIG. 1: it contains two dilated causal convolution modules for the convolution operations, two ReLU layers that provide the nonlinearity, and two weight normalization and dropout layers for normalization and regularization. Finally, an additional 1×1 convolution is introduced to ensure that the input and output of the residual block have the same width. The dilated convolution is used to obtain a larger receptive field. The causal convolution ensures that each time-step input has a corresponding output, while guaranteeing that no information from the future leaks into the past.
Wherein the calculation formula of the convolution is as follows:
wherein k is a parameter trained in the filter; w is convolution kernel weight; x is x l i An ith input feature for the first layer; y is l i The ith output feature of the first layer, b is biasDefault to 1.
In order to accelerate the network training process and keep the distribution of the hidden-layer activations consistent, batch normalization (BN) is used to standardize each batch of data. After the input data are standardized batch by batch, their distribution is normalized, so the network does not need further learning to adapt to different data distributions, which improves its generalization capability. The calculation formula of batch normalization is as follows:
y^(k) = γ^(k) · (x^(k) - E[x^(k)]) / sqrt(Var[x^(k)] + ε) + β^(k)
where y^(k) is the output response of the k-th neuron; γ^(k) is the trained scale reconstruction parameter; β^(k) is the trained shift reconstruction parameter; ε is a very small constant preventing the denominator from being zero; E[x] denotes the mean and sqrt(Var[x]) the standard deviation. The initial values are random, the iterative process is repeated, and the optimal γ and β are obtained when the loss value has not decreased for 5 epochs.
The activation function is mainly used for the nonlinear transformation. The model adopts the ReLU as the activation function; during back propagation, when the input is less than 0 the weight is not updated. The ReLU is defined as follows:
f(x_i) = max{0, x_i}
where x_i is an input feature.
(2) Causal dilated convolution: as shown in FIG. 2, the purpose of the dilated convolution is to enlarge the receptive field of the convolution kernel, whose effective size is:
k′=d(k-1)+1
where d is the dilation factor, k is the convolution kernel size, and k′ is the dilated kernel size. k′ should not exceed the sample size. The window length is 15, so the convolution kernel sizes are chosen as {5x5, 3x3}, the dilation factors as {3, 2, 1} and the numbers of convolution kernels as {64, 32}, so that multi-scale feature information can be obtained.
The causal convolution strictly fixes the convolution direction, as shown in FIG. 2, which ensures that information from the future does not leak into earlier time steps.
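As a worked example of the formula k' = d(k-1)+1, the short calculation below evaluates the dilated kernel size and the cumulative receptive field of a stack of causal dilated convolutions. Treating the listed kernel sizes as one-dimensional lengths and the particular pairing of kernel sizes with dilation factors are assumptions made only for this illustration.

```python
def dilated_kernel_size(k, d):
    """Effective size of a kernel of size k with dilation factor d: k' = d*(k - 1) + 1."""
    return d * (k - 1) + 1

# Assumed per-level (kernel size, dilation) pairs for stride-1 causal convolutions.
levels = [(5, 3), (3, 2), (3, 1)]
receptive_field = 1
for k, d in levels:
    receptive_field += dilated_kernel_size(k, d) - 1
    print(f"k={k}, d={d}: dilated kernel {dilated_kernel_size(k, d)}, receptive field {receptive_field}")
# k=5, d=3: dilated kernel 13, receptive field 13
# k=3, d=2: dilated kernel 5, receptive field 17
# k=3, d=1: dilated kernel 3, receptive field 19
```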
(3) The output O of each residual block is as follows, where x is the input and F(x) is the causal dilated convolution calculation:
O=Activation(x+F(x))
Activation is the activation function; the network uses the ReLU function, whose expression is f(x)=max{0,x}, where x is the input and f(x) is the output.
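To make the residual-block structure concrete, the following is a minimal PyTorch sketch of one block as described above and in FIG. 1: two weight-normalized dilated causal 1-D convolutions, each followed by ReLU and dropout, with an extra 1×1 convolution on the shortcut when the channel widths differ. The left-padding used to keep the convolution causal, the layer sizes and the module name are assumptions for illustration, not the patented implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class ResidualBlock(nn.Module):
    """One TCN residual block: O = ReLU(x + F(x))."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation, dropout=0.2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation           # left padding keeps the convolution causal
        self.conv1 = weight_norm(nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation))
        self.conv2 = weight_norm(nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation))
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        # extra 1x1 convolution so the shortcut has the same width as the block output
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                                  # x: (batch, channels, time)
        out = nn.functional.pad(x, (self.pad, 0))          # pad only on the left (causal)
        out = self.drop(self.relu(self.conv1(out)))
        out = nn.functional.pad(out, (self.pad, 0))
        out = self.drop(self.relu(self.conv2(out)))
        return self.relu(self.downsample(x) + out)         # residual connection

# Example: batch of 8 windows, 6 channels, 15 time steps
block = ResidualBlock(in_ch=6, out_ch=64, kernel_size=3, dilation=1)
y = block(torch.randn(8, 6, 15))
print(y.shape)   # torch.Size([8, 64, 15])
```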
And a second step of: the mean square error is selected as the loss function, and the mathematical expression is
Wherein:outputting a predicted value; y is i Is an actual value; n is the data time series length.
During training, a gradient descent algorithm is selected; the loss is back-propagated through gradient descent, and the trainable parameters of each layer of the convolutional neural network (the weights W and biases b) are updated layer by layer. The learning rate parameter η controls the strength of the back-propagated residual:
W_i ← W_i - η · ∂L/∂W_i,  b_i ← b_i - η · ∂L/∂b_i
where i denotes the i-th convolution layer and L is the loss. The value of η is set to 0.001.
The root mean square error (RMSE) is used for evaluation, as follows:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (ŷ_i - y_i)² )
where ŷ_i is the predicted output value, y_i is the actual value, and n is the number of samples.
The third step: network training.
After the preceding steps are completed, the network is trained. The training process is as follows: after the data are input into the model, a causal convolution calculation with weight normalization is performed first, nonlinear processing is then applied through the ReLU activation function, and at the same time a random dropout strategy (Dropout) is applied. Dropout refers to randomly freezing some neurons so that they play no role in the network; its value is generally between 0.05 and 0.5, and based on experiments it is taken as 0.2 here. After two such operations, the output of the first residual module is obtained; this output is used as the input of the next residual module, and after the last residual module is computed, the output value of the TCN model is obtained. The number of residual blocks is 3. An early-termination strategy is used during the iterative calculation: when the mean square error has not changed for 5 epochs, training stops and the trained model is saved.
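A minimal PyTorch training-loop sketch combining the pieces described above: gradient descent with learning rate 0.001, the MSE loss, and the early-termination rule that stops when the loss has not improved for 5 epochs. The plain SGD optimizer, the data loader and the model object (for example, three stacked residual blocks followed by a linear output layer) are assumptions for illustration.

```python
import copy
import torch
import torch.nn as nn

def train(model, loader, lr=0.001, max_epochs=200, patience=5):
    """Gradient-descent training with MSE loss and early stopping."""
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    best_loss, best_state, wait = float("inf"), None, 0
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for x, y in loader:                 # x: (batch, channels, time), y: (batch, 1)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()                 # back-propagate the loss
            optimizer.step()                # W <- W - lr * dL/dW, b <- b - lr * dL/db
            epoch_loss += loss.item()
        if epoch_loss < best_loss:          # keep the best model seen so far
            best_loss, best_state, wait = epoch_loss, copy.deepcopy(model.state_dict()), 0
        else:
            wait += 1
            if wait >= patience:            # stop when the loss has not decreased for 5 epochs
                break
    model.load_state_dict(best_state)
    return model
```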
Step 4: and (3) processing the test set data according to the step (2) to obtain a training set sample. And (3) inputting the training set sample into the model trained in the step (3), carrying out life prediction, outputting a residual life value, and carrying out life evaluation. If the RMSE value is lower than 10, the process is ended, and if the RMSE value is higher than 10, the training is conducted in the third step.
In conclusion, the method makes full use of the temporal convolutional network's ability to mine time-series information and extract high-dimensional features. It avoids the traditional battery life prediction approach of relying on prior knowledge of battery degradation and building a complex mathematical physical model. A nonlinear mapping between the characteristic parameters and the service life is established directly through the temporal convolutional network, and the latent feature information of the time series is captured, giving a more accurate prediction result. The method can accurately estimate the state of health of the battery, ensure the safe operation of the battery system and reduce the operating cost.

Claims (1)

1. The lithium ion battery residual life prediction method based on the time convolution neural network is characterized by comprising the following steps of:
step 1: the battery is subjected to repeated charge and discharge experiments at a room temperature of 24 °C, and the influence of battery aging on internal parameters is collected;
in the charging and discharging process, charging is carried out in a constant-current mode of 1.5 A until the battery voltage reaches 4.2 V, and charging then continues in a constant-voltage mode until the charging current drops to 20 mA; discharging is carried out with a constant current of 2 A until the voltage drops to 2.7 V, and the charge-discharge process is repeated until the rated capacity has faded from 2 Ah to 1.4 Ah; the operating-state parameter values of the lithium ion battery under different working conditions over the whole life cycle are collected, including battery terminal voltage, battery output current, battery temperature, load measurement current, load measurement voltage, battery capacity and ambient temperature (x_1, x_2, x_3, x_4, x_5, x_6, x_7) and the remaining life y, and the parameters are combined over time to obtain the time series {(x_1, x_2, x_3, x_4, x_5, x_6, x_7), y};
step 2: the acquired data are preprocessed to construct the training data set and the test data set for input to the TCN network;
the first step: data cleaning; the time series X = {(x_1, x_2, x_3, x_4, x_5, x_6, x_7), y} is imported into the parameter screening device for calculation; the calculation evaluates the partial derivative of the life y with respect to each parameter, ∂y/∂x_j (j = 1, 2, ..., 7);
when the partial derivative with respect to the ambient temperature is 0, the ambient temperature parameter is removed to obtain the time series {(x_1, x_2, x_3, x_4, x_5, x_6), y}; if the partial derivative with respect to the ambient temperature is not 0, the time series is {(x_1, x_2, x_3, x_4, x_5, x_6, x_7), y};
And a second step of: data standardization;
a maximum and minimum normalization method is adopted to eliminate the difference; the mathematical expression is shown as:
where x is the sensor value, x min Is the minimum value, x max Is the maximum value, x' is the processed value;
the third step is to divide the test data set and the training data set:
constructing a training set of the TCN model by utilizing training data; taking the data of the full data period as training data; randomly selecting one period from the last 50 periods, and taking all data before the period as a test data set; taking the battery capacity as a standard for measuring the service life of the battery, and setting the battery capacity as a label;
(1) The training set is denoted [x_train], [y_train], where V_m(i), A_m(i), T_m(i), A_l(i), V_l(i) and C(i) respectively denote the battery terminal voltage, battery output current, battery temperature, load measurement current, load measurement voltage and battery capacity of the i-th charge-discharge cycle of the lithium battery;
fourth step: obtaining a residual life label;
sampling method by sliding window, wherein the time window height isThe sensor data width b, length is set length a, so the sample matrix size is a×b; the sliding window slides from the beginning of the time sequence, is supplemented with 0 when no element is present, and is shifted one time step at a time to obtain a sample sequence { X ] 1 ,X 2 ,...,X L -L is the maximum life cycle of the battery;
step 3: constructing a TCN network prediction model, setting training parameters, and carrying out model training;
the first step: constructing a TCN model:
(1) The TCN residual block comprises two dilated causal convolution modules for the convolution operations, two ReLU layers providing the nonlinearity, and two weight normalization and dropout layers for normalization and regularization; finally, an additional 1×1 convolution is introduced to ensure that the input and output of the residual block have the same width;
wherein the calculation formula of the convolution is as follows:
y_i^l = Σ_k w_k · x_{i+k}^l + b
where k indexes the trained parameters in the filter; w is the convolution kernel weight; x_i^l is the i-th input feature of the l-th layer; y_i^l is the i-th output feature of the l-th layer; and b is the bias, with default value 1;
in order to accelerate the network training process and ensure the consistency of the hidden-layer activation data distribution, batch normalization (BN) is adopted to standardize each batch of data;
the calculation formula of batch normalization is as follows:
y^(k) = γ^(k) · (x^(k) - E[x^(k)]) / sqrt(Var[x^(k)] + ε) + β^(k)
where y^(k) is the output response of the k-th neuron; γ^(k) is the trained scale reconstruction parameter; β^(k) is the trained shift reconstruction parameter; ε is a very small constant preventing the denominator from being zero; E[x] denotes the mean and sqrt(Var[x]) the standard deviation; the initial values are random, the iterative process is repeated, and the optimal γ and β are obtained when the loss value has not decreased for 5 epochs;
the ReLU is adopted as the activation function; during back propagation in network training, when the input is less than 0 the weight is not updated; the ReLU is defined as follows:
f(x_i) = max{0, x_i}
where x_i is an input feature;
(2) Causal dilated convolution:
the convolution kernel size is:
k′=d(k-1)+1
where d is the expansion coefficient, k is the convolution kernel size, and k' is the expanded convolution kernel size; k' should not exceed the sample size; window length is 15, so the convolution kernel size is chosen to be: {5x5,3x3}; the expansion coefficient is selected as {3,2,1}, and the number of convolution kernels is {64, 32}, so that multi-scale characteristic information can be obtained;
(3) The output O of each residual block is as follows, where x is the input and F(x) is the causal dilated convolution calculation:
O=Activation(x+F(x))
the Activation is an Activation function, the selected Activation function is a ReLU function, and the expression is
f(x)=max{0,x}
where x is the input and f(x) is the output;
and a second step of: the mean square error is selected as the loss function, and the mathematical expression is
Wherein:outputting a predicted value; y is i Is an actual value; n is the length of the data time sequence;
in the training process, a gradient descent algorithm is selected, loss is counter-propagated through gradient descent, and trainable parameters of each layer of the convolutional neural network, namely weight W and bias b, are updated layer by layer; the learning rate parameter η is used to control the strength of the residual back propagation:
i is the ith convolution layer; the eta set value is 0.001;
evaluation was performed using Root Mean Square Error (RMSE);
and a third step of:
after the steps are completed, the network training is carried out, and the training process is as follows: after the data is input into the model, firstly, carrying out causal convolution calculation, carrying out weight normalization, then carrying out nonlinear processing through an activation function ReLU, and simultaneously carrying out a random loss strategy Dropout, wherein the Dropout is taken as 0.2; after two times of such operations, the output of the first residual error module is obtained, the output of the first residual error module is used as the input of the next residual error module, and the last residual error module is calculated to obtain the output value of the TCN model; the number of residual blocks is 3; an early termination strategy is used in the iterative calculation process, and training is stopped and a training model is stored when the mean square error is unchanged for 5 periods;
step 4: processing the test set data according to the step 2 to obtain a training set sample; inputting the training set sample into the model trained in the step 3, carrying out life prediction, outputting a residual life value, and carrying out life assessment; if the RMSE value is lower than 10, the process is ended, and if the RMSE value is higher than 10, the training is conducted in the third step.
CN202111039920.0A 2021-09-06 2021-09-06 Lithium ion battery residual life prediction method based on time convolution neural network Active CN113777496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111039920.0A CN113777496B (en) 2021-09-06 2021-09-06 Lithium ion battery residual life prediction method based on time convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111039920.0A CN113777496B (en) 2021-09-06 2021-09-06 Lithium ion battery residual life prediction method based on time convolution neural network

Publications (2)

Publication Number Publication Date
CN113777496A CN113777496A (en) 2021-12-10
CN113777496B true CN113777496B (en) 2023-10-24

Family

ID=78841186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111039920.0A Active CN113777496B (en) 2021-09-06 2021-09-06 Lithium ion battery residual life prediction method based on time convolution neural network

Country Status (1)

Country Link
CN (1) CN113777496B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114062948B (en) * 2022-01-11 2022-05-20 南通大学 Lithium ion battery SOC estimation method based on 3DCNN
CN114216558B (en) * 2022-02-24 2022-06-14 西安因联信息科技有限公司 Method and system for predicting remaining life of battery of wireless vibration sensor
CN114372538B (en) * 2022-03-22 2023-04-18 中国海洋大学 Method for convolution classification of scale vortex time series in towed sensor array
CN114861527A (en) * 2022-04-15 2022-08-05 南京工业大学 Lithium battery life prediction method based on time series characteristics
CN114966451B (en) * 2022-06-06 2024-04-26 北京航空航天大学 Power supply product residual service life online prediction method based on physical information neural network
CN115291108A (en) * 2022-06-27 2022-11-04 东莞新能安科技有限公司 Data generation method, device, equipment and computer program product
CN115470742B (en) * 2022-10-31 2023-03-14 中南大学 Lithium ion battery modeling method, system, equipment and storage medium
CN115616415B (en) * 2022-12-06 2023-04-07 北京志翔科技股份有限公司 Method, device and equipment for evaluating state of battery pack and storage medium
CN116047314B (en) * 2023-03-31 2023-08-18 泉州装备制造研究所 Rechargeable battery health state prediction method
CN116609668B (en) * 2023-04-26 2024-03-26 淮阴工学院 Lithium ion battery health state and residual life prediction method
CN116298936A (en) * 2023-05-19 2023-06-23 河南科技学院 Intelligent lithium ion battery health state prediction method in incomplete voltage range
CN116502141A (en) * 2023-06-26 2023-07-28 武汉新威奇科技有限公司 Data-driven-based electric screw press fault prediction method and system
CN116738859B (en) * 2023-06-30 2024-02-02 常州润来科技有限公司 Online nondestructive life assessment method and system for copper pipe
CN117111540B (en) * 2023-10-25 2023-12-29 南京德克威尔自动化有限公司 Environment monitoring and early warning method and system for IO remote control bus module
CN117237349B (en) * 2023-11-14 2024-03-05 珠海市嘉德电能科技有限公司 Thermal runaway protection method, device, equipment and storage medium of battery management system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059377A (en) * 2019-04-02 2019-07-26 西南交通大学 A kind of fuel battery service life prediction technique based on depth convolutional neural networks
CN111443294A (en) * 2020-04-10 2020-07-24 华东理工大学 Method and device for indirectly predicting remaining life of lithium ion battery
WO2020191800A1 (en) * 2019-03-27 2020-10-01 东北大学 Method for predicting remaining service life of lithium-ion battery employing wde-optimized lstm network
CN111948563A (en) * 2020-06-19 2020-11-17 浙江大学 Electric forklift lithium battery residual life prediction method based on multi-neural network coupling
CN112241608A (en) * 2020-10-13 2021-01-19 国网湖北省电力有限公司电力科学研究院 Lithium battery life prediction method based on LSTM network and transfer learning
CN113094989A (en) * 2021-04-07 2021-07-09 贵州大学 Unmanned aerial vehicle battery life prediction method based on random configuration network
WO2021138925A1 (en) * 2020-01-08 2021-07-15 重庆邮电大学 Lithium battery capacity estimation method based on improved convolution-long short term memory neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020191800A1 (en) * 2019-03-27 2020-10-01 东北大学 Method for predicting remaining service life of lithium-ion battery employing wde-optimized lstm network
CN110059377A (en) * 2019-04-02 2019-07-26 西南交通大学 A kind of fuel battery service life prediction technique based on depth convolutional neural networks
WO2021138925A1 (en) * 2020-01-08 2021-07-15 重庆邮电大学 Lithium battery capacity estimation method based on improved convolution-long short term memory neural network
CN111443294A (en) * 2020-04-10 2020-07-24 华东理工大学 Method and device for indirectly predicting remaining life of lithium ion battery
CN111948563A (en) * 2020-06-19 2020-11-17 浙江大学 Electric forklift lithium battery residual life prediction method based on multi-neural network coupling
CN112241608A (en) * 2020-10-13 2021-01-19 国网湖北省电力有限公司电力科学研究院 Lithium battery life prediction method based on LSTM network and transfer learning
CN113094989A (en) * 2021-04-07 2021-07-09 贵州大学 Unmanned aerial vehicle battery life prediction method based on random configuration network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Remaining Useful Life Prediction of Lithium-Ion Batteries Using Neural Network and Bat-Based Particle Filter. IEEE Access, 2019, full text. *
Review of machine-learning-based methods for predicting the remaining useful life of equipment; Pei Hong; Hu Changhua; Si Xiaosheng; Zhang Jianxun; Pang Zhenan; Zhang Peng; Journal of Mechanical Engineering (08); full text *

Also Published As

Publication number Publication date
CN113777496A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN113777496B (en) Lithium ion battery residual life prediction method based on time convolution neural network
CN111443294B (en) Method and device for indirectly predicting remaining life of lithium ion battery
CN110824364B (en) Lithium battery SOH estimation and RUL prediction method based on AST-LSTM neural network
CN112763929A (en) Method and device for predicting health of battery monomer of energy storage power station system
CN112241608A (en) Lithium battery life prediction method based on LSTM network and transfer learning
CN113805064B (en) Lithium ion battery pack health state prediction method based on deep learning
Zhou et al. Battery health prognosis using improved temporal convolutional network modeling
Zheng et al. State of health estimation for lithium battery random charging process based on CNN-GRU method
CN112763967B (en) BiGRU-based intelligent electric meter metering module fault prediction and diagnosis method
CN112734002B (en) Service life prediction method based on data layer and model layer joint transfer learning
CN115409263A (en) Method for predicting remaining life of lithium battery based on gating and attention mechanism
CN112611976A (en) Power battery state of health estimation method based on double differential curves
CN113536676A (en) Lithium battery health condition monitoring method based on feature transfer learning
CN113093014B (en) Online collaborative estimation method and system for SOH and SOC based on impedance parameters
CN113988210A (en) Method and device for restoring distorted data of structure monitoring sensor network and storage medium
CN116047314B (en) Rechargeable battery health state prediction method
CN116774086B (en) Lithium battery health state estimation method based on multi-sensor data fusion
CN113868957B (en) Residual life prediction and uncertainty quantitative calibration method under Bayes deep learning
CN114580705B (en) Method for predicting residual life of avionics product
CN114219118A (en) Method and system for predicting service life of intelligent electric meter based on D-S evidence theory
Guan et al. Research on the method of remaining useful life prediction of lithium-ion battery based on LSTM
Liu et al. A prognostics approach based on feature fusion and deep BiLSTM neural network for aero-engine
CN115389947B (en) Lithium battery health state prediction method and device, electronic equipment and storage medium
CN112485676B (en) Battery energy storage system state estimation early warning method under digital mirror image
CN114580620A (en) Lithium battery health state estimation method based on end-to-end deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant