CN112133941A - Rapid fault diagnosis method for locomotive proton exchange membrane fuel cell system - Google Patents


Info

Publication number
CN112133941A
CN112133941A (application CN202011076131.XA; granted as CN112133941B)
Authority
CN
China
Prior art keywords: layer, output, neuron, locomotive, fuel cell
Prior art date
Legal status
Granted
Application number
CN202011076131.XA
Other languages
Chinese (zh)
Other versions
CN112133941B
Inventor
Zhang Xuexia (张雪霞)
Guo Xueqing (郭雪庆)
Chen Weirong (陈维荣)
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202011076131.XA
Publication of CN112133941A
Application granted
Publication of CN112133941B
Legal status: Active
Anticipated expiration


Classifications

    • H01M 8/04664: Fuel cells; processes for controlling fuel cells or fuel cell systems; detection or assessment of failure or abnormal function
    • H01M 8/04992: Processes for controlling fuel cells or fuel cell systems characterised by the implementation of mathematical or computational algorithms, e.g. feedback control loops, fuzzy logic, neural networks or artificial intelligence
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Pattern recognition; classification techniques
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/048: Neural networks; activation functions
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G07C 5/0808: Registering or indicating the working of vehicles; diagnosing performance data
    • Y02E 60/50: Enabling technologies for GHG emissions mitigation; hydrogen technology; fuel cells


Abstract

The invention discloses a rapid fault diagnosis method for a locomotive proton exchange membrane fuel cell system, comprising the following steps: inputting multi-sensor data acquired during locomotive operation into a fully connected back propagation neural network to extract high-dimensional abstract features expressed as vectors; reconstructing the vector-expressed abstract features into a tensor feature map to obtain an integrated representation of the abstract features; and inputting the feature map into an InceptionNet-based convolutional network for classification, obtaining the fault category and thereby realizing fault diagnosis of the locomotive proton exchange membrane fuel cell system. The invention can accurately classify the states of the locomotive proton exchange membrane fuel cell system, ensure its stable operation, reduce the power loss caused by faults, avoid irreversible fault-induced damage to the fuel cell system, and prolong its healthy operating time.

Description

Rapid fault diagnosis method for locomotive proton exchange membrane fuel cell system
Technical Field
The invention belongs to the technical field of fuel cells, and particularly relates to a rapid fault diagnosis method for a locomotive proton exchange membrane fuel cell system.
Background
The proton exchange membrane fuel cell is a novel energy conversion device that converts the chemical energy of hydrogen and oxygen into electrical energy, powering portable equipment or large vehicles such as hybrid locomotives, with water as the only theoretical by-product. Owing to advantages such as high power density and fast start-up, it has become one of the most closely watched pollution-free, high-efficiency new energy sources in recent years.
The proton exchange membrane fuel cell system is a complex environment of coupled multi-physical fields, and its stability has become one of the key factors restricting its wider commercial application. Accurate fault detection greatly improves the system's ability to operate stably, and a locomotive proton exchange membrane fuel cell system is even more complex and more prone to faults.
Fault diagnosis methods for proton exchange membrane fuel cells can generally be divided into experiment-based, model-based and data-based methods. In the complex system environment of a locomotive proton exchange membrane fuel cell, data-based methods have significant advantages over the others.
Data-based fault diagnosis methods are commonly classified into statistics-based, signal-processing-based and artificial-intelligence-based methods. With the remarkable progress of artificial intelligence in recent years, it has become a popular tool in the field of fault diagnosis, and deep learning, with its excellent model expressiveness, is one of the research hotspots of artificial intelligence. However, existing fault diagnosis methods cannot accurately classify the states of a locomotive proton exchange membrane fuel cell system, and therefore cannot ensure its stable operation, reduce fault-induced power loss, avoid irreversible damage to the fuel cell system, or prolong its healthy operating time.
Disclosure of Invention
In order to solve these problems, the invention provides a rapid fault diagnosis method for a locomotive proton exchange membrane fuel cell system, which accurately classifies the states of the system, ensures its stable operation, reduces the power loss caused by faults, avoids irreversible fault-induced damage to the fuel cell system, and prolongs its healthy operating time.
To achieve this purpose, the invention adopts the following technical scheme: a rapid fault diagnosis method for a locomotive proton exchange membrane fuel cell system, comprising the following steps:
s100, inputting multi-sensor data acquired in the running process of the locomotive into a fully-connected back propagation neural network to extract high-dimensional abstract features and express the high-dimensional abstract features by vectors;
s200, reconstructing the abstract features expressed by the vectors into a tensor feature diagram to obtain integrated expression of the abstract features;
S300, inputting the feature map into an InceptionNet-based convolutional network for classification, obtaining the fault category and thereby realizing fault diagnosis of the locomotive proton exchange membrane fuel cell system.
Further, in step S100, inputting multi-sensor data collected during the operation of the locomotive into a fully-connected back-propagation neural network to extract high-dimensional abstract features and express the extracted features with vectors, including the steps of:
s101, collecting multi-sensor data of a proton exchange membrane fuel cell system and a current system state class in the running process of a locomotive;
S102, inputting the obtained data into a fully connected back propagation neural network to extract high-dimensional abstract features; the network comprises three layers of neurons (an input layer, a hidden layer and an output layer); neurons within a layer are not connected to one another, while neurons in adjacent layers are fully connected;
S103, setting the parameters of the fully connected back propagation neural network: the number of input-layer nodes is set to match the dimensionality of the collected multi-sensor data; the number of hidden-layer nodes is set; and the number of output-layer nodes is set to obtain a vector representation of the high-dimensional abstract features.
Further, the fully connected back propagation neural network comprises three layers of neurons (an input layer, a hidden layer and an output layer); neurons within a layer are not connected to one another, while neurons in adjacent layers are fully connected;
the output values of the hidden-layer neurons and output-layer neurons are respectively calculated as:

h^(j) = Relu( Σ_{i=1}^{m} w_ij·x_i + b_j )

y^(k) = Relu( Σ_{j=1}^{n} w_jk·h^(j) + b_k )

in the formulas: h^(j) is the output of the j-th hidden-layer neuron; Relu is the rectified linear unit, which sets negative values to 0 and lets positive values pass through, and this nonlinear function increases the representational capacity of the network; x_i is the i-th input; w_ij is the weight between the i-th input-layer neuron and the j-th hidden-layer neuron; b_j is the bias term of the j-th hidden-layer neuron; m is the number of input-layer nodes; y^(k) is the output value of the k-th output-layer neuron; w_jk is the weight between the j-th hidden-layer neuron and the k-th output-layer neuron; b_k is the bias term of the k-th output-layer neuron; n is the number of hidden-layer nodes.
Further, the parameters of the fully connected back propagation neural network are set as follows: the number of input-layer nodes is set to 12 to match the dimensionality of the collected multi-sensor data; the number of hidden-layer nodes is set to 512; and the number of output-layer nodes is set to 1024 to obtain a vector representation of the high-dimensional abstract features.
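The 12 → 512 → 1024 forward pass described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's trained model: the weights are random stand-ins, and applying Relu at the output layer is an assumption consistent with the formulas given earlier.

```python
import numpy as np

def relu(z):
    # Rectified linear unit: negative values -> 0, positive values pass through
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
m, n, p = 12, 512, 1024                  # input, hidden, output node counts from the patent

W1 = rng.standard_normal((m, n)) * 0.05  # w_ij: input -> hidden weights (random stand-ins)
b1 = np.zeros(n)                         # b_j: hidden-layer biases
W2 = rng.standard_normal((n, p)) * 0.05  # w_jk: hidden -> output weights
b2 = np.zeros(p)                         # b_k: output-layer biases

x = rng.standard_normal(m)               # one 12-dimensional multi-sensor sample
h = relu(x @ W1 + b1)                    # hidden-layer outputs h^(j)
y = relu(h @ W2 + b2)                    # 1024-dimensional abstract feature vector y^(k)
print(h.shape, y.shape)                  # (512,) (1024,)
```

In the actual method the weights would be learned by backpropagation on labeled sensor data; only the layer sizes are fixed by the patent.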
Further, in the step S200, reconstructing the abstract features represented by the vector into a tensor feature map to obtain an integrated representation of the abstract features, includes the steps of:
S201, arranging the abstract features represented by the obtained vectors in sequential order;
S202, according to this arrangement, reconstructing the vector-represented abstract features into an integrated tensor-represented feature map, thereby realizing the feature-level information fusion process.
Further, in step S202, the abstract features represented by the vectors are reconstructed into an integrated tensor-represented feature map with a size of 32 × 32 × 1, where the two 32s are the length and width of the feature map in pixels and 1 is the number of channels.
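Concretely, the reconstruction is just a reshape of the 1024-dimensional feature vector into a 32 × 32 × 1 tensor; the row-major ordering below is an assumption consistent with the sequential arrangement of step S201.

```python
import numpy as np

v = np.arange(1024, dtype=float)   # stand-in for the 1024-dim abstract feature vector
fmap = v.reshape(32, 32, 1)        # tensor feature map: 32 x 32 pixels, 1 channel
print(fmap.shape)                  # (32, 32, 1)
print(fmap[0, :3, 0])              # [0. 1. 2.] -- sequential features fill the first row
```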
Further, in step S300, inputting the feature map into an InceptionNet-based convolutional network for classification, obtaining the fault category and thereby realizing fault diagnosis of the locomotive proton exchange membrane fuel cell system, comprises the steps of:
S3011, processing the input feature map sequentially through a 2D convolutional layer, a BN layer and a Relu layer; the 2D convolutional layer exploits the local connectivity of the convolution kernel over the input feature map to extract features, and as the kernel slides, all of the input feature map information is extracted to produce an output feature map, with the kernel size set to 3 × 3; the BN layer mitigates internal covariate shift; the Relu layer enhances the nonlinear expressive capability of the convolutional network;
S3012, passing the feature map output from step S3011 through a parallel processing structure block; structure block one comprises four parallel branches that simultaneously extract features from the input of step S3011: two branches each use the three-layer unit of step S3011, one branch cascades two of the three-layer units of step S3011 in series, and one branch consists of a max-pooling layer followed in series by the three-layer unit of step S3011; in all 2D convolutional layers within structure block one, the number of convolution kernels is set to 16, so each branch finally outputs a feature map with 16 channels;
S3013, using a Concat layer to splice, along the channel dimension, the 16-channel feature maps output by the four parallel branches of step S3012 into a feature map with 64 (16 × 4) channels, realizing further deep extraction of information;
S3014, inputting the feature map output after processing in step S3013 into structure block two, which has the same parallel structure and convolution-kernel settings as step S3012;
S3015, using a Concat layer to splice the outputs of structure block two of step S3014 into a feature map with 64 channels, realizing further deep extraction of information;
S3016, inputting the feature map output after processing in step S3015 into structure block three, which has the same parallel structure as step S3012 with the number of convolution kernels set to 32;
S3017, using a Concat layer to splice the outputs of structure block three of step S3016 into a feature map with 128 (32 × 4) channels, realizing further deep extraction of information;
S3018, inputting the feature map output after processing in the step S3017 into a structure block four, which has the same parallel structure as the one in the step S3012, and setting the number of convolution kernels to be 32;
s3019, performing scale splicing on the output of the structural block four in the step S3018 by using a Concat layer to form a feature map with 128 channels to realize further depth extraction of information;
s3020, extracting features from the feature map of the 128 channel obtained in step S3019 by using a global maximum pooling layer, to obtain a 128-dimensional vector;
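The channel bookkeeping of steps S3012 through S3020 can be checked with random stand-in tensors; the values are synthetic, and only the shapes mirror the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 32                                       # spatial size of the feature maps

# Structure blocks one/two: four parallel branches, 16 channels each
branches = [rng.standard_normal((H, W, 16)) for _ in range(4)]
block_out = np.concatenate(branches, axis=-1)    # Concat layer: 16 x 4 = 64 channels
print(block_out.shape)                           # (32, 32, 64)

# Structure blocks three/four: 32 channels per branch -> 32 x 4 = 128 after Concat
deep = np.concatenate([rng.standard_normal((H, W, 32)) for _ in range(4)], axis=-1)

vec = deep.max(axis=(0, 1))                      # global max pooling over the spatial dims
print(vec.shape)                                 # (128,) -- the vector of step S3020
```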
S3021, using a fully connected layer so that the final output of the network has 6 neurons, one per state class to be identified; each neuron corresponds to a number from 1 to 6, and the number of the neuron with the largest output value indicates the network's diagnosis for the multi-sensor data input in step S301, i.e., the state class of the locomotive proton exchange membrane fuel cell system corresponding to that group of data.
Further, the computation of the BN layer is expressed as:

x̂_i = (x_i − μ_B) / √(σ_B² + ε)

y_i = γ·x̂_i + β

where B = {x_1, …, x_m} is a mini-batch of values of x; μ_B = (1/m)·Σ_{i=1}^{m} x_i is the mini-batch mean; σ_B² = (1/m)·Σ_{i=1}^{m} (x_i − μ_B)² is the mini-batch variance; x̂_i is x_i normalized to an expectation of 0 and a variance of 1; ε is a constant that ensures numerical stability; and γ and β are learnable parameters that adjust x̂_i to obtain the output y_i.
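A minimal NumPy sketch of this computation on a single mini-batch (the per-channel handling and the running statistics that a full BN layer keeps for inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize the mini-batch to zero mean / unit variance, then scale and shift
    mu = x.mean()                      # mini-batch mean mu_B
    var = x.var()                      # mini-batch variance sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta        # learnable gamma, beta restore expressiveness

batch = np.array([1.0, 2.0, 3.0, 4.0])
y = batch_norm(batch)
print(y.mean(), y.std())               # mean ~ 0, standard deviation ~ 1
```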
Further, in step S3021, a fully connected layer is used so that the final output of the network has 6 neurons, one per state class to be identified; each neuron corresponds to a number from 1 to 6, and the number of the neuron with the largest output value indicates the network's diagnosis for the multi-sensor data input in step S301, i.e., the state class of the locomotive proton exchange membrane fuel cell system corresponding to that group of data; this is realized by an argmax function, calculated as:

argmax_x f(x) = { x | ∀ y : f(y) ≤ f(x) }

where x and y denote neuron numbers, f(x) is the output value of the x-th neuron, and f(y) is the output value of the y-th neuron.
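The decision rule is a plain argmax over the six output activations; a tiny sketch (the activation values below are hypothetical):

```python
import numpy as np

# Hypothetical activations of the six state-class neurons (numbered 1..6)
outputs = np.array([0.03, 0.05, 0.81, 0.04, 0.02, 0.05])
diagnosis = int(np.argmax(outputs)) + 1   # +1 because neuron numbering starts at 1
print(diagnosis)                          # 3 -> the system is diagnosed as state class 3
```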
The beneficial effects of the technical scheme are as follows:
Firstly, high-dimensional abstract features are extracted from the multi-dimensional multi-sensor data collected from the locomotive by a fully connected back propagation neural network and expressed as vectors; these abstract features contain all of the input information. Secondly, the vector-expressed abstract features are reconstructed into a tensor-expressed feature map of size (32, 32, 1); the tensor-expressed abstract features are more integrated and spatially compact, and can better express the original data information. The feature map is then input into an InceptionNet-based convolutional network for classification. Through this improved design, the invention achieves accurate and rapid fault diagnosis for the locomotive proton exchange membrane fuel cell system, helping to ensure the stable operation of this complex system, reduce additional power loss, and prolong its healthy operating time.
Drawings
FIG. 1 is a schematic flow chart of a method for rapid fault diagnosis of a proton exchange membrane fuel cell system of a locomotive of the present invention;
FIG. 2 is a schematic diagram of the processing framework for fault diagnosis based on the InceptionNet convolutional network in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1, the present invention provides a method for rapidly diagnosing a fault of a proton exchange membrane fuel cell system of a locomotive, including the steps of:
s100, inputting multi-sensor data acquired in the running process of the locomotive into a fully-connected back propagation neural network to extract high-dimensional abstract features and express the high-dimensional abstract features by vectors;
s200, reconstructing the abstract features expressed by the vectors into a tensor feature diagram to obtain integrated expression of the abstract features;
S300, inputting the feature map into an InceptionNet-based convolutional network for classification, obtaining the fault category and thereby realizing fault diagnosis of the locomotive proton exchange membrane fuel cell system.
As an optimization scheme of the above embodiment, as shown in fig. 2, in the step S100, inputting multi-sensor data collected during the operation of the locomotive into a fully connected Back Propagation Neural Network (BPNN) for extracting high-dimensional abstract features and representing the extracted features by vectors, the method includes the steps of:
s101, collecting multi-sensor data of a proton exchange membrane fuel cell system and a current system state class in the running process of a locomotive;
S102, inputting the multi-sensor data into a fully connected three-layer back propagation neural network (BPNN) to extract high-dimensional abstract features; the network consists of an input layer, a hidden layer and an output layer, with neurons within a layer unconnected and neurons in adjacent layers fully connected. The output values of the hidden-layer neurons and output-layer neurons can be calculated as:

h^(j) = Relu( Σ_{i=1}^{m} w_ij·x_i + b_j )

y^(k) = Relu( Σ_{j=1}^{n} w_jk·h^(j) + b_k )

in the formulas: h^(j) is the output of the j-th hidden-layer neuron; Relu is the rectified linear unit, which sets negative values to 0 and lets positive values pass through, and this nonlinear function increases the representational capacity of the network; x_i is the i-th input; w_ij is the weight between the i-th input-layer neuron and the j-th hidden-layer neuron; b_j is the bias term of the j-th hidden-layer neuron; m is the number of input-layer nodes; y^(k) is the output value of the k-th output-layer neuron; w_jk is the weight between the j-th hidden-layer neuron and the k-th output-layer neuron; b_k is the bias term of the k-th output-layer neuron; n is the number of hidden-layer nodes.
S103, setting fully-connected BPNN related parameters, wherein in the invention, the number of nodes of an input layer is set to be 12 for matching the collected data dimension number of the multiple sensors; the number of hidden layer nodes is set to 512; the number of output layer nodes is set to 1024 to get a vector representation of the high-dimensional abstract features.
As an optimization scheme of the above embodiment, as shown in fig. 2, in the step S200, reconstructing the abstract features represented by the vectors into a tensor feature map to obtain an integrated representation of the abstract features, the method includes the steps of:
S201, arranging the abstract features represented by the vectors obtained in step S103 in sequential order, and reconstructing them into an integrated tensor-represented feature map of size (32, 32, 1), where the two 32s are the length and width of the feature map in pixels and 1 is the number of channels; this realizes the feature-level information fusion process.
As an optimization of the above embodiment, as shown in fig. 2, in step S300, the feature map is input into the newly proposed InceptionNet-based convolutional network (CNN) for classification, including the steps of:
S301, inputting the tensor-represented feature map of size (32, 32, 1) obtained in step S201 into the newly proposed InceptionNet-based convolutional network (CNN) for classification, thereby realizing fault diagnosis of the locomotive proton exchange membrane fuel cell system.
As an optimization of the above embodiment, as shown in fig. 2, in step S301, classifying the tensor-represented feature map of size (32, 32, 1) with the newly proposed InceptionNet-based CNN to realize fault diagnosis of the locomotive proton exchange membrane fuel cell system includes the steps of:
S3011, the feature map input from the front end is processed sequentially through a 2D convolutional layer, a batch normalization (BN) layer and a Relu layer. The 2D convolutional layer exploits the local connectivity of the convolution kernel over the input feature map to extract features; as the kernel slides, all of the input feature map information is extracted to produce an output feature map, with the kernel size set to 3 × 3. The BN layer, to some extent, addresses the problem of internal covariate shift within the network; its computation can be expressed as:

x̂_i = (x_i − μ_B) / √(σ_B² + ε)

y_i = γ·x̂_i + β

where B = {x_1, …, x_m} is a mini-batch of values of x; μ_B = (1/m)·Σ_{i=1}^{m} x_i is the mini-batch mean; σ_B² = (1/m)·Σ_{i=1}^{m} (x_i − μ_B)² is the mini-batch variance; x̂_i is x_i normalized to an expectation of 0 and a variance of 1; ε is a constant that ensures numerical stability; and γ and β are learnable parameters that adjust x̂_i to obtain the output y_i. The computation rule of the Relu layer is the same as the nonlinear Relu method in step S102, and it enhances the nonlinear expressive capability of the convolutional network.
S3012, the feature map output after the processing in step S3011 is to be subjected to feature extraction on the feature map input in step S3011 at the same time by using a parallel processing structure Block1, Block1 consisting of four parallel branches. Two branches use the three-layer structure described in step S3011, one branch is connected in series with the three-layer structure described in step S3011, and one branch is composed of one maximum pooling layer connected in series with the three-layer structure described in step S3011. In all 2D convolution layers in Block1, the number of convolution kernels is set to 16, and finally each branch outputs a feature map with 16 channels.
S3013, using one Concat layer to perform scale concatenation on the feature maps with the number of four channels 16 output by the four parallel branches in step S3012, so as to form a feature map with the number of 64 (16 × 4) channels, thereby implementing further deep extraction of information.
S3014, the signature graph output after the processing in step S3013 is input into a Block2 structure, which has the same parallel structure and convolution kernel parameter setting as those described in step S3012.
S3015, using a Concat layer to perform scale splicing on the outputs of the Block2 structure in step S3014 to form a feature map with a channel number of 64 (16 × 4), so as to implement further depth extraction of information.
S3016, the feature map output after the processing in step S3015 is input to a Block3 structure having the same parallel structure as described in step S3012, and the number of convolution kernels is set to 32.
S3017, using a Concat layer to perform scale splicing on the outputs of the Block3 structure in step S3016 to form a feature map with a channel number of 128 (32 × 4), so as to implement further depth extraction of information.
S3018, the feature map output after the processing in step S3017 is input to a Block4 structure having the same parallel structure as described in step S3012, and the number of convolution kernels is set to 32.
S3019, using a Concat layer to perform scale splicing on the outputs of the Block4 structure in step S3018 to form a feature map with a channel number of 128 (32 × 4), so as to implement further depth extraction of information.
S3020, extracting features from the feature map of the 128 channels obtained in step S3019 by using a global maximum pooling layer, a 128-dimensional vector is obtained.
S3021, using a full connectivity layer, the final output of the network has 6 (state classes to be identified) neurons. Each neuron corresponds to a number from 1 to 6, and the number of the neuron with the largest output value of the neuron is used to indicate the diagnosis result of the multi-sensor data input in step S301 by the network, i.e. the state class of the proton exchange membrane system of the locomotive corresponding to the group of data. This is achieved by an argmax function, the calculation being expressed as follows:
argmax_x f(x) = { x | f(y) ≤ f(x) for all y }
where x and y represent neuron numbers, f(x) is the output value of the x-th neuron, and f(y) is the output value of the y-th neuron.
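Steps S3020–S3021 can be sketched in a few lines. This is an illustrative sketch only: the feature-map values and fully connected weights below are randomly generated stand-ins, not trained parameters from the described network:

```python
import numpy as np

rng = np.random.default_rng(0)
feature_map = rng.random((128, 32, 32))  # hypothetical 128-channel map from S3019

# S3020: global maximum pooling keeps the largest value per channel -> 128-dim vector
pooled = feature_map.max(axis=(1, 2))
assert pooled.shape == (128,)

# S3021: fully connected layer mapping 128 features to 6 state-class neurons
W = rng.standard_normal((6, 128))  # illustrative weights
b = np.zeros(6)
logits = W @ pooled + b

# diagnosis = number of the neuron with the largest output (1-indexed, as in the text)
diagnosis = int(np.argmax(logits)) + 1
print(diagnosis)
```

The `np.argmax` call implements the argmax definition above: it returns the index x whose output f(x) is at least as large as every other neuron's output.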
Implementation example: 1680 data samples were collected from the locomotive, each containing 12-dimensional multi-sensor data together with a sample label indicating one of the 6 locomotive state categories. The 1680 samples were partitioned at a ratio of 6:2:2 into a training set, a validation set, and a test set. The proposed diagnostic framework uses the training set to iteratively update model parameters; the validation set is used to check model performance during each parameter update and detect overfitting; and the test set is used to evaluate the final model after all iterations are complete. Model parameters were saved after 200 training iterations, and model performance was then measured on the test set. The results show that, of the 336 test samples, 2 samples actually belonging to the 3rd state were misdiagnosed as the 1st state, 1 sample actually belonging to the 1st state was misdiagnosed as the 3rd state, and all remaining samples were correctly classified, indicating that the diagnosis method achieves high fault classification accuracy for the locomotive proton exchange membrane fuel cell system. Evaluating the full test set of 336 samples took 71.17 milliseconds, demonstrating the rapid-diagnosis capability of the proposed method.
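The 6:2:2 partition of the 1680 samples described above can be sketched as follows (a minimal illustration; the shuffling seed and index-based split are assumptions, not taken from the patent):

```python
import numpy as np

n_samples = 1680
indices = np.random.default_rng(42).permutation(n_samples)  # shuffle before splitting

# 6:2:2 ratio -> 1008 training, 336 validation, 336 test samples
n_train = int(n_samples * 0.6)
n_val = int(n_samples * 0.2)
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]

print(len(train_idx), len(val_idx), len(test_idx))  # 1008 336 336
```

Note that the resulting test-set size of 336 matches the number of test samples reported in the implementation example.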
The foregoing shows and describes the general principles, principal features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A quick fault diagnosis method for a locomotive proton exchange membrane fuel cell system is characterized by comprising the following steps:
s100, inputting multi-sensor data acquired in the running process of the locomotive into a fully-connected back propagation neural network to extract high-dimensional abstract features and express the high-dimensional abstract features by vectors;
s200, reconstructing the abstract features expressed by the vectors into a tensor feature diagram to obtain integrated expression of the abstract features;
and S300, inputting the characteristic diagram into a convolution network based on the Inception Net to classify the characteristic diagram, obtaining a fault category, and further realizing fault diagnosis of the locomotive proton exchange membrane fuel cell system.
2. The method for rapidly diagnosing faults of a proton exchange membrane fuel cell system of a locomotive according to claim 1, wherein in the step S100, the multi-sensor data collected during the operation of the locomotive is input into a fully connected back propagation neural network to extract high-dimensional abstract features and express the abstract features by vectors, comprising the steps of:
s101, collecting multi-sensor data of a proton exchange membrane fuel cell system and a current system state class in the running process of a locomotive;
s102, inputting the obtained data into a fully-connected back propagation neural network for extracting high-dimensional abstract features; the fully-connected back propagation neural network comprises three neuron structures of an input layer, a hidden layer and an output layer, and neurons in the same layer are not connected with one another and are connected in a fully-connected mode between adjacent layers;
s103, setting the parameters of the fully-connected back propagation neural network, and matching the acquired multi-sensor data dimension number according to the node number of the set input layer; setting the number of nodes of the hidden layer; the number of output layer nodes is set to obtain a vector representation of the high-dimensional abstract features.
3. The method for rapidly diagnosing the failure of the proton exchange membrane fuel cell system of the locomotive according to claim 2, wherein the fully-connected back propagation neural network comprises three neuron structures of an input layer, a hidden layer and an output layer, and the neurons in the same layer are not connected with each other and are connected with each other in a fully-connected manner between neurons in adjacent layers;
the output values of the hidden layer neuron and the output layer neuron are respectively calculated according to the following formulas:
h^(j) = Relu( Σ_{i=1}^{m} w_{ij} · x_i + b_j )

y^(k) = Relu( Σ_{j=1}^{n} w_{jk} · h^(j) + b_k )
in the formula: h^(j) represents the output of the jth neuron node of the hidden layer; Relu is a rectified linear unit, which sets negative values to 0 and passes positive values through unchanged, increasing the nonlinear representation capability of the network; w_{ij} represents the weight parameter between the ith neuron of the input layer and the jth neuron of the hidden layer; b_j represents the bias term of the jth neuron node of the hidden layer; m is the number of input layer nodes; y^(k) represents the output value of the kth neuron node of the output layer; w_{jk} represents the weight parameter between the jth neuron of the hidden layer and the kth neuron of the output layer; b_k represents the bias term of the kth neuron node of the output layer; n is the number of hidden layer nodes.
4. The method for rapidly diagnosing the fault of the proton exchange membrane fuel cell system of the locomotive according to claim 3, wherein parameters of a back propagation neural network which is fully connected are set, and the number of nodes of an input layer is set to be 12 so as to be matched with the number of dimensions of the acquired multi-sensor data; the number of hidden layer nodes is set to 512; the number of output layer nodes is set to 1024 to get a vector representation of the high-dimensional abstract features.
5. The method for rapidly diagnosing faults of a proton exchange membrane fuel cell system of a locomotive according to claim 1, wherein in the step S200, the abstract features of the vector representation are reconstructed into a tensor feature map to obtain an integrated representation of the abstract features, comprising the steps of:
s201, sequentially arranging the abstract features represented by the obtained vectors in sequence;
s202, according to the arrangement, the abstract features represented by the vectors are reconstructed into an integrated tensor-represented feature map, and further a feature level information fusion process is achieved.
6. The method of claim 5, wherein in step S202, the abstract features of the vector representation are reconstructed into an integrated tensor representation feature map with a size of 32 x 1; where the first and second 32 represent feature map length and width pixel values and 1 represents a feature map having 1 number of channels.
7. The method of claim 6, wherein in step S300, the characteristic map is input to a convolutional network based on the inclusion net to classify the characteristic map, so as to obtain a fault category, and further implement fault diagnosis of the proton exchange membrane fuel cell system of the locomotive, and the method includes the steps of:
s3011, processing the input feature map sequentially through a 2D convolutional layer, a BN layer and a Relu layer; in the 2D convolutional layer, the convolution kernel is locally connected to the input feature map to extract features, and as the kernel moves across the entire input feature map an output feature map is obtained, with the kernel size set to 3 × 3; the BN layer mitigates internal covariate shift; the Relu layer enhances the nonlinear expression capability of the convolutional network;
s3012, the feature map output after the processing in step S3011 passes through parallel processing structure block I, which comprises four parallel branches that simultaneously extract features from the feature map input in step S3011; two branches each use the three-layer structure of step S3011, one branch connects three-layer structures of step S3011 in series, and one branch consists of a maximum pooling layer connected in series with the three-layer structure of step S3011; in all 2D convolutional layers in parallel processing structure block I, the number of convolution kernels is set to 16, so that each branch finally outputs a feature map with 16 channels;
s3013, performing scale splicing on the feature graphs with the number of 16 channels respectively output by the four parallel branches of the step S3012 by using a Concat layer to form a feature graph with the number of 64 channels to realize further deep extraction of information;
s3014, inputting the feature map output after processing in the step S3013 into a second structure block which has the same parallel structure and convolution kernel parameter setting as those in the step S3012;
s3015, performing scale splicing on the output of the second structural block in the step S3014 by using a Concat layer to form a feature map with 64 channels to realize further depth extraction of information;
s3016, inputting the feature map output after the processing of the step S3015 into a third structural block which has the same parallel structure as the step S3012 and sets the number of convolution kernels to be 32;
s3017, performing scale splicing on the output of the structural block three in the step S3016 by using a Concat layer to form a feature map with 128 channels to realize further depth extraction of information;
S3018, inputting the feature map output after processing in the step S3017 into a structure block four, which has the same parallel structure as the one in the step S3012, and setting the number of convolution kernels to be 32;
s3019, performing scale splicing on the output of the structural block four in the step S3018 by using a Concat layer to form a feature map with 128 channels to realize further depth extraction of information;
s3020, extracting features from the feature map of the 128 channel obtained in step S3019 by using a global maximum pooling layer, to obtain a 128-dimensional vector;
s3021, using a full connection layer so that the final output end of the network has 6 neurons corresponding to the state classes to be identified; each neuron corresponds to a number from 1 to 6, and the number of the neuron with the largest output value represents the diagnosis result of the network for the multi-sensor data input in step S301, namely the state class of the locomotive proton exchange membrane system corresponding to the group of data.
8. The method of claim 7, wherein the calculation logic of the BN layer is expressed by the following formula:
μ_B = (1/m) Σ_{i=1}^{m} x_i

σ_B² = (1/m) Σ_{i=1}^{m} (x_i − μ_B)²

x̂_i = (x_i − μ_B) / √(σ_B² + ε)

y_i = γ · x̂_i + β

wherein B = {x_1, …, x_m} is a mini-batch set of x values, μ_B is the mean of the mini-batch, σ_B² is the variance of the mini-batch, x̂_i is the result of normalizing x_i to an expectation of 0 and a variance of 1, ε is a small constant that ensures numerical stability, and γ and β are learnable parameters that adjust x̂_i to obtain the output y_i.
9. The method as claimed in claim 7, wherein in step S3021, a full connection layer is used so that the final output end of the network has 6 neurons corresponding to the state classes to be identified; each neuron corresponds to a serial number from 1 to 6, and the serial number of the neuron with the largest output value represents the diagnosis result of the network for the multi-sensor data input in step S301, that is, the state class of the locomotive proton exchange membrane system corresponding to the group of data; this is achieved by an argmax function, calculated as follows:
argmax_x f(x) = { x | f(y) ≤ f(x) for all y }
where x and y represent neuron numbers, f(x) is the output value of the x-th neuron, and f(y) is the output value of the y-th neuron.
CN202011076131.XA 2020-10-10 2020-10-10 Rapid fault diagnosis method for locomotive proton exchange membrane fuel cell system Active CN112133941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011076131.XA CN112133941B (en) 2020-10-10 2020-10-10 Rapid fault diagnosis method for locomotive proton exchange membrane fuel cell system

Publications (2)

Publication Number Publication Date
CN112133941A true CN112133941A (en) 2020-12-25
CN112133941B CN112133941B (en) 2021-07-30

Family

ID=73844002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011076131.XA Active CN112133941B (en) 2020-10-10 2020-10-10 Rapid fault diagnosis method for locomotive proton exchange membrane fuel cell system

Country Status (1)

Country Link
CN (1) CN112133941B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123114A (en) * 2017-04-21 2017-09-01 佛山市南海区广工大数控装备协同创新研究院 A kind of cloth defect inspection method and device based on machine learning
CN107609488A (en) * 2017-08-21 2018-01-19 哈尔滨工程大学 A kind of ship noise method for identifying and classifying based on depth convolutional network
CN108152059A (en) * 2017-12-20 2018-06-12 西南交通大学 High-speed train bogie fault detection method based on Fusion
CN108614548A (en) * 2018-04-03 2018-10-02 北京理工大学 A kind of intelligent failure diagnosis method based on multi-modal fusion deep learning
CN109324291A (en) * 2018-08-21 2019-02-12 西南交通大学 A kind of prediction technique for Proton Exchange Membrane Fuel Cells life prediction
CN110059377A (en) * 2019-04-02 2019-07-26 西南交通大学 A kind of fuel battery service life prediction technique based on depth convolutional neural networks
CN110137547A (en) * 2019-06-20 2019-08-16 华中科技大学鄂州工业技术研究院 Control method, device and the electronic equipment of fuel cell system with reformer
CN110190306A (en) * 2019-06-04 2019-08-30 昆山知氢信息科技有限公司 A kind of on-line fault diagnosis method for fuel cell system
CN110727871A (en) * 2019-10-21 2020-01-24 河海大学常州校区 Multi-mode data acquisition and comprehensive analysis platform based on convolution decomposition depth model
CN111160139A (en) * 2019-12-13 2020-05-15 中国科学院深圳先进技术研究院 Electrocardiosignal processing method and device and terminal equipment
US20200234517A1 (en) * 2019-01-22 2020-07-23 ACV Auctions Inc. Vehicle audio capture and diagnostics
CN111523766A (en) * 2020-03-27 2020-08-11 中国平安财产保险股份有限公司 Driving risk assessment method and device, electronic equipment and readable storage medium
CN111600051A (en) * 2020-05-11 2020-08-28 中国科学技术大学 Proton exchange membrane fuel cell fault diagnosis method based on image processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Weirong (陈维荣) et al.: "Review and Prospect of Fault Diagnosis Methods for Proton Exchange Membrane Fuel Cells", Proceedings of the CSEE (《中国电机工程学报》) *

Also Published As

Publication number Publication date
CN112133941B (en) 2021-07-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant