CN112700793A - Method and system for identifying fault collision of water turbine


Info

Publication number
CN112700793A
Authority
CN
China
Prior art keywords
sound
fault
unit
real
neural network
Prior art date
Legal status
Pending
Application number
CN202011553753.7A
Other languages
Chinese (zh)
Inventor
李芳芳
黄伟秦
王昕
林新
黄维汉
赵建辉
Current Assignee
State Grid Fujian Electric Power Co Ltd
Fujian Shuikou Power Generation Group Co Ltd
Original Assignee
State Grid Fujian Electric Power Co Ltd
Fujian Shuikou Power Generation Group Co Ltd
Priority date: 2020-12-24
Filing date: 2020-12-24
Publication date: 2021-04-23
Application filed by State Grid Fujian Electric Power Co Ltd and Fujian Shuikou Power Generation Group Co Ltd
Priority to CN202011553753.7A
Publication of CN112700793A

Classifications

    • G10L 25/51: Speech or voice analysis techniques specially adapted for particular use, for comparison or discrimination
    • G01H 17/00: Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G01M 13/00: Testing of machine parts
    • G01M 15/00: Testing of engines
    • G01M 7/08: Shock-testing of structures
    • G10L 25/30: Speech or voice analysis techniques characterised by the analysis technique, using neural networks
    • G10L 25/45: Speech or voice analysis techniques characterised by the type of analysis window

Abstract

The invention relates to a method and a system for identifying fault collision of a water turbine. The identification method comprises the following steps: sound collection, in which a sound pickup is arranged beside the hydraulic turbine unit, the rotation sound of the water turbine is collected, and the rotation sound is marked as normal operation sound or fault collision sound; sound data preprocessing, in which the marked normal operation sound and fault collision sound are preprocessed to acquire a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as a sample set; fault recognition model training, in which a convolutional neural network is built, the sample set is input into the convolutional neural network, and the convolutional neural network is trained to obtain the fault recognition model; and fault identification, in which real-time rotation sound of the water turbine is acquired and preprocessed to acquire a spectrogram of the real-time rotation sound, the spectrogram is input into the fault recognition model, and the fault recognition model identifies whether fault collision sound exists in the real-time rotation sound.

Description

Method and system for identifying fault collision of water turbine
Technical Field
The invention relates to a method and a system for identifying fault collision of a water turbine, and belongs to the technical field of water conservancy, hydropower and artificial intelligence.
Background
As a renewable energy source that can be developed and utilized on a large scale, hydropower occupies an important position in national infrastructure. In recent years, with the construction of large hydro-junction projects and the successive commissioning of large and medium-sized hydropower plants, the installed capacity of hydropower stations has grown continuously. Hydroelectric generating sets undertake power generation, peak regulation, frequency regulation, phase modulation, emergency reserve and other functions in the power grid, so the development of condition monitoring and fault diagnosis technology for hydroelectric generating sets has become increasingly necessary and urgent.
A hydropower station turbine contains numerous flow-passage components and rotating equipment, and several common faults of the hydro-generator, such as collision and abrasion (rubbing), cracks, foundation looseness, bearing faults and cavitation, cause the turbine to emit abnormal sound. Like a human fingerprint, the sound spectrum characteristics produced under different operating conditions of a hydropower station are specific and relatively stable. In the past, the acoustic emission signals of these anomalies could only be judged empirically by personnel. For these reasons, and in order to better monitor and evaluate the running state of the hydro-generating set, the invention provides a method and a system for identifying collision and abrasion faults of a water turbine, applied to fault conditions of the hydro-generating set, so as to improve the emergency handling capability and safe operation level of a hydropower station.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method and a system for identifying the fault collision of the water turbine.
The technical scheme of the invention is as follows:
a method for identifying fault collision of a water turbine comprises the following steps:
sound collection, namely arranging a pickup beside a hydraulic turbine set, collecting the rotation sound of the hydraulic turbine, and marking the rotation sound of the hydraulic turbine as normal operation sound and fault collision sound;
preprocessing sound data, namely preprocessing the marked normal operation sound and fault collision sound, and acquiring a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as sample sets;
training a fault recognition model, building a convolutional neural network, inputting a sample set into the convolutional neural network, and training the convolutional neural network to obtain the fault recognition model;
and fault identification, namely acquiring real-time rotation sound of the water turbine, preprocessing the real-time rotation sound, acquiring a spectrogram of the real-time rotation sound, and inputting the spectrogram into a fault identification model, wherein the fault identification model identifies whether fault collision sound exists in the real-time rotation sound.
Further, the step of preprocessing the marked normal operation sound and fault collision sound is specifically as follows:
respectively performing framing processing on normal operation sound and fault collision sound, setting the data length of each 1 frame as N, and setting the framing step length as N/M, namely performing data framing with the overlapping rate of (M-1)/M;
windowing the framed sound data, wherein the length of a window is equal to the frame length N;
performing short-time Fourier transform on the data in each window after windowing to obtain a short-time amplitude spectrum estimation value;
and taking a logarithmic value for each frame of short-time amplitude spectrum estimated value to obtain a spectrogram with a one-to-one correspondence relationship.
Furthermore, the convolutional neural network comprises an input layer, a convolutional pooling layer, a fully connected layer and an output layer which are connected in sequence.
Further, the method also comprises the step of manually correcting the identification result, and specifically comprises the following steps:
storing unprocessed original real-time rotation sound data and an identification result output by the fault identification model, wherein the original real-time rotation sound data and the identification result have a one-to-one correspondence relationship;
the staff judges the corresponding recognition result according to the stored original real-time rotation sound data, judges whether the recognition result is accurate, and obtains a deviation correction result;
and inputting the deviation correcting result into a convolutional neural network for training, and optimizing a fault recognition model.
The second technical scheme is as follows:
a system for identifying a water turbine fault collision, comprising: the system comprises an audio acquisition unit, an artificial marking unit, a first preprocessing unit, a convolutional neural network training unit, a second preprocessing unit and an on-end identification unit;
the audio acquisition unit is used for acquiring the rotation sound of the water turbine;
the manual marking unit is connected with the audio acquisition unit and used for marking the rotation sound of the water turbine as normal operation sound and fault collision sound;
the preprocessing unit is connected with the throwing marking unit and is used for preprocessing the marked normal operation sound and fault collision sound to obtain a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as a sample set;
the convolutional neural network training unit is connected with the preprocessing unit and used for building a convolutional neural network, inputting a sample set to the convolutional neural network and training the convolutional neural network to obtain a fault identification model;
the second preprocessing unit is connected with the audio acquisition unit and is used for acquiring real-time rotation sound of the water turbine, preprocessing the real-time rotation sound and acquiring a spectrogram of the real-time rotation sound;
and the end-to-end recognition unit is connected with the convolutional neural network training unit and the second preprocessing unit and is used for carrying the fault recognition model, inputting a spectrogram of the input real-time rotation sound into the fault recognition model and recognizing whether the real-time rotation sound has fault collision sound.
Further, the first preprocessing unit and the second preprocessing unit have the same structure, and specifically comprise a framing processing module, a windowing processing module, a short-time Fourier transform module and a spectrogram acquiring module;
the framing processing module is used for framing the sound data, setting the data length of each 1 frame as N, and setting the framing step length as N/M, namely, the overlapping rate is (M-1)/M to perform data framing;
the windowing processing module is used for windowing the framed sound data, and the length of a window is equal to the frame length N;
the short-time Fourier transform module is used for carrying out short-time Fourier transform on the data in each window after windowing processing to obtain a short-time amplitude spectrum estimation value;
the spectrogram acquisition module is used for taking logarithm values of each frame of short-time amplitude spectrum estimation values to obtain the spectrogram with one-to-one correspondence.
Furthermore, the convolutional neural network built by the convolutional neural network training unit specifically comprises an input layer, a convolutional pooling layer, a fully connected layer and an output layer which are connected in sequence.
The system further comprises an alarm unit; after the fault recognition model has determined whether fault collision sound exists in the real-time rotation sound, the alarm unit calculates the proportion of frames identified as fault collision sound to the total number of frames of the real-time rotation sound, a threshold value P is preset, and an alarm signal is sent out when the proportion is larger than the threshold value P.
Further, the system also comprises an identification result storage unit, a manual deviation rectifying unit and a training sound library;
the identification result storage unit is connected with the on-end identification unit and is used for storing the unprocessed original real-time rotation sound data and the identification results output by the on-end identification unit, wherein the original real-time rotation sound data and the identification results have a one-to-one correspondence;
the manual deviation rectifying unit is used for a worker to intervene, judge the identification result and correct it according to whether it is accurate, so as to obtain a deviation correction result;
the input end of the training sound library is connected with the first preprocessing unit and the manual deviation rectifying unit, and the output end of the training sound library is connected with the convolutional neural network training unit; the training sound library is used for storing the sample set and the correction result and for outputting the correction result to the convolutional neural network training unit to optimize the fault recognition model.
The invention has the following beneficial effects:
1. In the method for identifying fault collision of a water turbine of the invention, the normal operation sound and fault collision sound of the water turbine are collected and preprocessed into a spectrogram sample set, the sample set is used to train a convolutional neural network to obtain a fault recognition model capable of recognizing fault collision sound, and the real-time rotation sound of the water turbine is monitored through the fault recognition model, so that whether the water turbine suffers a fault collision is identified from the fault collision sound.
2. In the water turbine fault collision recognition system of the invention, the normal operation sound and fault collision sound of the water turbine are collected and preprocessed into a spectrogram sample set, the sample set is used to train a convolutional neural network to obtain a fault recognition model capable of recognizing fault collision sound, and the real-time rotation sound of the water turbine is monitored in real time through the fault recognition model, so that whether the water turbine suffers a fault collision is identified from the fault collision sound.
3. The identification system for the fault collision of the water turbine is provided with the manual deviation rectifying unit, the robustness of the system is improved through manual intervention, the fault identification model can be further optimized according to the deviation rectifying result, and the identification accuracy is improved.
Drawings
FIG. 1 is a flow chart of a first embodiment of the present invention;
FIG. 2 is a system framework diagram of a second embodiment of the invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Embodiment one:
referring to fig. 1, a method for identifying a fault collision of a water turbine includes the following steps:
sound collection sets up the adapter by waterwheel indoor hydraulic turbine unit main shaft, gathers the hydraulic turbine and rotates sound, and adapter sound sampling rate is 16000Hz, and it is 3600 seconds long, according to the nyquist law, can recover the collection sound signal that is not more than 8000Hz frequency size. Marking the rotating sound of the water turbine as normal operation sound and fault collision sound, concentrating the maximum value of the collected sound frequency in 4000Hz-5000Hz under the normal operation condition of the water turbine, concentrating the maximum value of the sound frequency in 2500 Hz-3000 Hz under the fault collision condition, and marking the rotating sound of the water turbine by combining the judgment and the identification of experienced workers;
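To illustrate the frequency-band criterion above, the following minimal Python sketch proposes a provisional label from a clip's dominant spectral peak before a worker confirms it; the function name, return values and the hard band limits are only illustrative assumptions.

```python
import numpy as np

def suggest_label(clip, sample_rate=16000):
    """Propose a provisional label from the clip's dominant spectral peak;
    a worker confirms or corrects the label before it enters the sample set."""
    amplitude = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / sample_rate)
    peak = freqs[np.argmax(amplitude)]
    if 4000.0 <= peak <= 5000.0:
        return "normal_operation"
    if 2500.0 <= peak <= 3000.0:
        return "fault_collision"
    return "needs_manual_review"
```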
preprocessing sound data, namely preprocessing the marked normal operation sound and fault collision sound, and acquiring a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as sample sets;
training a fault recognition model, building a convolutional neural network, inputting a sample set into the convolutional neural network, and training the convolutional neural network to obtain the fault recognition model;
and fault identification, namely acquiring real-time rotation sound of a main shaft of the water turbine through a pickup, preprocessing the real-time rotation sound, acquiring a spectrogram of the real-time rotation sound, and inputting the spectrogram into a fault identification model, wherein the fault identification model identifies whether fault collision sound exists in the real-time rotation sound.
In this embodiment, the normal operation sound and fault collision sound of the water turbine collected by the pickup are preprocessed into a spectrogram sample set; the sample set is used to train a convolutional neural network, yielding a fault recognition model capable of recognizing fault collision sound; the real-time rotation sound of the water turbine is then monitored through the fault recognition model, so that whether the water turbine suffers a fault collision is identified from the fault collision sound.
Embodiment two:
further, the step of preprocessing the marked normal operation sound and fault collision sound is specifically as follows:
respectively performing framing processing on normal operation sound and fault collision sound, setting the data length of each 1 frame to be 1024, and setting the framing step length to be 512, namely performing data framing with the overlapping rate of 50%; in this embodiment, the original sound data of 3600 seconds duration is divided into 16000 × 3600/512-1 ═ 112499 frame data.
And windowing each frame of data by using a Hanning window, wherein the length of the window is 1024.
The numerical form of the hanning window function is as follows, where N is 1024:
Figure BDA0002858686380000081
performing short-time Fourier transform on the data in each window after windowing to obtain a short-time amplitude spectrum estimation value; the digital form of the short-time fourier transform is as follows, where xi (N) represents the windowed data of each frame, k ∈ [0, N-1], N ═ 1024, and m represents the frame number:
Figure BDA0002858686380000082
and taking a logarithmic value for each frame of short-time amplitude spectrum estimation value | X (m, k) | to obtain a spectrogram with a one-to-one correspondence relationship, wherein the abscissa of the spectrogram is time, the ordinate of the spectrogram is frequency, and the color depth of each pane represents the energy value at the corresponding time and frequency.
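The framing, windowing, short-time Fourier transform and logarithm steps above can be summarized in the following minimal Python/NumPy sketch; the function and variable names are illustrative and not part of the embodiment itself.

```python
import numpy as np

def log_spectrogram(signal, frame_len=1024, hop=512):
    """Frame the signal with 50% overlap, apply a Hanning window, take the short-time
    Fourier transform of each frame and return the log-magnitude spectrogram."""
    window = np.hanning(frame_len)                      # w(n) = 0.5 * (1 - cos(2*pi*n / (N - 1)))
    n_frames = (len(signal) - frame_len) // hop + 1
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    amplitude = np.abs(np.fft.rfft(frames, axis=1))     # short-time amplitude spectrum |X(m, k)|
    return np.log(amplitude + 1e-10)                    # small offset avoids log(0)

# 3600 s of audio sampled at 16 kHz gives (16000 * 3600 - 1024) // 512 + 1 = 112499 frames,
# matching the frame count stated in this embodiment.
```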
The convolutional neural network comprises an input layer, a convolutional pooling layer, a fully connected layer and an output layer which are connected in sequence, wherein:
the first layer is an input layer and is used for receiving the preprocessed spectrogram data;
the second layer is a convolution layer: the convolution kernel size is 16, the number of convolution kernels is 8, and the convolution stride is 4; the activation function is Leaky ReLU (leaky rectified linear unit); this ends the second layer, and its output is connected to the third layer.
The mathematical form of the activation function Leaky ReLU is as follows, where a is a small positive leak coefficient:

f(x) = x,  x ≥ 0
f(x) = a · x,  x < 0

The third layer is a convolution pooling layer: the convolution kernel size is 3, the number of convolution kernels is 32, and the convolution stride is 1; the activation function is Leaky ReLU; pooling is then carried out in maximum pooling mode, with a pooling size of 4 and a stride of 2; this ends the third layer, and its output is flattened and connected to the fourth layer;
the fourth layer is a fully connected layer;
the fifth layer is an output layer with an output dimension of 2, using a softmax function;
the sixth layer is an output layer whose output dimension equals the number of configured fault types, using a softmax function. The output layer uses the softmax classification function in the following mathematical form, where z_i denotes the i-th input to the output layer; each output value p(i) lies between 0 and 1, and the outputs sum to 1:

p(i) = e^(z_i) / Σ_j e^(z_j)
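A possible PyTorch realization of the network described above is sketched below, assuming two-dimensional convolutions over single-channel spectrogram inputs; the width of the fully connected layer (128) and the input size in the usage comment are assumptions, while the kernel sizes, kernel counts, strides, Leaky ReLU activations, max pooling and softmax output follow the text.

```python
import torch
import torch.nn as nn

class FaultCollisionNet(nn.Module):
    """Input layer -> convolution layer -> convolution pooling layer -> fully connected
    layer -> softmax output, applied to single-channel spectrogram inputs."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=16, stride=4),   # second layer: 8 kernels of size 16, stride 4
            nn.LeakyReLU(0.01),
            nn.Conv2d(8, 32, kernel_size=3, stride=1),   # third layer: 32 kernels of size 3, stride 1
            nn.LeakyReLU(0.01),
            nn.MaxPool2d(kernel_size=4, stride=2),       # maximum pooling, size 4, stride 2
            nn.Flatten(),                                 # flatten before the fully connected layer
        )
        self.classifier = nn.Sequential(
            nn.LazyLinear(128),                           # fully connected layer (width 128 is an assumption)
            nn.LeakyReLU(0.01),
            nn.Linear(128, num_classes),                  # output: 2 classes, or one per configured fault type
        )

    def forward(self, x):                                 # x: (batch, 1, freq_bins, time_frames)
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)               # each p(i) lies in (0, 1) and the outputs sum to 1

# Illustrative usage (the input size is an assumption):
# model = FaultCollisionNet()
# probs = model(torch.randn(4, 1, 513, 128))
```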
Further, after the fault recognition model determines whether fault collision sound exists in the real-time rotation sound, the proportion of frames identified as fault collision sound to the total number of frames of the real-time rotation sound is calculated; a threshold value P is preset, and an alarm signal is sent out when the proportion is greater than P. For example, the rotation sound of the water turbine is collected for 60 seconds in real time and, after preprocessing with an overlapping rate of 50%, divided into 1874 frames of data; the 1874 frames are input into the trained fault recognition model for recognition; with the threshold P set to 50%, an alarm signal is sent to the operator on duty when the number of frames of fault collision sound exceeds 1874 × 50%. The fault recognition model can be installed in the monitoring software of a computer, which then sends the alarm signal.
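A short Python sketch of this alarm criterion, assuming per-frame predictions in which 1 denotes fault collision sound; the function name and default threshold are illustrative.

```python
def should_alarm(frame_predictions, threshold_p=0.5):
    """frame_predictions: per-frame labels from the fault recognition model,
    1 = fault collision sound, 0 = normal operation sound.
    Returns True when the share of fault-collision frames exceeds the preset threshold P."""
    frame_predictions = list(frame_predictions)
    fault_ratio = sum(frame_predictions) / len(frame_predictions)
    return fault_ratio > threshold_p

# 60 s of audio at 16 kHz with 50% overlap gives 16000 * 60 / 512 - 1 = 1874 frames;
# with P = 50%, an alarm is raised once more than 1874 * 50% = 937 frames are flagged.
```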
Further, the step of inputting the sample set to the convolutional neural network and training the convolutional neural network to obtain the fault recognition model specifically includes:
dividing the sample set into a training set and a test set in a ratio of 8:2;
inputting the spectrograms of normal operation sound and of fault collision sound in the training set into the convolutional neural network for training, and adjusting the parameters of the convolutional neural network by using a loss function during training;
and verifying the recognition results of the convolutional neural network on the test set, finishing training when the recognition accuracy of the convolutional neural network reaches the target value, and storing the current parameters to obtain the fault recognition model.
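A minimal PyTorch training sketch corresponding to these steps, assuming the spectrogram tensors and labels come from the preprocessing stage and reusing the FaultCollisionNet sketch above; the batch size, learning rate, number of epochs and target accuracy are illustrative values.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_fault_model(spectrograms, labels, epochs=20, target_acc=0.95):
    """spectrograms: float tensor (N, 1, F, T); labels: long tensor (N,), 0 = normal, 1 = fault collision."""
    dataset = TensorDataset(spectrograms, labels)
    n_train = int(0.8 * len(dataset))                             # 8:2 split into training and test sets
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)

    model = FaultCollisionNet()
    model(spectrograms[:1])                                       # dummy forward pass initialises the lazy layer
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(epochs):
        model.train()
        for xb, yb in train_loader:                               # adjust parameters with the loss function
            loss = torch.nn.functional.nll_loss(torch.log(model(xb) + 1e-10), yb)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        model.eval()                                              # verify recognition accuracy on the test set
        with torch.no_grad():
            correct = sum((model(xb).argmax(dim=1) == yb).sum().item() for xb, yb in test_loader)
        if correct / len(test_set) >= target_acc:
            break                                                 # accuracy has reached the target value

    torch.save(model.state_dict(), "fault_recognition_model.pt")  # store the current parameters
    return model
```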
Further, the method also comprises the step of manually correcting the identification result, and specifically comprises the following steps:
storing the unprocessed original real-time rotation sound data and the identification results output by the fault recognition model, wherein the original real-time rotation sound data and the identification results have a one-to-one correspondence;
an experienced worker intervenes, examines the stored original real-time rotation sound data together with the corresponding identification result, judges whether the identification result is accurate, and obtains a deviation correction result;
and inputting the deviation correction result into the convolutional neural network, which is further trained together with the training samples, so as to optimize the fault recognition model and improve its recognition accuracy.
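The deviation-correction feedback can be sketched as a small fine-tuning step on the worker-corrected samples; the function name and learning rate are illustrative.

```python
import torch

def fine_tune_with_corrections(model, corrected_spectrograms, corrected_labels, epochs=5):
    """Fine-tune an existing fault recognition model on worker-corrected samples drawn from
    the stored original sound data and the corresponding deviation correction results."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)     # smaller learning rate for fine-tuning
    model.train()
    for _ in range(epochs):
        probs = model(corrected_spectrograms)
        loss = torch.nn.functional.nll_loss(torch.log(probs + 1e-10), corrected_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```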
Embodiment three:
referring to fig. 2, a system for identifying a fault collision of a water turbine includes: the system comprises an audio acquisition unit, an artificial marking unit, a first preprocessing unit, a convolutional neural network training unit, a second preprocessing unit and an on-end identification unit;
the system comprises an audio acquisition unit, a frequency-domain power supply unit and a control unit, wherein the audio acquisition unit is a sound pickup, the sound sampling rate of the sound pickup is 16000Hz, the time duration of the sound pickup is 3600 seconds, and acquired sound signals with the frequency not more than 8000Hz can be recovered according to the Nyquist law;
the manual marking unit is connected with the audio acquisition unit and used for marking the rotating sound of the water turbine as normal operation sound and fault collision sound, the maximum value of the collected sound frequency is concentrated in 4000Hz-5000Hz under the condition of normal operation of the water turbine, and the maximum value of the sound frequency is concentrated in 2500 Hz-3000 Hz under the condition of fault collision, and the rotating sound of the water turbine is marked by combining the judgment and the identification of experienced workers;
the preprocessing unit is connected with the throwing marking unit and is used for preprocessing the marked normal operation sound and fault collision sound to obtain a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as a sample set;
the convolutional neural network training unit is connected with the preprocessing unit and used for building a convolutional neural network, inputting a sample set to the convolutional neural network and training the convolutional neural network to obtain a fault identification model;
the second preprocessing unit is connected with the audio acquisition unit and is used for acquiring real-time rotation sound of the water turbine, preprocessing the real-time rotation sound and acquiring a spectrogram of the real-time rotation sound;
and the end-to-end recognition unit is connected with the convolutional neural network training unit and the second preprocessing unit and is used for carrying the fault recognition model, inputting a spectrogram of the input real-time rotation sound into the fault recognition model and recognizing whether the real-time rotation sound has fault collision sound.
Embodiment four:
furthermore, the first preprocessing unit and the second preprocessing unit have the same structure, and both comprise a framing processing module, a windowing processing module, a short-time Fourier transform module and a spectrogram acquiring module;
the framing processing module is used for framing the sound data, setting the data length of each 1 frame to be 1024, and setting the framing step length to be 512, namely, performing data framing with the overlapping rate of 50%; in the present embodiment, the original sound data of 3600 seconds duration is divided into 16000 × 3600/512-1 ═ 112499 frame data;
the windowing processing module performs windowing processing on each frame of data by using a Hanning window, wherein the length of the window is 1024;
the numerical form of the hanning window function is as follows, where N is 1024:
Figure BDA0002858686380000121
the short-time Fourier transform module is used for carrying out short-time Fourier transform on the data in each window after windowing processing to obtain a short-time amplitude spectrum estimation value; the digital form of the short-time fourier transform is as follows, where xi (N) represents the windowed data of each frame, k ∈ [0, N-1], N ═ 1024, and m represents the frame number:
Figure BDA0002858686380000122
the spectrogram acquisition module is used for taking a logarithm value of each frame of short-time amplitude spectrum estimation value | X (m, k) | to obtain a spectrogram with a one-to-one correspondence relationship, wherein the abscissa of the spectrogram is time, the ordinate of the spectrogram is frequency, and the color depth of each pane represents the energy value of the corresponding time and frequency.
Further, the convolutional neural network built by the convolutional neural network training unit specifically comprises an input layer, a convolutional pooling layer, a fully connected layer and an output layer which are connected in sequence, wherein:
the first layer is an input layer and is used for receiving the preprocessed spectrogram data;
the second layer is a convolution layer: the convolution kernel size is 16, the number of convolution kernels is 8, and the convolution stride is 4; the activation function is Leaky ReLU (leaky rectified linear unit); this ends the second layer, and its output is connected to the third layer.
The mathematical form of the activation function Leaky ReLU is as follows, where a is a small positive leak coefficient:

f(x) = x,  x ≥ 0
f(x) = a · x,  x < 0

The third layer is a convolution pooling layer: the convolution kernel size is 3, the number of convolution kernels is 32, and the convolution stride is 1; the activation function is Leaky ReLU; pooling is then carried out in maximum pooling mode, with a pooling size of 4 and a stride of 2; this ends the third layer, and its output is flattened and connected to the fourth layer;
the fourth layer is a fully connected layer;
the fifth layer is an output layer with an output dimension of 2, using a softmax function;
the sixth layer is an output layer whose output dimension equals the number of configured fault types, using a softmax function. The output layer uses the softmax classification function in the following mathematical form, where z_i denotes the i-th input to the output layer; each output value p(i) lies between 0 and 1, and the outputs sum to 1:

p(i) = e^(z_i) / Σ_j e^(z_j)
The system further comprises an alarm unit; after the fault recognition model has determined whether fault collision sound exists in the real-time rotation sound, the alarm unit calculates the proportion of frames identified as fault collision sound to the total number of frames of the real-time rotation sound, a threshold value P is preset, and an alarm signal is sent out when the proportion is larger than P. For example, the rotation sound of the water turbine is collected for 60 seconds in real time and, after preprocessing with an overlapping rate of 50%, divided into 1874 frames of data; the 1874 frames are input into the trained fault recognition model for recognition; with the threshold P set to 50%, an alarm signal is sent to the operator on duty when the number of frames of fault collision sound exceeds 1874 × 50%. The alarm unit can be installed in the monitoring software of a computer and send the alarm signal through that software.
Further, the convolutional neural network training unit specifically comprises a sample dividing module, a training module and a testing module;
the sample dividing module is used for dividing the sample set into a training set and a test set in a ratio of 8:2;
the training module is used for inputting the spectrograms of the normal operation sound and of the fault collision sound in the training set into the convolutional neural network for training, with a loss function used to adjust the parameters of the convolutional neural network during training;
and the test module is used for verifying the recognition results of the convolutional neural network on the test set, finishing training when the recognition accuracy of the convolutional neural network reaches the target value, and storing the current parameters to obtain the fault recognition model.
Further, the system also comprises an identification result storage unit, a manual deviation rectifying unit and a training sound library;
the identification result storage unit is connected with the on-end identification unit and is used for storing the unprocessed original real-time rotation sound data and the identification results output by the on-end identification unit, wherein the original real-time rotation sound data and the identification results have a one-to-one correspondence;
the manual deviation rectifying unit enables experienced workers to intervene and judge the identification results, which improves the robustness of the system, and to correct inaccurate results so as to obtain deviation correction results;
the input end of the training sound library is connected with the first preprocessing unit and the manual deviation rectifying unit, and the output end of the training sound library is connected with the convolutional neural network training unit; the training sound library is used for storing the sample set and the correction results, and it outputs the correction results to the convolutional neural network training unit to further train the fault recognition model so as to improve its recognition accuracy.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for identifying fault collision of a water turbine is characterized by comprising the following steps:
sound collection, namely arranging a pickup beside a hydraulic turbine set, collecting the rotation sound of the hydraulic turbine, and marking the rotation sound of the hydraulic turbine as normal operation sound and fault collision sound;
preprocessing sound data, namely preprocessing the marked normal operation sound and fault collision sound, and acquiring a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as sample sets;
training a fault recognition model, building a convolutional neural network, inputting a sample set into the convolutional neural network, and training the convolutional neural network to obtain the fault recognition model;
and fault identification, namely acquiring real-time rotation sound of the water turbine, preprocessing the real-time rotation sound, acquiring a spectrogram of the real-time rotation sound, and inputting the spectrogram into a fault identification model, wherein the fault identification model identifies whether fault collision sound exists in the real-time rotation sound.
2. The method for identifying a water turbine fault collision according to claim 1, wherein the step of preprocessing the marked normal operation sound and fault collision sound is specifically as follows:
respectively performing framing processing on normal operation sound and fault collision sound, setting the data length of each 1 frame as N, and setting the framing step length as N/M, namely performing data framing with the overlapping rate of (M-1)/M;
windowing the framed sound data, wherein the length of a window is equal to the frame length N;
performing short-time Fourier transform on the data in each window after windowing to obtain a short-time amplitude spectrum estimation value;
and taking a logarithmic value for each frame of short-time amplitude spectrum estimated value to obtain a spectrogram with a one-to-one correspondence relationship.
3. The method for identifying a water turbine fault collision according to claim 1, wherein: the convolutional neural network comprises an input layer, a convolutional pooling layer, a fully connected layer and an output layer which are sequentially connected.
4. The method for identifying a water turbine fault collision according to claim 1, wherein: after the fault recognition model recognizes whether the real-time rotation sound contains fault collision sound, the proportion of frames identified as fault collision sound to the total number of frames of the real-time rotation sound is calculated, a threshold value P is preset, and an alarm signal is sent when the proportion is greater than the preset threshold value P.
5. The method for identifying the water turbine fault collision according to claim 1, further comprising the step of manually correcting the identification result, specifically comprising the following steps:
storing unprocessed original real-time rotation sound data and an identification result output by the fault identification model, wherein the original real-time rotation sound data and the identification result have a one-to-one correspondence relationship;
the staff judges the corresponding recognition result according to the stored original real-time rotation sound data, judges whether the recognition result is accurate, and obtains a deviation correction result;
and inputting the deviation correcting result into a convolutional neural network for training, and optimizing a fault recognition model.
6. A water turbine fault collision recognition system, comprising: an audio acquisition unit, a manual marking unit, a first preprocessing unit, a convolutional neural network training unit, a second preprocessing unit and an on-end identification unit;
the audio acquisition unit is used for acquiring the rotation sound of the water turbine;
the manual marking unit is connected with the audio acquisition unit and used for marking the rotation sound of the water turbine as normal operation sound and fault collision sound;
the first preprocessing unit is connected with the manual marking unit and is used for preprocessing the marked normal operation sound and fault collision sound to obtain a spectrogram of the normal operation sound and a spectrogram of the fault collision sound as a sample set;
the convolutional neural network training unit is connected with the first preprocessing unit and used for building a convolutional neural network, inputting the sample set to the convolutional neural network and training the convolutional neural network to obtain a fault recognition model;
the second preprocessing unit is connected with the audio acquisition unit and is used for acquiring real-time rotation sound of the water turbine, preprocessing the real-time rotation sound and acquiring a spectrogram of the real-time rotation sound;
and the on-end identification unit is connected with the convolutional neural network training unit and the second preprocessing unit and is used for hosting the fault recognition model, inputting the spectrogram of the real-time rotation sound into the fault recognition model and recognizing whether the real-time rotation sound contains fault collision sound.
7. The system for identifying a hydraulic turbine fault collision according to claim 6, wherein: the first preprocessing unit and the second preprocessing unit have the same structure and specifically comprise a framing processing module, a windowing processing module, a short-time Fourier transform module and a spectrogram acquisition module;
the framing processing module is used for framing the sound data, setting the data length of each 1 frame as N, and setting the framing step length as N/M, namely, the overlapping rate is (M-1)/M to perform data framing;
the windowing processing module is used for windowing the framed sound data, and the length of a window is equal to the frame length N;
the short-time Fourier transform module is used for carrying out short-time Fourier transform on the data in each window after windowing processing to obtain a short-time amplitude spectrum estimation value;
the spectrogram acquisition module is used for taking logarithm values of each frame of short-time amplitude spectrum estimation values to obtain the spectrogram with one-to-one correspondence.
8. The system for identifying a hydraulic turbine fault collision according to claim 6, wherein: the convolutional neural network built by the convolutional neural network training unit specifically comprises an input layer, a convolutional pooling layer, a fully connected layer and an output layer which are sequentially connected.
9. The system for identifying a hydraulic turbine fault collision according to claim 6, wherein: the system further comprises an alarm unit, wherein the alarm unit is used for calculating the proportion of frames identified as fault collision sound to the total number of frames of the real-time rotation sound after the fault recognition model recognizes whether fault collision sound exists in the real-time rotation sound, and for sending an alarm signal when the proportion is larger than a preset threshold value P.
10. The system for identifying a hydraulic turbine fault collision according to claim 6, wherein: the system also comprises an identification result storage unit, a manual deviation rectifying unit and a training sound library;
the identification result storage unit is connected with the on-end identification unit and is used for storing unprocessed original real-time rotation sound data and identification results output by the on-end identification unit, and the original real-time rotation sound data and the identification results have one-to-one correspondence;
the manual deviation rectifying unit is used for a worker to intervene, judge the identification result and correct it according to whether it is accurate, so as to obtain a deviation correction result;
the input end of the training sound library is connected with the first preprocessing unit and the manual deviation rectifying unit, and the output end of the training sound library is connected with the convolutional neural network training unit; and the training sound library is used for storing the sample set and the correction result and outputting the correction result to the convolutional neural network training unit to optimize the fault recognition model.
CN202011553753.7A 2020-12-24 2020-12-24 Method and system for identifying fault collision of water turbine Pending CN112700793A (en)

Priority and Family Applications (1)

Application Number: CN202011553753.7A; Priority Date: 2020-12-24; Filing Date: 2020-12-24; Title: Method and system for identifying fault collision of water turbine

Publications (1)

Publication Number: CN112700793A; Publication Date: 2021-04-23

Family ID: 75510112

Country Status (1)

CN: CN112700793A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015175770A (en) * 2014-03-17 2015-10-05 中国電力株式会社 Sound identification condition setting support device and sound identification condition setting support method
CN106546892A (en) * 2016-11-10 2017-03-29 华乘电气科技(上海)股份有限公司 The recognition methodss of shelf depreciation ultrasonic audio and system based on deep learning
CN108492626A (en) * 2018-04-28 2018-09-04 上海工程技术大学 A kind of traffic accidents visualization prevention and control device
CN109977920A (en) * 2019-04-11 2019-07-05 福州大学 Fault Diagnosis of Hydro-generator Set method based on time-frequency spectrum and convolutional neural networks
CN110322896A (en) * 2019-06-26 2019-10-11 上海交通大学 A kind of transformer fault sound identification method based on convolutional neural networks
CN110534118A (en) * 2019-07-29 2019-12-03 安徽继远软件有限公司 Transformer/reactor method for diagnosing faults based on Application on Voiceprint Recognition and neural network
CN110597240A (en) * 2019-10-24 2019-12-20 福州大学 Hydroelectric generating set fault diagnosis method based on deep learning
CN110867196A (en) * 2019-12-03 2020-03-06 桂林理工大学 Machine equipment state monitoring system based on deep learning and voice recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KYUNG-WON KANG et al.: "CNN-based Automatic Machine Fault Diagnosis Method Using Spectrogram Images", 融合信号处理学会论文杂志, pages 121-126 *
강경원 et al.: "스펙트로그램 이미지를 이용한 CNN 기반 자동화 기계 고장 진단 기법" (CNN-based automated machine fault diagnosis method using spectrogram images, in Korean), 融合信号处理学会论文杂志, vol. 21, no. 3, pages 121-126 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470694A (en) * 2021-04-25 2021-10-01 重庆市科源能源技术发展有限公司 Remote listening monitoring method, device and system for hydraulic turbine set
CN113250961A (en) * 2021-05-10 2021-08-13 广东葆德科技有限公司 Compressor fault detection method and detection system
CN113566948A (en) * 2021-07-09 2021-10-29 中煤科工集团沈阳研究院有限公司 Fault audio recognition and diagnosis method for robot coal pulverizer
CN113593605A (en) * 2021-07-09 2021-11-02 武汉工程大学 Industrial audio fault monitoring system and method based on deep neural network
CN113593605B (en) * 2021-07-09 2024-01-26 武汉工程大学 Industrial audio fault monitoring system and method based on deep neural network
CN113792829A (en) * 2021-07-29 2021-12-14 湖南五凌电力科技有限公司 Water turbine inspection method and device, computer equipment and storage medium
CN113870896A (en) * 2021-09-27 2021-12-31 动者科技(杭州)有限责任公司 Motion sound false judgment method and device based on time-frequency graph and convolutional neural network
CN113909713A (en) * 2021-11-05 2022-01-11 泰尔重工股份有限公司 Anti-collision protection system and method for laser processing
CN113909713B (en) * 2021-11-05 2024-03-29 泰尔重工股份有限公司 Anti-collision protection system and method for laser processing
CN113798774A (en) * 2021-11-08 2021-12-17 兖州煤业股份有限公司 Pipe butt joint auxiliary device
CN114483417A (en) * 2022-01-10 2022-05-13 中国长江三峡集团有限公司 Water turbine guide vane water leakage defect rapid identification method based on voiceprint identification
CN114483417B (en) * 2022-01-10 2023-06-16 中国长江三峡集团有限公司 Water leakage defect quick identification method for guide vanes of water turbine based on voiceprint identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination