CN110867196A - Machine equipment state monitoring system based on deep learning and voice recognition - Google Patents
- Publication number
- CN110867196A CN110867196A CN201911222026.XA CN201911222026A CN110867196A CN 110867196 A CN110867196 A CN 110867196A CN 201911222026 A CN201911222026 A CN 201911222026A CN 110867196 A CN110867196 A CN 110867196A
- Authority
- CN
- China
- Prior art keywords
- module
- sound
- neural network
- network model
- machine equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G10L25/51 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use, for comparison or discrimination
- G10L25/03 — Speech or voice analysis techniques characterised by the type of extracted parameters
- G10L25/30 — Speech or voice analysis techniques characterised by the analysis technique using neural networks
- G01H17/00 — Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
- G01M13/00 — Testing of machine parts
- G01M99/00 — Subject matter not provided for in other groups of this subclass
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
Abstract
The invention discloses a machine equipment state monitoring system based on deep learning and voice recognition. A training data acquisition module acquires sound signals; a manual marking module labels the sound signals to form a sound sample library; after preprocessing and feature extraction, the sound samples are sent to a preset neural network model for training; a real-time data acquisition module acquires sound signals and sends them to the trained neural network model; and a state identification module, in combination with artificial experience, comprehensively identifies and judges the running state of the machine from the sound signals and feeds back and outputs the result. The invention can not only monitor the running state of machine equipment in real time, but also issue an alarm signal when the equipment is faulty or in a dangerous state, notifying equipment managers to perform timely maintenance and improving working efficiency. At the same time, because the neural network model is trained by combining a deep learning algorithm with artificial experience, the system offers high identification accuracy, good safety, high efficiency, and intelligence.
Description
Technical Field
The invention relates to the technical field of sound signal recognition, in particular to a machine equipment state monitoring system based on deep learning and sound recognition.
Background
At present, machine equipment used in factory environments is prone to problems such as wear and aging under the influence of natural factors, such as temperature, humidity and geographical location, as well as human factors. Machine equipment state monitoring is a very complicated process. Although much research on state monitoring and fault diagnosis already exists, the many fault types, the accidental or random occurrence of faults, and the complexity of the equipment mean that state monitoring and fault diagnosis remain problems worth discussing.
According to the feature description and decision method adopted, existing machine equipment state monitoring has mainly been developed for fault diagnosis and falls into two broad categories: fault diagnosis methods based on a mathematical model of the system, and model-free methods. A model-based method constructs an observer to estimate the system output, compares the estimate with the measured output, and derives fault information from the difference. Model-free methods include approaches based on measurable-signal processing, fault diagnosis expert systems, fault pattern recognition, fault trees, artificial neural networks, and so on. However, existing fault diagnosis technologies and methods have the following problems:
(1) Large or expensive production machines may be inaccessible, or cannot be disassembled, when a malfunction occurs.
(2) For machine equipment with high safety requirements, maintenance is difficult and maintenance costs are high.
(3) Existing methods give insufficient consideration to production importance, personal safety, environmental protection, social impact, and similar aspects.
(4) When analyzing and processing data, most diagnostic methods employ several independent models to solve a single problem; these models must be combined well, and different problems require different conditions to be considered, so such methods have inherent limitations.
(5) A good remote diagnosis method for fault diagnosis of complex machine equipment is still lacking.
The invention collects sound data from machine equipment and its key parts through sensors and marks the data manually to form a sound sample library. After preprocessing and feature extraction, the samples are sent to a preset neural network model, which judges the running state of the machine by recognizing the sound. Data collected in real time is likewise preprocessed, feature-extracted, and sent to the neural network model for sound recognition. Finally, the recognition result is comprehensively judged in combination with artificial experience, and the sound signal is re-marked to form a new sample, continuously enlarging the sound sample library and improving the recognition rate of the neural network model. The invention can not only monitor and display the running state of machine equipment in real time, but also issue an alarm signal when the equipment or its key parts are faulty or in a dangerous state, notifying equipment managers to perform timely maintenance, improving working efficiency, and reducing economic loss.
Disclosure of Invention
The invention aims to provide a machine equipment state monitoring system based on deep learning and voice recognition for machine equipment on a factory assembly line, so as to make up for the shortcomings of traditional running-state and fault monitoring of machine equipment.
In order to solve the above problems, the technical scheme adopted by the invention is as follows: a machine equipment state monitoring system based on deep learning and voice recognition comprises: a training data acquisition module, a manual marking module, a sound sample library, preprocessing, feature extraction, a neural network model, a real-time data acquisition module, a state recognition module, a recognition result module, an artificial experience module, a state display module and an alarm module. The training data acquisition module is connected with the manual marking module; the manual marking module is connected with the sound sample library and the recognition result module respectively; the sound sample library is connected with the preprocessing; the preprocessing is connected with the real-time data acquisition module and the feature extraction respectively; the feature extraction is connected with the neural network model; the neural network model is connected with the state recognition module; the state recognition module is connected with the recognition result module; and the recognition result module is connected with the artificial experience module, the manual marking module, the state display module and the alarm module respectively.
The training data acquisition module adopts a sensor to acquire sound signals of machine equipment and key parts thereof running on a production line under a factory production environment.
The manual marking module is used by equipment maintenance personnel or machine fault experts, who judge from the sound signals, based on their own experience, the running states of the machine equipment and its key parts, including whether the equipment runs normally and its degree of aging. Whether it runs normally includes: normal operation and failure; the degree of aging includes: good, moderate, and dangerous.
The sound sample library is a sound signal which is marked by people.
The pre-processing includes filtering, a/D conversion, pre-emphasis, frame windowing, and endpoint detection.
The filtering adopts an FIR filter to filter out non-audio components in the signal, and the signal-to-noise ratio of the input signal is improved to the maximum extent.
The a/D conversion is to convert an analog signal into a digital signal.
The pre-emphasis emphasizes the high-frequency part of the signal to enhance the high-frequency resolution of the sound signal, thereby facilitating the subsequent spectral analysis. A first-order FIR high-pass digital filter with transfer function H(z) = 1 - az^(-1), where 0.9 < a < 1.0, is selected for the pre-emphasis processing.
The framing and windowing divides the sound signal into short time segments, i.e. frames, and then applies a window to each frame; the main purpose is to preserve the short-time stationarity of the sound signal and reduce the Gibbs effect. The frame length is set to 20 ms, and the frame shift is 1/3 of the frame length. A Hamming window is used for windowing; its functional expression is shown in (1), where N is the window length, equal to the frame length.
The endpoint detection is used to distinguish the sound signal from background and environmental noise and to accurately determine the starting point and end point of the sound signal.
The feature extraction is used for extracting feature parameters of sound signals, and the deep learning and sound identification-based machine equipment state monitoring system adopts a Mel frequency cepstrum coefficient as the feature parameters of the sound of the machine equipment.
The neural network model adopts a purpose-designed convolutional neural network comprising 4 convolutional layers, 4 pooling layers and 2 fully connected layers. ReLU is used as the activation function of the intermediate layers, softmax is applied at the last layer, and batch normalization is used after each convolutional layer to accelerate training. The optimizer is stochastic gradient descent (SGD), the Dropout ratio is 0.5, cross entropy is the loss function, and global average pooling is employed. The sound data, after preprocessing and feature extraction, is input into the pre-designed neural network model to train it. The sound data samples are divided into three parts, a training set, a verification set and a test set, in the proportion 8:1:1, and ten-fold cross-validation is performed. The model is fitted to the sound data samples on the training set; whether it reaches the required standard is judged by whether its recognition rate reaches a set threshold, and if not, training continues. If the standard is reached, the neural network model is verified on the verification set, which gives a preliminary evaluation of the model's hyper-parameters and capability; again, if the recognition rate does not reach the threshold, training continues, and otherwise testing follows. The test set evaluates the generalization ability of the neural network model: if it reaches a preset threshold, training is finished; otherwise the model is retrained.
The state recognition module sends the sound samples and the real-time sound data which are subjected to preprocessing and feature extraction into a preset neural network model, and recognizes the running states of the machine equipment and key parts of the machine equipment through the neural network model.
And the identification result module outputs and displays the result of the state identification module on one hand, judges the type of the running state on the other hand, and sends information to the alarm module when the running state is 'failed' or the aging degree is 'dangerous'.
The artificial experience module is mainly used by professional equipment maintenance personnel or machine fault experts to comprehensively analyze the identification result and judge whether the identification result of the neural network model accords with their own experience. After comprehensive analysis, the result is fed back to the manual marking module, so that the sound sample library grows continuously, i.e. the training data of the neural network model increases, which in turn improves the accuracy with which the neural network model identifies the running states of the machine equipment and its key parts.
The state display module is responsible for displaying the operation states identified by the identification result module, including the operation states and corresponding positions of all monitored machine equipment and key parts thereof, and highlighting the machine which has failed or is in high risk and the key parts thereof.
The alarm module is responsible for receiving the alarm signal sent by the identification result module and sending an alarm so as to inform maintenance personnel to take corresponding measures.
The invention has the following beneficial effects and advantages:
(1) Sensors collect the sound signals of the machine equipment and its key parts during operation, and the signals are processed remotely, so machine faults are diagnosed remotely; maintenance personnel need not approach or disassemble the equipment for inspection, giving higher intelligence and safety.
(2) The invention can monitor the running state and aging degree of machine equipment and identify whether the equipment has failed, reducing the economic loss caused by machine failure and shutdown.
(3) A neural network is trained on the sound sample library, and the recognition results are re-marked in combination with artificial experience to form new sound samples, continuously enlarging the sound sample library and further training the neural network model. The designed neural network model thus becomes more complete and its recognition results more accurate, providing good conditions for the monitoring of machine equipment.
Drawings
Fig. 1 is a block diagram of a system for monitoring a state of a machine device based on deep learning and voice recognition according to the present invention.
Fig. 2 is a block diagram of sound preprocessing used in the present invention.
FIG. 3 is a flow chart of neural network model training in the present invention.
1. A training data acquisition module; 2. a manual marking module; 3. a library of sound samples; 4. pre-treating; 401. filtering; 402. A/D conversion; 403. pre-emphasis; 404. framing and windowing; 405. detecting an end point; 5. extracting characteristics; 6. a neural network model; 7. a real-time data acquisition module; 8. a state identification module; 9. a recognition result module; 10. a manual experience module; 11. a status display module; 12. and an alarm module.
Detailed Description
Example (b):
as shown in fig. 1, the present invention provides a machine equipment state monitoring system based on deep learning and voice recognition, including: a training data acquisition module 1, a manual marking module 2, a sound sample library 3, preprocessing 4, feature extraction 5, a neural network model 6, a real-time data acquisition module 7, a state recognition module 8, a recognition result module 9, an artificial experience module 10, a state display module 11 and an alarm module 12. The training data acquisition module 1 is connected with the manual marking module 2; the manual marking module 2 is connected with the sound sample library 3 and the recognition result module 9 respectively; the sound sample library 3 is connected with the preprocessing 4; the preprocessing 4 is connected with the real-time data acquisition module 7 and the feature extraction 5 respectively; the feature extraction 5 is connected with the neural network model 6; the neural network model 6 is connected with the state recognition module 8; the state recognition module 8 is connected with the recognition result module 9; and the recognition result module 9 is connected with the artificial experience module 10, the manual marking module 2, the state display module 11 and the alarm module 12 respectively.
The training data acquisition module 1 adopts a sensor to acquire sound signals of machine equipment and key parts thereof running on a production line under a factory production environment.
The manual marking module 2 is used for judging the running states of the machine equipment and key parts thereof, including whether the machine equipment runs normally and the aging degree, by equipment maintenance personnel or machine fault experts through sound signals according to own experiences. Wherein whether to operate normally includes: normal operation and failure; the degree of aging includes: good, moderate, dangerous.
The sound sample library 3 is a sound signal that is artificially labeled.
The pre-processing 4 comprises filtering 401, a/D conversion 402, pre-emphasis 403, framing windowing 404 and end point detection 405.
The filtering 401 adopts an FIR filter to filter out non-audio components in the signal, so as to improve the signal-to-noise ratio of the input signal to the maximum extent;
the A/D conversion 402 is to convert an analog signal into a digital signal;
the pre-emphasis 403 emphasizes the high-frequency part of the signal, enhancing the high-frequency resolution of the sound signal and facilitating the subsequent spectral analysis. A first-order FIR high-pass digital filter with transfer function H(z) = 1 - az^(-1), where 0.9 < a < 1.0, is selected for the pre-emphasis processing;
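As an illustrative sketch only, not the patent's implementation (NumPy and the coefficient value a = 0.9, chosen from the stated range 0.9 < a < 1.0, are assumptions), the pre-emphasis filter H(z) = 1 - az^(-1) corresponds to the difference equation y[n] = x[n] - a*x[n-1]:

```python
import numpy as np

def pre_emphasis(signal: np.ndarray, a: float = 0.9) -> np.ndarray:
    """Apply the first-order high-pass filter y[n] = x[n] - a*x[n-1]."""
    # The first sample has no predecessor, so it passes through unchanged.
    return np.append(signal[0], signal[1:] - a * signal[:-1])

# Example on a short synthetic signal
x = np.array([1.0, 2.0, 3.0, 4.0])
y = pre_emphasis(x, a=0.9)   # -> [1.0, 1.1, 1.2, 1.3]
```

Because the filter subtracts a scaled copy of the previous sample, slowly varying (low-frequency) content is attenuated while rapid changes are preserved, which is exactly the high-frequency boost the specification describes.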
The framing and windowing 404 divides the sound signal into short time segments, i.e. frames, and then applies a window to each framed segment, mainly to preserve the short-time stationarity of the sound signal and reduce the Gibbs effect. The frame length is set to 20 ms, and the frame shift is 1/3 of the frame length. A Hamming window is used for windowing; its functional expression is shown in (2), where N is the window length, equal to the frame length;
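A minimal framing-and-windowing sketch, assuming NumPy and a 16 kHz sampling rate (the patent does not state one). `np.hamming` implements the standard Hamming window w(n) = 0.54 - 0.46 cos(2*pi*n/(N-1)), which is presumably what the expression referenced as (2) denotes:

```python
import numpy as np

def frame_and_window(signal, sr=16000, frame_ms=20):
    """Split a signal into 20 ms frames with a shift of 1/3 frame length,
    applying a Hamming window to each frame."""
    frame_len = int(sr * frame_ms / 1000)   # 20 ms -> 320 samples at 16 kHz
    hop = frame_len // 3                    # frame shift = 1/3 of frame length
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hamming(frame_len)          # 0.54 - 0.46*cos(2*pi*n/(N-1))
    return np.stack([signal[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])

sig = np.random.randn(16000)                # one second of test noise
frames = frame_and_window(sig)              # shape: (n_frames, 320)
```

The overlap of 2/3 between consecutive frames means each sample contributes to several frames, which smooths frame-to-frame parameter estimates.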
the endpoint detection 405 distinguishes the sound signal from background and environmental noise and accurately determines the starting point and end point of the sound signal.
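One common way to realize such endpoint detection is a short-time energy threshold. The sketch below is a hypothetical illustration (the patent does not specify the algorithm), with NumPy and all parameter values assumed:

```python
import numpy as np

def detect_endpoints(signal, frame_len=320, hop=160, ratio=0.1):
    """Return (start, end) sample indices of the active region, using a
    short-time energy threshold set relative to the peak frame energy."""
    energies = np.array([np.sum(signal[i:i + frame_len] ** 2)
                         for i in range(0, len(signal) - frame_len + 1, hop)])
    thresh = ratio * energies.max()
    active = np.where(energies > thresh)[0]
    return active[0] * hop, active[-1] * hop + frame_len

# Silence, then a loud burst, then silence again
rng = np.random.default_rng(1)
sig = np.concatenate([np.zeros(1600), rng.normal(size=3200), np.zeros(1600)])
start, end = detect_endpoints(sig)   # brackets the burst near [1600, 4800)
```

A production system would typically add a zero-crossing-rate criterion and hangover smoothing, but the energy threshold conveys the basic idea of separating signal from noise floor.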
The feature extraction 5 mainly extracts feature parameters of the sound signals, and the invention adopts Mel frequency cepstrum coefficients as the feature parameters of the sound of the machine equipment.
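The Mel frequency cepstrum coefficient computation can be sketched as follows, assuming NumPy and SciPy; the FFT size, filter count, and coefficient count are illustrative choices, not values from the patent:

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv_mel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mfcc(frames, sr=16000, n_fft=512, n_filters=26, n_ceps=13):
    """Windowed frames -> one row of cepstral coefficients per frame."""
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2            # power spectrum
    mel_energy = spec @ mel_filterbank(n_filters, n_fft, sr).T
    log_energy = np.log(mel_energy + 1e-10)                   # log compression
    return dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_ceps]

frames = np.random.randn(148, 320)   # e.g. windowed 20 ms frames
feats = mfcc(frames)                 # shape: (148, 13)
```

The pipeline (power spectrum, mel filterbank, log, DCT) mirrors the standard MFCC definition; libraries such as librosa or python_speech_features offer tested equivalents.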
The neural network model 6 adopts a purpose-designed convolutional neural network comprising 4 convolutional layers, 4 pooling layers and 2 fully connected layers. ReLU is used as the activation function of the intermediate layers, softmax is applied at the last layer, and batch normalization is used after each convolutional layer to accelerate training. The optimizer is stochastic gradient descent (SGD), the Dropout ratio is 0.5, cross entropy is the loss function, and global average pooling is employed. The sound data, after the preprocessing 4 and the feature extraction 5, is input into the pre-designed neural network model 6 to train it. The sound data samples are divided into three parts, a training set, a verification set and a test set, in the proportion 8:1:1, and ten-fold cross-validation is performed. The model is fitted to the sound data samples on the training set; whether it reaches the required standard is judged by whether its recognition rate reaches a set threshold, and if not, training continues. If the standard is reached, the neural network model 6 is verified on the verification set, which gives a preliminary evaluation of the model's hyper-parameters and capability; again, if the recognition rate does not reach the threshold, training continues, and otherwise testing follows. The test set evaluates the generalization ability of the neural network model 6: if it reaches a preset threshold, training is finished; otherwise the model is retrained.
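A hedged PyTorch sketch of such a network follows. The channel widths, learning rate, input size, and class count are assumptions; the patent specifies only the layer counts, ReLU, softmax, batch normalization, Dropout 0.5, SGD, cross entropy, and global average pooling. In PyTorch the softmax is conventionally folded into the cross-entropy loss, so the model emits logits:

```python
import torch
import torch.nn as nn

class SoundCNN(nn.Module):
    """4 conv layers (each followed by batch norm + ReLU + max-pooling),
    global average pooling, and 2 fully connected layers producing logits."""
    def __init__(self, n_classes=5):
        super().__init__()
        chans = [1, 16, 32, 64, 128]           # channel widths are illustrative
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                       nn.BatchNorm2d(cout),   # batch norm after each conv layer
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        self.gap = nn.AdaptiveAvgPool2d(1)     # global average pooling
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                   # Dropout ratio 0.5
            nn.Linear(128, 64), nn.ReLU(),     # fully connected layer 1
            nn.Dropout(0.5),
            nn.Linear(64, n_classes))          # fully connected layer 2 -> logits

    def forward(self, x):
        return self.classifier(self.gap(self.features(x)))

model = SoundCNN()
loss_fn = nn.CrossEntropyLoss()                              # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)     # SGD optimizer

x = torch.randn(4, 1, 64, 64)   # a batch of 4 feature "images"
logits = model(x)               # shape: (4, n_classes)
```

Because `nn.CrossEntropyLoss` applies log-softmax internally, the final softmax layer described in the specification appears here implicitly at the loss rather than inside the model.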
The state recognition module 8 sends the sound samples 3 and the real-time sound data which are subjected to the preprocessing 4 and the feature extraction 5 into a preset neural network model 6, and recognizes the running states of the machine equipment and key parts thereof through the neural network model 6.
The recognition result module 9 outputs and displays the result of the state recognition module 8 on one hand, and judges the type of the operation state on the other hand, and sends information to the alarm module 12 when the operation state is 'failure occurred' or the aging degree is 'dangerous'.
The artificial experience module 10 mainly performs comprehensive analysis on the recognition result by professional equipment maintenance personnel or machine fault experts, judges whether the recognition result of the neural network model 6 is consistent with the self experience judgment, performs comprehensive analysis on the result, and feeds back the result to the artificial marking module 2, so that the sound sample library 3 is continuously increased, namely training data of the neural network model 6 is increased, and further the accuracy of the neural network model 6 in recognizing the operation states of the machine equipment and key parts thereof is improved.
The state display module 11 is responsible for displaying the operation states identified by the identification result module 9, including the operation states and corresponding positions of all monitored machine equipment and critical parts thereof, and highlighting the machine and the critical parts thereof which have failed or are in high risk.
The alarm module 12 is responsible for receiving the alarm signal sent by the identification result module 9 and sending an alarm, so as to inform maintenance personnel to take corresponding measures.
The working process of the machine equipment fault diagnosis method based on artificial experience and voice recognition comprises the following steps:
(1) First, sound sensors collect the sound signals of the machine and its key parts in the working state, and professional equipment maintenance personnel or machine fault experts manually mark the signals according to their own experience, labeling the type of each sound signal, mainly the running state of the machine equipment and its key parts: whether it operates normally, and its degree of aging. Whether it operates normally includes: normal operation and failure; the degree of aging includes: good, moderate, and dangerous. In this way, when and where the machine equipment will fail can be predicted, preparations can be made in advance, accidents prevented, and losses avoided or minimized. The manually marked sound signals then form the sound sample library 3.
(2) Performing pre-processing 4 and feature extraction 5 on the sound sample library 3, wherein the pre-processing 4 comprises filtering 401, a/D conversion 402, pre-emphasis 403, framing windowing 404 and endpoint detection 405, as shown in fig. 2; and the feature extraction 5 adopts a Mel frequency cepstrum coefficient as a feature parameter of the sound of the machine equipment.
(3) After the preprocessing 4, the sound samples are sent into the neural network model 6 for training. As shown in fig. 3, the training of the neural network model 6 divides the data samples into three parts, a training set, a verification set and a test set, in the proportion 8:1:1, and performs ten-fold cross-validation. At each stage it is judged whether the neural network model 6 meets the set threshold requirement; if so, the next verification and test steps proceed, and if not, training continues.
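The 8:1:1 split described in step (3) can be sketched as follows (NumPy assumed; the seed and sample count are illustrative):

```python
import numpy as np

def split_811(n_samples, seed=0):
    """Shuffle sample indices and split them 8:1:1 into train/validation/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_811(1000)

# Ten roughly equal partitions of the training portion, one held out per
# round of the ten-fold cross-validation
folds = np.array_split(train_idx, 10)
```

Shuffling before splitting keeps recordings from the same session from clustering in one partition, which would otherwise bias the recognition-rate estimates used as thresholds during training.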
(4) The sensor collects the sound signals of the machine equipment and its key parts in real time; the signals undergo preprocessing 4 and feature extraction 5, and state recognition is performed by the trained neural network model 6. Professional equipment maintenance personnel or machine fault experts then comprehensively judge the working state of the machine equipment and its key parts based on their own experience and the neural network recognition result. Because data on machine equipment faults is limited in early operation, it is difficult to train a good neural network model 6 with few samples, and the result of the state recognition result module 9 may deviate. Therefore, after the real-time data that has undergone preprocessing 4 and feature extraction 5 is input into the trained neural network model 6 for state recognition, the result is verified and judged against human experience, and the real-time data is marked to form new sound samples that are added to the original sound sample library 3. As the sound sample data grows, the trained neural network model 6 becomes increasingly stable and the monitoring results become more accurate.
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.
Claims (1)
1. A machine equipment state monitoring system based on deep learning and voice recognition is characterized by comprising a training data acquisition module, an artificial marking module, a voice sample library, preprocessing, feature extraction, a neural network model, a real-time data acquisition module, a state recognition module, a recognition result module, an artificial experience module, a state display module and an alarm module; the training data acquisition module is connected with the artificial marking module, the artificial marking module is respectively connected with the sound sample library and the recognition result module, the sound sample library is connected with the preprocessing, the preprocessing is respectively connected with the real-time data acquisition module and the feature extraction, the feature extraction is connected with the neural network model, the neural network model is connected with the state recognition module, the state recognition module is connected with the recognition result module, and the recognition result module is respectively connected with the artificial experience module, the artificial marking module, the state display module and the alarm module;
the training data acquisition module adopts a sensor to acquire sound signals of machine equipment and key parts thereof running on a production line in a factory production environment;
the manual marking module is used by equipment maintenance personnel or machine fault experts to judge, from the sound signals and according to their own experience, the running states of the machine equipment and its key parts, including whether it runs normally and its degree of aging; wherein whether it runs normally includes: normal operation and failure; the degree of aging includes: good, moderate and dangerous;
the sound sample library consists of the manually marked sound signals;
the preprocessing comprises filtering, A/D conversion, pre-emphasis, framing and windowing and end point detection;
the filtering adopts an FIR filter to remove non-audio components from the signal, improving the signal-to-noise ratio of the input signal to the greatest extent;
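Such an FIR filtering stage could, for example, be implemented with scipy as below; the passband edges and tap count are illustrative assumptions, since the claim does not specify them:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def bandpass_fir(signal, sr, low=20.0, high=None, numtaps=101):
    """FIR band-pass filter keeping the audio band.

    low/high cutoffs and numtaps are illustrative choices, not patent values.
    """
    high = high or 0.45 * sr  # stay below the Nyquist frequency
    taps = firwin(numtaps, [low, high], pass_zero=False, fs=sr)
    return lfilter(taps, 1.0, signal)
```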
the A/D conversion is to convert analog signals into digital signals;
the pre-emphasis emphasizes the high-frequency part of the signal, enhancing the high-frequency resolution of the sound signal and facilitating subsequent spectral analysis; a first-order FIR high-pass digital filter with transfer function H(z) = 1 - az⁻¹, where 0.9 < a < 1.0, is selected for the pre-emphasis processing;
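The transfer function above corresponds in the time domain to y[n] = x[n] - a*x[n-1], which can be sketched as follows (a = 0.97 is a common choice within the stated 0.9 < a < 1.0 range, assumed here for illustration):

```python
import numpy as np

def pre_emphasis(signal, a=0.97):
    """First-order high-pass pre-emphasis: y[n] = x[n] - a*x[n-1] (H(z) = 1 - a*z^-1)."""
    # The first sample is passed through unchanged for lack of a predecessor.
    return np.append(signal[0], signal[1:] - a * signal[:-1])
```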
The framing and windowing divides the sound signal into frames and then applies a window to each frame, wherein the frame length is set to 20 ms and the frame shift is 1/3 of the frame length; the windowing adopts a Hamming window, whose function expression is shown in (1), where N is the window length, equal to the frame length: w(n) = 0.54 - 0.46cos(2πn/(N-1)), 0 ≤ n ≤ N-1;
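The framing-and-windowing step can be sketched as follows; the 20 ms frame length and 1/3 frame shift follow the claim, while the sampling rate is an assumption (the sketch also assumes the signal is at least one frame long):

```python
import numpy as np

def frame_and_window(signal, sr, frame_ms=20, shift_ratio=1 / 3):
    """Split a signal into 20 ms frames with a shift of 1/3 the frame length,
    then apply a Hamming window to each frame."""
    frame_len = int(sr * frame_ms / 1000)
    shift = max(int(frame_len * shift_ratio), 1)
    n_frames = 1 + max((len(signal) - frame_len) // shift, 0)
    frames = np.stack([signal[i * shift:i * shift + frame_len]
                       for i in range(n_frames)])
    # np.hamming implements w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))
    return frames * np.hamming(frame_len)
```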
the endpoint detection is used for distinguishing the sound signal from background noise and environmental noise and accurately determining the start point and end point of the sound signal;
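The claim does not specify the endpoint-detection algorithm; a common short-time-energy approach is sketched here as one possibility (the threshold ratio is an assumption):

```python
import numpy as np

def detect_endpoints(frames, energy_ratio=0.1):
    """Energy-based endpoint detection: return the indices of the first and
    last frame whose short-time energy exceeds a fraction of the peak energy."""
    energy = np.sum(frames ** 2, axis=1)          # short-time energy per frame
    threshold = energy_ratio * energy.max()        # ratio is an illustrative choice
    active = np.where(energy > threshold)[0]
    return (active[0], active[-1]) if active.size else (None, None)
```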
the feature extraction is used for extracting the characteristic parameters of the sound signals; the machine equipment state monitoring system based on deep learning and sound recognition adopts Mel-frequency cepstral coefficients as the feature parameters of the machine equipment sound;
the neural network model adopts a designed convolutional neural network comprising 4 convolutional layers, 4 pooling layers and 2 fully connected layers, wherein ReLU is used as the activation function of the intermediate layers, softmax is used in the last layer, and batch normalization is applied after each convolutional layer to accelerate training; the optimizer uses stochastic gradient descent (SGD), the Dropout ratio is 0.5, cross entropy is used as the loss function, and global average pooling is performed; the sound data after data processing and feature extraction is input into the pre-designed neural network model to train it; the sound data samples are divided into three parts, namely a training set, a verification set and a test set, in the proportion 8:1:1, and ten-fold cross-validation is performed; the model is fitted to the sound data samples on the training set, and whether it reaches the required standard is judged by whether its recognition rate reaches the set threshold; if not, it returns to continue learning; if so, the neural network model is verified on the verification set, which preliminarily evaluates the hyper-parameters and capability of the model, again judging whether the recognition rate reaches the set threshold, returning to continue learning if it does not and proceeding to testing if it does; the test set is used to evaluate the generalization ability of the neural network model, and if the generalization ability reaches the preset threshold the training is finished, otherwise the model is retrained;
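The convolutional architecture described in this claim can be sketched in PyTorch roughly as follows. The channel counts, input size, and number of output classes are illustrative assumptions the claim does not specify; five classes is one possible reading of the labels (normal, failure, good, moderate, dangerous). Softmax is not an explicit layer here because PyTorch's CrossEntropyLoss applies it internally:

```python
import torch
import torch.nn as nn

class SoundCNN(nn.Module):
    """Sketch: 4 conv + 4 pooling blocks with batch normalization,
    global average pooling, and 2 fully connected layers."""
    def __init__(self, n_classes=5):          # class count is an assumption
        super().__init__()
        chans = [1, 16, 32, 64, 128]          # channel widths are illustrative
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.BatchNorm2d(c_out),  # batch normalization after each conv
                       nn.ReLU(),              # ReLU as intermediate activation
                       nn.MaxPool2d(2)]        # one pooling layer per conv layer
        self.features = nn.Sequential(*layers)
        self.gap = nn.AdaptiveAvgPool2d(1)     # global average pooling
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Dropout(0.5),                   # Dropout ratio 0.5 per the claim
            nn.Linear(64, n_classes))          # softmax applied inside the loss

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.classifier(x)

model = SoundCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()                           # cross-entropy loss
```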
the state recognition module is used for feeding the sound samples and the real-time sound data that have undergone preprocessing and feature extraction into the preset neural network model, which recognizes the running states of the machine equipment and its key parts;
the recognition result module, on the one hand, outputs and displays the result of the state recognition module and, on the other hand, judges the type of the running state, and sends information to the alarm module when the running state is 'failure' or the aging degree is 'dangerous';
the artificial experience module is mainly used by professional equipment maintenance personnel or machine fault experts to comprehensively analyze the recognition result and judge whether the recognition result of the neural network model is consistent with their own experience-based judgment; the comprehensively analyzed result is fed back to the artificial marking module, so that the sound sample library grows continuously, namely the training data of the neural network model increases, thereby improving the accuracy with which the neural network model recognizes the running states of the machine equipment and its key parts;
the state display module is responsible for displaying the running states identified by the identification result module, including the running states and corresponding positions of all monitored machine equipment and their key parts, and highlighting machines, and their key parts, that have failed or are at high risk;
and the alarm module is responsible for receiving an alarm signal sent by the identification result module and sending an alarm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911222026.XA CN110867196B (en) | 2019-12-03 | 2019-12-03 | Machine equipment state monitoring system based on deep learning and voice recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110867196A true CN110867196A (en) | 2020-03-06 |
CN110867196B CN110867196B (en) | 2024-04-05 |
Family
ID=69658389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911222026.XA Active CN110867196B (en) | 2019-12-03 | 2019-12-03 | Machine equipment state monitoring system based on deep learning and voice recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110867196B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180130294A (en) * | 2017-05-29 | 2018-12-07 | 부경대학교 산학협력단 | Method for diagnosing machine fault based on sound |
CN109357749A (en) * | 2018-09-04 | 2019-02-19 | 南京理工大学 | A kind of power equipment audio signal analysis method based on DNN algorithm |
CN109767785A (en) * | 2019-03-06 | 2019-05-17 | 河北工业大学 | Ambient noise method for identifying and classifying based on convolutional neural networks |
CN110335617A (en) * | 2019-05-24 | 2019-10-15 | 国网新疆电力有限公司乌鲁木齐供电公司 | A kind of noise analysis method in substation |
Non-Patent Citations (1)
Title |
---|
邵思羽 (Shao Siyu): "Doctoral Dissertation", Southeast University *
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111256814A (en) * | 2020-03-13 | 2020-06-09 | 天津商业大学 | Tower monitoring system and method |
CN111413925A (en) * | 2020-03-20 | 2020-07-14 | 华中科技大学 | Machine tool fault prediction method based on sound signals |
CN111398965A (en) * | 2020-04-09 | 2020-07-10 | 电子科技大学 | Danger signal monitoring method and system based on intelligent wearable device and wearable device |
CN111524523A (en) * | 2020-04-26 | 2020-08-11 | 中南民族大学 | Instrument and equipment state detection system and method based on voiceprint recognition technology |
CN111581425A (en) * | 2020-04-28 | 2020-08-25 | 上海鼎经自动化科技股份有限公司 | Equipment sound classification method based on deep learning |
CN112733588A (en) * | 2020-08-13 | 2021-04-30 | 精英数智科技股份有限公司 | Machine running state detection method and device and electronic equipment |
CN112700793A (en) * | 2020-12-24 | 2021-04-23 | 国网福建省电力有限公司 | Method and system for identifying fault collision of water turbine |
CN114764538A (en) * | 2020-12-30 | 2022-07-19 | 河北云酷科技有限公司 | Equipment sound signal pattern recognition model |
CN114764538B (en) * | 2020-12-30 | 2024-04-26 | 河北云酷科技有限公司 | Equipment sound signal mode identification method |
CN113178032A (en) * | 2021-03-03 | 2021-07-27 | 北京迈格威科技有限公司 | Video processing method, system and storage medium |
CN113129918A (en) * | 2021-04-15 | 2021-07-16 | 浙江大学 | Voice dereverberation method combining beam forming and deep complex U-Net network |
CN113298134B (en) * | 2021-05-20 | 2023-07-28 | 华中科技大学 | System and method for remotely and non-contact health monitoring of fan blade based on BPNN |
CN113298134A (en) * | 2021-05-20 | 2021-08-24 | 华中科技大学 | BPNN-based remote non-contact health monitoring system and method for fan blade |
CN113593605B (en) * | 2021-07-09 | 2024-01-26 | 武汉工程大学 | Industrial audio fault monitoring system and method based on deep neural network |
CN113593605A (en) * | 2021-07-09 | 2021-11-02 | 武汉工程大学 | Industrial audio fault monitoring system and method based on deep neural network |
CN113657628A (en) * | 2021-08-20 | 2021-11-16 | 武汉霖汐科技有限公司 | Industrial equipment monitoring method and system, electronic equipment and storage medium |
CN113852612B (en) * | 2021-09-15 | 2023-06-27 | 桂林理工大学 | Network intrusion detection method based on random forest |
CN113852612A (en) * | 2021-09-15 | 2021-12-28 | 桂林理工大学 | Network intrusion detection method based on random forest |
CN113988202A (en) * | 2021-11-04 | 2022-01-28 | 季华实验室 | Mechanical arm abnormal vibration detection method based on deep learning |
CN114147740A (en) * | 2021-12-09 | 2022-03-08 | 中科计算技术西部研究院 | Robot patrol planning system and method based on environment state |
CN114543983A (en) * | 2022-03-29 | 2022-05-27 | 阿里云计算有限公司 | Vibration signal identification method and device |
WO2023185801A1 (en) * | 2022-03-29 | 2023-10-05 | 阿里云计算有限公司 | Vibration signal identification method and apparatus |
CN115512688A (en) * | 2022-09-02 | 2022-12-23 | 广东美云智数科技有限公司 | Abnormal sound detection method and device |
CN116189349A (en) * | 2023-04-28 | 2023-05-30 | 深圳黑蚂蚁环保科技有限公司 | Remote fault monitoring method and system for self-service printer |
CN116434502A (en) * | 2023-05-25 | 2023-07-14 | 中南大学 | Automobile alarm device containing sound absorption piezoelectric aerogel and automobile alarm method |
CN116665711B (en) * | 2023-07-26 | 2024-01-12 | 中国南方电网有限责任公司超高压输电公司广州局 | Gas-insulated switchgear on-line monitoring method and device and computer equipment |
CN116665711A (en) * | 2023-07-26 | 2023-08-29 | 中国南方电网有限责任公司超高压输电公司广州局 | Gas-insulated switchgear on-line monitoring method and device and computer equipment |
CN117889943A (en) * | 2024-03-13 | 2024-04-16 | 浙江维度仪表有限公司 | Gas ultrasonic flowmeter inspection method and system based on machine learning |
CN117889943B (en) * | 2024-03-13 | 2024-05-14 | 浙江维度仪表有限公司 | Gas ultrasonic flowmeter inspection method and system based on machine learning |
Also Published As
Publication number | Publication date |
---|---|
CN110867196B (en) | 2024-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110867196B (en) | Machine equipment state monitoring system based on deep learning and voice recognition | |
CN110940539B (en) | Machine equipment fault diagnosis method based on artificial experience and voice recognition | |
CN106769052B (en) | A kind of mechanical system rolling bearing intelligent failure diagnosis method based on clustering | |
WO2022156330A1 (en) | Fault diagnosis method for rotating device | |
CN113469060A (en) | Multi-sensor fusion convolution neural network aeroengine bearing fault diagnosis method | |
WO2019080367A1 (en) | Method for evaluating health status of mechanical device | |
CN112660745B (en) | Intelligent diagnosis method and system for carrier roller fault and readable storage medium | |
CN111507376A (en) | Single index abnormality detection method based on fusion of multiple unsupervised methods | |
CN113566948A (en) | Fault audio recognition and diagnosis method for robot coal pulverizer | |
CN112669305B (en) | Metal surface rust resistance test bench and rust resistance evaluation method | |
WO2019043600A1 (en) | Remaining useful life estimator | |
CN107844067A (en) | A kind of gate of hydropower station on-line condition monitoring control method and monitoring system | |
CN113345399A (en) | Method for monitoring sound of machine equipment in strong noise environment | |
CN117251812A (en) | High-voltage power line operation fault detection method based on big data analysis | |
CN115424635B (en) | Cement plant equipment fault diagnosis method based on sound characteristics | |
CN113283310A (en) | System and method for detecting health state of power equipment based on voiceprint features | |
US20210149387A1 (en) | Facility failure prediction system and method for using acoustic signal of ultrasonic band | |
CN116432071A (en) | Rolling bearing residual life prediction method | |
CN107169268A (en) | A kind of airport noise monitoring point abnormality recognition method based on trend segment similarity | |
CN113757093A (en) | Fault diagnosis method for flash steam compressor unit | |
CN114021620B (en) | BP neural network feature extraction-based electric submersible pump fault diagnosis method | |
CN108377209A (en) | Equipment fault detecting system based on SCADA and detection method | |
CN106096634B (en) | Fault detection method based on Adaptive windowing mental arithmetic method with interval halving algorithm | |
CN110231165B (en) | Mechanical equipment fault diagnosis method based on expectation difference constraint confidence network | |
CN114417704A (en) | Wind turbine generator health assessment method based on improved stack type self-coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||